The UI Editor was my specialization project during my time at The Game Assembly. I've always enjoyed working with UI, especially working closely with the UI artists while also having a say in the first impression a game makes on a person. I also enjoy the process of trying out how different interfaces appeal to my artists, making UX as important a part of the process as the actual functionality of the editor.
What I call the UI essentials refers to what every artist expects when building a menu: buttons, sliders, toggles, text and images. With these, one can build the simplest of menus and still reach the goal. Thanks to the component system behind the scenes, the artist can simply drag and drop, or create a child of another object by right clicking and choosing the desired UI element. These are then fully editable in the editor, including hovered/on-click textures and colors for the different elements, as well as hit boxes, anchor points, pivots, custom shaders and sort order.
One big focus for me, and the reason behind many of the choices I made, is the visual part of the job. It's where I find the joy in programming. Therefore, I decided to give a lot of focus to the animation part of the editor, giving the artist a tool to make the elements interact with and give feedback to the user. By capturing keyframes in a timeline, the artist can set up animations for scale, color, shader parameters, position and rotation. One big priority for me was testing it on my artists, and I tried building the UX to their liking, using ImGui as a base.
Another key feature that many artists take for granted is the ability to keep elements in a child/parent hierarchy, so that when editing or animating many elements that share most of their traits, they do not have to duplicate the work for each one. I also wanted this to be easy: by the click of a button, or by simply dragging and dropping one element onto another, a child/parent relationship is created between the two, making the feature recognizable from other engines.
In addition to using a standard image, I wanted the ability to use a live scene as a background for the menus. This was implemented in our custom engine's editor. It gives the user the ability to switch between UI element editing and live scene editing. I also wanted a way to set a custom camera angle, letting the user toggle between an "Edit" camera and a "View" camera, while still making it possible to edit the live scene through the "View" camera, to give more freedom to the artist.
I wanted a system that made integrating a HUD into our pipeline easy, and I also wanted artists to be able to create animations for certain UI elements, such as a pulsing health bar when health is low or a cooldown indicator.
The solution was a UIBinding system. Each UI element can be given a Binding Type, for example PlayerHealthNormalized, which maps it to a value type defined in a central enum. When the player takes damage, the game calls the UIBindingManager with the new normalized value. The manager then looks up all registered UI elements with that binding type and pushes the new value to them. Each element then lerps smoothly towards the new target value over time, rather than snapping to it.
Each binding also has configurable thresholds: OnCriticalLow, OnLow, OnHigh and OnCriticalHigh. When the value crosses one of these thresholds, the binding fires a trigger to the UIAnimator component on the same element, which plays the corresponding animation clip if one exists. When the value returns to normal, an OnNormal trigger is fired to return the element to its idle state. This let the artists define all the visual behaviour in the editor without any additional code changes needed from the programmers, which was exactly what I was going for.
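The flow above can be sketched roughly like this. The `UIBindingManager`, binding types and threshold trigger names mirror the description; the `BoundElement` type, the lerp speed and the default threshold values are my own illustration, not the engine's actual code:

```cpp
#include <algorithm>
#include <cassert>
#include <functional>
#include <unordered_map>
#include <vector>

// Illustrative types; names mirror the system described above.
enum class UIBindingType { PlayerHealthNormalized, PlayerStaminaNormalized };
enum class UITrigger { OnCriticalLow, OnLow, OnNormal, OnHigh, OnCriticalHigh };

struct BoundElement
{
    float current = 1.f;   // value the element displays right now
    float target = 1.f;    // value last pushed from gameplay code
    float lerpSpeed = 8.f;
    UITrigger lastTrigger = UITrigger::OnNormal;

    // Thresholds the artist can configure per binding (defaults made up here).
    float criticalLow = 0.1f, low = 0.3f, high = 0.9f, criticalHigh = 0.98f;

    std::function<void(UITrigger)> onTrigger; // would forward to the UIAnimator

    void Push(float aValue)
    {
        target = aValue;
        UITrigger trigger = UITrigger::OnNormal;
        if (aValue <= criticalLow)       trigger = UITrigger::OnCriticalLow;
        else if (aValue <= low)          trigger = UITrigger::OnLow;
        else if (aValue >= criticalHigh) trigger = UITrigger::OnCriticalHigh;
        else if (aValue >= high)         trigger = UITrigger::OnHigh;

        // Only fire when the value crosses into a new threshold band.
        if (trigger != lastTrigger)
        {
            lastTrigger = trigger;
            if (onTrigger)
            {
                onTrigger(trigger);
            }
        }
    }

    void Update(float aDeltaTime)
    {
        // Lerp smoothly towards the pushed value instead of snapping to it.
        current += (target - current) * std::min(1.f, lerpSpeed * aDeltaTime);
    }
};

class UIBindingManager
{
public:
    void Register(UIBindingType aType, BoundElement* aElement)
    {
        myBindings[aType].push_back(aElement);
    }

    // Gameplay code calls this; every registered element gets the new value.
    void SetValue(UIBindingType aType, float aValue)
    {
        for (BoundElement* element : myBindings[aType])
        {
            element->Push(aValue);
        }
    }

private:
    std::unordered_map<UIBindingType, std::vector<BoundElement*>> myBindings;
};
```

The point of the enum-keyed lookup is that gameplay code only knows about value types, never about concrete UI elements, which is what keeps the HUD integration code-free for the artists.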
I first started making a detached system for our UI. We had just started our migration to a component system, which made the current state of our engine a hybrid between a component system and a pure OOP-style system. Since I wasn't too comfortable with a component system at the time and had not gotten the chance to learn it properly, the first thing I did was to make separate GameObjects for the different elements such as buttons, sliders and so on. Even though it could have worked in theory, I quickly decided to switch focus and start integrating it with our component system instead.
I started sketching out how a system could look, with each UI element being a GameObject with corresponding components that get updated individually. One simple component is the UIImage component. It basically takes a default texture and color and slaps them onto the GameObject. Then, when sliders/buttons/toggles want to fetch their image, they query the GameObject they are components of, fetching their corresponding image component, which makes it very easy to manipulate the underlying image.
class UIImage_C : public Component, public UIAnimatable
{
public:
    UIImage_C() = default;
    ~UIImage_C() override = default;

    void Init() override;
    void Update(float aDeltaTime) override;
    void Setup(const Tga::UITransformData& aTransform, const Tga::UIImageData& aImage);
    void UpdateTransform();

    void SetVisible(bool aVisible) { myVisible = aVisible; }
    void SetTexture(const Tga::StringId& aTexture);
    void SetColor(const Tga::Color& aColor) override;
    void SetSize(Tga::Vector2f aSize) override;
    void SetScale(Tga::Vector2f aScale) override;
    void SetPosition(Tga::Vector2f aPosition) override;
    void SetRotation(float aRotationDeg) override;
    void SetCustomShader(const Tga::SpriteShader* aShader);
    void SetShaderParam(Tga::Vector4f aParam);

    Tga::Color GetColor() const override { return myInstanceData.myColor; }
    Tga::Vector2f GetSize() const override { return myTransformData.size; }
    Tga::Vector2f GetScale() const override { return myAnimatedScale; }
    Tga::Vector2f GetPosition() const override;
    float GetRotation() const override;
    Tga::Vector4f GetShaderParam() const { return myShaderParam; }
    bool HasCustomShader() const { return myHasCustomShader; }

private:
    Tga::SpriteSharedData mySharedData = {};
    Tga::Sprite3DInstanceData myInstanceData = {};
    Tga::UITransformData myTransformData;
    Tga::UIImageData myImageData;
    Tga::Vector4f myShaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f myAnimatedScale = { 1.f, 1.f };
    bool myHasCustomShader = false;
    bool myVisible = true;
    int mySortOrder = 0;
};
One mistake I made as I continued was giving my sliders and toggles their own separate textures/colors and letting them change their underlying image themselves, instead of manipulating the underlying UIImage components; something I am looking to change. They just need a way to distinguish which UI image component belongs to which part (fill, handle and background for a toggle, for example).
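A minimal way to do that distinction could be a part tag on each image component. This is a sketch with made-up stub types, not the engine's actual components:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical part tag; the real component would carry this next to its
// texture and color data.
enum class UIImagePart { Background, Fill, Handle };

struct UIImageStub
{
    UIImagePart part;
    std::string texture;
};

// A slider or toggle would query the image components on its own
// GameObject and pick the one tagged with the part it wants to change.
UIImageStub* FindPart(std::vector<UIImageStub>& aImages, UIImagePart aPart)
{
    for (UIImageStub& img : aImages)
    {
        if (img.part == aPart)
        {
            return &img;
        }
    }
    return nullptr;
}
```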
void UIButton_C::ApplyState() const
{
    auto img = myImage.lock();
    if (!img)
    {
        return;
    }

    switch (myState)
    {
    case State::Normal:
    {
        img->SetTexture(myData.normalTexture);
        break;
    }
    case State::Hovered:
    {
        img->SetTexture(myData.hoveredTexture.IsEmpty() ? myData.normalTexture : myData.hoveredTexture);
        break;
    }
    case State::Pressed:
    {
        img->SetTexture(myData.pressedTexture.IsEmpty() ? myData.normalTexture : myData.pressedTexture);
        break;
    }
    case State::Disabled:
    {
        img->SetTexture(myData.disabledTexture.IsEmpty() ? myData.normalTexture : myData.disabledTexture);
        break;
    }
    default:
    {
        break;
    }
    }
}
The parenting system was made by adding a component I called the UIRelationship component. This component can have children, a parent, or both. By making the link go both ways, it was easier to keep track of the relationships.
I initially tried setting up the parent/child relationships while fetching the scene objects in the editor, assigning the relationship components as we went. This caused many crashes, since we didn't have the full picture yet: a child could appear in the list before its parent. This was solved by adding a second pass when building the scene. It adds one extra iteration through all objects, but it solves the problem of the relationships not being completely known until all objects are fetched. And since most UI scenes won't contain more than a few hundred objects, this is a fine trade-off.
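The two-pass idea can be sketched like this. The integer IDs and the `LoadedObject` type are my simplification of the fetched scene objects:

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

// Minimal stand-in for a fetched scene object; its parent may appear
// later in the source list than the object itself.
struct LoadedObject
{
    int id = 0;
    int parentId = -1; // -1 means no parent
};

struct Relationship
{
    int parent = -1;
    std::vector<int> children;
};

// Pass 1 registers every object; pass 2 resolves parent/child links once
// the full picture exists, so order in the source list no longer matters.
std::unordered_map<int, Relationship> BuildRelationships(const std::vector<LoadedObject>& aObjects)
{
    std::unordered_map<int, Relationship> result;
    for (const LoadedObject& obj : aObjects) // pass 1: register all
    {
        result[obj.id];
    }
    for (const LoadedObject& obj : aObjects) // pass 2: link up
    {
        if (obj.parentId < 0)
        {
            continue;
        }
        auto it = result.find(obj.parentId);
        if (it == result.end())
        {
            continue; // dangling parent reference: skip rather than crash
        }
        result[obj.id].parent = obj.parentId;
        it->second.children.push_back(obj.id);
    }
    return result;
}
```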
Right now, there is a known limitation: an artist could assign A as a child of B and B as a child of A, which could happen if they change their mind mid-edit. That would cause infinite recursion. I am looking into fixing this.
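One straightforward guard, sketched here rather than taken from the engine: before accepting a new parent assignment, walk up the proposed parent's chain and reject the assignment if it ever reaches the would-be child.

```cpp
#include <cassert>
#include <unordered_map>

// aParentOf maps child id -> parent id. Returns true if making aNewParent
// the parent of aChild would close a loop (including self-parenting).
bool WouldCreateCycle(const std::unordered_map<int, int>& aParentOf,
                      int aChild, int aNewParent)
{
    for (int current = aNewParent; current != -1;)
    {
        if (current == aChild)
        {
            return true;
        }
        auto it = aParentOf.find(current);
        current = (it != aParentOf.end()) ? it->second : -1;
    }
    return false;
}
```

Since hierarchies here are at most a few hundred objects deep, a linear walk per assignment is cheap enough to run on every drag-and-drop.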
In our engine and editor we have two separate representations of an object. In game we use the GameObject class, and in the editor the SceneObject class. I wanted the UI editor to have a live preview, but this meant marking both the editor and the live scene as dirty whenever the user moved a scene object or the animator animated a GameObject. Both of them manipulated objects, triggering a rebuild.
At first, I saved our objects back to the editor for every property changed during animation. This quickly turned into a huge mess, with json properties being saved on almost every click. That is what made me implement a working-copy pattern.
All edits go into a local copy of the animation data. Nothing touches the actual scene property store until a save is called, which issues a single command to save everything back to the editor. This means the artist can freely experiment without polluting the scene with every tiny edit.
void UIAnimationEditor::AddClip(Tga::StringId aName)
{
    Tga::UIAnimationClip newClip;
    newClip.name = aName;
    myWorkingAnimData.clips.push_back(newClip);

    mySelectedClip = aName;
    myRestPosition = mySnapshot.position;
    myRestRotation = mySnapshot.rotation;
}
// On save: wrap the working copy in a property and issue a single
// undoable command, instead of one command per edit.
Tga::SceneProperty newProperty = mySourceProperty;
newProperty.value = Tga::Property::Create<Tga::CopyOnWriteWrapper<Tga::UIAnimationData>>(
    Tga::CopyOnWriteWrapper<Tga::UIAnimationData>::Create(myWorkingAnimData));

auto command = std::make_shared<ChangePropertyOverridesCommand>(
    mySceneObjectId, newProperty, myOverrideProperty);
CommandManager::DoCommand(command);

if (myOnDirty)
{
    myOnDirty();
}
The drawback is that the artist cannot change the scene in real time while the animation editor is open, but this was a trade-off I decided was acceptable, since the whole point of opening an animation editor is to animate the UI.
The keyframes were solved by capturing a snapshot of the object's state when the user opens the editor. The animation data of the SceneObject is also collected and cached for that session while the object is being animated. The object is then animated through the GameObject, which gets a working copy of the animation data. This prevents polluting the SceneObject and also makes the data easy to work with.
struct AnimationFrame
{
    Tga::Vector4f shaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f position = { 0.f, 0.f };
    Tga::Vector2f size = { 100.f, 100.f };
    Tga::Vector2f scale = { 1.f, 1.f };
    Tga::Color color = { 1.f, 1.f, 1.f, 1.f };
    float rotation = 0.f;
    bool valid = false;
    Tga::Vector2f restPosition = { 0.f, 0.f };
    float restRotation = 0.f;
};
mySnapshot = {};

if (myLiveScene)
{
    auto it = myLiveScene->GetGameObjectMap().find(mySceneObjectId);
    if (it != myLiveScene->GetGameObjectMap().end())
    {
        if (auto img = it->second->GetComponent<UIImage_C>())
        {
            mySnapshot.position = img->GetPosition();
            mySnapshot.size = img->GetSize();
            mySnapshot.scale = img->GetScale();
            mySnapshot.color = img->GetColor();
            mySnapshot.rotation = img->GetRotation();
            mySnapshot.valid = true;
        }
    }
}

Tga::Scene* scene = GetActiveScene();
if (scene)
{
    Tga::SceneObject* sceneObject = scene->GetSceneObject(mySceneObjectId);
    if (sceneObject)
    {
        mySnapshot.sceneObjectPosition = sceneObject->GetPosition();
        mySnapshot.sceneObjectRotation = sceneObject->GetEuler();
        mySnapshot.sceneObjectScale = sceneObject->GetScale();
    }
}
When the artist clicks "Add Keyframe", CaptureKeyframe reads the live object's current state (color, scale and so on) directly from UIImage_C and packages it, together with a timestamp, into a UIKeyFrame struct. The keyframe is then inserted into the clip's keyframe array, which is kept sorted by time. If a keyframe already exists at that time, it gets replaced.
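The insert-or-replace step could be done with a binary search over the sorted array, roughly like this (the `KeyFrame` struct is trimmed down for the sketch; the real UIKeyFrame carries the full captured state):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Trimmed-down keyframe: just a timestamp and one value for illustration.
struct KeyFrame
{
    float time = 0.f;
    float value = 0.f;
};

// Insert keeping the array sorted by time; an existing keyframe at the
// same timestamp is replaced rather than duplicated.
void InsertKeyFrame(std::vector<KeyFrame>& aFrames, const KeyFrame& aFrame)
{
    auto it = std::lower_bound(aFrames.begin(), aFrames.end(), aFrame,
        [](const KeyFrame& a, const KeyFrame& b) { return a.time < b.time; });
    if (it != aFrames.end() && it->time == aFrame.time)
    {
        *it = aFrame; // same timestamp: overwrite
    }
    else
    {
        aFrames.insert(it, aFrame);
    }
}
```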
When the artist closes the editor, we do the save operation, which saves the data to the json file, ready to be used ingame or in the live preview of the editor.
One thing I found hard to implement in a performant way was the ability to live edit a background scene while also being able to switch to a preview mode for the UI. When I first started on the UI Editor, I began with an orthographic camera and a reference resolution which the artist applied to the main canvas. This worked well for editing a single scene with a single non-perspective camera, where every object in the scene represented a UI element. A good start, but adding the ability to live edit a 3D scene in the background quickly introduced some challenges.
We already had a way to render a 3D scene, but our editor was only made for editing and viewing a single scene file at a time. The implementation had a pointer to the active document, with that document owning the current scene. So to actually swap scenes, I would have either needed to switch document completely, or switch the scene the document pointed to during its render and update.
// Member added in UIScene:
std::shared_ptr<RunTimeScene> myBackgroundScene;

void UIScene::Update(float aDeltaTime)
{
    if (myBackgroundScene)
    {
        myBackgroundScene->Update(aDeltaTime);
    }

    for (auto& element : myUIElements)
    {
        element->Update(aDeltaTime);
    }
}

void RenderManager::RenderScene(UIScene& aUIScene)
{
    if (RunTimeScene* bgScene = aUIScene.GetBackgroundScene().get())
    {
        RenderScene(*bgScene, bgScene->GetSceneType());
    }

    NCE::UIRenderData renderData = BuildUIRenderData(aUIScene);
    SetUICanvasViewport();
    ExecuteUIRenderPipeline(renderData, nullptr, nullptr, aUIScene);
}
I decided to go with the latter, switching the pointer to the scene currently being rendered. This way I could stay in the same viewport and just swap between scenes. We also already had a representation of an in-game scene, which I decided to reuse for the live preview of the UI scene. I gave the UI scene a reference to its background scene and ran our already implemented update and render functions for game scenes on it. This meant that all loading of the background scene went through the same pipeline as a regular game scene, including physics setup, audio setup and so on. I managed to strip out some of the subsystems that were not relevant, although some had to stay, which is not optimal, but with the time constraints I had it was a reasonable trade-off for this project. A lightweight scene representation is something I want to implement in the future (read more below).
if (myUIContext.GetLiveScene() && !myUIContext.GetLiveScene()->GetBackgroundScenePath().IsEmpty())
{
    ImGui::TableSetColumnIndex(7);
    if (ImGui::Selectable(myIsEditingBackground ? ICON_LC_IMAGE : ICON_LC_CLAPPERBOARD, myIsEditingBackground, 0, toolbarItemSize))
    {
        myIsEditingBackground = !myIsEditingBackground;
        if (myIsEditingBackground)
        {
            myUIScene = myScene;
            Tga::StringId bgPath = myUIContext.GetLiveScene()->GetBackgroundScenePath();
            myScene = myCache.GetSceneUsingCache(bgPath);
        }
        else
        {
            Save();
            if (myViewport.IsUsingBackgroundCamera())
            {
                myViewport.ToggleBackgroundCamera(myScene, Editor::GetEditor()->GetSceneObjectDefinitionManager());
            }
            myScene = myUIScene;
            myUIScene = nullptr;
            myUIContext.ClearBackgroundScene();
            myUIContext.MarkDirty();
        }
        mySceneSelection.ClearSelection();
        mySwitchingToBackground = true;
        mySwitchingFrameCounter = 0;
        SetActiveScene(myScene);
    }
    ImGui::PopFont();
    if (ImGui::IsItemHovered())
    {
        ImGui::SetTooltip(myIsEditingBackground ? "Switch to UI editing" : "Switch to background scene editing");
    }
    ImGui::PushFont(ImGuiInterface::GetIconFontLarge());
}
As mentioned above, a lightweight background scene representation is a high priority. Right now the background scene runs the same update and render logic as a regular game scene, with some constraints already in place (physics is disabled, for example). But there is still a lot of unnecessary logic running, such as enemies checking against the player each frame, which we have no need for in the editor. What we actually need is a simple way to play animations and render objects. Our current setup also requires a player to be present, which I worked around by keeping a non-rendered player in the background scene just to get the first iteration going. Stripping all of this out would make the background scene a lot more lightweight and easier to work with.
Another thing I would like to implement is the ability to render UI elements in 3D space, similar to how Unity handles it. Right now the only way to view the 3D world is to switch scene representation completely, which stops the UI elements from rendering. Doing this properly would require rendering UI sprites into 3D world space, handling billboarding, correct position calculations and so on. Ideally it would also mean having a single scene representation containing both UI elements and 3D objects, rather than having to switch between the two. My current setup works well as a first iteration though, and the ability to switch camera projection type mid-render would make this a natural next step.
One thing I also want to improve is relative position support in the parenting system. Right now if you animate a parent panel to move left by X units, the children clump to the parent's position on the first keyframe rather than maintaining their offsets. The solution I am looking at is storing each child's offset from its parent at the start of the animation, and adding that offset back when applying keyframe positions, so the parent moves by a relative amount and the children maintain their spacing throughout.
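The offset idea can be sketched with a minimal 2D vector type (not the engine's math library; the function names are my own):

```cpp
#include <cassert>
#include <vector>

struct Vec2
{
    float x = 0.f;
    float y = 0.f;
};

struct ChildOffset
{
    Vec2 offset; // child position relative to the parent at animation start
};

// Capture each child's offset from the parent once, when playback starts.
std::vector<ChildOffset> CaptureOffsets(const Vec2& aParent, const std::vector<Vec2>& aChildren)
{
    std::vector<ChildOffset> offsets;
    offsets.reserve(aChildren.size());
    for (const Vec2& child : aChildren)
    {
        offsets.push_back({ { child.x - aParent.x, child.y - aParent.y } });
    }
    return offsets;
}

// Re-apply the stored offset on top of every animated parent position, so
// the children keep their spacing instead of clumping onto the parent.
Vec2 ApplyOffset(const Vec2& aAnimatedParent, const ChildOffset& aOffset)
{
    return { aAnimatedParent.x + aOffset.offset.x, aAnimatedParent.y + aOffset.offset.y };
}
```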