
UI Editor with Animator

The UI Editor was my specialization project during my time at The Game Assembly. I've always enjoyed working with UI, especially working closely with the UI artists while also having a say in the first impression a game makes on a person. I also enjoy the process of trying out how different interfaces appeal to the artists, making UX as important a part of the process as the actual functionality of the editor.

The UI Essentials

What I call the UI essentials is what every artist expects when building a menu: buttons, sliders, toggles, text and images. With these, one can build the simplest of menus and still reach the goal. Thanks to the component system behind the scenes, the artist can add an element with a simple drag and drop, or create it as a child of another object by right clicking and choosing the desired UI element. Everything is then fully editable in the editor, including hovered/on-click textures and colors for the different elements, as well as hit boxes, anchor points, pivots, custom shaders and sort order.

Code
// Example of the toggle component
class UIToggle_C : public Component, public InputObserver
{
public:
    void Init() override;
    void Update(float aDeltaTime) override;
    void Setup(const Tga::UITransformData& aTransform, const Tga::UIToggleData& aToggle, Tga::Vector2f aRefRes);
    void SetImages(const std::shared_ptr<UIImage_C>& aBackground, const std::shared_ptr<UIImage_C>& aCheckmark);
    void ReceiveInput(InputEvent anEvent, const std::any& someData) override;
    void RegisterInput() override;
    void UnregisterInput() override;
    void SetOnValueChanged(const std::function<void(bool)>& aCallback) { myOnValueChanged = aCallback; }
    void SetValue(bool aValue);
    void SetVisible(bool aVisible) { myTransformData.visible = aVisible; }
    bool          GetValue()     const { return myValue; }
    int           GetSortOrder() const { return myTransformData.sortOrder; }
    bool          IsVisible()    const { return myTransformData.visible; }
    Tga::Vector2f GetCenter()    const;
    Tga::Vector2f GetSize()      const { return myTransformData.size; }
    Tga::Vector2f GetHitSize()   const { return myHitSize; }
    float         GetRotation()  const { return NCE::GetUIWorldRotationRad(myGameObject); }
private:
    void UpdateTransform() const;
    void ApplyState() const;
    Tga::UIToggleData    myData = {};
    Tga::UITransformData myTransformData = {};
    Tga::Vector2f        myHitSize = {};
    Tga::Vector2f        myCheckmarkSize = {};
    Tga::Vector2f        myRefRes = {};
    bool                 myValue = false;
    // Holds weak ptrs to the UI image components of the same game object, to manipulate them
    std::weak_ptr<UIImage_C>  myBackgroundImage;
    std::weak_ptr<UIImage_C>  myCheckmarkImage;
    std::function<void(bool)> myOnValueChanged;
};
// Example of adding image components of the same game object to slider and toggle:
auto makeUIImage = [&](const Tga::StringId& aTexture,
    const Tga::Color& aColor, int aSortOffset,
    Tga::Vector2f aSize) -> std::shared_ptr<UIImage_C>
    {
        Tga::UIImageData imgData;
        imgData.texture = aTexture;
        imgData.color = aColor;
        Tga::UITransformData transformOverride = *uiTransform;
        transformOverride.size = aSize;
        auto img = gameObject->AddComponent<UIImage_C>();
        img->Setup(transformOverride, imgData);
        img->SetSortOrder(uiTransform->sortOrder + aSortOffset);
        return img;
    };
if (uiSlider && uiTransform)
{
    auto trackImg  = makeUIImage(uiSlider->trackTexture,  uiSlider->trackColor,  0, uiTransform->size);
    auto fillImg   = makeUIImage(uiSlider->fillTexture,   uiSlider->fillColor,   1, uiTransform->size);
    auto handleImg = makeUIImage(uiSlider->handleTexture, uiSlider->handleColor, 2, uiSlider->handleSize);
    auto sliderComp = gameObject->AddComponent<UISlider_C>();
    sliderComp->SetImages(trackImg, fillImg, handleImg);
    sliderComp->Setup(*uiTransform, *uiSlider, { refWidth, refHeight });
    sliderComp->Init();
}
if (uiToggle && uiTransform)
{
    Tga::Vector2f ckSize = (uiToggle->checkmarkSize.x > 0.f && uiToggle->checkmarkSize.y > 0.f)
        ? uiToggle->checkmarkSize : uiTransform->size;
    auto bgImg = makeUIImage(uiToggle->backgroundOffTexture, uiToggle->backgroundOffColor, 0, uiTransform->size);
    auto ckImg = makeUIImage(uiToggle->checkmarkTexture, { 0.f, 0.f, 0.f, 0.f }, 1, ckSize);
    auto toggleComp = gameObject->AddComponent<UIToggle_C>();
    toggleComp->SetImages(bgImg, ckImg);
    toggleComp->Setup(*uiTransform, *uiToggle, { refWidth, refHeight });
    toggleComp->Init();
}
UI Animator

One big focus for me, and the reason behind many of the choices I make, is the visual part of the job. It's where I find the joy in programming. Therefore, I decided to give a lot of focus to the animation part of the editor. This gives the artists a tool to make the elements interact with and give feedback to the user. By capturing keyframes in a timeline, the artist can set up animations for scale, color, shader parameters, position and rotation. One big priority for me was testing it on my artists, and I tried building the UX to their liking, using ImGui as a base.

Code
void UIAnimator_C::Update(float aDeltaTime)
{
    if (!myIsPlaying) return;
    const Tga::UIAnimationClip* clip = GetClip(myCurrentClip);
    if (!clip)
    {
        return;
    }
    myCurrentTime += aDeltaTime;
    const float clipEnd = clip->keyframes.empty() ? 0.f : clip->keyframes.back().time;
    if (myCurrentTime >= clipEnd)
    {
        if (clip->loop)
        {
            myCurrentTime = 0.f;
        }
        else
        {
            myCurrentTime = clipEnd;
            myIsPlaying = false;
            ApplyFrame(EvaluateClip(*clip, myCurrentTime), clip);
            if (clip->trigger == Tga::UIAnimationTrigger::OnClick)
            {
                if (auto btn = GetComponent<UIButton_C>())
                    btn->ReEvaluateState();
            }
            return;
        }
    }
    ApplyFrame(EvaluateClip(*clip, myCurrentTime), clip);
}
// Priority system to not let hover interrupt click for example.
void UIAnimator_C::FireTrigger(Tga::UIAnimationTrigger aTrigger)
{
    auto GetPriority = [](Tga::UIAnimationTrigger t) -> int
        {
            switch (t)
            {
            case Tga::UIAnimationTrigger::OnClick:
            case Tga::UIAnimationTrigger::OnPress: return 2;
            case Tga::UIAnimationTrigger::OnHover: return 1;
            default:                               return 0;
            }
        };
    const Tga::UIAnimationClip* target = nullptr;
    for (const auto& clip : myData.clips)
    {
        if (clip.trigger == aTrigger)
        {
            target = &clip;
            break;
        }
    }
    if (!target)
    {
        return;
    }
    if (myIsPlaying)
    {
        const Tga::UIAnimationClip* current = GetClip(myCurrentClip);
        if (current && !current->loop)
        {
            if (aTrigger != Tga::UIAnimationTrigger::OnNormal)
            {
                if (GetPriority(aTrigger) < GetPriority(myCurrentTrigger))
                {
                    return;
                }
            }
            else
            {
                if (GetPriority(myCurrentTrigger) >= 2)
                {
                    return;
                }
            }
        }
    }
    Play(target->name);
}
Parenting system

Another key feature that many artists take for granted is the ability to keep elements in a child/parent hierarchy, so that when editing or animating many elements that share most of their traits, they do not have to duplicate the work for every element. I also wanted this to be easy: by the click of a button, or by simply dragging and dropping one element onto another, a child/parent relationship is created between the two, making the feature recognizable from other engines.

Code
#pragma once
#include <tge/components/Component.h>
#include <memory>
#include <vector>
class UIRelationship_C : public Component
{
public:
    void SetParent(std::shared_ptr<GameObject> aParent) { myParent = aParent; }
    GameObject* GetParent() const { return myParent.get(); }
    void AddChild(std::shared_ptr<GameObject> aChild) { myChildren.push_back(aChild); }
    const std::vector<std::shared_ptr<GameObject>>& GetChildren() const { return myChildren; }
private:
    std::shared_ptr<GameObject> myParent;  // null if this object has no parent
    std::vector<std::shared_ptr<GameObject>> myChildren;
};
// Applying the relationship component during build:

// Third pass — resolve parent relationships
for (const auto& object : objects | std::views::values)
{
    std::vector<ScenePropertyDefinition> properties;
    object->CalculateCombinedPropertySet(aDefinitionManager, properties);
    const UITransformData* uiTransform = nullptr;
    for (auto& property : properties)
    {
        if (property.type == GetPropertyType<CopyOnWriteWrapper<UITransformData>>())
        {
            uiTransform = &property.value.Get<CopyOnWriteWrapper<UITransformData>>()->Get();
        }
    }
    if (!uiTransform || uiTransform->parentName.IsEmpty())
    {
        continue;
    }
    auto child  = objectsByName.find(StringRegistry::RegisterOrGetString(object->GetName()));
    auto parent = objectsByName.find(uiTransform->parentName);
    if (child == objectsByName.end() || parent == objectsByName.end())
    {
        continue;
    }
    if (child->second.get() == parent->second.get())
    {
        printf("ERROR: Object is its own parent!\n");
        continue;
    }
    // Add the parent to the child
    auto childRelationshipComp = child->second->GetComponent<UIRelationship_C>();
    if (!childRelationshipComp)
    {
        childRelationshipComp = child->second->AddComponent<UIRelationship_C>();
    }
    childRelationshipComp->SetParent(parent->second);
    // Add the child to the parent
    auto parentRelationshipComp = parent->second->GetComponent<UIRelationship_C>();
    if (!parentRelationshipComp)
    {
        parentRelationshipComp = parent->second->AddComponent<UIRelationship_C>();
    }
    parentRelationshipComp->AddChild(child->second);
}
3D background scene with live editing

In addition to using a standard image, I wanted the ability to use a live scene as a background for the menus. This was implemented in our custom engine's editor. It gives the user the ability to switch between UI element editing and live scene editing. I also wanted a way to set a custom camera angle, letting the user toggle between an "Edit" camera and a "View" camera, while still making it possible to edit the live scene through the "View" camera, to give more freedom to the artist.

Code
// First pass — find canvas and extract ref res
for (const auto& object : objects | std::views::values)
{
    std::vector<ScenePropertyDefinition> properties;
    object->CalculateCombinedPropertySet(aDefinitionManager, properties);
    for (auto& property : properties)
    {
        if (property.type == GetPropertyType<CopyOnWriteWrapper<UICanvasData>>())
        {
            const UICanvasData& canvasData = property.value.Get<CopyOnWriteWrapper<UICanvasData>>()->Get();
            refWidth  = canvasData.referenceResolution.x;
            refHeight = canvasData.referenceResolution.y;
        }
        if (property.type == GetPropertyType<CopyOnWriteWrapper<SceneReference>>())
        {
            const SceneReference& backgroundScene = property.value.Get<CopyOnWriteWrapper<SceneReference>>()->Get();
            if (backgroundScene.enabled)
            {
                scene->SetBackgroundScenePath(backgroundScene.path);
            }
        }
    }
}
// Setting the background scene during rebuilds of the UI scene when changed.
if (!myLiveScene->GetBackgroundScenePath().IsEmpty())
{
    if (existingBackground)
    {
        // Reuse the existing background scene — no reload needed
        myLiveScene->SetBackgroundScene(existingBackground);
    }
    else
    {
        aSceneManager.LoadScene(SceneID::UIMainMenuBackground, false);
        auto bgScene = aSceneManager.GetRunTimeScene(SceneID::UIMainMenuBackground);
        bgScene->SetupAsBackground();
        myLiveScene->SetBackgroundScene(bgScene);
    }
}
HUD Integration

I wanted a system that made integrating a HUD into our pipeline easy, and I also wanted artists to be able to create animations for certain UI elements, such as a pulsing health bar when health is low or a cooldown indicator.

The solution was a UIBinding system. Each UI element can be given a Binding Type, for example PlayerHealthNormalized, which maps it to a value type defined in a central enum. When the player takes damage, the game calls the UIBindingManager with the new normalized value. The manager then looks up all registered UI elements with that binding type and pushes the new value to them. Each element then lerps smoothly towards the new target value over time, rather than snapping to it.

Each binding also has configurable thresholds: OnCriticalLow, OnLow, OnHigh and OnCriticalHigh. When the value crosses one of these thresholds, the binding fires a trigger to the UIAnimator component on the same element, which plays the corresponding animation clip if one exists. When the value returns to normal, an OnNormal trigger is fired to return the element to its idle state. This let the artists define all the visual behaviour in the editor without any additional code changes needed from the programmers, which was exactly what I was going for.

Code
// UIBinding_C — each UI element that reacts to game state gets this component
void UIBinding_C::Update(float aDeltaTime)
{
    if (std::abs(myCurrentValue - myTargetValue) > 0.0001f)
    {
        myCurrentValue = std::lerp(myCurrentValue, myTargetValue,
            std::clamp(myData.lerpSpeed * aDeltaTime, 0.f, 1.f));
        ApplyValue();
    }

    if (!myIsRegistered) { return; }

    const bool isCriticalLow  = myCurrentValue <= myData.criticalLowThreshold;
    const bool isLow          = myCurrentValue <= myData.lowThreshold;
    const bool isHigh         = myCurrentValue >= myData.highThreshold;
    const bool isCriticalHigh = myCurrentValue >= myData.criticalHighThreshold;

    if (auto animator = GetComponent<UIAnimator_C>())
    {
        if (isCriticalLow && !myWasCriticalLow)
            TryFireTrigger(animator.get(), Tga::UIAnimationTrigger::OnValueCriticalLow);
        else if (isLow && !myWasLow)
            TryFireTrigger(animator.get(), Tga::UIAnimationTrigger::OnValueLow);
        else if (isCriticalHigh && !myWasCriticalHigh)
            TryFireTrigger(animator.get(), Tga::UIAnimationTrigger::OnValueCriticalHigh);
        else if (isHigh && !myWasHigh)
            TryFireTrigger(animator.get(), Tga::UIAnimationTrigger::OnValueHigh);
        else if (!isLow && (myWasLow || myWasCriticalLow))
            TryFireTrigger(animator.get(), Tga::UIAnimationTrigger::OnNormal);
        else if (!isHigh && (myWasHigh || myWasCriticalHigh))
            TryFireTrigger(animator.get(), Tga::UIAnimationTrigger::OnNormal);
    }

    myWasCriticalLow  = isCriticalLow;
    myWasLow          = isLow;
    myWasHigh         = isHigh;
    myWasCriticalHigh = isCriticalHigh;
}

void UIBinding_C::ApplyValue() const
{
    if (auto img = GetComponent<UIImage_C>())
    {
        auto param = img->GetShaderParam();
        param.x = myCurrentValue;
        img->SetShaderParam(param);
    }
}
// UIBindingManager — central manager, game code calls this when values change
void UIBindingManager::SetFloat(UIFloatBindingType aType, float aValue)
{
    switch (aType)
    {
        case UIFloatBindingType::VolumeMaster:
            AudioManager::GetInstance()->SetMasterVolume(aValue);
            break;
        case UIFloatBindingType::VolumeSFX:
            AudioManager::GetInstance()->SetSFXVolume(aValue);
            break;
        // HUD values — push to all registered bindings of this type
        case UIFloatBindingType::PlayerHealthNormalized:
        case UIFloatBindingType::PlayerAmmoNormalized:
        case UIFloatBindingType::BatteryPower:
            myFloatValues[static_cast<size_t>(aType)] = aValue;
            for (const auto& binding : myBindings)
            {
                if (binding->GetBindingType() == aType)
                {
                    binding->SetTargetValue(aValue);
                }
            }
            break;
        default:
            break;
    }
}

void UIBindingManager::RegisterBinding(std::shared_ptr<UIBinding_C> aBinding)
{
    // Seed with current value so element starts at the right state
    const float current = GetFloat(aBinding->GetBindingType());
    aBinding->SetTargetValue(current);
    myBindings.emplace_back(std::move(aBinding));
}
// Example of how the player calls it when health changes:
UIBindingManager::GetInstance().SetFloat(
    UIFloatBindingType::PlayerHealthNormalized,
    static_cast<float>(hp) / static_cast<float>(hpMax)
);

// UIFloatBindingType enum — all bindable float values in the game
enum class UIFloatBindingType : uint8_t
{
    None,
    VolumeMaster,
    VolumeSFX,
    VolumeMusic,
    VolumeAmbience,
    PlayerHealthNormalized,
    PlayerAmmoNormalized,
    BatteryPower,
    Count
};

Integrating the UI elements into our component system

I first started making a detached system for our UI. We had just begun our migration to a component system, which made the state of our engine at the time a hybrid between a component system and a pure OOP-style system. Since I wasn't too comfortable with component systems yet, and had not gotten the chance to learn them properly, the first thing I did was to make separate game objects for the different elements such as buttons, sliders and so on. Even though it could have worked in theory, I quickly decided to switch focus and start integrating with our component system instead.

I started sketching how such a system could look, with each UI element being a GameObject with corresponding components that get updated individually. One simple component is the UIImage component. It basically takes a default texture and color and slaps it onto the game object. Then, when sliders/buttons/toggles want to fetch their image, they ask the same game object they are components of for their corresponding image, making it very easy to manipulate the underlying image.

class UIImage_C : public Component, public UIAnimatable
{
public:
    UIImage_C()           = default;
    ~UIImage_C() override = default;

    void Init() override;
    void Update(float aDeltaTime) override;
    void Setup(const Tga::UITransformData& aTransform, const Tga::UIImageData& aImage);
    void UpdateTransform();

    void SetVisible(bool aVisible) { myVisible = aVisible; }
    void SetTexture(const Tga::StringId& aTexture);
    void SetColor(const Tga::Color& aColor)         override;
    void SetSize(Tga::Vector2f aSize)               override;
    void SetScale(Tga::Vector2f aScale)             override;
    void SetPosition(Tga::Vector2f aPosition)       override;
    void SetRotation(float aRotationDeg)            override;
    void SetCustomShader(const Tga::SpriteShader* aShader);
    void SetShaderParam(Tga::Vector4f aParam);

    Tga::Color    GetColor()    const override { return myInstanceData.myColor; }
    Tga::Vector2f GetSize()     const override { return myTransformData.size; }
    Tga::Vector2f GetScale()    const override { return myAnimatedScale; }
    Tga::Vector2f GetPosition() const override;
    float         GetRotation() const override;
    Tga::Vector4f GetShaderParam() const { return myShaderParam; }
    bool HasCustomShader() const { return myHasCustomShader; }

private:
    Tga::SpriteSharedData       mySharedData = {};
    Tga::Sprite3DInstanceData   myInstanceData = {};
    Tga::UITransformData        myTransformData;
    Tga::UIImageData            myImageData;
    Tga::Vector4f               myShaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f               myAnimatedScale = { 1.f, 1.f };
    bool                        myHasCustomShader = false;
    bool                        myVisible = true;
    int                         mySortOrder = 0;
};

One mistake I made as the work continued was giving my sliders and toggles their own textures/colors and letting them change their image themselves, instead of manipulating the underlying image components, something I am looking to change. They just need a way to distinguish which UI image component belongs to which part (track, fill and handle for a slider, or background and checkmark for a toggle, for example).
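A sketch of what that distinction could look like, assuming a simple role enum on the image component (the names here are hypothetical stand-ins, not the engine's actual API):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical role tag so sliders/toggles can find the right image component
// on their game object instead of owning textures themselves.
enum class UIImageRole
{
    Default,
    Background,
    Fill,
    Handle,
    Checkmark
};

// Simplified stand-in for UIImage_C with the role attached.
struct UIImageStub
{
    UIImageRole myRole = UIImageRole::Default;
};

// A slider/toggle could then filter its game object's image components by role:
std::shared_ptr<UIImageStub> FindImageByRole(
    const std::vector<std::shared_ptr<UIImageStub>>& someImages, UIImageRole aRole)
{
    for (const auto& img : someImages)
    {
        if (img->myRole == aRole)
        {
            return img;
        }
    }
    return nullptr; // no component with that role on this object
}
```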

void UIButton_C::ApplyState() const
{
    auto img = myImage.lock();
    if (!img)
    {
        return;
    }

    switch (myState)
    {
        case State::Normal:
        {
            img->SetTexture(myData.normalTexture);
            break;
        }
        case State::Hovered:
        {
            img->SetTexture(myData.hoveredTexture.IsEmpty() ? myData.normalTexture : myData.hoveredTexture);
            break;
        }
        case State::Pressed:
        {
            img->SetTexture(myData.pressedTexture.IsEmpty() ? myData.normalTexture : myData.pressedTexture);
            break;
        }
        case State::Disabled:
        {
            img->SetTexture(myData.disabledTexture.IsEmpty() ? myData.normalTexture : myData.disabledTexture);
            break;
        }
        default:
        {
            break;
        }
    }
}

Parenting system

The parenting system was made by adding a component I call the UIRelationship component. This component can have children, a parent, or both. By making the relationship go both ways, it became easier to traverse the hierarchy from either direction.

I initially tried setting up the parent/child relationship while fetching the scene objects in the editor, assigning the relationship component as we went. This caused many crashes since we didn't have the full picture yet, meaning a child could appear in the list before its parent. This was solved by adding an extra pass when we build the scene. It costs one more iteration through all objects but solves the problem of the relationships not being completely known until all objects are fetched. And since most UI scenes won't contain more than a few hundred objects, this is a fine trade-off.

for (const auto& object : objects | std::views::values)
{
    std::vector<ScenePropertyDefinition> properties;
    object->CalculateCombinedPropertySet(aDefinitionManager, properties);

    const UITransformData* uiTransform = nullptr;
    for (auto& property : properties)
    {
        if (property.type == GetPropertyType<CopyOnWriteWrapper<UITransformData>>())
        {
            uiTransform = &property.value.Get<CopyOnWriteWrapper<UITransformData>>()->Get();
        }
    }

    if (!uiTransform || uiTransform->parentName.IsEmpty())
    {
        continue;
    }

    auto child  = objectsByName.find(StringRegistry::RegisterOrGetString(object->GetName()));
    auto parent = objectsByName.find(uiTransform->parentName);

    if (child == objectsByName.end() || parent == objectsByName.end())
    {
        continue;
    }

    if (child->second.get() == parent->second.get())
    {
        printf("ERROR: Object is its own parent!\n");
        continue;
    }

    auto childRelationshipComp = child->second->GetComponent<UIRelationship_C>();
    if (!childRelationshipComp)
    {
        childRelationshipComp = child->second->AddComponent<UIRelationship_C>();
    }
    childRelationshipComp->SetParent(parent->second);

    auto parentRelationshipComp = parent->second->GetComponent<UIRelationship_C>();
    if (!parentRelationshipComp)
    {
        parentRelationshipComp = parent->second->AddComponent<UIRelationship_C>();
    }

    parentRelationshipComp->AddChild(child->second);
}
class UIRelationship_C : public Component
{
public:
    void SetParent(std::shared_ptr<GameObject> aParent) { myParent = aParent; }
    GameObject* GetParent() const { return myParent.get(); }
    void AddChild(std::shared_ptr<GameObject> aChild) { myChildren.push_back(aChild); }
    const std::vector<std::shared_ptr<GameObject>>& GetChildren() const { return myChildren; }
private:
    std::shared_ptr<GameObject> myParent;
    std::vector<std::shared_ptr<GameObject>> myChildren;
};

float GetUIWorldRotation(const GameObject* aObject)
{
    float rot = aObject->GetTransform().GetRotationAsQuaternion().GetYawPitchRoll().z;
    if (auto parentComp = aObject->GetComponent<UIRelationship_C>())
    {
        if (auto parent = parentComp->GetParent())
        {
            rot += GetUIWorldRotation(parent);
        }
    }
    return rot;
}

Tga::Vector3f GetUIWorldPosition(const GameObject* aObject)
{
    Tga::Vector3f pos = aObject->GetTransform().GetPosition();
    if (auto relationship = aObject->GetComponent<UIRelationship_C>())
    {
        if (auto parent = relationship->GetParent())
        {
            Tga::Vector3f parentWorldPos = GetUIWorldPosition(parent);
            float parentWorldRotRad = GetUIWorldRotation(parent) * (FMath::Pi / 180.f);
            float cos = std::cos(parentWorldRotRad);
            float sin = std::sin(parentWorldRotRad);
            Tga::Vector3f rotated = {
                pos.x * cos - pos.y * sin,
                pos.x * sin + pos.y * cos,
                pos.z
            };
            return parentWorldPos + rotated;
        }
    }
    return pos;
}

Right now, there is a known limitation in that an artist could assign A as a child of B and B as a child of A, which could happen if they change their mind mid-edit. That would cause infinite recursion in the world-position and world-rotation helpers above. I'm looking into fixing this.
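One possible fix is to validate the assignment before applying it, walking up the would-be parent's chain and refusing if the child is already an ancestor. A minimal sketch with simplified stand-in types (not the engine's real GameObject/UIRelationship_C):

```cpp
#include <cassert>
#include <memory>

struct GameObject; // minimal stand-in for the engine class

struct UIRelationship
{
    std::shared_ptr<GameObject> myParent;
};

struct GameObject
{
    UIRelationship myRelationship;
};

// Returns false if aChild is aParent itself or one of aParent's ancestors,
// i.e. the assignment would create a cycle. Otherwise applies the parenting.
bool TrySetParent(const std::shared_ptr<GameObject>& aChild,
                  const std::shared_ptr<GameObject>& aParent)
{
    for (GameObject* ancestor = aParent.get(); ancestor != nullptr;
         ancestor = ancestor->myRelationship.myParent.get())
    {
        if (ancestor == aChild.get())
        {
            return false; // cycle detected, refuse the assignment
        }
    }
    aChild->myRelationship.myParent = aParent;
    return true;
}
```

Since UI hierarchies are shallow, the linear walk up the chain is cheap; the editor could surface the `false` case as a warning to the artist instead of silently ignoring it.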

Rebuilding the scene while editing animations and saving correct values for key frames

In our engine and editor we have two separate representations of an object: in game we use the GameObject class, and in the editor the SceneObject class is used. I wanted the UI editor to have a live preview. But this meant marking both the editor scene and the live scene as dirty whenever the user moved a scene object or the animator animated a game object; both of them manipulate objects, triggering a rebuild.

At first, I saved our objects back to the editor for every property that changed during animation. This quickly turned into a huge mess, saving JSON properties with almost every click. This led me to implement a working-copy pattern.

All edits go into a local copy of the animation data. Nothing touches the actual scene property store until a save is called, which issues a single command to save everything back to the editor. This means the artist can freely experiment without polluting the scene with every tiny edit.

void UIAnimationEditor::AddClip(Tga::StringId aName)
{
    Tga::UIAnimationClip newClip;
    newClip.name = aName;
    myWorkingAnimData.clips.push_back(newClip);
    mySelectedClip = aName;
    myRestPosition = mySnapshot.position;
    myRestRotation = mySnapshot.rotation;
}

Tga::SceneProperty newProperty = mySourceProperty;
newProperty.value = Tga::Property::Create<Tga::CopyOnWriteWrapper<Tga::UIAnimationData>>(
    Tga::CopyOnWriteWrapper<Tga::UIAnimationData>::Create(myWorkingAnimData));
auto command = std::make_shared<ChangePropertyOverridesCommand>(
    mySceneObjectId, newProperty, myOverrideProperty);
CommandManager::DoCommand(command);

if (myOnDirty)
{
    myOnDirty();
}

The drawback is that the artist cannot change the scene in real time while the animation editor is open, but this was a trade-off I decided was okay, since the whole point of opening the animation editor is to animate the UI.

The keyframes were solved by capturing a snapshot of the object state when the user opens the editor. The animation data of the SceneObject is also collected and cached for that session while the object is being animated. The object is then animated through the GameObject, which gets a working copy of the animation data. This prevents polluting the SceneObject and makes it easy to work with.

struct AnimationFrame
{
    Tga::Vector4f   shaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f   position = { 0.f, 0.f };
    Tga::Vector2f   size = { 100.f, 100.f };
    Tga::Vector2f   scale = { 1.f, 1.f };
    Tga::Color      color = { 1.f, 1.f, 1.f, 1.f };
    float           rotation = 0.f;
    bool            valid = false;

    Tga::Vector2f   restPosition = { 0.f, 0.f };
    float           restRotation = 0.f;

    // Editor-side (SceneObject) state, captured alongside the live object state
    Tga::Vector3f   sceneObjectPosition = { 0.f, 0.f, 0.f };
    Tga::Vector3f   sceneObjectRotation = { 0.f, 0.f, 0.f };
    Tga::Vector3f   sceneObjectScale    = { 1.f, 1.f, 1.f };
};
mySnapshot = {};
if (myLiveScene)
{
    auto it = myLiveScene->GetGameObjectMap().find(mySceneObjectId);
    if (it != myLiveScene->GetGameObjectMap().end())
    {
        if (auto img = it->second->GetComponent<UIImage_C>())
        {
            mySnapshot.position = img->GetPosition();
            mySnapshot.size     = img->GetSize();
            mySnapshot.scale    = img->GetScale();
            mySnapshot.color    = img->GetColor();
            mySnapshot.rotation = img->GetRotation();
            mySnapshot.valid    = true;
        }
    }
}

Tga::Scene* scene = GetActiveScene();
if (scene)
{
    Tga::SceneObject* sceneObject = scene->GetSceneObject(mySceneObjectId);
    if (sceneObject)
    {
        mySnapshot.sceneObjectPosition = sceneObject->GetPosition();
        mySnapshot.sceneObjectRotation = sceneObject->GetEuler();
        mySnapshot.sceneObjectScale    = sceneObject->GetScale();
    }
}

When the artist clicks "Add Keyframe", CaptureKeyframe reads the live object's current state (color, scale and so on) directly from UIImage_C and packages it, together with a timestamp, into a UIKeyFrame struct. The keyframe is then inserted into the clip's keyframe array, which is kept sorted by time. If a keyframe already exists at that time, it gets replaced.
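The insert-or-replace step described above can be sketched like this (UIKeyFrame is trimmed to a couple of fields here; the real struct holds color, scale and the rest of the captured state):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Trimmed stand-in for the real UIKeyFrame struct.
struct UIKeyFrame
{
    float time = 0.f;
    float scaleX = 1.f;
    float scaleY = 1.f;
};

void InsertKeyframe(std::vector<UIKeyFrame>& someKeyframes, const UIKeyFrame& aFrame)
{
    // A keyframe already at (almost) the same timestamp gets replaced...
    for (auto& kf : someKeyframes)
    {
        if (std::abs(kf.time - aFrame.time) < 0.001f)
        {
            kf = aFrame;
            return;
        }
    }
    // ...otherwise insert at the position that keeps the array sorted by time,
    // so playback can evaluate the clip without re-sorting every frame.
    auto it = std::lower_bound(someKeyframes.begin(), someKeyframes.end(), aFrame,
        [](const UIKeyFrame& a, const UIKeyFrame& b) { return a.time < b.time; });
    someKeyframes.insert(it, aFrame);
}
```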

When the artist closes the editor, we do the save operation, which writes the data to the JSON file, ready to be used in game or in the live preview of the editor.

3D Background Scene Representation

One thing I found hard to implement in a performant way was the ability to live edit a background scene while also being able to switch to a preview mode for the UI. When I first started making the UI Editor, I began with an orthographic camera and a reference resolution which the artist applied to the main canvas. This worked well for editing a single scene with a single non-perspective camera, where every object in the scene represented a UI element. A good start, but adding the ability to live edit a 3D scene in the background quickly introduced some challenges.

We already had a way to render a 3D scene, but our editor was only made for editing and viewing a single scene file at a time. The implementation had a pointer to the active document, with that document owning the current scene. So to actually swap scenes, I would have needed to either switch documents completely, or switch the scene the document pointed to during its render and update.

// Member added in UIScene:
std::shared_ptr<RunTimeScene> myBackgroundScene;

void UIScene::Update(float aDeltaTime)
{
    if (myBackgroundScene)
    {
        myBackgroundScene->Update(aDeltaTime);
    }
    for (auto& element : myUIElements)
    {
        element->Update(aDeltaTime);
    }
}

void RenderManager::RenderScene(UIScene& aUIScene)
{
    if (RunTimeScene* bgScene = aUIScene.GetBackgroundScene().get())
    {
        RenderScene(*bgScene, bgScene->GetSceneType());
    }
    NCE::UIRenderData renderData = BuildUIRenderData(aUIScene);
    SetUICanvasViewport();
    ExecuteUIRenderPipeline(renderData, nullptr, nullptr, aUIScene);
}

I decided to go with the latter, switching the pointer to the scene currently being rendered. This way I could stay in the same viewport and just swap between scenes. We also already had a representation of an in-game scene, which I decided to reuse for the live preview of the UI scene. I gave the UI scene a reference to its background scene, and ran our already implemented update and render functions for game scenes on it. This meant that all loading of the background scene went through the same pipeline as a regular game scene, including physics setup, audio setup and so on. I managed to strip out some of the subsystems that were not relevant, although some had to stay, which is not optimal, but with the time constraints I had it was a reasonable tradeoff for this project. A lightweight scene representation is something I want to implement in the future (Read more below).

if (myUIContext.GetLiveScene() && !myUIContext.GetLiveScene()->GetBackgroundScenePath().IsEmpty())
{
    ImGui::TableSetColumnIndex(7);
    if (ImGui::Selectable(myIsEditingBackground ? ICON_LC_IMAGE : ICON_LC_CLAPPERBOARD, myIsEditingBackground, 0, toolbarItemSize))
    {
        myIsEditingBackground = !myIsEditingBackground;
        if (myIsEditingBackground)
        {
            myUIScene = myScene;
            Tga::StringId bgPath = myUIContext.GetLiveScene()->GetBackgroundScenePath();
            myScene = myCache.GetSceneUsingCache(bgPath);
        }
        else
        {
            Save();
            if (myViewport.IsUsingBackgroundCamera())
            {
                myViewport.ToggleBackgroundCamera(myScene, Editor::GetEditor()->GetSceneObjectDefinitionManager());
            }
            myScene = myUIScene;
            myUIScene = nullptr;
            myUIContext.ClearBackgroundScene();
            myUIContext.MarkDirty();
        }
        mySceneSelection.ClearSelection();
        mySwitchingToBackground = true;
        mySwitchingFrameCounter = 0;
        SetActiveScene(myScene);
    }
    ImGui::PopFont();
    if (ImGui::IsItemHovered())
    {
        ImGui::SetTooltip(myIsEditingBackground ? "Switch to UI editing" : "Switch to background scene editing");
    }
    ImGui::PushFont(ImGuiInterface::GetIconFontLarge());
}

What I want to implement next

As mentioned above, a lightweight background scene representation is a high priority. Right now the background scene runs the same update and render logic as a regular game scene, with some constraints already in place (physics is disabled, for example). But there is still a lot of unnecessary logic running, such as enemies checking against the player each frame, which we have no need for in the editor. What we actually need is a simple way to play animations and render objects. Our current setup also requires a player to be present, which I worked around by keeping a non-rendered player in the background scene just to get the first iteration going. Stripping all of this out would make the background scene a lot more lightweight and easier to work with.
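As a rough illustration of how small that representation could be, here is a sketch of an interface exposing only the two things the editor actually needs, animation ticking and rendering. This is purely an assumption about a possible design, not existing engine code:

```cpp
#include <cassert>
#include <vector>

// A background scene only needs to tick animations and draw; no physics,
// AI, audio or player dependency.
struct IBackgroundRenderable
{
    virtual ~IBackgroundRenderable() = default;
    virtual void TickAnimation(float aDeltaTime) = 0;
    virtual void Render() const = 0;
};

class LightweightBackgroundScene
{
public:
    void Add(IBackgroundRenderable* aRenderable) { myRenderables.push_back(aRenderable); }

    // Only advances animations; gameplay systems never run in the editor.
    void Update(float aDeltaTime)
    {
        for (auto* renderable : myRenderables)
            renderable->TickAnimation(aDeltaTime);
    }

    void Render() const
    {
        for (const auto* renderable : myRenderables)
            renderable->Render();
    }

private:
    std::vector<IBackgroundRenderable*> myRenderables;
};
```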

Another thing I would like to implement is the ability to render UI elements in 3D space, similar to how Unity handles it. Right now the only way to view the 3D world is to switch scene representation completely, which stops the UI elements from rendering. Doing this properly would require rendering UI sprites into 3D world space, handling billboarding, correct position calculations and so on. Ideally it would also mean having a single scene representation containing both UI elements and 3D objects, rather than having to switch between the two. My current setup works well as a first iteration though, and adding the ability to switch camera projection type mid-render would make this a natural next step.

One thing I also want to improve is relative position support in the parenting system. Right now if you animate a parent panel to move left by X units, the children clump to the parent's position on the first keyframe rather than maintaining their offsets. The solution I am looking at is storing each child's offset from its parent at the start of the animation, and adding that offset back when applying keyframe positions, so the parent moves by a relative amount and the children maintain their spacing throughout.
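The offset idea can be sketched as follows, with simplified types standing in for the engine's (the helper names are hypothetical): capture each child's offset once when playback starts, then re-apply it on every animated frame so spacing is preserved.

```cpp
#include <cassert>
#include <vector>

struct Vector2f { float x = 0.f, y = 0.f; }; // stand-in for Tga::Vector2f

struct ChildOffset
{
    int childIndex = 0;
    Vector2f offset; // child position minus parent position at animation start
};

// Capture once when the animation starts playing.
std::vector<ChildOffset> CaptureOffsets(const Vector2f& aParentPos,
                                        const std::vector<Vector2f>& someChildPositions)
{
    std::vector<ChildOffset> offsets;
    for (int i = 0; i < static_cast<int>(someChildPositions.size()); ++i)
    {
        offsets.push_back({ i, { someChildPositions[i].x - aParentPos.x,
                                 someChildPositions[i].y - aParentPos.y } });
    }
    return offsets;
}

// Apply every frame: the child follows the animated parent position while
// keeping its original spacing, instead of clumping onto the parent.
Vector2f ApplyOffset(const Vector2f& aAnimatedParentPos, const ChildOffset& aOffset)
{
    return { aAnimatedParentPos.x + aOffset.offset.x,
             aAnimatedParentPos.y + aOffset.offset.y };
}
```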