Fredrik
Lundin

Game Programmer

Cycles of Deluge C++ · HLSL · Custom Engine

3D action game on a custom engine. Engine programmer across render, scene management, AI and audio.

Contributions
Render Pipeline · Scene Management · Enemy AI · Audio & FMOD · VFX
view project →
Human Resources C++ · HLSL · Custom Engine

Engine iteration on the same codebase — render refactor, editor integration and new rendering features.

Contributions
Render Refactor · Editor Integration · Asset Preview · Enemy Behaviour
view project →
Cannibal Crossing C++ · Custom Engine

Fully responsible for enemy systems — pathfinding, behaviour and state handling across three enemy types.

Contributions
A* Pathfinding · Enemy Behaviour · State Machine · Projectile System
view project →
Captain Hornswaggle C++ · Custom Engine

GOTY winner. Audio, cinematics and project management on a pirate adventure.

Contributions
Custom Audio Manager · Cinematics · Project Manager
view project →
Frogment C# · Unity

UI, audio and game management in Unity.

Contributions
UI · Audio Manager · Game Manager
view project →
Last Unit C# · Unity

First game ever made. Player movement and checkpoint system.

Contributions
Player Movement · Checkpoint System
view project →

about me

I'm a game programming student at The Game Assembly in Stockholm, specialising in engine systems, rendering, and editor tooling. I build the things that make games possible — from deferred rendering pipelines to artist-facing UI editors.

I care about writing systems that are fast, readable, and actually useful for the people who work with them. Outside of code I'm an actor, which probably explains why I care so much about what ends up on screen.

Currently looking for internship and junior positions in engine or tools programming.

Get in Touch

Tool

Asset Brush


Spacing & Draw Mode

Both draw and erase mode let the user choose between hold-and-drag or single click, giving fine-grained control over how assets are placed. Asset selection can be either random or sequential — useful when a more predictable outcome is wanted. Draw mode also supports placing with or without a preview, for even more control.

Spacing is handled in two ways. The user can control the distance between each brush stroke, so each batch is placed at a set interval from the previous one. On top of that, minimum spacing between individual objects can be enabled — either against all previously painted objects, or just within the current batch.

Code
void AssetBrush::Update(bool isLeftMouseDown)
{
    HandleInput();

    if (!myIsActive || !myBrush.isValid)
    {
        myWasMouseDownLastFrame = false;
        return;
    }

    if (myBrush.mode == BrushMode::Paint)
    {
        const bool hasAssets = !myAssetPalette.empty();
        if (!hasAssets)
        {
            myWasMouseDownLastFrame = false;
            return;
        }
    }

    // --- Hold and drag ---
    if (myPlacementMode == PlacementMode::HoldAndDrag)
    {
        if (isLeftMouseDown)
        {
            bool shouldAct = false;

            // First click - always act
            if (!myWasMouseDownLastFrame)
            {
                shouldAct = true;
            }
            // Holding - check distance from last action
            else
            {
                float distanceSqrFromLast = (myBrush.position - myLastPlacedPosition).LengthSqr();
                if (distanceSqrFromLast >= FMath::Sq(myBrush.strokeSpacing))
                {
                    shouldAct = true;
                }
            }

            if (shouldAct)
            {
                if (myBrush.mode == BrushMode::Paint)
                {
                    PlaceObject();
                }
                else if (myBrush.mode == BrushMode::Erase)
                {
                    EraseObjects();
                }
                myLastPlacedPosition = myBrush.position;
            }
        }
    }
    // --- Single click ---
    else if (myPlacementMode == PlacementMode::SingleClick)
    {
        if (isLeftMouseDown && !myWasMouseDownLastFrame)
        {
            if (myBrush.mode == BrushMode::Paint)
            {
                PlaceObject();
            }
            else if (myBrush.mode == BrushMode::Erase)
            {
                EraseObjects();
            }
        }
    }

    myWasMouseDownLastFrame = isLeftMouseDown;
}
std::shared_ptr<Tga::SceneObject> AssetBrush::PlaceObjectWithAsset(Tga::StringId aAssetPath, std::vector<Tga::Vector3f>& someCurrentBatchPositions) const
{
    constexpr int maxAttempts = 10;
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        OffsetResult offsetResult = CalculateOffsetPosition();
        if (!offsetResult.valid)
        {
            continue;
        }

        Tga::Vector3f position = offsetResult.position;
        Tga::Vector3f surfaceNormal = offsetResult.normal;

        if (!IsSlopeValid(surfaceNormal))
        {
            continue;
        }

        if (myBrush.useMinimumSpacing)
        {
            if (myBrush.checkOnlyCurrentBatch)
            {
                if (IsPositionTooCloseInBatch(position, myBrush.minimumSpacing, someCurrentBatchPositions))
                {
                    continue;
                }
            }
            else
            {
                if (IsPositionTooClose(position, myBrush.minimumSpacing))
                {
                    continue;
                }
            }
        }

        someCurrentBatchPositions.push_back(position);

        Tga::Quaternionf finalRotation = CalculateRandomRotation(surfaceNormal);
        Tga::Vector3f rotationEuler = finalRotation.GetYawPitchRoll();
        Tga::Vector3f randomScale = CalculateRandomScale();

        auto object = std::make_shared<Tga::SceneObject>();

        std::filesystem::path assetFilePath(aAssetPath.GetString());
        Tga::StringId objectDefinitionName = Tga::StringRegistry::RegisterOrGetString(assetFilePath.stem().string());

        object->SetSceneObjectDefintionName(objectDefinitionName);
        object->GetTRS().translation = position;
        object->GetTRS().rotation = rotationEuler;
        object->GetTRS().scale = randomScale;

        if (myAdjustPivot)
        {
            // Only used for preview objects
            float zeroOffset = 0.f;
            CalculateNewPivot(object, zeroOffset);
        }

        return object;
    }
    return nullptr;
}
bool AssetBrush::IsPositionTooClose(const Tga::Vector3f& aNewPos, float aMinDistance) const
{
    int cellX = static_cast<int>(floorf(aNewPos.x / mySpatialCellSize));
    int cellY = static_cast<int>(floorf(aNewPos.y / mySpatialCellSize));
    int cellZ = static_cast<int>(floorf(aNewPos.z / mySpatialCellSize));

    for (int dx = -1; dx <= 1; dx++)
    {
        for (int dy = -1; dy <= 1; dy++)
        {
            for (int dz = -1; dz <= 1; dz++)
            {
                auto it = mySpatialGrid.find({ cellX + dx, cellY + dy, cellZ + dz });
                if (it == mySpatialGrid.end())
                {
                    continue;
                }

                for (const Tga::Vector3f& pos : it->second)
                {
                    if ((pos - aNewPos).LengthSqr() < FMath::Sq(aMinDistance))
                    {
                        return true;
                    }
                }
            }
        }
    }

    return false;
}

bool AssetBrush::IsPositionTooCloseInBatch(const Tga::Vector3f& aNewPos, float aMinDistance, const std::vector<Tga::Vector3f>& someBatchPositions)
{
    // Check against all objects in the current placement batch,
    // does not check against other batches placed by brush this session
    for (const auto& pos : someBatchPositions)
    {
        float distanceSqr = (pos - aNewPos).LengthSqr();
        if (distanceSqr < FMath::Sq(aMinDistance))
        {
            return true;
        }
    }

    return false;
}
Multi-Asset Drawing

As a first iteration I only allowed placement of one unique asset, which quickly became underwhelming. Multi-asset drawing gives the user far more freedom in how to design their clusters: with randomization per asset, spacing controls and the ability to tune how many assets to place, each brush stroke can generate something unique.
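As a rough sketch of how the random/sequential choice could be driven, assuming hypothetical names rather than the project's actual types:

```cpp
#include <cstddef>
#include <random>
#include <string>
#include <vector>

// Illustrative sketch, not the project's code. "Sequential" cycles through
// the palette in order; "Random" picks an entry uniformly at random.
enum class SelectionMode { Sequential, Random };

class AssetPalette
{
public:
    explicit AssetPalette(std::vector<std::string> assets)
        : myAssets(std::move(assets)) {}

    const std::string& Next(SelectionMode mode)
    {
        if (mode == SelectionMode::Sequential)
        {
            const std::string& asset = myAssets[myNextIndex];
            myNextIndex = (myNextIndex + 1) % myAssets.size(); // wrap around
            return asset;
        }
        std::uniform_int_distribution<std::size_t> dist(0, myAssets.size() - 1);
        return myAssets[dist(myRng)];
    }

private:
    std::vector<std::string> myAssets;
    std::size_t myNextIndex = 0;
    std::mt19937 myRng{ std::random_device{}() };
};
```

Sequential mode is what makes the outcome predictable: the same palette always yields the same placement order.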

Code
// Slide 1 code
// Slide 2 code
Randomize

Randomization gives the user a tool for creating unique results in each batch. I chose to divide it into yaw/pitch/roll and scale, to give the user an easy overview of what is available. With the inclusion of previews and the ability to turn off auto-generate, I noticed it was also very useful for more detailed work, such as single-asset placement in a concentrated area.
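A minimal sketch of what per-axis randomization could look like. The settings struct and names here are assumptions for illustration, not the editor's real data:

```cpp
#include <random>

// Hypothetical user-configured ranges: yaw/pitch/roll in degrees plus a
// uniform scale, each drawn from its own [min, max] interval.
struct RandomizeSettings
{
    float minYaw = 0.f,   maxYaw = 360.f;
    float minPitch = 0.f, maxPitch = 0.f;
    float minRoll = 0.f,  maxRoll = 0.f;
    float minScale = 1.f, maxScale = 1.f;
};

struct RandomizedTransform { float yaw, pitch, roll, scale; };

inline float RandomInRange(std::mt19937& rng, float min, float max)
{
    std::uniform_real_distribution<float> dist(min, max);
    return dist(rng);
}

RandomizedTransform Randomize(std::mt19937& rng, const RandomizeSettings& s)
{
    // Each axis rolls independently, so every placed asset gets its own pose.
    return {
        RandomInRange(rng, s.minYaw, s.maxYaw),
        RandomInRange(rng, s.minPitch, s.maxPitch),
        RandomInRange(rng, s.minRoll, s.maxRoll),
        RandomInRange(rng, s.minScale, s.maxScale),
    };
}
```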

Code
// Slide 1 code
// Slide 2 code
Slope Filtering

Slope filtering works by sampling the normals in the terrain map, letting the user target only specific slopes in the terrain without having to be careful around edges such as walls or steep cliffs. It also works the other way around: for example, if the user only wants to work on cliff walls, or paint along one.
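The slope test itself can be sketched as an angle check between the sampled normal and the world up axis. This is a minimal stand-in with an assumed Vec3 type, not the engine's code:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Slope angle: 0 degrees = flat ground, 90 degrees = vertical wall.
// The filter keeps positions whose slope lies inside [minSlopeDeg, maxSlopeDeg].
bool IsSlopeValid(const Vec3& unitNormal, float minSlopeDeg, float maxSlopeDeg)
{
    const float cosAngle = Dot(unitNormal, Vec3{ 0.f, 1.f, 0.f });
    // Clamp before acos to guard against floating-point drift outside [-1, 1].
    const float clamped  = std::fmax(-1.f, std::fmin(1.f, cosAngle));
    const float slopeDeg = std::acos(clamped) * 180.f / 3.14159265f;
    return slopeDeg >= minSlopeDeg && slopeDeg <= maxSlopeDeg;
}
```

Setting the range to something like 80 to 90 degrees gives the cliff-walls-only behaviour described above.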

Code
// Slide 1 code
// Slide 2 code

Performance

I encountered several performance issues during the development of the tool. In the first iteration, every spacing check iterated over all assets previously placed by the brush, an O(n) operation: every new placement had to check against every single previously placed object, growing slower the more objects existed. This quickly became a blocker and forced me to handle performance in a scalable way. The first solution was a spatial grid with a cell size based on the minimum spacing allowed between objects, where each new object only checks the 3x3x3 block of cells around itself for previously placed objects. This removed the vast majority of unnecessary iterations during the check.
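The grid idea can be sketched like this, using a minimal stand-in Vec3 and a cell size equal to the minimum spacing, so any two points closer than the spacing always land in the same cell or an adjacent one (illustrative types, not the project's):

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

struct CellKey
{
    int x, y, z;
    bool operator==(const CellKey& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct CellKeyHash
{
    std::size_t operator()(const CellKey& k) const
    {
        return std::hash<int>{}(k.x) ^ (std::hash<int>{}(k.y) << 1) ^ (std::hash<int>{}(k.z) << 2);
    }
};

class SpatialGrid
{
public:
    explicit SpatialGrid(float cellSize) : myCellSize(cellSize) {}

    // Register a placed position in its cell.
    void Insert(const Vec3& p) { myCells[KeyFor(p)].push_back(p); }

    // Number of registered positions in the cell containing p.
    std::size_t CountInCell(const Vec3& p) const
    {
        auto it = myCells.find(KeyFor(p));
        return it == myCells.end() ? 0 : it->second.size();
    }

private:
    CellKey KeyFor(const Vec3& p) const
    {
        return { static_cast<int>(std::floor(p.x / myCellSize)),
                 static_cast<int>(std::floor(p.y / myCellSize)),
                 static_cast<int>(std::floor(p.z / myCellSize)) };
    }

    float myCellSize;
    std::unordered_map<CellKey, std::vector<Vec3>, CellKeyHash> myCells;
};
```

With this layout, a spacing query only has to visit the 27 cells around the candidate instead of the whole scene.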

Even with this improvement, performance was still an issue. After profiling, I found that the biggest bottleneck was our editor's naming system, which iterated over all instances of the same asset to generate a unique name whenever an asset was added to the scene. I replaced this with a simple counter system that names brush-placed assets uniquely, seeded at startup by counting how many instances of each asset already exist. This eliminated the lag that was previously caused.
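A counter of this kind might look like the following sketch, turning name generation into an O(1) lookup. The names are hypothetical, not the project's API:

```cpp
#include <string>
#include <unordered_map>

// Illustrative sketch: brush-placed assets are named "Base_b#", and each base
// name keeps its own running counter instead of scanning existing instances.
class NameCounter
{
public:
    std::string NextName(const std::string& baseName)
    {
        int& counter = myCounters[baseName]; // value-initialized to 0 for new names
        return baseName + "_b" + std::to_string(counter++);
    }

private:
    std::unordered_map<std::string, int> myCounters;
};
```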

Before
After
struct CellKey
{
    int x, y, z;
    bool operator==(const CellKey& other) const
    {
        return x == other.x && y == other.y && z == other.z;
    }
};

struct CellKeyHash
{
    size_t operator()(const CellKey& key) const
    {
        size_t h1 = std::hash<int>{}(key.x);
        size_t h2 = std::hash<int>{}(key.y);
        size_t h3 = std::hash<int>{}(key.z);
        return h1 ^ (h2 << 1) ^ (h3 << 2);
    }
};

uint64_t AssetBrush::GetSpatialKey(int cellX, int cellZ)
{
    return (static_cast<uint64_t>(static_cast<uint32_t>(cellX)) << 32)
        | (static_cast<uint64_t>(static_cast<uint32_t>(cellZ)));
}
void AssetBrush::InitNameCounters()
{
    myAssetNameCounters.clear();

    Tga::Scene* scene = Tga::GetActiveScene();
    for (const auto& [id, obj] : scene->GetSceneObjects())
    {
        std::string name = obj->GetName();

        // Brush-placed assets are named "AssetName_b#"
        size_t separatorPos = name.rfind("_b");
        if (separatorPos == std::string::npos)
        {
            continue;
        }

        std::string baseName = name.substr(0, separatorPos);
        std::string indexStr = name.substr(separatorPos + 2);

        if (indexStr.empty() || !std::all_of(indexStr.begin(), indexStr.end(),
            [](unsigned char c) { return std::isdigit(c) != 0; }))
        {
            continue;
        }

        int index = std::stoi(indexStr);
        int& counter = myAssetNameCounters[baseName];
        if (index >= counter)
        {
            counter = index + 1;
        }
    }
}

GBuffer Readback & Picking

void TerrainMap::ReadPositions(ID3D11Texture2D* source)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    HRESULT hr = Tga::DX11::Context->Map(source, 0, D3D11_MAP_READ, 0, &mapped);
    if (FAILED(hr))
    {
        return;
    }

    // Position is R32G32B32A32_FLOAT — 4 floats per pixel, read directly
    const float* srcData = static_cast<const float*>(mapped.pData);

    for (int y = 0; y < myViewportSize.y; y++)
    {
        for (int x = 0; x < myViewportSize.x; x++)
        {
            int index    = y * myViewportSize.x + x;
            int srcIndex = (y * (mapped.RowPitch / sizeof(float))) + (x * 4);
            myCachedPositions[index] = Tga::Vector3f(
                srcData[srcIndex + 0],
                srcData[srcIndex + 1],
                srcData[srcIndex + 2]
            );
        }
    }
    Tga::DX11::Context->Unmap(source, 0);
}
void TerrainMap::ReadNormals(ID3D11Texture2D* source)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    HRESULT hr = Tga::DX11::Context->Map(source, 0, D3D11_MAP_READ, 0, &mapped);
    if (FAILED(hr))
    {
        return;
    }

    // Normal is R10G10B10A2_UNORM — packed into a single 32-bit uint
    const uint32_t* srcData = static_cast<const uint32_t*>(mapped.pData);

    for (int y = 0; y < myViewportSize.y; y++)
    {
        for (int x = 0; x < myViewportSize.x; x++)
        {
            uint32_t packed = srcData[(y * (mapped.RowPitch / sizeof(uint32_t))) + x];

            // Unpack 10 bits per channel
            float nx = ((packed >>  0) & 0x3FF) / 1023.0f;
            float ny = ((packed >> 10) & 0x3FF) / 1023.0f;
            float nz = ((packed >> 20) & 0x3FF) / 1023.0f;

            // GBuffer stores normals as (n * 0.5 + 0.5), reverse that:
            nx = (nx * 2.0f) - 1.0f;
            ny = (ny * 2.0f) - 1.0f;
            nz = (nz * 2.0f) - 1.0f;
            myCachedNormals[y * myViewportSize.x + x] = Tga::Vector3f(nx, ny, nz);
        }
    }
    Tga::DX11::Context->Unmap(source, 0);
}

Since we already had GBuffer textures in place, I decided to use those as a source for generating the textures I needed for the picking. Two staging textures were used for this, one containing the vertex normals and one containing the world position of the currently drawn scene. I created a component tag that could be added to any scene object, used to mark it as paintable terrain. Only objects carrying this tag were drawn to the staging textures, meaning the brush would only snap to intentionally marked surfaces. In the render pipeline, I then drew the terrain objects first. After these were drawn, we could fetch the textures.

The world position is drawn to a texture using the format DXGI_FORMAT_R32G32B32A32_FLOAT. Since the data is stored as 4 values of 32 bits and we use floats as a representation of our world position, reading the world position back is straightforward.

The normals, however, needed to be converted back to a -1 to 1 range: when drawing to the GBuffer texture we remap them to 0 to 1, since the UNORM texture format can't store negative values. The normals are saved to a texture with the format DXGI_FORMAT_R10G10B10A2_UNORM, using 10 bits for each color channel. On readback, the 10-bit channels are unpacked and the encoding is reversed back to -1 to 1.

Vertex Normal Texture
World Position Texture

Preview Mode

Preview mode came with a lot of challenges. We now needed to save a version of the object that could be updated manually by the user if they wished. In my first iteration this wasn't a concern, since the assets were regenerated between each brush stroke according to the settings chosen by the user. Now we needed to keep the position, rotation and scale of the object (or some combination of them) based on those settings.

For this, I created a struct tied to the already existing scene object. By caching the current previews of the current stroke and its batch, I was able to update them cleanly as the user dragged the mouse across the screen. This also introduced the "Update on move" and "Manual update on R" settings, since the need for more control arose as a product of the preview tool.

struct PreviewObject
{
    std::shared_ptr<Tga::SceneObject> sceneObject;
    bool isValid;

    Tga::Vector2f cachedOffsetTangent;
    Tga::Quaternionf cachedRotation;
    Tga::Vector3f currentTerrainNormal;
    Tga::Vector3f cachedScale;
    Tga::StringId assetPath;
    float pivotOffset = 0.f;
};

Tangent & Bitangent

void AssetBrush::GetSurfaceTangentFrame(
    const Tga::Vector3f& normal,
    Tga::Vector3f& outTangent,
    Tga::Vector3f& outBitangent)
{
    // Pick a reference vector that isn't parallel to the normal
    Tga::Vector3f reference = (fabsf(normal.y) < 0.99f)
        ? Tga::Vector3f(0, 1, 0)
        : Tga::Vector3f(1, 0, 0);

    outTangent = reference.Cross(normal);
    outTangent.Normalize();
    outBitangent = normal.Cross(outTangent);
    outBitangent.Normalize();
}

An interesting geometric problem arose from the fact that the brush uses a flat 2D disc as a representation of the area the assets can be placed in. Since the brush originally scattered objects in the XZ plane, it completely broke when trying to scatter on vertical or very steep surfaces, placing the objects in a line rather than spreading them out. This was solved by finding the tangent and bitangent of the surface normal and using those as the up/down and left/right directions for the position offset.
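Applying the scatter offset in that tangent frame can be sketched as follows, with a minimal stand-in Vec3 and illustrative names:

```cpp
struct Vec3
{
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return { x + o.x, y + o.y, z + o.z }; }
    Vec3 operator*(float s) const { return { x * s, y * s, z * s }; }
};

// (u, v) is a point sampled inside the flat 2D brush disc. The tangent and
// bitangent span the plane of the surface, so the disc follows steep or
// vertical terrain instead of collapsing into a line.
Vec3 OffsetOnSurface(const Vec3& center, const Vec3& tangent, const Vec3& bitangent,
                     float u, float v)
{
    return center + tangent * u + bitangent * v;
}
```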

One problem still persists: the scatter area is in screen space, which occasionally places objects far away when part of the disc falls outside the intended area but still covers valid terrain. This could be solved with a simple world-position check as well, so that an asset is never placed too far from the world position of the brush center — a feature I am looking to add.
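A hedged sketch of that planned check, comparing squared distances against the brush radius so no square root is needed (names and the Vec3 type are assumptions, not implemented project code):

```cpp
struct Vec3 { float x, y, z; };

float DistanceSqr(const Vec3& a, const Vec3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Reject candidate positions farther from the brush center than the radius,
// so screen-space scatter can't land on distant terrain behind the disc.
bool IsWithinBrushRadius(const Vec3& candidate, const Vec3& brushCenter, float radius)
{
    return DistanceSqr(candidate, brushCenter) <= radius * radius;
}
```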


Pivot

Another problem was that the object's pivot did not always align with the surface the asset was placed on. Some of our assets had their pivot in the center, which made the object clip through the surface rather than sit on top of it. This was solved by subtracting the difference between the object's Y boundary and its center whenever a difference was found. I made this feature a toggle as well, so the artist can control the pivot offset themselves.
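The pivot correction can be sketched as a bounding-box offset. This is a minimal stand-in; the real CalculateNewPivot works on the engine's scene objects:

```cpp
// Local-space vertical bounds of a mesh, relative to its pivot.
struct Bounds { float minY, maxY; };

// Vertical offset needed so the object's lowest point sits on the surface.
// If minY is negative the mesh extends below its pivot, so lifting by -minY
// places the bottom of the bounding box exactly on the surface.
float CalculatePivotOffset(const Bounds& localBounds)
{
    return localBounds.minY < 0.f ? -localBounds.minY : 0.f;
}
```

A center pivot on a unit-tall mesh (bounds -0.5 to 0.5) needs a lift of 0.5, while a bottom pivot needs none.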

Undo Integration

One problem I encountered was that the undo action was inconsistent. Every user expects a clean, functional undo, which is easy to deprioritize during development when focus lies elsewhere. At first, undo in our editor reverted one command at a time, so when placing many assets it became tedious: one undo removed a single asset. I changed it to undo per brush stroke instead, by grouping all assets painted in that stroke into one single command.
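One way to express that grouping is a composite command. This is an illustrative sketch, not the editor's actual command API:

```cpp
#include <memory>
#include <vector>

// Hypothetical command interface for the sketch.
class Command
{
public:
    virtual ~Command() = default;
    virtual void Do() = 0;
    virtual void Undo() = 0;
};

// Wraps every placement from one brush stroke so a single undo reverts them all.
class CompositeCommand : public Command
{
public:
    void Add(std::unique_ptr<Command> cmd) { myCommands.push_back(std::move(cmd)); }

    void Do() override
    {
        for (auto& cmd : myCommands) { cmd->Do(); }
    }

    void Undo() override
    {
        // Undo in reverse order so dependent operations unwind correctly.
        for (auto it = myCommands.rbegin(); it != myCommands.rend(); ++it) { (*it)->Undo(); }
    }

private:
    std::vector<std::unique_ptr<Command>> myCommands;
};
```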

Tool

UI Editor with Animator


The UI Essentials

What I call the UI essentials are what every artist expects when building a menu: buttons, sliders, toggles, text and images. With these, one can build the simplest of menus and still reach the goal. Using a component system behind the scenes, the artist can drag and drop, or right-click to create a child of another object and choose whichever UI element they wish. These are then fully editable in the editor, including hovered/on-click textures and colors for the different elements, as well as hit boxes, anchor points, pivots, custom shaders and sort order.

Code
// Add your code here
UI Animator

One big focus for me, and the reason behind many of my choices, is the visual part of the job: it's where I find the joy in programming. Therefore, I decided to give a lot of focus to the animation part of the editor. It gives the artist a tool to make elements interact with and give feedback to the user. By capturing keyframes on a timeline, the artist can set up animations for scale, color, shader parameters, position and rotation. One big priority for me was testing it on my artist, and I tried building the UX to their liking, using ImGui as a base.

Code
// Add your code here
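A core piece of any keyframe timeline is evaluating a track at an arbitrary time. A minimal, hypothetical sketch of linear interpolation between neighboring keyframes might look like this (illustrative types, not the editor's):

```cpp
#include <cstddef>
#include <vector>

// One keyframe on a single float track; keyframes are kept sorted by time.
struct KeyFrame { float time; float value; };

float EvaluateTrack(const std::vector<KeyFrame>& keys, float t)
{
    if (keys.empty()) { return 0.f; }
    // Clamp to the first/last keyframe outside the animated range.
    if (t <= keys.front().time) { return keys.front().value; }
    if (t >= keys.back().time)  { return keys.back().value; }

    for (std::size_t i = 1; i < keys.size(); ++i)
    {
        if (t < keys[i].time)
        {
            const KeyFrame& a = keys[i - 1];
            const KeyFrame& b = keys[i];
            const float alpha = (t - a.time) / (b.time - a.time);
            return a.value + (b.value - a.value) * alpha; // linear blend
        }
    }
    return keys.back().value;
}
```

The same idea extends to vectors and colors by interpolating each component.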
Parenting system

Another key feature that many artists take for granted is the ability to keep elements in a child/parent hierarchy, so that when editing or animating many elements that share most of their traits, they do not have to duplicate work across all of them. I also wanted this to be easy: with the click of a button, or by dragging and dropping one element onto another, a child/parent relationship is created between the two, making the feature feel familiar from other engines.

Code
// Add your code here
3D background scene with live editing

In addition to using a standard image, I wanted the ability to use a live scene as a background for the menus. This was implemented in our custom engine's editor, giving the user the ability to switch between UI element editing and live scene editing. I also wanted a way to set a custom camera angle, letting the user toggle between an "Edit" camera and a "View" camera, while still making it possible to edit the live scene through the "View" camera, to give the artist more freedom.

Code
// Add your code here

Integrating the UI elements into our component system

I first started by making a detached system for our UI. We had just started our merge to a component system, which made the current state of our engine a hybrid between a component system and a pure OOP-style system. Since I wasn't comfortable with component systems at the time and hadn't had the chance to learn them properly, the first thing I did was make separate game objects for the different elements such as buttons and sliders. Even though it could have worked in theory, I quickly decided to switch focus and start integrating with our component system instead.

I started sketching how such a system could look, with each UI element being a GameObject with corresponding components that are updated individually. One simple component is the UIImage component: it takes a default texture and color and applies them to the game object. Then, when sliders, buttons or toggles want to fetch their image, they query the same game object they are components of, which makes the underlying image very easy to manipulate.

class UIImage_C : public Component, public UIAnimatable
{
public:
    UIImage_C()           = default;
    ~UIImage_C() override = default;

    void Init() override;
    void Update(float aDeltaTime) override;
    void Setup(const Tga::UITransformData& aTransform, const Tga::UIImageData& aImage);
    void UpdateTransform();

    void SetVisible(bool aVisible) { myVisible = aVisible; }
    void SetTexture(const Tga::StringId& aTexture);
    void SetColor(const Tga::Color& aColor)         override;
    void SetSize(Tga::Vector2f aSize)               override;
    void SetScale(Tga::Vector2f aScale)             override;
    void SetPosition(Tga::Vector2f aPosition)       override;
    void SetRotation(float aRotationDeg)            override;
    void SetCustomShader(const Tga::SpriteShader* aShader);
    void SetShaderParam(Tga::Vector4f aParam);

    Tga::Color    GetColor()    const override { return myInstanceData.myColor; }
    Tga::Vector2f GetSize()     const override { return myTransformData.size; }
    Tga::Vector2f GetScale()    const override { return myAnimatedScale; }
    Tga::Vector2f GetPosition() const override;
    float         GetRotation() const override;
    Tga::Vector4f GetShaderParam() const { return myShaderParam; }
    bool HasCustomShader() const { return myHasCustomShader; }

private:
    Tga::SpriteSharedData       mySharedData = {};
    Tga::Sprite3DInstanceData   myInstanceData = {};
    Tga::UITransformData        myTransformData;
    Tga::UIImageData            myImageData;
    Tga::Vector4f               myShaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f               myAnimatedScale = { 1.f, 1.f };
    bool                        myHasCustomShader = false;
    bool                        myVisible = true;
    int                         mySortOrder = 0;
};

One mistake I made as the work continued was giving my sliders and toggles separate textures/colors, with each changing its underlying image itself instead of manipulating the underlying image components, something I am looking to change in the coming week. They just need a way to distinguish which UI image component belongs to which part (fill, handle and background for a toggle, for example).

void UIButton_C::ApplyState() const
{
    auto img = myImage.lock();
    if (!img)
    {
        return;
    }

    switch (myState)
    {
        case State::Normal:
        {
            img->SetTexture(myData.normalTexture);
            break;
        }
        case State::Hovered:
        {
            img->SetTexture(myData.hoveredTexture.IsEmpty() ? myData.normalTexture : myData.hoveredTexture);
            break;
        }
        case State::Pressed:
        {
            img->SetTexture(myData.pressedTexture.IsEmpty() ? myData.normalTexture : myData.pressedTexture);
            break;
        }
        case State::Disabled:
        {
            img->SetTexture(myData.disabledTexture.IsEmpty() ? myData.normalTexture : myData.disabledTexture);
            break;
        }
        default:
        {
            break;
        }
    }
}

Parenting system

The parenting system was made by adding a component I called the UIRelationship component. This component can have children, a parent, or both. Making the link go both ways made the hierarchy easier to keep track of.

I initially tried setting up the parent/child relationship while fetching the scene objects in the editor, assigning the relationship component as we went. This caused many crashes, since we didn't yet have the full picture: a child could appear in the list before its parent. The fix was a second pass once the scene was built. This adds one extra iteration through all objects, but it solves the problem of the relationships not being completely known until all objects are fetched. And since most UI scenes won't contain more than a few hundred objects, this is a fine trade-off.

for (const auto& object : objects | std::views::values)
{
    std::vector<ScenePropertyDefinition> properties;
    object->CalculateCombinedPropertySet(aDefinitionManager, properties);

    const UITransformData* uiTransform = nullptr;
    for (auto& property : properties)
    {
        if (property.type == GetPropertyType<CopyOnWriteWrapper<UITransformData>>())
        {
            uiTransform = &property.value.Get<CopyOnWriteWrapper<UITransformData>>()->Get();
        }
    }

    if (!uiTransform || uiTransform->parentName.IsEmpty())
    {
        continue;
    }

    auto child  = objectsByName.find(StringRegistry::RegisterOrGetString(object->GetName()));
    auto parent = objectsByName.find(uiTransform->parentName);

    if (child == objectsByName.end() || parent == objectsByName.end())
    {
        continue;
    }

    if (child->second.get() == parent->second.get())
    {
        printf("ERROR: Object is its own parent!\n");
        continue;
    }

    auto childRelationshipComp = child->second->AddComponent<UIRelationship_C>();
    childRelationshipComp->SetParent(parent->second);

    auto parentRelationshipComp = parent->second->GetComponent<UIRelationship_C>();
    if (!parentRelationshipComp)
    {
        parentRelationshipComp = parent->second->AddComponent<UIRelationship_C>();
    }

    parentRelationshipComp->AddChild(child->second);
}
class UIRelationship_C : public Component
{
public:
    void SetParent(std::shared_ptr<GameObject> aParent) { myParent = aParent; }
    GameObject* GetParent() const { return myParent.get(); }
    void AddChild(std::shared_ptr<GameObject> aChild) { myChildren.push_back(aChild); }
    const std::vector<std::shared_ptr<GameObject>>& GetChildren() const { return myChildren; }
private:
    std::shared_ptr<GameObject> myParent;
    std::vector<std::shared_ptr<GameObject>> myChildren;
};

float GetUIWorldRotation(const GameObject* aObject)
{
    float rot = aObject->GetTransform().GetRotationAsQuaternion().GetYawPitchRoll().z;
    if (auto parentComp = aObject->GetComponent<UIRelationship_C>())
    {
        if (auto parent = parentComp->GetParent())
        {
            rot += GetUIWorldRotation(parent);
        }
    }
    return rot;
}

Tga::Vector3f GetUIWorldPosition(const GameObject* aObject)
{
    Tga::Vector3f pos = aObject->GetTransform().GetPosition();
    if (auto relationship = aObject->GetComponent<UIRelationship_C>())
    {
        if (auto parent = relationship->GetParent())
        {
            Tga::Vector3f parentWorldPos = GetUIWorldPosition(parent);
            float parentWorldRotRad = GetUIWorldRotation(parent) * (FMath::Pi / 180.f);
            float cos = std::cos(parentWorldRotRad);
            float sin = std::sin(parentWorldRotRad);
            Tga::Vector3f rotated = {
                pos.x * cos - pos.y * sin,
                pos.x * sin + pos.y * cos,
                pos.z
            };
            return parentWorldPos + rotated;
        }
    }
    return pos;
}

Right now there is a known limitation: an artist could assign A as a child of B and B as a child of A, which could happen if they change their mind mid-edit. That would cause infinite recursion in the world-position and world-rotation lookups. I'm looking into fixing this.
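A possible guard, sketched with a minimal stand-in node type: before assigning a parent, walk up the prospective parent's chain and refuse the assignment if the child is already an ancestor. This is an illustrative fix, not the engine's implementation:

```cpp
// Minimal stand-in for a UI element with a parent pointer.
struct UINode
{
    UINode* parent = nullptr;
};

// Returns true if making `newParent` the parent of `child` would close a loop.
bool WouldCreateCycle(const UINode* child, const UINode* newParent)
{
    for (const UINode* node = newParent; node != nullptr; node = node->parent)
    {
        if (node == child)
        {
            return true; // child is already an ancestor of newParent
        }
    }
    return false;
}
```

The walk is bounded by the hierarchy depth, so the check is cheap enough to run on every re-parent.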

Rebuilding the scene while editing animations and saving correct values for key frames

In our engine and editor we have two separate representations of an object: in game we use the GameObject class, and in the editor the SceneObject class is used. I wanted the UI editor to have a live preview, but this meant marking both the editor and the live scene as dirty when the user moved a scene object or the animator animated a game object; both manipulate objects, triggering a rebuild.

At first, I saved the object back to the editor for every property changed on it during animation. This quickly turned into a huge mess, with JSON properties being saved on almost every click. That led me to implement a working-copy pattern.

All edits go into a local copy of the animation data. Nothing touches the actual scene property store until a save is called, which issues a single command to save everything back to the editor. This means the artist can freely experiment without polluting the scene with every tiny edit.

void UIAnimationEditor::AddClip(Tga::StringId aName)
{
    Tga::UIAnimationClip newClip;
    newClip.name = aName;
    myWorkingAnimData.clips.push_back(newClip);
    mySelectedClip = aName;
    myRestPosition = mySnapshot.position;
    myRestRotation = mySnapshot.rotation;
}

Tga::SceneProperty newProperty = mySourceProperty;
newProperty.value = Tga::Property::Create<Tga::CopyOnWriteWrapper<Tga::UIAnimationData>>(
    Tga::CopyOnWriteWrapper<Tga::UIAnimationData>::Create(myWorkingAnimData));
auto command = std::make_shared<ChangePropertyOverridesCommand>(
    mySceneObjectId, newProperty, myOverrideProperty);
CommandManager::DoCommand(command);

if (myOnDirty)
{
    myOnDirty();
}

The drawback is that the artist cannot change the scene in real time while the animation editor is open, but this was a trade-off I decided was acceptable, since the whole point of opening the animation editor is to animate the UI.

The keyframes were solved by capturing a snapshot of the object's state when the user opens the editor. The animation data of the SceneObject is also collected and cached for that session while the object is being animated. The object is then animated through the GameObject, which gets a working copy of the animation data. This prevents polluting the SceneObject and makes the data easy to work with.

struct AnimationFrame
{
    Tga::Vector4f   shaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f   position = { 0.f, 0.f };
    Tga::Vector2f   size = { 100.f, 100.f };
    Tga::Vector2f   scale = { 1.f, 1.f };
    Tga::Color      color = { 1.f, 1.f, 1.f, 1.f };
    float           rotation = 0.f;
    bool            valid = false;

    Tga::Vector2f   restPosition = { 0.f, 0.f };
    float           restRotation = 0.f;
};
mySnapshot = {};
if (myLiveScene)
{
    auto it = myLiveScene->GetGameObjectMap().find(mySceneObjectId);
    if (it != myLiveScene->GetGameObjectMap().end())
    {
        if (auto img = it->second->GetComponent<UIImage_C>())
        {
            mySnapshot.position = img->GetPosition();
            mySnapshot.size     = img->GetSize();
            mySnapshot.scale    = img->GetScale();
            mySnapshot.color    = img->GetColor();
            mySnapshot.rotation = img->GetRotation();
            mySnapshot.valid    = true;
        }
    }
}

Tga::Scene* scene = GetActiveScene();
if (scene)
{
    Tga::SceneObject* sceneObject = scene->GetSceneObject(mySceneObjectId);
    if (sceneObject)
    {
        mySnapshot.sceneObjectPosition = sceneObject->GetPosition();
        mySnapshot.sceneObjectRotation = sceneObject->GetEuler();
        mySnapshot.sceneObjectScale    = sceneObject->GetScale();
    }
}

When the artist clicks "Add Keyframe", CaptureKeyframe reads the live object's current state directly from UIImage_C and packages it, together with a timestamp, into a UIKeyFrame struct holding color, scale and the other animated properties. The keyframe is then inserted into the clip's keyframe array, which is kept sorted by time. If a keyframe already exists at that time, it is replaced.
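As a rough sketch of that insert-or-replace step, using simplified stand-in types (this UIKeyFrame is an assumption for the example; the real struct carries position, color, scale and the rest):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Simplified stand-in for the real UIKeyFrame; not the engine's actual type.
struct UIKeyFrame
{
    float time = 0.f;
    float value = 0.f; // the real struct holds position, scale, color, ...
};

// Insert a keyframe while keeping the array sorted by time.
// If a keyframe already exists at (approximately) that time, replace it.
void InsertKeyframe(std::vector<UIKeyFrame>& someKeyframes, const UIKeyFrame& aFrame)
{
    auto it = std::lower_bound(
        someKeyframes.begin(), someKeyframes.end(), aFrame,
        [](const UIKeyFrame& a, const UIKeyFrame& b) { return a.time < b.time; });

    if (it != someKeyframes.end() && std::abs(it->time - aFrame.time) < 1e-4f)
    {
        *it = aFrame; // replace the existing keyframe at this time
    }
    else
    {
        someKeyframes.insert(it, aFrame); // insert, preserving time order
    }
}
```

Keeping the array sorted on insert means playback can walk it linearly without re-sorting.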

When the artist closes the editor, the save operation runs, writing the data to the JSON file, ready to be used in-game or in the editor's live preview.

Project

Cycles of Deluge

▶ Watch Trailer
2024 The Game Assembly Engine Programmer C++ · DirectX 11 · HLSL · FMOD

A 3D action game built on a custom engine. I was responsible for several core systems — render pipeline, scene management, enemy AI and audio — working closely with other disciplines throughout.

Render Pipeline

Responsible for the full render pipeline, balancing requests from other disciplines while building something fast and maintainable. Each frame, render data is collected and all passes are set up: shadows, GBuffer, lighting, bloom and post-processing.

// ***** SHADOWS *****
graphicsStateStack.Push();
graphicsStateStack.SetRasterizerState(RasterizerState::Shadow);
graphicsStateStack.SetCamera(myShadowCamera);

myRenderData.shadowMap.Clear();
myRenderData.shadowMap.SetAsActiveTarget();
DrawModelsToShadowMap(shadowCasters, graphicsEngine, graphicsStateStack);

graphicsStateStack.Pop();

// ***** GBUFFER PASS *****
graphicsStateStack.Push();

myGBuffer.ClearTextures();

// Unbind the GBuffer textures from pixel shader slots 6-10
// before they are written as render targets again
std::array<ID3D11ShaderResourceView*, 5> nullViews = {};
Tga::DX11::Context->PSSetShaderResources(6, 5, nullViews.data());

graphicsStateStack.SetBlendState(BlendState::Disabled);
myGBuffer.SetAsActiveTarget(DX11::DepthBuffer);
DrawModelsToGBuffer(culledOpaqueModels, graphicsStateStack);

graphicsStateStack.Pop();
Scene & Game Object Management

Responsible for loading and caching scenes and scene objects, including the full pipeline from engine to editor. Built a Game Object factory and a Property applier to keep it maintainable and readable, since every programmer on the team would work with this for the entire project. The registry pattern in the factory made adding new object types a one-liner.

std::weak_ptr<GameObject> SceneManager::BuildGameObject(
    Tga::TextureManager& aTextureManager,
    const std::shared_ptr<GameScene>& aGameScene,
    std::vector<Tga::ScenePropertyDefinition>& someObjectProperties,
    const Tga::Matrix4x4f& anObjectTransform, bool aMarkForAdd)
{
    bool isPlayer = false;
    TagData objectTagData = { Tag::Other };

    // Find tag in ObjectProperties
    for (auto& property : someObjectProperties)
    {
        if (property.type == GetPropertyType<CopyOnWriteWrapper<TagData>>())
        {
            objectTagData = property.value.Get<CopyOnWriteWrapper<TagData>>()->Get();
            break;
        }
    }

    GameObjectFactory gameObjectFactory(someObjectProperties);
    std::shared_ptr<GameObject> gameObject = gameObjectFactory.CreateGameObject(objectTagData, isPlayer);

    PropertyApplier::ApplyGeneralProperties(aTextureManager, gameObject, someObjectProperties, anObjectTransform);

    gameObject->Init();

    if (auto actor = std::dynamic_pointer_cast<Actor, GameObject>(gameObject))
    {
        actor->ApplyProps(someObjectProperties);
    }

    aGameScene->AddGameObject(gameObject, aMarkForAdd);
    return gameObject;
}
GameObjectFactory::GameObjectFactory(
    const std::vector<Tga::ScenePropertyDefinition>& aObjectProperties)
    : myObjectProperties(aObjectProperties)
{
    // Register known game object types — add new types here
    myGameObjectRegistry =
    {
        { Tga::Tag::Other,   [] { return std::make_shared<GameObject>(); } },
        { Tga::Tag::Player,  [] { return std::make_shared<Player>(); } },
        { Tga::Tag::Melee,   [] { return std::make_shared<WickerMan>(); } },
        { Tga::Tag::Ranged,  [] { return std::make_shared<BogHag>(); } },
        { Tga::Tag::Boss,    [] { return std::make_shared<Boss>(); } },
        { Tga::Tag::Attack,  [&]() -> std::shared_ptr<GameObject>
        {
            if (auto property = PropertyApplier::FindProperty<Tga::AttackProp>(myObjectProperties))
            {
                switch (property->Get().type)
                {
                    case Tga::AttackProp::AttackType::StaticAttack:
                        return std::make_shared<Attack>(property->Get());
                    case Tga::AttackProp::AttackType::ProjectileAttack:
                        return std::make_shared<ProjectileAttack>(property->Get());
                    default:
                        return std::make_shared<Attack>(property->Get());
                }
            }
            return std::make_shared<GameObject>();
        }},
    };
}
VFX Manager

Built a dedicated VFX Manager to cache and hold VFX assets, with a clean public interface for triggering them by ID from anywhere in the game.

class VFXManager
{
public:
    VFXAsset GetVFX(Tga::StringId aVFX);
    const std::unordered_map<Tga::StringId, VFXAsset>& GetCached3DVFX();

    static VFXManager* GetInstance();
    static void DestroyInstance();

    void CacheVfx(std::vector<Tga::ScenePropertyDefinition>& someProps, Tga::TextureManager& aTextureManager);
    void PlayVFX(Tga::StringId id, const Tga::Vector3f& pos);
    void PlayVFX(Tga::StringId id, const Tga::Matrix4x4f& aTransform);
    std::shared_ptr<SpawnedObject> PlayVFXReturnObject(Tga::StringId vfxName, const Tga::Matrix4x4f& aTransform);

private:
    static VFXManager* ourInstance;
    std::unordered_map<Tga::StringId, VFXAsset> myCachedVFXAssets;
};

Enemy State Machine & Baseline Behaviour

Built the enemy foundation for colleagues to iterate on — a state machine with shareable states across all enemy types, while still allowing custom unique states per enemy. Each controller registers its own states on startup, keeping things decoupled and easy to extend.

void StateMachine::RegisterState(EnemyStateID aStateID, std::unique_ptr<EnemyState> anEnemyState)
{
    anEnemyState->SetOwner(this);
    myCachedStates.try_emplace(aStateID, std::move(anEnemyState));
}

void StateMachine::ChangeState(Enemy* anEnemy, EnemyStateID anEnemyStateID)
{
    if (myCachedStates.contains(anEnemyStateID))
    {
        // Guard against the very first transition, before any state is active
        if (myCurrentState)
        {
            myCurrentState->OnExit(anEnemy);
        }
        myCurrentStateID = anEnemyStateID;
        myCurrentState = myCachedStates.at(anEnemyStateID).get();
        anEnemy->SetAnimation(anEnemyStateID);
        myCurrentState->OnEnter(anEnemy);
    }
}
void BogZombieController::Init()
{
    // Register all states for this enemy type
    myStateMachine.RegisterState(EnemyStateID::Idle,   std::make_unique<IdleState>());
    myStateMachine.RegisterState(EnemyStateID::Wander, std::make_unique<WanderState>());
    myStateMachine.RegisterState(EnemyStateID::Engage, std::make_unique<EngageState>());
    myStateMachine.RegisterState(EnemyStateID::Attack, std::make_unique<AttackState>());

    myStateMachine.SetInitialState(EnemyStateID::Idle);
}
void AttackState::Update(Enemy* anEnemy, float aDeltaTime)
{
    const float attackRange = anEnemy->GetData().attackRange;
    const Tga::Vector3f playerPosition = Blackboard::GetInstance()->GetVector3("playerPosition");
    const Tga::Vector3f difference = playerPosition - anEnemy->GetPosition();

    if (difference.Length() > attackRange)
    {
        myStateMachine->ChangeState(anEnemy, EnemyStateID::Engage);
        return;
    }

    anEnemy->RotateTowardsVelocityOrDirection(difference, aDeltaTime);

    if (anEnemy->TryAttack())
    {
        anEnemy->SetAnimation(EnemyStateID::Attack);
    }
}
Audio Manager & FMOD Integration

Built the Audio Manager and integrated FMOD, ready for colleagues to iterate on. Also responsible for communication with APA (a school for game audio) and adding SFX and music. Handled reverb and spatial sound integration with the game world.

bool AudioManager::Init()
{
    FMOD_RESULT result;
    myStudioSystem = nullptr;
    result = FMOD::Studio::System::create(&myStudioSystem);

    if (result != FMOD_OK)
    {
        return false;
    }

    result = myStudioSystem->initialize(
        1024,
        FMOD_STUDIO_INIT_LIVEUPDATE | FMOD_STUDIO_INIT_NORMAL,
        FMOD_INIT_NORMAL,
        nullptr
    );

    if (result != FMOD_OK)
    {
        return false;
    }

    myStudioSystem->getCoreSystem(&myCoreSystem);
    PlayReverbEventAtStartUp();

    return true;
}

Project

Human Resources

▶ Watch Trailer
2024 The Game Assembly Engine Programmer C++ · DirectX 11 · HLSL · SSAO

Continuation of the same custom engine from Cycles of Deluge. The focus shifted towards iteration and editor integration — refactoring the render pipeline and building new rendering features while making the editor actually usable for level designers.

Render Pipeline Refactor

Responsible for a full refactor of the render pipeline, introducing a command pattern with render data containing commands for each pass. This made it easier to add features and also brought the editor up to speed — level designers could now see changes without launching the game. New features added this project: spotlights, SSAO, line lights and a player flashlight.

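Roughly, the refactor boils down to this: each frame produces per-pass lists of self-contained commands, and the renderer just executes them in order. The names below (RenderPass, RenderCommand, FrameRenderData) are illustrative assumptions for the sketch, not the project's actual API:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative sketch of a command-based render pipeline; the pass set and
// command payload are assumptions, not the project's actual types.
enum class RenderPass { Shadow, GBuffer, Lighting, Count };

struct RenderCommand
{
    // In a real pipeline this would carry mesh/material/transform data;
    // a callable is enough to show the structure.
    std::function<void()> execute;
};

struct FrameRenderData
{
    std::vector<RenderCommand> passes[static_cast<std::size_t>(RenderPass::Count)];

    void Submit(RenderPass aPass, RenderCommand aCommand)
    {
        passes[static_cast<std::size_t>(aPass)].push_back(std::move(aCommand));
    }
};

// The renderer no longer cares who produced the commands (game or editor);
// it just runs each pass in order.
void ExecuteFrame(const FrameRenderData& aFrame)
{
    for (const auto& pass : aFrame.passes)
    {
        for (const auto& command : pass)
        {
            command.execute();
        }
    }
}
```

Because the editor can fill the same frame data as the game, level designers see the full pipeline without launching the game.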
Editor Integration

Previously, the editor rendered from the actual scene object properties and had its own ID pass for outlining selected objects. After removing those passes, I built render commands from the scene objects, including ID commands, and fed them into the same renderer the game uses.

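A minimal sketch of the idea: the editor walks its scene objects and emits an ID command per visible object, which the shared renderer writes to an ID target so picking and outlining can read object IDs back. The types here (SceneObjectStub, IdCommand) are stand-ins I made up for the example:

```cpp
#include <cstdint>
#include <vector>

// Stand-in types for the sketch; not the project's actual classes.
using ObjectId = std::uint32_t;

struct SceneObjectStub
{
    ObjectId id = 0;
    bool visible = true;
};

struct IdCommand
{
    ObjectId id; // written to the ID render target so picking can read it back
};

// Build ID commands from the editor's scene objects, skipping hidden ones.
// These feed the same renderer the game uses, replacing the editor-only pass.
std::vector<IdCommand> BuildIdCommands(const std::vector<SceneObjectStub>& someObjects)
{
    std::vector<IdCommand> commands;
    commands.reserve(someObjects.size());
    for (const auto& object : someObjects)
    {
        if (object.visible)
        {
            commands.push_back(IdCommand{ object.id });
        }
    }
    return commands;
}
```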
Single Asset & VFX Preview

A long-standing request from Cycles of Deluge. Added the ability to preview individual assets in the editor with live lighting, and to preview and live-edit VFX while viewing them.

Enemy Behaviour

Integrated enemy behaviour using third-party libraries for navmesh generation, pathfinding and agent separation. With limited time, I reused the state machine as a behaviour tree — creating nodes for each state with Enter, Update and Exit phases.

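A rough sketch of that reuse, under the assumption that each state becomes a node with Enter/Update/Exit phases and a sequence ticks the active node until it succeeds (BehaviourNode, Sequence and the Status enum are illustrative names, not the project's API):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Illustrative: a state-machine state repackaged as a behaviour node.
enum class Status { Running, Success };

struct BehaviourNode
{
    virtual ~BehaviourNode() = default;
    virtual void OnEnter() {}
    virtual Status Update(float aDeltaTime) = 0;
    virtual void OnExit() {}
};

// Runs children in order: enters a node once, updates it until it succeeds,
// exits it, then advances to the next child.
class Sequence
{
public:
    void AddNode(std::unique_ptr<BehaviourNode> aNode) { myNodes.push_back(std::move(aNode)); }

    Status Tick(float aDeltaTime)
    {
        if (myIndex >= myNodes.size()) { return Status::Success; }

        if (!myEntered)
        {
            myNodes[myIndex]->OnEnter();
            myEntered = true;
        }
        if (myNodes[myIndex]->Update(aDeltaTime) == Status::Success)
        {
            myNodes[myIndex]->OnExit();
            myEntered = false;
            ++myIndex;
        }
        return myIndex >= myNodes.size() ? Status::Success : Status::Running;
    }

private:
    std::vector<std::unique_ptr<BehaviourNode>> myNodes;
    std::size_t myIndex = 0;
    bool myEntered = false;
};
```

The appeal of this shape is that the existing Enter/Update/Exit states drop in as nodes without rewriting them.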
Project

Cannibal Crossing

▶ Watch Trailer
2023 The Game Assembly AI & Gameplay Programmer C++ · A* · Custom Engine

My first time working with AI. I was fully responsible for all enemy systems — pathfinding, behaviour and state handling across three different enemy types.

A* Pathfinding

I wrote my own A* implementation as the movement foundation. Getting it to work reliably meant handling obstacle avoidance, diagonal movement edge cases, height transitions and field of view, among other things.

void PathFind(const Tga::Vector3f aStartPosition, const Tga::Vector3f aEndPosition, std::vector<Tga::Vector3f>& aPath)
{
    const int startIndex = locGrid.GetCellIndexFromWorld(aStartPosition);
    int endIndex         = locGrid.GetCellIndexFromWorld(aEndPosition);

    // If the target cell is full, steer towards a valid neighbour instead
    if (CellFull(endIndex))
    {
        endIndex = GetValidNeighbor(locGrid.GetNeighbours(endIndex), aEndPosition, endIndex, startIndex);
    }

    locGrid.GetCells()[startIndex].g       = 0.f;
    locGrid.GetCells()[startIndex].h       = std::abs(aStartPosition.x - aEndPosition.x) + std::abs(aStartPosition.z - aEndPosition.z);
    locGrid.GetCells()[startIndex].f       = locGrid.GetCells()[startIndex].g + locGrid.GetCells()[startIndex].h;
    locGrid.GetCells()[startIndex].isStart = true;
    locGrid.GetCells()[startIndex].status  = CellStatus::Open;
    locGrid.GetCells()[endIndex].isTarget = true;

    sortedCells.push(startIndex);
    touchedCells.push_back(startIndex);

    while (!sortedCells.empty())
    {
        const int currentCell = sortedCells.top();
        sortedCells.pop();
        locGrid.GetCells()[currentCell].status = CellStatus::Closed;

        if (locGrid.GetCells()[currentCell].isTarget)
        {
            CreatePath(currentCell, aPath);
            ResetPath();
            return;
        }

        for (const int neighbor : locGrid.GetNeighbours(currentCell))
        {
            UpdateCell(neighbor, currentCell, endIndex);
        }
    }

    aPath.clear();
    ResetPath();
}
// Diagonal movement — check both cardinal neighbours are clear
if (aDX != 0 && aDZ != 0)
{
    Tga::Vector3f horizontalNeighborPos = aOriginPos + Tga::Vector3f(aDX * CELL_SIZE, 0.0f, 0.0f);
    Tga::Vector3f verticalNeighborPos   = aOriginPos + Tga::Vector3f(0.0f, 0.0f, aDZ * CELL_SIZE);

    bool horizontalClear = DiagonalMovementTolerated(aDX, 0, horizontalNeighborPos, aOriginPos);
    bool verticalClear   = DiagonalMovementTolerated(0, aDZ, verticalNeighborPos, aOriginPos);

    if (!horizontalClear || !verticalClear)
    {
        return false;
    }
}

// Height transition check — East example
if (aDX == 1 && aDZ == 0)
{
    return almostEqual(origin->heightTR, neighbor->heightTL)
        && almostEqual(origin->heightBR, neighbor->heightBL);
}
Enemy Behaviour

Three enemy types: melee, ranged and elite. Melee used a seek-and-alert pattern. Ranged added a flee behaviour to create distance from the player. The surround system was something I was particularly proud of — enemies check occupied cells to avoid clumping, surrounding the player instead.

void MeleeEnemy::UpdateAttacking(const float aDeltaTime)
{
    AnimationRunner& animRunner = myAnimationData.animationRunner;
    animRunner.SetNextAnimation(MeleeAnimationData::AnimationState_Attack);
    animRunner.SetAnimationSpeed(myData.attackSpeed);

    if (!animRunner.HasCurrentAnimationPassedFrame(myData.framesWithLockOn))
    {
        Utility::RotateTowardsTargetY(entity, myPlayerTargetEntity.Transform().Position(), myData.rotationSpeed, aDeltaTime);
    }

    const bool isInHitWindow = animRunner.HasCurrentAnimationPassedFrame(attackWindowStartFrame)
                             && !animRunner.HasCurrentAnimationPassedFrame(attackWindowEndFrame);

    if (isInHitWindow && !myData.hasHit)
    {
        myData.hasHit = true;
        CollisionSystem::HitInfo hit;
        if (CheckPlayerCollision(myData.attackRange, hit, entity))
        {
            if (hit.alignment > myData.accuracy)
            {
                AttackSystem::TransferDamage(hit.hitEntity, entity.Stats().GetDamage().damage, Stats::DamageSource::EnemyMelee);
                BehaviorRegistry::GetPlayer()->ApplyKnockback(attackDirection);
            }
        }
    }

    if (animRunner.HasCurrentAnimationFinishedPlaying())
    {
        myData.hasHit = false;
        myState       = EnemyState::Engaging;
    }
}
void RangedEnemy::UpdateFleeing(const float aDeltaTime)
{
    CollisionSystem::HitInfo hit;

    if (!CheckPlayerCollision(myData.attackRadius, hit, entity))
    {
        myState = EnemyState::Idle;
        return;
    }

    myData.fleeTimer += aDeltaTime;
    if (myData.fleeTimer > myData.fleeDuration)
    {
        myState          = EnemyState::Attacking;
        myData.fleeTimer = 0.f;
        return;
    }

    const Tga::Vector3f fleeDirection = transform.Position() - myPlayerTargetEntity.Transform().Position();
    Utility::RotateTowardsTargetY(entity, transform.Position() + fleeDirection, myData.rotationSpeed, aDeltaTime);
    MoveInDirection(fleeDirection, entity, myData.speed, aDeltaTime);
}
bool CellFull(int aIndex)
{
    return locGrid.GetCells()[aIndex].occupants >= maxOccupants;
}
Enemy State Handling

First time working with AI, so I kept it simple: a state machine where each enemy type has its own update per state. Easy to maintain and more than enough for the scale of the game, and a lesson I take with me.

1 / 2
enum class EnemyState
{
    Idle,
    Roaming,
    Engaging,
    Attacking,
    Fleeing,
    Count,
};

// State dispatch in Update
switch (myState)
{
    case EnemyState::Idle:      { UpdateIdle(aDeltaTime);      break; }
    case EnemyState::Roaming:   { UpdateRoaming(aDeltaTime);   break; }
    case EnemyState::Engaging:  { UpdateEngaging(aDeltaTime);  break; }
    case EnemyState::Attacking: { UpdateAttacking(aDeltaTime); break; }
    case EnemyState::Fleeing:   { UpdateFleeing(aDeltaTime);   break; }
    default: break;
}
class BaseEnemy
{
public:
    BaseEnemy();
    virtual ~BaseEnemy() = default;

    virtual void UpdateIdle(float aDeltaTime) = 0;
    virtual void UpdateRoaming(float aDeltaTime) = 0;
    virtual void UpdateAttacking(float aDeltaTime) = 0;
    virtual void UpdateFleeing(float aDeltaTime) = 0;
    virtual void UpdateEngaging(float aDeltaTime) = 0;

    void UpdateCellIndex();
    bool CheckPlayerCollision(float aRadiusToCheck, CollisionSystem::HitInfo& aHitInfo, Entity aEntity) const;
    void MoveInDirection(Tga::Vector3f aDirection, Entity& aEntity, float aSpeed, float aDeltaTime);
    void Alert(Entity aEnemyEntity, float aAlertRadius);

protected:
    Entity myPlayerTargetEntity;
    SharedData mySharedData;
    int myCurrentCellIndex  = 0;
    int myPreviousCellIndex = 0;
};
void SpawnProjectile(
    Tga::Vector3f aEntityPosition, Tga::Vector3f aEntityForwardDirection,
    Tga::Vector3f aEntityRotation, ProjectileType aType,
    float aScalar, float aSpeed, float aAngleOffset)
{
    GameEntity::EntityInfo entityInfo;
    ProjectileInfo projectileInfo;

    if (aType == ProjectileType::Player)
    {
        projectileInfo = locPlayerProjInfo;
        // Apply the scalar to this spawn's copy, not the shared template
        Tga::Vector3f scale = { 1.f, 1.f, 1.f };
        scale *= aScalar;
        projectileInfo.projTransformInfo.scale = scale;
    }
    else
    {
        projectileInfo = locEnemyProjInfo;
    }

    // Calculate rotated direction — angle offset creates an arch spread
    // -15 spawns left, +15 spawns right, used for player shotgun spread
    float radians         = Tga::DegToRad(aAngleOffset);
    Tga::Vector3f forward  = aEntityForwardDirection.GetNormalized();
    Tga::Vector3f rotatedDirection;
    rotatedDirection.x = forward.x * std::cos(radians) - forward.z * std::sin(radians);
    rotatedDirection.y = forward.y;
    rotatedDirection.z = forward.x * std::sin(radians) + forward.z * std::cos(radians);

    Tga::Vector3f spawnPos = aEntityPosition + rotatedDirection * 20.0f;
    Tga::Vector3f rotation = aEntityRotation;
    rotation.y -= aAngleOffset;

    projectileInfo.projTransformInfo.position = spawnPos + Tga::Vector3f{ 0.f, 50.f, 0.f };
    projectileInfo.projTransformInfo.rotation = rotation;

    entityInfo.transform = &projectileInfo.projTransformInfo;
    entityInfo.tag       = &projectileInfo.projTagInfo;
    entityInfo.collider  = &projectileInfo.projColliderInfo;
    entityInfo.meshModel = &projectileInfo.projModelInfo;
    entityInfo.stats     = &projectileInfo.projStatsInfo;

    [[maybe_unused]] Entity entity{ Create(entityInfo) };
    toAdd.push_back({ .entity = entity, .direction = rotatedDirection, .speed = aSpeed });
}
void Projectile::Update(const float aDeltaTime)
{
    myAliveTimer += aDeltaTime;
    Tga::Vector3f pos = entity.Transform().Position();
    pos += myDirection * mySpeed * aDeltaTime;

    if (GridSystem::IsUnderGrid(pos) || myAliveTimer >= ALIVE_TIME)
    {
        SelfDestroy();
        return;
    }

    entity.Transform().SetPosition(pos);

    std::vector<CollisionSystem::HitInfo> hits;
    if (CheckSphereCollision(entity, hits))
    {
        for (auto& [hitEntity, alignment] : hits)
        {
            // Enemy projectile hits player
            if ((entity.Tag().StringID() == locEnemyProjID) && (hitEntity.Tag().StringID() == locPlayerID))
            {
                AttackSystem::TransferDamage(hitEntity, entity.Stats().GetDamage().damage, Stats::DamageSource::EnemyRanged);
                SelfDestroy();
                return;
            }
            // Player projectile hits enemy
            if ((entity.Tag().StringID() == locPlayerProjID) && (
                hitEntity.Tag().StringID() == locEnemyID || hitEntity.Tag().StringID() == locRangedID))
            {
                AttackSystem::TransferDamage(hitEntity, entity.Stats().GetDamage().damage, Stats::DamageSource::PlayerShotgun);
                SelfDestroy();
                return;
            }
        }
    }

    // Collide with world objects (ignoring player/enemy)
    if (CollisionSystem::Collided(entity, {locPlayerID, locEnemyID, locRangedID, locPlayerProjID, locEnemyProjID}))
    {
        SelfDestroy();
    }
}
Projectile System

Built a modular projectile system to support ranged enemies. Kept it flexible since the player used the same system. The angle offset in SpawnProjectile is used for the player's shotgun spread — spawning multiple projectiles across an arc.
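As a self-contained illustration of what the angle offset does, the snippet below reimplements just the Y-axis rotation from SpawnProjectile using a stand-in vector type (Vec3 and the helper name are mine, not the project's):

```cpp
#include <cmath>

// Stand-in for Tga::Vector3f, so the example compiles on its own.
struct Vec3 { float x = 0.f, y = 0.f, z = 0.f; };

// Rotate a forward direction around the Y axis by the given offset in
// degrees, mirroring the math inside SpawnProjectile.
Vec3 RotateAroundY(const Vec3& aForward, float aAngleOffsetDegrees)
{
    const float radians = aAngleOffsetDegrees * 3.14159265f / 180.f;
    Vec3 rotated;
    rotated.x = aForward.x * std::cos(radians) - aForward.z * std::sin(radians);
    rotated.y = aForward.y;
    rotated.z = aForward.x * std::sin(radians) + aForward.z * std::cos(radians);
    return rotated;
}
```

A three-pellet shotgun spread then amounts to spawning with offsets of -15, 0 and +15 degrees from the same forward direction.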

Project

Captain Hornswaggle

▶ Watch Trailer
2023 The Game Assembly Engine Programmer · Project Manager C++ · Custom Audio · Custom Engine
★ Game of the Year — The Game Assembly

A pirate adventure game that won Game of the Year. I wore a few hats — custom audio from scratch, cinematics, and project management across the team.

Custom Audio Manager

First time working on game audio. Built the manager from scratch without any third-party libraries — spatial audio, surface-type audio, loading and caching SFX and music, full master/ambient/SFX volume control, mixing based on APA feedback, sound variants and continuous collaboration with APA throughout.

void WASD::AudioManager::PlaySfx(const SoundData aSoundData, const Tga::Vector3f aPosition)
{
    if (!GetSfxIsMuted())
    {
        SoundID soundID = CreateSoundID(aSoundData);

        const int variantToPlay          = GetRandomVariant(mySfx.at(soundID).size(), myLastVariantsPlayed.at(soundID));
        myLastVariantsPlayed.at(soundID) = variantToPlay;

        const float volume = AdjustProximityVolume(aPosition, mySfx.at(soundID)[variantToPlay]);

        if (volume > 0.f)
        {
            Tga::Vector3f soundDistanceToPlayer;

            // A zero position means a non-positional sound, so skip panning
            if (aPosition.x != 0.f || aPosition.y != 0.f)
            {
                soundDistanceToPlayer   = aPosition - globalPlayerPosition;
                soundDistanceToPlayer.x = std::clamp(soundDistanceToPlayer.x / mySfxMaxPanDistance, -1.0f, 1.0f);
                soundDistanceToPlayer.y = std::clamp(soundDistanceToPlayer.y / mySfxMaxPanDistance, -1.0f, 1.0f);
            }

            mySfx.at(soundID)[variantToPlay].SetVolume(volume);
            mySfx.at(soundID)[variantToPlay].Play();
            mySfx.at(soundID)[variantToPlay].SetPosition(soundDistanceToPlayer);
        }
    }
}
float WASD::AudioManager::AdjustProximityVolume(
    const Tga::Vector3f& aObjectPosition, const Tga::Audio& aAudioInstance) const
{
    const Tga::Vector3f soundDistanceToPlayer = aObjectPosition - globalPlayerPosition;
    const float volume = (1.f - (soundDistanceToPlayer.Length() / mySfxMaxDistance))
                        * GetSfxVolumeOfInstance(aAudioInstance);
    return std::clamp(volume, 0.f, aAudioInstance.GetMaxVolume());
}

void WASD::AudioManager::LoadSound(
    const SoundType aSoundType, const MaterialType aMaterialType,
    const SubSoundType aSubSoundType, const std::string_view aPath,
    const float aBaseMaxVolume, const bool aShouldLoop)
{
    SoundData soundData;
    soundData.soundType    = aSoundType;
    soundData.materialType = aMaterialType;
    soundData.subSoundType = aSubSoundType;

    const std::string fullFilePath = "audio/sfx/" + std::string(aPath);
    mySfx[CreateSoundID(soundData)].emplace_back().Init(fullFilePath.c_str(), aBaseMaxVolume, aShouldLoop);
    myLastVariantsPlayed[CreateSoundID(soundData)] = 0;
}
// Packs SoundType, MaterialType and SubSoundType into a single integer ID
// by bit-shifting each enum value into its own lane.
// This lets us use one flat map lookup instead of nested maps,
// and guarantees a unique key for every (type, material, sub) combination.
constexpr WASD::SoundID CreateSoundID(const WASD::SoundData aSoundData)
{
    WASD::SoundID soundID = WASD::EVal(aSoundData.soundType);
    soundID <<= std::numeric_limits<std::underlying_type_t<MaterialType>>::digits;
    soundID |= WASD::EVal(aSoundData.materialType);
    soundID <<= std::numeric_limits<std::underlying_type_t<SubSoundType>>::digits;
    soundID |= WASD::EVal(aSoundData.subSoundType);
    return soundID;
}

// The three enums that combine to form a SoundID:
enum class MaterialType : uint8 { Wood, Stone, Crate, TentacleSkin, Cannon, SkeletonHead, PirateHead, ParrotHead, Player, None, Count };
enum class SubSoundType  : uint8 { Player, Pirate, Parrot, Skeleton, Tentacle, Cannon, UI, None, Count };
enum class SoundType     : uint8 { Landed, Attacking, Moving, Jumping, Exploding, DoubleJumping, Died, LevelComplete, CheckpointEntered, WallSliding, Breaking, Hovering, None, Count };
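A self-contained version of that packing, with unpack helpers added purely to demonstrate that the bit lanes never collide (the Unpack functions and the trimmed enums are mine; the game only needs the forward direction):

```cpp
#include <cstdint>
#include <limits>
#include <type_traits>

// Trimmed enums for the sketch; the real ones carry many more values,
// but each still fits comfortably in its 8-bit lane.
enum class MaterialType : std::uint8_t { Wood, Stone, None, Count };
enum class SubSoundType : std::uint8_t { Player, Pirate, None, Count };
enum class SoundType : std::uint8_t { Landed, Attacking, None, Count };

using SoundID = std::uint32_t;

template <typename E>
constexpr auto EVal(E e) { return static_cast<std::underlying_type_t<E>>(e); }

// Pack (type, material, sub) into one flat ID by shifting each enum
// into its own 8-bit lane, as in the project's CreateSoundID.
constexpr SoundID CreateSoundID(SoundType aType, MaterialType aMaterial, SubSoundType aSub)
{
    SoundID id = EVal(aType);
    id <<= std::numeric_limits<std::underlying_type_t<MaterialType>>::digits; // 8 bits
    id |= EVal(aMaterial);
    id <<= std::numeric_limits<std::underlying_type_t<SubSoundType>>::digits; // 8 bits
    id |= EVal(aSub);
    return id;
}

// Inverse of CreateSoundID, here only to show the packing is lossless.
constexpr SubSoundType UnpackSub(SoundID anId) { return static_cast<SubSoundType>(anId & 0xFF); }
constexpr MaterialType UnpackMaterial(SoundID anId) { return static_cast<MaterialType>((anId >> 8) & 0xFF); }
constexpr SoundType UnpackType(SoundID anId) { return static_cast<SoundType>((anId >> 16) & 0xFF); }
```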
Cinematic Handler

Built the system handling all cutscenes — image indexing, sound queues and the game outro. Working closely with APA on this gave the cinematics a lot of character.

void CinematicPlayer::Update(const float& aDeltaTime)
{
    myCinematicData.firstSoundTimer += aDeltaTime;

    if (myCinematicData.firstSoundTimer > myCinematicData.timeToPlayFirstSound
        && !myCinematicData.firstSoundPlayed)
    {
        Locator::GetAudioManager()->PlayCinematicSfx(myCinematicData.myCinematicID, myCinematicData.sfxIndex);
        myCinematicData.sfxIndex++;
        myCinematicData.firstSoundPlayed = true;
    }

    if (myCinematicData.imageIsSwapping)
    {
        ChangeImage(aDeltaTime);
        myCinematicData.showing          = false;
        myCinematicData.instructionTimer = 2.5f;
        myCinematicData.fadeValue        = 0.f;
    }

    myCinematicData.instructionTimer -= aDeltaTime;

    if (myCinematicData.instructionTimer <= 0.f)
    {
        myCinematicData.showing = true;
        FadeInstructions(aDeltaTime);
    }
}
void CinematicPlayer::ChangeImage(const float& aDeltaTime)
{
    if (myCinematicData.startIndex == myCinematicData.maxIndex)
    {
        myCinematicData.imageIsSwapping = false;
        SkipCinematic();
        return;
    }

    float spriteMoveSpeed = myCinematicData.spriteMoveSpeed * myCinematicData.resolutionScaleFactor;
    Tga::Vector2f targetPos = myCinematicData.spriteWayPoints[myCinematicData.startIndex + 1];

    const float movement = spriteMoveSpeed * aDeltaTime;
    const Tga::Vector2f step = myCinematicData.movementDirection * movement;
    myCinematicData.currentProgressToNextFrame += movement;
    myCinematicData.instance.myPosition += step;

    if (myCinematicData.currentProgressToNextFrame > myCinematicData.totalDistanceToNextFrame)
    {
        myCinematicData.instance.myPosition = targetPos;
        myCinematicData.startIndex++;
        myCinematicData.imageIsSwapping = false;

        // Trigger outro sequence on final image
        if (myCinematicData.startIndex == outroStartIndex
            && myCinematicData.myCinematicID == WASD::SceneID::CinematicOutro)
        {
            myCinematicData.outroTimer   = 0.f;
            myCinematicData.outroPlaying = true;
            myCinematicData.targetPos    = myCinematicData.spriteWayPoints.back();
            myCinematicData.movementDirection = myCinematicData.targetPos - myCinematicData.instance.myPosition;
            myCinematicData.totalDistanceToNextFrame = myCinematicData.movementDirection.Length();
            myCinematicData.movementDirection.Normalize();
        }
    }
}
Project Manager

Handled communication gaps across disciplines, booked meetings and tracked attendance. Main responsibility for sprint reviews and presenting them. Always on the lookout for opportunities to improve how the team was working together.

Project

Frogment

▶ Watch Trailer
2023 The Game Assembly Programmer C++ · Unity

A smaller project focused on UI, audio and game management. Built in Unity.

UI

Responsible for the UI setup — button events, triggers and audio events. Close collaboration with the graphics/UI artists.

Game Manager

Handled scene and menu transitions, managing the scene stack as a whole.

Audio Manager

Responsible for all audio events, music and SFX.

First Project

Last Unit

▶ Watch Trailer
2022 The Game Assembly Programmer C++ · Unity

My very first game. A good starting point — player movement, checkpoints and learning the ropes.

Player Movement

Responsible for player movement, including game feel and responsiveness.

Checkpoint System

Built and maintained the checkpoint system.