Game Programmer
3D action game on a custom engine. Engine programmer across render, scene management, AI and audio.
Engine iteration on the same codebase — render refactor, editor integration and new rendering features.
Fully responsible for enemy systems — pathfinding, behaviour and state handling across three enemy types.
GOTY winner. Audio, cinematics and project management on a pirate adventure.
UI, audio and game management in Unity.
First game ever made. Player movement and checkpoint system.
I'm a game programming student at The Game Assembly in Stockholm, specialising in engine systems, rendering, and editor tooling. I build the things that make games possible — from deferred rendering pipelines to artist-facing UI editors.
I care about writing systems that are fast, readable, and actually useful for the people who work with them. Outside of code I'm an actor, which probably explains why I care so much about what ends up on screen.
Currently looking for internship and junior positions in engine or tools programming.
Both draw and erase mode let the user choose between hold-and-drag and single click, giving fine-grained control over how assets are placed. Asset selection can be either random or sequential — useful when a more predictable outcome is wanted. Draw mode also supports preview and no-preview modes, for even more control.
Spacing is handled in two ways. The user can control the distance between each brush stroke, so each batch is placed at a set interval from the previous one. On top of that, minimum spacing between individual objects can be enabled — either against all previously painted objects, or just within the current batch.
As a first iteration I only allowed placement of one unique asset — this quickly became underwhelming. Multi-asset drawing gives the user unlimited choice in how to design their clusters. With per-asset randomization, spacing and the ability to tune how many assets to place, each brush stroke can generate something unique.
Randomization gives the user a tool for creating unique assets in each batch. I've chosen to divide it into yaw/pitch/roll and scale, to give the user an easy overview of what is available. I noticed that with the inclusion of previews and the ability to turn off auto generate, this was also very useful for more detailed work, such as single asset placement, when working in a concentrated area.
Slope filtering works by sampling the normals in the terrain map, giving the user the ability to target only specific elevations in the terrain without having to be careful around edges such as walls or steep cliffs. It also works the other way around — for example, if the user only wants to work on cliff walls or paint along one.
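The slope check itself boils down to comparing the sampled normal against world up. A minimal sketch, with an illustrative `Vec3` stand-in for the engine's vector type and hypothetical names (not the actual editor API):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns true if the surface slope falls inside [minDeg, maxDeg].
// minDeg = 0 targets flat ground; a range near 90 targets cliff walls.
bool PassesSlopeFilter(const Vec3& normal, float minDeg, float maxDeg) {
    const float pi = 3.14159265f;
    // Angle between the (normalized) normal and world up:
    // 0 degrees = flat ground, 90 degrees = vertical wall.
    float slopeDeg = std::acos(Dot(normal, Vec3{0.f, 1.f, 0.f})) * 180.f / pi;
    return slopeDeg >= minDeg && slopeDeg <= maxDeg;
}
```

Inverting the range (high minimum, 90 maximum) is what lets the brush target only cliff walls.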
I encountered several performance issues during the development of the tool. In the first iteration, the brush iterated over all previously placed assets when checking object spacing — an O(n) operation, meaning every new placement had to check against every single previously placed object, growing slower the more objects existed. This quickly became a blocker and introduced the need for scalable performance. A first solution was to create a spatial grid sized by the minimum spacing allowed between objects, with each new object checking the 3D grid cells around itself for previously placed objects. This effectively removed a large number of unnecessary iterations during the check.
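The spatial grid idea can be sketched as follows. This is a simplified stand-in, not the tool's actual code: it assumes the cell size equals the minimum spacing, so any conflicting object must live in the 3x3x3 neighbourhood of the candidate's cell.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x, y, z; };

class SpacingGrid {
public:
    explicit SpacingGrid(float aMinSpacing) : myCellSize(aMinSpacing) {}

    // Check only the 27 surrounding cells instead of every placed object.
    bool CanPlace(const Vec3& p) const {
        const int cx = Cell(p.x), cy = Cell(p.y), cz = Cell(p.z);
        for (int dx = -1; dx <= 1; ++dx)
        for (int dy = -1; dy <= 1; ++dy)
        for (int dz = -1; dz <= 1; ++dz) {
            auto it = myCells.find(Key(cx + dx, cy + dy, cz + dz));
            if (it == myCells.end()) continue;
            for (const Vec3& other : it->second) {
                const float ox = p.x - other.x, oy = p.y - other.y, oz = p.z - other.z;
                if (ox * ox + oy * oy + oz * oz < myCellSize * myCellSize)
                    return false; // too close to an existing object
            }
        }
        return true;
    }

    void Insert(const Vec3& p) {
        myCells[Key(Cell(p.x), Cell(p.y), Cell(p.z))].push_back(p);
    }

private:
    int Cell(float v) const { return static_cast<int>(std::floor(v / myCellSize)); }

    // Pack three 21-bit coordinates into one 64-bit hash key.
    static std::uint64_t Key(int x, int y, int z) {
        auto u = [](int v) { return static_cast<std::uint64_t>(v & 0x1FFFFF); };
        return (u(x) << 42) | (u(y) << 21) | u(z);
    }

    float myCellSize;
    std::unordered_map<std::uint64_t, std::vector<Vec3>> myCells;
};
```

This turns each spacing check from O(n) over all placed objects into a lookup over a handful of nearby cells.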
Even with this improvement, performance was still an issue. After profiling, I found that the biggest bottleneck was our naming system in the editor, which iterated over all instances of the same asset to generate a unique name when adding an asset to the scene. After adding a simple counter system which names assets uniquely based on whether they are painted by the brush or not, and checking how many instances of one asset exist during startup, I effectively eliminated the lag that was previously caused.
Since we already had GBuffer textures in place, I decided to use those as a source for generating the textures I needed for the picking. Two staging textures were used for this, one containing the vertex normals and one containing the world position of the currently drawn scene. I created a component tag that could be added to any scene object, used to mark it as paintable terrain. Only objects carrying this tag were drawn to the staging textures, meaning the brush would only snap to intentionally marked surfaces. In the render pipeline, I then drew the terrain objects first. After these were drawn, we could fetch the textures.
The world position is drawn to a texture using the format DXGI_FORMAT_R32G32B32A32_FLOAT. Since the data is stored as 4 values of 32 bits and we use floats as a representation of our world position, reading the world position back is straightforward.
The normals, however, needed converting back to a -1 to 1 range: when drawing to the GBuffer texture, we remap to 0–1 because the shader can't store negative values in the texture. The normals are saved to a texture with the format DXGI_FORMAT_R10G10B10A2_UNORM, using 10 bits per channel. On readback, the 10-bit channels are unpacked and the encoding is reversed back to -1 to 1.
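The readback step can be sketched like this. One assumption to flag: the field order here places R in the lowest bits of the packed 32-bit texel, and the `Vec3` type is an illustrative stand-in.

```cpp
#include <cstdint>

struct Vec3 { float x, y, z; };

// Decode one DXGI_FORMAT_R10G10B10A2_UNORM texel back into a -1..1 normal.
Vec3 DecodePackedNormal(std::uint32_t packed) {
    // Each 10-bit channel maps 0..1023 to the 0..1 UNORM range.
    const float r = static_cast<float>( packed        & 0x3FF) / 1023.f;
    const float g = static_cast<float>((packed >> 10) & 0x3FF) / 1023.f;
    const float b = static_cast<float>((packed >> 20) & 0x3FF) / 1023.f;
    // Undo the shader-side 0..1 remap back to -1..1.
    return { r * 2.f - 1.f, g * 2.f - 1.f, b * 2.f - 1.f };
}
```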
Preview mode came with a lot of challenges. We now needed to save a version of the object that the user could update manually if they wished. In my first iteration this didn't matter, since the assets were regenerated between each brush stroke according to the chosen settings. Now we needed to keep the position/rotation/scale of the object, or some combination of them, based on those settings.
For this, I created a struct tied to the already existing scene object. By caching the previews of the current stroke and its batch, I was able to update them cleanly as the user dragged the mouse across the screen. This also introduced the "Update on move" and "Manual update on R" settings, since the need for more control arose as a product of the preview tool.
struct PreviewObject
{
    std::shared_ptr<Tga::SceneObject> sceneObject;
    bool isValid;
    Tga::Vector2f cachedOffsetTangent;
    Tga::Quaternionf cachedRotation;
    Tga::Vector3f currentTerrainNormal;
    Tga::Vector3f cachedScale;
    Tga::StringId assetPath;
    float pivotOffset = 0.f;
};
void AssetBrush::GetSurfaceTangentFrame(
    const Tga::Vector3f& normal,
    Tga::Vector3f& outTangent,
    Tga::Vector3f& outBitangent)
{
    // Pick a reference vector that isn't parallel to the normal
    Tga::Vector3f reference = (fabsf(normal.y) < 0.99f)
        ? Tga::Vector3f(0, 1, 0)
        : Tga::Vector3f(1, 0, 0);

    outTangent = reference.Cross(normal);
    outTangent.Normalize();

    outBitangent = normal.Cross(outTangent);
    outBitangent.Normalize();
}
An interesting geometric problem arose from the fact that the brush uses a flat 2D disc as a representation of the area the assets can be placed in. Since the brush originally scattered objects in the XZ plane, it completely broke when trying to scatter on vertical or very steep surfaces, placing the objects in a line rather than spreading them out. This was solved by finding the tangent and bitangent of the surface normal and using those as the up/down and left/right directions for the position offset.
One problem that still persists is that the scatter area is in screen space, which can occasionally place objects far away when the disc extends outside the intended area but still hits valid terrain. This can be solved with a simple world-position check as well, rejecting assets too far from the world position of the brush center — a feature I am looking to add.
Another problem that arose was the pivot of the object not always aligning with the surface the asset is placed on. Some of our assets had their pivot in the center, which made the object clip through the surface rather than sit on top of it. This was solved by offsetting by the difference between the object's lower Y boundary and its center whenever a difference was found. I made this feature a toggle as well, so the artist can control the pivot offset themselves.
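The correction above amounts to a one-line calculation. A minimal sketch, assuming an axis-aligned bounding box and a pivot at the box centre (the names are illustrative, not the tool's actual types):

```cpp
struct Bounds { float minY; float maxY; };

// Distance to lift an object whose pivot sits at the centre of its bounds
// so that its lowest point rests on the surface instead of clipping through.
float ComputePivotOffset(const Bounds& aBounds) {
    const float center = (aBounds.minY + aBounds.maxY) * 0.5f;
    return center - aBounds.minY;
}
```

Assets whose pivot already sits at the bottom have `minY` equal to the pivot height, so the offset comes out as half the box height only for centre-pivot assets.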
One problem I encountered was that the undo action was a bit inconsistent. Every user expects a clean, functional undo, which is easy to deprioritize during development since focus lies elsewhere. At first, undo in our editor meant undoing the previous command; when placing many assets this became tedious, since one undo removed only one asset. I fixed this to undo by batch and brush stroke instead, adding all assets painted in one stroke to a single command.
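The stroke-level undo can be sketched as a composite command. `Command` and `BatchCommand` here are hedged stand-ins for the editor's real command classes, not the actual implementation:

```cpp
#include <memory>
#include <vector>

class Command {
public:
    virtual ~Command() = default;
    virtual void Do() = 0;
    virtual void Undo() = 0;
};

// Wraps every per-object command produced by one brush stroke,
// so a single undo removes the whole batch at once.
class BatchCommand : public Command {
public:
    void Add(std::unique_ptr<Command> aCommand) {
        myCommands.push_back(std::move(aCommand));
    }
    void Do() override {
        for (auto& c : myCommands) c->Do();
    }
    void Undo() override {
        // Undo in reverse order so dependent edits unwind correctly.
        for (auto it = myCommands.rbegin(); it != myCommands.rend(); ++it)
            (*it)->Undo();
    }
    bool Empty() const { return myCommands.empty(); }

private:
    std::vector<std::unique_ptr<Command>> myCommands;
};
```

The brush fills one `BatchCommand` per stroke and pushes it onto the undo stack when the stroke ends.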
What I call the UI essentials are what every artist expects when building a menu: buttons, sliders, toggles, text and images. With these, one can build the simplest of menus and get to the goal. Backed by a component system behind the scenes, the artist can drag and drop an element, or create a child of another object by right clicking and choosing the UI element they want. These are then fully editable in the editor, including hovered/on-click textures and colors for the different elements, as well as hit boxes, anchor points, pivots, custom shaders and sort order.
One big focus for me, and the reason behind many of the choices I make, is the visual part of the job. It's where I find the joy in programming. Therefore, I decided to give a lot of focus to the animation part of the editor. It gives the artist a tool to make elements interact with and give feedback to the user. By capturing keyframes on a timeline, the artist can set up animations for scale, color, shader parameters, position and rotation. One big priority was testing it with my artist, building the UX to their liking, using ImGui as a base.
Another key feature that many artists take for granted is the ability to keep elements in a child/parent hierarchy, so when editing or animating many elements that share most of their traits, they don't have to duplicate work across all subsequent elements. I also wanted this to be easy: with the click of a button, or by dragging and dropping one element onto another, a child/parent relationship is created between the two, making the feature recognizable from other engines.
In addition to using a standard image, I wanted the ability to use a live scene as a background for the menus. This was implemented in our custom engine's editor, letting the user switch between UI element editing and live scene editing. I also wanted a way to set a custom camera angle, letting the user toggle between an "Edit" camera and a "View" camera, while still making it possible to edit the live scene through the "View" camera, to give the artist more freedom.
I first started making a detached system for our UI. We had just started our migration to a component system, which made the current state of our engine a hybrid between a component system and a pure OOP-style system. Since I wasn't too comfortable with component systems at the time and hadn't gotten the chance to learn them properly, the first thing I did was make separate GameObjects for the different elements such as buttons, sliders and so on. Even though it could have worked in theory, I quickly decided to switch focus and start integrating with our component system instead.
I started sketching how such a system could look, with each UI element being a GameObject with corresponding components that get updated individually. One simple component was the UIImage component: it takes a default texture and color and applies them to the GameObject. Then, when sliders/buttons/toggles want to fetch their image, they query the same GameObject they are components of, fetching their corresponding image, making it very easy to manipulate the underlying image.
class UIImage_C : public Component, public UIAnimatable
{
public:
    UIImage_C() = default;
    ~UIImage_C() override = default;

    void Init() override;
    void Update(float aDeltaTime) override;
    void Setup(const Tga::UITransformData& aTransform, const Tga::UIImageData& aImage);
    void UpdateTransform();

    void SetVisible(bool aVisible) { myVisible = aVisible; }
    void SetTexture(const Tga::StringId& aTexture);
    void SetColor(const Tga::Color& aColor) override;
    void SetSize(Tga::Vector2f aSize) override;
    void SetScale(Tga::Vector2f aScale) override;
    void SetPosition(Tga::Vector2f aPosition) override;
    void SetRotation(float aRotationDeg) override;
    void SetCustomShader(const Tga::SpriteShader* aShader);
    void SetShaderParam(Tga::Vector4f aParam);

    Tga::Color GetColor() const override { return myInstanceData.myColor; }
    Tga::Vector2f GetSize() const override { return myTransformData.size; }
    Tga::Vector2f GetScale() const override { return myAnimatedScale; }
    Tga::Vector2f GetPosition() const override;
    float GetRotation() const override;
    Tga::Vector4f GetShaderParam() const { return myShaderParam; }
    bool HasCustomShader() const { return myHasCustomShader; }

private:
    Tga::SpriteSharedData mySharedData = {};
    Tga::Sprite3DInstanceData myInstanceData = {};
    Tga::UITransformData myTransformData;
    Tga::UIImageData myImageData;
    Tga::Vector4f myShaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f myAnimatedScale = { 1.f, 1.f };
    bool myHasCustomShader = false;
    bool myVisible = true;
    int mySortOrder = 0;
};
One mistake I made as the work continued was giving my sliders and toggles separate textures/colors and letting them change their underlying image themselves, instead of manipulating the underlying UIImage components — something I am looking to change in the coming week. They just need a way to distinguish which UIImage component belongs to which part (fill, handle and background for a toggle, for example).
void UIButton_C::ApplyState() const
{
    auto img = myImage.lock();
    if (!img)
    {
        return;
    }

    switch (myState)
    {
    case State::Normal:
    {
        img->SetTexture(myData.normalTexture);
        break;
    }
    case State::Hovered:
    {
        img->SetTexture(myData.hoveredTexture.IsEmpty() ? myData.normalTexture : myData.hoveredTexture);
        break;
    }
    case State::Pressed:
    {
        img->SetTexture(myData.pressedTexture.IsEmpty() ? myData.normalTexture : myData.pressedTexture);
        break;
    }
    case State::Disabled:
    {
        img->SetTexture(myData.disabledTexture.IsEmpty() ? myData.normalTexture : myData.disabledTexture);
        break;
    }
    default:
    {
        break;
    }
    }
}
The parenting system was made by adding a component I call the UIRelationship component. It can have children, a parent, or both. By making the link go both ways, it became easier to keep track of and traverse the hierarchy from either end.
I initially tried setting up the parent/child relationship while fetching the SceneObjects in the editor, assigning the relationship component as we went. This caused many crashes, since we didn't have the full picture yet — a child could appear in the list before its parent. This was solved by adding a second pass when building the scene. It adds one extra iteration through all objects, but solves the problem of the relationships not being completely known until all objects are fetched. And since most UI scenes won't contain more than a few hundred objects, this is a fine trade-off.
Right now, there is a known limitation: an artist could assign A as a child of B and B as a child of A, which could happen if they change their mind mid-edit. That would cause infinite recursion. I'm looking into fixing this.
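One possible guard, sketched here as an assumption rather than the shipped fix: before assigning a new parent, walk up from the proposed parent and reject the assignment if the child is ever reached among its ancestors. The id-to-parent map stands in for the real UIRelationship component lookup.

```cpp
#include <unordered_map>

using ObjectId = int;

// Returns true if making aChild a child of aParent would create a cycle
// (e.g. A -> B -> A). Walks the ancestor chain of aParent; if aChild shows
// up, the assignment must be rejected.
bool WouldCreateCycle(const std::unordered_map<ObjectId, ObjectId>& aParents,
                      ObjectId aChild, ObjectId aParent) {
    ObjectId current = aParent;
    while (true) {
        if (current == aChild) return true;     // aChild is an ancestor
        auto it = aParents.find(current);
        if (it == aParents.end()) return false; // reached a root, no cycle
        current = it->second;
    }
}
```

The walk is O(depth), which is negligible for UI hierarchies of a few hundred objects.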
In our engine and editor we have two separate representations of an object: in game we use the GameObject class, and in the editor the SceneObject class is used. I wanted the UI editor to have a live preview, but this meant marking both the editor and the live scene as dirty whenever the user moved a scene object or the animator animated a GameObject. Both of them manipulate objects, triggering a rebuild.
At first, I saved every property change made during animation straight back to the editor. This quickly turned into a huge mess, writing JSON properties with almost every click. That pushed me to implement a working-copy pattern.
All edits go into a local copy of the animation data. Nothing touches the actual scene property store until a save is called, which issues a single command to save everything back to the editor. This means the artist can freely experiment without polluting the scene with every tiny edit.
void UIAnimationEditor::AddClip(Tga::StringId aName)
{
    Tga::UIAnimationClip newClip;
    newClip.name = aName;
    myWorkingAnimData.clips.push_back(newClip);
    mySelectedClip = aName;
    myRestPosition = mySnapshot.position;
    myRestRotation = mySnapshot.rotation;
}

Tga::SceneProperty newProperty = mySourceProperty;
newProperty.value = Tga::Property::Create<Tga::CopyOnWriteWrapper<Tga::UIAnimationData>>(
    Tga::CopyOnWriteWrapper<Tga::UIAnimationData>::Create(myWorkingAnimData));

auto command = std::make_shared<ChangePropertyOverridesCommand>(
    mySceneObjectId, newProperty, myOverrideProperty);
CommandManager::DoCommand(command);

if (myOnDirty)
{
    myOnDirty();
}
The drawback is that the artist cannot change the scene in real time while the animation editor is open, but this was a trade-off I decided was OK to keep, since the whole point of opening an animation editor is to animate the UI.
The keyframes were solved by capturing a snapshot of the object state when the user opens the editor. The SceneObject's animation data is also collected and cached for that session while the object is being animated. The object is then animated through the GameObject, which gets a working copy of the animation data. This prevents polluting the SceneObject and makes the data easy to work with.
struct AnimationFrame
{
    Tga::Vector4f shaderParam = { 0.f, 0.f, 0.f, 0.f };
    Tga::Vector2f position = { 0.f, 0.f };
    Tga::Vector2f size = { 100.f, 100.f };
    Tga::Vector2f scale = { 1.f, 1.f };
    Tga::Color color = { 1.f, 1.f, 1.f, 1.f };
    float rotation = 0.f;
    bool valid = false;
    Tga::Vector2f restPosition = { 0.f, 0.f };
    float restRotation = 0.f;
};
mySnapshot = {};
if (myLiveScene)
{
auto it = myLiveScene->GetGameObjectMap().find(mySceneObjectId);
if (it != myLiveScene->GetGameObjectMap().end())
{
if (auto img = it->second->GetComponent<UIImage_C>())
{
mySnapshot.position = img->GetPosition();
mySnapshot.size = img->GetSize();
mySnapshot.scale = img->GetScale();
mySnapshot.color = img->GetColor();
mySnapshot.rotation = img->GetRotation();
mySnapshot.valid = true;
}
}
}
Tga::Scene* scene = GetActiveScene();
if (scene)
{
Tga::SceneObject* sceneObject = scene->GetSceneObject(mySceneObjectId);
if (sceneObject)
{
mySnapshot.sceneObjectPosition = sceneObject->GetPosition();
mySnapshot.sceneObjectRotation = sceneObject->GetEuler();
mySnapshot.sceneObjectScale = sceneObject->GetScale();
}
}
When the artist clicks "Add Keyframe", CaptureKeyframe reads the live object's current state (color, scale and so on) directly from UIImage_C and packages it into a UIKeyFrame struct with a timestamp. The keyframe is then inserted into the clip's keyframe array and sorted by time. If a keyframe already exists at that time it gets replaced.
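The insert-or-replace step can be sketched like this. The struct here is a trimmed stand-in for the real UIKeyFrame (captured state elided); keeping the array sorted lets playback interpolate between neighbouring frames with a binary search.

```cpp
#include <algorithm>
#include <vector>

struct UIKeyFrame {
    float time = 0.f;
    // ...captured color/scale/position etc. elided in this sketch
};

// Insert aFrame so the array stays sorted by time; if a keyframe already
// exists at exactly that time, replace it instead of duplicating.
void InsertKeyframe(std::vector<UIKeyFrame>& aFrames, const UIKeyFrame& aFrame) {
    auto it = std::lower_bound(
        aFrames.begin(), aFrames.end(), aFrame,
        [](const UIKeyFrame& a, const UIKeyFrame& b) { return a.time < b.time; });
    if (it != aFrames.end() && it->time == aFrame.time)
        *it = aFrame;                // replace the existing keyframe
    else
        aFrames.insert(it, aFrame); // insert at the sorted position
}
```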
When the artist closes the editor, we perform the save operation, which writes the data to the JSON file, ready to be used in-game or in the editor's live preview.
A 3D action game built on a custom engine. I was responsible for several core systems — render pipeline, scene management, enemy AI and audio — working closely with other disciplines throughout.
Responsible for the full render pipeline, balancing requests from other disciplines while building something fast and maintainable: collecting render data each frame and setting up all passes — shadows, GBuffer, lighting, bloom and post-processing.
// ***** SHADOWS *****
graphicsStateStack.Push();
graphicsStateStack.SetRasterizerState(RasterizerState::Shadow);
graphicsStateStack.SetCamera(myShadowCamera);
myRenderData.shadowMap.Clear();
myRenderData.shadowMap.SetAsActiveTarget();
DrawModelsToShadowMap(shadowCasters, graphicsEngine, graphicsStateStack);
graphicsStateStack.Pop();

// ***** GBUFFER PASS *****
graphicsStateStack.Push();
myGBuffer.ClearTextures();
std::array<ID3D11ShaderResourceView*, 5> nullViews = {};
Tga::DX11::Context->PSSetShaderResources(6, 5, nullViews.data());
graphicsStateStack.SetBlendState(BlendState::Disabled);
myGBuffer.SetAsActiveTarget(DX11::DepthBuffer);
DrawModelsToGBuffer(culledOpaqueModels, graphicsStateStack);
graphicsStateStack.Pop();
Responsible for loading and caching scenes and scene objects, including the full pipeline from engine to editor. Built a Game Object factory and a Property applier to keep it maintainable and readable — every programmer on the team was going to work with this the entire project. The registry pattern in the factory made adding new object types a one-liner.
class VFXManager
{
public:
    VFXAsset GetVFX(Tga::StringId aVFX);
    const std::unordered_map<Tga::StringId, VFXAsset>& GetCached3DVFX();

    static VFXManager* GetInstance();
    static void DestroyInstance();

    void CacheVfx(std::vector<Tga::ScenePropertyDefinition>& someProps, Tga::TextureManager& aTextureManager);

    void PlayVFX(Tga::StringId id, const Tga::Vector3f& pos);
    void PlayVFX(Tga::StringId id, const Tga::Matrix4x4f& aTransform);
    std::shared_ptr<SpawnedObject> PlayVFXReturnObject(Tga::StringId vfxName, const Tga::Matrix4x4f& aTransform);

private:
    static VFXManager* ourInstance;
    std::unordered_map<Tga::StringId, VFXAsset> myCachedVFXAssets;
};
Built a dedicated VFX Manager to cache and hold VFX assets, with a clean public interface for triggering them by ID from anywhere in the game.
Built the enemy foundation for colleagues to iterate on — a state machine with shareable states across all enemy types, while still allowing custom unique states per enemy. Each controller registers its own states on startup, keeping things decoupled and easy to extend.
bool AudioManager::Init()
{
    FMOD_RESULT result;
    myStudioSystem = nullptr;

    result = FMOD::Studio::System::create(&myStudioSystem);
    if (result != FMOD_OK)
    {
        return false;
    }

    result = myStudioSystem->initialize(
        1024,
        FMOD_STUDIO_INIT_LIVEUPDATE | FMOD_STUDIO_INIT_NORMAL,
        FMOD_INIT_NORMAL,
        nullptr
    );
    if (result != FMOD_OK)
    {
        return false;
    }

    myStudioSystem->getCoreSystem(&myCoreSystem);
    PlayReverbEventAtStartUp();
    return true;
}
Built the Audio Manager and integrated FMOD, ready for colleagues to iterate on. Also responsible for communication with APA (a school for game audio) and adding SFX and music. Handled reverb and spatial sound integration with the game world.
Continuation of the same custom engine from Cycles of Deluge. The focus shifted towards iteration and editor integration — refactoring the render pipeline and building new rendering features while making the editor actually usable for level designers.
Responsible for a full refactor of the render pipeline, introducing a command pattern with render data containing commands for each pass. This made it easier to add features and also brought the editor up to speed — level designers could now see changes without launching the game. New features added this project: spotlights, SSAO, line lights and a player flashlight.
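The command-pattern idea can be sketched as follows. This is a hedged, simplified stand-in for the real render data, not the engine's actual types: each frame, game or editor code fills plain data commands per pass, and a single renderer consumes them without knowing who produced them.

```cpp
#include <vector>

// Illustrative command payloads; the real ones carry transforms, materials etc.
struct MeshCommand  { int meshId = 0; };
struct LightCommand { int lightId = 0; };

// Per-frame render data: one command list per pass.
struct RenderData {
    std::vector<MeshCommand>  gbufferCommands;
    std::vector<LightCommand> lightCommands;
    void Clear() {
        gbufferCommands.clear();
        lightCommands.clear();
    }
};

// Both the game and the editor submit into the same structure...
void SubmitMesh(RenderData& aData, int aMeshId) {
    aData.gbufferCommands.push_back({ aMeshId });
}

// ...and the renderer walks the commands per pass. Returns the number of
// draws issued, standing in for the actual GPU work.
int ExecuteGBufferPass(const RenderData& aData) {
    int drawCalls = 0;
    for (const MeshCommand& cmd : aData.gbufferCommands) {
        (void)cmd; // the real pass would bind state and draw here
        ++drawCalls;
    }
    return drawCalls;
}
```

Because the editor fills the same `RenderData` the game does, level designers see changes through the exact renderer the game ships with.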
Earlier, the editor used actual scene object properties for rendering and had its own ID pass for outlining selected objects. With those passes removed, I built render commands — including ID commands — from the scene objects, feeding them into the same renderer the game uses.
A long-standing request from Cycles of Deluge: the ability to preview individual assets in the editor with live lighting, and to preview and live-edit VFX while viewing them.
Integrated enemy behaviour using third-party libraries for navmesh generation, pathfinding and agent separation. With limited time, I reused the state machine as a behaviour tree — creating nodes for each state with Enter, Update and Exit phases.
My first time working with AI. I was fully responsible for all enemy systems — pathfinding, behaviour and state handling across three different enemy types.
My own A* implementation as the movement foundation. Getting it to work reliably meant handling obstacle avoidance, diagonal movement edge cases, height transitions and field of view — among other things.
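One of the diagonal edge cases mentioned above is corner cutting. A minimal sketch of the rule, with a flat bool grid standing in for the real navgrid (names are illustrative): a diagonal step is allowed only when both adjacent cardinal cells are also walkable, so agents can't squeeze through the gap between two blocked tiles.

```cpp
// Walkability lookup on a row-major aWidth-wide grid.
bool IsWalkable(const bool* aGrid, int aWidth, int aX, int aY) {
    return aGrid[aY * aWidth + aX];
}

// Can we step from (aFromX, aFromY) to the adjacent cell (aToX, aToY)?
bool CanStep(const bool* aGrid, int aWidth,
             int aFromX, int aFromY, int aToX, int aToY) {
    if (!IsWalkable(aGrid, aWidth, aToX, aToY)) return false;
    const int dx = aToX - aFromX;
    const int dy = aToY - aFromY;
    if (dx != 0 && dy != 0) {
        // Diagonal move: both cardinal neighbours must be open too,
        // otherwise the agent would clip the corner of a blocked cell.
        return IsWalkable(aGrid, aWidth, aFromX + dx, aFromY)
            && IsWalkable(aGrid, aWidth, aFromX, aFromY + dy);
    }
    return true;
}
```

This check runs during neighbour expansion in A*, before a diagonal neighbour is pushed onto the open set.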
Three enemy types: melee, ranged and elite. Melee used a seek-and-alert pattern. Ranged added a flee behaviour to create distance from the player. The surround system was something I was particularly proud of — enemies check occupied cells to avoid clumping, surrounding the player instead.
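The surround idea can be sketched with slot claiming: candidate positions are spread on a ring around the player, and each enemy claims the first free slot, so the group spreads out instead of clumping. This is a hedged simplification; the slot set stands in for the real occupied-cell check.

```cpp
#include <cmath>
#include <unordered_set>

struct Vec2 { float x, y; };

// Claim the first unoccupied slot out of aSlotCount and mark it taken.
// Returns -1 if the ring around the player is full.
int ClaimSurroundSlot(std::unordered_set<int>& aTakenSlots, int aSlotCount) {
    for (int i = 0; i < aSlotCount; ++i) {
        if (aTakenSlots.count(i) == 0) {
            aTakenSlots.insert(i);
            return i;
        }
    }
    return -1;
}

// World position of slot aSlot on a circle of aRadius around the player.
Vec2 SlotPosition(const Vec2& aPlayer, float aRadius, int aSlot, int aSlotCount) {
    const float angle = 6.2831853f * static_cast<float>(aSlot)
                                   / static_cast<float>(aSlotCount);
    return { aPlayer.x + std::cos(angle) * aRadius,
             aPlayer.y + std::sin(angle) * aRadius };
}
```

An enemy then paths toward its claimed slot position and releases the slot when it dies or disengages.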
First time working with AI, so I kept it simple — a state machine where each enemy type has its own update per state. Easy to maintain and more than enough for the scale of the game. A learning I take with me.
Built a modular projectile system to support ranged enemies. Kept it flexible since the player used the same system. The angle offset in SpawnProjectile is used for the player's shotgun spread — spawning multiple projectiles across an arc.
A pirate adventure game that won Game of the Year. I wore a few hats — custom audio from scratch, cinematics, and project management across the team.
First time working on game audio. Built the manager from scratch without any third-party libraries — spatial audio, surface-type audio, loading and caching SFX and music, full master/ambient/SFX volume control, mixing based on APA feedback, sound variants and continuous collaboration with APA throughout.
Built the system handling all cutscenes — image indexing, sound queues and the game outro. Working closely with APA on this gave the cinematics a lot of character.
Handled communication gaps across disciplines, booked meetings and tracked attendance. Main responsibility for sprint reviews and presenting them. Always on the lookout for opportunities to improve how the team was working together.
A smaller project focused on UI, audio and game management. Built in Unity.
Responsible for the UI setup — button events, triggers and audio events. Close collaboration with the graphics/UI artists.
Handled scene and menu transitions, managing the scene stack as a whole.
Responsible for all audio events, music and SFX.
My very first game. A good starting point — player movement, checkpoints and learning the ropes.
Responsible for player movement, including game feel and responsiveness.
Built and maintained the checkpoint system.