About
This page is a summary of my professional experience so far. It's (roughly) ordered chronologically, and is mostly technology-related.
(Click on the faded part of a section to expand it.)
Amutri (February 2024 - Present)
- Senior Full-Stack Engineer (November 2024 - Present)
- Backend Engineer (February 2024 - November 2024)
Languages and technologies
I joined Amutri as a backend engineer, where I was immersed in the world of full-stack web development and distributed systems. Nine months into my tenure I was promoted to senior full-stack engineer. It's been an eye-opening experience and I've learned more over the last year than I think I had in the previous five. It continues to be energising work, and I wouldn't trade it for the world.
A handful of the technologies I work with include...
- AWS Amplify - Used to manage and deploy the Amutri full-stack web application. The stack is made up of a bunch of AWS services, including:
  - AWS Lambda
  - Amazon DynamoDB
  - Amazon Cognito
  - Amazon EC2
  - Amazon S3
- React Native and TypeScript - Used to build the frontend Amutri web application.
- Terraform - Used to deploy AWS infrastructure to power the Amutri 3D render experiences.
- Unreal Engine - The beating heart of the 3D experience.
- Pixel Streaming Infrastructure (WebRTC) - The magic to stream the 3D render experience to the user's browser.
Some of the most impactful features I've delivered include...
Social sharing features
These features allow users to share 3D experiences with one another without needing to manually pass around a cumbersome 3D CAD file. Other users are added as either guests or hosts to control their permissions and access levels. This significantly improved the ease of use of the product and made onboarding new users much simpler.
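As a rough sketch of the idea (TypeScript, with hypothetical names and an assumed split where hosts can edit and guests get read-only access; not the actual Amutri schema):

```typescript
// Hypothetical sketch only, not the real Amutri data model.
type ShareRole = 'host' | 'guest';

interface ExperienceShare {
  experienceId: string;
  userId: string;
  role: ShareRole;
}

// Assumption for illustration: hosts can edit, guests can only view.
function canEdit(share: ExperienceShare): boolean {
  return share.role === 'host';
}

function canView(_share: ExperienceShare): boolean {
  return true; // anyone the experience was shared with can view it
}
```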
Microsoft Azure conversion
Amutri was originally built as an AWS cloud-native service; however, it became clear that enterprise customers would require certain workloads to run in their own Azure infrastructure instead. I owned the process of porting Amutri's critical functionality from AWS to Azure. This meant migrating from AWS Lambda to Azure Function Apps, Amazon EC2 to Azure Virtual Machines, and Amazon S3 to Azure Storage Containers (among many other things...), all the while keeping the main Amutri service running in AWS (I did turn quite a bit greyer during this time).
Viewpoints
This feature allows owners or hosts of a 3D experience to record camera locations with associated information about a product or space, to create what we called 'Guided Tours' of a scene. This makes navigating the scene a lot easier for new users and allows product information to be displayed along with a particular view.
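Conceptually, a viewpoint is just a saved camera pose plus some descriptive content; a minimal TypeScript sketch (hypothetical types, not the real implementation) might look like this:

```typescript
// Hypothetical sketch of the viewpoint/guided tour data model.
interface Vec3 { x: number; y: number; z: number; }

interface Viewpoint {
  name: string;
  cameraPosition: Vec3;
  cameraRotation: Vec3;  // e.g. Euler angles; convention assumed for illustration
  description?: string;  // product/space information shown alongside this view
}

// A guided tour is an ordered list of viewpoints the user steps through.
type GuidedTour = Viewpoint[];
```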
Frontend player optimisation
The React application responsible for allowing users to interact with the 3D experience was critical to the product feeling smooth to use. As with any application, the performance of the player had started to suffer as features were added over time. Using the excellent React Developer Tools, I was able to profile the application and understand when and where expensive re-renders were happening. By isolating and fixing the most expensive components, overall CPU utilisation dropped from around 40% to ~1-2%.
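In React terms, this kind of fix usually boils down to memoising expensive components and keeping their props stable; here's a minimal sketch of the pattern (illustrative names only, not the actual player code):

```tsx
import React, { memo, useMemo } from 'react';

// Illustrative only: an expensive child that used to re-render on every
// parent update until it was wrapped in memo() and given stable props.
const SceneOverlay = memo(function SceneOverlay({ labels }: { labels: string[] }) {
  return <ul>{labels.map((label) => <li key={label}>{label}</li>)}</ul>;
});

function Player({ viewpoints }: { viewpoints: { name: string }[] }) {
  // useMemo keeps the array reference stable, so SceneOverlay only
  // re-renders when the underlying viewpoints actually change.
  const labels = useMemo(() => viewpoints.map((v) => v.name), [viewpoints]);
  return <SceneOverlay labels={labels} />;
}

export default Player;
```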
Amazon Web Services (AWS) (April 2017 - January 2024)
- Software Development Manager (May 2022 - January 2024)
- Senior Software Development Engineer (December 2019 - May 2022)
- Software Development Engineer (April 2017 - December 2019)
Languages and technologies
I joined AWS as part of the Amazon Lumberyard team building a new game engine. I started out working on tooling to convert/migrate legacy systems (coming from the version of CryEngine Lumberyard was built on) to the new component entity system Lumberyard created. This progressed to building the initial spline component, which led to building out improved editor tooling to edit splines in the viewport. This eventually became the new manipulator system, which evolved into what we called Component Mode and the new Viewport Interaction Model. This work culminated in development of a new tool for creating simple 3D models directly inside the editor viewport called the White Box Tool.
After progressing to a senior software engineer role, I made the transition to management, and spent several years managing a team of ~6-8 engineers across a variety of features (animation, physics, viewport).
After a reorg, the team I managed and I found ourselves moved to the High Performance Computing organisation. This was a big change for everyone on the team (myself included), as we had to adapt to building services instead of client-side applications (game engines/tools). I worked with some incredible managers and engineers during my time with HPC and learnt a great deal. After helping the team settle as best I could, I decided it was time for a new challenge and made the jump back to a startup.
A handful of the technologies I got to work with included...
- C++ - Amazon Lumberyard (which later became Open 3D Engine, or O3DE) was written primarily in C++ with its own custom renderer (Atom) and architecture patterns (Event Buses).
- Qt - Open 3D Engine made extensive use of the Qt UI framework which was used for all editor tools outside the main viewport (even some inside, after the Viewport team I led added Qt Viewport UI overlays).
Some of the most impactful features I delivered included...
New viewport interaction model and component mode
An initiative I delivered with a brilliant designer on the team was an overhaul of the Lumberyard/O3DE viewport to make it consistent with the new manipulator system (something I'd built previously) that made editing components in the viewport possible. Before, the legacy gizmos we inherited from CryEngine were the default way of interacting with entities in the scene, which was confusing and frustrating for users. We also needed a way to handle sub-modes or contexts within the viewport, to allow bespoke editing experiences based on the component (for example, being able to adjust the dimensions of a shape directly in the viewport without being forced to change values in the entity inspector). This led to the creation of Component Mode, a feature and API that lets component authors add viewport editing to their components (particularly useful for visually adjusting properties of things like lights (rendering) or colliders (physics)).
The new Viewport Interaction Model reimagined how to handle transforming (translating, rotating, scaling) entities in the scene. One feature I'm particularly proud of was our dynamic reference space switching, which made moving between local/parent/world spaces much faster than traditional toggles.
To learn more about the feature, see the O3DE 3D viewport documentation.
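As a rough illustration of the Component Mode idea, here's a heavily simplified sketch in TypeScript (the real O3DE API is C++, and every name here is hypothetical):

```typescript
// Hypothetical sketch of the Component Mode idea (the real API is C++).
// A component opts in by providing a mode that owns its viewport editing.
interface Manipulator {
  draw(): void;
  onMouseMove(deltaX: number, deltaY: number): void;
}

interface ComponentMode {
  // Entering the mode creates manipulators for the component's editable
  // properties (e.g. the dimensions of a shape).
  enter(): Manipulator[];
  // Leaving the mode tears everything down again.
  exit(): void;
}

// The editor tracks which component mode (if any) currently owns the viewport.
class ComponentModeController {
  private active?: { mode: ComponentMode; manipulators: Manipulator[] };

  begin(mode: ComponentMode): void {
    this.end();
    this.active = { mode, manipulators: mode.enter() };
  }

  end(): void {
    this.active?.mode.exit();
    this.active = undefined;
  }
}
```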
White box tool
Together with the same designer I worked with on component mode, I set about creating a new 3D geometry creation tool that could be used directly inside the O3DE viewport. It built on the same systems we'd created for component mode (including manipulators). The White Box Tool allowed users to create simple shapes to 'white box' a level and prototype layouts quickly in the editor before handing the simplified scene to a 3D artist. We focussed on ease of use and intuitive controls for the initial release, and also provided a scriptable Python API for creating more complex shapes. Later we added a viewport UI overlay that would dynamically adjust based on what was being edited. The feedback we received was very positive and customers were able to get up and running with the tool very quickly.
Editor camera
A smaller project, but one with a wide impact, was the new editor camera. Lumberyard and O3DE had always suffered from a difficult-to-use viewport camera. We decided we could build something more usable and flexible to replace the existing system, and make a camera that could be reused across multiple viewports (including the animation and material editors). This guaranteed a consistent experience and avoided duplicated code.
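As a rough illustration of the decoupling goal (a TypeScript sketch with made-up names and a simplified movement model; the real implementation is C++):

```typescript
// Hypothetical sketch of the decoupling idea (the real implementation is C++).
// The camera knows nothing about any particular viewport: it just consumes
// input and produces new state each frame.
interface CameraInput {
  forward: number;    // -1..1
  yawDelta: number;   // radians this frame
  pitchDelta: number; // radians this frame
}

interface CameraState {
  position: [number, number, number];
  yaw: number;
  pitch: number;
}

function updateCamera(state: CameraState, input: CameraInput, dt: number): CameraState {
  const speed = 5; // assumed units per second
  const yaw = state.yaw + input.yawDelta;
  const pitch = state.pitch + input.pitchDelta;
  const [x, y, z] = state.position;
  // Simplified: move along the camera's yaw direction only.
  return {
    position: [
      x + Math.sin(yaw) * input.forward * speed * dt,
      y,
      z + Math.cos(yaw) * input.forward * speed * dt,
    ],
    yaw,
    pitch,
  };
}
```

Because the update is a pure function of state and input, each editor viewport can own its own CameraState and drive it however it likes.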
The result was a new camera, entirely decoupled from the main viewport, that worked across multiple editors. A walkthrough of the design and implementation of the camera can be found below.
Glowmade (January 2016 - April 2017)
- Software Engineer/Gameplay Programmer (January 2016 - April 2017)
Languages and technologies
I joined Glowmade (originally Trink) as the first employee, and was part of a tiny team working on building an app that we hoped might become something akin to a 3D version of Instagram (you'd create these beautiful little vignettes with photos and objects to share with friends). That unfortunately didn't pan out quite as we'd hoped, but after a classic startup pivot, we switched to building a game, and the end result was WonderWorlds, an action platformer released on iOS.
A handful of the technologies I got to work with included...
- C++ - Custom engine written in C++ using bgfx for rendering and Dear ImGui for developer tools.
- iOS - WonderWorlds was built for iOS using Apple's platform/tooling (macOS/Xcode).
Some of the most impactful features I delivered included...
Bullet physics integration
We wanted the scenes users were creating to feel alive and interactive, and there's no better way to achieve this than with physics. I was responsible for integrating the Bullet physics library into the app, and for exposing a number of features (most notably joints) for designers to work with to breathe life into the 3D experiences.
Gameplay scripting
After we pivoted from building an app to a game, we still wanted to keep the content creation aspect alive. All levels in WonderWorlds were built using the in-game level editor, which we also exposed to players so they could build and share their own levels (similar to LittleBigPlanet). I was responsible for creating a number of reusable gameplay components, such as Movers and Rotators, that could be controlled and triggered with in-game tools to allow platformer-style levels to be created with ease.
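To give a feel for what a component like this looks like, here's a hypothetical 'Mover' sketched in TypeScript (the actual game code was C++; names and behaviour are illustrative):

```typescript
// Hypothetical sketch of a reusable 'Mover' gameplay component.
// It oscillates an entity between two points and can be switched
// on/off by in-game triggers.
type Vec3 = [number, number, number];

class Mover {
  private elapsed = 0;

  constructor(
    private readonly start: Vec3,
    private readonly end: Vec3,
    private readonly period: number, // seconds for a full back-and-forth
    public enabled = true,
  ) {}

  // Called by a trigger/switch placed in the level editor.
  setEnabled(enabled: boolean): void {
    this.enabled = enabled;
  }

  // Called once per frame; returns the entity's new position.
  update(dt: number): Vec3 {
    if (this.enabled) this.elapsed += dt;
    // 0 -> 1 -> 0 over one period (ping-pong interpolation).
    const t = 0.5 - 0.5 * Math.cos((2 * Math.PI * this.elapsed) / this.period);
    return [
      this.start[0] + (this.end[0] - this.start[0]) * t,
      this.start[1] + (this.end[1] - this.start[1]) * t,
      this.start[2] + (this.end[2] - this.start[2]) * t,
    ];
  }
}
```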
Camera system
To support both editing and gameplay contexts, I created a camera system to handle navigating a scene on an iPad/iPhone. The camera system supported smooth transitions and a transform stack to handle nested editing situations in unconventional coordinate spaces.
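A transform stack itself is a simple idea: each nested editing context pushes its transform, and the camera composes the stack to work in the innermost space. A minimal TypeScript sketch (translation-only, purely illustrative):

```typescript
// Hypothetical sketch of the transform stack idea, simplified to
// translations only to keep the example short.
type Translation = [number, number, number];

class TransformStack {
  private stack: Translation[] = [];

  push(t: Translation): void {
    this.stack.push(t);
  }

  pop(): void {
    this.stack.pop();
  }

  // Compose all pushed translations to map a local point to world space.
  toWorld([x, y, z]: Translation): Translation {
    return this.stack.reduce<Translation>(
      ([ax, ay, az], [tx, ty, tz]) => [ax + tx, ay + ty, az + tz],
      [x, y, z],
    );
  }
}
```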
Fireproof Games (October 2014 - December 2015)
- Software Engineer/Gameplay Programmer (October 2014 - December 2015)
Languages and technologies
Fireproof Games was my second job in the games industry. I had the privilege of working with an incredibly talented team and was able to work on both The Room Three (where I implemented a good chunk of the game's puzzles) and Omega Agent (a little known, but super polished VR game) for the Samsung Gear VR headset.
A handful of the technologies I got to work with included...
- Unity - The Room Three and Omega Agent were both built using Unity.
- iOS/Android - All projects were mobile focussed and required building and testing on either Android or iOS.
Some of the most impactful features I delivered included...
Automated bug reporting tool
As we reached the end of development for The Room Three, we identified that we needed a more streamlined way for the QA team we were working with to report bugs. We'd always ask for a screenshot and the associated logs with each report, but these weren't always included as part of the ticket. I created a tool, built into the development build of the game, that would, with a single button press, capture the current state of the game, a screenshot, and any associated logs, and upload them to a new bug ticket in Assembla. Bug reports improved significantly in quality, and time to resolution dropped as less back-and-forth was needed to request additional information about each bug.
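The flow itself was straightforward; here's a rough TypeScript sketch of its shape (the real tool lived inside the game's Unity development build, and the endpoint and helper names here are made up):

```typescript
// Hypothetical sketch of the one-button bug report flow.
interface BugReport {
  summary: string;
  gameState: string;   // serialised snapshot of the current game state
  screenshot: Blob;    // capture of the current frame
  logs: string;        // recent log output
}

async function submitBugReport(report: BugReport): Promise<void> {
  const form = new FormData();
  form.append('summary', report.summary);
  form.append('gameState', report.gameState);
  form.append('logs', report.logs);
  form.append('screenshot', report.screenshot, 'screenshot.png');
  // 'https://tracker.example.com/tickets' stands in for the real bug tracker API.
  await fetch('https://tracker.example.com/tickets', { method: 'POST', body: form });
}
```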
Mobile handset automation
With The Room Three, we wanted to support as many devices as possible, but due to the huge variety of handsets with differing CPUs/GPUs and performance profiles, testing was an enormous challenge. To improve the efficiency of testing the end-to-end flow of the game, I created a tool that could play the game from start to finish, taking screenshots at key moments to verify puzzle elements rendered correctly. We deployed this to a broad array of handsets in the cloud, and could then quickly review the logs/screenshots for any inconsistencies. This significantly reduced the manual testing effort required to ensure the game worked across as many devices as possible.
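In spirit, the automation was a scripted playthrough with screenshot checkpoints; a tiny TypeScript sketch of the idea (purely illustrative, not the real implementation):

```typescript
// Hypothetical sketch: step through scripted actions and capture a
// screenshot at each checkpoint for later review alongside the device logs.
interface Step {
  action: () => Promise<void>; // e.g. replay a recorded input sequence
  checkpoint?: string;         // screenshot name, if this step should be verified
}

async function runPlaythrough(
  steps: Step[],
  capture: (name: string) => Promise<void>,
): Promise<void> {
  for (const step of steps) {
    await step.action();
    if (step.checkpoint) {
      await capture(step.checkpoint);
    }
  }
}
```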
Omega Agent gameplay trailer
With Omega Agent being a VR game, we wanted a way for people to be able to experience the game before purchasing it. To achieve this, we needed a way to record player inputs and play them back offline, rendering a full 360-degree view so people could experience the game as a trailer in VR. I built a system that captured all inputs while the game was being played (allowing a designer to craft a fun and engaging trailer experience), and then played those inputs back to recreate the experience, only at ~1 fps, while the offline renderer recorded a full 360-degree view (a useful application of the strategy design pattern, if memory serves). This allowed us to build a fun trailer that people with a VR headset could experience before purchasing the game.
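The strategy-pattern part is the interesting bit: the game reads input through an interface, and the 'live' source gets swapped for a recorded one when rendering the offline 360-degree view. A hypothetical TypeScript sketch (the actual game was built in Unity, so this is purely illustrative):

```typescript
// Hypothetical sketch of the strategy pattern for input capture/playback.
interface InputFrame {
  thrust: number;
  yaw: number;
  pitch: number;
}

interface InputSource {
  next(): InputFrame | undefined; // undefined when playback is finished
}

// Wraps the live source and records every frame while a designer plays.
class RecordingInputSource implements InputSource {
  readonly recorded: InputFrame[] = [];
  constructor(private readonly live: InputSource) {}

  next(): InputFrame | undefined {
    const frame = this.live.next();
    if (frame) this.recorded.push(frame);
    return frame;
  }
}

// Replays the recorded frames at whatever rate the offline renderer runs (~1 fps).
class PlaybackInputSource implements InputSource {
  private index = 0;
  constructor(private readonly recorded: InputFrame[]) {}

  next(): InputFrame | undefined {
    return this.recorded[this.index++];
  }
}
```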
Electronic Arts (July 2011 - September 2014)
Criterion Games, Ghost Games
- Software Engineer (November 2012 - September 2014)
- Graduate Software Engineer (July 2011 - October 2012)
Languages and technologies
I joined Criterion Games straight out of university and was lucky enough to start on the Need for Speed: Most Wanted reboot (released in 2012). I worked primarily as an AI and gameplay programmer on the traffic system, racer avoidance and cop pursuit behaviours. After Most Wanted, I spent a year in Sweden as part of Ghost Games, where I continued to work as a gameplay programmer and also did quite a bit of UI programming. After returning to the UK I started work on the next Need for Speed before pivoting to prototyping a new IP in Unity.
A handful of the technologies I got to work with included...
- Frostbite - EA's proprietary internal engine (now used across nearly all internal studios).
- PlayStation/Xbox - Both Need for Speed: Most Wanted and Need for Speed: Rivals shipped across multiple SKUs, including PlayStation 3, Xbox 360, PlayStation 4 and Xbox One.
Some of the most impactful features I delivered included...
Need for Speed: Most Wanted traffic system
The traffic system that shipped in Need for Speed: Most Wanted evolved from the one used in Burnout Paradise. The code had gone through a series of changes in Need for Speed: Hot Pursuit, and needed updating to be compatible with the open world that Most Wanted supported. At the time I was still incredibly junior, and in hindsight was in way over my head, but I managed to (for the most part) tame and understand the sprawling traffic system (I remember TrafficEntitySystem.cpp was > 20,000 lines long... 🙈), and make some small updates and improvements like evasive 'drive around' behaviours and improved collision recovery.
Need for Speed: Rivals AllDrive UI
Towards the release of Need for Speed: Rivals, the much-promoted AllDrive feature (the ability to instantly join other players' games and start spontaneous races with them) was not working as designed. The main problem was that it was hard to know if a friend was close, or where they were, which made it difficult to differentiate them from the AI racers. Weeks before going gold, it was decided we needed much better signposting for this, so I (with the help of brilliant designers and UX artists) added a new UI feature to the HUD to signal when a friend joined and when they came within range, making the world feel much more alive and populated. This really helped sell the feature and was well received by players.