I work with companies to improve their product design lifecycle through agentic AI tooling. As a product designer with deep software engineering fluency, I focus on the following areas:
Agentic Systems for Product Development
Designing AI-augmented workflows for faster design-to-production cycles
Experimenting with how to keep product vision intact through implementation
Building tooling that smooths phase transitions and reduces handoff overhead
Continuous Prototyping + User Testing
Owning creative ideation based on product needs
High-fidelity prototypes that validate ideas before committing to production
Rapid iteration cycles informed by user feedback
Bridging the gap between design intent and shipped product
Complex Tools & AI-Native Products
Interaction design for multi-panel layouts and data-dense interfaces
Agentic and conversational UI patterns
Designing for uncertainty and AI-driven workflows
Bridging Design and Engineering
Full-stack engineer with over 10 years of experience
API design and system architecture literacy
Prototypes that become production code
I work best embedded in product teams building complex tools, AI-native systems, or anything where things usually break in the gap between design intent and shipped product.
Intuitive Surgical — Advanced Product Design
Roles: Design Technologist, Data Visualization, AI/ML
Designing and engineering advanced prototypes for future surgical video and analytics tools, exploring new ways for clinicians and researchers to navigate, understand, and summarize complex procedures.
Sole design engineer / design technologist embedded in an advanced product design team, responsible for building future-facing prototypes for surgical video and data tools. Developed internal case-explorer concepts that link procedure video with rich system data and event timelines, and prototyped interfaces for structured procedure insight and automated post-case review.
UX Research: Agentic 3D Environments
Roles: UX Research, AI/ML, 3D Visualization
Ongoing research into agentic UX, where AI agents are natively equipped with visualization capabilities so they can communicate more effectively. This case study shows a working agent that traverses a 3D mapped environment and presents the results of user queries within it.
FOAM
Designed and built a creative product that makes AI-generated speech playable. FOAM transforms synthesized voice into phonetic elements, enabling musicians to trigger stutters, glitch consonants, and vowel textures via MIDI or the built-in step sequencer.
Built an async processing pipeline that generates speech via text-to-speech APIs, runs forced alignment to extract frame-accurate phoneme boundaries, and delivers playable sample bundles—all orchestrated through job queues with webhook-based payments and automated error recovery. Shipped a complete product now used by producers worldwide.
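The shape of that pipeline is easy to sketch. Below is a minimal illustration assuming BullMQ as the job queue; `synthesizeSpeech`, `runForcedAlignment`, and `buildSampleBundle` are hypothetical stand-ins for the production TTS call, aligner, and packaging steps, not the shipped code.

```typescript
import { Queue, Worker } from "bullmq";

// Hypothetical stand-ins for the real TTS, forced aligner, and slicer.
declare function synthesizeSpeech(text: string, voiceId: string): Promise<Buffer>;
declare function runForcedAlignment(
  audio: Buffer,
  text: string
): Promise<{ phoneme: string; startMs: number; endMs: number }[]>;
declare function buildSampleBundle(
  audio: Buffer,
  bounds: { phoneme: string; startMs: number; endMs: number }[]
): Promise<string>; // returns a bundle URL

interface RenderJob { text: string; voiceId: string }

const connection = { host: "localhost", port: 6379 };
const renderQueue = new Queue<RenderJob>("foam-render", { connection });

// Worker: TTS -> forced alignment -> phoneme slicing -> playable bundle.
new Worker<RenderJob>(
  "foam-render",
  async (job) => {
    const audio = await synthesizeSpeech(job.data.text, job.data.voiceId);
    // Forced alignment yields frame-accurate phoneme boundaries.
    const bounds = await runForcedAlignment(audio, job.data.text);
    return buildSampleBundle(audio, bounds);
  },
  { connection, concurrency: 2 }
);

// Payment webhook enqueues a render; queue retries handle error recovery.
export async function onPaymentConfirmed(text: string, voiceId: string) {
  await renderQueue.add("render", { text, voiceId }, {
    attempts: 3,
    backoff: { type: "exponential", delay: 5_000 },
  });
}
```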
Adidas AR Exhibition
Roles: Lead Developer, Exhibition Design, AR/VR
For the permanent AR exhibition at Adidas HQ in Germany, I acted as creative technical lead. The installation featured a series of AR experiences triggered by physical markers.
Exquisite Landscape
LandscapeClock is a generative AI art piece that creates a continuously panning 24-hour landscape panorama synchronized to real time. A Railway background worker runs daily, using LangChain + OpenAI to generate 24 chained narrative prompts—each referencing the previous for continuity—then iteratively builds a seamless panorama using Stability AI's mask-based inpainting, preserving existing content while extending the scene segment by segment. The resulting image and continuity files are uploaded to Vercel Blob storage, where a Nuxt/Vue frontend pans through the panorama based on current time, with hourly descriptions appearing via typewriter animation. Multi-day continuity is achieved by using each day's final segment as the next day's starting point.
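The time-to-position mapping at the heart of the frontend is small enough to sketch. This is an illustrative version, not the actual Nuxt component; the element id and pixel widths are placeholders.

```typescript
// Minimal sketch of the time-synchronized pan, assuming the panorama is one
// wide image whose full width represents 24 hours.
function panoramaOffsetPx(
  panoramaWidth: number,
  viewportWidth: number,
  now: Date = new Date()
): number {
  const secondsIntoDay =
    now.getHours() * 3600 + now.getMinutes() * 60 + now.getSeconds();
  const dayFraction = secondsIntoDay / 86_400; // 0 at midnight, ~1 just before
  // Pan the scrollable range (image width minus viewport) across the day.
  return dayFraction * (panoramaWidth - viewportWidth);
}

// e.g. update a CSS transform once a second:
setInterval(() => {
  const el = document.getElementById("panorama");
  if (el) el.style.transform = `translateX(${-panoramaOffsetPx(24_000, 1_920)}px)`;
}, 1_000);
```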
AirPods Head Tracking
This prototype demonstrates dynamic head tracking and AR interaction with remote content using AirPods, opening up a range of new spatial interactions, in particular wayfinding with spatial audio cues. The AirPods send motion data to the iPhone app, which relays it to a server; multiple apps running on other screens can then leverage the motion data.
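A screen client for this relay can be sketched in a few lines. This assumes the server rebroadcasts head poses as JSON over a WebSocket; the URL and field names here are illustrative, not the prototype's actual protocol.

```typescript
// Sketch of a screen client consuming relayed head motion (yaw/pitch/roll
// in radians, as forwarded from the iPhone's headphone motion data).
interface HeadPose { yaw: number; pitch: number; roll: number }

const socket = new WebSocket("wss://example.test/motion");

socket.onmessage = (event: MessageEvent<string>) => {
  const pose: HeadPose = JSON.parse(event.data);
  // Counter-rotate the content so it reads as fixed in the room
  // while the viewer's head turns.
  const scene = document.getElementById("scene");
  if (scene) scene.style.transform = `rotateY(${-pose.yaw}rad)`;
};
```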
Immersive Motion Drawer
This prototype uses an immersive 3D particle system to transform body motion into an interactive experience. The demo has two main components: an interactive 3D particle environment built in Unity3D, and a web app that uses rotation data from a mobile device to remotely interact with the 3D display. A Firebase realtime database transmits the data from the web app to the interactive display.
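The phone-side relay reduces to a few lines. This sketch assumes the Firebase v9 modular SDK and an illustrative `rotation` path that the Unity display subscribes to; the config is a placeholder, not the original implementation.

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, set } from "firebase/database";

const app = initializeApp({ databaseURL: "https://example-db.firebaseio.com" });
const db = getDatabase(app);

window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  // alpha/beta/gamma describe the device's rotation in degrees.
  set(ref(db, "rotation"), {
    alpha: e.alpha,
    beta: e.beta,
    gamma: e.gamma,
  });
});
```

In practice the deviceorientation stream fires many times per second, so a real relay would throttle writes before sending them to the database.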
Interactive 3D Web Campaign
Led software development for a seamless 3D web experience that introduced a new product in a playful way. Built with Three.js and WebGL, it was a fully interactive 3D environment that could be explored in the browser.
Touchless Web Prototypes
Roles: Lead Developer, AR/VR, JavaScript, Three.js
In response to the COVID-19 public health crisis, Touchless is a series of prototypes envisioning ways touchless technology can be used in physical environments. I was the creative technology lead on the remote manipulation prototype, which lets viewers interact with exhibit artifacts from their smartphones.
8th Wall AR Experiments
A prototype built on 8th Wall demonstrating an interactive AR experience in which users explore a 3D space with their mobile device.
WebXR Experiments
Roles: Lead Developer, AR/VR, JavaScript, Three.js
Experiments with fully web-based XR using Three.js
An AR 3D drawing system that traces the outlines of 3D objects using custom software.
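A minimal starting point for these fully web-based XR experiments looks roughly like this, assuming a current Three.js build with the `three/addons` import map; it is an illustrative scene, not the code behind the demos shown.

```typescript
import * as THREE from "three";
import { ARButton } from "three/addons/webxr/ARButton.js";

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.01, 20);

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.setSize(innerWidth, innerHeight);
renderer.xr.enabled = true; // let the XR session drive frame timing
document.body.append(renderer.domElement, ARButton.createButton(renderer));

// A small cube half a meter in front of the viewer.
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(0.1, 0.1, 0.1),
  new THREE.MeshNormalMaterial()
);
cube.position.set(0, 0, -0.5);
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```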
Permanent Installation at Microsoft Cybercrime Center
Roles: Software Development, Creative Technology, Interactive Data Art and Exhibition Design
At The Office for Creative Research, we created a permanent installation for the Microsoft Cybercrime Center that maps and visualizes botnets in the wild, giving researchers a more intuitive way of understanding their activity over time. Using realtime datasets from millions of infected computers, we created an interactive application that allowed the data to be explored visually and sonically.
ScreamOmeter – Breaking glass with sound at Norwegian Science Museum
A collaboration with Gagarin on an installation where visitors get the chance to break a wine glass using nothing but their own voice. Framed as a game, the installation demonstrated the physics of sympathetic resonance: an audience member's voice would cause a real glass to shatter. A custom system combining architecture, software, and physical computing brought the experience to life.
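The core detection idea can be sketched with the Web Audio API: watch for sustained energy at the glass's resonant frequency. The frequency, threshold, and frame count below are illustrative, and the installation itself ran on custom software rather than this browser sketch.

```typescript
async function watchForResonance(targetHz = 620, threshold = 200) {
  const ctx = new AudioContext();
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  const analyser = new AnalyserNode(ctx, { fftSize: 4096 });
  ctx.createMediaStreamSource(mic).connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  const hzPerBin = ctx.sampleRate / analyser.fftSize;
  const targetBin = Math.round(targetHz / hzPerBin);

  let sustained = 0;
  const tick = () => {
    analyser.getByteFrequencyData(bins);
    // Count consecutive frames with strong energy at the resonant frequency.
    sustained = bins[targetBin] > threshold ? sustained + 1 : 0;
    if (sustained > 60) console.log("sustained resonance: trigger the effect");
    requestAnimationFrame(tick);
  };
  tick();
}
```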
Wonwei
Wonwei is a research-driven design & technology studio working on commissions, products, and artworks; I worked there as Art Director and Technical Lead on a number of projects. Wonwei was commissioned by Universal Music Group to create a realtime, immersive 3D visual show for musician Ólafur Arnalds' world tour. A software system of generative landscapes was built to create an atmospheric narrative in response to the music during the concert. Each landscape reacted to the live music and to the performer's movement, captured with a Kinect camera.
Study For Resonators
Roles: Art Direction, Software Development, Circuit Design, Creative Technology
Fifty resonating structures create an evolving polyrhythmic installation that transforms the gallery space into a living sound sculpture. The percussive instruments produce a perpetually evolving musical composition, developed using custom software and physical computing to activate the custom-designed instruments. Commissioned by the media art festival Raflost in Reykjavík, Iceland.
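The polyrhythmic behavior comes from giving each instrument its own period. A toy sketch of the idea, with `strike` as a hypothetical stand-in for the physical actuation hardware and the periods chosen purely for illustration:

```typescript
// Each of the fifty instruments fires on its own period; slightly irregular
// periods keep the phases drifting so the pattern never exactly repeats.
interface Resonator { id: number; periodMs: number }

const resonators: Resonator[] = Array.from({ length: 50 }, (_, i) => ({
  id: i,
  periodMs: 400 + i * 37,
}));

// Stand-in for driving the physical instrument.
function strike(id: number) {
  console.log(`strike resonator ${id}`);
}

for (const r of resonators) {
  setInterval(() => strike(r.id), r.periodMs);
}
```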