The Immerse SDK - Interaction Package

Helped Immerse reduce engineering time and boost usability in VR enterprise training by transforming project-specific UX into standardized, reusable interaction modules for multiple teams.

Role

VR UX Designer

Target Hardware

Multiple Standalone and PCVR devices

Industries

Developer Tools / XR Development

Date

2017-2025, ongoing development

Problem

As Immerse shipped increasingly varied VR training solutions, a custom SDK began to take shape, designed to make common tasks like snapping or teleportation easier to implement. When I joined, some of these foundations were already in place in a basic form. But as projects grew in complexity, so did the repetition and the UX inconsistencies between them.


Even with the SDK, it became clear we needed to:

  • Stay aligned with evolving immersive design standards, and

  • Reduce project overhead, especially for systems that kept getting rebuilt (like UI patterns, hand-object interactions, or basic onboarding flows).


We wanted to avoid duplicating effort, and we needed systems that would help developers move faster while giving users, often new to VR, reliable, consistent experiences.


My challenge was to embed UX thinking into the SDK in a way that respected developer constraints but quietly lifted the polish, usability, and scalability across every project that used it.

My Role

My involvement with the SDK wasn’t formal ownership. I originally joined Immerse as a Producer (also responsible for UX), and through both direct project work and a personal interest in VR gaming, I was encouraged to suggest systems or interaction improvements to solve immediate project needs and address broader usability gaps. Over time, it made sense to elevate some of these patterns into the SDK.


Much of this started informally: sketches, headset recordings, and embedded feedback loops. But these efforts accumulated into something more systemic.

Process

It began with the QinetiQ project, where I redesigned the teleport controls to simplify spatial navigation. That redesign was later standardized into the SDK’s core locomotion logic and reused across multiple apps. I went on to push for additional locomotion modes like snap turning and continuous movement to better support varied training contexts and user preferences. For continuous motion, I tested headset builds and gathered feedback to tune the acceleration curve and top speed for comfort and clarity.
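To give a sense of the kind of tuning involved, here is a minimal, illustrative sketch (Python pseudocode rather than the SDK's actual Unity code) of a capped-acceleration ramp toward a top speed. The parameter values are hypothetical stand-ins for the ones we converged on through headset testing.

```python
# Illustrative sketch only, not the SDK's implementation: a simple model of the
# continuous-locomotion tuning described above. Curve shape, top speed, and
# ramp time are hypothetical values of the kind we iterated on in headset tests.

def locomotion_speed(stick_input: float, current_speed: float, dt: float,
                     top_speed: float = 2.5, ramp_time: float = 0.4) -> float:
    """Ease the player's speed toward the stick-scaled target.

    stick_input: thumbstick deflection, 0.0-1.0
    top_speed:   comfort-tuned ceiling in metres per second
    ramp_time:   seconds to reach top speed, which sets the acceleration cap
    """
    target = stick_input * top_speed
    max_step = (top_speed / ramp_time) * dt  # capped acceleration per frame
    if current_speed < target:
        return min(current_speed + max_step, target)
    return max(current_speed - max_step, target)
```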


A similar shift happened during the Ford VR Authoring project. I designed a system of spatial guides, initially using invisible alignment lines, to help users intuitively place and orient components in 3D space. It worked well, so I proposed integrating it into the SDK. From there, the system expanded:

  • Line guides supported movement along straight paths with controlled rotation

  • Point guides, first used in the Yale medical project, subtly guided hands to specific positions regardless of entry angle

  • Plane guides enabled constrained object movement across surfaces like tables


This system became a core, reusable SDK feature applied across diverse projects.
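To make the three guide types concrete, the sketch below shows the geometric idea behind each constraint. It is purely illustrative (Python with NumPy); the function names are mine, not the SDK's, and the real system also handled rotation and smoothing.

```python
# Illustrative sketch only: the geometry behind the guide system, not the SDK's code.
# Each guide constrains a grabbed object's target position by projecting it onto
# a point, a line, or a plane.
import numpy as np

def constrain_to_point(pos, anchor):
    """Point guide: pull the position onto a fixed anchor regardless of entry angle."""
    return np.asarray(anchor, dtype=float)

def constrain_to_line(pos, origin, direction):
    """Line guide: project the position onto a straight path through the origin."""
    p, o = np.asarray(pos, dtype=float), np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    return o + np.dot(p - o, d) * d

def constrain_to_plane(pos, origin, normal):
    """Plane guide: keep movement on a flat surface such as a tabletop."""
    p, o = np.asarray(pos, dtype=float), np.asarray(origin, dtype=float)
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return p - np.dot(p - o, n) * n
```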


Across AstraZeneca, Shell, and the Immerse App, I continued surfacing interaction patterns that could be generalized: snapping behavior, highlight states, modular UI layouts, hand pose definitions. As the SDK matured, so did my involvement. Over time, I became the go-to for end-user UX considerations, helping translate fragmented learnings into systems that fed directly into the SDK. Some of these contributions are detailed below.

Solution

Many of the SDK contributions began in the field. I’d record headset walkthroughs, sketch in ShapesXR or Figma, and use these artefacts to propose turning one-off solutions into shared tools. This approach let us validate each system through real use before pushing it into the SDK.


Snapping & Feedback

Inconsistent snap behavior often caused user hesitation. I designed a more responsive system with hover states, active highlights, and subtle haptics, refined across projects to ensure users could anticipate and understand each interaction.
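As a rough illustration of how the feedback stages fit together, here is a small state sketch. The states, distance thresholds, and the idea of mapping each state to a highlight and haptic cue are hypothetical examples of the pattern, not the SDK's real values or API.

```python
# Illustrative sketch only: a simple state model for the snap feedback described
# above. States and thresholds are hypothetical, not the SDK's actual behaviour.
from enum import Enum, auto

class SnapState(Enum):
    IDLE = auto()      # object far from any snap zone
    HOVERING = auto()  # within range: show hover highlight
    ENGAGED = auto()   # close enough to snap: active highlight plus light haptic pulse
    SNAPPED = auto()   # released into the zone: settle with a confirming haptic

def evaluate_snap(distance: float, released: bool,
                  hover_radius: float = 0.15, snap_radius: float = 0.05) -> SnapState:
    """Map distance-to-target (metres) and release state onto a feedback state."""
    if distance > hover_radius:
        return SnapState.IDLE
    if distance > snap_radius:
        return SnapState.HOVERING
    return SnapState.SNAPPED if released else SnapState.ENGAGED
```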


UI System Development

UI often required time-consuming work that didn’t align with project budgets. To address this, I conducted a comparative review of UI patterns across a range of VR apps and games, focusing on layout conventions, input responsiveness, and visual hierarchy. I documented the findings and used them to help define a modular UI system that balanced standardisation with flexibility.

Working alongside another designer, we built Figma components, defined spatial sizing and Z-depth rules, and validated layouts directly in Unity. I pushed for support of both laser-pointer and direct-touch input, and later introduced a token-based theming system that auto-generated accessible color sets from a minimal base palette.
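The colour side of the token system leaned on standard contrast rules. The sketch below is a minimal illustration of the idea, using the WCAG relative-luminance and contrast-ratio formulas and the 4.5:1 AA threshold for normal text; the function names and fallback colours are my own examples, not the shipped theming code.

```python
# Illustrative sketch only: auto-deriving an accessible text colour from a base
# palette using WCAG 2.x formulas. Token names and fallback colours are hypothetical.

def relative_luminance(rgb):
    """WCAG relative luminance from 0-255 sRGB values."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, always >= 1.0."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def pick_text_colour(background, threshold=4.5):
    """Prefer white text when it clears the AA threshold, otherwise fall back to near-black."""
    white, dark = (255, 255, 255), (20, 20, 20)
    return white if contrast_ratio(white, background) >= threshold else dark
```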


Hand Pose Integration

For AstraZeneca, I helped define custom hand poses and tuned offsets per headset to support precise interaction. This work evolved into a shared system used across other projects, ensuring consistency in pose behavior and feel.
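One way to picture the shared pose system is as a single authored pose plus a per-device offset table, so the virtual hand lands the same way on every controller. The sketch below is only an illustration of that structure; the device names and offset values are invented examples, not real SDK data.

```python
# Illustrative sketch only: organising per-headset pose offsets so one authored
# hand pose feels consistent across controllers. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class PoseOffset:
    position: tuple[float, float, float]  # metres, in the controller's local space
    rotation: tuple[float, float, float]  # euler degrees

# One authored grip pose, nudged per device so the hand lands the same way.
GRIP_POSE_OFFSETS = {
    "quest_2_touch":   PoseOffset((0.000, -0.010, 0.015), (5.0, 0.0, 0.0)),
    "quest_pro_touch": PoseOffset((0.002, -0.008, 0.012), (4.0, 0.0, 0.0)),
    "vive_wand":       PoseOffset((0.000, -0.020, 0.030), (12.0, 0.0, 0.0)),
}

def resolve_offset(device_id: str) -> PoseOffset:
    """Fall back to a neutral offset when a device has no tuned entry."""
    return GRIP_POSE_OFFSETS.get(device_id, PoseOffset((0.0, 0.0, 0.0), (0.0, 0.0, 0.0)))
```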

Outcome

The cumulative result of these contributions was a more robust, usable, and scalable SDK.

  • Interaction Components: Snapping, grabbing, guides, and highlight states were standardized across projects.

  • UI System: A reusable, themeable VR UI framework that could scale with client needs and device constraints.

  • Developer Efficiency: Studio teams could prototype and ship faster with prefabbed UX behaviors that just worked.

  • Cross-Project Consistency: Whether it was a collaborative space or a pharma cleanroom, users encountered the same visual and interaction logic.

Reflection

What I Learned

Designing for a developer-facing SDK sharpened my ability to think in systems, not just at the end-user level. It pushed me to consider not only the experience inside the headset, but also the developer experience of implementing that UX repeatably and efficiently. I also saw how much polish and perceived quality in VR comes from small details: highlight timing, tooltip depth, snap thresholds, even the way a hand pose lands on an object.


Challenges

Because I didn’t own the SDK directly, every contribution had to be justified, whether through headset footage, developer pain points, or design prototypes that made the benefit clear. It sometimes took persistence to make the case for changes, especially where features needed to stay lightweight.


Looking Ahead

There’s still room to push further: adding support for hand-tracked UI, integrating Meta avatars, layering in spatial audio cues for multi-user feedback, and capturing telemetry to close the loop on real usage. But the foundation we built turned scattered project learnings into a toolkit that makes future projects faster, more consistent, and more usable.