How This Site Was Made

A self-archiving sprint turned into a systems-thinking design experiment

Problem

This site was created after the closure of Immerse, during a period when I needed to consolidate over 8 years of immersive UX work into a cohesive portfolio. My goals were to:

  • Reconstruct case studies quickly but accurately from internal documentation

  • Build a CMS-backed site that could grow with future projects

  • Test how AI could support content structuring, debugging, and design iteration

  • Prototype a visually engaging component system that reflected my design values


This came together in three overlapping strands: one visual, one structural, one technical.

LiquidGlass UI Component

I wanted a visual element that would modernize the portfolio and act as a consistent motif across project thumbnails. Inspired by Apple’s liquid glass UI and building on earlier 'frosted glass' work from the Immerse App, I aimed to create a responsive, legible distortion-based glass effect in Framer. It was also an interesting challenge, as I wondered whether adding distortion would let me reduce the amount of blur required in the frosted glass effect.


The goal was to implement an animated SVG-based glass layer with blur, distortion, and tint. It needed to be responsive, support white text overlays, and stay performant. I used ChatGPT as a debug partner throughout. It helped troubleshoot workshop code, including masking issues on rounded corners, and it added features I wanted to try: edge stroke effects, border color and opacity, a vignette, and edge shading. After refining the visual design, I built a new filter combining only the elements I actually needed.


I exposed several parameters I wanted to control directly in Framer, including distortion intensity, blur level, and border thickness and color, to help refine text legibility.
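As an illustration only (not the production component), a filter stack of the kind described above might look like the sketch below. All values are placeholders standing in for the exposed parameters: stdDeviation for blur level, scale for distortion intensity, and the flood color/opacity for tint. It assumes the filter is referenced from CSS (e.g. backdrop-filter: url(#liquid-glass)), support for which varies by browser.

```xml
<svg width="0" height="0" aria-hidden="true">
  <filter id="liquid-glass">
    <!-- Soften the backdrop; distortion lets this stay lower than a pure frosted-glass blur -->
    <feGaussianBlur in="SourceGraphic" stdDeviation="6" result="blurred"/>
    <!-- Procedural noise drives the glassy ripple -->
    <feTurbulence type="fractalNoise" baseFrequency="0.012" numOctaves="2" result="noise"/>
    <!-- Push the blurred backdrop around using the noise; scale acts as distortion intensity -->
    <feDisplacementMap in="blurred" in2="noise" scale="18" result="distorted"/>
    <!-- Light tint layered over the result to keep white text legible -->
    <feFlood flood-color="#ffffff" flood-opacity="0.15" result="tint"/>
    <feBlend in="tint" in2="distorted" mode="normal"/>
  </filter>
</svg>
```

Edge strokes, vignettes, and edge shading would be additional filter primitives layered on top; surfacing the attribute values as component properties corresponds to the Framer controls described above.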


Separately, I used ChatGPT to help debug implementation issues inside Framer, including problems with z-index stacking.


The version now used on the site is stripped back, refined to maintain performance: earlier test versions caused noticeable slowdown when multiple instances appeared on a single page.

Archive System for Case Studies

I had over 15 immersive projects spanning 8 years, each with layered documentation: JIRA tickets, Figma files, UX diagrams, video walkthroughs, and more. To make sense of it all, I created an archiving framework and used ChatGPT to help organize and surface my own contributions, UX decisions, and measurable outcomes from each file.


This wasn’t a one-click solution. I:

  • Designed and populated a folder structure organized by phase (Research, Design, Delivery, Outcomes)

  • Wrote targeted logging prompts instructing ChatGPT to extract facts only, without synthesis or assumptions, and refined these through trial and error

  • Manually batch-processed each file and saved the output into Word documents, creating a raw archive for each project

  • Documented the process to ensure it was repeatable and auditable


The result was a set of structured, traceable records I could mine for case study material - faster and more complete than working purely from memory. It gave me a detailed reminder of what I’d actually done across projects and allowed me to draft content I could refine and finalize with confidence.


Later in the process, I used the raw archive to generate rough outlines for each case study. These served only as a framework; I edited and rewrote each one, line by line, to shape the final content. All model interaction was done via a private Teams workspace to maintain security and control.

Part of the workflow used to build a raw archive of past projects. Prompts were designed to extract facts, not generate content.

AI-Assisted Design

Once the project case studies were completed in documents, I set about populating the CMS (in Framer). I noticed that plugins were available to export and import the CMS data, so I wondered whether I could automate the process, at least to get the data in. I was learning Framer on the fly and needed to save time wherever I could. Using ChatGPT, I built a lightweight Python script that parsed my .docx case study files and output CMS-ready CSV data.


This included field mapping for:

  • Title, Subtitle, Dates, Hardware, Role

  • Full project sections (Problem, Process, Outcome, Reflection)

  • CMS-friendly formatting (e.g., HTML in some fields, shortened date formats)


I went through a few CSV versions, adjusting the script and fixing formatting edge cases as I went. The final CSV import gave me a working, visually consistent portfolio in hours instead of days, ready for in-browser editing and iteration.
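As a rough illustration of the docx-to-CSV step (not the actual script), something similar can be done with only the Python standard library, since a .docx file is a zip archive whose body lives in word/document.xml. The section headings and CMS field slugs below are assumptions based on the template described above:

```python
import csv
import io
import zipfile
import xml.etree.ElementTree as ET

# WordprocessingML namespace used inside word/document.xml.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def docx_paragraphs(path):
    """Yield the plain text of each non-empty paragraph in a .docx file."""
    with zipfile.ZipFile(path) as zf:
        root = ET.fromstring(zf.read("word/document.xml"))
    for para in root.iter(f"{W}p"):
        text = "".join(node.text or "" for node in para.iter(f"{W}t"))
        if text.strip():
            yield text.strip()

# Assumed mapping from template headings to CMS field slugs.
SECTION_FIELDS = {
    "Problem": "problem",
    "Process": "process",
    "Outcome": "outcome",
    "Reflection": "reflection",
}

def split_sections(paragraphs):
    """Group paragraphs under the heading that precedes them."""
    fields, current = {}, None
    for text in paragraphs:
        if text in SECTION_FIELDS:
            current = SECTION_FIELDS[text]
            fields[current] = []
        elif current is not None:
            fields[current].append(text)
    # "HTML in some fields": wrap each paragraph in <p> tags.
    return {k: "".join(f"<p>{p}</p>" for p in v) for k, v in fields.items()}

def rows_to_csv(rows):
    """Serialise mapped rows into a CMS-ready CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["title", *SECTION_FIELDS.values()])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The real script also handled fields like subtitle, dates, hardware, and role, plus the formatting edge cases mentioned above; this sketch only shows the general shape of the parse-map-export loop.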

Outcome

  • A responsive, clean, CMS-powered portfolio published within 4 weeks

  • A custom LiquidGlass UI component deployed across the site

  • An AI-supported archive system used to build case studies and align my CV

  • A repeatable, script-based workflow for future case study import

Reflection

This project felt like an impossible task at the beginning. In the end, it became a kind of design operations experiment, made up of content triage, systems thinking, and AI collaboration.


The tools didn’t write my site or invent my case studies. But they did:

  • Help me create effects that would previously have been harder to achieve

  • Enable me to debug code

  • Surface patterns I might have missed

  • Translate complex ideas into structured data

  • Let me work through a backlog of project history with precision and scale


It reinforced how much design today depends on structured thinking, adaptable workflows, and a clear sense of purpose - especially when working with new tools.