Shifting
Interface
An interactive visual system where structured AI state data shapes atmosphere, motion, composition, and interaction in real time. Each mode feels behaviorally distinct — not through generated text, but through how the interface moves and feels.
// 01 — Concept & Intent
AI as behavior,
not output
Most AI projects treat the model as a text generator. This one treats it as a behavioral engine — a source of structured state that the interface then embodies visually and physically.
I built this as an interactive system where AI influences behavior rather than simply generating output. I translated structured state data into atmosphere, motion, composition, and interaction so each mode feels distinct and responsive.
My goal was to create something that feels less like a static interface and more like a living visual system. It reflects how I approach creative technology — through experimentation, systems thinking, and building experiences where code and interaction shape the final result.
The key constraint I imposed on myself: the AI model is never allowed to produce visual output directly. It can only produce structured data. All visual translation happens in the interface layer — deterministic, performant, and fully under design control, while still making the AI a genuine author of the experience.
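That constraint can be made concrete with a small sketch. The function and variable names below are illustrative, not from the project: the model emits only a JSON string, and a deterministic function translates it into the values the interface will apply.

```javascript
// Sketch of the separation the constraint enforces: the model produces data,
// the interface layer owns all visual translation.
function stateToCssVars(state) {
  // The only things the model controls: two colors and a few numbers.
  return {
    '--ai-color-a': state.colorA,
    '--ai-color-b': state.colorB,
    '--ai-energy': String(state.energy),
  };
}

// A model response is just structured text until the interface interprets it:
const modelOutput = '{"name":"resonant","colorA":"#1a2a6c","colorB":"#4a6fff","energy":0.8}';
const vars = stateToCssVars(JSON.parse(modelOutput));
// vars['--ai-color-a'] === '#1a2a6c'
```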
// 02 — Architecture
Three files,
eight composited layers
No build step. No dependencies. The entire system is a deliberately minimal three-file structure. Complexity lives in the layering model, not the toolchain.
- .halos: Container for dynamically injected halo divs (z-index: 1)
- .ribbons: Horizontal streak container (z-index: 2)
- .contours: Concentric ring overlay via CSS background (z-index: 3)
- .text-fragments: Floating semantic label layer (z-index: 8)
- .echo-layer: After-image ghost copies on transition (z-index: 9)
- .fragments-layer: Three primary interactive panels (z-index: 10)
- .grain + .vignette: Film noise and edge framing (z-index: 25/30)
- :root tokens: --ai-color-a/b plus bg ramp, glow, text, and line values
- Layer styles: Each layer absolute with inset: 0, stacked by explicit z-index
- Fragment transitions: width/height/opacity/transform CSS-transitioned at 620–820ms
- mix-blend-mode: screen on halos/ribbons, lighten on fragments, soft-light on grain
- @keyframes echoFade: Ghost panel scales up and fades out over 1.6s on transition
- Fluid sizing: min() and clamp() throughout — no breakpoint overrides needed
- states[]: Array of state objects — the AI integration point
- applyState(state): Writes CSS vars, repositions fragments, injects labels
- buildHalos(state): Clears and regenerates halo divs from state colors
- buildRibbons(state): Clears and regenerates ribbon divs with rotations
- mousemove listener: Reads data-depth per fragment for parallax offset
- spawnEcho(el): Clones fragment geometry into .echo-layer with animation
- IntersectionObserver: Triggers text fragment reveals on viewport entry
- shiftBtn listener: Cycles stateIndex, calls spawnEcho then applyState
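The shift-button cycle reduces to a small piece of pure logic. A minimal sketch, assuming only the stateIndex and states names from the project (the sample states and the nextState helper are illustrative):

```javascript
// Illustrative state list — the real objects carry colors, energy, etc.
const states = [{ name: 'a' }, { name: 'b' }, { name: 'c' }];
let stateIndex = 0;

// Advance and wrap: modulo keeps the index inside the array forever.
function nextState() {
  stateIndex = (stateIndex + 1) % states.length;
  return states[stateIndex];
}
```

In the actual listener this would be followed by spawnEcho on each fragment and then applyState with the returned object.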
// 03 — Systems & Implementation
Four systems,
every detail hand-authored
Each card covers the underlying concept, exactly what was written, and how the pieces connect.
The entire visual system is governed by a single plain JavaScript object — the state. Each state defines visual and behavioral parameters that the engine distributes across all layers simultaneously. To wire in AI output, the hardcoded states array is replaced with parsed JSON from a model. The interface does the rest.
A state object is a structured description of how the scene should feel — not what it should look like pixel-by-pixel, but what parameters define its atmosphere. The schema is intentionally minimal:
```js
state = {
  name: "resonant",                    // label for .state-label
  colorA: "#1a2a6c",                   // --ai-color-a → fragment base tone
  colorB: "#4a6fff",                   // --ai-color-b → halos, ribbons, glows
  energy: 0.8,                         // parallax amplitude multiplier (0–1)
  depth: 1.2,                          // overall depth scale
  textFragments: ["signal", "shift"]   // semantic labels to surface
}
```

The key insight: two color variables cascade everywhere. Every halo, ribbon, fragment fill, and glow reads from --ai-color-a or --ai-color-b via color-mix() and radial gradients. Writing two values to :root repaints the entire atmosphere. The AI needs no knowledge of the DOM — it only needs to output valid hex colors and a handful of numbers.
- States array of plain objects — each with name, colorA, colorB, energy, depth, and textFragments
- applyState(state): writes --ai-color-a and --ai-color-b to document.documentElement.style; iterates fragments to update position, size, and opacity
- State label element updated with current state name on each cycle
- Text fragment injection: clears .text-fragments, creates one span per label, positions randomly within a safe inset; a staggered timeout triggers CSS fade-in via .visible
- Shift button cycles currentStateIndex modulo states.length — architecture is ready for AI-replaced states
```js
function applyState(state) {
  const root = document.documentElement;
  root.style.setProperty('--ai-color-a', state.colorA); // cascades to all layers
  root.style.setProperty('--ai-color-b', state.colorB);
  stateLabel.textContent = `state: ${state.name}`;

  state.fragments.forEach((f, i) => {
    const el = fragmentEls[i];
    el.style.width = f.w + 'px';
    el.style.height = f.h + 'px';
    el.style.left = f.x;
    el.style.top = f.y;
  });

  injectTextFragments(state.textFragments); // surface semantic labels
  buildHalos(state);                        // regenerate background atmosphere
  buildRibbons(state);
}
```
To connect a live model: fetch from any endpoint, JSON.parse() the content block, validate the schema, and pass the object directly to applyState(). Hardcoded array and live API feed are architecturally identical — the state machine makes no assumptions about where states come from.
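The validation step mentioned above can be sketched as a small gate between raw model output and applyState(). This is a hedged sketch: parseAIState and HEX are names I am introducing for illustration, and the checks mirror the schema fields described earlier.

```javascript
// Validate raw model output before it touches the interface.
const HEX = /^#[0-9a-fA-F]{6}$/;

function parseAIState(raw) {
  const s = JSON.parse(raw); // throws on malformed JSON
  if (typeof s.name !== 'string') throw new Error('bad name');
  if (!HEX.test(s.colorA) || !HEX.test(s.colorB)) throw new Error('bad color');
  if (!(s.energy >= 0 && s.energy <= 1)) throw new Error('bad energy');
  if (!Array.isArray(s.textFragments)) throw new Error('bad textFragments');
  return s; // safe to hand to applyState(s)
}
```

Because the state machine makes no assumptions about where states come from, a response that passes this gate is indistinguishable from a hardcoded entry.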
Rather than mutating every element's color individually on state change, the design system routes all color through two root-level custom properties. Every gradient, glow, and tint in the stylesheet reads from these two values. A single setProperty() call cascades through eight visual layers simultaneously — no DOM traversal needed.
CSS custom properties are inherited. Writing to :root propagates to every element in the document that references that variable — without any JavaScript DOM traversal. This makes them ideal as a broadcast channel for state-driven theming:
color-mix(in srgb, A X%, B) derives tinted variants from the two AI color values — lighter for halos, more transparent for fragments. A deep navy from the AI produces appropriate tints everywhere with no additional color logic in JavaScript.
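What color-mix() computes can be illustrated by doing the same sRGB interpolation by hand in JavaScript. This is only a demonstration of the math; the project itself keeps this logic entirely in CSS, and the simple two-way mix below (percentages summing to 100%) is an assumption for clarity.

```javascript
// Parse "#rrggbb" into [r, g, b] channel integers.
function hexToRgb(hex) {
  return [1, 3, 5].map(i => parseInt(hex.slice(i, i + 2), 16));
}

// Linear per-channel mix in sRGB: pctA of color A, the rest of color B.
function mixSrgb(hexA, hexB, pctA) {
  const a = hexToRgb(hexA), b = hexToRgb(hexB);
  return a.map((ch, i) => Math.round(ch * pctA + b[i] * (1 - pctA)));
}

// A deep navy mixed 70/30 toward white yields a lighter halo tint:
mixSrgb('#1a2a6c', '#ffffff', 0.7); // → [95, 106, 152]
```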
mix-blend-mode compounds the effect. Halos and ribbons use screen compositing — which adds light rather than covering — so overlapping glow layers brighten without washing out the scene. Fragments use lighten, preserving underlying detail while letting the color tint emerge.
- Full :root token system: --bg-1/2/3 background ramp, --ai-color-a/b as the AI-facing interface, --glow-soft/strong, --line/line-strong for borders
- Halo elements: radial-gradient with color-mix(); mix-blend-mode: screen; filter: blur(46px); opacity: 0.16
- Ribbon elements: linear-gradient with color-mix() tint; horizontal blur streaks at filter: blur(34px)
- Fragment panels: layered radial-gradient fill reading var(--ai-color-b) for the inner glow; box-shadow driven by --glow-soft
- Contours layer: pure CSS concentric rings via radial-gradient at matching radius percentages — entirely declarative, no JavaScript
```css
/* AI-facing interface — only two variables for the model to set */
:root {
  --ai-color-a: #1a1a1a; /* fragment base — overwritten per state */
  --ai-color-b: #2c3e50; /* atmosphere tint — halos, ribbons, glows */
}

.halo {
  background: radial-gradient(circle at 50% 50%,
    color-mix(in srgb, var(--ai-color-b) 70%, white 5%) 0%,
    transparent 62%);
  mix-blend-mode: screen; /* additive — layers brighten, never cover */
  filter: blur(46px);
  opacity: 0.16;
}

.fragment {
  background: radial-gradient(circle at 50% 50%,
    color-mix(in srgb, var(--ai-color-b) 35%, transparent),
    transparent 72%);
  mix-blend-mode: lighten; /* max channel — color tints without masking */
}
```
Each fragment panel carries a data-depth attribute (0–1) defining how strongly it responds to mouse movement. A mousemove listener computes a normalized cursor offset and applies proportional translate3d transforms — deeper fragments drift further. The state's energy value scales the entire amplitude, making the AI directly responsible for how alive the interface feels.
Parallax creates the illusion of depth by moving foreground elements faster than background ones. Each fragment has an explicit depth coefficient as a data attribute. The mousemove handler normalizes the cursor position to a [-0.5, 0.5] range, then applies per-fragment translation:
The three fragments use depths of 0.30 (appears furthest), 0.55 (mid-range anchor), and 0.82 (closest, most active). state.energy scales the amplitude globally — a low-energy state (0.2) feels still and meditative; a high-energy state (1.0) feels active and responsive. The AI state governs not just color but physical behavior.
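The normalization and scaling described above reduce to a pure function. A minimal sketch — the 40/28 pixel multipliers match the ones used in the project's mousemove handler, while parallaxOffset itself is a name I am introducing:

```javascript
// Compute per-fragment parallax translation from cursor position.
function parallaxOffset(clientX, clientY, width, height, depth, energy) {
  const cx = clientX / width - 0.5;  // normalize cursor to [-0.5, 0.5]
  const cy = clientY / height - 0.5;
  return {
    tx: cx * depth * 40 * energy,    // px — deeper fragments drift further
    ty: cy * depth * 28 * energy,
  };
}

// Cursor at the right edge, vertically centered, deepest fragment, full energy:
parallaxOffset(1000, 500, 1000, 1000, 0.82, 1.0); // → tx ≈ 16.4, ty = 0
```

Halving energy halves both offsets, which is exactly why a 0.2-energy state reads as still and a 1.0-energy state reads as alive.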
translate3d() — rather than translate() — forces GPU compositing via the browser's compositor thread, keeping the motion off the main thread entirely. will-change: transform on fragment elements pre-promotes them to composited layers before any motion begins.
On state transition, before the new state is applied, each fragment's current position and size are captured. A ghost clone is constructed with matching geometry and injected into .echo-layer. A CSS keyframe animation plays once, scaling the ghost up slightly and fading it to zero over 1.6 seconds:
Echoes are absolutely positioned, match the source fragment's border-radius, and use a 1px semi-transparent border with blur(1px). The effect mimics persistence of vision — the old state lingers briefly as the new one fades in. Each echo removes itself from the DOM via an animationend listener to prevent accumulation.
- Three fragments with data-depth values of 0.30, 0.55, and 0.82 — explicit per-element depth coefficients
- Mousemove handler: cursor normalized to ±0.5 range; translate3d(tx, ty, 0) applied as inline transform per fragment
- Energy scaling: currentState.energy multiplied into every offset — low-energy states feel still, high-energy states feel active
- spawnEcho(el): reads getBoundingClientRect(), creates an absolutely positioned clone in #echoLayer, adds a class to trigger echoFade, removes on animationend
- Fragment CSS transitions: width 820ms, height 820ms, opacity 650ms, transform 620ms — all cubic-bezier(.22,1,.36,1) for a natural physical settle
```js
document.addEventListener('mousemove', e => {
  const cx = e.clientX / window.innerWidth - 0.5;  // −0.5 → +0.5
  const cy = e.clientY / window.innerHeight - 0.5;
  fragmentEls.forEach(el => {
    const d = parseFloat(el.dataset.depth);
    const nrg = currentState.energy; // AI-governed amplitude
    el.style.transform =
      `translate3d(${cx * d * 40 * nrg}px, ${cy * d * 28 * nrg}px, 0)`;
  });
});

function spawnEcho(el) {
  const r = el.getBoundingClientRect();
  const ghost = document.createElement('div');
  ghost.className = 'fragment-echo';
  Object.assign(ghost.style, {
    left: r.left + 'px',
    top: r.top + 'px',
    width: r.width + 'px',
    height: r.height + 'px',
    borderRadius: getComputedStyle(el).borderRadius
  });
  echoLayer.appendChild(ghost);
  ghost.addEventListener('animationend', () => ghost.remove(), { once: true });
}
```
The scene is not a flat canvas — it is eight distinct planes stacked at explicit z-index values, each with a single visual responsibility. This separation is what allows a two-variable color change to produce complex atmospheric variety. Each layer responds independently to state changes, and all of them are composited together through blend modes rather than standard CSS occlusion.
Standard CSS stacking is opaque — higher elements cover lower ones. Blend modes change this. screen compositing adds the light values of two layers together, making overlapping regions brighter rather than covered. This is how photographic light leaks and lens flares work:
screen(A, B) = 1 − (1−A)(1−B)

In practice: multiple halo layers using screen brighten the scene without washing it to white. The result is atmospheric light bloom that scales gracefully — add more halos, get more light, never pure white. Fragments using lighten (take the maximum of each channel) let their color tints emerge over the background without masking the layers beneath.
The .scene element uses isolation: isolate to contain all blend mode interactions within the scene boundary. perspective: 1400px and transform-style: preserve-3d establish the 3D stacking context for the parallax depth illusion.
- z:1 .halos — large blurred radials; set scene color temperature; rebuilt per state from colorB
- z:2 .ribbons — horizontal blur streaks; add directional energy; screen blend, opacity 0.12
- z:3 .contours — concentric CSS ring gradients; purely declarative topographic depth cue; scale(1.08)
- z:8 .text-fragments — floating uppercase labels from state.textFragments; staggered fade-in via .visible
- z:9 .echo-layer — ghost panel clones on state transition; echoFade keyframe animation
- z:10 .fragments-layer — three primary panels; position, size, and depth from active state
- z:25 .vignette — radial darkening overlay; frames the composition; pointer-events: none
- z:30 .grain — 3×3px repeating grid; film surface noise; soft-light blend, opacity 0.06
```css
.scene {
  position: relative;
  width: 100%;
  height: 100%;
  overflow: hidden;
  perspective: 1400px;          /* 3D context for depth illusion */
  transform-style: preserve-3d;
  isolation: isolate;           /* contains blend modes within scene */
}

/* All eight layers share this base — different z-index only */
.halos, .ribbons, .contours, .text-fragments,
.echo-layer, .fragments-layer, .grain, .vignette {
  position: absolute;
  inset: 0;
}

.grain {
  z-index: 30;                  /* topmost — reads across everything */
  opacity: 0.06;
  mix-blend-mode: soft-light;
  background-image:
    linear-gradient(rgba(255,255,255,0.045) 1px, transparent 1px),
    linear-gradient(90deg, rgba(255,255,255,0.03) 1px, transparent 1px);
  background-size: 3px 3px;
}
```
The grain layer is intentionally the topmost visual element (z:30) so the texture reads consistently across all panels, halos, and the vignette alike, grounding the composition in a single material surface. Removing it makes the interface feel purely digital; at 0.06 opacity it is barely perceptible, yet the image reads noticeably warmer at close inspection.
// 04 — Skills Demonstrated
What building this
actually requires
A no-framework constraint forces direct fluency in browser APIs and CSS compositing that libraries normally abstract away entirely.
- color-mix() and var(). Layered radial-gradient and linear-gradient backgrounds with no image assets. mix-blend-mode: screen / lighten / soft-light for additive compositing. isolation: isolate and perspective for stacking context control.
- applyState() as a single dispatch function — receives a state, distributes to all subsystems (CSS vars, DOM positions, generated elements, text labels). State source — hardcoded array or live AI endpoint — is entirely swappable with no other changes to the codebase.
- translate3d() instead of translate() to force GPU compositor-thread handling of parallax transforms. will-change: transform, opacity pre-promotes animated elements to composited layers. Echo clones removed from the DOM via animationend listeners to prevent accumulation. IntersectionObserver for text fragment reveals — no scroll event listeners on the main thread.
- width, height, opacity, transform, filter, and border-color — all transitioned at different durations (280–820ms) with the same cubic-bezier(.22,1,.36,1) easing for a coordinated, non-mechanical settle. @keyframes echoFade for the after-image transition effect. Atmospheric orb drift via @keyframes od.