Accessible / recruiter view — no animations or WebGL.

Alyssa Barrientos

Technical Artist & Pipeline TD · AlyArtBar

"I build the systems that let artists build worlds."

Python / Maya API · C++ / OpenGL · USD / Houdini · ML / Scikit-learn · Unity / AR
Open to internships & collaborative projects

01 — About

Building at the edge of art & code.

I'm a computer science student working at the intersection of engineering and visual craft. My work lives where a Python script meets a particle simulation — where a USD pipeline turns a forty-step process into one click.

I build tools that give artists more time to be artists. I write systems that make procedural worlds feel handmade. I'm drawn to the parts of production nobody else wants to touch — the validation pipeline, the edge-case geometry, the shader that shouldn't work but does — and I find them fascinating.

My real projects span Python pipeline tooling for Maya, a C++ OpenGL graphics engine built from scratch, ML motion classification for virtual production rigs, cross-platform AR with Unity, a render pass manager for Houdini and UE5, and an AI-driven browser visual system.

01.5 — Value Proposition

What I bring to a team.

Pipeline thinking

I see the full chain from source asset to final render. I build tools with failure modes in mind — what breaks, who breaks it, and how to make it impossible to break again.

Artist empathy

My tools are used by people whose job is to make things beautiful. I write UIs and APIs that stay out of the way and solve the actual problem.

Research instinct

When something doesn't exist yet, I prototype it. My R&D section documents real experiments — including the ones that didn't work.

Cross-discipline range

Python to C++, Maya to Unity, ML to shader code. I connect parts of a studio that usually don't talk to each other.

02 — Projects

Selected Work

All projects below are things I built and shipped. Impact metrics included where measurable.

  • Pipeline & Tools

    Maya Pipeline Tools

    Studio TD conventions

    Production-grade Scene Validator and Asset Publisher for Maya — built to mirror studio TD conventions.

    • Scene Validator: checks geometry, naming, transforms, materials, and scene units with a threaded PySide2 UI that never freezes
    • Asset Publisher: copies files to a versioned directory, writes a JSON manifest, exports a USD stub, and commits a semantic Git version tag
    • Graceful degradation — runs in demo mode without Maya, falls back to a plain-text USD stub if usd-core is missing
    Python · Maya API · PySide2 · OpenUSD · Git
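The graceful-degradation behaviour described above can be sketched roughly as follows — a minimal illustration, not the project's actual code; `export_usd_stub` and the module layout are hypothetical stand-ins:

```python
# Import-time feature detection: run inside Maya and with usd-core when
# available, degrade to demo mode and a plain-text stub otherwise.
try:
    from maya import cmds   # real Maya session
    DEMO_MODE = False
except ImportError:
    cmds = None             # standalone demo mode: validators run on mock scene data
    DEMO_MODE = True

try:
    from pxr import Usd     # usd-core, if installed
    HAS_USD = True
except ImportError:
    HAS_USD = False         # fall back to writing a plain-text USD stub


def export_usd_stub(path, asset_name):
    """Write a real USDA layer when usd-core is available, else a text stub."""
    if HAS_USD:
        stage = Usd.Stage.CreateNew(path)
        stage.DefinePrim(f"/{asset_name}")
        stage.GetRootLayer().Save()
    else:
        with open(path, "w") as f:
            f.write(f"#usda 1.0\n# stub for {asset_name} (usd-core not installed)\n")
```

Either branch produces a file a downstream USD tool can at least open or inspect, which is the point of the fallback.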
  • Machine Learning · Virtual Production

    Virtual Prod ML Rig Predictor

    18 motion classes

    Two-model Random Forest system that classifies 18 human motion actions from 3D skeletal joint data — built for virtual production rigs.

    • Model 1: full 4,998-dimensional feature pipeline (StandardScaler → Random Forest, 300 trees) for maximum accuracy
    • Model 2: PCA-compressed pipeline (100 components) for faster inference in latency-sensitive live VP environments
    • Root-space normalisation, temporal resampling to 60 frames, and velocity features from the KARD dataset
    Python · Scikit-learn · Random Forest · PCA · NumPy · Pandas
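The two-model setup reduces to a few lines of scikit-learn — sketched here on synthetic data (the real features are 4,998-dimensional skeletal descriptors from KARD), not the project's actual training code:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4998))   # 120 synthetic clips, 4,998 features each
y = rng.integers(0, 18, size=120)  # 18 motion classes

# Model 1: full feature pipeline, tuned for maximum accuracy
m1 = Pipeline([
    ("scale", StandardScaler()),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])

# Model 2: PCA-compressed pipeline for latency-sensitive live VP use
m2 = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=100)),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])

m1.fit(X, y)
m2.fit(X, y)
```

The PCA stage is the only structural difference: inference on 100 components instead of 4,998 raw features is what buys the latency headroom.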
  • Creative Technology · AI Systems

    Shifting Interface — AI-Driven Visual State System

    Live demo

    Interactive visual system where structured AI state data shapes atmosphere, motion, and composition in real time — treating the model as a behavioural engine, not a text generator.

    • State machine translates AI JSON into live CSS custom properties cascading through eight composited visual layers via color-mix() and blend modes
    • Parallax depth system with AI-governed energy scaling; GPU-composited translate3d keeps motion off the main thread
    • Echo transition layer spawns ghost clones of outgoing fragment geometry with animationend cleanup
    HTML5 · CSS Custom Properties · Vanilla JS · State Machine · mix-blend-mode
  • Render Tools · TD Work

    AOV Manager

    3 renderers, 1 config

    Shared-config render pass manager for Houdini (Mantra & Karma XPU) and Unreal Engine 5 — one file drives both a standalone PySide6 app and a native Houdini shelf tool.

    • Single aov_config.py drives both the desktop app and Houdini shelf tool — edit once, both update automatically
    • Generates production-correct AOV scripts for Mantra, Karma XPU, and Movie Render Queue configs for UE5
    • PySide6 standalone runs without Houdini; PySide2 shelf variant builds ROP network live into an active session
    Python · Houdini Mantra · Karma XPU · UE5 · PySide6 · Movie Render Queue
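The shared-config idea comes down to a small pattern — sketched here with hypothetical names (`AOV_CONFIG`, `build_aov_list`) and illustrative pass values, not the project's real aov_config.py:

```python
# aov_config.py — the single source of truth both front-ends import.
# Edit one entry here; the desktop app and the shelf tool both pick it up.
AOV_CONFIG = {
    "beauty":  {"mantra": "Cf+Af", "karma": "beauty", "ue5": "FinalImage"},
    "depth":   {"mantra": "Pz",    "karma": "depth",  "ue5": "SceneDepth"},
    "normals": {"mantra": "Nf",    "karma": "N",      "ue5": "WorldNormal"},
}

def build_aov_list(renderer):
    """Resolve the shared config into per-renderer pass names.

    The standalone PySide6 app writes these out as generated scripts; the
    Houdini shelf variant feeds them straight into a live ROP network.
    """
    return {name: spec[renderer] for name, spec in AOV_CONFIG.items()}
```

Because both front-ends call the same resolver, there is no second copy of the pass list to drift out of sync.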
  • Graphics Programming · C++

    Raytracer Dev — OpenGL Graphics Engine

    Built from scratch

    C++ graphics pipeline built from scratch across six assignments — Phong shading, OBJ import, parametric curves, skeletal animation, and real-time cloth physics with four interchangeable ODE integrators.

    • Raw OpenGL 3.3 Core — manual VAO/VBO/EBO, GLSL shaders, MVP matrix stack, Phong lighting with normal-matrix correction
    • Bezier and B-Spline curves via Bernstein basis, Frenet-Serret frames, surface-of-revolution mesh generation
    • Generic ODE physics: pure-virtual evalF(state) drives cloth/chain/pendulum through ForwardEuler, Trapezoid, Midpoint, and RK4 — swappable at runtime from ImGui
    C++17 · OpenGL 3.3 · GLSL · GLM · GLFW · Dear ImGui
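The swappable-integrator design translates into a compact sketch — written in Python here for brevity, where the engine itself is C++17 with a pure-virtual evalF(state); the `Pendulum` toy system is illustrative, not engine code:

```python
# Any system exposing eval_f(state) can be stepped by any integrator,
# mirroring the pure-virtual evalF(state) interface in the C++ engine.
def forward_euler(system, state, h):
    f = system.eval_f(state)
    return [x + h * fx for x, fx in zip(state, f)]

def rk4(system, state, h):
    def add(s, k, c):
        return [x + c * kx for x, kx in zip(s, k)]
    k1 = system.eval_f(state)
    k2 = system.eval_f(add(state, k1, h / 2))
    k3 = system.eval_f(add(state, k2, h / 2))
    k4 = system.eval_f(add(state, k3, h))
    return [x + (h / 6) * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(state, k1, k2, k3, k4)]

class Pendulum:
    """Toy system: state = [angle, angular velocity], small-angle approximation."""
    def eval_f(self, state):
        theta, omega = state
        return [omega, -9.81 * theta]

# Integrators are interchangeable at runtime — the same swap ImGui exposes.
state = [0.1, 0.0]
for step in (forward_euler, rk4):
    state = step(Pendulum(), state, 0.01)
```

The payoff of the design is exactly this line: cloth, chain, and pendulum all run under ForwardEuler, Trapezoid, Midpoint, or RK4 without any per-system integration code.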
  • Augmented Reality · Unity · AI

    Augmented Object Intelligence XR

    31 tests · 100% CI

    Mobile AR capstone anchoring AI-generated context onto physical objects in 3D space — point your camera at anything and an LLM explains it back to you.

    • Four-stage pipeline: AR Foundation capture → MediaPipe detection → LLM REST query (PaLI/LLaMA) → world-space anchor
    • Cross-platform Unity 2022.3: ARKit (iOS 12+), ARCore (Android 8+), Meta Quest 3 via AR Foundation subsystem abstraction
    • Sprint 4 scavenger hunt: tap → detect → match → anchor → register flow, 31 passing tests, 100% CI pass rate
    Unity 2022.3 · C# / URP · AR Foundation · MediaPipe · LLM API · GitHub Actions

Restricted — NDA

Confidential Projects

Industry projects under NDA — pipeline systems, tools, and research built with studios and companies. Enter the access key to view full details.


Vault · Classified Projects

These projects are under non-disclosure. If you have an access key, enter it below. Otherwise, reach out directly — I’m happy to walk through any of this work on a call.

  • Feature Film Pipeline · VFX Studio
  • AAA Environment Tools · Game Studio
  • ML Lookdev Preview · VFX R&D
  • Broadcast Shot Assembly
  • Spatial Audio XR · Research Lab

03 — Skills

Technical Skills

Derived from actual shipped projects only — no aspirational entries.

Core · heavily used across real projects

  • Python (3.9+)
  • C++ (C++17)
  • OpenGL 3.3
  • ML / Scikit-learn / Random Forest
  • GLSL (shaders, manual)
  • Maya API
  • OpenUSD
  • Houdini (Mantra, Karma XPU)
  • Unity / C# / URP

Strong & solid · used in shipped projects

  • PySide2 / PySide6
  • AR Foundation (ARKit / ARCore)
  • Unreal Engine 5 / Movie Render Queue
  • NumPy / Pandas / Matplotlib
  • VEX
  • CSS Systems / State Machines
  • Git / GitHub Actions CI
  • MediaPipe / LLM APIs (REST)

04 — R&D Lab

Research & Experiments

All entries are real shipped or active projects, documented honestly.

  • WIP

    Maya Pipeline Tools — Scene Validator & Asset Publisher

    Could production-grade validate → publish → version tooling be built in pure Python with no proprietary dependencies?

    Both ship. Validator catches geo, naming, transform, and material issues. Publisher handles file copy, USD export, JSON manifest, and Git tagging in one click.

  • Live

    Virtual Prod ML Rig Predictor — Motion Action Classification

    Could a Random Forest pipeline with hand-crafted skeletal features classify 18 motion actions from 3D joint data without deep learning?

    Both models ship. M1 maximises accuracy; M2 (PCA-reduced) trades a small margin for faster inference. Feature importances confirm the models learn meaningful motion-space structure.

  • Live

    Shifting Interface — AI-Driven Visual State System

    Could structured AI state data govern a layered visual system's atmosphere and motion in real time?

    Two CSS variables cascading through eight composited layers via color-mix() and blend modes produce meaningfully distinct atmospheric states. AI influence is felt physically through parallax amplitude, not just colour.

  • Live

    AOI XR — Augmented Object Intelligence

    Could on-device MediaPipe + swappable LLM REST deliver real-time context-aware AR on consumer mobile without cloud vision?

    Ships on physical iOS and Android. MediaPipe holds ≥5 FPS detection. 31 passing tests, 100% CI pass rate. Key lesson: Unity URP introduces AR-specific rendering failure modes requiring targeted CI gates.

  • WIP

    Raytracer Dev — C++ Graphics Pipeline

    Could a full rasterisation pipeline be implemented from scratch using only OpenGL 3.3 Core, GLM, and GLFW?

    All six assignments ship. The ODE integrator design is the clearest result: pure-virtual evalF(state) lets ForwardEuler, Trapezoid, Midpoint, and RK4 operate on all physics systems interchangeably.

  • Live

    AOV Manager — Houdini & UE5 Render Pass Tool

    Could a shared-config architecture drive both a standalone PySide6 app and a Houdini shelf tool from one source file?

    Confirmed. One aov_config.py drives all three renderers across two app contexts. The Houdini shelf variant executes generated scripts directly into a live session.

04.5 — Working Style

How I work.

On ambiguity

I ask one good question before writing any code. Most pipeline problems are specification problems in disguise.

On collaboration

I pair well with artists — I learned early to prototype in their tools before abstracting to Python.

On shipping

A tool that works 80% of the way and can be iterated on beats a perfect tool six months late.

On documentation

I document why, not just what. The next person maintaining my pipeline shouldn't need to read the code to understand the intent.

On failure

I keep notes on experiments that didn't work — you can see them in my R&D section. Failing well is a skill worth showing.

On learning

I'm a student who has shipped production tooling. That combination — academic depth with production instinct — is what I'm building toward.

05 — Contact

Let's talk.

Open to internships, short-term contracts, and research collaborations. Happy to walk through any work — including NDA projects — on a call.