C++ / OpenGL · Graphics Pipeline 2024 — 2025

Renderer
Dev

A C++17 and OpenGL real-time graphics project focused on rendering fundamentals, graphics math, and engine-style architecture. I built the pipeline from raw shader and buffer setup into a modular viewer supporting OBJ import, Phong lighting, parametric surfaces, skeletal hierarchies, and spring-mass cloth simulation.

Rather than using Unity, Unreal, or a prebuilt renderer, I worked directly with C++, GLSL, VAOs, VBOs, EBOs, shader uniforms, matrix transforms, and draw calls. The project gave me hands-on experience with the systems that game engines usually hide: GPU memory layout, lighting calculations, mesh generation, runtime controls, scene organization, and numerical integration.

C++17 · OpenGL 3.3 Core · GLFW · GLAD · GLM · Dear ImGui · GLSL Shaders · VAO / VBO / EBO · OBJ Parser · Bezier / B-Spline · Frenet Frame · Skeletal Rigging · Spring-Mass Physics · RK4
6 Assignments
~25 Source Files
4 ODE Integrators
Solo — Independent

// 01 — How the GPU pipeline works

The rasterization
pipeline, explained

Every assignment in this project plugs into the same fundamental sequence. Understanding it explains every OpenGL call, every GLSL shader, every matrix upload.

The rasterization pipeline — 6 stages
① CPU → GPU
C++ calls glBufferData to copy vertex positions, normals, and indices from RAM into GPU memory. A VAO records the layout, a VBO holds the raw bytes, an EBO holds triangle indices. None of this runs a shader yet — it's just uploading data.
② Vertex Shader
Runs once per vertex, entirely on the GPU, in parallel. Its job is one thing: output gl_Position in clip space by multiplying the vertex position by the MVP matrix chain. It can also pass data — normals, colours, UVs — downstream to the fragment stage via out variables.
③ Primitive Assembly + Rasterisation
OpenGL groups processed vertices into triangles, then rasterises them — figures out which screen pixels each triangle covers and generates a fragment per pixel. It also interpolates vertex outputs (normals, UVs) across the triangle surface using barycentric coordinates.
④ Fragment Shader
Runs once per fragment (pixel candidate). Receives the interpolated data from the vertex stage. Its job: output a final vec4 colour. This is where lighting math lives — Phong diffuse, specular highlights, all of it is just arithmetic on the interpolated normal and light direction vectors.
⑤ Depth Test + Framebuffer
Each fragment's depth (z-value) is tested against the depth buffer. If something closer has already been drawn there, this fragment is discarded. Survivors are written to the colour buffer, which glfwSwapBuffers flips to screen each frame.
∑ CG Concept — The MVP Matrix Chain

Every vertex goes through three coordinate space transformations before it reaches the screen. The vertex shader computes them as a single matrix multiply:

gl_Position = Projection · View · Model · vec4(position, 1.0)

Model matrix — moves the object from its own local origin into the shared world. glm::translate · rotate · scale compose this. Each object in the scene has its own model matrix.

View matrix — transforms world space so the camera is at the origin looking down −Z. glm::lookAt(cameraPos, target, up) builds this. In this project the camera is an orbital sphere — yaw, pitch, radius — converted to Cartesian then passed to lookAt.

Projection matrix — maps 3D camera space to 2D clip space, introducing perspective foreshortening. glm::perspective(fov, aspect, near, far) builds a frustum that makes distant objects smaller. glm::ortho — used in A1 — skips this, producing the flat "technical drawing" look.

Order matters: matrix multiplication is not commutative. The chain reads right-to-left — the vertex is transformed by Model first, then View, then Projection; swapping them produces wrong results. This is also why the 4th homogeneous coordinate is w=1 for positions and w=0 for direction vectors — translation in the matrix affects positions but not directions.
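The w distinction can be sketched without GLM, using a bare-bones row-major matrix (the helper names here are illustrative, not project code):

```cpp
#include <array>
#include <cassert>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;   // row-major 4x4

// Identity plus a translation stored in the last column.
Mat4 makeTranslate(float tx, float ty, float tz) {
    Mat4 m{};                                        // zero-initialised
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    m[0][3] = tx; m[1][3] = ty; m[2][3] = tz;
    return m;
}

// Row-major matrix times column vector.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}
```

Because the translation column is multiplied by w, a w=1 position moves while a w=0 direction is untouched — which is exactly why normals and light directions are sent through with w=0.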

// 02 — Architecture

Subsystems added
across six submissions

The codebase grows one assignment at a time without breaking the existing run loop; by A5, ~25 source files are organized into these subsystems.

A0 – A5 Application Core
  • Application.h/cpp — Singleton: owns GLFW window, ImGui context, run loop
  • Renderer.h/cpp — Orbital camera, lighting uniforms, axis draw
  • ShaderLoader.h/cpp — Compile/link GLSL, extract error logs
  • ErrorHandling.h/cpp — GLFW callbacks, tinyfd message boxes
  • Globals.h/cpp — Shared state, window title, constants
A2 + Shape System
  • Shape.h/cpp — Polymorphic base with pure-virtual draw(GLuint)
  • ShapeManager.h/cpp — Owns Shape* heap ptrs: add/delete/select
  • FileImporter.h/cpp — OBJ + .swp + .skel + .attach loaders
  • ImportShape.h/cpp — OBJ → VAO/VBO/EBO pipeline
  • ColorPresets.h/cpp — Preset colour table + custom RGB
A3 Parametric Geometry
  • Curve.h/cpp — evalBezier + evalBSpline → CurvePoint {V,T,N,B}
  • Surface.h/cpp — makeSurfRev: sweeps profile 2π around axis
  • ImportCurve.h/cpp — Parses .swp control-point files
A4 Skeletal System
  • Joint.h/cpp — Local mat4, bind-pose inverse, world mat, children[]
  • MatrixStack.h/cpp — push/pop/top over std::vector<glm::mat4>
  • SkeletalModel.h/cpp — Recursive bind + current-frame traversal
  • ImportCharacter.h/cpp — .skel JSON + .attach weight maps
A5 Physics Framework
  • ParticleSystem.h — Base : Shape with m_state, pure-virtual evalF
  • TimeStepper.h/cpp — ForwardEuler / Trapezoid / Midpoint / RK4
  • PendulumSystem.h/cpp — Spring-mass, wind, 5× VAO/VBO/EBO sets
  • SimpleSystem.h/cpp — 2-particle orbit, integrator smoke test
  • SimplePendulum.h/cpp — Fixed-anchor single spring
  • SimpleChain.h/cpp — n-link spring chain
  • SimpleCloth.h/cpp — N×N grid, structural + shear springs
  • main.cpp — 3 lines: getInstance().initialize().run()

// 03 — Assignments & My Contributions

Six submissions,
every system hand-coded

Each card covers the CG concept behind the assignment, exactly what I wrote, and honest notes on where things worked and where they didn't.

00
OpenGL Environment & Animated Letterforms
OpenGL 3.3 · GLAD · GLFW · GLSL · VAO / VBO

First contact with the OpenGL pipeline. Configure GLAD/GLFW from scratch, write vertex and fragment shaders, render the letters of my name animating independently. Each glyph is built from hand-defined triangle vertices stored in an unordered_map<char, vector<float>>, uploaded to a VAO/VBO, and transformed each frame via a uniform mat4 transform.

⬡ CG Concept — VAO, VBO, EBO & GPU Memory

The GPU cannot read CPU-side vectors directly. Data has to be explicitly copied into GPU-side buffers and OpenGL told how to interpret the raw bytes. Three objects do this:

  • VBO (Vertex Buffer Object) — a block of GPU memory. Filled via glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW). At this point it's just bytes — OpenGL doesn't know what's in it yet.
  • EBO (Element Buffer Object) — also GPU memory, but holding an array of unsigned ints. Each int is an index into the VBO. Instead of duplicating a shared corner vertex three times, it's stored once and referenced three times. A cube's 8 unique corners can describe all 12 triangles.
  • VAO (Vertex Array Object) — records how to interpret the VBO bytes. Calling glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, offset) inside a bound VAO stores: "attribute location 0 starts at this byte offset, is 3 floats wide, with the next attribute stride bytes away." Binding the VAO later restores all of this without re-specifying every layout call.

In A0, each letter has its own VAO/VBO pair. Every frame: bind the VAO, upload the transform as a uniform matrix, call glDrawArrays. The GPU reads the recorded layout and runs the vertex shader on every vertex automatically.

∑ CG Concept — Homogeneous Coordinates & Transform Composition

A 3×3 matrix can rotate and scale a 3D point, but not translate it — translation is an addition, not a multiplication. Homogeneous coordinates fix this by adding a 4th component, w:

vec4(x, y, z, w=1) → position
vec4(x, y, z, w=0) → direction (translation has no effect)

Now a 4×4 matrix can encode rotation, scale, and translation simultaneously. The transform for A0 is a single matrix composing a rotation (built with glm::rotate) and a translation. Matrix multiplication is right-to-left, so T · R · vertex means: rotate first, then translate — the order is critical. Swapping them puts the pivot in the wrong place. Each letter's rotations[i] value increments at a unique rate per frame, creating the independent phase offsets.
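A toy 2D version of the same ordering rule (hypothetical helpers, not the A0 code) makes the pivot difference concrete:

```cpp
#include <cassert>

struct Vec2 { float x, y; };

Vec2 rotate90(Vec2 v)                      { return { -v.y, v.x }; }   // 90° CCW about the origin
Vec2 translate(Vec2 v, float tx, float ty) { return { v.x + tx, v.y + ty }; }

// T · R · v — rotate about the local origin first, then move into place.
Vec2 rotateThenTranslate(Vec2 v) { return translate(rotate90(v), 5.0f, 0.0f); }

// R · T · v — translating first moves the pivot, so the rotation swings
// the point around the world origin instead of its own.
Vec2 translateThenRotate(Vec2 v) { return rotate90(translate(v, 5.0f, 0.0f)); }
```

Starting from (1, 0), the first composition lands at (5, 1) while the second ends up at (0, 6) — same two matrices, entirely different result.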

My Contributions
  • Configured GLAD/GLFW for OpenGL 3.3 Core Profile — window hints, gladLoadGLLoader((GLADloadproc)glfwGetProcAddress), driver issues resolved with Prof. Garcia-Gomez
  • Authored vertex + fragment shaders; wrote compileShader and createShaderProgram helpers with glGetShaderInfoLog error extraction
  • Built all letter geometry by hand — each glyph a std::vector<float> of triangle vertices in std::unordered_map<char, std::vector<float>> letters
  • Per-letter animation state in std::vector<float> rotations and colorOffsets; composed transform each frame, uploaded via glUniformMatrix4fv
  • Wired full render loop: VAO/VBO creation, glBufferData, glVertexAttribPointer, uniform upload, glfwSwapBuffers, framebuffer_size_callback
GLSL — Vertex Shader · main.cpp
#version 330 core
layout(location = 0) in vec3 aPos;
uniform mat4 transform;   // T·R matrix, uploaded each frame per letter
void main() {
    gl_Position = transform * vec4(aPos, 1.0);   // w=1 → position
}
⚡ Notes

Primary challenge was environment setup — GLAD loader required driver-level work to configure. Once live, the assignment came together quickly. No known runtime bugs.

01
3D Lit Cube — MVP Stack & Phong Shading
GLM · MVP Matrix · Phong Lighting · Per-face Normals · Key Callback

First 3D pipeline. A cube with per-face normals and interleaved vertex data, Phong ambient+diffuse lighting in the fragment shader, keyboard-driven rotation and colour-cycling, and a movable point light. First use of GLM for matrix math.

⬡ CG Concept — Phong Shading & the Normal Matrix

Phong shading decomposes incoming light into components. Ambient is a constant floor — it prevents fully unlit surfaces from being pure black:

ambient = kₐ · lightColor

Diffuse uses Lambert's cosine law: the brightness of a surface depends on how directly it faces the light. The dot product of the normalised surface normal N and normalised light direction L gives this angle's cosine:

diffuse = max(dot(N, L), 0.0) · lightColor

The max(..., 0) clamps negative values — surfaces facing away contribute zero, not negative light. Final colour:

colour = (ambient + diffuse) · objectColor

The normal matrix problem: if you apply a non-uniform scale to the model (stretching it along one axis), the normals stored in the VBO become wrong — they no longer point perpendicular to the surface. The fix is transforming normals by the inverse transpose of the upper-left 3×3 of the model matrix:

correctedNormal = mat3(transpose(inverse(model))) · aNormal

This looks expensive, but the matrix depends only on the model matrix, so it needs computing just once per draw call. Without it, lighting on non-uniformly scaled meshes breaks — normals point in the wrong direction and Lambert's cosine law gives incorrect brightness.
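The effect is easy to verify numerically. A sketch using a diagonal non-uniform scale S = diag(2, 1, 1), whose inverse-transpose is simply diag(1/2, 1, 1) (helper names are illustrative, not project code):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };
float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Non-uniform scale S = diag(2, 1, 1) applied directly to a vector.
Vec3 applyScale(Vec3 v)     { return { 2.0f * v.x, v.y, v.z }; }

// Inverse-transpose of a diagonal matrix is the reciprocal diagonal:
// (S⁻¹)ᵀ = diag(1/2, 1, 1).
Vec3 applyScaleInvT(Vec3 v) { return { 0.5f * v.x, v.y, v.z }; }
```

Take a surface tangent (1, −1, 0) and its normal (1, 1, 0): scaling both with S leaves them no longer perpendicular, while scaling the normal with the inverse-transpose restores dot(T, N) = 0.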

My Contributions
  • Defined 24 vertices (4 per face × 6 faces) with interleaved position + normal data; wired two glVertexAttribPointer calls — location 0 (xyz position), location 1 (xyz normal) — stride 6 * sizeof(float)
  • Phong vertex shader: FragPos = vec3(model * vec4(aPos, 1.0)) and Normal = mat3(transpose(inverse(model))) * aNormal
  • Phong fragment shader: ambient and diffuse terms from uniform vec3 lightPos/lightColor/objectColor
  • MVP stack each frame: glm::rotate by accumulated rotation; glm::lookAt view; glm::ortho projection — all uploaded via glUniformMatrix4fv
  • key_callback: arrow keys accumulate rotation; C cycles 8 preset colours; J/L/I/K/U/O translate lightPos on all three axes
  • Smooth colour transition: glm::mix(currentColor, targetColor, transitionProgress) advancing transitionProgress += transitionSpeed per frame
GLSL — Phong Fragment Shader · main.cpp
uniform vec3 lightPos, lightColor, objectColor;
in  vec3 FragPos, Normal;          // interpolated from vertex shader
out vec4 FragColor;
void main() {
    vec3 ambient = 0.2 * lightColor;
    vec3 N = normalize(Normal);
    vec3 L = normalize(lightPos - FragPos);
    float diff = max(dot(N, L), 0.0);  // Lambert cosine law
    vec3 diffuse = diff * lightColor;
    FragColor = vec4((ambient + diffuse) * objectColor, 1.0);
}
⚡ Notes

Smooth colour interpolation had edge cases in the transition logic. Camera-movement extra credit was not completed. Core cube, Phong lighting, and all keyboard controls work correctly.

02
3D Shape Viewer — Singleton App, OBJ Import & ImGui
Singleton · OBJ Parser · Dear ImGui · ShapeManager · Spherical Camera

Major architecture upgrade. Moved to a Singleton Application with dedicated subsystems. Users load any OBJ mesh at runtime via native file dialog, view it in a 3D viewport with an orbital camera, and inspect it through ImGui panels. Ships with classic test meshes: bunny, teapot, skull, torus, garg.

⬡ CG Concept — OBJ Parsing & Index Buffers

An OBJ file separates what exists from how it connects. v lines list unique positions; f lines list faces as indices into that list. A cube has 8 unique corners but 12 triangles. Without index buffers you'd store 36 vertices (3 per triangle × 12). With an EBO you store 8 vertices and 36 indices — a significant saving on dense meshes like the Stanford bunny (~70k vertices).

Parsing means: stream the file line by line. When a v line arrives, push its three floats into a position vector. When an f line arrives, push its three integers (1-based in OBJ format, so subtract 1) into the index vector. At the end, call glBufferData twice — once for positions into the VBO, once for indices into the EBO — then set up the VAO layout.
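The line-by-line loop above can be sketched as a minimal parser — positions and triangles only, plain `f a b c` faces, no vt/vn handling, and not the project's actual FileImporter:

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

struct ObjMesh {
    std::vector<float>    positions; // x y z per unique vertex
    std::vector<unsigned> indices;   // 0-based, 3 per triangle
};

// Minimal v/f parser. Real OBJ files also carry vn/vt and slash-delimited
// face entries; this sketch skips anything it does not recognise.
ObjMesh parseObj(std::istream& in) {
    ObjMesh mesh;
    std::string tag;
    while (in >> tag) {
        if (tag == "v") {
            float x, y, z; in >> x >> y >> z;
            mesh.positions.insert(mesh.positions.end(), { x, y, z });
        } else if (tag == "f") {
            unsigned a, b, c; in >> a >> b >> c;
            mesh.indices.insert(mesh.indices.end(),
                                { a - 1, b - 1, c - 1 }); // OBJ is 1-based
        } else {
            in.ignore(1 << 20, '\n');  // skip comments / unsupported lines
        }
    }
    return mesh;
}
```

The two resulting arrays map straight onto the two glBufferData calls: positions into the VBO, indices into the EBO.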

∑ CG Concept — Spherical Coordinate Camera

An orbit camera stores its position in spherical coordinates — radius, yaw (theta), pitch (phi) — instead of Cartesian xyz. This makes orbital controls trivial: dragging horizontally increments theta, dragging vertically clamps phi. Converting back to Cartesian for the view matrix:

x = target.x + radius · cos(phi) · cos(theta)
y = target.y + radius · sin(phi)
z = target.z + radius · cos(phi) · sin(theta)

Then glm::lookAt(cameraPos, target, up) builds the view matrix. Clamping phi short of ±90° avoids the degenerate straight-up/straight-down case, and spherical coordinates keep radius (zoom) and orbit (yaw/pitch) fully independent controls.
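The conversion above, written as a small free function (the name orbitPosition is illustrative — the project's method is updateCameraPosition):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Spherical (radius, theta, phi) around a target point → Cartesian,
// matching the three formulas above.
Vec3 orbitPosition(Vec3 target, float radius, float theta, float phi) {
    return { target.x + radius * std::cos(phi) * std::cos(theta),
             target.y + radius * std::sin(phi),
             target.z + radius * std::cos(phi) * std::sin(theta) };
}
```

With theta = phi = 0 the camera sits on the +X side of the target at distance radius; raising phi to 90° lifts it directly overhead.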

My Contributions
  • Singleton Application: static Application* instance = nullptr, getInstance() lazy-init, private constructor, deleted copy ctor + assignment operator, three statically-allocated subsystems
  • framebuffer_size_callback: calls glViewport then recomputes glm::perspective(45°, aspect, 0.1, 100) and re-uploads on every resize
  • Renderer with spherical-coord camera: theta/phi/radius → updateCameraPosition() → Cartesian → glm::lookAt
  • ShapeManager full lifecycle: addShape, deleteShape (calls delete *it, erases from vector, clears selection), resetAllShapes
  • FileImporter with tinyfd_openFileDialog; parses OBJ v/f lines into position + index arrays, uploads to VAO/VBO/EBO
  • Dear ImGui integration — ImGui_ImplGlfw_InitForOpenGL + ImGui_ImplOpenGL3_Init — per-frame lifecycle wired around scene render
  • Renderer::drawAxis: per-frame VAO/VBO for RGB axis lines, glUniform1i(useLighting, 0) to bypass Phong
C++ — Singleton Pattern · Application.cpp
Renderer     Application::renderer;      // static — lives for app lifetime
ShapeManager Application::shapeManager;
FileImporter Application::fileImporter;
Application* Application::instance = nullptr;

Application& Application::getInstance() {
    if (!instance) instance = new Application();  // private ctor
    return *instance;
}
// Header: Application(const Application&) = delete;
//         Application& operator=(const Application&) = delete;
03
Parametric Curves & Surfaces of Revolution
Bezier · B-Spline · Bernstein Basis · Frenet Frame · Surface of Revolution

Parametric geometry layered on the viewer. Custom .swp control-point files feed into evalBezier and evalBSpline, which tessellate smooth curves and compute a full Frenet-Serret frame at each sample. Those frames sweep surfaces of revolution. Test shapes: wineglass, torus, flircle, gentorus, weird.

∑ CG Concept — Bezier Curves & Bernstein Polynomials

A Bezier curve of degree n is defined by n+1 control points P₀…Pₙ. For any parameter t ∈ [0,1], the curve point is a weighted average of those control points:

B(t) = Σⱼ C(n,j) · (1−t)^(n−j) · tʲ · Pⱼ

The weight for each control point is a Bernstein polynomial. Several important properties follow directly from this formula:

  • At t=0 all weight goes to P₀; at t=1 all weight goes to Pₙ — the curve passes through its endpoints.
  • All Bernstein polynomials sum to 1 for any t, so the result is always a convex combination — it always lies inside the convex hull of the control points. The curve can never escape the bounding region defined by the controls.
  • Moving one control point pulls the curve smoothly toward it across the entire parameter range — a single Bezier has no purely local edits. This is why the constraint P.size() % 3 == 1 matters: degree-3 segments sharing endpoints form piecewise cubics with G0 continuity, confining each edit to its own segment.

The binomial coefficient C(n,j) is computed via Pascal's rule rather than factorials to avoid overflow: result *= (n−(k−i)); result /= i.
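That multiplicative scheme, written out (mirroring the result *= (n−(k−i)); result /= i loop quoted above):

```cpp
// Multiplicative form of C(n, k). Multiplying before dividing keeps every
// intermediate value an exact integer — after i multiplications the running
// product is C(n-k+i, i), which is always divisible by i.
unsigned binomial(unsigned n, unsigned k) {
    unsigned result = 1;
    for (unsigned i = 1; i <= k; ++i) {
        result *= n - (k - i);
        result /= i;
    }
    return result;
}
```

Unlike the factorial formula n! / (k!(n−k)!), the intermediate products stay close to the final answer, so overflow only occurs when C(n, k) itself would overflow.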

⬡ CG Concept — Frenet-Serret Frame & Surface Revolution

To build a surface from a curve, it's not enough to know where each sample point is — the orientation matters too: which way is forward, which way is up, and which way is sideways. The Frenet-Serret frame provides exactly this as three mutually perpendicular unit vectors at each sample:

  • T (tangent) — direction the curve is heading. Computed as the normalised difference between adjacent sample positions: normalize(P[i+1] − P[i]).
  • B (binormal) — perpendicular to T, computed as cross(T, prev_N). Parallel-transported from the previous frame to prevent sudden flips.
  • N (normal) — completes the right-handed frame: cross(B, T). Points toward the center of curvature.

For a surface of revolution, the profile curve is flat on the XZ plane. Each control point (px, 0, pz) is rotated around the Z axis through steps equally-spaced angles θ ∈ [0, 2π):

x = px · cos(θ), y = px · sin(θ), z = pz

Connecting rings of rotated points into quads (split into two triangles each) produces the mesh. A wineglass profile swept 360° becomes a watertight wineglass mesh.

My Contributions
  • unsigned binomial(n, k) via Pascal's rule: result *= (n-(k-i)); result /= i — used directly as the Bernstein coefficient
  • evalBezier(P, steps): validates P.size() < 4 || P.size() % 3 != 1; evaluates Bernstein weighted sum at each t = i/steps into a CurvePoints vector of {V, T, N, B} structs
  • Frenet-Serret frame: T via finite-difference normalisation; N and B via cross-products, parallel-transported frame to frame
  • ImportCurve: parses .swp files line-by-line — reads type token, then successive lines as glm::vec3 control points
  • Surface::makeSurfRev(profile, steps): validates profile flatness, sweeps at 2π/steps increments, builds vertex + face arrays for VAO upload
  • ImGui step-count slider and curve-type selector, triggering live mesh rebuild on change
C++ — Bernstein Basis Evaluation · Curve.cpp — evalBezier()
for (unsigned i = 0; i <= steps; ++i) {
    float t = (float)i / steps;
    glm::vec3 point(0.0f);
    unsigned n = P.size() - 1;
    for (unsigned j = 0; j <= n; ++j) {
        float w = binomial(n, j)
                * std::pow(1.0f - t, n - j)
                * std::pow(t, j);
        point += w * P[j];           // weighted blend of control points
    }
    // finite-diff tangent T, cross-product N/B → push CurvePoint
}
04
Skeletal Animation — Joint Hierarchy & World Transforms
Joint Tree · MatrixStack · Bind Pose Inverse · glm::yawPitchRoll · .skel / .attach

Hierarchical skeletal system. Loads .skel JSON to build a Joint tree; .attach files bind mesh vertices to joints. A recursive MatrixStack traversal accumulates parent-to-world transforms for both the bind pose and the current animated frame. Four character models with varying rig complexity.

∑ CG Concept — Hierarchical Transforms, Bind Pose & Skinning

Every joint stores a local transform — how it sits relative to its parent. To animate a joint you change only its local transform, and child joints follow automatically because their world positions are computed by multiplying the chain upward:

T_world(joint) = T_world(parent) · T_local(joint)

A matrix stack implements this traversal efficiently: push a joint's local transform (which multiplies it onto the top), recurse into children (who each push their own), then pop when returning to the parent. stack.top() always equals the accumulated world transform at that depth in the tree.
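The push/recurse/pop discipline can be sketched with translation-only joints — a deliberate simplification of the project's std::vector<glm::mat4> stack, since pure offsets compose by addition:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Toy transform stack: each push composes a local offset onto the current
// top, so top() is always the accumulated root-to-here transform.
struct TransformStack {
    std::vector<Vec3> frames { Vec3{0.0f, 0.0f, 0.0f} };  // identity at the bottom
    void push(Vec3 local) {
        Vec3 t = frames.back();
        frames.push_back({ t.x + local.x, t.y + local.y, t.z + local.z });
    }
    void pop()       { frames.pop_back(); }
    Vec3 top() const { return frames.back(); }
};
```

Pushing a hip offset then a knee offset yields the knee's world position; popping restores the hip context for the next sibling — exactly the invariant the recursive traversal relies on.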

The bind pose is the skeleton's rest configuration — the pose the mesh was designed to wrap around. Before any animation starts, you traverse the hierarchy and capture each joint's world-to-joint transform as:

T_bind⁻¹ = inverse(T_world at bind pose)

During animation, each vertex needs to know how far it has moved from its bind-pose position. The skinning matrix for each joint is:

T_skin = T_world(current frame) · T_bind⁻¹

Multiplying a vertex by this matrix first undoes the bind pose (moving it into joint-local space), then applies the current frame's world transform. Without T_bind⁻¹, vertices would be double-transformed and the mesh would explode. glm::yawPitchRoll(rX, rY, rZ) converts ImGui Euler-angle sliders into a rotation matrix for each joint, composed with its stored translation.
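Reduced to one dimension, where transforms are plain offsets and T_bind⁻¹ becomes a subtraction, the skinning formula reads (a toy model, not the project's mat4 code):

```cpp
// 1-D stand-in for T_skin = T_world(current) · T_bind⁻¹:
// subtracting the bind-pose world offset moves the vertex into joint-local
// space; adding the current world offset moves it back out, posed.
float skinVertex(float vertexBindPos, float jointBindWorld, float jointCurrentWorld) {
    float local = vertexBindPos - jointBindWorld;   // apply T_bind⁻¹
    return jointCurrentWorld + local;               // apply T_world(current)
}
```

If the joint has not moved (current equals bind), the vertex stays put — skipping the bind-inverse step would instead add the joint's world offset a second time.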

My Contributions
  • Joint: stores glm::mat4 transform (local), bindWorldToJointTransform, currentJointToWorldTransform, glm::vec3 rotation/translation, std::vector<Joint*> children
  • MatrixStack: push(m) multiplies top by m and pushes; pop() removes; top() returns accumulated world transform — backed by std::vector<glm::mat4>
  • setJointTransform(i, rX, rY, rZ): builds glm::yawPitchRoll(rX,rY,rZ) composed with glm::translate(mat4(1), joint->getTranslation())
  • bindWorldToJointTransformRecursive: pushes local transform, stores glm::inverse(stack.top()) as bind-pose inverse, recurses children, pops
  • currentJointToWorldTransformsRecursive: pushes local transform, stores stack.top() as current world, recurses, pops — called each frame
  • ImportCharacter: loads skeleton JSON, OBJ mesh, and .attach weight table into a unified CharacterShape
C++ — Recursive Bind-Pose Traversal · SkeletalModel.cpp
void bindWorldToJointRecursive(Joint* j, MatrixStack& s) {
    s.push(j->getTransform());         // accumulate: parent → this joint
    j->setBindWorldToJointTransform(
        glm::inverse(s.top()));         // T_bind⁻¹ captured once
    for (Joint* child : j->getChildren())
        bindWorldToJointRecursive(child, s);
    s.pop();                            // restore parent context
}
⚡ Honest Notes

The bind-pose traversal is correct. The failure was in live pose application — accumulated transforms reset rather than composing through the hierarchy during animation, preventing character posing. Skeleton loads and displays in bind pose. Debugging recursive world-space accumulation without per-joint visual feedback is genuinely difficult; the exact bug and its cause are documented in the README.

05
Particle Physics — Spring-Mass, Cloth & ODE Integrators
evalF(state) · ForwardEuler · Trapezoid · Midpoint · RK4 · Spring-Mass · Cloth

The most technically demanding assignment. A generic ODE framework drives four progressively complex particle systems through four interchangeable numerical integrators. Each integrator calls the pure-virtual evalF(state) without knowing which system it's stepping. The cloth computes structural and shear springs, recomputes face normals per frame, and responds to toggleable oscillating wind.

∑ CG Concept — ODEs, Spring Forces & Numerical Integration

A spring-mass simulation is fundamentally an Ordinary Differential Equation (ODE) problem. The state of the simulation at any moment is a flat vector of positions and velocities — one entry for each particle, interleaved:

state = [p₀, v₀, p₁, v₁, p₂, v₂, ...]

evalF(state) computes the derivative of this state — how fast each value is changing. Positions change at their velocity; velocities change at their acceleration, which comes from summing forces:

evalF returns: [v₀, a₀, v₁, a₁, ...] where a = F_total / mass

Spring force between particles i and j, rest length L₀, stiffness k, drag coefficient c:

d = pⱼ − pᵢ
F_spring = −k(|d| − L₀) · normalize(d) − c · vᵢ

To advance time, we need to integrate these derivatives. All four integrators do this differently — the tradeoff is accuracy vs computation:

  • Forward Euler — 1 evalF call. s_new = s + dt · evalF(s). First-order accuracy — error per step is O(dt²). Simplest but diverges fastest at large timesteps; energy can "leak in" causing springs to vibrate indefinitely or explode.
  • Trapezoid — 2 evalF calls. Averages the derivative at the start and at an Euler-predicted endpoint. Second-order accuracy. More stable than Euler, noticeably better at moderate dt.
  • Midpoint — 2 evalF calls. Takes a half-step to estimate the midpoint derivative, then uses that for the full step. Also second-order but better at capturing curved trajectories.
  • RK4 — 4 evalF calls. Evaluates derivatives at the start, two midpoints, and the end, then combines them in a weighted average. Fourth-order accuracy — error per step O(dt⁵). The cloth simulation runs stably at much larger timesteps than Euler would allow.
⬡ CG Concept — Cloth as a Grid of Springs

A cloth mesh is an N×N grid of particles. Each particle is connected to its neighbours by springs of two types:

  • Structural springs — horizontal and vertical neighbours. Maintain the basic fabric structure. Resist stretching along the grid axes.
  • Shear springs — diagonal neighbours. Prevent the cloth from collapsing into a diamond shape when pulled at a corner. Without shear springs a grid of structural-only springs can shear freely.

Face normals are recomputed every frame by taking the cross product of two edge vectors of each triangle — normalize(cross(v1−v0, v2−v0)) — then averaging normals across each vertex's adjacent faces. This is why the lighting looks correct even as the cloth deforms. The cloth's VAO holds the face geometry; a separate wireframe VAO holds only the edge lines for the "show wireframe" toggle in ImGui.
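The per-face normal described above — normalize(cross(v1−v0, v2−v0)) — in isolation (minimal helpers, not the project's cloth code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Per-face normal: perpendicular to the triangle's two edge vectors,
// direction set by the winding order (CCW → normal toward the viewer).
Vec3 faceNormal(Vec3 v0, Vec3 v1, Vec3 v2) {
    return normalize(cross(sub(v1, v0), sub(v2, v0)));
}
```

A counter-clockwise triangle in the XY plane yields (0, 0, 1); flipping two vertices flips the winding and hence the normal, which is why consistent winding matters when the cloth rebuilds its normals each frame.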

My Contributions
  • ParticleSystem : Shape base: std::vector<glm::vec3> m_state (even index = position, odd = velocity); m_initialState for reset; pure-virtual evalF(const std::vector<glm::vec3>&) = 0
  • Four concrete systems: SimpleSystem (circular orbit smoke test), SimplePendulum (fixed anchor), SimpleChain (n-link chain), SimpleCloth (NxN with structural + shear springs stored as glm::vec4 from/to/rest/k tuples)
  • All four integrators as TimeStepper subclasses: ForwardEuler, Trapezoidal, Midpoint, RK4 — plus createIntegrator(IntegratorType) factory
  • Unit sphere mesh generator from scratch: generateUnitSphereMesh(radius, sectors, stacks) using spherical-to-Cartesian conversion; normals as normalize(position); instanced per-particle
  • Five separate VAO/VBO/EBO pipelines per system: particle spheres, spring lines (GL_LINES), cloth faces, wireframe overlay, debug axis
  • Wind: windDirection × windIntensity force term in evalF; updateWindOscillation(dt) drives sinusoidal direction change for oscillating wind (extra credit)
  • Per-system ImGui panels: integrator radio, timestep slider, wind toggles, wireframe/particle/spring toggles, RGB picker, reset button
C++ — TimeStepper Interface & RK4 · TimeStepper.h / .cpp
// Pure virtual — all integrators share this contract
virtual void takeStep(ParticleSystem* ps, float dt) = 0;

// RK4 — 4th order, 4 evalF calls per step
void RK4::takeStep(ParticleSystem* ps, float dt) {
    auto s  = ps->getState();
    auto k1 = ps->evalF(s);
    auto k2 = ps->evalF(s + (dt*0.5f) * k1);  // midpoint estimate 1
    auto k3 = ps->evalF(s + (dt*0.5f) * k2);  // midpoint estimate 2
    auto k4 = ps->evalF(s + dt         * k3);  // endpoint estimate
    // weighted average — centre samples count double
    ps->setState(s + (dt/6.0f)*(k1 + 2*k2 + 2*k3 + k4));
}
⚡ Honest Notes

In PendulumSystem.cpp, a std::vector constructor called with 2 arguments where 3 were expected blocks clean compilation — the issue is localised and documented. The cloth renders and animates, but extreme wind values cause numerical instability from insufficient velocity damping. Given more time: fix the constructor mismatch, add per-system damping controls, and isolate wind forces to eliminate jitter.

// 04 — Skills Demonstrated

What building this
actually teaches

Writing a pipeline without an engine forces fluency in areas that engine abstractions hide completely.

OpenGL Core
A0 – A5, all files
Direct use of glGenVertexArrays, glBindBuffer, glBufferData, glVertexAttribPointer, glEnableVertexAttribArray, glDrawElements, glDrawArrays. Multiple separate VAO/VBO/EBO triples per scene — mesh, spring lines, particle spheres, cloth faces, wireframe — each owned and cleaned up by its system.
GLSL Shaders
A0 – A5
Authored vertex and fragment shaders from scratch across all assignments. Progressed from flat colour in A0 to Phong with normal-matrix correction in A1 to full ambient/diffuse/specular with multiple light uniforms in A2+. Normal correction via mat3(transpose(inverse(model))) written and understood, not copied.
GLM Math
A1 – A5
glm::mat4 MVP stacks with rotate / translate / scale; lookAt / perspective / ortho; cross / normalize / dot for normals and Frenet frames; yawPitchRoll for joint rotations; inverse for bind-pose matrices; mix for colour interpolation.
Parametric Curves
A3 — Curve.h/cpp
Bezier evaluation using Bernstein polynomials with hand-implemented binomial coefficient. B-spline evaluation. Frenet-Serret frame (T, N, B) computed via finite-difference tangent and cross-product at each tessellation sample. Convex-hull containment guarantees and the 3n+1 control-point constraint validated with clean diagnostics.
Skeletal Systems
A4 — Joint / SkeletalModel
Joint tree from JSON. Each joint stores local mat4, bind-pose inverse, and current-frame world transform. MatrixStack push/pop accumulates world transforms without modifying stored matrices. setJointTransform uses glm::yawPitchRoll composed with stored translation. Vertex weights loaded from .attach files.
Physics Simulation
A5 — ParticleSystem
Generic ODE framework: flat std::vector<glm::vec3> state with interleaved position/velocity. Pure-virtual evalF returns derivatives. Spring force model: F = −k(|d|−L₀)·d̂ − c·v. Four integrators — ForwardEuler O(dt²), Trapezoid, Midpoint, RK4 O(dt⁵) — operate generically, swappable at runtime from ImGui.
C++ Architecture
A2 – A5
Singleton with deleted copy/assignment, private constructor, static instance. Shape polymorphic base with pure-virtual draw(GLuint). ParticleSystem : Shape inheritance chain. TimeStepper factory method. Manual heap management — ShapeManager owns and deletes Shape* pointers. Virtual destructor on all base classes.