AI Projects · Methodology · Multi-Model Workflow

I don't just use AI.
I design how teams work with it.

A living methodology for maintaining design intent and code quality across AI-augmented build cycles. The manifesto, the internal tools, the shipped products, the multi-model workflow — all in one place.

Why this page exists

Most designers adopted AI. I built a methodology around it before the team knew it needed one.

Every project on this page is part of one argument: AI is not a button you press to make work faster. It's a collaborator that has no memory, no taste, and no shared context — and the team that thrives with it is the team that designs around those constraints instead of pretending they don't exist.

The manifesto is the artifact that makes the rest of the work possible. Break Lab and the UX Feedback Analyzer are what that methodology produces on the ground. Ehoro Village is the live proof that it scales to a real product with a real team.

01 — The Methodology

The AI-Augmented Development Manifesto.

AI doesn't remember your last session. Every new conversation starts from zero. Across 24+ build sessions on Ehoro Village, that reality kept producing the same failure mode: regressions, contradictions, design drift, and time wasted rebuilding context at the top of every chat.

So I wrote a manifesto. A living technical document the team carries into every AI session — data model, coding rules, design philosophy, session logs, explicit rules for what AI should and shouldn't do. It grew from a one-page overview into a 12-section spec the entire team builds against.

From the manifesto

"The manifesto isn't documentation — it's a collaboration protocol. The document IS the product. The code just implements it."

It exists for one reason: the failure mode of building with AI isn't the AI being wrong; it's the team losing the shared mental model of what they're building. The manifesto is what holds that model still while velocity goes up.

It now travels with me. Every AI-augmented project I touch starts with a version of this document — and the methodology is becoming the team's standard playbook.

02 — Shipped AI-Augmented Products

Interaction Design · Audio · Constructivist Tool

Break Lab

Shipped

→ Browser-based drum sampler · Built for music producers · Complex interaction design under the hood

A browser-based drum sampler with waveform slicing, step sequencer, and looper. Designed for music producers (myself included) who chop breaks in FL Studio and want a faster path from sample to pattern. Built with an AI-augmented workflow following the same manifesto principles.

It's also a constructivist learning environment: the interface teaches audio sequencing logic through use, not tutorials. The complex interaction design — sub-millisecond playhead handling, drag-to-slice waveforms, real-time step state — is what makes it work for actual producers instead of being a toy.
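The grid interaction at the heart of that design can be sketched in a few lines. This is an illustrative reconstruction in Python of step-sequencer pattern state and 16th-note timing math, not the shipped Break Lab code (which runs in the browser); the row/step layout and tempo values are assumptions for the example:

```python
# Illustrative sketch of step-sequencer logic -- not the shipped Break Lab code.
# A pattern is rows of 16 on/off steps; each row triggers one slice of the
# chopped break. The timing math mirrors what a real-time scheduler would use.

STEPS = 16

def step_times(bpm, steps=STEPS):
    """Start time in seconds for each 16th-note step at a given tempo."""
    sixteenth = 60.0 / bpm / 4          # one beat = four sixteenths
    return [i * sixteenth for i in range(steps)]

def toggle(pattern, row, step):
    """Flip one cell -- the core interaction of a step-sequencer grid."""
    pattern[row][step] = not pattern[row][step]

pattern = [[False] * STEPS for _ in range(4)]   # 4 slices x 16 steps
toggle(pattern, 0, 0)    # first slice on the downbeat
toggle(pattern, 1, 4)    # second slice on beat 2
times = step_times(120)  # at 120 BPM a sixteenth lasts 0.125 s
```

In a browser implementation the same arithmetic feeds an audio clock rather than a list, which is where the sub-millisecond playhead handling comes in.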

Open the Break Lab case study →

Research Tooling · Python / Streamlit · Team Impact

UX Feedback Analyzer

Internal

→ 40–60% reduction in team triage time · Python · Streamlit · Built for the Ehoro team

An internal tool I built for the Ehoro team to identify high-risk users and surface priority signals out of unstructured feedback. Ingests review text, clusters by issue, and ranks by severity so the product track doesn't lose half its week to triage.

The outcome that matters: 40–60% reduction in triage time, freeing the team to spend more cycles on design and research rather than feedback bookkeeping. It's the kind of tool that proves a designer can build the things that make a team faster — not just the things users see.

Open the Analyzer case study →

Live Product · Team of 6 · Manifesto in Production

Ehoro Village — AI workflow in production

Live

→ Shipped with a team of 6 · 24+ AI build sessions · The manifesto's first proof point

Ehoro Village is where the manifesto stopped being theory. A six-person team, an AI-augmented build cycle across 24+ sessions, a live product running in production. Every system in Ehoro — the spirit economy, the onboarding, the data model — was built against the same single source of truth so no AI session, and no teammate, drifted from the shared model.

Open the Ehoro case study →

03 — Multi-Model Fluency

Each model has a job. Knowing which one to call is the skill.

No single model is the right answer for every step of a real build. The skill isn't picking a favorite — it's matching the cognitive shape of the task to the model that handles it best, and keeping the manifesto's shared context portable across all of them.

Claude

Long-running build sessions, deep refactors, taste-sensitive interface code, manifesto-driven work where context fidelity matters most.

ChatGPT

Fast exploratory work, copy iteration, image and asset generation, quick reasoning passes when latency matters more than depth.

Gemini

Long-context research synthesis, multi-document analysis, cross-checking findings before they enter the manifesto or hit the team.

The methodology this page argues for is model-agnostic by design. The manifesto travels. The taste travels. The team's mental model travels. The models themselves are interchangeable infrastructure.

What this page argues

The market is rewarding designers who shape AI workflows, not just use them.

The wage premium and the hiring signal both go to the small group of designers who built methodology around AI before the field caught up. This page is the evidence that I'm in that group — not because I adopted the tools, but because I wrote the manual.

If you're building an AI-augmented design team, the manifesto, the tooling, and the shipped product are all part of the same answer to the same question: how does a team stay coherent when AI is in the loop?