Part 2 — Deep Dive

The Dev Process

From issue to production — every step, every gate, every artifact

12

Steps

5

Phases

4

Artifact Types

The Pipeline

Five phases from problem to production

Frame: triage, frame
Shape: analyze*, spec
Build: plan, implement, pr
Verify: validate, review, fix*
Ship: promote*, cleanup*

* = conditional step (skipped based on tier or outcome)

Match Process to Complexity

Three tiers, one pipeline

Tier S: Quick Fix

≤3 files, no architecture, no risk

triage → implement → pr → validate → review
Tier F-lite: Feature (lite)

Clear scope, single domain

frame → spec → plan → implement → verify → ship
Tier F-full: Feature (full)

New architecture, unclear requirements, >2 domains

frame → analyze → spec → plan → implement → verify → ship
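The tier criteria above can be sketched as a small helper. This is a hypothetical function, not a real tool from the project, and it only encodes the mechanical criteria; the actual decision also weighs risk and judgment, so treat it as a starting point:

```shell
# Hypothetical tier picker (sketch only). Inputs: file count,
# whether new architecture is needed ("yes"/"no"), and domain count.
pick_tier() {
  local files=$1 new_arch=$2 domains=$3
  if [ "$new_arch" = "yes" ] || [ "$domains" -gt 2 ]; then
    echo "F-full"          # new architecture or >2 domains
  elif [ "$files" -le 3 ] && [ "$new_arch" = "no" ]; then
    echo "S"               # small, mechanical, low risk
  else
    echo "F-lite"          # clear scope, single domain
  fi
}

pick_tier 2 no 1    # -> S
pick_tier 10 no 1   # -> F-lite
pick_tier 3 yes 3   # -> F-full
```

Note the last call: only 3 files, yet F-full, matching the point that file count alone is not decisive.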

Skip Matrix

Step      | S    | F-lite | F-full
----------|------|--------|-------
triage    | run  | skip   | skip
frame     | skip | run    | run
analyze   | skip | skip   | run
spec      | skip | run    | run
plan      | skip | run    | run
implement | run  | run    | run
pr        | run  | run    | run
validate  | run  | run    | run
review    | run  | run    | run
fix       | cond | cond   | cond
promote   | cond | cond   | cond
cleanup   | cond | cond   | cond

Key insight: File count alone doesn't determine tier. A 50-file mechanical change may be S, while a 3-file rate limiter is F-full.

Frame

Define the problem

Triage

Goal

Categorize and assign the issue

Artifact

GitHub issue with size/priority labels

Gate

Issue exists with clear title and body

Frame

Goal

Define the problem, constraints, and scope

Artifact

artifacts/frames/{slug}.mdx

Gate

User approves frame

artifacts/frames/{slug}.mdx
Problem
Who
Constraints
Out of Scope
Tier

S-tier skips framing — goes straight to implement

Shape

Design the solution

Analyze

F-full only

Goal

Deep technical exploration — risks, alternatives, recommendations

Artifact

artifacts/analyses/{N}-{slug}.mdx

Gate

User approves analysis

Spec

Goal

Define what we'll build — acceptance criteria, breadboard, slices

Artifact

artifacts/specs/{N}-{slug}.mdx

Gate

User approves spec

artifacts/specs/{N}-{slug}.mdx
Goal
Users
Expected Behavior
Breadboard
Slices
Success Criteria

F-lite skips analysis — frame is sufficient context for spec

Build

Write the code

Plan

Goal

Break spec into tasks, pick agents, define order

Artifact

artifacts/plans/{N}-{slug}.mdx

Gate

User approves plan

Implement

Goal

Create worktree, spawn agents, write code

Test-first: RED → GREEN → REFACTOR


PR

Goal

Package work for review

$ gh pr create --base staging
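In practice the PR step can pass a bit more context to `gh`. A sketch of what a fuller invocation might look like; the branch name and title below are hypothetical, and the script assembles the command as a string first so it can be inspected before running:

```shell
# Sketch of the PR step (dry run). Branch and title are hypothetical;
# --base/--head/--title/--fill are standard gh pr create flags.
base="staging"
branch="feat/rate-limiter"        # hypothetical work branch
title="feat: add rate limiter"    # conventional-commit style title

cmd="gh pr create --base $base --head $branch --title \"$title\" --fill"
echo "$cmd"
```

Echoing the command instead of running it is just for illustration; the real step invokes `gh` directly.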

Agent Routing

Frontend

frontend-dev

Backend

backend-dev

CI/CD

devops

Tests

tester

Docs

doc-writer

Test-first: tester writes failing tests, then domain agents implement

Verify

Review and fix

Validate

Goal

Run all quality gates

lint
typecheck
test
i18n
env
license
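The gates above run in sequence and fail fast. A minimal sketch, assuming the gate names from the list; `gate_cmd` is a placeholder, since the project's actual per-gate commands are not shown here:

```shell
# Sketch: run each quality gate in order and stop at the first failure.
gates="lint typecheck test i18n env license"

gate_cmd() {
  # Stand-in for the real gate command (assumption), e.g. `bun run lint`.
  true
}

run_gates() {
  for gate in $gates; do
    if gate_cmd "$gate"; then
      echo "$gate: pass"
    else
      echo "$gate: FAIL" >&2
      return 1
    fi
  done
}

run_gates
```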

Review

Goal

Fresh agents review code they didn't write

security-auditor · architect · product-lead · tester · domain agents

No agent reviews code it wrote

Fix

Goal

Apply accepted review findings

Auto-apply → /1b1 walkthrough → domain fixers

Fresh agents = no implementation bias

Ship

Release and clean up

Promote

Goal

Merge staging → main with version bump and changelog

version bump · changelog · release PR · GitHub Release

Standalone via /promote — not auto-triggered by /dev

Cleanup

Goal

Remove stale worktrees and branches

$ git worktree remove <path>

$ git branch -d <branch>

Promotion batches all staging changes into one release

Pick Up Where You Left Off

Artifacts persist — sessions don't have to

/dev #42 → State Scan
triage: done
frame: done
spec: done
plan: done
implement: in progress
pr: pending
review: pending

Session dies → restart → same progress
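Because artifacts live on disk, the state scan can be sketched as a check for which files already exist. The paths follow the artifact layout described in this document; the function itself and the slug argument are hypothetical:

```shell
# Sketch: infer completed steps from persisted artifacts.
# A frame/spec/plan file on disk means that step already happened,
# so a restarted session resumes instead of starting over.
scan_state() {
  local slug=$1
  for dir in frames specs plans; do
    if ls "artifacts/$dir/"*"$slug"*.mdx >/dev/null 2>&1; then
      echo "$dir: done"
    else
      echo "$dir: pending"
    fi
  done
}
```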

Artifact Types

frame

What's the problem?

artifacts/frames/

analysis

How deep is it?

artifacts/analyses/

spec

What will we build?

artifacts/specs/

plan

How do we build it?

artifacts/plans/

You Decide, AI Executes

Every phase transition requires your approval

Frame
Approve frame
Shape
Approve spec
Build
Approve plan
Verify
Accept findings
Ship

Structured Choices

Every choice presented as structured options — never plain-text questions

Clear Roles

Human decides, Claude orchestrates, agents specialize

Human-in-the-Loop

Not fully autonomous — intentionally human-in-the-loop

Automated Quality Gates

pre-commit

Biome lint + format

commit-msg

Commitlint conventional

pre-push

lint + typecheck + tests + i18n

Hooks: Automated Guardrails

Git hooks and Claude Code hooks enforce quality at every step — no human action needed

Git Hooks (Lefthook)

Triggered by git operations — block bad code before it reaches the repo

pre-commit

Biome check — lint + format every staged file

commit-msg

Commitlint — enforce conventional commit format

pre-push

Full suite — lint, typecheck, tests, i18n validation, license check

Claude Code Hooks

Triggered by tool use — enforce patterns during AI-assisted development

PostToolUse

Auto-format files after every Write/Edit with Biome

PreToolUse

Block 'bun test' (must use 'bun run test'), warn on sensitive file access
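A guard like the `bun test` block can be sketched as a small filter over the proposed command. The function name and the exit-code convention below are assumptions for illustration, not the actual hook wiring:

```shell
# Sketch of a PreToolUse guard: inspect the proposed command and block
# direct `bun test` (the project requires `bun run test` instead).
check_command() {
  local cmd=$1
  case "$cmd" in
    "bun test"*|*" bun test"*)
      echo "blocked: use 'bun run test' instead of 'bun test'" >&2
      return 2   # nonzero signals the hook to block the tool call
      ;;
  esac
  return 0
}
```

Note that `bun run test` passes the filter: it never contains the literal substring `bun test`.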

Key insight: Hooks run automatically — they don't require human attention. They catch mistakes before they compound.

Compress: Fewer Tokens, Same Semantics

Formal notation rewrite for skills and agents

∀ all · ∃ exists · ∈ member · ∧ and · ∨ or · ¬ not · → then · ↔ iff
## Step 1 — Parse Input

First, look at the arguments. If an issue number is provided (like #42), fetch the GitHub issue using the gh CLI tool to get the title and body.

If the issue does not exist, stop execution and inform the user that the issue was not found.

If free text is provided instead of an issue number, search for matching issues using the gh issue list command with the search parameter.

If a matching issue is found, ask the user if they want to use it. If not, create a new issue or proceed without one.
Lines: 156 | -60% | Tokens: -62%
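For illustration, the compressed form of the step above might read roughly like this. This is a hypothetical sketch using the notation legend, not the project's actual compressed skill:

```text
step1_parse:
  arg = issue #N → gh issue view N → {title, body}
    ¬∃ issue N → halt, report "not found"
  arg = free text → gh issue list --search text
    ∃ match → ask(use it?) ∨ create new ∨ proceed without
```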

Every skill and agent in this project is compressed

Build Your Process
