
We Ran 52 AI Coding Benchmarks. Here's Every Uncomfortable Thing We Found.
CONTRACT.md cuts cost 54%, raises quality from 5/10 to 9/10. Agent Teams cost 124% more with no gain. Retry loops degrade output. 52 controlled runs, full data open-sourced.
UpGPT Team
UpGPT · April 19, 2026 · 12 min read
Why We Did This
This is the full technical version — every test, every table, every methodology detail. If you need the decision-maker summary, it's here.
We had just run 25 parallel AI workers across 7 swarms simultaneously and produced 12,500 lines of code across 96 files in 36 minutes. We had no idea what it cost. We hadn't measured quality. We'd just shipped fast.
So we ran a benchmark. Then another. Then 50 more.
What started as "let's figure out if parallel workers are worth it" turned into a set of findings that overturned almost every assumption we started with.
What We Tested
Task types:
- T3 — Notes CRUD: SQL migration + TypeScript types + 2 API routes + Vitest tests. 3 workers. Small-to-medium.
- T6 — Notifications system: large greenfield. 8 workers. Complex.
- T7 — SMS refactor: modifying existing code. Pure edit.
Approaches:
- V1 — minimal, vague prompts. Workers guess at interfaces and import paths.
- V2 — CONTRACT.md added: workers get exact interfaces, column names, import paths, SQL conventions upfront.
- NS — V2 with self-evolution: worker checks its own output and retries if it falls short.
- NSX — V2 with cross-model verification: Opus reads the worker's output and writes line-level critique before retry.
- V2O — V2 with a one-shot Opus review pass at the end (no retry loop — just a targeted surgical edit).
Architecture comparisons: Sequential · UpCommander (tmux workers) · Agent Teams (Anthropic native sub-agents)
Independent variables: CONTRACT.md on/off, architecture type, model (Haiku/Sonnet/Opus), grader (Opus/GPT-4o/Gemini).
Finding 1: CONTRACT.md is the entire game
A structured brief before the task — exact TypeScript interfaces, exact column names, exact import paths, SQL conventions, explicit non-goals — made the single largest difference of anything we tested.
2×2 factorial experiment (20 controlled runs):

The CONTRACT.md effect: -65% cost, -68% time, quality from 5 to 9/10. Architecture was secondary. Same model, same codebase, just a different document.
What goes in the brief that matters:
```md
## CONTRACT.md

### Interfaces
interface Note {
  id: string;
  user_id: string;
  content: string;
  created_at: string;
}

### Database
Table: platform.notes
Columns: id (uuid), user_id (uuid FK auth.users), content (text), created_at (timestamptz)
SQL conventions: CREATE TABLE IF NOT EXISTS, no DROP POLICY

### Import paths
Types: @/lib/platform/notes/types
Supabase client: @/lib/supabase-server (server components)

### Non-goals
- No pagination in this PR
- No soft delete
- No full-text search
```
Workers stop exploring and start executing.
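To make "executing instead of exploring" concrete: a hypothetical worker can turn the contract's `Note` interface into a runtime guard, so any output that drifts from the agreed shape fails immediately instead of surfacing as an integration bug. (Illustrative sketch, not code from the benchmark repo.)

```typescript
// Mirrors the Note interface from the CONTRACT.md example above.
interface Note {
  id: string;
  user_id: string;
  content: string;
  created_at: string;
}

// Hypothetical helper: check that an arbitrary value conforms to the contract.
function isNote(value: unknown): value is Note {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "string" &&
    typeof v.user_id === "string" &&
    typeof v.content === "string" &&
    typeof v.created_at === "string"
  );
}
```

With the interface pinned in the contract, every worker writes against the same shape; nobody has to guess column names or invent their own `Note`.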
Finding 2: Agent Teams cost 73–124% more with zero quality gain
Anthropic markets Agent Teams as a way to parallelize work. Technically true. The data:

T6 (large task) results:

Every agent loads the full codebase context independently. Three agents = three copies of your 80K-token context. The cache burn dominates: Agent Teams never wins on cost, and Sequential + CONTRACT wins on cost every time.
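The arithmetic behind the cache burn is simple enough to sketch. With an illustrative input-token price (assumed for the example, not taken from the benchmark data), duplicating an 80K-token context across three agents triples the context bill before any work happens:

```typescript
// Illustrative numbers: 80K-token shared context, assumed $3 per 1M input tokens.
const CONTEXT_TOKENS = 80_000;
const PRICE_PER_MTOK = 3.0; // hypothetical price, not from the benchmark data

function contextCost(agents: number): number {
  // Each agent loads its own full copy of the codebase context.
  return (agents * CONTEXT_TOKENS * PRICE_PER_MTOK) / 1_000_000;
}

const sequential = contextCost(1); // one shared load
const agentTeam = contextCost(3);  // three independent loads
console.log(sequential, agentTeam); // the ratio is 3x regardless of the price you plug in
```

The multiplier is structural: it scales with agent count, not with model choice, which is why Agent Teams can't win on cost.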
Finding 3: Retry loops make the output worse
We wanted to test whether self-improvement retry loops could fix incorrect output without degrading quality. We built one and ran it.
N=5 on T3 with deliberate traps (wrong import paths, missing exports):

Self-evolution improved acceptance criteria by 1 item but degraded overall quality from 9/10 to 6/10 and cost 2.1× more.
Why? The model doesn't make surgical edits. It regenerates entire files. Fixing a broken import path means rewriting the whole route file, losing all the CRUD endpoints and tests that were correct the first time. We observed this on every single retry attempt across all 3 runs, without exception.
There's also a ceiling: the model cannot see the blind spot it keeps creating. Every run, every retry, stalled at exactly 4/5 ACs. The 5th requirement never resolved regardless of how specific the hint was.
```
NS-run-1: Fix import path → regenerates route.ts → loses 3 endpoints → 4/5 ACs, 6/10
NS-run-2: Fix import path → regenerates route.ts → loses 3 endpoints → 4/5 ACs, 6/10
...same pattern, 15 retry attempts across 3 runs
```
Don't use retry loops for code generation. The architecture is the problem.
Replication caveat: this is one codebase, one model family, greenfield TypeScript tasks. The failure mode (whole-file regeneration on retry) may not appear in every setup.
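The failure mode reduces to a contrast between two edit strategies. A sketch with hypothetical helper names (not the actual NS implementation): a surgical patch touches only the broken line, while whole-file regeneration discards everything outside the fix.

```typescript
// Hypothetical file state: one broken import (the deliberate trap) plus three correct endpoints.
const routeFile = [
  'import { Note } from "@/lib/wrong/path";',
  "export function GET() {}",
  "export function POST() {}",
  "export function DELETE() {}",
];

// Surgical edit: replace only the offending line, keep everything else intact.
function surgicalFix(lines: string[]): string[] {
  return lines.map((line) =>
    line.includes("@/lib/wrong/path")
      ? 'import { Note } from "@/lib/platform/notes/types";'
      : line
  );
}

// What the retry loop actually does: regenerate the whole file from the retry
// prompt, which in our runs reliably dropped previously correct code.
function regenerate(): string[] {
  return [
    'import { Note } from "@/lib/platform/notes/types";',
    "export function GET() {}",
  ];
}

console.log(surgicalFix(routeFile).length); // 4 lines: all endpoints survive
console.log(regenerate().length);           // 2 lines: POST and DELETE are gone
```

The retry loop's prompt asks for a fix; the model answers with a rewrite. That is the architectural problem, and no amount of hint specificity works around it.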
Finding 4: Opus one-shot review adds nothing when the contract is good
We tested V2O (V2 + Opus reads the full output and makes surgical edits — not a retry loop, just a targeted one-shot patch):
Clean N=5 retest (full file context, no truncation):

Zero quality gain. +56% cost. When the CONTRACT.md is well-formed, Sonnet already reaches 9.8/10 — there's nothing for Opus to fix.
The lesson: Write the contract right. Don't retry. Don't add a review pass. The brief is the quality lever.
Finding 5: AST compression cuts tokens 91%
CONTRACT generation for refactoring tasks was expensive — the generator had to read the entire codebase ($0.36 vs $0.15–0.17 for greenfield). We adapted the AST-summary approach from agora-code: tree-sitter parsing, export-only extraction, cached by git blob SHA.
Results on 28 production files:

118x compression. For a large T6 session: baseline $5.45 → $0.85 stacked with CONTRACT.md. Zero quality tradeoff.
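A minimal sketch of the two ingredients, using a regex stand-in for the tree-sitter pass (the real module parses properly; this is an illustration only): keep just the exported signatures, and cache the summary by git blob SHA so unchanged files cost nothing to re-summarize.

```typescript
import { createHash } from "crypto";

// Git blob SHA: sha1("blob <byte length>\0" + content) — the same key `git hash-object` produces.
function gitBlobSha(content: string): string {
  const body = Buffer.from(content, "utf8");
  return createHash("sha1")
    .update(`blob ${body.length}\0`)
    .update(body)
    .digest("hex");
}

// Regex stand-in for the tree-sitter export extraction (illustrative, not the real parser).
function exportSummary(source: string): string {
  return source
    .split("\n")
    .filter((line) => line.trimStart().startsWith("export "))
    .join("\n");
}

const cache = new Map<string, string>();

function summarize(source: string): string {
  const sha = gitBlobSha(source);
  if (!cache.has(sha)) cache.set(sha, exportSummary(source)); // miss: compute once
  return cache.get(sha)!; // hit: unchanged blob, zero recompute
}
```

Keying the cache on blob SHA rather than file path means a `git checkout` or rename never invalidates summaries for content that didn't change.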
Finding 6: Haiku + CONTRACT ≈ Sonnet + CONTRACT (at 64% less cost)
We tested all three model tiers with identical CONTRACT.md prompts to isolate whether the scaffolding or the model was doing the work:

Haiku scores 9.0/10 at 36% of Sonnet's cost. The scaffolding does most of the work. Opus adds 0.2 points at a 69% premium — not justified.
Implication: For boilerplate workers in multi-worker sessions, route Haiku. Route Sonnet only to workers making non-trivial design decisions.
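A routing policy following this rule can be a simple lookup on worker kind. The model identifiers and the 6/2 worker split below are hypothetical, not the UpCommander internals; the 0.36 cost ratio is the Finding 6 figure.

```typescript
type Worker = { kind: "boilerplate" | "design" };

// Relative cost units: Sonnet worker = 1.0; Haiku = 0.36 (36% of Sonnet, per Finding 6).
const HAIKU_REL = 0.36;

// Hypothetical model identifiers — substitute your provider's actual IDs.
function routeModel(w: Worker): string {
  return w.kind === "boilerplate" ? "claude-haiku" : "claude-sonnet";
}

// Total session cost in relative units under mixed routing.
function sessionCost(workers: Worker[]): number {
  return workers.reduce((sum, w) => sum + (w.kind === "boilerplate" ? HAIKU_REL : 1), 0);
}

// An 8-worker T6-style session: 6 boilerplate workers, 2 design workers.
const workers: Worker[] = [
  ...Array(6).fill({ kind: "boilerplate" as const }),
  ...Array(2).fill({ kind: "design" as const }),
];
console.log(sessionCost(workers)); // 4.16 relative units, vs 8 for all-Sonnet
```

Even with only a quarter of the workers on Sonnet, mixed routing roughly halves the session's model bill.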
Finding 7: Cross-vendor grading agrees within ±1 point
All quality scoring in this project uses Opus as the grader. We validated this by grading the same 5 V2 outputs with three model families simultaneously:

Cross-vendor spread: ±1.0 pts. Opus grading is directionally reliable. Gemini is systematically stricter and catches issues the others miss (unused NoteListOptions in the test file) — worth adding to production quality pipelines.
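Checking cross-grader agreement is a small computation: for each output, take the max-minus-min spread across the three grader families. The scores below are illustrative stand-ins, not the actual run data.

```typescript
// Illustrative per-output scores from three grader families (not the real benchmark data).
const grades: Record<string, number[]> = {
  opus: [9, 10, 9, 10, 9],
  gpt4o: [9, 9, 10, 9, 9],
  gemini: [8, 9, 9, 9, 8], // systematically stricter, as observed in Finding 7
};

// Per-output spread: highest grader score minus lowest grader score.
function spreads(g: Record<string, number[]>): number[] {
  const families = Object.values(g);
  const n = families[0].length;
  return Array.from({ length: n }, (_, i) => {
    const scores = families.map((f) => f[i]);
    return Math.max(...scores) - Math.min(...scores);
  });
}

console.log(spreads(grades)); // every output within a 1-point band
```

A spread of at most 1 on every output is what "directionally reliable" means here: the graders can disagree on the exact score, but not on which outputs are good.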
The Stacked Numbers
All improvements applied to a large T6 session:

$5.45 → $0.83. -85%. Same model throughout.
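Because the individual savings compound multiplicatively rather than add up, the headline percentage is worth checking directly from the endpoints:

```typescript
// Stacked T6 session cost from the article: $5.45 baseline down to $0.83.
const baseline = 5.45;
const stacked = 0.83;

const reduction = 1 - stacked / baseline; // fraction of the baseline saved
console.log(Math.round(reduction * 100)); // rounds to 85 (%)
```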
The Six Rules
- Write the CONTRACT first. Always, for any task touching 3+ files. Costs ~$0.15 to generate. Saves 47–54% on the task. Paid back on the first run, every time.
- Don't use Agent Teams for cost-sensitive work. 73–124% more expensive. No quality benefit. Consistent across all N=5 runs.
- Don't use retry loops. They degrade quality (9→6/10) and cost 2×. The model regenerates whole files when it retries — correct sections disappear. Skip self-evolution entirely.
- Don't add an Opus review pass when your contract is good. Sonnet + CONTRACT already hits 9.8/10. Write a better brief instead of paying for a review.
- Compress your file context with AST extraction. 91% token reduction, zero quality tradeoff.
- Use Haiku for boilerplate workers. 9.0/10 quality at 64% of Sonnet's cost. The scaffolding does the work.
The CLI
```bash
# Install
npm install -g @upgpt/upcommander-cli

# Set your key
export ANTHROPIC_API_KEY=sk-ant-...

# Generate contract + run worker
upcommander run "add pagination to the notes API"

# Or: generate contract first, review it, then run
upcommander contract "add pagination to the notes API"
upcommander run "add pagination to the notes API"

# Quality review on specific files (Opus one-shot)
upcommander review src/app/api/notes/route.ts

# Regenerate the codebase index
upcommander index
```
The repo includes: contract generator (Sonnet, ~$0.15 per contract), L0/L1/L2 codebase index (118x compression), AST-summary module (tree-sitter, 12 languages), ephemeral Haiku orchestrator (-96% orchestration cost), worker recipes, and all 52+ benchmark evaluation files in /evaluations.
- GitHub: github.com/UpGPT-ai/upcommander
- npm: `npm install -g @upgpt/upcommander-cli`
- Full benchmark data: `/evaluations` in the repo — all raw JSON, every run
What's Still Open
- Human quality review — all scoring is model-on-model. Same-family bias acknowledged. Independent human review pending.
- Non-greenfield at scale — all real data is greenfield. Large refactoring at production scale needs its own benchmark series.
- OpenRouter multi-model routing — infrastructure exists, benchmarks pending.
Questions or replications: hello@upgpt.ai