We Ran 52 AI Coding Benchmarks. Here's Every Uncomfortable Thing We Found.

CONTRACT.md cuts cost 54%, raises quality from 5/10 to 9/10. Agent Teams cost 124% more with no gain. Retry loops degrade output. 52 controlled runs, full data open-sourced.

UpGPT Team

UpGPT · April 19, 2026 · 12 min read

Why We Did This

This is the full technical version — every test, every table, every methodology detail. If you need the decision-maker summary, it's here.

We had just run 25 parallel AI workers across 7 swarms simultaneously and produced 12,500 lines of code across 96 files in 36 minutes. We had no idea what it cost. We hadn't measured quality. We'd just shipped fast.

So we ran a benchmark. Then another. Then 50 more.

What started as "let's figure out if parallel workers are worth it" turned into a set of findings that overturned almost every assumption we started with.

What We Tested

Task types:

  • T3 — Notes CRUD: SQL migration + TypeScript types + 2 API routes + Vitest tests. 3 workers. Small-to-medium.
  • T6 — Notifications system: large greenfield. 8 workers. Complex.
  • T7 — SMS refactor: modifying existing code. Pure edit.

Approaches:

  • V1 — minimal, vague prompts. Workers guess at interfaces and import paths.
  • V2 — CONTRACT.md added: workers get exact interfaces, column names, import paths, SQL conventions upfront.
  • NS — V2 with self-evolution: worker checks its own output and retries if it falls short.
  • NSX — V2 with cross-model verification: Opus reads the worker's output and writes line-level critique before retry.
  • V2O — V2 with a one-shot Opus review pass at the end (no retry loop — just a targeted surgical edit).

Architecture comparisons: Sequential · UpCommander (tmux workers) · Agent Teams (Anthropic native sub-agents)

Independent variables: CONTRACT.md on/off, architecture type, model (Haiku/Sonnet/Opus), grader (Opus/GPT-4o/Gemini).

Finding 1: CONTRACT.md is the entire game

A structured brief before the task — exact TypeScript interfaces, exact column names, exact import paths, SQL conventions, explicit non-goals — made the single largest difference of anything we tested.

2×2 factorial experiment (20 controlled runs):

CONTRACT.md Effect — 2×2 Factorial, N=20

The CONTRACT.md effect: -65% cost, -68% time, quality from 5 to 9/10. Architecture was secondary. Same model, same codebase, just a different document.

What goes in the brief that matters:

## CONTRACT.md

### Interfaces
interface Note {
  id: string;
  user_id: string;
  content: string;
  created_at: string;
}

### Database
Table: platform.notes
Columns: id (uuid), user_id (uuid FK auth.users), content (text), created_at (timestamptz)
SQL conventions: CREATE TABLE IF NOT EXISTS, no DROP POLICY

### Import paths
Types: @/lib/platform/notes/types
Supabase client: @/lib/supabase-server (server components)

### Non-goals
- No pagination in this PR
- No soft delete
- No full-text search

Workers stop exploring and start executing.
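To make that concrete, here is the Note interface from the brief translated straight into code, plus a runtime guard of the kind a worker might emit alongside the API routes. The guard (`isNote`) is our illustrative sketch, not benchmark output:

```typescript
// The Note interface, copied verbatim from CONTRACT.md.
interface Note {
  id: string;
  user_id: string;
  content: string;
  created_at: string;
}

// Illustrative runtime guard: narrows an unknown DB row to Note.
// Column names come straight from the contract, so no exploration is needed.
function isNote(row: unknown): row is Note {
  if (typeof row !== "object" || row === null) return false;
  const r = row as Record<string, unknown>;
  return (
    typeof r.id === "string" &&
    typeof r.user_id === "string" &&
    typeof r.content === "string" &&
    typeof r.created_at === "string"
  );
}
```

Because the interface is pinned upfront, the worker never has to open other files to discover what a "note" is.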

Finding 2: Agent Teams cost 73–124% more with zero quality gain

Anthropic markets Agent Teams as a way to parallelize work. Technically true. The data:

Agent Teams vs Sequential — T3 Task

T6 (large task) results:

Agent Teams vs Sequential — T6 Large Task

Every agent loads the full codebase context independently. Three agents = three copies of your 80K-token context. The cache burn dominates. Agent Teams never wins on cost. Sequential + CONTRACT wins cost every time.
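The context arithmetic is easy to sketch. This back-of-envelope model is our simplification, not the benchmark harness, but it shows why input tokens dominate:

```typescript
// Simplified cost model: every parallel agent re-reads the full codebase
// context, so input tokens scale linearly with agent count.
function contextTokens(
  contextSize: number,
  agents: number,
  mode: "sequential" | "teams"
): number {
  // Sequential loads the context once; Agent Teams load it once per agent.
  return mode === "sequential" ? contextSize : contextSize * agents;
}

const ctx = 80_000; // the 80K-token context from the example above
contextTokens(ctx, 3, "teams");      // 240_000 (three full copies)
contextTokens(ctx, 3, "sequential"); // 80_000
```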

Finding 3: Retry loops make the output worse

We wanted to test whether self-improvement retry loops could fix incorrect output without degrading quality. We built one and ran it.

N=5 on T3 with deliberate traps (wrong import paths, missing exports):

Retry Loops Degrade Quality — N=5

Self-evolution improved acceptance criteria by 1 item but degraded overall quality from 9/10 to 6/10 and cost 2.1× more.

Why? The model doesn't make surgical edits. It regenerates entire files. Fixing a broken import path means rewriting the whole route file — and losing all the CRUD endpoints and tests that were correct the first time. We observed this on every single retry attempt across all 3 runs, without exception.

There's also a ceiling: the model cannot see the blindspot it keeps creating. Every run, every retry, stalled at exactly 4/5 ACs. The 5th requirement never resolved regardless of how specific the hint was.

NS-run-1: Fix import path → regenerates route.ts → loses 3 endpoints → 4/5 ACs, 6/10
NS-run-2: Fix import path → regenerates route.ts → loses 3 endpoints → 4/5 ACs, 6/10
...same pattern, 15 retry attempts across 3 runs

Don't use retry loops for code generation. The architecture is the problem.
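If you must accept retried output, one guard follows directly from this failure mode: reject any retry whose output drops an export the previous attempt had. This is a sketch we did not benchmark, and the regex is a crude stand-in for real AST parsing:

```typescript
// Collect exported function/const/class names from a source file.
// Regex-based extraction is an illustrative shortcut, not production parsing.
function exportedNames(source: string): Set<string> {
  const names = new Set<string>();
  const re = /export\s+(?:async\s+)?(?:function|const|class)\s+(\w+)/g;
  let m: RegExpExecArray | null;
  while ((m = re.exec(source)) !== null) names.add(m[1]);
  return names;
}

// True if the retried file silently lost something the previous attempt exported
// (e.g. a regenerated route.ts that dropped POST and DELETE).
function retryLosesCode(before: string, after: string): boolean {
  const kept = exportedNames(after);
  return [...exportedNames(before)].some((name) => !kept.has(name));
}
```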

Replication caveat: this is one codebase, one model family, greenfield TypeScript tasks. The failure mode (whole-file regeneration on retry) may not appear in every setup.

Finding 4: Opus one-shot review adds nothing when the contract is good

We tested V2O (V2 + Opus reads the full output and makes surgical edits — not a retry loop, just a targeted one-shot patch):

Clean N=5 retest (full file context, no truncation):

Opus One-Shot Review — Clean N=5 Retest

Zero quality gain. +56% cost. When the CONTRACT.md is well-formed, Sonnet already reaches 9.8/10 — there's nothing for Opus to fix.

The lesson: Write the contract right. Don't retry. Don't add a review pass. The brief is the quality lever.

Finding 5: AST compression cuts tokens 91%

CONTRACT generation for refactoring tasks was expensive — the generator had to read the entire codebase ($0.36 vs $0.15–0.17 for greenfield). We adapted the AST-summary approach from agora-code: tree-sitter parsing, export-only extraction, cached by git blob SHA.

Results on 28 production files:

AST Index Compression — Production Codebase

118x compression. For a large T6 session: baseline $5.45 → $0.85 stacked with CONTRACT.md. Zero quality tradeoff.
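A minimal sketch of the export-only idea follows. The real module uses tree-sitter across 12 languages and caches by git blob SHA; this line-based filter is only an illustration of the shape of the output:

```typescript
// Keep only top-level export signature lines; drop every function body.
// A stand-in for the tree-sitter extraction described above.
function astSummary(source: string): string {
  return source
    .split("\n")
    .filter((line) => /^\s*export\s/.test(line))
    .map((line) => line.replace(/\s*\{\s*$/, "")) // strip the body opener
    .join("\n");
}

// Rough size ratio of original to summary (e.g. 118 for 118x).
function compressionRatio(source: string): number {
  return source.length / Math.max(astSummary(source).length, 1);
}
```

The summary preserves exactly what a downstream worker needs (names and signatures) while discarding implementation detail.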

Finding 6: Haiku + CONTRACT ≈ Sonnet + CONTRACT (at 64% less cost)

We tested all three model tiers with identical CONTRACT.md prompts to isolate whether the scaffolding or the model was doing the work:

Model Comparison — Same CONTRACT.md (N=5 each)

Haiku scores 9.0/10 at 36% of Sonnet's cost. The scaffolding does most of the work. Opus adds 0.2 points at a 69% premium — not justified.

Implication: For boilerplate workers in multi-worker sessions, route Haiku. Route Sonnet only to workers making non-trivial design decisions.
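The routing rule reduces to a few lines. The model IDs and cost factor below are illustrative placeholders, not the UpCommander API:

```typescript
// Route boilerplate workers to the cheap tier; design work to the strong tier.
type WorkerKind = "boilerplate" | "design";

// Haiku at 36% of Sonnet's per-task cost, per the benchmark above.
const relativeCost: Record<string, number> = {
  "claude-haiku": 0.36,
  "claude-sonnet": 1.0,
};

function routeModel(kind: WorkerKind): string {
  return kind === "boilerplate" ? "claude-haiku" : "claude-sonnet";
}
```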

Finding 7: Cross-vendor grading agrees within ±1 point

All quality scoring in this project uses Opus as the grader. We validated this by grading the same 5 V2 outputs with three model families simultaneously:

Cross-Vendor Grading — Same 5 Outputs

Cross-vendor spread: ±1.0 pts. Opus grading is directionally reliable. Gemini is systematically stricter and catches issues the others miss (unused NoteListOptions in the test file) — worth adding to production quality pipelines.
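The agreement check itself is trivial to sketch. This is an illustration of the validation logic, not the grading harness:

```typescript
// Spread across graders for one output: max score minus min score.
function gradingSpread(scores: number[]): number {
  return Math.max(...scores) - Math.min(...scores);
}

// A single-grader setup is "directionally reliable" when cross-vendor
// disagreement stays within the tolerance (±1 point here).
function directionallyReliable(scores: number[], tolerance = 1.0): boolean {
  return gradingSpread(scores) <= tolerance;
}
```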

The Stacked Numbers

All improvements applied to a large T6 session:

Stacked Savings — T6 Session

$5.45 → $0.83. -85%. Same model throughout.
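The headline number checks out arithmetically:

```typescript
// Percentage saved, rounded to whole points.
function percentSaved(before: number, after: number): number {
  return Math.round((1 - after / before) * 100);
}

percentSaved(5.45, 0.83); // 85
```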

The Six Rules

  1. Write the CONTRACT first. Always, for any task touching 3+ files. Costs ~$0.15 to generate. Saves 47–54% on the task. Paid back on the first run, every time.
  2. Don't use Agent Teams for cost-sensitive work. 73–124% more expensive. No quality benefit. Consistent across every run at N=5.
  3. Don't use retry loops. They degrade quality (9→6/10) and cost 2×. The model regenerates whole files when it retries — correct sections disappear. Skip self-evolution entirely.
  4. Don't add an Opus review pass when your contract is good. Sonnet + CONTRACT already hits 9.8/10. Write a better brief instead of paying for a review.
  5. Compress your file context with AST extraction. 91% token reduction, zero quality tradeoff.
  6. Use Haiku for boilerplate workers. 9.0/10 quality at 64% of Sonnet's cost. The scaffolding does the work.

The CLI

# Install
npm install -g @upgpt/upcommander-cli

# Set your key
export ANTHROPIC_API_KEY=sk-ant-...

# Generate contract + run worker
upcommander run "add pagination to the notes API"

# Or: generate contract first, review it, then run
upcommander contract "add pagination to the notes API"
upcommander run "add pagination to the notes API"

# Quality review on specific files (Opus one-shot)
upcommander review src/app/api/notes/route.ts

# Regenerate the codebase index
upcommander index

The repo includes: contract generator (Sonnet, ~$0.15 per contract), L0/L1/L2 codebase index (118x compression), AST-summary module (tree-sitter, 12 languages), ephemeral Haiku orchestrator (-96% orchestration cost), worker recipes, and all 52+ benchmark evaluation files in /evaluations.

  • GitHub: github.com/UpGPT-ai/upcommander
  • npm: npm install -g @upgpt/upcommander-cli
  • Full benchmark data: /evaluations in the repo — all raw JSON, every run

What's Still Open

  1. Human quality review — all scoring is model-on-model. Same-family bias acknowledged. Independent human review pending.
  2. Non-greenfield at scale — all real data is greenfield. Large refactoring at production scale needs its own benchmark series.
  3. OpenRouter multi-model routing — infrastructure exists, benchmarks pending.

Questions or replications: hello@upgpt.ai

Frequently Asked Questions

What is a CONTRACT.md in AI-assisted development?
A CONTRACT.md is a structured brief written before any AI worker touches code. It contains exact TypeScript interfaces, database column names, import paths, SQL conventions, and explicit non-goals. It eliminates the exploration phase where AI workers guess at interfaces — cutting cost 54% and raising quality from 5/10 to 9/10 in controlled experiments.
Are Anthropic Agent Teams faster and cheaper than sequential AI workers?
Agent Teams are faster in wall-clock time but cost 73–124% more than sequential execution at equivalent quality. Every agent loads the full codebase context independently, so three agents means three copies of an 80,000-token context. For cost-sensitive work, sequential execution with a CONTRACT.md beats Agent Teams every time.
Do retry loops improve AI code quality?
No — retry loops degrade quality and cost more. In controlled N=5 experiments, self-improvement retries degraded overall quality from 9/10 to 6/10 and cost 2.1× more. The model regenerates entire files when retrying instead of making surgical edits, destroying previously-correct sections. The right lever is a well-written brief, not verification architecture.
How much do AST compression techniques reduce token usage?
AST compression using tree-sitter parsing and export-only extraction achieves 85–91% token reduction on production codebases. In our benchmark across 28 production files, compression reached 118× with zero quality tradeoff. This is set up once and runs automatically on each session.
Is Haiku as good as Sonnet for AI coding tasks?
Haiku with a CONTRACT.md scores 9.0/10 versus Sonnet's 9.8/10 on identical tasks — at 64% lower cost. The scaffolding (CONTRACT.md, AST index) does most of the quality work. For boilerplate workers in multi-worker sessions, Haiku is the better economic choice.
What is UpCommander?
UpCommander is an open-source CLI for AI-assisted software development that implements CONTRACT-first development, AST compression, multi-worker orchestration, and model routing. It is available at github.com/UpGPT-ai/upcommander and via npm install -g @upgpt/upcommander-cli.
Tags: ai · benchmarks · claude · llm · developer-tools · upcommander · cost-optimization · contract-driven-development