
Make Codebases Easier for Agents
Structure repos so agents can find code, make local changes, and verify them fast.
We publish practical research on agentic coding, team adoption, multi-agent workflows, and the habits that hold up in production.

Practical workflows for coding tools that still work when the task gets messy.

Use an agent to turn public signals into a prospect list and draft outreach for human review.

Give an agent one narrow creative job and review the output against a clear checklist.

A small instruction change can make agent output easier to review and trust.

A practical look at coding tools that stay useful after the demo.

Shared integrations can turn design handoff into repeatable UI work.

Small, quick evals that fit the edit loop and support real coding decisions.

A practical pattern for unsticking coding agents on long tasks.

Clear specs, good tests, and stable stacks make agentic coding more reliable.

CSS selectors make agent-written E2E tests brittle. Use stable, user-facing hooks instead.
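One way to enforce this is a small lint pass over agent-written tests. The heuristic below is a minimal sketch (the regex and the notion of a "stable" prefix are illustrative, not a standard): it flags selectors that chain tags, classes, or ids, since those track markup structure and styling, and accepts user-facing hooks like `data-testid`, `aria-label`, or role/text locators.

```python
import re

# Heuristic: selectors that couple to tag/class/id structure break when the
# markup is refactored, even if the user-visible behavior is unchanged.
BRITTLE = re.compile(r"""(^|[\s>+~])      # start of selector, or a CSS combinator
                         [a-z]*[.#][\w-]+ # tag.class, .class, or #id
                      """, re.VERBOSE)

# Prefixes we treat as stable, user-facing hooks (list is illustrative).
STABLE_PREFIXES = ("[data-testid=", "[aria-label=", "role=", "text=")

def flag_brittle(selector: str) -> bool:
    """Return True when the selector should be replaced with a stable hook."""
    if selector.strip().startswith(STABLE_PREFIXES):
        return False
    return bool(BRITTLE.search(selector))
```

Run over a test file, this turns "use stable hooks" from a review comment into a check the agent can see and fix before a human looks at the diff.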

A practical prompt pattern for surfacing more bugs in agentic coding workflows.

Practical workflow patterns for teams using coding agents on real engineering tasks.

A practical look at AI coding tools that stay useful after the demo.

How composer layers turn intent into edits, and where the workflow still breaks.

Run coding agents in scripts, CI, and automation with checks that keep output reviewable.

Why long-running coding agents help on iterative, verification-heavy tasks.

Practical patterns for keeping coding agents useful on messy tasks.

A practical look at which AI coding tools stay useful after the first demo.

Agentic teams get better results from workflow design than from manual prompt tuning.

Why docs return markdown for agents, and how to do it without hurting human readers.
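The usual mechanism is Accept-header content negotiation on the docs route. The sketch below is a simplified illustration (it ignores q-values and assumes markdown is the source of truth); the point is that agents asking explicitly for markdown get the raw source, while browsers, which always send `text/html` in their Accept header, see the rendered page as before.

```python
def negotiate(accept_header: str) -> str:
    """Pick a docs response type from the request's Accept header.

    Clients that explicitly request text/markdown (and not text/html)
    get markdown; everything else, including browsers, gets HTML.
    """
    wants_md = "text/markdown" in accept_header
    wants_html = "text/html" in accept_header
    return "text/markdown" if wants_md and not wants_html else "text/html"

# An agent sending "Accept: text/markdown" hits the markdown branch;
# a browser's "text/html,application/xhtml+xml,..." falls through to HTML.
```

A production version would parse q-values properly, but this shape keeps the human-facing site untouched while cutting the token cost of HTML for agents.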

Let coding agents verify UI changes in a real browser, then patch based on what they see.

A practical look at refreshed dependencies and short rules that make AI coding easier to review.

Why some coding agents add extra checks, where that helps, and where it slows reviews.

Browser backend choice can change agent speed, retries, and recovery.

When wrappers help, where they fail, and how to test them on real coding work.

Give each subagent one job, one output, and one stop rule.
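That contract can be made explicit in the orchestrator. The dataclass below is a hypothetical sketch (field names and the example task are invented for illustration): each subagent gets exactly one job, one expected artifact, and one termination condition, so runs can't sprawl.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SubagentTask:
    """One job, one output, one stop rule."""
    job: str        # the single narrow objective
    output: str     # the one artifact the subagent must produce
    stop_rule: str  # the condition that ends the run

task = SubagentTask(
    job="Rename config.load to config.read across the repo",
    output="A single patch touching only call sites",
    stop_rule="Stop when tests pass once, or after 3 failed attempts",
)
```

Writing the stop rule down matters most: without it, a subagent that can't finish keeps burning context instead of returning control to the orchestrator.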

A practical look at rollout patterns for AI coding tools across regions and teams.

Practical steps and tradeoffs for keeping AI coding workflows effective with agent orchestration.

Workflow patterns that help engineering teams get consistent results from AI coding tools.

Background subagents run tasks in parallel, making AI coding workflows more efficient.

How engineering teams adapt documentation to return markdown for coding agents: token efficiency, implementation steps, tradeoffs, and limitations.

How integrating Playwright MCP into agentic coding workflows affects iteration speed, with implementation steps, tradeoffs, and limitations for engineering teams.

A practical look at returning markdown from internal docs for coding agents: why teams do it, how to implement it, and what it actually changes in agentic workflows.

A concrete look at how to wire Playwright MCP into agentic coding workflows, what it actually changes for teams, and where the limits are.

A practical look at how AI coding agents change workflows, quality bars, and expectations for engineering teams—and what to do when a core model quietly gets worse.

A practical look at how an "Opus 4.6" style orchestrator could coordinate a fleet of "Codex 5.3" coding agents, what this changes for engineering teams, and how to implement it without breaking your workflow.