
Cursor subagents and skills for teams

A practical Cursor workshop on subagents, skills, and team workflows for reviewable AI coding systems.

Rogier Muller · May 9, 2026 · 6 min read

The situation

Field note: We teach Cursor teams to separate “cool demo agent” from “mergeable agent output.” Without scoped .mdc rules and one trusted AGENTS.md, subagents amplify drift instead of reducing it.

Cursor’s own framing is useful because it names a change teams are already seeing: the product is moving from autocomplete to prompt-and-response agents, then to longer-running agents that do more work with less steering. For Cursor users, the bottleneck is no longer just writing code faster. It is deciding how to organize Cursor subagents, skills, rules, and team conventions so that other people can review them.

For engineering teams, this is the point where “try the agent” stops being enough. If agents are going to handle larger tasks independently, you need artifacts that last beyond one chat: scoped rules, repo-level memory, task-specific skills, and clear boundaries for what a subagent may change. Cursor’s docs already point in that direction through Agent, Rules, MCP, and Skills.

The practical question is simple: what should a team create this week so Cursor produces less drift and more repeatable output? The answer is not a bigger prompt. It is a smaller operating system around the agent.

Walkthrough

Start by separating three layers that often get mixed together. Use AGENTS.md for durable repo conventions. Use .cursor/rules/*.mdc for scoped behavior that should attach only in the right files or contexts. Use skills for repeatable, on-demand procedures that are better expressed as a task package than as a permanent rule.
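As a concrete anchor, the three layers map onto a small set of files. The layout below is a sketch, not a spec: the rule filename is invented for illustration, and the skills path varies by Cursor version, so check the docs for where skills live in yours.

repo-root/
├── AGENTS.md                          # durable repo conventions
└── .cursor/
    ├── rules/
    │   └── frontend-components.mdc    # scoped rule, attached via globs
    └── skills/
        └── pr-review-summary/
            └── SKILL.md               # on-demand task package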

A minimal rule stub is enough to show the pattern:

---
description: Frontend component rules
globs:
  - src/components/**
alwaysApply: false
---

Prefer small diffs, preserve existing props, and update tests when behavior changes.

That file does two useful things. First, it narrows context so Cursor does not carry irrelevant instructions everywhere. Second, it makes the rule reviewable in code review, which is the real test for custom subagents and agent-driven edits: can another engineer see why the model behaved that way?

Next, add a short AGENTS.md boundary at the repo root. Keep it about what should never vary, not about every task the team might do.

# AGENTS.md

- Use the existing folder structure; do not move files unless the task asks for it.
- Prefer targeted edits over rewrites.
- Run the project’s test command before marking work complete.
- If a change touches auth, billing, or data access, ask for review.

Team artifact

Ship this checklist beside Cursor onboarding docs:

  • Reference AGENTS.md and the relevant .mdc globs before delegating work to subagents
  • Quote the skill activation when a repeatable workflow is rerun
  • Cite /topics/subagents-and-skills in diff summaries for deeper reading

This is where Cursor’s third era becomes concrete. The agent is no longer just generating code; it is operating inside a team contract. That contract should be short enough to read, strict enough to matter, and easy to update when the repo changes.

Then define one skill for a repeated workflow that the team keeps re-explaining. A skill is the right place for a procedure like “prepare a release note,” “triage a failing test,” or “summarize a PR for review.” Keep the description specific so the agent can decide when to load it.

---
name: pr-review-summary
description: Summarize a Cursor-authored pull request with changed files, risks, and test status.
---

1. List the files changed.
2. State the intended behavior change.
3. Call out risky areas.
4. Note tests run and any gaps.

That is a better fit than a permanent rule when the behavior is task-shaped rather than repo-shaped. It also matches the workshop pattern: teach the team to package reusable knowledge, not scatter it across prompts.

For subagents, keep delegation narrow. A Cursor subagent should own one bounded job, return a summary, and avoid making unrelated edits. In practice, that means assigning tasks like “inspect failing tests,” “draft a migration plan,” or “review a browser bug reproduction,” then requiring the parent agent or a human to reconcile the result. The failure mode is coordination drift: the subagent solves a local problem while the parent loses the thread.
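What “bounded” means is easier to show than to define. The wording below is illustrative rather than any Cursor-specific syntax; the point is that the delegation names a scope, a deliverable, and a stop condition up front.

Task: Inspect the failing tests in src/payments/.
Scope: Read-only outside src/payments/; do not edit unrelated files.
Deliverable: A summary listing each failing test, its suspected cause,
and a proposed fix. Do not apply the fix.
Stop condition: If the cause appears to live outside src/payments/,
report back instead of expanding scope.

The stop condition is what prevents coordination drift: the subagent returns the thread to the parent instead of quietly widening its own job.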

What to verify

  • Did the change stay inside the intended files or globs? (A quick check is sketched after this list.)
  • Did the agent follow the repo’s AGENTS.md rules?
  • If a skill was used, was it the right one for a repeatable task?
  • If a subagent was delegated work, did it return a concise summary and not just raw output?
  • Were tests or validation steps run before review?
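The first item on that list is mechanical enough to script. Assuming the frontend rule above, with globs limited to src/components/, a one-line diff audit surfaces any file the agent touched outside that scope:

# List changed files that fall outside the intended glob.
# Any output means the agent edited files it should not have.
git diff --name-only origin/main...HEAD | grep -vE '^src/components/'

An empty result does not prove the change is safe, but a non-empty one is an immediate review flag.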

That checklist is also the right place for a small methodology note. In the Review step, the goal is not to admire the output; it is to verify that the agent’s work is explainable, scoped, and safe to merge.

If you are running a Cursor workshop or Cursor AI coding training for an engineering team, this is the material to cover first: rules, skills, subagents, and review boundaries. It is also the right place to point people to subagents and skills when they need a deeper pass.

Finally, connect the workflow to Cursor’s official surfaces. Cursor’s docs cover Agent, Rules, MCP, and Skills, and the product blog’s third-era framing makes the direction explicit: larger tasks, longer timescales, less hand-holding. If your team is adopting Cursor for engineering work, the next move is to make that direction operational in the repo.

Tradeoffs and limits

This approach does not remove the need for judgment. Scoped rules can still conflict if they overlap too broadly. Skills can become stale if nobody updates the description or the steps. Subagents can create false confidence if their summaries are treated as proof instead of evidence.
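Overlap is easy to create by accident. A hypothetical pair of rule files shows the failure: both attach to the same component files, and the agent has to reconcile whatever they disagree on.

# .cursor/rules/frontend.mdc
globs:
  - src/**

# .cursor/rules/components.mdc
globs:
  - src/components/**

Every file under src/components/ now loads both rules. Keep globs disjoint, or make the broader rule state explicitly which one wins.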

There is also a maintenance cost. A clean .cursor/rules/ tree and a short AGENTS.md are useful only if the team keeps them current. If the repo changes faster than the rules, the agent will faithfully follow outdated instructions. That is worse than having no rule at all.

MCP and other connectors add another layer of review. Once a workflow reaches outside the repo, permission boundaries matter more than prompt quality. Teams should treat connector scope as part of the design, not as an afterthought.
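Concretely, connector scope is usually a one-file review. A minimal sketch, assuming a project-level .cursor/mcp.json following the common MCP server-config shape and the reference GitHub server; both are assumptions to verify against your Cursor version.

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<read-only token>"
      }
    }
  }
}

The review question is the token, not the JSON: a read-only token bounds what any prompt, good or bad, can do outside the repo.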

The practical limit is simple: Cursor can help build the factory, but the factory still needs operators. The best teams use agents for repeatable work and keep humans on the boundary where intent, risk, and review meet.
