Cursor 3 subagents and skills
Practical Cursor 3 setup for cursor subagents, rules, and team handoffs in shared repos.

The situation
Field note: Parallel Cursor agents amplify whichever repo contract is weakest, so we teach teams to freeze AGENTS.md and .mdc splits before adding another subagent lane.
Official anchors: Cursor Agent docs, Rules, Skills.
Cursor’s April 2026 update pushes AI coding toward multiple agents working in parallel. For teams, the real issue is not getting code out of an agent. It is keeping cursor subagents scoped, reviewable, and aligned with repo rules.
For Cursor users, the practical question is how to organize cursor skills, cursor rules, and team conventions so the IDE produces work another engineer can trust. Cursor’s docs separate rules, agents, MCP, and skills, so each needs its own review path.
This is for engineering teams using Cursor in shared repositories. The goal is a setup that works in practice: scoped rules, a clear AGENTS.md boundary, and a review loop that catches drift before it reaches a pull request. For the broader training path, see Cursor subagents and skills.
Walkthrough
Start by separating what should always be true from what should apply only in one folder or task. Cursor rules are the right place for repo-specific behavior, while AGENTS.md works for team conventions and instructions that should travel with the codebase. If you are still using one large rules file, split it into scoped files before adding more agent behavior.
A minimal rule stub is enough to show the pattern:

```
---
description: API route conventions
globs:
  - app/api/**/*.ts
alwaysApply: false
---
Use existing route helpers.
Keep validation close to the handler.
Prefer small changes that preserve current error shapes.
```
That kind of scoped rule is easier for a cursor subagent to follow than a long global policy. It also gives reviewers something concrete to check. If the agent touched an API route, the reviewer can verify the file path and the local convention.
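That file-path check can be made mechanical. The sketch below matches changed file paths against a rule's globs with a hand-rolled glob-to-regex translation; it is an illustrative approximation for review tooling, not Cursor's actual rule matcher.

```typescript
// Translate a simple glob (supporting ** and *) into a RegExp.
// Approximation for review scripts; not Cursor's real matcher.
function globToRegExp(glob: string): RegExp {
  const source = glob.replace(/\*\*\/|\*\*|\*|[.+^${}()|[\]\\]/g, (token) => {
    if (token === "**/") return "(?:.*/)?"; // any depth, including none
    if (token === "**") return ".*";        // matches across path segments
    if (token === "*") return "[^/]*";      // stays inside one segment
    return "\\" + token;                    // escape regex metacharacters
  });
  return new RegExp(`^${source}$`);
}

// Return the changed files that fall under any of a rule's globs.
function filesCoveredByRule(globs: string[], changedFiles: string[]): string[] {
  const patterns = globs.map(globToRegExp);
  return changedFiles.filter((f) => patterns.some((p) => p.test(f)));
}
```

A reviewer can then ask one precise question: did every file the subagent touched show up under the rule it claims to have followed?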
Next, define the team boundary in AGENTS.md. Keep it short and operational. The point is not to explain the whole repository. It is to say what must not change, what must be verified, and where human review is required.
```markdown
# AGENTS.md
- Do not change auth flows without a maintainer review.
- Prefer existing test helpers over new fixtures.
- If a change affects shared types, update the consumer package first.
- Summarize any cross-package impact in the PR description.
```
Now add one delegation rule for cursor custom agents or subagents: use them when the task has a bounded scope and a clear return format. Good fits are refactors, test generation, or browser-debugging tasks. Bad fits are open-ended architecture changes, because the parent agent still has to reconcile too much context.
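One way to make "bounded scope and a clear return format" concrete is to type the delegation request itself. The shape below is a hypothetical team convention, not a Cursor API; the field names are illustrative.

```typescript
// Hypothetical delegation contract for a subagent task; not a Cursor API.
interface SubagentTask {
  goal: string;             // one bounded objective
  allowedPaths: string[];   // folders the subagent may touch
  forbiddenGlobs: string[]; // paths it must never edit
  returnFormat: "diff" | "test-report" | "summary";
}

// A task that fits delegation: narrow scope, explicit return format.
const refactorTask: SubagentTask = {
  goal: "Extract duplicated validation in API routes into the shared helper",
  allowedPaths: ["app/api"],
  forbiddenGlobs: ["app/api/auth/**"],
  returnFormat: "diff",
};
```

An open-ended task like "modernize the architecture" fails this shape immediately: it has no single goal and no path boundary, which is the signal to keep it with the parent agent.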
A short review checklist keeps the handoff honest:
- Did the agent use the smallest applicable rule or skill?
- Did it stay inside the folder or package boundary?
- Are tests or checks attached to the change?
- Does the summary explain what changed and what did not?
- Could a teammate reproduce the result from the artifact alone?
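The boundary items in that checklist can be automated in CI. A minimal sketch, assuming the team expresses boundaries as allowed folder prefixes and forbidden path prefixes (a convention, not a Cursor feature):

```typescript
// Reject a handoff whose changed files escape the agreed boundary.
// Folder and prefix conventions here are assumptions, not Cursor APIs.
function boundaryViolations(
  changedFiles: string[],
  allowedFolders: string[],
  forbiddenPrefixes: string[],
): string[] {
  return changedFiles.filter((file) => {
    const inside = allowedFolders.some((dir) => file.startsWith(dir + "/"));
    const forbidden = forbiddenPrefixes.some((p) => file.startsWith(p));
    return !inside || forbidden;
  });
}
```

A non-empty result fails the handoff before a human ever reads the diff, which keeps review time for the questions a script cannot answer.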
Cursor’s docs also place MCP and skills in the same operating model. Treat MCP as the connector boundary and skills as on-demand capability. A skill should describe a repeatable task. MCP should be reviewed like any other external integration. If a team installs a plugin or custom skill, the question is not only whether it works, but what permissions, data paths, and failure modes it introduces.
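One way to keep that permission review concrete is to diff a connector's requested scopes against an explicit approved list. The config shape below is hypothetical; real MCP servers declare capabilities in their own formats, so check the provider's docs.

```typescript
// Hypothetical MCP connector entry; real server configs vary by provider.
interface ConnectorReview {
  name: string;
  requestedScopes: string[];
}

// Flag any scope the team has not explicitly approved.
function unapprovedScopes(
  connector: ConnectorReview,
  approved: Set<string>,
): string[] {
  return connector.requestedScopes.filter((s) => !approved.has(s));
}

const github: ConnectorReview = {
  name: "github",
  requestedScopes: ["repo:read", "repo:write", "org:admin"],
};
```

Anything flagged here is a conversation with a maintainer, not a toggle: the connector either drops the scope or the approved list is widened deliberately.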
That is where the workflow becomes workshop-ready. Use Plan Mode or agent mode to draft the change, then force a review step that checks the artifact, not just the output. In our methodology, this is a Review move: the agent can propose, but the team still validates scope, permissions, and file-level fit before merge. See our methodology for the review step framing.
Team artifact
Parallel-agent rollout gate:
- Each subagent prompt lists allowed folders + forbidden glob patterns
- Parent summaries cite which `.mdc` fired for delegated work
- MCP scopes pasted into the rollout ticket when connectors ran
What to verify
- Docs at cursor.com/docs still describe parallel modes you enabled.
- CI mirrors local `.cursor/rules` resolution order.
- Internal wiki links /topics/subagents-and-skills for onboarding.
Tradeoffs and limits
Cursor’s layered model helps only if the team keeps the layers distinct. If every instruction becomes a global rule, subagents lose local context and start behaving like one noisy assistant. If every task becomes a custom agent, the overhead can outweigh the value.
There is also a trust problem. Parallel agents make it easier to ship more changes, but they also make it easier to miss cross-file regressions. The more autonomous the workflow, the more important it is to keep tests, summaries, and ownership boundaries explicit.
MCP and skills add power, but they also expand the review surface. A connector that reaches into GitHub, Slack, or a private knowledge base should be treated as production infrastructure, not a convenience toggle. If the team cannot explain the permission boundary, it is not ready for broad use.
Cursor’s product language is still moving, so verify current behavior in the official docs before you standardize a workshop or internal playbook.
Further reading
Related training topics
Related research

Cursor in Teams for Subagents
Cursor in Teams routes tasks to cloud agents and fits cursor subagents, rules, and review habits into team chat.

Cursor 3.3 Context for Subagents and Skills
Read Cursor 3.3 context usage and tune cursor rules, skills, MCPs, and subagents without bloated prompts.

Cursor subagents and skills for teams
A practical Cursor workshop on subagents, skills, and team workflows for reviewable AI coding systems.