Cursor team controls for admins
Cursor’s new model controls, spend alerts, and usage analytics for teams using subagents, skills, and rules.

The situation
Cursor’s latest changelog is aimed at enterprise admins, not individual users. It adds more granular model controls, soft spend limits with alerts, and a more detailed usage analytics tab. For teams running Cursor as part of a Cursor workshop or Cursor AI coding training program, that changes how governance fits into day-to-day work: by role, by surface, and by budget.
The main question is what to change this week so agentic work stays reviewable. If you already standardize Cursor rules, subagents, or team conventions, this release gives you a cleaner way to line up access policy with those workflows. See the related training topic on Cursor subagents and skills.
The release also changes the operating habit. Instead of hard stops that interrupt work, admins can use soft limits and alerts to keep users moving while still making consumption visible. That fits teams using Cursor custom agents, background agents, or other high-variance workflows where usage can spike during debugging or review.
Walkthrough
Start with model access. Cursor now lets admins block at the provider level or at the model configuration level, including speed and context window variants. The changelog also says enterprises can block new providers or model versions by default, and that existing blocklists need to migrate by June 1. The first task is inventory: list what is allowed, what is blocked, and which teams depend on each exception.
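One way to keep that inventory reviewable is to write it down next to the rest of the team policy. A minimal sketch as a plain markdown note; the file name, provider names, and dates are placeholders, not a Cursor artifact:
# model-access.md
## Allowed
- Provider A: standard and long-context variants (owner: platform team)
- Provider B: standard variant only
## Blocked by default
- All new providers and model versions, pending review
## Exceptions
- Team X: Provider B long-context variant for a migration (owner: named admin, expires: YYYY-MM-DD)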
A short migration note can live beside your team conventions:
# AGENTS.md
## Cursor team policy
- Use approved providers and model families only.
- New providers are blocked by default until reviewed.
- Exceptions require a named owner and expiry date.
- Agent-authored changes must include a human review step.
Next, move spend management from a hard ceiling to a soft-limit workflow. Cursor now supports soft limits and sends alerts at 50%, 80%, and 100% of the limit. That helps when you want visibility without cutting off a team in the middle of a long debugging session or a multi-step agent run. The habit to set here is simple: 50% is a heads-up, 80% is a review point, and 100% is a decision point for the admin.
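Writing down who acts at each threshold keeps the alerts from turning into noise. A minimal sketch of that note, assuming it lives beside your other team conventions; the file name and owners are placeholders:
# spend-alerts.md
- 50%: heads-up in the team channel, no action required.
- 80%: budget owner reviews usage by surface and flags anomalies.
- 100%: admin decides whether to raise the limit, pause automations, or let the period run out.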
For teams that rely on Cursor rules, this is a good time to tighten scope. The current Cursor model favors layered .cursor/rules/*.mdc files over one large rule blob. Keep always-on rules narrow, and use auto-attached or manual rules for specific repos or tasks. A minimal rule stub looks like this:
---
description: Reviewable changes for agent-authored code
alwaysApply: true
---
- Prefer small, scoped edits.
- Ask before broad refactors.
- Summarize file-level impact in the final response.
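For repo- or path-specific behavior, the same format can be scoped so the rule only attaches where it applies. A sketch of an auto-attached variant, assuming the standard .mdc frontmatter fields; the glob and wording are placeholders and may need adjusting for your setup:
---
description: Extra care in billing code
globs: services/billing/**
alwaysApply: false
---
- Do not change pricing constants without calling them out.
- Require a human review note for any schema change.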
Then use the usage analytics tab to check whether policy matches reality. Cursor now lets admins filter by user or break usage down by product surface: clients, Cloud Agents, automations, Bugbot, and Security Review. That split matters because it separates interactive IDE use from background or automated surfaces. If one team is consuming most of the budget through automations, the fix may be a workflow change; if one user is driving most of the spend through clients, the fix may be training or a rule adjustment.
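A lightweight way to make that check recurring is a short review template that forces the by-user and by-surface split. A sketch, assuming a monthly cadence; the file name and fields are placeholders:
# usage-review.md (monthly)
- Total spend vs. soft limit, and which alerts fired.
- Spend by surface: clients, Cloud Agents, automations, Bugbot, Security Review.
- Top users, and whether the pattern matches their role and current work.
- Follow-up: rule change, workflow change, training, or no action.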
This is also where subagents and skills become easier to govern. If you are packaging repeatable work into a cursor skill or delegating to a cursor subagent, answer three questions before rollout: what surface it runs on, what model access it needs, and how its output is reviewed. Apply the same discipline to an AGENTS.md boundary or a repo-specific rule file.
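Those three answers can live as a short rollout note next to the skill or subagent definition. A minimal sketch; the skill name and details are hypothetical:
## Rollout note: release-notes skill
- Surface: Cloud Agents only, triggered from team chat.
- Model access: approved default model; no long-context exception needed.
- Review: output lands as a draft PR; a named reviewer approves before merge.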
A practical starter checklist:
- Migrate any existing blocklists to the new model/provider system before June 1.
- Decide which providers are blocked by default and who can request exceptions.
- Set soft limits and document what 50%, 80%, and 100% alerts trigger.
- Review usage by user and by surface, not just by total spend.
- Tighten .cursor/rules/*.mdc scope so agent behavior matches policy.
- Add a short AGENTS.md note for review expectations on agent-authored changes.
Methodology note: in our Review step, the useful habit is to verify the policy artifact before trusting the workflow. For Cursor teams, that means checking the rule file, the boundary file, and the admin setting together, not separately.
Tradeoffs and limits
The new controls improve visibility, but they do not remove the need for local judgment. A blocked provider can still be the wrong answer if a team depends on a specific context window or latency profile for a real workload. Soft limits reduce disruption, but they can hide waste if nobody owns the alert thresholds.
Usage analytics are only useful if teams interpret them in context. A spike in Cloud Agents may be healthy during a migration and wasteful during a quiet week. The same is true for Bugbot or Security Review: surface-level totals do not explain whether the work was valuable. Treat the dashboard as a review input, not a verdict.
There is also a governance risk in over-centralizing model policy. If admins block too aggressively, teams may route around the controls or stop using agentic features altogether. The better pattern is narrow defaults, explicit exceptions, and a review loop that ties spend to actual repo outcomes.
Further reading

- Cursor in Teams for Subagents: Cursor in Teams routes tasks to cloud agents and fits cursor subagents, rules, and review habits into team chat.
- Composer 2 for Cursor Teams: Composer 2 for Cursor subagents, skills, and team workflows, with practical evals, rules, and review checks.
- Cursor 3.3 Context for Subagents and Skills: Read Cursor 3.3 context usage and tune cursor rules, skills, MCPs, and subagents without bloated prompts.