Cursor Canvases for Team Workflows
Cursor Canvases add reviewable side-panel artifacts for agent work. Read the workflow, review rules, and team training patterns for Cursor Canvases.

The situation
Field note: teams demo canvases quickly, but production friction shows up when reviewers cannot diff canvas output against .mdc rules. Operators should rehearse one workflow end-to-end before standardizing.
Official anchors: Cursor Agent docs, Rules.
Cursor’s April 15 changelog adds canvases as a new agent output in the Agents Window. Instead of only replying in chat, Cursor can create interactive artifacts that use first-party components like tables, boxes, diagrams, and charts, plus Cursor components such as diffs and to-do lists.
That changes the shape of a useful agent response. For many coding tasks, the next step is a reviewable artifact a teammate can inspect, edit, and keep open beside the terminal, browser, and source control.
This fits teams already using Cursor subagents, skills, and repo rules. Canvases sit in the same workflow surface: they are output, but they are also something you can review and keep around. If you are building habits around subagents and skills, canvases are worth testing as a place where agent work becomes visible and durable.
How to try it
Start with the artifact you want the agent to leave behind. For planning, that may be a checklist. For debugging, it may be a compact diagram or a table of failing cases. For review, it may be a diff plus a short decision log. The goal is to move from chat to a persistent side-panel artifact.
Try one team workflow at a time in Cursor 3.1, in the Agents Window or the editor. A good first test is a repo task that already depends on rules and review discipline. Ask the agent to summarize a change, then compare the canvas with the repo’s instructions in AGENTS.md or scoped Cursor rules. If the canvas is useful, it should make those conventions easier to check.
A narrow rule file helps here. Keep the rule focused on one local behavior, then use the canvas to verify the result.
```
---
description: Use this rule for API route changes in the billing area.
globs:
- app/billing/**
- src/api/billing/**
alwaysApply: false
---
- Prefer small diffs and explicit error handling.
- Update tests when request or response shapes change.
- Summarize any user-facing behavior change in the final artifact.
```
That kind of .mdc stub gives the agent a local boundary. The canvas then becomes a place to check whether the agent stayed inside it. In practice, this is closer to a review surface than a chat transcript.
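One quick way to check that boundary is to match the changed file paths against the globs from the rule's frontmatter. The sketch below uses Python's `fnmatch` as a rough stand-in for real glob semantics (`**` is treated like `*`, which is close enough for a spot check), and the changed-file list is illustrative.

```python
from fnmatch import fnmatch

# Globs copied from the .mdc frontmatter above (illustrative).
RULE_GLOBS = ["app/billing/**", "src/api/billing/**"]

def in_scope(path: str, globs: list[str]) -> bool:
    """Return True if the changed path falls under any rule glob."""
    # fnmatch treats '**' like '*', which is close enough for a quick check.
    return any(fnmatch(path, g) for g in globs)

# Hypothetical list of files the agent touched.
changed = ["app/billing/invoice.ts", "src/auth/session.ts"]
out_of_scope = [p for p in changed if not in_scope(p, RULE_GLOBS)]
print(out_of_scope)  # paths the billing rule does not cover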
If your team already uses AGENTS.md, keep it for durable repo conventions and use the canvas for task-specific output. A minimal boundary note might look like this:
```
# AGENTS.md
- Follow the billing API contract in `docs/billing-contract.md`.
- Do not change shared auth helpers without review.
- Prefer changes that are easy to inspect in one PR.
```
The habit to build is simple: ask the agent for a canvas, then review the canvas before trusting the code. Check three things. Does it match the repo rules? Does it show the evidence you need? Can another teammate act on it without reopening the whole conversation?
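Those three questions can be turned into a rough automated gate over a canvas export. The JSON field names below are assumptions for illustration only, not a documented Cursor format.

```python
import json

# Hypothetical canvas export shape; Cursor's real format may differ.
canvas = json.loads("""
{
  "rule_files": [".cursor/rules/billing.mdc"],
  "evidence": ["diff", "test run"],
  "next_action": "Merge after updating the billing contract tests."
}
""")

def review(canvas: dict) -> list[str]:
    """Flag whichever of the three review questions the canvas fails to answer."""
    problems = []
    if not canvas.get("rule_files"):
        problems.append("does not cite the repo rules it followed")
    if not canvas.get("evidence"):
        problems.append("shows no evidence (diff, test run, log)")
    if not canvas.get("next_action"):
        problems.append("gives no next action a teammate can take")
    return problems

print(review(canvas))  # an empty list means all three checks pass
```

An empty result is a go signal for a human review, not a substitute for one.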
Our methodology treats this as a review problem as much as a feature problem. If the artifact cannot be reviewed, it is not ready to use.
Starter checklist:
- Pick one recurring task: planning, debugging, or PR review.
- Add or tighten one scoped `.cursor/rules/*.mdc` file.
- Keep `AGENTS.md` for repo-wide conventions only.
- Ask the agent to return a canvas, not just a chat answer.
- Review whether the canvas makes the next action obvious.
- Record what was missing, then adjust the rule or convention.
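The last two steps of the checklist can be captured as a lightweight rehearsal log, so gaps feed back into the rule file over time. The record format and file name below are assumptions, not a Cursor feature.

```python
import json
from datetime import date

# Hypothetical rehearsal record; nothing here is a Cursor feature.
entry = {
    "date": date.today().isoformat(),
    "task": "PR review for billing route change",
    "canvas_made_next_action_obvious": False,
    "missing": ["no link to the failing test"],
    "rule_adjustment": "Add 'link the failing test' to the billing rule file",
}

# Append to a newline-delimited log kept beside the rules directory.
with open("canvas-rehearsals.jsonl", "a") as f:
    f.write(json.dumps(entry) + "\n")
```

A few of these entries make it obvious which conventions the canvases keep missing.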
Team artifact
Canvas rehearsal checklist for reviewers:
- Canvas JSON/export attached beside the PR diff
- Matching `.mdc` globs cited when UI contracts shifted
- `AGENTS.md` boundary quoted if repo-wide behavior changed
What to verify
- Canvas components align with current Agents Window docs.
- No privileged data pasted into shared canvases without redaction.
- Training links reference `/topics/subagents-and-skills` for deeper drills.
Tradeoffs and limits
Canvases do not replace repo rules, skills, or subagents. They are an output surface. If the instructions are vague, the canvas will usually be vague too.
They also do not remove governance work. A durable artifact can still be wrong, incomplete, or overconfident. Teams still need to check permissions, enforce review boundaries, and distinguish a helpful summary from a verified change.
Canvases are most useful when the task benefits from a persistent side-panel artifact. They are less useful for tiny edits, one-line fixes, or work that is already best handled directly in the editor. If every task becomes a canvas, the workflow will slow down.
The official changelog is the source of truth for what Cursor shipped here, so verify current docs before standardizing a process around canvases.