Sentry MCP Server – Model Context Protocol Server for Windsurf

Pricing: Free

Official Sentry MCP server: list issues, fetch traces and inspect performance in Windsurf AI workflows.

Curated by AI Stack · Platform pick
Category: Monitoring & Observability
Company: Sentry
Compatible Tools:
Windsurf (Primary)

Featured on AI Stack

Add this badge to your README or site so visitors know this MCP is listed in our directory.

Listed on AI Stack MCP Directory
<a href="https://ai-stack.dev/mcps/sentry-mcp-windsurf" target="_blank" rel="noopener noreferrer" style="display:inline-block;padding:6px 12px;background:#1a1f27;color:#93c5fd;border:1px solid #2d323a;border-radius:6px;font-size:12px;text-decoration:none;font-family:system-ui,sans-serif;">Listed on AI Stack MCP Directory</a>

About Sentry MCP Server

Quick overview of why teams use it, how it fits into AI workflows, and key constraints.

Sentry in AI Workflows Without Context Switching

Sentry's Model Context Protocol (MCP) server integration allows AI-powered coding assistants like Claude and Cursor to directly access Sentry's rich debugging data without having to manually navigate between disparate dashboards, scripts, and APIs. By integrating the Sentry MCP server, these AI agents can pull the right data or actions from the underlying Sentry system to seamlessly handle a wide range of workflows, from incident response to automated monitoring and summarization.

Instead of repeatedly context-switching between various tools and interfaces, developers can now delegate common Sentry-related tasks to their AI assistant, which can pull the necessary information and perform the required actions through the MCP integration. This streamlines the development and troubleshooting process, enabling developers to stay focused on their core work while offloading repetitive or complex Sentry-related tasks to their AI teammate.

How Sentry MCP Server Improves AI‑Assisted Workflows

The Sentry MCP server integration unlocks a variety of AI-assisted workflows, including:

  • Incident response and triaging: The AI agent can quickly pull relevant issue details, trace data, and performance metrics to help investigate and resolve incidents.
  • Automated monitoring and reporting: The agent can regularly check Sentry for new issues, errors, and performance regressions, and generate summary reports or notify the appropriate teams.
  • Release health checks: The agent can inspect the health of recent releases, analyze error trends, and provide recommendations for improving release quality.
  • Contextual debugging and summarization: When developers encounter issues, the agent can fetch the necessary Sentry data to provide rich, contextual insights and suggested next steps.
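Under the hood, each of these workflows reduces to MCP tool calls the agent sends to the server. A minimal sketch of the JSON-RPC 2.0 envelope involved (the tool name `get_issue_details` and its arguments here are illustrative, not the server's confirmed schema — query the server's `tools/list` for the real tool catalog):

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool and arguments -- the actual names are defined by
# the Sentry MCP server's advertised tool list.
msg = build_tool_call(1, "get_issue_details", {"issueId": "PROJ-123"})
print(msg)
```

The agent never constructs these messages by hand, of course; its MCP client library does, which is why any workflow the server exposes as a tool is immediately usable from all of them.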

Architecture and Data Flow

The Sentry MCP server acts as a middleware layer between the AI agent and the upstream Sentry API. It handles authentication, credential management, and permission enforcement so the agent can securely access the required Sentry data and perform actions on behalf of the user. Communication between the agent and the MCP server runs over either standard input/output (stdio) for a locally spawned process or Server-Sent Events (SSE) for a remotely hosted server, keeping the data flow efficient and capable of real-time updates.

When the agent sends a request to the MCP server, the server translates that request into the appropriate Sentry API calls, handles the response, and returns the relevant data back to the agent. This abstraction layer ensures that the agent can interact with Sentry without having to manage the underlying API complexities or authentication details.
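As a sketch of that translation step, the snippet below maps a hypothetical "list issues" tool call onto a Sentry REST endpoint. The endpoint path follows Sentry's documented organization-issues route, but treat the exact URL shape as an assumption to verify against the Sentry API docs; the real server also resolves auth headers, pagination, and error handling:

```python
import urllib.parse

SENTRY_API = "https://sentry.io/api/0"

def issue_list_url(org_slug: str, query: str = "is:unresolved") -> str:
    """Translate a hypothetical 'list issues' tool call into a
    Sentry API request URL. Auth and pagination are omitted here."""
    return (
        f"{SENTRY_API}/organizations/{org_slug}/issues/"
        f"?query={urllib.parse.quote(query)}"
    )

print(issue_list_url("acme"))
# → https://sentry.io/api/0/organizations/acme/issues/?query=is%3Aunresolved
```

Because this mapping lives in the MCP server, the agent's side of the conversation stays the same even if Sentry's API paths or auth scheme change.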

When Sentry MCP Server Is Most Useful

  • AI-assisted incident investigation and response
  • Automated monitoring and reporting of Sentry issues, errors, and performance
  • Integrating Sentry data into AI-powered development workflows and assistants
  • Contextual debugging and summarization of Sentry data for developers
  • Automating release health checks and error trend analysis
  • Enabling Sentry integrations in AI-powered chatbots and conversational interfaces

Limitations and Operational Constraints

To use the Sentry MCP server, you'll need a Sentry API token with the necessary scopes (`org:read`, `project:read`, `project:write`, `team:read`, `team:write`, `event:write`). This token authenticates the requests the AI agent makes through the MCP server.

  • The MCP server is subject to the same rate limits as the Sentry API, so heavy usage may trigger throttling.
  • The MCP server is currently designed to work with the Sentry SaaS platform. Running it against a self-hosted Sentry instance requires additional configuration and may have feature limitations.
  • The AI-powered search tools (e.g., `search_events`, `search_issues`) require an LLM provider (OpenAI or Anthropic) to be configured. Without a configured provider, these specific tools will be unavailable, but all other tools will function normally.
  • The MCP server is built to support a variety of AI-powered coding assistants, but specific tool integrations (e.g., the Claude Code plugin) may have additional requirements or constraints.

Example Configurations

  • stdio server: run the official server from https://github.com/getsentry/sentry-mcp as a local process
  • SSE server: point Windsurf at the server's SSE endpoint, e.g. http://example.com:8080/sse
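As a concrete sketch, a Windsurf MCP configuration entry for the stdio variant might look like the following. The npm package name `@sentry/mcp-server` and the environment variable name are assumptions based on the repository linked above — confirm both against its README:

```json
{
  "mcpServers": {
    "sentry": {
      "command": "npx",
      "args": ["-y", "@sentry/mcp-server@latest"],
      "env": {
        "SENTRY_ACCESS_TOKEN": "<your-sentry-token>"
      }
    }
  }
}
```

For the SSE variant, you would instead register the server's URL (as in the example above) and skip the local command entirely.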

Sentry MCP Server Specific Instructions

1. Create a Sentry API token with the required scopes
2. Run the MCP server locally (stdio) or point at a hosted SSE endpoint
3. Register the server in Windsurf's MCP configuration as a stdio or HTTP/SSE entry
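The steps above can be sketched as shell commands. The package name and environment variable are assumptions carried over from the repository linked earlier, not confirmed flags — check its README before relying on them:

```shell
# 1. Configure the Sentry token (User Auth Token with the scopes listed above);
#    the variable name is an assumption -- verify against the server's docs.
export SENTRY_ACCESS_TOKEN="<your-sentry-token>"

# 2. Run the MCP server over stdio (package name is an assumption).
npx -y @sentry/mcp-server@latest

# 3. Add the command above as a stdio entry in Windsurf's MCP configuration,
#    or register the hosted SSE URL instead.
```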

Usage Notes

Help other developers understand when this MCP works best and where to be careful.

No usage notes provided.
