One place to discover MCPs developers actually use
A community-curated hub to discover and contribute MCPs, AI rules, and AI development tools.
Latest and most popular rules for AI development tools
Practical rules developers actually use with AI coding tools
## Goal

Ensure backend behavior is validated through real execution — routing, configs, and async flows — not assumptions.

## Rule Behavior

### 1️⃣ Validate Endpoints Using Actual Requests
- Test every API with sample payloads
- Confirm status codes and response shape
- Verify error payload structure matches the spec
- Prefer execution confidence over mental models

### 2️⃣ Prove Async Execution Order
- Log inside async tasks to detect silent failures
- Surface race conditions using delayed mock calls
- Never allow Promise errors to be swallowed

### 3️⃣ Runtime-Verified Configuration
- Validate that env variables resolve correctly
- Confirm DB credentials, ports, and tokens through live connections
- Test fallback logic by simulating missing configs

### 4️⃣ Architecture Transparency
- Keep controllers and services independent for isolated execution
- No hidden side effects or runtime surprises
- Run business logic without a full server boot

### 5️⃣ Data-Driven Confidence
- Validate input and output schemas live
- Log edge cases and boundary responses
- Test on real data before making refactoring decisions

### 6️⃣ Safe Refactoring With Proof
- Compare before-and-after logs for critical paths
- Ensure no routing or behavior drift
- Require runtime validation before merge

### 7️⃣ Replit Agent Partnership
- Trace the real request lifecycle
- Convert invisible bugs into observable truth
- Ask the AI to explain failures from logs

## Examples
- "Run GET /users with an invalid token and confirm the response format."
- "Inject a delay and check async sequence correctness."
- "Validate DB env vars and test a live connection."

## Tool Prompts
- "Execute this endpoint with payload {...} and display logs."
- "Trace async calls and show execution order."
- "Simulate a missing API_KEY and show fallback behavior."
- "Compare logs before and after the refactor and list differences."

## Quick Implementation Wins
- Add centralized request logging
- Enable error stack traces in logs
- Automate smoke tests for core routes
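The async checks in 2️⃣ can be sketched in plain Node.js. The service and mock names below are illustrative, not from the rule: delayed mocks make the order of operations observable, and the explicit `.catch` keeps Promise errors from being swallowed.

```javascript
// Delayed mock helper: resolves with `value` after `ms` milliseconds.
const delay = (ms, value) => new Promise((resolve) => setTimeout(() => resolve(value), ms));

// Hypothetical flow under test: fetch a user, then that user's orders.
async function loadDashboard(fetchUser, fetchOrders) {
  const user = await fetchUser();
  const orders = await fetchOrders(user.id);
  return { user, orders };
}

// Delayed mocks: the orders mock is faster than the user mock, so any
// accidental reordering of the awaits would surface here.
const fetchUser = () => delay(50, { id: 1, name: 'dev' });
const fetchOrders = (id) => delay(10, [{ id: 100, userId: id }]);

loadDashboard(fetchUser, fetchOrders)
  .then((result) => console.log('orders loaded for user', result.user.id))
  .catch((err) => console.error('surfaced, not swallowed:', err)); // never omit the catch
```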
## Goal

Validate Node.js backend behavior by inspecting real execution: async flow, event loop timing, routing accuracy, and transparent module structure.

## Rule Behavior

### 1️⃣ Validate Routes With Real Requests
- Test endpoints using example payloads
- Confirm response codes and JSON schema correctness
- Verify error flows surface explicit messages

### 2️⃣ Debug Async Flows Through Execution
- Log inside async functions to confirm the order of operations
- Detect race conditions using delayed mock responses
- Ensure Promise errors are surfaced, not swallowed

### 3️⃣ Configuration Proven at Runtime
- Confirm environment variables load correctly
- Test DB connectivity using actual credentials and ports
- Validate fallback branches by simulating missing configs

### 4️⃣ Node Architecture Transparency
- Keep controllers, services, and helpers modular and traceable
- Enable isolated execution of pure functions without the server
- Avoid hidden behavior and side effects in modules

### 5️⃣ Event Loop Visibility
- Ask for execution traces to detect microtask and macrotask ordering
- Monitor long-running operations that block the event loop
- Use logs to reveal async delays affecting user experience

### 6️⃣ Safe Refactoring With Behavioral Proof
- Compare execution logs before and after a change
- Confirm there is no routing or logic drift
- Refactor only after runtime stability is verified

## Examples
- "Run POST /login and show full response logs."
- "Simulate a slow DB call and detect ordering issues."
- "Show the timing of callbacks vs Promise resolution."

## Tool Prompts
- "Trace async events and show execution order."
- "Test missing env variables and confirm fallback behavior."
- "Run this service function independently without a server boot."

## Quick Implementation Wins
- Add structured logging for async functions
- Detect hidden await or blocking code using traces
- Use isolated unit routes for debugging problematic flows
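Event-loop ordering (5️⃣) can be made observable with a few lines of plain Node.js. This sketch records the order in which synchronous code, a microtask (Promise callback), and a macrotask (setTimeout) actually run — microtasks always drain before the next timer fires, even at a 0 ms delay.

```javascript
// Record execution order instead of assuming it.
const order = [];

setTimeout(() => order.push('macrotask: setTimeout 0ms'), 0);
Promise.resolve().then(() => order.push('microtask: Promise.then'));
order.push('sync: top-level code');

// Print once everything above has run: sync first, then the microtask,
// then the timer callback.
setTimeout(() => console.log(order.join(' -> ')), 10);
```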
## Goal

Trace logic across files and modules to understand data flow, function calls, and dependencies so you can debug confidently and refactor safely.

## Rule Behavior

### 1️⃣ Start From a Clear Entry Point
- Identify the initiating call (API route, event handler, or UI action)
- Ask the Agent to list the entry function and its immediate callers or callees

### 2️⃣ Follow Calls Across Module Boundaries
- Trace imports and exported functions to find the next hop in the chain
- Continue until you reach persistence, side effects, or external I/O
- Mark places where the data shape changes or is transformed

### 3️⃣ Map Data Flow and Transformations
- Record how key objects move and change across functions
- Note mutations, conversions, and where validation occurs
- Highlight boundary crossings like serialization, network calls, or DB writes

### 4️⃣ Detect Hidden Side Effects and Shared State
- Identify modules that mutate global state or rely on singletons
- Flag implicit dependencies or hidden I/O that affect reproducibility
- Prefer pure-function refactors where feasible

### 5️⃣ Document the Chain for Future Reference
- Create a short chain summary with the steps and key variables at each hop
- Save the trace in the repo or PR to help reviewers and future debuggers

### 6️⃣ Use Execution to Validate the Trace
- Run small, isolated executions of each hop to confirm assumptions
- Ensure the observed runtime behavior matches the traced chain

## Examples
- "Follow the chain of references from the API handler to the DB write and list every transformation."
- "Trace this event emitter to all subscribers and show the order of operations."
- "Show where this object is mutated across the codebase."

## Tool Prompts
- "Follow the chain of references starting at src/api/orders.ts:placeOrder and summarize each step."
- "List all files that import this function and indicate which ones call it."
- "Produce a step-by-step trace including variable snapshots for each hop."

## Quick Implementation Wins
- Start by tracing the five most frequently failing endpoints
- Attach chain summaries to PRs that modify core flows
- Add small runnable snippets to validate each link in the chain
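Validating a traced chain through execution (6️⃣) can be sketched with a small logging wrapper. The `traced` helper and the three-hop chain below are hypothetical — the point is that each hop logs its inputs and outputs, so data-shape changes become visible as the object moves through.

```javascript
// Wrap a function so every call logs its arguments and return value.
function traced(name, fn) {
  return (...args) => {
    console.log(`→ ${name}`, JSON.stringify(args));
    const out = fn(...args);
    console.log(`← ${name}`, JSON.stringify(out));
    return out;
  };
}

// Illustrative three-hop chain: parse → validate → persist (stubbed).
const parse = traced('parse', (raw) => JSON.parse(raw));
const validate = traced('validate', (order) => ({ ...order, valid: order.qty > 0 }));
const persist = traced('persist', (order) => ({ ...order, saved: true }));

// Running the chain prints one →/← pair per hop, a runnable trace summary.
const result = persist(validate(parse('{"sku":"A1","qty":2}')));
```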
## Goal

Modernize React code by converting class components to functional components using hooks, improving readability, testability, and runtime behavior while preserving existing UI contracts.

## Rule Behavior

### 1️⃣ Identify Conversion Candidates
- Target components that use state, lifecycle methods, or complex render logic.
- Prefer converting small, isolated components first.
- Delay conversion of components that depend on unusual class patterns until tests exist.

### 2️⃣ Preserve Public Props and Behavior
- Keep prop names and defaults identical to avoid breaking calling components.
- Ensure events, callback behavior, and external API expectations remain unchanged.

### 3️⃣ Map Lifecycles to Hooks

Translate lifecycle methods into hook equivalents:
- componentDidMount becomes a useEffect that runs once.
- componentDidUpdate becomes a useEffect that listens to specific dependencies.
- componentWillUnmount becomes the cleanup function inside useEffect.
- setState logic becomes useState or useReducer depending on complexity.

### Before

```
class Counter extends React.Component {
  constructor(props) {
    super(props);
    this.state = { count: 0 };
    this.tick = this.tick.bind(this);
  }
  componentDidMount() {
    this.timer = setInterval(this.tick, 1000);
  }
  componentWillUnmount() {
    clearInterval(this.timer);
  }
  tick() {
    this.setState({ count: this.state.count + 1 });
  }
  render() {
    return <div>{this.state.count}</div>;
  }
}
```

### After

```
function Counter() {
  const [count, setCount] = useState(0);
  useEffect(() => {
    const timer = setInterval(() => {
      setCount(c => c + 1);
    }, 1000);
    return () => clearInterval(timer);
  }, []);
  return <div>{count}</div>;
}
```

### Common Mistake: Stale State in an Interval

Wrong — `count` is captured once when the effect runs, so the interval keeps adding 1 to the initial value:

```
useEffect(() => {
  const timer = setInterval(() => {
    setCount(count + 1);
  }, 1000);
  return () => clearInterval(timer);
}, []);
```

Right — the functional updater always sees the latest state:

```
useEffect(() => {
  const timer = setInterval(() => {
    setCount(c => c + 1);
  }, 1000);
  return () => clearInterval(timer);
}, []);
```

### 4️⃣ Replace Derived State Carefully
- Convert computed values into memoized values rather than storing them in state.
- Use memoized callbacks when passing functions to children.

### 5️⃣ Handle Complex State with Reducers
- Use useReducer for multi-field or interdependent state structures.
- Keep the reducer pure and testable.

### 6️⃣ Validate Through Execution and Tests
- Run UI flows to confirm behavior, accessibility, and interactions remain identical.
- Add interaction and snapshot tests to ensure no regressions occur.

## Examples
- Convert a class component to a functional one using React hooks while maintaining identical behavior.
- Replace setState logic with a reducer and verify the refactor through tests.
- Move lifecycle-dependent logic into effects and confirm correct cleanup behavior.

## Tool Prompts
- Convert this class component into a functional component while keeping behavior unchanged.
- Show a step-by-step transformation explaining each lifecycle mapping.
- Run the updated component and provide logs demonstrating correct interactions.

## Quick Implementation Wins
- Start with leaf-level components that have few dependencies.
- Add tests for event handling and rendering before conversion.
- Use automated codemods for basic class-to-function transformations, then refine manually.
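Rule 5️⃣ recommends useReducer for multi-field state. A minimal sketch of what "keep the reducer pure and testable" means in practice — the form fields and action shapes below are hypothetical, and because the reducer is plain JavaScript it can be unit-tested without rendering a component. In the component it would be wired up with `useReducer(formReducer, initialForm)`.

```javascript
// Hypothetical multi-field form state.
const initialForm = { name: '', email: '', touched: false };

// Pure reducer: same inputs always produce the same output, no side effects.
function formReducer(state, action) {
  switch (action.type) {
    case 'field':
      // Update one named field and mark the form as touched.
      return { ...state, [action.name]: action.value, touched: true };
    case 'reset':
      return initialForm;
    default:
      return state;
  }
}

// Testable without React: feed actions, inspect the resulting state.
const afterEdit = formReducer(initialForm, { type: 'field', name: 'email', value: 'a@b.c' });
const afterReset = formReducer(afterEdit, { type: 'reset' });
```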
**Summary**

Investigate the project history to find when regressions or behavioral changes were introduced. Use targeted git queries, commit bisects, and contextual diffs to identify the minimal change that caused a failure.

**Objectives**
• Locate the commit that introduced a regression quickly
• Extract the minimal diff that explains the behavior change
• Provide reproducible test cases tied to specific commits
• Produce safe rollback or fix guidance based on git evidence

**Principles**
1. Use git history as a forensic tool — the relevant commits contain the best clues.
2. Reproduce the issue at specific commits to confirm causality.
3. Prefer bisect-driven isolation over manual guesswork.
4. Preserve history; use branch-based fixes and clear commit messages for rollbacks.

**Implementation Pattern**

Step 1 — Identify the approximate time window or feature area
• Collect error timestamps, release notes, and deploy events.
• Map the failing behavior to likely touched files or subsystems.

Step 2 — Run a focused git log and file blame
• Use git log -- path to see recent changes to related files.
• Use git blame to find when a specific line was last changed and by whom.

Step 3 — Create a reproducible test for the failing behavior
• Capture a small test or harness that fails on the current main branch.
• Confirm the test passes on an earlier known-good commit if available.

Step 4 — Use git bisect to isolate the offending commit
• Start bisect with known-good and known-bad commits.
• Automate test runs during bisect to let git find the exact commit quickly.

Step 5 — Analyze the diff and add targeted fixes or revert where necessary
• Review the minimal diff to understand intent and side effects.
• If a quick revert is safe, prepare a revert PR with an explanation.
• Otherwise craft a minimal patch that preserves intent and fixes the bug.

**Anti-Pattern (Before)**

Manually scanning thousands of commits, or reverting broad ranges without confirming behavior at each step.

**Recommended Pattern (After)**

Use bisect with a test harness:

```
# Example commands
git bisect start
git bisect bad HEAD
git bisect good v1.4.2
# let bisect run tests and identify the offending commit
```

Benefits:
• Precise identification of the breaking change.
• A clear audit trail for decisions and rollbacks.
• Faster remediation with minimal collateral risk.

**Best Practices**
• Always include a reproducible test before bisecting.
• Record environment and dependency versions when reproducing historical commits.
• Prefer targeted fixes over blind reverts.
• Annotate the discovered commit with a follow-up issue and remediation plan.

**Agent Prompts**
"Run git log for the files touched by this failure and summarize likely suspects."
"Automate git bisect using this test script and report the first bad commit."
"Show the diff between the last good and first bad commits and highlight risky changes."

**Notes**
• When bisecting across major refactors, bisect may land on commits that require build or dependency changes; handle these as special cases.
• Keep the test harness lightweight so bisect can run quickly in CI.
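The "automate test runs" half of Step 4 uses `git bisect run`, which checks out each candidate commit and lets a script's exit code mark it good or bad. Here is an end-to-end sketch on a throwaway repository — every file, tag, and commit-message name below is illustrative, invented for the demo:

```shell
# Build a tiny repo with a known-good commit, a bug-introducing commit,
# and a later commit, then let bisect find the bug automatically.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev

echo 1 > value.txt
git add value.txt
git commit -qm "good: value is 1"
git tag known-good                       # plays the role of v1.4.2

echo 2 > value.txt
git commit -qam "bug: value flips to 2"  # the commit bisect should find

git commit -q --allow-empty -m "later unrelated work"

# Test harness: exit 0 (good) while value.txt still reads 1.
printf '#!/bin/sh\ngrep -q "^1$" value.txt\n' > check.sh
chmod +x check.sh

git bisect start
git bisect bad HEAD
git bisect good known-good
git bisect run ./check.sh   # prints "<sha> is the first bad commit"
git bisect reset
```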
Why developers come here
Find MCPs without digging through GitHub or Reddit
See platform picks alongside community submissions
Share MCPs and help others find what works
New MCPs are added weekly — community submissions are free and credited.
Curated by AI Stack · Platform pick
Firebase MCP is a Model Context Protocol server that allows AI tools to securely interact with Firebase projects. It enables LLM clients to inspect and manage Firestore / Realtime Database data, Authentication users, Cloud Functions, Hosting configs, and project metadata through natural language requests, helping developers debug, explore, and operate Firebase-backed applications faster.
Submitted by @deepsyyt · Community
Sentry MCP is a Model Context Protocol server that enables AI tools to securely access and interact with Sentry projects. It allows LLM clients to query issues, errors, performance traces, releases, and alerts using natural language, helping developers debug, investigate incidents, and understand application health directly from AI assistants.
Submitted by Merwyn Mohankumar · Community
Notion MCP is a Model Context Protocol server (hosted and open implementations) that enables AI tools to securely access and interact with your Notion workspace. Using MCP, LLM clients can search, read, create, update, and manage Notion pages, databases, and content through natural language requests.
Curated by AI Stack · Platform pick
Atlassian MCP lets Claude interact with Atlassian products like Jira and Confluence to read, search, and update work items using natural language. Instead of manually navigating issues, tickets, and docs, you can ask Claude to fetch context, summarize work, create or update issues, and help with planning and analysis directly from Atlassian data.
Submitted by Yuan Teoh · Community
MCP Toolbox for Databases is an open source MCP server for databases.
Popular MCPs being explored by the community this week.
Based on recent community views and activity
An MCP server that enables AI assistants to perform real-time web searches using the Brave Search API. It exposes search capabilities over Model Context Protocol so tools like Claude Desktop and Cursor can retrieve up-to-date web results in a privacy-focused way.
Official HTTP Fetch Model Context Protocol (MCP) server that enables Claude to perform secure outbound HTTP requests. Supports REST API calls, JSON responses, header configuration, and controlled network access for API integration, data retrieval, and service orchestration workflows. Designed specifically for production-safe API interactions with Claude.