
Daem0n-MCP – Model Context Protocol Server for Claude


AI Memory & Decision System - Give AI agents persistent memory and consistent decision-making with actual semantic understanding. Features semantic search, rule enforcement, knowledge graphs, time-aware recall, and background dreaming for autonomous re-evaluation of past decisions.

Submitted by @dasblueeyeddevil · Community
Category: Backend · Company: 9th Level Software
Compatible Tools:
Claude (Primary) · Cursor · GitHub Copilot

Featured on AI Stack

Add this badge to your README or site so visitors know this MCP is listed in our directory.

Listed on AI Stack MCP Directory
<a href="https://ai-stack.dev/mcps/daem0n-mcp" target="_blank" rel="noopener noreferrer" style="display:inline-block;padding:6px 12px;background:#1a1f27;color:#93c5fd;border:1px solid #2d323a;border-radius:6px;font-size:12px;text-decoration:none;font-family:system-ui,sans-serif;">Listed on AI Stack MCP Directory</a>

About the Daem0n-MCP Server

Quick overview of why teams use it, how it fits into AI workflows, and key constraints.

Daem0n-MCP in AI Workflows Without Context Switching

AI assistants and developers often face the challenge of constantly switching between various dashboards, scripts, and APIs to gather the necessary information and actions for their workflows. Daem0n-MCP solves this problem by providing a unified interface that allows AI agents to pull the right data or actions from the underlying system without manual navigation. By acting as a semantic middleware, Daem0n-MCP bridges the gap between the agent's natural language understanding and the diverse set of tools and APIs used in modern AI-powered applications.

Daem0n-MCP's memory and decision system gives AI agents persistent memory and consistent decision-making with actual semantic understanding, enabling them to operate with greater context and autonomy across a wide range of tasks.

How Daem0n-MCP Improves AI‑Assisted Workflows

Daem0n-MCP enables AI agents to handle a variety of workflows more effectively, including incident response, reporting, monitoring, and summarization. By providing a centralized memory system and a set of workflow-oriented tools, Daem0n-MCP allows agents to:

  • Quickly retrieve relevant historical decisions, warnings, and patterns to inform their current actions
  • Automatically enforce best practices and decision rules before executing changes
  • Persistently store the outcomes of their actions and learn from past successes and failures
  • Seamlessly integrate with external systems and APIs without manual context switching
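The retrieve/enforce/record loop described above can be sketched in a few lines of Python. This is an illustrative stand-in, not Daem0n-MCP's actual API: the `MemoryStore` class and its method names are assumptions made for the example.

```python
# Illustrative sketch of an agent's remember/check/act/record loop
# against a memory-and-rules server like Daem0n-MCP. All names here
# (MemoryStore, check_rules, record_outcome) are hypothetical.

class MemoryStore:
    """Tiny in-memory stand-in for a persistent decision store."""
    def __init__(self):
        self.decisions = []   # past (action, outcome) records
        self.rules = []       # predicates that must pass before acting

    def add_rule(self, rule):
        self.rules.append(rule)

    def check_rules(self, action):
        # Enforce best practices before executing a change:
        # return the docstrings of any rules the action violates.
        return [r.__doc__ for r in self.rules if not r(action)]

    def record_outcome(self, action, success):
        # Persist the result so future decisions can learn from it.
        self.decisions.append({"action": action, "success": success})

    def recall(self, keyword):
        # Naive keyword recall standing in for semantic search.
        return [d for d in self.decisions if keyword in d["action"]]


def no_force_push(action):
    """force-push to shared branches is forbidden"""
    return "force-push" not in action


store = MemoryStore()
store.add_rule(no_force_push)

# A rule violation blocks the action before it runs.
print(store.check_rules("force-push to main"))

# A successful action is recorded and recallable later.
store.record_outcome("rebase feature branch", success=True)
print(store.recall("rebase"))
```

In a real deployment the store would be the MCP server's persistent database and `recall` would be an embedding-based semantic search rather than keyword matching.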

Architecture and Data Flow

The Daem0n-MCP server acts as a semantic middleware, translating the agent's natural language requests into the appropriate upstream API calls. It handles credential management, permission enforcement, and the bidirectional flow of data between the agent and the underlying tools and systems. The server communicates with the agent using either a stdio or SSE transport, ensuring a responsive and reliable interaction.
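The "semantic middleware" pattern above can be illustrated with a minimal dispatcher: the server receives a named tool call from the agent, routes it to a handler, and attaches credentials on the way to the upstream API. The handler names, request shape, and credential store below are assumptions for the sketch, not Daem0n-MCP internals.

```python
# Minimal sketch of routing an agent's tool call to an upstream API,
# with credential injection handled server-side. Illustrative only.

import json

CREDENTIALS = {"metrics-api": "token-abc123"}  # hypothetical secret store

def fetch_metrics(args):
    # In a real server this would be an authenticated HTTP call;
    # here we just show the credential being resolved and used.
    token = CREDENTIALS["metrics-api"]
    return {"service": args["service"], "auth": bool(token), "status": "healthy"}

TOOL_HANDLERS = {"fetch_metrics": fetch_metrics}

def handle_request(raw):
    """Route one JSON-encoded tool call to its registered handler."""
    req = json.loads(raw)
    handler = TOOL_HANDLERS.get(req["tool"])
    if handler is None:
        return {"error": f"unknown tool: {req['tool']}"}
    return {"result": handler(req["arguments"])}

reply = handle_request('{"tool": "fetch_metrics", "arguments": {"service": "api"}}')
print(reply)
```

The agent never sees the credential or the upstream endpoint; it only names a tool and passes arguments, which is what makes the middleware a single unified interface.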

When Daem0n-MCP Is Most Useful

  • AI-assisted incident investigation and remediation
  • Automated summarization of complex reports or technical documents
  • Proactive monitoring and health checks for software releases
  • Integrating monitoring data and alerts into conversational AI assistants like Claude or Cursor
  • Maintaining consistent decision-making across long-running, multi-step workflows
  • Providing a semantic layer for AI agents to interact with diverse enterprise tools and APIs

Limitations and Operational Constraints

Daem0n-MCP requires API keys or other credentials to authenticate with the systems it integrates with, and rate limits or other platform-specific constraints may apply. The server and agent must be configured for the target environment and network setup, and the agent must be compatible with the specific models and tooling Daem0n-MCP uses.

  • API key management and credential security
  • Rate limits and throttling on upstream API calls
  • Platform and host-specific compatibility requirements
  • Network and firewall setup for server-agent communication
  • Model and tooling compatibility (e.g., supported languages, embeddings, etc.)

Example Configurations

For a stdio server (Daem0n-MCP example):
Command: python -m daem0nmcp.server
For an SSE server:
URL: http://example.com:8080/sse
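In a typical MCP client configuration file, these two entries might look roughly as follows. The exact schema varies by client; the `mcpServers` field names here follow a common convention and are not Daem0n-MCP-specific:

```json
{
  "mcpServers": {
    "daem0nmcp": {
      "command": "python",
      "args": ["-m", "daem0nmcp.server"]
    },
    "daem0nmcp-sse": {
      "url": "http://example.com:8080/sse"
    }
  }
}
```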

Daem0n-MCP Specific Instructions

Best Install Experience:
1) Download Summon_Daem0n.md to your project root
2) Open Claude Code
3) Enter the prompt: Read Summon_Daem0n.md and perform the necessary rituals
Manual Install Experience:
1) Install via pip: pip install -e ~/Daem0nMCP
2) For ONNX acceleration (optional): pip install -e ~/Daem0nMCP[onnx]
3) Run the MCP server (Linux/macOS): python -m daem0nmcp.server
4) Register with Claude Code: claude mcp add daem0nmcp --scope user -- <PYTHON_PATH> -m daem0nmcp.server
5) On Windows, use HTTP transport: python ~/Daem0nMCP/start_server.py --port 9876
6) Add an entry to ~/.claude.json for HTTP-mode configuration
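An HTTP-mode entry in ~/.claude.json might look roughly like the following. The port matches the start_server.py example above, but the `type` field and the URL path are assumptions about the client's schema, so check your client's documentation for the exact shape:

```json
{
  "mcpServers": {
    "daem0nmcp": {
      "type": "http",
      "url": "http://127.0.0.1:9876/mcp"
    }
  }
}
```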

Usage Notes

Help other developers understand when this MCP works best and where to be careful.

Best used when:
  • Building AI agents that need long-term memory and context awareness
  • Enforcing consistent decision-making patterns and rules
  • Analyzing code with semantic understanding and causal relationships
  • Tracking failed decisions for improved future outcomes

Avoid or be careful when:
  • Real-time systems require sub-millisecond latency
  • Projects have extremely large codebases (the embedding model adds latency)

Known limitations or caveats:
  • Existing embeddings need re-encoding when upgrading to v6.6.6 (the embedding model changed from all-MiniLM-L6-v2 to ModernBERT)
  • Windows requires HTTP transport instead of stdio
  • The first embedding model load adds several seconds to initial calls

Tool-specific behavior:
  • Works best with Claude Code and OpenCode; legacy tool compatibility is maintained
  • 8 consolidated workflow tools reduce context overhead vs. 67 individual tools
  • Graph features require hierarchical community detection (Leiden algorithm)
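The v6.6.6 re-encoding caveat comes down to embedding spaces being model-specific: a vector produced by all-MiniLM-L6-v2 is not comparable to one produced by ModernBERT, so stored records tagged with the old model must be re-embedded before semantic search works again. A minimal sketch of such a migration, with an illustrative record layout and a stub `embed()` standing in for the real model call:

```python
# Sketch of re-encoding stored embeddings after a model upgrade.
# The record fields and embed() stub are assumptions for illustration.

CURRENT_MODEL = "ModernBERT"

def embed(text, model):
    # Stand-in for a real embedding call; returns a toy vector.
    return [float(len(text)), float(hash(model) % 7)]

def migrate(records):
    """Re-encode any record whose vector came from an older model."""
    migrated = 0
    for rec in records:
        if rec["model"] != CURRENT_MODEL:
            rec["vector"] = embed(rec["text"], CURRENT_MODEL)
            rec["model"] = CURRENT_MODEL
            migrated += 1
    return migrated

records = [
    {"text": "prefer pure functions", "model": "all-MiniLM-L6-v2", "vector": [1.0, 2.0]},
    {"text": "avoid force-push", "model": "ModernBERT", "vector": [3.0, 4.0]},
]
print(migrate(records))  # only the stale record is re-encoded
```

Running the migration is idempotent: a second pass finds nothing tagged with the old model and re-encodes zero records.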
