Blog / Product · May 12, 2026 · 5 min read

Coherence is now an MCP server: use your CRM from Claude, Cursor, and ChatGPT

We just shipped the official Coherence MCP server. Talk to your CRM and its AI agent from Claude Desktop, Cursor, Cline, and any other MCP-compatible client.

Keith Fawcett

Founder

You can now talk to your Coherence workspace from Claude Desktop, Cursor, Cline, Windsurf, and VS Code's GitHub Copilot — every major AI client that speaks the Model Context Protocol. The official server is on npm as @coherenceos/mcp-server and listed in the MCP Registry as io.getcoherence/mcp.

The headline isn't that we exposed an API — we already had one. The headline is that you can ask your CRM's AI agent (Nash) to do things from anywhere, and the things it does flow through the same approval rules your team already trusts.

What that looks like

In Claude Desktop, paste seven lines of JSON into the config file, restart, and you can write things like:

"Ask my Coherence agent to draft a follow-up email to the leads I created this week."

"List my open deals over $50k and tell me which ones haven't had activity in 14+ days."

"Have Nash check what's on my calendar today and remind me about the most important meeting an hour before."

"Add a contact for Jane Doe at Acme Corp, then ask Nash to draft an intro email I can review."

Claude (or Cursor, or Cline) does the orchestration. Nash does the actual work — drafting, scheduling, reminding, recording — using the same internal toolset Nash has when you talk to it inside Coherence. The drafted email lands in your outbox waiting for your approval. The reminder shows up in your reminders inbox. The contact appears in the CRM.
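For reference, the config entry is the standard mcpServers block that Claude Desktop and most other MCP clients read. This is a sketch, not the authoritative snippet: the server key, the -y flag, and the COHERENCE_API_KEY variable name are assumptions, so check the package README for the exact lines.

```json
{
  "mcpServers": {
    "coherence": {
      "command": "npx",
      "args": ["-y", "@coherenceos/mcp-server"],
      "env": {
        "COHERENCE_API_KEY": "<your API key>"
      }
    }
  }
}
```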

The shape of the integration

Two tiers of tools, by design:

chat_with_agent — the headline. Sends a natural-language message to a Coherence agent and returns the agent's reply. The agent has its full toolset internally: sending email, drafting outreach, creating reminders, posting to social, creating landing pages, managing approval flows. We didn't have to re-expose each of those as a separate MCP tool — Claude calls chat_with_agent, the agent figures out which of its 40+ tools to use, and reports back what it did.

Fast-path REST tools — for direct data access where you don't want a full agent loop. list_modules, list_records, get_record, create_record, update_record, delete_record, plus the outreach content CRUD. These hit api.getcoherence.io/v1/* and return in under a second.

For everything that isn't raw record CRUD — calendar events, email sends, social posts, landing page creation, reminder management — the right move is chat_with_agent. The agent has those tools and your approval rules still gate the sensitive ones.
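To make the fast-path tier concrete, here is a rough sketch of how those tools could map onto the /v1/* surface. The paths and argument names below are guesses based on the tool names above, not the server's actual source:

```typescript
// Hypothetical mapping from fast-path MCP tool names to REST routes.
// Path shapes (/v1/modules/{module}/records/{id}) are assumptions.
type Route = { method: "GET" | "POST" | "PATCH" | "DELETE"; path: string };

function routeFor(tool: string, args: Record<string, string>): Route {
  switch (tool) {
    case "list_modules":
      return { method: "GET", path: "/v1/modules" };
    case "list_records":
      return { method: "GET", path: `/v1/modules/${args.module}/records` };
    case "get_record":
      return { method: "GET", path: `/v1/modules/${args.module}/records/${args.id}` };
    case "create_record":
      return { method: "POST", path: `/v1/modules/${args.module}/records` };
    case "update_record":
      return { method: "PATCH", path: `/v1/modules/${args.module}/records/${args.id}` };
    case "delete_record":
      return { method: "DELETE", path: `/v1/modules/${args.module}/records/${args.id}` };
    default:
      // everything else (calendar, email, social, reminders) goes
      // through chat_with_agent instead of a direct route
      throw new Error(`not a fast-path tool: ${tool}`);
  }
}
```

Note that chat_with_agent deliberately has no entry here: it is a single agent-loop endpoint, not a CRUD route.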

How we built it

Most "MCP servers" are thin wrappers over an existing API. Ours is too — about 500 lines of TypeScript over a fresh /v1/* REST surface on our backend. The actual work was three decisions that don't show up in the source:

1. Per-service auth via a shared library, not a gateway. We considered standing up an Envoy ext_authz filter in front of every service. Then we noticed our prod infrastructure (DigitalOcean App Platform) doesn't run an Envoy gateway — DO's native ingress handles routing. So we extracted the auth code (API key hashing, JWT verification, scope checks) into a @coherence/api-auth package and dropped it into the two services that need it. Linear, Stripe, and Notion all do per-service auth via shared libs. The gateway model is elegant; the library model ships.

2. Approval gates inside the platform, not at the edge. When the agent invokes sendEmail, the workspace's approval system fires the same way it does for human users. An API key cannot bypass an approval rule that was set up for human users. We could have built a parallel "API-key approval" path; instead we made the existing one universal.

3. Coarse scopes, not per-tool scopes. API keys can be limited to any combination of records:read, records:write, collab:read, collab:write, agents:read, agents:write, and workspace:read. Seven scopes cover ~95% of real cases. Per-tool scopes (28+ flags to manage) would have been the "more secure" choice on paper, and the wrong product choice in practice — most users would have left them all on by default.
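Decision one can be sketched in a few lines. This is a guess at the kind of helper a shared package like @coherence/api-auth might expose — hash the presented key, compare against the stored hash in constant time — with illustrative names, not the real package's API:

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Hash an API key for storage and lookup; the raw key is never persisted.
function hashApiKey(key: string): string {
  return createHash("sha256").update(key).digest("hex");
}

// Compare a presented key against the stored hash in constant time,
// so response timing leaks nothing about the stored value.
function verifyApiKey(presented: string, storedHash: string): boolean {
  const a = Buffer.from(hashApiKey(presented), "hex");
  const b = Buffer.from(storedHash, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Because this lives in a library, each service calls it in-process — no extra network hop, no gateway to operate.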
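Decision two is about where the gate lives. A toy sketch of the idea, with invented names: because the check runs inside the tool executor rather than at the API edge, it applies identically whether the caller is the web UI, the agent, or an API key.

```typescript
// Hypothetical sketch: the approval gate sits in the tool executor,
// so every caller passes through it. Names are illustrative only.
type Caller = "human" | "agent" | "api-key";

const gatedTools = new Set(["sendEmail", "postToSocial"]);

function execute(tool: string, caller: Caller, approved: boolean): string {
  // The gate never inspects the caller -- only whether the workspace's
  // approval rule for this tool has been satisfied.
  if (gatedTools.has(tool) && !approved) return "queued-for-approval";
  return "executed";
}
```

A parallel "API-key approval" path would have meant two gates to keep in sync; one universal gate means one invariant to test.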
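And decision three, the coarse scope check, stays small precisely because the scopes are coarse. The scope names below come from the list above; the detail that a write scope implies the matching read scope is my assumption, not documented behavior:

```typescript
// Sketch of a coarse scope check. Scope names are from the post;
// write-implies-read is an assumption for illustration.
type Scope =
  | "records:read" | "records:write"
  | "collab:read"  | "collab:write"
  | "agents:read"  | "agents:write"
  | "workspace:read";

function hasScope(granted: Scope[], required: Scope): boolean {
  if (granted.includes(required)) return true;
  // A write scope on a domain also grants reads on that domain (assumption).
  const [domain, level] = required.split(":");
  return level === "read" && granted.includes(`${domain}:write` as Scope);
}
```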

What's next

This is the stdio transport version — you install via npx and run locally. We're also building the hosted Streamable HTTP transport at mcp.getcoherence.io, which unlocks Smithery's hosted-server flow, Claude.ai web connectors, and ChatGPT's MCP integration. Same tools, no install.

In the meantime, the existing npx install works in every MCP client that speaks stdio — and that's everywhere serious right now.

Try it

If you've been waiting for "AI tools that actually do the work" — this is what that looks like. Drop the JSON snippet into your config, restart, and ask Nash for something specific. The first time it drafts an email exactly the way you would have written it and you just hit Approve, that's the moment.

Keith Fawcett

Founder of Coherence. Building the intelligence layer for business.