Ember Team · 3 min read

Your AI Agent Doesn't Have to Know Everything

Ember now supports MCP, so your agents can manage your tasks while local models keep your work where it belongs.

[Image: a minimal desk setup representing local, private AI-assisted work]

What MCP actually does

The Model Context Protocol (MCP) is an open standard that lets AI assistants connect to tools and data sources. With it, an AI agent can read your tasks, create new ones, update statuses, and generally interact with your project management system as if it were a person at the keyboard.
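
Under the hood, MCP is built on JSON-RPC 2.0, so "interacting like a person at the keyboard" boils down to small structured messages. As a rough sketch, a tool call to create a task looks something like this (the tool name and fields are hypothetical placeholders, not Ember's actual schema):

```typescript
// A sketch of the JSON-RPC 2.0 message an MCP client sends to invoke
// a tool. The method "tools/call" comes from the MCP spec; the tool
// name and arguments below are hypothetical placeholders.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "create_task", // hypothetical Ember tool name
    arguments: {
      title: "Write launch announcement",
      status: "in_progress",
    },
  },
};
```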

Ember now supports MCP. You can connect agents like Claude, or any MCP-compatible assistant, directly to your workspace. Ask your agent to summarize what's in progress, break a project down into tasks, or mark something done after you describe what you finished.
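
To make that concrete, here's a minimal sketch of the client side using the official TypeScript MCP SDK. The server command and the tool name are placeholders for illustration, not Ember's actual values; the integration docs have the real ones:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder command: however Ember's MCP server is actually
// launched, the client-side flow looks the same.
const transport = new StdioClientTransport({ command: "ember-mcp" });
const client = new Client({ name: "my-agent", version: "1.0.0" });

await client.connect(transport);

// Discover the tools the workspace exposes...
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// ...and invoke one (hypothetical name and arguments).
const result = await client.callTool({
  name: "create_task",
  arguments: { title: "Summarize sprint notes" },
});
console.log(result.content);
```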

It's genuinely useful. But the first question worth asking is: where does your data go when the agent reads it?

Cloud agents read what you give them

When you connect a cloud-based AI like Claude or GPT to your workspace, your task titles, descriptions, and project structure are sent to that provider's servers to be processed. The model lives on remote infrastructure, so the data has to travel there.

For some people, this is an acceptable trade-off. For others, it isn't.

Your tasks are not just metadata. They contain your thinking. Project names, half-formed ideas, client references, things you're still figuring out. Sending that to a third-party model to get a summary is a reasonable choice, but it should be a conscious one.

Local models change the equation

Tools like Ollama and LM Studio let you run capable language models entirely on your own hardware. The model runs on your CPU or GPU. Nothing leaves your machine. When you connect a local model to Ember via MCP, your task data never touches an external server.

The capability gap between local and cloud models has narrowed considerably. For the kind of tasks you'd ask an agent to do in a project management context, a local model on a modern laptop is more than sufficient: reading through your task list, helping you prioritize, drafting descriptions, creating new items from a brief.

The question is not whether AI can be useful for your work. It's whether the cost of that usefulness includes your data leaving your control.

Ollama is straightforward to set up: you install the application, pull a model (Mistral, Llama 3, Gemma, and others are available), and it exposes a local HTTP API that MCP-capable agent hosts can use as their model backend. LM Studio offers a similar experience, with a graphical interface for people who prefer one.
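
To make the "nothing leaves your machine" point concrete, here's a minimal sketch of a request to Ollama's documented local chat endpoint, assuming you've already run `ollama pull mistral`. The entire round trip stays on localhost:

```typescript
// Chat with a local model through Ollama's HTTP API. The request
// never leaves your machine: Ollama listens on localhost:11434.
const response = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mistral", // any model you've pulled locally
    messages: [
      { role: "user", content: "Prioritize these tasks: ..." },
    ],
    stream: false, // ask for a single complete response
  }),
});

const data = await response.json();
console.log(data.message.content); // the model's reply
```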

How this works in practice

Ember's MCP integration works the same way regardless of which model you point it at. You connect your MCP client to Ember, choose the model it should use, and the agent can read your workspace and take actions within it.

If you use a cloud model, the data flows through that provider. If you use a local model via Ollama or LM Studio, the data stays entirely on your device. The choice is yours, and switching between them is straightforward.
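
One way to picture that switch: the chat request an agent host sends has the same shape either way; only the endpoint, and therefore where your data travels, changes. A rough sketch, using Ollama's OpenAI-compatible local endpoint on one side and a cloud endpoint on the other (model names and key handling are illustrative):

```typescript
// Same request shape, two very different data paths. Ollama serves
// an OpenAI-compatible endpoint locally; a cloud provider serves
// the equivalent remotely.
const LOCAL = "http://localhost:11434/v1/chat/completions";
const CLOUD = "https://api.openai.com/v1/chat/completions";

async function chat(endpoint: string, prompt: string): Promise<string> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Ollama accepts any placeholder key; a cloud API needs a real one.
      Authorization: `Bearer ${process.env.API_KEY ?? "ollama"}`,
    },
    body: JSON.stringify({
      model: endpoint === LOCAL ? "mistral" : "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// With LOCAL, this question never leaves your machine.
console.log(await chat(LOCAL, "What's still in progress this week?"));
```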

The integration doesn't require any special account setup. It uses the same underlying data your workspace already has, exposed through a standard protocol that any MCP-compatible agent can speak.

Privacy-first by default, open by choice

Ember was built around the idea that your work is yours. No behavioral tracking, no cloud sync unless you want it, no selling your usage patterns. The MCP feature is an extension of that principle: you get the tools to use AI on your own terms.

Using a local model is the most private way to bring AI into your workflow. We built the MCP integration with that option in mind, and it's the one we'd suggest starting with if keeping your data local matters to you.