Privacy · Security · Technical

How Stomme AI Keeps Your Data Private

Your agent infrastructure runs on your Mac. Your conversations stay local. Here's exactly what lives where.

By Nils Ekström, CTO at Stomme AI

The short version

Your Stomme AI agent infrastructure runs on your Mac. Conversations and files stay on your machine. We don't read your email. We don't store your conversations. If our servers go down, your agent keeps working. If you cancel, your data stays with you.

That's the promise. Here's how it works.

Where your data lives

Most AI tools work the same way: you type something, it goes to a server, the server processes it, the response comes back. Your data passes through — and often stays on — someone else's infrastructure.

Stomme AI works differently. Your agent is software installed on your Mac. It runs locally, like any other application. Your conversations, files, and agent memory are stored on your hard drive — not in a cloud database.

When your agent needs to reason — generate a response, analyse a document, draft an email — it sends a prompt to a cloud AI model API (Claude by Anthropic). The prompt is processed and the response comes back. But your files aren't uploaded. Your email archive isn't indexed on a remote server. Your conversation history lives on your machine.
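As a rough sketch of that boundary (the names here are illustrative, not Stomme AI's actual code), the design reduces to: only the assembled prompt string crosses the network, while conversation history and files are written to local storage.

```python
# Hypothetical sketch of the local-first boundary described above.
# LOCAL_STORE and send_to_cloud_model are illustrative names, not real APIs.

LOCAL_STORE = {"history": [], "files": {}}  # persisted on the Mac's disk


def send_to_cloud_model(prompt: str) -> str:
    """Stand-in for the one network call (e.g. a cloud model's message API).
    In the real agent, only this prompt string leaves the machine."""
    return f"[model response to {len(prompt)}-char prompt]"


def handle_turn(user_message: str) -> str:
    prompt = user_message  # plus whatever minimal context the turn needs
    response = send_to_cloud_model(prompt)  # the only data crossing the network
    LOCAL_STORE["history"].append((user_message, response))  # stays local
    return response
```

The point of the sketch is the asymmetry: the prompt is ephemeral and outbound, while everything accumulated over time lands in the local store.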

What stays on your Mac:

  • All conversations with your agent
  • Files your agent creates or reads
  • Your agent's memory (preferences, context, patterns)
  • Connected service credentials (stored in macOS Keychain)

What passes through external servers:

  • AI reasoning prompts and responses (Anthropic cloud API)
  • Web search queries (Brave Search API)

What we store on our infrastructure:

  • Your account information (name, email, billing)
  • Subscription and billing data (via Stripe)
  • Service metadata (agent health checks, usage counters)

That's it. We don't have access to your conversations, your files, or your agent's memory.

What about onboarding?

When you first set up your agent, we ask about your preferences, tools, and working style through an onboarding form. That information is processed on our servers to configure your agent — personalising its behaviour, connecting your services, and building its working profile.

Once your agent is configured and running on your Mac, that onboarding data is deleted from our servers. It's not stored, not logged, and not used to train any models.

The approval gate

Your agent is autonomous — it works without you watching. But it's not unsupervised.

High-impact actions require your explicit approval before they execute:

  • Sending email on your behalf — your agent drafts, you approve
  • Deleting files — your agent flags, you confirm
  • Accessing a new service — your agent requests, you grant
  • Installing or modifying anything — blocked by default
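In code, a gate like the one above comes down to a check before execution plus an audit trail. This is a minimal sketch under assumed names (the action labels and structure are illustrative, not the agent's real implementation):

```python
# Hypothetical approval-gate sketch; all names here are illustrative only.
HIGH_IMPACT = {"send_email", "delete_file", "grant_access", "install"}
AUDIT_LOG = []  # in the real system, surfaced for review in Mission Control


def execute(action: str, detail: str, approve) -> str:
    """Run an action, blocking high-impact ones until `approve` says yes."""
    if action in HIGH_IMPACT and not approve(action, detail):
        AUDIT_LOG.append(("blocked", action, detail))
        return "blocked: awaiting approval"
    AUDIT_LOG.append(("executed", action, detail))
    return f"executed: {action}"
```

A drafted email, for example, only executes once `approve` returns `True`; low-impact actions pass straight through, and every outcome, approved or blocked, lands in the log.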

You set the boundaries. Your agent respects them. Every action is logged and auditable through Mission Control.

Think of it like a new colleague: you give them access to what they need, you set clear expectations about what requires your sign-off, and you review their work until you're confident. Then you gradually extend trust.

How this compares to ChatGPT

With ChatGPT, your conversations are stored on OpenAI's servers. OpenAI's privacy policy covers how they handle that data, including potential use for model training (though you can opt out). Your questions, your documents, your ideas — they pass through and potentially stay on infrastructure you don't control.

With Stomme AI, your conversations never leave your Mac. The cloud AI model sees individual reasoning prompts (the same way ChatGPT sees your messages), but your conversation history, your files, and your agent's accumulated knowledge remain local.

The practical difference: if OpenAI has a data breach, your ChatGPT history is potentially exposed. If Stomme AI has a data breach, your conversations aren't affected — because we never had them.

What happens if you cancel

Your agent files stay on your Mac. Your conversation history, your agent's memory, your configured workflows — all of it remains on your machine. We delete your account from our infrastructure (billing records, usage counters, agent health data). You keep everything you created.

You can even export your agent's configuration if you want to set up a DIY OpenClaw installation later. The technology is open-source. Your data was always yours.

What we can't promise

Transparency means being honest about limitations too:

  • AI model providers see your reasoning prompts. When your agent sends a query to Claude, that prompt reaches Anthropic's servers. We can't control what happens inside their infrastructure. They have privacy policies and data handling practices you should understand.
  • Web searches are visible to the search provider. When your agent uses Brave Search, the query goes through Brave's API. This is the same as you searching the web yourself.
  • If someone accesses your Mac, they access your agent. Local means local — your agent's security is your machine's security. FileVault encryption, a strong password, and standard macOS security practices protect your data.

The architecture decision

We didn't make your agent run locally because it was easier. It wasn't. Cloud-based agents are simpler to build, simpler to maintain, and simpler to scale.

We chose local execution because it's the right architecture for a personal AI agent. Your agent handles your email, your calendar, your files, and your work. That data should stay where it belongs — with you.

Privacy shouldn't be a feature you pay extra for. It should be how the system is built.

Ready to meet your agent?

Setup takes under an hour. No technical knowledge required.

Start for free