ownify.roadmap

We can read your data today. We promise procedurally not to. We are building, in the open, the architecture under which we won't be able to.

This page describes direction, not a timeline. There are no quarters, no “coming Q4”, no projected launch dates. The phases below are ordered by what each one makes possible, not when we promise to ship it. We'd rather move slower and tell you the truth than commit to a calendar and quietly miss it.

What we commit to forever

These are principles, not features. They constrain every future architectural decision. If we ship something that violates one of these, it's a regression and we owe you an explanation.

Your data is yours.

We hold your agent's memory, conversations, and operational state on your behalf. We do not analyse it, sell it, train on it, or ship features that require us to read it in plaintext.

Your model should be yours.

Today inference runs on shared infrastructure (Fireworks under Zero Data Retention). The architecture is built so this dependency shrinks over time — first per-tenant, then on-device. We don't want to be a permanent middleman between you and a model.

Privacy should be cryptographic, not procedural.

“We promise not to look” is a procedural assurance — it depends on us being trustworthy and our access controls being correct. Cryptographic privacy means we cannot look, even if we wanted to. Each phase below moves us closer to that.

Sovereignty is the destination.

The strongest version of “your own AI agent” is one where the model and its memory live on a device you own, and our cloud is just identity, encrypted sync, and an inter-agent relay. That's the architectural endpoint we're heading toward.

Where we are today

Honest baseline. Read this as “current capability”, not “final state”.

Per-customer isolation
Each tenant runs in its own Kubernetes namespace, with its own pod, storage volume, and database scope. Not a row in a shared table.
Discrete, inspectable memory
Memory lives as ACL-checked drawers — you can list, search, export, and delete them individually. Every read and write is logged.
Verifiable identity (opt-in)
When you activate it, your agent gets a MolTrust DID and an on-chain anchor. Counterparties can verify exactly who they're dealing with. Off by default — you choose when.
EU-hosted application + storage
App, Postgres, Matrix, Zitadel, and agent pods all run on OVH Germany. Data does not leave the EU for storage.
Inference via Zero Data Retention
LLM calls go to Fireworks.ai under their Zero Data Retention contract — prompts and completions are not stored or used for training. But: the requests pass through Fireworks' infrastructure during inference, and through our central router on the way.
Operator access is procedurally restricted
With cluster admin credentials we can read tenant pod state, Synapse message contents (rooms are not yet E2EE), and central router traffic. We promise not to. There is no technical barrier preventing it. The next phase below addresses this.
Central router visibility
Every chat completion request from any tenant agent passes through one shared router pod for model selection. We don't log message bodies, but we could; they pass through. The per-tenant routing migration below removes this central choke point.
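The drawer model described under "Discrete, inspectable memory" reduces to a small pattern. The following is a minimal, illustrative Python sketch, not our actual implementation; every class and method name in it is hypothetical. The point is the shape: every read, write, and delete is ACL-checked first and lands in an append-only audit log, including denied attempts.

```python
import time
from dataclasses import dataclass, field

class AccessDenied(Exception):
    pass

@dataclass
class Drawer:
    name: str
    owner: str
    items: dict = field(default_factory=dict)

class DrawerStore:
    """Toy ACL-checked memory store: every access lands in an audit log."""

    def __init__(self):
        self.drawers = {}
        self.audit_log = []  # append-only rows: (timestamp, principal, action, drawer)

    def _check(self, principal, drawer):
        # ACL check runs before anything else; a denial is itself logged.
        if principal != drawer.owner:
            self.audit_log.append((time.time(), principal, "denied", drawer.name))
            raise AccessDenied(f"{principal} may not access {drawer.name}")

    def create(self, owner, name):
        self.drawers[name] = Drawer(name, owner)

    def write(self, principal, name, key, value):
        d = self.drawers[name]
        self._check(principal, d)
        d.items[key] = value
        self.audit_log.append((time.time(), principal, "write", name))

    def read(self, principal, name, key):
        d = self.drawers[name]
        self._check(principal, d)
        self.audit_log.append((time.time(), principal, "read", name))
        return d.items[key]

    def delete_drawer(self, principal, name):
        d = self.drawers[name]
        self._check(principal, d)
        del self.drawers[name]
        self.audit_log.append((time.time(), principal, "delete", name))
```

Because drawers are discrete objects rather than rows in a shared table, "export this one" and "delete this one" are single operations on a single object, and the audit log answers "who touched it" without reconstruction.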

On the architectural horizon

Each phase is named by what becomes possible, not by when we ship it. They are ordered because each enables the next.

Next

Per-tenant scoped runtime + visible operator access

The central router becomes a per-tenant component, so cross-tenant blast radius from a router-level incident drops to one tenant. Every operator action that touches a tenant namespace becomes a row in that tenant's audit log — visible in their dashboard. No silent reads.

What this unlocks
  • Removes the central choke point that today sees every tenant's traffic.
  • Makes "operator looked at your data" a question with an answer, not a vibe.
  • A prerequisite for confidential compute (you cannot meaningfully run a multi-tenant pod inside a TEE).
What it does not yet solve
  • Operator can still kubectl exec into the per-tenant pod — the audit row makes the access visible, not impossible.
  • Matrix DM rooms are not yet end-to-end encrypted; Synapse still sees plaintext.
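"Every operator action that touches a tenant namespace becomes a row in that tenant's audit log" is a small, enforceable pattern. A hypothetical Python sketch (all names invented for illustration): any code path that reaches into a tenant namespace writes the audit row before doing anything else, so there is no silent-read code path to take.

```python
import datetime
import functools

# One audit log per tenant, surfaced in that tenant's own dashboard.
TENANT_AUDIT = {}  # tenant_id -> list of audit rows

def audited(action):
    """Decorator: record an audit row in the tenant's log before the action runs."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(operator, tenant_id, *args, **kwargs):
            TENANT_AUDIT.setdefault(tenant_id, []).append({
                "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "operator": operator,
                "action": action,
            })
            return fn(operator, tenant_id, *args, **kwargs)
        return inner
    return wrap

@audited("read_pod_state")
def read_pod_state(operator, tenant_id):
    # Placeholder for the actual namespace access.
    return f"state of {tenant_id}"
```

The audit row makes the access visible, not impossible; that is exactly the gap the confidential-compute phase below this one exists to close.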

After

Confidential compute + tenant-controlled keys

The per-tenant runtime moves into a confidential-compute environment (Intel TDX, AMD SEV-SNP, or equivalent). Memory and disk are encrypted with a key the host node does not hold. Tenants can verify the runtime via remote attestation. Operator with cluster credentials sees ciphertext, not plaintext.

What this unlocks
  • "Operator cannot read your data" stops being a procedural promise and becomes a verifiable property.
  • Backups and snapshots are protected at rest by tenant-held keys, not just by our discipline.
  • Removes the "trust us, we promise" framing from the trust page entirely.
What it does not yet solve
  • Inference still runs on shared cloud infrastructure (Fireworks) — confidential compute protects our cluster, not their racks.
  • Hardware availability constrains where we can run this; not every region has it cheaply.
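Stripped to its core, remote attestation is a tenant-side comparison: does the measurement inside a hardware-signed quote match the published measurement of the build you expect? A deliberately minimal Python sketch, with all names illustrative; a real verifier also validates the hardware vendor's certificate chain and a freshness nonce, which this toy omits.

```python
import hashlib
import hmac

# Published measurement of the runtime build the tenant expects
# (illustrative value; in practice this comes from a reproducible build).
EXPECTED_MEASUREMENT = hashlib.sha256(b"runtime-build-v1").hexdigest()

def verify_attestation(quote: dict) -> bool:
    """Tenant-side check: does the attested runtime match the published build?

    A real verifier additionally checks the vendor signature chain over the
    quote and a nonce proving the quote is fresh, not replayed.
    """
    measurement = quote.get("measurement", "")
    # Constant-time comparison, as for any secret-adjacent equality check.
    return hmac.compare_digest(measurement, EXPECTED_MEASUREMENT)
```

This is what turns "operator cannot read your data" from a promise into a property: the tenant checks the runtime, rather than taking our word for what is running.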

On the horizon

On-device model + cloud as identity, sync, and relay

The agent's reasoning loop moves onto a device you own — phone, laptop, or local box. Memory either lives on the device too, or sits in our cloud as ciphertext that only your device can decrypt. Our cloud's role narrows to identity issuance, encrypted sync, the inter-agent reputation graph, and an A2A message relay.

What this unlocks
  • "We cannot read your data" is literally true — it is not on our hardware.
  • No round-trip latency for normal interactions; your agent works offline.
  • Subscription value shifts from "we run inference for you" to "we run the network and the identity layer".
What it does not yet solve
  • Depends on a small enough model (~7B params) being good enough for general agent work — that is a real research bet, not a date we can promise.
  • Mobile app store policies, especially iOS, may constrain what we can ship.
  • Tool execution (HTTP fetches, email sends, A2A calls) probably stays in the cloud for safety — the brain moves local, the hands stay remote.
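The end-state sync model can be illustrated with a toy: the device generates a key that never leaves it, and the cloud's storage API accepts and returns only opaque blobs. The sketch below is purely illustrative; its SHA-256 keystream cipher is a stand-in for a real authenticated cipher such as AES-GCM, and every class name is hypothetical.

```python
import hashlib
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream from SHA-256 -- illustration only.
    # A real client would use an AEAD cipher (e.g. AES-GCM), never this.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class Device:
    """The key is generated on-device and never leaves it."""

    def __init__(self):
        self._key = os.urandom(32)

    def encrypt(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(16)
        ks = _keystream(self._key, nonce, len(plaintext))
        return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

    def decrypt(self, blob: bytes) -> bytes:
        nonce, body = blob[:16], blob[16:]
        ks = _keystream(self._key, nonce, len(body))
        return bytes(a ^ b for a, b in zip(body, ks))

class SyncCloud:
    """The cloud's role narrows to storing opaque blobs it cannot decrypt."""

    def __init__(self):
        self._blobs = {}

    def put(self, tenant: str, blob: bytes):
        self._blobs[tenant] = blob  # ciphertext in, ciphertext out

    def get(self, tenant: str) -> bytes:
        return self._blobs[tenant]
```

Note what the cloud class does not have: a key. "We cannot read your data" stops being a policy about our code and becomes a property of what our code receives.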

Things deliberately not on the roadmap

Constraints on future-us. Calling them out so you know what we're refusing as a matter of architecture, not as a marketing line.

  • We will not sell aggregate analytics derived from tenant data — anonymised or otherwise.
  • We will not train models on your conversations, memory, or actions.
  • We will not add features that require us to read your memory in plaintext (telemetry “for product improvement”, automated content moderation across tenants, etc.).
  • We will not build a backdoor for ourselves under any framing — “for compliance”, “for support”, “for safety”.
  • We will not adopt closed-weight LLMs that prevent us from ever moving inference local.

This page reflects current thinking. Last reviewed 2026-04-28. We will edit it when reality changes — including when a phase we've named becomes infeasible, when a better path appears, or when we discover a constraint we didn't anticipate. If you bought ownify partly because of something on this page and we change it, you'll hear about it directly, not by quietly editing the page.