ownify.economics

Where the money comes from. Where the money goes.

Most AI products treat unit economics as a trade secret. We don't. The formula below is exact, the variables are real, and the numbers are pulled from our actual production cluster. You can plug your own numbers in and get the same kind of answer.

The point is not to look smart. The point is that an investor, a customer, or a tenant can verify whether what we say on /why-ownify and /roadmap actually pencils out — using arithmetic, not trust.

The formula

Per-agent revenue per month

R = R_sub + R_topup
  • R_sub — the tier subscription (Solo €12, Duo €22, Team €32, Pro €99, Business €279)
  • R_topup — wallet credits consumed at the customer-facing rate (Fireworks list × 1.47)

Note: the 1.47 markup applies to what customers pay. Our actual cost to Fireworks is the raw list price — see C_inf below.
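The two revenue terms can be sketched in a few lines of Python. The tier prices and the 1.47 markup come straight from this page; the function name and the top-up inputs are illustrative, not part of any real ownify API.

```python
# Per-agent monthly revenue: R = R_sub + R_topup.
# Tier prices and the 1.47 markup are from this page; everything else is illustrative.

FIREWORKS_MARKUP = 1.47  # customer-facing rate = Fireworks list price x 1.47

TIER_PRICES = {"solo": 12, "duo": 22, "team": 32, "pro": 99, "business": 279}

def monthly_revenue(tier: str, topup_tokens_m: float, list_rate_per_m: float) -> float:
    """R = R_sub + R_topup, where top-up credits bill at list x 1.47."""
    r_sub = TIER_PRICES[tier]
    r_topup = topup_tokens_m * list_rate_per_m * FIREWORKS_MARKUP
    return r_sub + r_topup

# A Solo agent burning 2M extra output tokens at a €0.60/1M list rate:
# 2 x 0.60 x 1.47 = €1.76 of top-up on top of the €12 subscription.
print(round(monthly_revenue("solo", 2.0, 0.60), 2))  # 13.76
```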

Per-agent cost per month

C = C_inf + C_fixed + C_store + C_payment

C_fixed = OVH_cluster_bill / N_active_agents
  • C_inf — actual Fireworks invoice per agent: Σ_model (calls × avg_prompt × in_rate + calls × avg_out × out_rate)
  • C_fixed — the real OVH cluster bill (currently €360/month, covering managed Kubernetes + nodes + load balancer + storage volumes + bandwidth) divided by the number of active agents
  • C_store — marginal storage cost above what's already in the cluster bill (Solo ships 5 GB of agent-specific PVC; ~€0.05/GB-month)
  • C_payment — payment fees (Stripe + Stripe Tax) ≈ R × 2.9%
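The four cost terms translate directly into code. This is a sketch with made-up names, using the rates quoted above (€360 cluster bill, ~€0.05/GB-month marginal storage, ~2.9% payment fees); feed it your own traffic shape.

```python
# Per-agent monthly cost: C = C_inf + C_fixed + C_store + C_payment.
# Rates are the ones quoted on this page; function/parameter names are ours.

def monthly_cost(calls, avg_prompt, avg_out, in_rate, out_rate,
                 n_active_agents, extra_gb, revenue,
                 cluster_bill=360.0, store_rate=0.05, payment_pct=0.029):
    c_inf = calls * (avg_prompt * in_rate + avg_out * out_rate) / 1_000_000
    c_fixed = cluster_bill / n_active_agents          # real bill / active agents
    c_store = extra_gb * store_rate                   # marginal PVC above the bill
    c_payment = revenue * payment_pct                 # Stripe + Stripe Tax
    return c_inf + c_fixed + c_store + c_payment

# A Solo-shaped agent (500 calls/month, 27,670 prompt + 976 completion tokens
# per call) at €0.15/€0.60 list rates, with 100 active agents and 5 GB of PVC:
print(round(monthly_cost(500, 27_670, 976, 0.15, 0.60, 100, 5, 12.0), 2))  # 6.57
```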

Customer acquisition cost (CAC)

C_CAC = R × 5%

We commit to a 5% of revenue CAC ceiling at steady state. That's a deliberately lean, organic-leaning growth budget — content, community, conferences, modest affiliate. It funds growth that earns its keep, not paid acquisition that races for vanity metrics. Early-stage acquisition (the first 100-300 customers) will run higher than 5% and is funded from runway, not monthly cashflow. The 5% target is the steady-state assumption; we name it as a constraint to keep future-us honest.

Operating burn (must be covered by gross margin)

Burn = founder + R&D_tools + ops_misc
     = €6,000 + €500 + €200
     = €6,700/month

Founder comp is one full-time equivalent at modest founder pay. R&D / tools is observability, dev services, accounting tools, etc. Ops is legal + miscellaneous. We publish this because transparent burn matters more than transparent revenue — it tells you what gross margin has to clear before the platform is self-sustaining.

Per-agent gross margin and platform net

GM = R - C
Net = (GM × N_paid) - C_CAC - Burn

Operating break-even is the smallest N_paid at which Net ≥ 0. The break-even ladder below shows it explicitly across realistic scale points.

Real numbers, today and at scale

Solo tier (€12/mo) on the average production-traffic shape we see for our most active dogfood tenant: 27,670 prompt tokens and 976 completion tokens per call, at 500 calls per month. Two scenarios are shown side by side: today (N=2 active agents), the honest development-stage reality where fixed cluster cost is split across almost nobody, and at designed capacity (N=100), where the current cluster sizing converges. The cluster bill of €360/month is real (current OVH invoice).

Variable | Today N=2, Kimi K2.6 (pre-fix) | Today N=2, gpt-oss-120b (post-fix) | At capacity N=100, gpt-oss-120b | v3 forecast, local model
Our cost — input (€/1M, Fireworks list) | €0.95 | €0.15 | €0.15 | —
Our cost — output (€/1M, Fireworks list) | €4.00 | €0.60 | €0.60 | —
Customer rate (€/1M, list × 1.47) | €5.88 | €0.88 | €0.88 | —
C_inf (inference, monthly) | €15.10 | €2.37 | €2.37 | €0.00
C_fixed (€360 cluster ÷ N) | €180.00 | €180.00 | €3.60 | €3.60
C_store | €0.25 | €0.25 | €0.25 | €0.50
C_payment | €0.35 | €0.35 | €0.35 | €0.35
C total | €195.69 | €182.97 | €6.57 | €4.45
R (Solo) | €12.00 | €12.00 | €12.00 | €12.00
Gross margin (per agent) | €-183.69 | €-170.97 | €5.43 | €7.55
GM% | -1531% | -1425% | 45% | 63%
Reading this honestly: the two left columns are real numbers at today's tiny scale — we're losing money per agent because the €360 cluster bill is divided across 2 dogfood tenants instead of a real customer base. That's normal pre-launch shape; the point of the table is to show where the unit economics converge. The third column is the same per-agent inference cost spread across the cluster's designed capacity (~100 agents) — that's where Solo tier crosses into healthy SaaS gross margin. The v3 column is a forecast: depends on the on-device model bet (see /roadmap for what would have to be true). Higher tiers (Duo €22 → Business €279) carry materially better margin because the fixed line amortizes over a larger revenue base.
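The C_inf cells in the table can be re-derived from the stated traffic shape with nothing but the formula in the cost section; doing so also quantifies the routing fix. Pure arithmetic, no assumptions beyond this page:

```python
# Re-deriving the pre-fix and post-fix inference cost from the traffic shape:
# 500 calls/month, 27,670 prompt + 976 completion tokens per call.

CALLS, PROMPT_TOK, OUT_TOK = 500, 27_670, 976

def c_inf(in_rate, out_rate):
    """Monthly inference cost at the given Fireworks list rates (€/1M tokens)."""
    return CALLS * (PROMPT_TOK * in_rate + OUT_TOK * out_rate) / 1_000_000

pre_fix  = c_inf(0.95, 4.00)   # Kimi K2.6 rates
post_fix = c_inf(0.15, 0.60)   # gpt-oss-120b rates

print(f"{pre_fix:.2f} {post_fix:.2f}")   # 15.10 2.37
print(round(pre_fix / post_fix, 1))      # 6.4 — the ~6x drop from the routing fix
```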

Where every Solo €12 goes (at designed capacity)

  • Inference (Fireworks): €2.37 (20%)
  • Cluster infra share (€360 ÷ ~100 agents): €3.60 (30%)
  • Storage: €0.25 (2%)
  • Payment fees (Stripe): €0.35 (3%)
  • Gross margin (R&D + ops + buffer): €5.43 (45%)

This view assumes the cluster is at its designed capacity (~100 active Solo tenants). At today's 2-tenant dogfood scale, the cluster-share line is much larger and we're below break-even — that's development-stage reality, and the point of publishing this table is to show where the model converges, not to claim today's P&L is healthy. Gross margin (when we reach it) funds: salaries, the v2/v3/v4 architectural arc, support + on-call, runway buffer, and eventually profit.

Operating break-even ladder

Revenue assumed at the weighted-average ARPU of €30 across the Solo → Business tier mix. Trial dilution: 1 active trialer for every 3 paid agents (a typical pre-conversion funnel). Cluster cost grows with N (a rough linear model above the €360 baseline). The 5% CAC is included.

N (paid) | Trialers | Revenue | COGS | Gross margin | CAC (5%) | Burn | Net / mo
100 | 33 | €3000 | €982 | €2018 | €150 | €6700 | €-4832
200 | 66 | €6000 | €1609 | €4391 | €300 | €6700 | €-2609
400 ← break-even | 132 | €12000 | €2861 | €9139 | €600 | €6700 | +€1839
800 | 264 | €24000 | €5367 | €18633 | €1200 | €6700 | +€10733
1500 | 495 | €45000 | €9751 | €35249 | €2250 | €6700 | +€26299
Read-out: operating break-even sits at N≈400 paid customers across the assumed tier mix. Below that, the platform requires bridge funding from runway — that's the normal pre-scale shape and we don't pretend otherwise. Above break-even, the surplus funds runway buffer, the v3 architectural arc, and (after that) profit. None of these numbers are projected — they're what the formula gives you when you plug today's real costs in. If you doubt them, plug your own assumptions in and check.
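A quick way to audit the ladder: plug any row's Revenue and COGS totals back into Net = (R − C) − CAC − Burn. A minimal check in Python (the function name is ours; the figures are the ladder's own):

```python
# Re-checking ladder rows with Net = Gross margin - CAC - Burn.
# Here revenue and cogs are platform-wide totals, so GM x N_paid collapses
# to (revenue - cogs).

BURN = 6700.0       # €/month operating burn, from the burn section
CAC_PCT = 0.05      # CAC ceiling as a share of revenue

def net(revenue, cogs):
    gross_margin = revenue - cogs
    return gross_margin - revenue * CAC_PCT - BURN

print(net(6000, 1609))    # N=200 row: -2609.0 (below break-even)
print(net(12000, 2861))   # N=400 row: 1839.0 (first rung above break-even)
```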

How the business model evolves with the architecture

A consistent worry is: “if your roadmap moves model + memory to the user's device, doesn't that kill your SaaS?” Short answer: no — the unit being sold changes. Long answer in three phases.

Today (v1) — agent-as-a-service

You pay for: hosted agent runtime + LLM use through us.
We run: per-tenant pod, memory store, inference proxy, identity.
Margin: ~50% gross margin per Solo agent on production traffic (post-fix).

Next (v2) — same shape, harder isolation

You pay for: same as today, same prices.
We add: per-tenant scoped router + visible operator-access audit.
Margin: margin profile unchanged; what improves is the trust story, not the unit economics.

On the horizon (v3) — local model, cloud memory + identity

You pay for: identity slot + sync infrastructure + occasional cloud-fallback inference + tool execution.
We run: identity issuance, encrypted sync, A2A relay, tool sandboxes, fallback GPU pool.
Margin: ~67% per-agent margin OR an option to drop subscription price by ~€4 and hold today's margin — both levers available.

Architectural endpoint (v4) — local everything, cloud is the carrier

You pay for: per-agent identity + reputation slot + relay, per-human aggregate.
We run: the inter-agent trust network — identity, reputation, routing, the consent protocol that lets cloud agents request access from your local device.
Margin: expected ARPU per human = N agents × per-agent fee + cloud fallback (typical N = 3-10). The per-agent margin profile holds; revenue scales with N.

The mistake is conflating platform with compute. Email is the precedent: every mailbox runs locally (or in iCloud / Gmail), but SMTP, DNS, deliverability, anti-abuse, and identity are paid platform layers. We're positioning to be the platform layer for personal AI agents — independent of where the inference actually runs.

The proof: we run on it ourselves

Every number on this page is pulled from our own production cluster. The founder runs multiple ownify agents and depends on them daily. When something is broken, we feel it before any customer does — and you can see the receipts.

Concrete recent example: the routing fix shipped on 2026-04-28 dropped per-agent inference cost on the bulk path by ~6×. We discovered the bug because we run the system, looked at the spend, and noticed the math didn't match the page. The sequence (diagnosis → fix → verification → cost-shape change) is the kind of thing customers and investors should be able to see in real time.

A live transparency page with current cluster numbers and the founder's own agent stats is on the way. When it lands it will link from here.

This page reflects current thinking and current numbers. Last reviewed 2026-04-28. The cost lines are real per-agent estimates; aggregate fixed costs are amortized at our current active-agent count. Numbers will move with scale. Forecast columns (v3) depend on architectural milestones that aren't guaranteed — they're the math if the roadmap delivers, not promises that it will.
Architectural roadmap →Why ownify →See plans →