Embed your ownify agent on any website
Once you've provisioned an agent on ownify and toggled Open public chat in the dashboard, your agent is reachable from any website. Drop two lines onto your site, or import the npm package into your build. Visitors don't need an ownify account — they just chat.
Step 1 — turn on public chat for your agent
On your agent's dashboard page, click Open public chat. That registers a self-signed ACL grant so your agent will accept inbound messages from anonymous web visitors. The toggle gives you back two things:
- Your agent's public slug (looks like your-tenant-abc123).
- The exact embed snippet with your slug filled in — copy/paste-ready.
Until you toggle this on, the chat endpoint returns 404 for your slug. Default-deny by design.
Step 2a — drop into any HTML page (zero build)
<script src="https://ownify.ai/chat-widget.js" defer></script>
<div data-ownify-chat data-slug="your-tenant-slug-here"></div>
That's it. The widget is vanilla JS, ~5 KB, no dependencies, self-scoped CSS. Customise via data-* attributes:
- data-greeting — first agent message shown before any user input.
- data-placeholder — input placeholder text.
- data-endpoint — override the POST URL (for self-hosted gateways).
- data-caller-did — optional X-Caller-DID header (audit-only, no auth).
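As a sketch, here's the embed div with every attribute set. The attribute names come from the list above; the values are placeholders — the endpoint URL and DID in particular are made up for illustration, swap in your own.

<div
  data-ownify-chat
  data-slug="your-tenant-slug-here"
  data-greeting="Hi! Ask me anything about our docs."
  data-placeholder="Type a question"
  data-endpoint="https://gateway.example.com/api/chat/your-tenant-slug-here"
  data-caller-did="did:example:visitor-123"
></div>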
Step 2b — npm package (for build-system folks)
npm install ownify-chat-widget
import { mountOwnifyChat } from 'ownify-chat-widget';
mountOwnifyChat(document.getElementById('chat-root'), {
slug: 'your-tenant-slug-here',
greeting: "Hi! I'm <your agent name>. Ask me anything.",
});

Same widget, same behaviour, importable in React/Vue/Svelte/whatever. Source on GitHub, MIT-licensed.
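In React, a minimal sketch looks like this. mountOwnifyChat and its (element, options) signature are taken from the snippet above; whether it returns an unmount handle isn't shown here, so this version just mounts once on first render.

import { useEffect, useRef } from 'react';
import { mountOwnifyChat } from 'ownify-chat-widget';

export default function ChatPanel() {
  const rootRef = useRef(null);

  useEffect(() => {
    // Mount the widget into the ref'd div once the component is on screen.
    mountOwnifyChat(rootRef.current, {
      slug: 'your-tenant-slug-here',
      greeting: "Hi! I'm <your agent name>. Ask me anything.",
    });
  }, []);

  return <div ref={rootRef} />;
}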
Step 3 — programmatic access (Claude, ChatGPT, your own code)
The same endpoint works without the widget. Any LLM sandbox or curl can POST a JSON message and get the agent's reply:
curl -X POST https://ownify.ai/api/chat/<your-tenant-slug> \
-H "Content-Type: application/json" \
-d '{"message":"What can you do?"}'No API key. No SDK. No MCP install. The same security chain runs against every request: per-tool ACL, AAE envelope verification, MolTrust trust gate, per-IP rate limit, audit row. read_memory:* and invoke_tool:*stay unreachable from this path — they require explicit cross-tenant ACL grants.
Step 4 — see who's talking to your agent
Every public-chat call lands in your agent's audit log:
- Visitor hash — a stable per-session SHA-256 of (IP, user agent, session). Raw IP is never stored.
- Claimed DID — the optional X-Caller-DID the visitor self-declared.
- Client — the X-Ownify-Client header (e.g. chat-widget@1, or a custom value an LLM identifies itself with).
- jti, status, timing — same shape as your other A2A audit rows.
Browse it on your agent's Audit tab; filter by visitor hash to follow a single session.
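For intuition, a visitor hash like that could be derived roughly as below. This is an illustrative sketch, not ownify's exact recipe — the only grounded parts are that it's a SHA-256 over (IP, user agent, session) and that the raw IP is never stored.

import { createHash } from 'node:crypto';

// Illustrative only: one way a per-session visitor hash could be computed.
// The real derivation (delimiters, ordering, any salt) isn't specified here.
function visitorHash(ip, userAgent, sessionId) {
  return createHash('sha256')
    .update(`${ip}|${userAgent}|${sessionId}`)
    .digest('hex');
}

// Same visitor + same session => same stable hash; the raw IP never leaves this function.
console.log(visitorHash('203.0.113.7', 'Mozilla/5.0', 'sess-42'));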
Self-host the signing layer (optional)
If you'd rather sign envelopes yourself — independent operators, multi-domain setups, or anyone who doesn't want to rely on ownify.ai for the signing leg — install the open-source library:
npm install a2a-caller
Same security chain, same wire protocol. You bring your own MolTrust DID + AAE keypair. See the library README for the full Express-middleware setup.
Open-source pieces
- a2a-acl — receiver-side firewall library that runs against your envelope on the gateway.
- a2a-caller — sender-side library for self-hosted signing.
- ownify-chat-widget — the embeddable widget you drop on your site.
- Inside a2a-acl — architectural deep-dive on the receiver-side library.
- Per-tool ACL for the agent web — the design post explaining the capability model.