# Quickstart
Install usertrust and make your first governed LLM call in 2 minutes.
## Install

```bash
npm install usertrust
```

## Initialize the Vault

```bash
npx usertrust init
```

This creates the `.usertrust/` directory with the default config, policy rules, and audit chain.
## Wrap Your Client
`trust()` is an async factory that returns a Proxy-wrapped client. It works with the Anthropic, OpenAI, and Google SDKs via duck typing — no provider imports needed.

```typescript
import Anthropic from "@anthropic-ai/sdk";
import { trust } from "usertrust";

// trust() is async — it sets up the ledger + audit chain
const client = await trust(new Anthropic(), {
  budget: 50_000, // 50,000 usertokens ($5.00)
  dryRun: true,   // skip TigerBeetle for quick testing
});

// Returns { response, governance } — NOT response._governance
const { response, governance } = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
});

console.log(response.content);
console.log(governance); // TrustReceipt with cost, budgetRemaining, auditHash

// REQUIRED — process hangs without this
await client.destroy();
```

Always call `client.destroy()` when done. It releases the TigerBeetle connection, advisory locks, and pending transfers.
## What Happens Under the Hood

Every governed call executes a two-phase spend:

- **PENDING** — Reserve tokens atomically before the LLM call
- **Forward** — Call the real SDK method
- **POST** — Settle the hold on success
- **VOID** — Release the hold on failure

This is the same pattern banks use for credit card authorization holds.
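In code, the hold-then-settle flow looks roughly like this self-contained toy ledger (`ToyLedger` and `governedCall` are illustrative names, not the real internals, which use TigerBeetle transfers):

```typescript
// Toy two-phase ledger: reserve before the call, settle or void after.
class ToyLedger {
  constructor(private balance: number, private held = 0) {}

  reserve(amount: number): void {
    if (amount > this.balance - this.held) throw new Error("budget exceeded");
    this.held += amount; // PENDING: hold the tokens before the LLM call
  }

  settle(reserved: number, actual: number): void {
    this.held -= reserved;
    this.balance -= actual; // POST: charge what the call really cost
  }

  release(reserved: number): void {
    this.held -= reserved; // VOID: the call failed, free the hold
  }

  get remaining(): number {
    return this.balance - this.held;
  }
}

async function governedCall<T>(
  ledger: ToyLedger,
  estimate: number,
  fn: () => Promise<{ result: T; cost: number }>,
): Promise<T> {
  ledger.reserve(estimate); // 1. PENDING
  try {
    const { result, cost } = await fn(); // 2. Forward to the real SDK
    ledger.settle(estimate, cost); // 3. POST
    return result;
  } catch (err) {
    ledger.release(estimate); // 4. VOID
    throw err;
  }
}

const ledger = new ToyLedger(50_000);
await governedCall(ledger, 1_000, async () => ({ result: "ok", cost: 342 }));
console.log(ledger.remaining); // 49658
```

The key property: the budget can never be overspent by concurrent calls, because tokens are held before the provider is ever contacted.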
## The Governance Receipt

Every call returns a `TrustReceipt`:

```typescript
{
  transferId: "tx_m4k7z_a1b2c3",
  cost: 342,                // usertokens spent
  budgetRemaining: 49_658,  // tokens left
  auditHash: "a1b2c3d4...", // SHA-256 hash in the chain
  chainPath: ".usertrust/audit/events.jsonl",
  settled: true,            // POST succeeded
  model: "claude-sonnet-4-6",
  provider: "anthropic",
  timestamp: "2025-01-15T...",
}
```

## Supported Providers
usertrust detects your SDK by structural shape:
| Provider | Detection | Method Intercepted |
|---|---|---|
| Anthropic | `client.messages.create` | `messages.create()` |
| OpenAI | `client.chat.completions.create` | `chat.completions.create()` |
| Google | `client.models.generateContent` | `models.generateContent()` |
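Structural detection of this kind can be sketched as follows (`detectProvider` is a hypothetical illustration of the table above, not usertrust's actual internals):

```typescript
type Provider = "anthropic" | "openai" | "google" | null;

// Pick the provider from the client's structural shape, not its class.
function detectProvider(client: any): Provider {
  if (typeof client?.messages?.create === "function") return "anthropic";
  if (typeof client?.chat?.completions?.create === "function") return "openai";
  if (typeof client?.models?.generateContent === "function") return "google";
  return null;
}

console.log(detectProvider({ messages: { create: async () => ({}) } })); // → "anthropic"
```

Because only the shape matters, any object exposing one of these method paths can be governed, including mocks in tests.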
```typescript
// OpenAI
import OpenAI from "openai";
const client = await trust(new OpenAI(), { budget: 50_000 });
const { response } = await client.chat.completions.create({ ... });
```

```typescript
// Google
import { GoogleGenAI } from "@google/genai";
const client = await trust(new GoogleGenAI({ apiKey }), { budget: 50_000 });
const { response } = await client.models.generateContent({ ... });
```

## Streaming
Streaming works transparently. The governance receipt resolves when the stream completes:
```typescript
const { response, governance } = await client.messages.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
});

// Consume the stream
for await (const event of response) {
  process.stdout.write(event.type);
}

// Receipt is available after stream ends
const receipt = await governance;
```

## Dry-Run Mode
Skip TigerBeetle entirely while keeping the audit chain and policy engine active:
```typescript
const client = await trust(new Anthropic(), {
  dryRun: true, // or set USERTRUST_DRY_RUN=true
  budget: 50_000,
});
```

Use dry-run in CI, tests, and local development.
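The environment-variable fallback mentioned in the snippet above can be wired up like this sketch (`resolveDryRun` is a hypothetical helper, not part of the usertrust API):

```typescript
// Resolve the dryRun flag from an explicit option or the environment.
function resolveDryRun(optionValue?: boolean): boolean {
  if (optionValue !== undefined) return optionValue; // explicit option wins
  return process.env.USERTRUST_DRY_RUN === "true"; // fall back to the env var
}

process.env.USERTRUST_DRY_RUN = "true";
console.log(resolveDryRun());      // true  (from the environment)
console.log(resolveDryRun(false)); // false (explicit option wins)
```

This lets CI enable dry-run globally (e.g. exporting `USERTRUST_DRY_RUN=true` in the pipeline) without changing any application code.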