Publications

DKG: Dynamic Knowledge Gating

Deterministic orchestration layer for auditable AI agents. Moves routing and configuration outside the LLM — a single embedding call, no LLM calls in the routing path, ~50 ms latency, ~$0.0001 per query.
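The routing idea can be sketched in a few lines. This is a hypothetical illustration, not DKG's actual code: each route gets a precomputed centroid embedding, the query is embedded once, and the nearest centroid wins by cosine similarity — no LLM is involved in the decision.

```python
# Hypothetical sketch of embedding-only routing (not the actual DKG implementation).
# Route centroids here are toy 3-d vectors; in practice they would be
# precomputed embeddings of example queries for each route.
import math

ROUTES = {
    "search": (0.9, 0.1, 0.0),
    "write":  (0.1, 0.9, 0.1),
    "verify": (0.0, 0.2, 0.9),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def route(query_embedding):
    """Return the route whose centroid is most similar to the query embedding."""
    return max(ROUTES, key=lambda name: cosine(query_embedding, ROUTES[name]))

print(route((0.05, 0.1, 0.95)))  # verify
```

Because the decision is a pure function of the embedding, the same query always takes the same route — which is what makes the orchestration auditable.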

HDK: Tamper-Evident Audit Trails for LLM Interactions

A lightweight Python middleware that adds cryptographic provenance to any LLM API call. Hierarchical hash genealogy, canary commitment scheme, and Hedera HCS anchoring — sub-millisecond overhead, sub-cent cost.
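The core tamper-evidence mechanism can be shown with a minimal hash chain. This is a sketch of the general technique, not HDK's actual record format or APIs: each log entry commits to its payload and to the previous entry's hash, so altering any record invalidates every hash after it.

```python
# Minimal hash-chain sketch for tamper-evident LLM call logs
# (illustrative only; not HDK's record schema).
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash for the first entry

def chain_hash(prev_hash: str, payload: dict) -> str:
    # Canonical JSON so the same payload always hashes identically.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256((prev_hash + canonical).encode("utf-8")).hexdigest()

def append(log: list, payload: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"payload": payload, "hash": chain_hash(prev, payload)})

def verify(log: list) -> bool:
    prev = GENESIS
    for entry in log:
        if entry["hash"] != chain_hash(prev, entry["payload"]):
            return False  # this entry, or one before it, was modified
        prev = entry["hash"]
    return True

log = []
append(log, {"prompt": "summarize report", "model": "m1"})
append(log, {"prompt": "translate summary", "model": "m1"})
print(verify(log))  # True

log[0]["payload"]["prompt"] = "edited after the fact"
print(verify(log))  # False — tampering breaks the chain
```

Anchoring the latest hash to an external ledger (HDK uses Hedera HCS) then prevents the log holder from silently rewriting the whole chain.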

Self-Referential Quantum Barriers for AGI Containment

Everyone in AI safety knows the result: you cannot reliably contain a superintelligent agent. We found a hidden assumption in the proof — and a way around it using quantum mechanics.

Why Newsrooms Need Auditable AI

Journalists are reluctant to admit they use AI — not because it's unethical, but because there's no way to prove how they used it. SHA-256 hashing on a distributed ledger changes "trust me" into "verify it yourself."
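The "verify it yourself" step is simple in principle. A hypothetical sketch: the newsroom publishes the SHA-256 digest of the final artifact to the ledger, and any reader can recompute the digest from their copy and compare.

```python
# Hypothetical verification sketch: recompute a document's SHA-256 digest and
# compare it to the digest anchored on a distributed ledger. The anchored
# value below is a stand-in; in practice it would be read from the ledger.
import hashlib

def digest(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

anchored = digest("Final article text as submitted.")  # stand-in for on-ledger value

print(digest("Final article text as submitted.") == anchored)   # True: authentic copy
print(digest("Final article text, quietly edited.") == anchored)  # False: change detected
```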

Solving the Context Problem Gives You Reliable AI

Even when you give AI your full context, it loses it three messages later. Re-pasting everything with every query is an enormous waste of time. That's why AI in journalism has never been as helpful as it could be.

Why Orson AI

Built by a journalist who spent twenty years switching between five apps to write one article. One workspace where you research, write, verify, and analyze — all at once, side by side. AI that helps you work, not one that replaces it.