Your AI is making decisions.
But can you verify what it did?
Proof-of-Control is the category of AI security that delivers independent, tamper-resistant verification of what AI systems did. Right now, every organization deploying AI has the same problem: no verification of what their systems actually did. Proof-of-Control changes that, delivering accountability to your board, your regulators, and your customers.
Advanced AI Society is the industry association bringing together the companies building Proof-of-Control technologies, across every industry AI touches, and the enterprises that need them.
Proof-of-Control Summit
Our flagship annual gathering for the Proof-of-Control ecosystem.
Enterprise AI Security Roundtable
CISO-focused discussion on AI provability for regulated industries.
Insurance & AI Working Group
First underwriting pilot framework review with carrier partners.
Every enterprise deploying AI is asking the same question: how do we know what the machine did?
Logs can be fabricated. Policies describe intent, not behavior. Contracts assign blame after the fact. None of these constitute independent evidence.
This is the Verifiability Gap: the absence of independent evidence of what AI systems actually do. It's the reason enterprises are stuck between constraining their agents and accepting unmanaged risk. And without that evidence, trust evaporates.
What AI did
Actions, decisions, data accessed
The Verifiability Gap
No independent evidence
What you can prove
Logs? Policies? Contracts?
[ Proof-of-Control ] closes the gap
With Proof-of-Control, you can get evidence at any step of your AI workflow or agent lifecycle.
Human Authorizes
Agent Authenticates
Data Accessed
Boundary Crossed
Compliance Checked
Output Delivered
Context Retrieved
Payment Settled
Model Executes
Agent lifecycle — where Proof-of-Control could apply.
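To make "tamper-resistant" concrete: one common technique is a hash-chained evidence log, where each lifecycle event is cryptographically linked to everything recorded before it, so altering or deleting any past entry breaks verification. This is a minimal illustrative sketch of that general idea, not an Advanced AI Society specification or any member's implementation; all names here are hypothetical.

```python
import hashlib
import json

def append_event(chain, event):
    """Append a lifecycle event (e.g. 'Human Authorizes') to the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every link; any tampered entry invalidates the chain."""
    prev_hash = "0" * 64
    for record in chain:
        payload = json.dumps(
            {"event": record["event"], "prev": record["prev"]},
            sort_keys=True,
        ).encode()
        if record["prev"] != prev_hash:
            return False
        if record["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev_hash = record["hash"]
    return True

chain = []
for step in ["Human Authorizes", "Agent Authenticates",
             "Data Accessed", "Output Delivered"]:
    append_event(chain, step)

print(verify(chain))          # chain intact: True
chain[1]["event"] = "forged"  # fabricate a log entry...
print(verify(chain))          # ...and verification fails: False
```

This is what distinguishes independent evidence from an ordinary log file: an ordinary log can be rewritten after the fact, while a chained record makes fabrication detectable.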
See the full journey →
79%
of organizations deploying agentic AI can't observe what their systems actually did.
They're stuck between two bad options: constrain agents and lose value, or unleash them and accept unmanaged risk.
The Agent Risk-Value Matrix
Risk ↓
FAILED
Risk ↑↑ Value −
Agents unchecked. Nothing to show for it.
WITH Proof-of-Control
Risk ↓↓ Value ↑↑
Agents free + proved.
The only quadrant that works.
CONSTRAIN
Risk ↓↓ Value ↓↓
Safe, but agents can't do their job.
UNLEASH
Risk ↑↑ Value ↑↑
No way to prove what they did.
Risk ↑
Proof-of-Control sits at the intersection of the two fastest-growing categories in AI.
AI Data is the fastest-growing AI spending category at 155% CAGR. Proof-of-Control generates the evidence.
AI Security is the second fastest-growing at 74% CAGR. Proof-of-Control delivers the security.
141%
Increase in enterprise agentic AI spending
50%+
Enterprises deploying AI agents by 2028
#1
Cybersecurity trend: agentic AI security
Source: Gartner Forecast: AI Spending, 4Q25 · Gartner Top Strategic Technology Trends 2026
Founding members

“Since becoming a member, we've had new doors open up, starting with our Davos experience. Michael Casey, who's been eight times, showed me the ropes, got me into events, and made the introductions that mattered. Without him and his cofounder Tricia Wang from the Advanced AI Society, this experience wouldn't have been nearly as valuable.”
CEO, Edge & Node
The Proof-of-Control Initiative
Initiative Co-Chair

“I've spent years building the frameworks the industry relies on to secure AI. Agent Name Service secures agent discovery. MAESTRO models threats. OWASP AIVSS scores risks. CSA's AI Controls Matrix defines expected practices, with 243 controls across 18 domains. Yet when an enterprise asks, ‘can you prove you have implemented a minimum set of deterministic and verifiable controls for an agentic AI system deployment?’, there is no standard to reference.”
“That is the focus of the Proof-of-Control Initiative. It defines how a minimum control set for agentic AI systems can be implemented, measured, and attested using mathematically verifiable evidence. That's why I'm co-chairing it. We're building the missing piece.”
Participants include all of our members, along with leaders from Fortune 100 companies who will be announced soon.
Learn about the Proof-of-Control Initiative →
Our Initiatives for Activating the Market
Buyer enablement
Getting Proof-of-Control into the rooms where decisions are formed. Events, briefings, and vendor-neutral evidence that internal champions need to get through procurement.
Learn more →
The Proof-of-Control Initiative
The first shared standard defining what independent provability means and how to evaluate whether a system delivers it. Co-chaired by Ken Huang with founding enterprise members.
Learn more →
Insurance activation
Producing the three things insurers need to underwrite AI risk: enforceable standards, auditable evidence, and credible eligibility.
Learn more →
Guided by leaders who've moved markets before.

Bettina Warburg
Web3 Investor/Advisor
Foresight for enterprise

Brian Behlendorf
Co-founder, Open Source Initiative
Mozilla and EFF board

Clay Shirky
CTO, New York University
Bestselling author

Sunny Bates
TED brain trust
Serial entrepreneur

Cyrus Hodes
AI Governance operator
Co-founder of Stability AI & AI Safety Connect
This is the moment the category gets defined.
Build Proof-of-Control technology?
Become a founding member →
Deploying AI at enterprise scale?
Join the Proof-of-Control Initiative →
Want to understand the category?
Request a briefing →
Stay connected
Follow us on LinkedIn→