Initiative 2
The Proof-of-Control Initiative
Certification, insurance, and procurement all require the same thing: a shared definition.
Any vendor can claim independent provability. There is no shared definition of what that means, no way to evaluate the claim, and no framework for third-party assessment. Vendors with real technology can't differentiate themselves from those who merely claim the label. Enterprises can't compare solutions. Insurers can't underwrite a property no one has defined.
The Proof-of-Control Initiative is building the first shared standard that defines what independent provability means and how to evaluate whether a system delivers it. Co-chaired by Ken Huang with founding enterprise members.
What is Proof-of-Control and why do all AI systems need it?
Proof-of-Control is a category of AI security that delivers independent, tamper-resistant verification of what AI systems did. Not a log that can be rewritten.
Learn more about Proof-of-Control →

The Proof-of-Control Initiative is an invitation-only convening for leaders at organizations whose architectures, buying authority, or institutional role already shape how AI systems are evaluated.
We convene builders of Proof-of-Control security technologies, frontier AI labs, enterprises deploying AI, and vertical industry partners to formalize and operationalize Proof-of-Control requirements into durable infrastructure.
To express interest, please complete the Founding Member Form.
The verifiability gap in the AI economy
Agentic AI is technically ready to scale. What is missing is the evidentiary infrastructure: Proof-of-Control. AI security companies are already building technologies that make agent actions independently provable, without relying on checklists or operators. But the industry lacks a shared standard that defines the security property precisely enough for enterprises to procure, regulators to mandate, and insurers to price.
Without it, procurement stalls, pilots remain pilots, and capital defaults to Agentic AI systems that are insecure. As Christian Catalini's Simple Economics of AGI argues, as AI drives the cost of execution toward zero, the bottleneck shifts from capability to verified control. Proof-of-Control is that missing standard.
The Proof-of-Control standard
Proof-of-Control is the industry standard for AI agents: an independently provable record of a system's outputs, actions, and decision pathways. The evidence demonstrates whether agents' actions remained within defined, auditable boundaries, established not by vendor attestation or contractual assurance, but by cryptographically tamper-resistant evidence generated at the moment of execution.
The Advanced AI Society is formalizing this property with a definition that is cryptographically sound, practitioner-friendly, and technology-neutral, accompanied by a framework with certification levels and stakeholder guides. Application-specific properties (Proof of Payment, Proof of Compute, Proof of Inference) and vertical working groups follow in a second phase.
The five dimensions of Proof-of-Control
The criteria define whether a system has the Proof-of-Control property; the five dimensions define the domains to which it applies.
Privacy
Independently verifiable evidence of enforceable data boundaries and usage constraints.
Portability
Independently verifiable evidence of continuity and control across vendors and platforms.
Verifiability
Independently verifiable evidence of what actions occurred and how they propagated.
Security
Independently verifiable evidence of system integrity and access control enforcement.
Identity
Independently verifiable evidence of actors, agents, and delegated authority relationships.
What founding members shape
This is not a finished standard looking for endorsements. Founding members shape what the standard measures, how conformance is assessed, and how it maps to the frameworks enterprises already use.
The formal definition of Proof-of-Control and the criteria
How the standard maps to existing frameworks including MAESTRO, AICM, OWASP AIVSS, and AIUC-1
How conformance is assessed consistently across vendors, platforms, and environments
How technical evaluation translates into evidence that executives, boards, and regulators can act on
The sequencing of industry profiles and application-specific guidance
What the standard delivers
Binary
A system has the Proof-of-Control property or it doesn’t.
Deterministic
Evidence at the moment of execution, not reconstructed later.
Auditable
Conformance levels built for third-party assessment from day one.
Technology-neutral
Defines the property, not the mechanism.
Industry-driven
Built by the enterprises that deploy AI and the builders who make Proof-of-Control products.
Interoperable
Maps to MAESTRO, AICM, OWASP AIVSS, and AIUC-1.
Phased roadmap
The standard is built in three phases, each expanding scope while maintaining the universal foundation.
Universal Standard
A shared definition that applies to all implementations. The foundation everything else builds on.
In progress
Application Profiles
Proof of Payment, Proof of Storage, Proof of Compute, Proof of Personhood, and more. Domain-specific extensions of the universal standard.
Not yet started
Industry Working Groups
Sector-specific guidance for healthcare, finance, supply chains, and other regulated industries. Built by the people who deploy AI in those sectors.
Not yet started
Who's at the table
Initiative Co-Chair
Ken Huang
“I've spent years building the frameworks the industry relies on to secure AI. Agent Name Service secures Agent discovery, MAESTRO models threats. OWASP AIVSS scores risks. CSA's AI Controls Matrix defines expected practices, with 243 controls across 18 domains. Yet when an enterprise asks, ‘can you prove you have implemented a minimum set of deterministic and verifiable controls for an agentic AI system deployment?’, there is no standard to reference.”
“That is the focus of the Proof-of-Control Initiative. It defines how a minimum control set for agentic AI systems can be implemented, measured, and attested using mathematically verifiable evidence. That's why I'm co-chairing it. We're building the missing piece.”
Founding members
Founding members cohort 1 will be announced soon.
Founding builder members
The companies building Proof-of-Control technology participate as builder members, contributing technical expertise and real-world implementation experience to the standard development process.
How the work happens
Regular working sessions, draft review cycles, and consensus-based governance. The standard is built in the open, with transparency at every stage.
Frequently asked questions
Who is convening this effort?
Advanced AI Society is the industry association bringing together the companies building Proof-of-Control technologies and the enterprises that need them, across every industry AI touches.
What problem does Proof-of-Control solve?
The Verifiability Gap. Agentic AI systems can act, but enterprises, boards, and regulators currently have no independent way to verify what those systems did, who authorized them, or whether their actions stayed within defined boundaries. Proof-of-Control closes the Verifiability Gap.
Is Proof-of-Control a standard or a framework?
Both. We are defining the standard: a precise, binary, technology-neutral property that AI systems either have or don’t have. The framework will include certification levels, stakeholder guides, and implementation specifications for technical and non-technical adopters.
What does Proof-of-Control actually prove?
Proof-of-Control covers inputs, outputs, tool calls, and delegation events. It does not cover internal AI reasoning or decision-making.
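To make the scope concrete, here is a minimal sketch of what an evidence record covering those four event categories might look like. This is illustrative only: the field names, event labels, and hashing scheme are assumptions, not drawn from the standard.

```python
from dataclasses import dataclass, field
from typing import Literal
import hashlib
import json
import time

# Hypothetical schema: the four covered event categories from the FAQ.
EventKind = Literal["input", "output", "tool_call", "delegation"]

@dataclass
class EvidenceRecord:
    kind: EventKind    # which covered event category this record captures
    agent_id: str      # acting agent (ties into the Identity dimension)
    payload: dict      # event-specific data, e.g. tool name and arguments
    timestamp: float = field(default_factory=time.time)

    def digest(self) -> str:
        """Content hash over a canonical serialization of the record."""
        body = json.dumps(
            {"kind": self.kind, "agent_id": self.agent_id,
             "payload": self.payload, "timestamp": self.timestamp},
            sort_keys=True,
        )
        return hashlib.sha256(body.encode()).hexdigest()

rec = EvidenceRecord(
    kind="tool_call",
    agent_id="agent-42",
    payload={"tool": "send_email", "args": {"to": "ops@example.com"}},
)
print(rec.digest())  # 64-hex-character content hash of the record
```

Note what is absent: no field attempts to capture the model's internal reasoning, matching the scope boundary above.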
How is this different from SOC 2?
SOC 2 certifies that controls exist and have been tested. Proof-of-Control certifies that independently verifiable evidence of those controls is generated at the moment of execution, not reconstructed afterward.
Is this blockchain-specific?
No. Proof-of-Control is technology-agnostic. Cryptographic proof can be satisfied through hardware attestation, cryptographic logging, certificate transparency logs, governed ledgers, or any combination. No specific technology is required or excluded. What the standard requires is the property of independent verifiability, not the mechanism that achieves it.
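As one illustration of the "cryptographic logging" option, a minimal hash-chained log makes any rewrite of an earlier entry detectable. This is a toy sketch of the general technique, not an implementation of the standard; record contents and names are invented for the example.

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    """Hash the previous link together with the canonical record bytes."""
    payload = prev_hash.encode() + json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

class HashChainLog:
    """Append-only log: each entry commits to the entry before it."""

    def __init__(self):
        self.entries = []     # list of (record, stored_hash) pairs
        self.head = "0" * 64  # genesis link

    def append(self, record: dict) -> str:
        self.head = entry_hash(self.head, record)
        self.entries.append((record, self.head))
        return self.head

    def verify(self) -> bool:
        """Recompute the chain; a rewritten record breaks every later link."""
        h = "0" * 64
        for record, stored in self.entries:
            h = entry_hash(h, record)
            if h != stored:
                return False
        return True

log = HashChainLog()
log.append({"event": "tool_call", "tool": "send_invoice", "agent": "billing-agent"})
log.append({"event": "delegation", "from": "billing-agent", "to": "payments-agent"})
assert log.verify()

# Rewriting history silently is impossible: the stored hashes no longer match.
log.entries[0] = ({"event": "tool_call", "tool": "delete_invoice",
                   "agent": "billing-agent"}, log.entries[0][1])
assert not log.verify()
```

Hardware attestation, certificate transparency logs, and governed ledgers provide the same independent-verifiability property through different mechanisms, which is exactly why the standard defines the property rather than mandating any one of them.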
Will Proof-of-Control become a certification?
Yes, as a downstream outcome. Certification requires a definition precise enough that a third party can assess it. The Initiative plans to maintain a conformance list, modeled on how the Open Source Initiative maintains its list of approved open-source licenses.
What comes after the standard is defined?
Three stages follow. First, a certification framework with conformance levels and stakeholder-specific guides for implementors, auditors, and regulators. Then application profiles (e.g. Proof of Payment, Proof of Compute, Proof of Storage, Proof of Inference). Finally, industry working groups will convene to develop sector-specific guidance.
Shape the standard before it shapes the market.
The Initiative is actively convening founding members. If you're deploying AI at scale or building Proof-of-Control technology, there's a seat at the table.