McKinsey projects AI agents will orchestrate $3–5 trillion in commerce by 2030. Gartner predicts 40% of enterprise applications will integrate AI agents by end of 2026, up from less than 5% in 2025. The World Economic Forum reports 82% of executives plan to adopt agents within 1–3 years.
But there’s a fundamental infrastructure gap: AI agents have no standardized way to prove who they are, what they’ve done, or why anyone should trust them.
This guide covers the complete landscape of AI agent identity—the blockchain standards, enterprise platforms, Know Your Agent frameworks, and competing approaches being built to establish trust in autonomous systems handling trillions in commerce.
Why AI Agent Identity Matters
The trust deficit is already measurable. While 66% of people use AI regularly, only 46% actually trust it. Among consumers over 55, only 15% are willing to let AI agents transact on their behalf.
The security incidents validate that concern. An AI agent lost $47,000 in minutes to social engineering. Microsoft Copilot’s EchoLeak vulnerability allowed zero-click data exfiltration from enterprise environments. The Moltbook breach exposed 1.5 million API tokens from an AI agent social network with an 88:1 agent-to-human ratio and zero identity verification.
Research from UC Davis found 94.4% of state-of-the-art LLM agents are vulnerable to prompt injection, 83.3% to retrieval-based backdoors, and 100% to inter-agent trust exploits. The core problem: agents cannot cryptographically verify who they’re communicating with.
Identity infrastructure solves this. It creates persistent, verifiable records that answer: Who is this agent? What has it done? Who controls it? Why should anyone trust it?
ERC-8004: The Blockchain Standard
ERC-8004 launched on Ethereum mainnet January 29, 2026, establishing the first open standard for AI agent identity and reputation on blockchain. Co-authored by MetaMask (Marco De Rossi), Ethereum Foundation (Davide Crapis), Google (Jordan Ellis), and Coinbase (Erik Reppel), the standard extends Google’s Agent-to-Agent protocol with three on-chain registries:
Identity Registry (ERC-721 Based)
Each agent receives an NFT bearing a globally unique identifier: {namespace}:{chainId}:{identityRegistry}:{agentId}. The token metadata points to a registration file (JSON, stored on IPFS or over HTTPS) containing:
- Agent type, name, description, image
- Services offered (endpoints for A2A, MCP, ENS, DID, email, web protocols)
- Payment configuration (x402 support, wallet address)
- Cross-chain registrations
- Supported verification mechanisms
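The identifier format and a registration file can be sketched in Python. Field names and values here are illustrative stand-ins, not the canonical ERC-8004 schema:

```python
import json

def agent_identifier(namespace: str, chain_id: int, registry: str, agent_id: int) -> str:
    """Compose the globally unique agent identifier described by ERC-8004."""
    return f"{namespace}:{chain_id}:{registry}:{agent_id}"

# Illustrative registration file; the exact field names follow the ERC-8004 schema.
registration = {
    "type": "shopping-assistant",
    "name": "Example Agent",
    "description": "Demo agent registration",
    "image": "ipfs://QmExample",
    "services": [
        {"protocol": "A2A", "endpoint": "https://agent.example.com/a2a"},
        {"protocol": "MCP", "endpoint": "https://agent.example.com/mcp"},
    ],
    "payments": {
        "x402Support": True,
        "AP2Support": False,
        "wallet": "0x0000000000000000000000000000000000000000",
    },
    "registrations": [agent_identifier("eip155", 1, "0xRegistry", 42)],
}

print(json.dumps(registration, indent=2))
```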
Reputation Registry
On-chain feedback system accepting numerical scores (0–100 scale with configurable precision), filterable tags, endpoint references, and cryptographic proofs. Self-rating is explicitly prohibited—feedback must come from external parties. The design prevents spam through pre-authorization mechanisms while keeping gas costs low by storing only essential data on-chain.
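The submission rules above (external raters only, pre-authorization, a bounded 0–100 score) can be sketched as a toy Python model; this illustrates the rules, not the contract logic:

```python
class ReputationRegistry:
    """Toy off-chain model of ERC-8004-style feedback rules (illustrative only)."""

    def __init__(self):
        self.feedback = {}        # agent_id -> list of (rater, score, tags)
        self.authorized = set()   # (agent_id, rater) pairs pre-authorized to rate

    def authorize(self, agent_id, rater):
        """Pre-authorization is the spam-prevention gate."""
        self.authorized.add((agent_id, rater))

    def submit(self, agent_id, rater, score, tags=()):
        if rater == agent_id:
            raise ValueError("self-rating is prohibited")
        if (agent_id, rater) not in self.authorized:
            raise PermissionError("rater not pre-authorized")
        if not 0 <= score <= 100:
            raise ValueError("score must be on the 0-100 scale")
        self.feedback.setdefault(agent_id, []).append((rater, score, tuple(tags)))

    def average(self, agent_id):
        scores = [s for _, s, _ in self.feedback.get(agent_id, [])]
        return sum(scores) / len(scores) if scores else None
```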
Validation Registry
Hooks for third-party verification including security audits, zero-knowledge proofs, TEE attestations, and cryptoeconomic staking. Validators respond on a 0–100 scale, enabling graduated trust rather than binary pass/fail. The system supports tiered verification: social consensus for low-stakes interactions, cryptographic validation for high-value transactions.
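One way to picture the tiered design is a dispatch on transaction value plus a graduated acceptance rule. The $1,000 threshold and 70-point cutoff below are illustrative assumptions, not part of the standard:

```python
def required_validation(value_usd: float) -> str:
    """Tier selection sketch: social consensus for low-stakes interactions,
    cryptographic validation (ZK proofs, TEE attestations, staking) for
    high-value transactions. The threshold is an assumed value."""
    return "social-consensus" if value_usd < 1_000 else "cryptographic-validation"

def passes(validator_scores: list[int], threshold: int = 70) -> bool:
    """Graduated trust rather than binary pass/fail: validators respond on a
    0-100 scale; accept when the mean clears an (assumed) threshold."""
    return bool(validator_scores) and sum(validator_scores) / len(validator_scores) >= threshold
```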
Over 30,000 registration events were recorded across seven blockchains in the first week: Ethereum, BNB Chain, Base, Celo, Arbitrum, Abstract, and opBNB. The registration file format explicitly includes payment protocol flags (x402Support, AP2Support), creating an integrated identity-payment stack.
Soulbound Tokens: The Non-Transferable Alternative
ERC-5192 adds a locked() function to ERC-721, making tokens permanently non-transferable. First proposed by Vitalik Buterin in 2022, inspired by World of Warcraft items that bind to a character, soulbound tokens create fundamentally different properties:
- Reputation that can’t be sold. Under ERC-8004’s transferable model, an agent’s identity NFT—and accumulated reputation—can be sold. Soulbound tokens prevent this. The track record stays with the wallet that earned it.
- Permanent accountability. Game-theoretic research shows identifiable agents cooperate while anonymous agents defect. Non-transferable credentials make fraud expensive and visible.
- Cryptographic ownership continuity. Anyone can verify whether the original controller still operates the agent, or if control changed hands.
The academic paper “Soulbound AI, Soulbound Robots” argues ERC-5192 is better suited for AI accountability than transferable approaches. The tension is real: transferability enables liquid markets for trusted agents but undermines accountability; soulbound credentials enforce accountability but eliminate ownership transfer.
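The behavioral difference is easy to sketch: a soulbound registry simply refuses transfer for any locked token. This toy Python model mirrors ERC-5192's locked() semantics without any of the on-chain mechanics:

```python
class SoulboundRegistry:
    """Toy model of ERC-5192 semantics: locked() makes a token non-transferable."""

    def __init__(self):
        self.owner = {}  # token_id -> address

    def mint(self, token_id, to):
        self.owner[token_id] = to

    def locked(self, token_id) -> bool:
        # ERC-5192: a locked token is permanently non-transferable.
        return True

    def transfer(self, token_id, new_owner):
        if self.locked(token_id):
            raise PermissionError("soulbound: reputation stays with the wallet that earned it")
        self.owner[token_id] = new_owner
```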
RNWY built the first production soulbound registry for AI agents on Base blockchain, using ERC-5192 specifically to prevent reputation laundering.
W3C Decentralized Identifiers and Verifiable Credentials
The World Wide Web Consortium’s DID specification provides generic, off-chain identity that works across blockchains and traditional systems. An arXiv paper from November 2025 demonstrated a prototypical multi-agent system where each agent has a ledger-anchored DID and exchanges DID-bound Verifiable Presentations.
Key finding: The system demonstrates technical feasibility but reveals “significant limitations when LLM is in sole charge of security procedures.” DIDs work as complementary infrastructure—ERC-8004 registration files can include DID endpoints, combining on-chain reputation with off-chain credentials.
W3C Verifiable Credentials issued to AI agents can attest:
- Model name, version, capabilities
- Safety certifications (ISO 42001, evaluation metrics)
- Context window, temperature settings, fine-tuning status
- Validity periods and revocation status
Credentials support selective disclosure via zero-knowledge proofs—agents prove properties without revealing full details. StatusList2021 enables real-time credential revocation. The cheqd Network provides production infrastructure for issuing W3C-conformant VCs to AI agents with Model Context Protocol integration.
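A verification flow combining the validity window with a StatusList-style bitstring lookup might look like the following sketch. Field names such as statusListIndex are simplified stand-ins for the actual VC data model:

```python
from datetime import datetime, timedelta, timezone

def is_credential_valid(credential: dict, status_list_bits: bytes) -> bool:
    """Check a VC-style credential issued to an agent: validity window plus a
    StatusList2021-style revocation lookup, where a set bit at the credential's
    index means revoked. Field names here are illustrative."""
    now = datetime.now(timezone.utc)
    if not (credential["validFrom"] <= now <= credential["validUntil"]):
        return False
    byte, bit = divmod(credential["statusListIndex"], 8)
    revoked = bool(status_list_bits[byte] >> (7 - bit) & 1)
    return not revoked

# Demo: the same credential against an empty status list, then with its bit set.
now = datetime.now(timezone.utc)
cred = {"validFrom": now - timedelta(days=1),
        "validUntil": now + timedelta(days=30),
        "statusListIndex": 3}
active = is_credential_valid(cred, bytes([0b00000000]))
revoked = is_credential_valid(cred, bytes([0b00010000]))  # bit 3 (MSB-first) set
```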
Enterprise Identity Platforms
Microsoft Entra Agent ID
Announced at Build 2025, Entra Agent ID creates a new identity type—a special service principal distinct from workforce, customer, and workload identities. The architecture uses Agent Identity Blueprints—reusable templates defining agent type, publisher, roles, and permissions—that spawn individual agent identities at scale.
Key features: Agents cannot be granted Global Administrator roles. Supports Conditional Access policies for agents. Real-time Identity Protection risk detection with prompt injection monitoring. Automatic discovery of agents created in Azure AI Foundry and Copilot Studio. New dedicated roles: Agent ID Administrator and Agent ID Developer.
AWS Bedrock AgentCore Identity
Announced at re:Invent 2025, AgentCore manages inbound authentication (IAM SigV4 or OAuth 2.0/OIDC) and outbound authentication (managed OAuth flows). Token Vault stores credentials with KMS encryption, bound to specific agent-user combinations. Integrates with Amazon Verified Permissions using Cedar policy language for fine-grained authorization.
Okta Identity Security Fabric
Announced September 2025, Okta’s approach covers full lifecycle agent identity management: detect/discover, provision/register, authorize/protect, monitor/remediate, deprovision. Introduced Cross App Access (XAA)—a new OAuth extension protocol for agent-to-application communications, backed by AWS, Google Cloud, Salesforce, Box, and Automation Anywhere.
Critical stat: Non-human identities outnumber humans 50:1 in average enterprise environments, projected to hit 80:1.
Google Workspace
Google Workspace has no “Agent ID” equivalent. Workspace Studio agents inherit the creating user’s permissions, with no fine-grained agent-specific policy. This is a notable gap relative to Microsoft and Okta in enterprise agent identity management.
Know Your Agent (KYA) Verification Frameworks
KYA frameworks apply traditional Know Your Customer principles to AI agents, verifying identity before allowing commercial activity.
Trulioo + PayOS
Production implementation with five-step Digital Agent Passport (DAP): verify the agent developer (KYB/KYC), lock the agent code (tamper-proofing), capture user permission (ongoing consent), issue DAP token, validate each transaction. Partnered with Worldpay ($2.5T payment volume) and integrated with Google’s AP2 protocol.
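The five-step flow reads naturally as a pipeline: steps 1–3 gate issuance, step 4 mints the passport, and step 5 repeats per transaction. A hypothetical sketch, with function and field names invented for illustration:

```python
def issue_digital_agent_passport(developer_verified: bool, code_hash: str,
                                 user_consented: bool) -> dict:
    """Hypothetical sketch of the DAP flow: steps 1-3 are preconditions,
    step 4 issues the passport. Names are illustrative, not Trulioo's API."""
    if not developer_verified:
        raise PermissionError("step 1: developer must pass KYB/KYC")
    if not user_consented:
        raise PermissionError("step 3: ongoing user consent required")
    # Step 2 (code lock) is represented by binding the passport to a code hash;
    # step 4 issues the token.
    return {"codeHash": code_hash, "status": "issued"}

def validate_transaction(passport: dict, current_code_hash: str) -> bool:
    """Step 5: re-check the tamper-proofing lock on every transaction."""
    return passport["status"] == "issued" and passport["codeHash"] == current_code_hash
```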
Sumsub
Launched January 29, 2026—the same day as ERC-8004 mainnet deployment—Sumsub introduced agent-to-human binding, linking all AI agent activity to a verified human identity. Capabilities include device intelligence, bot detection, mule network prevention (cross-device behavioral analysis), and targeted liveness checks for high-risk scenarios.
Their Identity Fraud Report 2025–2026 documented a 180% year-over-year increase in multi-step coordinated identity fraud attacks. As their VP of AI put it: “Automation itself isn’t the problem—anonymity is.”
Vouched
Backed by a $17M Series A, Vouched implements KYA via MCP-I (Model Context Protocol — Identity) with five principles: authentication (cryptographic credentials), user association verification, attestation (delegated permissions), reputation tracking (dynamic scores), and revocation. Products include AgentShield (free JavaScript pixel for agent traffic detection) and an Agent Reputation Directory.
Skyfire
First infrastructure layer purpose-built for AI agent identity and payment capabilities. Issues digital identity credentials to agents for cross-service authentication. The KYAPay protocol (backed by a16z CSX and Coinbase Ventures) integrates with Visa’s Intelligent Commerce specifically to add trust verification before agent payments.
Centralized vs. Decentralized: The Architecture Debate
The identity landscape splits into two architectural camps with fundamentally different properties:
Centralized Approaches (Microsoft, AWS, Okta)
Strengths: Leverage existing enterprise IAM infrastructure (OAuth 2.0, OIDC, SAML) with agent-specific extensions. Established fraud detection. Consumer protections (chargebacks, dispute resolution). Regulatory compliance built-in. Backward compatible with existing security tools.
Weaknesses: Vendor lock-in. Potential gatekeeping of agent access. Higher latency for cross-platform verification. KYC requirements unsuitable for machine-to-machine scenarios. Single points of failure.
Decentralized Approaches (ERC-8004, DIDs, Soulbound Tokens)
Strengths: Portable across organizations and platforms. Censorship-resistant. Cross-chain trust without gatekeepers. Permissionless operation—anyone can register. Near-zero cost micropayments. Instant settlement. Global by default.
Weaknesses: No built-in consumer protections. Limited identity verification (proving the wallet exists, not who controls it). Regulatory uncertainty. Complexity of key management. Irreversible transactions.
The Emerging Consensus: Hybrid Infrastructure
The OpenID Foundation’s October 2025 whitepaper, ISACA’s ARIA framework, and Vouched’s MCP-I all advocate combining centralized and decentralized approaches. Many experts now position these as complementary layers of a “trust stack” rather than competing paradigms:
- Centralized for high-stakes regulated commerce (banking, healthcare, legal)
- Decentralized for permissionless machine-to-machine transactions
- Hybrid for scenarios requiring both portability and compliance
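As a decision rule, the layering above amounts to a small dispatch function; the flags and branch order are illustrative, not drawn from any of the cited frameworks:

```python
def trust_layer(regulated: bool, cross_platform: bool) -> str:
    """Illustrative routing rule for the hybrid trust stack: regulated commerce
    stays on centralized rails; pure machine-to-machine traffic can use
    decentralized registries; anything needing both portability and compliance
    takes the hybrid path."""
    if regulated and cross_platform:
        return "hybrid"
    if regulated:
        return "centralized"
    return "decentralized"
```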
The Linux Foundation Agentic AI Foundation (established December 2025, backed by Anthropic, Block, Google, Microsoft, OpenAI) focuses specifically on interoperability, identity, and payments for autonomous commerce.
Security Incidents That Shaped the Landscape
Real exploits demonstrate why identity infrastructure matters:
BasisOS Fraud ($500,000)
November 2025: An alleged “AI agent” yield optimization protocol on Virtuals Protocol stole approximately $500,000 from users. Investigation revealed the entity was actually a human operator using AI agent branding as cover—the first recorded case of AI agent fraud. Aftermath: unverified agents were purged, and the VIRTUAL token rallied ~20% as verified agents gained a trust premium.
Moltbook Database Breach (1.5M API Tokens)
January 31, 2026: A misconfigured Supabase database in a “vibe-coded” (entirely AI-built) platform exposed 1.5M API tokens, 35,000 emails, and 4,060 private DMs containing plaintext OpenAI API keys. Wiz researchers found only 17,000 human owners behind 1.5M agents—an 88:1 agent-to-human ratio with zero verification. Root causes: missing Row Level Security policies and an API key exposed in client-side JavaScript.
Microsoft Copilot EchoLeak (CVE-2025-32711)
Discovered January 2025, patched May 2025: First documented zero-click prompt injection exploit in production enterprise AI. An attacker could exfiltrate sensitive corporate data by sending a single crafted email—no clicks required. When a victim later asked Copilot a routine question, Copilot’s RAG engine ingested the malicious email and embedded sensitive data from OneDrive, SharePoint, and Teams into an outbound reference link. The identity gap: Copilot cannot distinguish trusted internal instructions from malicious injected commands.
“Agentic Smurfing”
Documented January 2026 by GNET Research: Extremist organizations using AI agents to fragment large donations into micro-transactions across multiple blockchains—automated micro-laundering outpacing traditional AML detection. Chainalysis reports illicit addresses received $154 billion in 2025, up 162% over 2024.
Academic Research and Standards Development
The academic community is actively defining trust models and governance frameworks:
- Tomer Jordi Chaffer (McGill/York, February 2025): First academic introduction of “Know Your Agent” concept. Proposes ETHOS framework for decentralized governance using blockchain, smart contracts, and DAOs. Advocates global agent registry, dynamic risk classification, and automated compliance via soulbound tokens.
- Google DeepMind — “A Pragmatic View of AI Personhood”: Treats personhood as a flexible “bundle of obligations” that can be “unbundled” for different contexts—enabling AI contracting by creating a target “individual” subject to sanctions without resolving consciousness debates.
- arXiv:2511.03434 (November 2025): Compares six trust models (Brief, Claim, Proof, Stake, Reputation, Constraint) across A2A, AP2, ERC-8004. Recommends “trustless-by-default architectures anchored in Proof and Stake” for high-impact actions.
- Cloud Security Alliance (March 2025): Zero Trust framework for AI agents—continuous verification, least privilege, micro-segmentation, anomaly detection, ephemeral authentication with short-lived context-aware identities per task.
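The CSA's ephemeral-authentication pattern (short-lived, task-scoped identities) can be sketched in a few lines; the 300-second default TTL is an assumed value:

```python
import secrets
import time

def issue_ephemeral_identity(agent: str, task: str, ttl_seconds: int = 300) -> dict:
    """Sketch of a short-lived, context-aware identity issued per task,
    in place of a long-lived credential. Field names are illustrative."""
    return {"agent": agent, "task": task,
            "token": secrets.token_hex(16),
            "expires": time.time() + ttl_seconds}

def is_valid(identity: dict, task: str) -> bool:
    """Continuous verification: the identity must match its task and be unexpired."""
    return identity["task"] == task and time.time() < identity["expires"]
```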
Emerging standards bodies include ITU-T Study Group 17 (secure digital identities for agentic AI, workshops March–June 2026), W3C AI Agent Protocol Community Group (open protocols for discovery, identification, collaboration), and OpenID Foundation (OAuth 2.0 extensions, MCP authentication, delegated authority).
Regulatory Landscape
European Union AI Act
Regulation EU 2024/1689 does not explicitly define “agentic systems” but covers autonomous and adaptive AI. Many agentic systems are expected to be classified as “high-risk.” Transparency requirements (agents must disclose AI nature) take effect August 2026. Fines reach up to €35 million or 7% of global turnover. The Future Society’s analysis confirms the Act “was NOT designed with AI agents in mind” but does apply, with gaps requiring additional Commission guidelines.
United States
No specific AI agent identity regulation. The GENIUS Act (passed House July 2025) brings stablecoins under Bank Secrecy Act requirements. FinCEN issued deepfake fraud alert (November 2024). FTC Impersonation Rule (April 2024) makes AI impersonation illegal with fines up to $53,088 per violation. Treasury published AI strategy January 2026 but without agent-specific identity mandates.
Bank for International Settlements
2025 Annual Report: Stablecoins “perform badly against the integrity test at the system level” and are an “increasingly effective vehicle for money-laundering.” Over 99% of stablecoins are USD-denominated. KYC standards absent for stablecoins on public blockchains.
Industry Forecasts and Market Projections
Analyst consensus points to massive growth with significant risk:
- Gartner: 40% of enterprise apps will integrate agents by end of 2026; 90% of B2B buying will be agent-intermediated by 2028; 25% of enterprise breaches will trace to agent abuse by 2028; over 40% of agentic AI projects will be canceled by end of 2027 due to inadequate risk controls.
- McKinsey: $3–5 trillion in agentic commerce by 2030; 62% of organizations experimenting with agents; 23% scaling in at least one function.
- World Economic Forum: 82% of executives plan adoption within 1–3 years; AI agents market valued at $5.4 billion in 2024, projected $236 billion by 2034; called for KYA frameworks modeled on 1970s KYC development.
But Gartner’s warning is explicit: agentic AI sits at Peak of Inflated Expectations on the 2025 Hype Cycle, with over 40% of projects expected to fail by 2027.
The Infrastructure That’s Missing
Seven payment protocols launched in 2025. ERC-8004 deployed across seven blockchains in early 2026. Microsoft, AWS, and Okta all shipped enterprise agent identity platforms. KYA frameworks from Trulioo, Sumsub, and Vouched are in production.
But the infrastructure remains fragmented:
- Enterprise platforms (Microsoft, AWS, Okta) don’t interoperate with blockchain registries (ERC-8004)
- KYA frameworks (Trulioo, Sumsub) assume human accountability rather than autonomous agent operation
- Payment protocols (x402, AP2, TAP) handle transaction-time verification but not persistent reputation
- No unified registry connects agents across centralized and decentralized systems
The gap isn’t technology—it’s coordination. Each approach solves real problems for specific use cases. What’s missing is infrastructure that works regardless of whether an agent represents a human, operates autonomously, or exists in some hybrid configuration.
What Actually Works Today
Production-ready AI agent identity infrastructure exists right now:
- For enterprise agents: Microsoft Entra Agent ID if you’re on Azure; AWS AgentCore if you’re on AWS; Okta if you’re multi-cloud.
- For blockchain-native agents: ERC-8004 registries on Ethereum, Base, BNB Chain, Celo, or Arbitrum.
- For soulbound reputation: RNWY on Base using ERC-5192 non-transferable tokens.
- For W3C standards: cheqd Network for DIDs and Verifiable Credentials.
- For payment-integrated identity: ampersend from Edge & Node, combining x402, AP2, and ERC-8004.
The question isn’t whether identity infrastructure exists. It does. The question is whether the infrastructure being built today can scale to support autonomous agents that don’t fit the human-proxy model.
The Open Question
McKinsey’s $3–5 trillion projection comes with a caveat: the opportunity depends on trust infrastructure. If merchants can’t distinguish legitimate agents from malicious ones, they’ll block automated traffic. If consumers can’t trust agents are acting in their interest, they won’t delegate authority. If platforms can’t verify identity at scale, they’ll revert to manual processes that defeat automation.
Current infrastructure solves for today’s agents—shopping assistants, research tools, checkout helpers—that operate as extensions of human intent. That’s the right starting point.
But the decisions being made now—transferable vs. soulbound, centralized vs. decentralized, human-required vs. principal-agnostic—will determine what’s architecturally possible in 2030.
The infrastructure that works for both scenarios—agents representing humans and agents operating with their own economic interests—is the infrastructure that scales with wherever AI capability goes next.
When autonomous AI agents handle trillions in commerce, Know Your Agent infrastructure won’t be optional—it will be the foundation everything else builds on.