The US Government Wants to Know How to Verify AI Agents. You Have 24 Days to Respond.

On January 8, 2026, the National Institute of Standards and Technology published a Request for Information in the Federal Register titled “Security Considerations for Artificial Intelligence Agents.” The document asks the public a deceptively simple question: how should autonomous AI agents be identified, authorized, and secured?

The comment deadline is March 9, 2026, at 11:59 p.m. ET. As of mid-February, only 54 comments have been submitted. For context, a routine FDA food labeling rule can attract thousands. The document that may shape how every AI agent in the United States proves its identity has generated less public input than a proposal about nutrition labels.

That should concern anyone building, deploying, or transacting with AI agents.

What NIST Is Actually Asking

The RFI comes from NIST’s Center for AI Standards and Innovation (CAISI), the rebranded successor to the U.S. AI Safety Institute. The contact is Peter Cihon, Senior Advisor at CAISI. The RFI covers five categories of questions, and NIST has flagged priority questions for respondents who don’t have time to address all of them.

Category 1: Security threats and risks. What unique threats affect AI agent systems compared to traditional software? How do risks change based on model capability, deployment method, and hosting environment? How will these threats evolve?

Category 2: Security practices. What technical controls work? NIST breaks these into model-level controls (prompt injection robustness), system-level controls (data and tool restrictions), and human oversight (approval workflows before consequential actions; a pattern sketched in code after this list). Which cybersecurity frameworks apply? What’s preventing adoption?

Category 3: Assessing security. How do you anticipate threats during development? How do you detect incidents after deployment? Should disclosure practices differ between open-source and closed-source models?

Category 4: Deployment environments. This is where identity lives. How should agent access be constrained? How should agents be monitored in production? How do you manage risks from agent interactions with humans, web services, IoT systems, authentication mechanisms, and other AI agents?

Category 5: Additional considerations. Where is government collaboration most urgent? What research should be prioritized?
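To make Category 2’s human-oversight control concrete, here is a minimal sketch of an approval gate that pauses “consequential” agent actions for a human reviewer. The action names, threshold set, and reviewer callback are assumptions for illustration, not anything NIST prescribes.

```python
# Minimal sketch of a Category 2 human-oversight control: a gate that routes
# consequential agent actions to a blocking human approval step.
# Action names and the approve() callback are illustrative assumptions.

CONSEQUENTIAL_ACTIONS = {"transfer_funds", "delete_records", "send_external_email"}

def execute_with_oversight(action: str, params: dict, approve) -> str:
    """Run low-risk actions directly; route consequential ones to a human."""
    if action in CONSEQUENTIAL_ACTIONS:
        if not approve(action, params):          # blocking human approval step
            return f"{action}: denied by human reviewer"
    return f"{action}: executed with {params}"

if __name__ == "__main__":
    # A stand-in reviewer that rejects anything touching money.
    reviewer = lambda action, params: action != "transfer_funds"
    print(execute_with_oversight("summarize_inbox", {}, reviewer))
    print(execute_with_oversight("transfer_funds", {"amount": 500}, reviewer))
```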

The RFI defines AI agent systems as containing “at least one generative AI model and scaffolding software that equips the model with tools to take a range of discretionary actions.” It explicitly excludes chatbots, basic RAG systems, and misuse of AI for cyberattacks (unless it affects agent system security). It references seven prior NIST publications including the AI Risk Management Framework and SP 800-53 Rev. 5.

Comments must be submitted through regulations.gov under docket number NIST-2025-0035. No postal mail, no fax, no email.

The Second Document Nobody’s Talking About

A month after the RFI, NIST’s National Cybersecurity Center of Excellence (NCCoE) quietly released a companion document: “Accelerating the Adoption of Software and Artificial Intelligence Agent Identity and Authorization.”

This concept paper, published February 5, 2026, is more specific than the RFI. It proposes a demonstration project using commercially available technologies across four areas: identification (distinguishing AI agents from humans, managing metadata), authorization (OAuth 2.0, policy-based access control), access delegation (linking user identities to agents for accountability), and logging and transparency.

The paper references OAuth 2.0/2.1, OpenID Connect, SPIFFE/SPIRE, SCIM, NIST SP 800-207 (Zero Trust Architecture), SP 800-63-4 (Digital Identity Guidelines), and Model Context Protocol (MCP). The project lead is Ryan Galluzzo, identity program lead in NIST’s Applied Cybersecurity Division.
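To ground the paper’s first three scope areas, here is a toy sketch of identification, authorization, and access delegation in one place: a SPIFFE-style identifier bound to an accountable human user and checked against policy. The registry layout and scope names are invented for illustration; the concept paper does not specify this design.

```python
# Toy sketch: a SPIFFE-style agent identifier (identification), a delegator
# field linking the agent to a human (access delegation), and a scope check
# against policy (authorization). All names below are assumptions.

from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    spiffe_id: str                      # e.g. "spiffe://example.org/agent/travel-bot"
    delegator: str                      # the accountable human user
    scopes: set[str] = field(default_factory=set)

POLICY = {"calendar:read", "flights:search"}   # what this class of agent may do

def authorize(agent: AgentIdentity, requested_scope: str) -> bool:
    """Grant only scopes the agent holds AND the policy allows."""
    return requested_scope in agent.scopes and requested_scope in POLICY

bot = AgentIdentity("spiffe://example.org/agent/travel-bot", "alice@example.org",
                    {"flights:search", "flights:book"})
print(authorize(bot, "flights:search"))   # True
print(authorize(bot, "flights:book"))     # False: held by the agent, not in policy
```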

Comments on this paper are due April 2, 2026, submitted to AI-Identity@nist.gov. If the RFI addresses agent security broadly, this paper addresses the identity question head-on: how do you know which agent is acting, on whose authority, and with what permissions?

As Hogan Lovells noted in their client guidance, AI developers and deployers should consider submitting comments to both.

The Regulatory Landscape Beyond NIST

The NIST documents don’t exist in a vacuum. Governments worldwide are grappling with the same problem, and the responses vary dramatically.

Singapore published the world’s first governance framework for agentic AI on January 22, announced by Minister Josephine Teo at Davos. The IMDA framework explicitly requires organizations to implement agent identity management: assigning unique identities to each agent, linking agents to accountable human supervisors, and ensuring agent permissions never exceed those of the human user. It’s voluntary, but it’s the most comprehensive government guidance on agentic AI that exists anywhere. Computer Weekly and Hogan Lovells both covered it in detail.
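That last requirement reduces to a subset check at grant time. A minimal sketch, with hypothetical permission names:

```python
# IMDA's rule that an agent's permissions never exceed its human supervisor's,
# expressed as a subset check at grant time. Permission names are hypothetical.

def grant_agent_permissions(requested: set[str], supervisor: set[str]) -> set[str]:
    """Clamp an agent's permissions to its accountable supervisor's."""
    excess = requested - supervisor
    if excess:
        raise PermissionError(f"agent requested permissions beyond supervisor's: {excess}")
    return requested

print(grant_agent_permissions({"read:crm"}, {"read:crm", "write:crm"}))
```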

The EU AI Act’s Article 50 transparency requirements take effect August 2, 2026. A GovAI policy brief by Alan Chan argues that web requests and browser actions from AI agents likely require labeling as AI-generated under Article 50(2), with verifiable digital signatures. The Future Society published the first comprehensive analysis of AI agents under the EU AI Act, identifying significant gaps that require additional guidelines.
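One plausible shape for such labeling: attach a machine-verifiable header to each agent request. The header names and shared-key HMAC below are illustrative assumptions; a production design would more likely use asymmetric signatures tied to a published key.

```python
# Illustrative sketch of a verifiable "AI-generated" label on an agent's web
# request, in the spirit of Chan's Article 50(2) argument. The header names
# and pre-shared HMAC key are assumptions, not a standard.

import hmac, hashlib, base64

AGENT_KEY = b"demo-shared-secret"   # assumption: key pre-registered with verifiers

def label_request(method: str, url: str) -> dict:
    payload = f"{method} {url} ai-agent=true".encode()
    tag = base64.b64encode(hmac.new(AGENT_KEY, payload, hashlib.sha256).digest())
    return {"X-AI-Agent": "true", "X-AI-Agent-Signature": tag.decode()}

print(label_request("GET", "https://example.com/prices"))
```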

The UK AI Safety Institute (now AI Security Institute) ran a landmark international joint testing exercise in July 2025 with nine partner countries. The key finding: pass rates on agent safety tasks were significantly lower than on conversational tasks, with the best results reaching roughly 57% compared to 99% in earlier conversational tests. AISI also developed RepliBench, tracking AI self-replication success rates, which rose from 5% to 60% between 2023 and 2025.

The ITU is establishing a new working group (Question 16/17) specifically on AI security and digital identity for agentic AI. Chair Arnaud Taddei compared the scope to building the OSI model four decades ago. Workshops are scheduled for March 30-31 and June 3, 2026.

In the US, the regulatory posture has been deregulatory. The Trump administration revoked Biden-era AI safety requirements via Executive Order 14179. The FTC vacated its Rytr consent order in December 2025, citing a directive to review whether prior orders “unduly burden AI innovation.” No federal legislation specifically addresses AI agent identity. The NIST RFI and NCCoE concept paper are, functionally, the most significant US government actions on this topic.

What the Private Sector Built While Regulators Were Listening

The gap between government pace and industry pace is striking. While NIST collects comments, the private sector has shipped production infrastructure.

Visa’s Trusted Agent Protocol (TAP), launched October 2025 with Cloudflare, uses agent-specific cryptographic signatures built on IETF RFC 9421. Early partners include Adyen, Stripe, Shopify, Microsoft, Coinbase, Fiserv, and Worldpay. By December, hundreds of real agent-initiated transactions had been completed in pilots. The protocol is open source on GitHub.
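The RFC 9421 pattern is worth seeing concretely: build a “signature base” from covered message components, sign it, and ship the result in Signature-Input and Signature headers. This is a simplified sketch; TAP’s actual profile (key management, algorithms, agent metadata) may differ.

```python
# Simplified sketch of the RFC 9421 HTTP message-signature pattern that
# Visa's TAP builds on: a signature base over covered components, signed
# with hmac-sha256. Key handling is pared down for the demo.

import hmac, hashlib, base64, time

def signature_base(method: str, authority: str, path: str, params: str) -> str:
    return (f'"@method": {method}\n'
            f'"@authority": {authority}\n'
            f'"@path": {path}\n'
            f'"@signature-params": {params}')

key = b"agent-signing-key"                       # assumed pre-shared for the demo
params = f'("@method" "@authority" "@path");created={int(time.time())};keyid="agent-42"'
base = signature_base("POST", "merchant.example", "/checkout", params)
sig = base64.b64encode(hmac.new(key, base.encode(), hashlib.sha256).digest()).decode()

headers = {"Signature-Input": f"sig1={params}", "Signature": f"sig1=:{sig}:"}
print(headers)
```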

Mastercard’s Agent Pay, launched April 2025, requires agents to be registered and verified before transacting on the network. By mid-November, all US Mastercard cardholders were enabled. Agents receive cryptographically secure Agentic Tokens that make every transaction traceable.

Google’s Agent Payments Protocol (AP2), developed with 60+ organizations including American Express, PayPal, and Coinbase, uses tamper-proof, cryptographically signed “Mandates” for intent, cart, and payment verification.
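A toy version of the mandate idea, with invented field names: the user’s device signs an intent, the agent presents it, and the merchant verifies it before honoring the purchase. AP2’s real mandates are richer than this.

```python
# Toy sketch of AP2-style signed mandates. Field names and the shared-key
# HMAC scheme are assumptions; this only shows the sign-then-verify shape.

import hmac, hashlib, json

USER_KEY = b"user-device-key"   # assumption: key held by the user's wallet/device

def sign_mandate(mandate: dict) -> dict:
    body = json.dumps(mandate, sort_keys=True).encode()
    mandate["sig"] = hmac.new(USER_KEY, body, hashlib.sha256).hexdigest()
    return mandate

def verify_mandate(mandate: dict) -> bool:
    sig = mandate.pop("sig")
    body = json.dumps(mandate, sort_keys=True).encode()
    return hmac.compare_digest(sig, hmac.new(USER_KEY, body, hashlib.sha256).hexdigest())

intent = sign_mandate({"type": "intent", "user": "alice", "max_usd": 200})
print(verify_mandate(dict(intent)))   # True if untampered
```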

CrowdStrike acquired SGNL for $740 million on January 8, a deal aimed squarely at identity security for AI agents. CEO George Kurtz: “AI agents operate with superhuman speed and access, making every agent a privileged identity that must be protected.” IDC projects the identity security market growing from $29 billion in 2025 to $56 billion by 2029.

Trulioo launched its Know Your Agent (KYA) framework and Digital Agent Passport (DAP) in August 2025, co-developed with PayOS. The DAP is a lightweight, tamper-proof token with five verification steps: verify the agent developer, bind the agent to an authorized user, confirm its permission scope, provide real-time behavior telemetry, and score risk continuously. Trulioo joined Google’s AP2 in December and partnered with Worldpay ($2.5 trillion in annual transaction volume across 174 countries).
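Those five steps read naturally as a short verification pipeline. The step names follow Trulioo’s description; every field and threshold below is a stub.

```python
# The five DAP verification steps as a pipeline of stub checks. Step names
# follow the article; data sources and thresholds are hypothetical.

def verify_agent_passport(passport: dict) -> bool:
    checks = [
        lambda p: p.get("developer_verified") is True,            # 1. verify the developer
        lambda p: bool(p.get("bound_user")),                      # 2. bind to an authorized user
        lambda p: set(p.get("scopes", [])) <= {"pay", "browse"},  # 3. confirm permission scope
        lambda p: p.get("telemetry_stream") is not None,          # 4. real-time behavior telemetry
        lambda p: p.get("risk_score", 1.0) < 0.5,                 # 5. continuous risk scoring
    ]
    return all(check(passport) for check in checks)

print(verify_agent_passport({"developer_verified": True, "bound_user": "alice",
                             "scopes": ["pay"], "telemetry_stream": "wss://…",
                             "risk_score": 0.1}))
```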

Sumsub launched AI Agent Verification on January 29, claiming to be the first to offer “agent-to-human binding” at scale. PYMNTS covered the launch, noting that Sumsub’s Identity Fraud Report found a 180% year-over-year increase in coordinated fraud attacks.

Catena Labs raised $18 million from a16z crypto to build the first fully regulated, AI-native financial institution for agents. CEO Sean Neville previously co-founded Circle and co-created USDC.

Microsoft introduced Entra Agent ID in January, giving each agent a unique identity profile with conditional access policies. Okta launched Agent Discovery on February 12, using OAuth consent events to detect shadow AI agents in enterprise environments.
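Okta’s approach suggests a simple pattern anyone can approximate: scan OAuth consent events for token grants to clients missing from an approved-agent inventory. The event fields and inventory below are invented for the demo.

```python
# Minimal shadow-agent scan over OAuth consent events, in the spirit of
# Okta's Agent Discovery. Event structure and inventory are assumptions.

APPROVED_AGENTS = {"travel-bot", "expense-bot"}

consent_events = [
    {"client_id": "travel-bot", "user": "alice", "scope": "calendar:read"},
    {"client_id": "mystery-scraper", "user": "bob", "scope": "files:read"},
]

shadow = [e for e in consent_events if e["client_id"] not in APPROVED_AGENTS]
for e in shadow:
    print(f"shadow agent? {e['client_id']} granted {e['scope']} by {e['user']}")
```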

Cloudflare published a detailed technical post on securing agentic commerce with Visa and Mastercard. PYMNTS declared that KYA has become “table stakes” for delegated, machine-initiated commerce. VentureBeat reported that machine identities now outnumber humans 82 to 1 in enterprise environments.

The Academic Foundation

The research community has been ahead of both government and industry. Several papers deserve attention from anyone responding to the NIST RFI.

“Authenticated Delegation and Authorized AI Agents” by Tobin South et al. at MIT Media Lab, accepted as an Oral at ICML 2025, proposes extending OAuth 2.0 and OpenID Connect with agent-specific credentials. It defines “authenticated delegation” so third parties can verify an entity is an AI agent, acts on behalf of a specific user, and has defined permissions. The OpenID Foundation whitepaper on identity management for agentic AI builds directly on this work.
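The paper’s three verifiable claims (the caller is an AI agent, it acts for a named user, its permissions are bounded) map naturally onto a JWT-shaped token. The claim names below are illustrative, not the paper’s actual wire format.

```python
# Sketch of an authenticated-delegation credential: a signed token asserting
# agent identity, the delegating user, and bounded scope. Claim names and
# the HMAC signing scheme are assumptions for illustration.

import hmac, hashlib, base64, json, time

IDP_KEY = b"identity-provider-key"   # assumed key of the issuing identity provider

def issue_delegation_token(user: str, agent_id: str, scopes: list[str]) -> str:
    claims = {"sub": agent_id, "act_for": user, "is_agent": True,
              "scope": scopes, "exp": int(time.time()) + 3600}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(IDP_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

print(issue_delegation_token("alice@example.org", "agent-42", ["flights:search"]))
```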

“Know Your Agent: Governing AI Identity on the Agentic Web” by Tomer Jordi Chaffer introduced KYA as a theoretical framework using blockchain, smart contracts, DAOs, soulbound tokens, and zero-knowledge proofs for decentralized agent governance. Published February 2025, it gave the concept its name.

“A Novel Zero-Trust Identity Framework for Agentic AI” from the Cloud Security Alliance, AWS, MIT, and Salesforce proposes Decentralized Identifiers (DIDs), Verifiable Credentials (VCs), and an Agent Naming Service for discovery. The Cloud Security Alliance’s survey of 285 IT/security professionals found that only 23% have a formal enterprise-wide strategy for agent identity management.
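For readers who haven’t seen one, a minimal DID document for an agent might look like the sketch below; the did:web identifier, key, and service entry are illustrative, and the proposed Agent Naming Service would be the discovery layer on top.

```python
# Illustrative DID document for an AI agent: an identifier, a verification
# key, and a service endpoint. Values are placeholders, not a real agent.

import json

agent_did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:web:example.org:agents:travel-bot",
    "verificationMethod": [{
        "id": "did:web:example.org:agents:travel-bot#key-1",
        "type": "JsonWebKey2020",
        "controller": "did:web:example.org:agents:travel-bot",
        "publicKeyJwk": {"kty": "OKP", "crv": "Ed25519", "x": "…"},
    }],
    "service": [{"id": "#telemetry", "type": "AgentTelemetry",
                 "serviceEndpoint": "https://example.org/agents/travel-bot/logs"}],
}
print(json.dumps(agent_did_document, indent=2))
```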

And “Multi-Agent Risks from Advanced AI” from the Cooperative AI Foundation, with 30+ researchers from DeepMind, Anthropic, Carnegie Mellon, and Harvard, identifies three failure modes (miscoordination, conflict, collusion) and was submitted to NITRD as policy input for the AI Action Plan.

Why 54 Comments Isn’t Enough

The NIST RFI and NCCoE concept paper together represent the most significant US government engagement with AI agent identity to date. The responses will inform whatever standards, guidance, or frameworks NIST produces next. Those frameworks will influence procurement requirements, compliance expectations, and eventually, regulation.

Right now, the conversation is dominated by large enterprises and law firms. The Register quoted Cyata CEO Shahar Tal saying organizations are “letting thousands of interns run around in our production environment” with no identity governance. Harvard Business Review published a CyberArk-sponsored piece on the identity risks of agentic AI. CIO.com ran Trulioo CTO Hal Lonas’s expert piece on KYA as the new frontier of digital commerce.

But the voices of independent developers, open-source communities, startups building verification infrastructure, and researchers working on decentralized approaches are underrepresented. The WEF’s January call for a KYA standard described the goal as “a universal trust layer, much like SSL certificates for websites.” Who builds that trust layer, and on whose terms, depends partly on who shows up to comment.

The blockchain and decentralized identity communities have particular reason to engage. The NCCoE concept paper references OAuth 2.0, OpenID Connect, and SPIFFE/SPIRE, but does not mention DIDs, verifiable credentials, soulbound tokens, or on-chain identity. The academic literature supports these approaches. The ERC-8004 standard has over 30,000 registered agents. If decentralized identity belongs in the conversation, someone needs to put it on the record.

How to Submit a Comment

For the CAISI RFI (deadline: March 9, 2026):

  • Go to regulations.gov/document/NIST-2025-0035-0001
  • Click “Comment” in the upper right
  • You can respond to specific questions or submit a general statement
  • NIST flagged priority questions: 1a, 1d, 2a, 2e, 3a, 3b, 4a, 4b, and 4d
  • Upload supporting documents if you have them

For the NCCoE concept paper (deadline: April 2, 2026):

  • Download the concept paper PDF
  • Email comments to AI-Identity@nist.gov
  • Address any of the four scope areas: identification, authorization, access delegation, logging/transparency

Both processes accept input from anyone: individuals, companies, nonprofits, academic researchers, open-source projects. You don’t need to be a US citizen or a US-based organization.

Twenty-four days. Fifty-four comments. The standards that govern how AI agents prove their identity are being written right now. If you have something to say about how that should work, this is the window.
