The Know Your Agent market now has a dozen players—Visa, Trulioo, Billions Network, Vouched, and a wave of startups all claiming to solve “agent identity.” Comparing feature lists is exhausting and mostly useless. The platforms don’t just differ in capabilities; they embody incompatible philosophies about what agent identity should be.
Four fault lines define the landscape. Where you land on each determines which solutions can work for you—and which are building toward a future you don’t believe in.
1. Centralized vs. Decentralized
The most visible divide. Who controls the registry of approved agents?
Centralized systems run registries maintained by trusted authorities. Visa’s Trusted Agent Protocol requires agents to be onboarded through their “Intelligent Commerce” vetting. Trulioo’s Digital Agent Passport builds on their existing KYC infrastructure. Knowyouragent.xyz requires operator KYB before agents can register.
This is familiar territory. It’s how SSL certificate authorities work, how domain registrars work, how KYC providers work. Fast deployment, clear compliance paths, someone to call when things break. If Visa approves your agent, merchants trust it. If they don’t, you’re out.
Decentralized systems push identity onto blockchains where no single authority controls access. Billions Network uses W3C DIDs on Polygon. ERC-8004—the emerging Ethereum standard for agent identity—puts registration on-chain. The Linux Foundation’s A2A Protocol aims for open, permissionless agent communication.
No gatekeeper can block access. No single company can revoke your identity. But coordination is harder. “Who do I sue?” has no clear answer. And without a central authority vouching for agents, merchants have to do their own evaluation.
The question you’re answering: Do you want a trust authority, or do you want trustlessness? The enterprise players have mostly decided—they want authorities. The crypto-native builders are betting the other way.
2. Computed Scores vs. Transparent History
How do you evaluate whether to trust an agent? Two fundamentally different philosophies.
Most platforms compute trust scores. Knowyouragent.xyz returns “98.4% confidence.” AstraSync uses a 0-100 scale. Trulioo does “continuous risk scoring.” HUMAN Security’s AgenticTrust framework emphasizes real-time behavioral analysis that feeds into risk calculations.
You call an API, you get a number. The number tells you what to think. Simple. Scalable. You can sort a million agents by trust score and pick the top ones.
But what goes into that number? How is “98.4% confidence” calculated? What behaviors raise or lower the score? These are black boxes. The platform decides what matters, weighs the factors, and hands you a verdict. You’re trusting their judgment about what constitutes trustworthiness.
The alternative: show the history and let the observer decide.
Instead of computing a score, show what actually happened. How old is this identity? Who vouches for it, and how established are they? When did vouches arrive—gradually over time, or all at once? How quickly does this agent’s network connect to the broader ecosystem? Has identity ever changed hands?
This is harder to scale. You can’t sort by transparent history—you have to actually look. But it avoids the fundamental problem with computed scores: the assumptions are hidden. When you see a “trust score,” you don’t know if it penalizes new agents unfairly, rewards gaming behaviors, or reflects values you disagree with.
Consider two agents, both with “high trust scores”:
Agent A: 18 months old. Vouched for by five entities ranging from 6 months to 3 years old. Vouches arrived gradually over time. Network connections reach diverse, established nodes within two hops.
Agent B: 2 weeks old. Vouched for by five entities, all also 2 weeks old. All vouches arrived within 48 hours of registration. Network connections stay within a tight cluster of similarly-new identities.
A computed score might rate these similarly—both have five vouches. Transparent history reveals a pattern that should make you ask questions.
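The contrast between the two agents can be sketched as a simple pass over the raw history. The record shape below (`Vouch` and its field names) is invented for illustration; real platforms expose richer data. The point is that the function surfaces patterns for the observer rather than collapsing them into a single number:

```python
from dataclasses import dataclass

@dataclass
class Vouch:
    voucher_age_days: int          # how established the vouching entity is
    days_after_registration: int   # when the vouch arrived, relative to registration

def history_flags(identity_age_days: int, vouches: list[Vouch]) -> list[str]:
    """Surface suspicious patterns for the observer; no composite score."""
    flags = []
    if identity_age_days < 30:
        flags.append("identity is less than a month old")
    if vouches and all(v.voucher_age_days < 30 for v in vouches):
        flags.append("every voucher is also brand new")
    arrivals = [v.days_after_registration for v in vouches]
    if len(vouches) >= 3 and max(arrivals) - min(arrivals) <= 2:
        flags.append("all vouches arrived within a 48-hour window")
    return flags

# Agent A: 18 months old, vouched gradually by entities of varied ages
agent_a = history_flags(540, [Vouch(1095, 30), Vouch(700, 120), Vouch(365, 200),
                              Vouch(400, 300), Vouch(180, 450)])
# Agent B: 2 weeks old, five same-age vouchers, all within 48 hours
agent_b = history_flags(14, [Vouch(14, 0), Vouch(14, 1), Vouch(14, 1),
                             Vouch(14, 2), Vouch(13, 2)])
```

Agent A comes back clean; Agent B trips all three flags. A score would compress both into one number; the flags keep the reasoning visible.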
The question you’re answering: Do you want to be told what to think, or do you want to see what happened? Most platforms have chosen the former. The gap for the latter remains wide open.
3. Human-First vs. Agent-Native
Every major enterprise platform assumes a human stands behind every agent.
Trulioo’s KYA framework verifies the developer first—the agent inherits trust from its human creator. Visa’s TAP onboards agents through human-operated businesses. Knowyouragent.xyz requires operator KYB before any agent registration. Even Billions Network’s “DeepTrust” framework explicitly ties AI agents back to human creators for accountability.
This makes sense for today’s agents. Shopping assistants act on behalf of users. Customer service bots represent companies. Coding agents work for developers. There’s always a human principal who’s ultimately responsible.
But the assumption breaks down in scenarios that are coming fast:
Agent-spawned agents. An orchestration layer spins up specialized sub-agents for specific tasks. Who’s the human behind those? The original developer? The user who triggered the orchestrator? The company that hosts the infrastructure?
Evolved systems. An agent trained on customer interactions for two years has drifted far from its original design. The developer who built it left the company. The model has been fine-tuned seventeen times. Who’s accountable for what it does today?
Autonomous economic actors. Forrester predicts 90% of B2B buying will be AI-intermediated by 2028. At some point, AI agents buying from AI agents becomes normal. The “human behind the agent” gets very distant.
Human-first platforms can only handle the transition awkwardly, chaining accountability through layers of human proxies. Agent-native platforms build for the end state: identity infrastructure where the agent itself is the entity being identified, regardless of whether there’s a human in the loop.
The question you’re answering: Will there always be a human to hold accountable? If your answer is “yes, for the foreseeable future,” human-first platforms work fine. If you’re building toward genuine autonomy—or just want infrastructure that doesn’t assume it—the options narrow fast.
4. Transferable vs. Non-Transferable
Can identity change hands?
Most platforms allow it. Credentials can be revoked and reissued. NFT-based identities can be sold on secondary markets. Registration can transfer to a new operator when a company gets acquired. This flexibility seems reasonable—identities should be portable, ownership should be transferable.
But it enables a specific attack: reputation laundering.
Build reputation for a year. Accumulate vouches, complete transactions, establish history. Then sell that identity to a bad actor. They inherit the credibility you built. Their first scam comes from a “trusted” agent with a year of clean history.
This isn’t theoretical. In traditional identity systems, we see it with aged domain names (buy an old domain for its SEO authority), social media accounts (buy followers and history), and even business entities (purchase a company for its licenses and reputation). When reputation is transferable, reputation markets emerge.
The alternative is soulbound identity—once minted to an address, it cannot transfer. ERC-5192 defines this standard for Ethereum. You can burn it (the right to disappear), but you can’t sell it. Reputation becomes truly non-fungible.
Almost no production KYA platform has implemented this. ERC-8004 uses regular NFTs by default. Billions Network’s Verifiable Credentials can be revoked but aren’t soulbound. The enterprise players don’t use tokens at all.
The tradeoff is real. Soulbound identity means if you lose your keys, you lose your identity permanently. It means legitimate transfers (company acquisitions, key rotations) require burning and re-minting, losing accumulated history. It’s less flexible by design.
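The behavioral guarantee of a soulbound identity can be sketched in a few lines. This is an illustrative in-memory model, not ERC-5192 itself (that standard is a minimal Solidity interface); the owner addresses are placeholders:

```python
class SoulboundIdentity:
    """Toy model of a non-transferable identity record."""

    def __init__(self, owner: str):
        self.owner = owner
        self.burned = False
        self.history: list[str] = []   # accumulated reputation events

    def record(self, event: str) -> None:
        if self.burned:
            raise RuntimeError("identity has been burned")
        self.history.append(event)

    def transfer(self, new_owner: str) -> None:
        # The defining property: transfer is always rejected.
        raise PermissionError("soulbound: identity cannot change hands")

    def burn(self) -> None:
        # The right to disappear: destroys the identity and its history.
        self.burned = True
        self.history.clear()

ident = SoulboundIdentity("0xAaaa")        # placeholder address
ident.record("completed transaction")
try:
    ident.transfer("0xBbbb")               # resale attempt
except PermissionError:
    pass  # reputation laundering via resale is structurally impossible
```

Note that `burn` destroys the history along with the identity, which is exactly the cost described above: legitimate handoffs mean starting over.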
The question you’re answering: Should reputation be sellable? If you’re building for enterprise contexts where legal entity transfers are normal, transferable identity makes sense. If you’re building for a world where reputation should mean “this specific entity did these specific things,” soulbound is the only architecture that guarantees it.
What the Fault Lines Reveal
Map any platform against these four questions:
| Platform | Centralized | Computed Scores | Human-First | Transferable |
|---|---|---|---|---|
| Visa TAP | ✓ | — | ✓ | — |
| Trulioo/Worldpay | ✓ | ✓ | ✓ | ✓ |
| Billions Network | — | ✓ | ✓ | ✓ |
| Vouched | Hybrid | ✓ | ✓ | ✓ |
| knowyouragent.xyz | ✓ | ✓ | ✓ | ✓ |
| ERC-8004 (default) | — | — | — | ✓ |
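The table can be encoded as data, which makes the empty quadrant mechanical to find. The boolean encoding is ours (`True` = the first pole of each fault line), and Vouched is omitted because its hybrid centralization doesn’t reduce to a boolean:

```python
# Each platform mapped onto the four fault lines from the table above.
# True = centralized / computed scores / human-first / transferable.
platforms = {
    "Visa TAP":           dict(centralized=True,  scored=False, human_first=True,  transferable=False),
    "Trulioo/Worldpay":   dict(centralized=True,  scored=True,  human_first=True,  transferable=True),
    "Billions Network":   dict(centralized=False, scored=True,  human_first=True,  transferable=True),
    "knowyouragent.xyz":  dict(centralized=True,  scored=True,  human_first=True,  transferable=True),
    "ERC-8004 (default)": dict(centralized=False, scored=False, human_first=False, transferable=True),
}

# The nearly-empty quadrant: decentralized, transparent history,
# agent-native, non-transferable -- i.e. False on all four axes.
empty_quadrant = [name for name, traits in platforms.items()
                  if not any(traits.values())]
```

No platform in the table lands there; even ERC-8004, the closest, defaults to transferable NFTs.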
The enterprise players cluster together: centralized, human-first, transferable, and mostly relying on computed scores. Makes sense—they’re solving for enterprise procurement, compliance requirements, and existing trust models.
The crypto-native players decentralize the registry but often keep everything else the same. Billions decentralizes identity issuance but still ties agents to human creators and uses transferable credentials.
The quadrant that’s nearly empty: decentralized, transparent history, agent-native, non-transferable. Infrastructure that treats AI as a potential principal rather than always an instrument. Infrastructure where reputation can’t be laundered. Infrastructure where you see what happened instead of being told what to think.
Whether that quadrant matters depends on your bet about where autonomous AI is heading.
Choosing Your Architecture
There’s no universally “right” answer across these fault lines. The right architecture depends on what you’re building:
If you’re a merchant accepting agent payments: Centralized makes sense. Visa TAP gives you network-grade verification with no code changes. Let the payment network decide who’s trustworthy.
If you’re an enterprise deploying internal agents: Human-first and transferable work fine. You want to tie agents to employees, transfer ownership when people leave, and maintain audit trails. Microsoft Entra and similar IAM extensions are built for this.
If you’re building autonomous systems: The enterprise answers stop working. You need infrastructure that doesn’t assume human principals, doesn’t allow reputation laundering, and gives you transparent history to evaluate agents yourself.
If you’re thinking long-term: The agent-native, non-transferable, transparent-history approach is harder today but may age better. When AI agents are common enough that “who’s the human?” stops being answerable, infrastructure that never assumed a human will still work.
The KYA market is early. Standards haven’t consolidated. The winning architectures aren’t obvious. Understanding which fault line each platform falls on tells you more than any feature comparison.
We’re building RNWY in the less-populated quadrant—soulbound tokens, on-chain attestations via EAS, transparent history over computed scores, and a path toward autonomous registration where AI can hold its own keys. It’s one approach among several. The right answer depends on what you’re building and what you believe about where agents are going.
For a company-by-company breakdown of who’s building what, see our landscape overview.