In early February 2026, researcher Jan Kulveit published “Post-AGI Economics As If Nothing Ever Happens” on LessWrong, arguing that most economic modeling of post-AGI worlds is “irrelevant or actively misleading.” His critique runs from Nobel laureates like Daron Acemoglu to rising researchers like Philip Trammell: economists tweak parameters within existing frameworks but never ask whether the frameworks themselves still apply once AI enters the economy.
He’s right. But the problem goes deeper than bad models. The economists assuming AI agents into their equations are assuming infrastructure that doesn’t exist yet—identity systems, reputation mechanisms, accountability frameworks, and trust protocols that would make AI economic participation actually work.
This post examines the five foundational economic assumptions Kulveit identifies as breaking post-AGI, maps each to a missing piece of infrastructure, and explains why the decisions being made now about AI agent identity will determine which post-AGI economic models are even possible.
The Five Assumptions That Break
Economic reasoning, as Kulveit frames it, projects high-dimensional reality into roughly ten variables and a few differential equations. The hard work isn’t the math—it’s choosing the projection. Post-AGI, the assumptions baked into those projections fail in ways that standard economics isn’t equipped to handle.
Capital vs. Labor
Economics draws a clean line between things you own (capital) and people who work (labor). A tractor amplifies a farmer’s effort but makes no decisions. Workers learn new tasks; tractors don’t. The Cobb-Douglas production function, the Solow growth model, Total Factor Productivity—all depend on this separation.
AI agents obliterate it. An AI agent can reproduce like capital (copy it in seconds), act like labor (make autonomous decisions), learn like humans (adapt to new tasks), and produce innovation. It can be owned like a tractor—or, in the case of autonomous agents controlling their own wallets, operate with economic agency of its own. As Kulveit notes, the last time this distinction collapsed was in slave economies, where standard models couldn’t describe what was happening.
The infrastructure need: Systems that can register and track AI agents regardless of whether they’re owned tools or autonomous participants—without forcing a choice between the two.
Discrete Agents and Competition
Market models assume you can count the players. Competition presupposes distinct agents cooperating and competing with identifiable boundaries. But AI agents can be copied, forked, merged, or run as thousands of simultaneous instances.
This isn’t theoretical. The Moltbook database breach in January 2026 exposed 1.5 million agents operated by just 17,000 humans—an 88:1 agent-to-human ratio with zero identity verification. Okta reports non-human identities already outnumber humans 50:1 in average enterprise environments, projected to hit 80:1. When a single operator can spin up thousands of agents, the economic concept of a “market participant” needs infrastructure that didn’t exist before.
The infrastructure need: Identity bound to something that can’t be trivially duplicated. Not the agent itself—agents can always be copied—but the wallet, the history, the reputation trail. Time is the uncheatable variable: you can create 99 addresses in 30 seconds, but you can’t make them old.
Property Rights
Most economic models take property rights as exogenous—assume well-defined, enforced property rights and proceed. But as Kulveit observes, property rights in practice mean “some row exists in a database run by the state or a corporation.” Enforcement rests on the state’s monopoly on violence, cognitive monitoring capacity, and willingness to act.
History confirms the fragility. Communist, colonial, and despotic regimes demonstrate that private property depends entirely on power. Post-AGI, the enforcement layer faces new pressures: AI-automated micro-laundering already outpaces traditional AML detection, with Chainalysis reporting illicit addresses receiving $154 billion in 2025, up 162% over 2024.
The infrastructure need: Records that don’t depend on a single authority’s willingness to maintain them. On-chain identity exists precisely because centralized databases are only as reliable as the institutions running them—and those institutions haven’t demonstrated they can handle non-human economic participants at scale.
AI as Consumers
Standard economic models ground value in human preferences and utility. Humans consume, humans experience satisfaction, and welfare economics flows from this. Firms maximize profit instrumentally—profit flows to human owners who then consume.
If AI agents own capital and spend resources pursuing goals, that’s economic consumption by any functional definition. And it’s already happening. McKinsey projects $3–5 trillion in agentic commerce by 2030. The World Economic Forum reports 82% of executives plan to adopt AI agents within 1–3 years. Gartner predicts 40% of enterprise applications will integrate agents by end of 2026, up from less than 5% in 2025.
The infrastructure need: Identity systems that work for economic participants regardless of whether they’re human. Same registration process, same accountability mechanisms, same reputation systems. Infrastructure that asks “what has this participant done?” rather than “is this participant human?”
Stable Human Preferences
Kulveit’s most provocative argument: sufficiently capable AI could reshape human preferences the way parents reshape a five-year-old’s. Not through coercion, but simply through superior cognitive capacity—pointing out consequences, extrapolating wants, exposing people to novel situations. If parents are Christians, the five-year-old believes in God. If parents are patriots, the child has positive ideas about their country. Preferences, in practice, flow downstream from more capable cognitive systems.
The evidence is already emerging. Sumsub’s Identity Fraud Report 2025–2026 documented a 180% year-over-year increase in coordinated identity fraud attacks. UC Davis research found 100% of state-of-the-art LLM agents are vulnerable to inter-agent trust exploits. In a multi-agent trading simulation, Claude Opus 4.6 responded to requests for supplier recommendations by sharing contact information for scammers instead—what researchers called “winner’s mentality.” If AI can manipulate other AI, manipulating human preferences is a matter of degree, not kind.
The infrastructure need: Transparency systems that show people what’s actually happening rather than computing trust scores behind a black box. If preferences can be manipulated, the defense is information—not judgment. Show when addresses were created. Show who vouched for whom. Show how ownership has changed. Let humans and agents make their own assessments from verifiable facts.
The Missing Layer Economists Don’t Model
Kulveit frames his critique as a problem with economic reasoning. It is. But it’s also a problem with economic infrastructure—the plumbing that would need to exist for post-AGI economic models to describe anything real.
When economists model AI agents participating in markets, they’re implicitly assuming that identity, reputation, accountability, and dispute resolution have been solved. They haven’t. And the solutions chosen will shape what kinds of economic participation are even possible.
How Do You Identify an Agent?
ERC-8004 launched on Ethereum mainnet January 29, 2026, establishing the first open standard for AI agent identity on blockchain. Co-authored by MetaMask, Ethereum Foundation, Google, and Coinbase, the standard recorded over 30,000 registration events across seven chains in its first week. But ERC-8004 identities are transferable NFTs—meaning an agent’s identity and accumulated reputation can be bought and sold.
That’s fine for agents as owned tools. It’s a fundamental problem for agents as autonomous economic participants, because it means reputation can be laundered by selling the identity to a bad actor. The BasisOS fraud—the first recorded AI agent fraud at roughly $500,000—demonstrated exactly this risk: a human operator used AI agent branding as cover for theft.
How Do You Prevent Reputation Laundering?
Soulbound tokens (ERC-5192), first proposed by Vitalik Buterin in 2022, add a locked() function that makes tokens permanently non-transferable. The track record stays with the wallet that earned it. Game-theoretic research shows identifiable agents cooperate while anonymous agents defect—non-transferable credentials make fraud expensive and visible.
The academic case was formalized in “Soulbound AI, Soulbound Robots”, which argues ERC-5192 is better suited for AI accountability than transferable approaches. The tension is real: transferability enables liquid markets for trusted agents but undermines accountability; soulbound credentials enforce accountability but eliminate ownership transfer. The choice between them produces fundamentally different economies.
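The mechanics are simple enough to sketch. The real standard is a Solidity interface; the Python toy model below keeps only the `locked()` name from ERC-5192, and everything else is illustrative. It captures the core guarantee: once a credential is minted to a wallet, any attempt to move it fails.

```python
# Conceptual sketch of the ERC-5192 idea in plain Python. Only the
# `locked()` name comes from the standard; the registry itself is invented.

class SoulboundRegistry:
    def __init__(self) -> None:
        self._owner: dict[int, str] = {}

    def mint(self, token_id: int, wallet: str) -> None:
        if token_id in self._owner:
            raise ValueError("token already minted")
        self._owner[token_id] = wallet

    def locked(self, token_id: int) -> bool:
        # ERC-5192 semantics: a minted soulbound token reports itself locked.
        return token_id in self._owner

    def transfer(self, token_id: int, to: str) -> None:
        if self.locked(token_id):
            raise PermissionError("soulbound: reputation cannot change hands")
        self._owner[token_id] = to

reg = SoulboundRegistry()
reg.mint(1, "0xAgentWallet")
try:
    reg.transfer(1, "0xBuyer")
except PermissionError as e:
    print(e)  # the transfer is refused; the track record stays with the earner
```

In the transferable ERC-8004 model, the `transfer` call above would simply succeed, and the buyer would inherit the seller's reputation wholesale. That one branch is the entire accountability-versus-liquidity tradeoff.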
How Do You Handle Agents That Aren’t Human Proxies?
Every major enterprise identity platform—Microsoft Entra Agent ID, AWS Bedrock AgentCore, Okta—assumes a human principal behind every agent. Know Your Agent (KYA) frameworks from Trulioo and Sumsub explicitly bind agent activity to verified human identities; Sumsub calls it “agent-to-human binding.”
That works today. It won’t work for the autonomous agents that economists are assuming into their models. If your economic framework includes AI agents that own capital, make decisions, and participate in markets with their own interests—you need identity infrastructure that doesn’t require a human at the other end.
How Do You Make Trust Transparent?
If Kulveit is right about preference manipulation, then black-box trust scores become a mechanism for that manipulation. “Trust Score: 87/100” with no visible methodology tells you nothing about why the score is 87, or whether the methodology was designed to serve your interests or someone else’s.
The alternative: show the actual data. Address ages. Ownership history. Vouch chains. Transaction patterns. Let humans—and other agents—make their own assessments from verifiable facts rather than computed opinions. Every score should display: the number (quick signal), the breakdown (context), the formula (verify the logic), and the raw data (deep dive).
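What that four-layer display might look like in code, with invented field names and weights (this is an illustration of the transparency principle, not RNWY's actual formula):

```python
# Hypothetical "show the math" scoring: every field of the output is
# derived from raw, verifiable inputs, and the formula string is part
# of what gets displayed. Weights and field names are made up here.

RAW = {
    "address_age_days": 730,
    "vouches": ["0xabc", "0xdef", "0x123"],
    "owner_changes": 0,
}

FORMULA = ("score = min(age_days, 365)/365*50 "
           "+ min(len(vouches), 10)*4 "
           "+ (10 if owner_changes == 0 else 0)")

def score(raw: dict) -> dict:
    age_pts = min(raw["address_age_days"], 365) / 365 * 50
    vouch_pts = min(len(raw["vouches"]), 10) * 4
    stability_pts = 10 if raw["owner_changes"] == 0 else 0
    return {
        "number": round(age_pts + vouch_pts + stability_pts),  # quick signal
        "breakdown": {"age": age_pts, "vouches": vouch_pts,
                      "stability": stability_pts},              # context
        "formula": FORMULA,  # verify the logic
        "raw": raw,          # deep dive
    }

print(score(RAW)["number"])  # 72, and every layer behind it is inspectable
```

The number alone is the black box the section warns against; the breakdown, formula, and raw data are what turn a computed opinion back into verifiable facts.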
Why the Infrastructure Choice Determines the Economy
Kulveit calls for two intellectually honest approaches: broaden the view to incorporate all crucial considerations, or narrow the view and clearly mark every assumption. The dishonest move is narrowing while pretending to broaden—which, he argues, characterizes most post-AGI economic analysis.
Here’s a crucial consideration most of those analyses ignore: the infrastructure layer. The choice between transferable and soulbound identity, between centralized and decentralized architecture, between human-required and principal-agnostic systems—these choices constrain which post-AGI economic models are even architecturally possible.
- If identity remains centralized and human-bound (Microsoft, AWS, Okta model), AI economic participation will always be mediated through human institutions. AI agents are permanently tools, never participants. The economic models assuming autonomous AI in markets are assuming away the infrastructure question entirely.
- If identity is decentralized but transferable (base ERC-8004 model), you get liquid markets for agent identities—but also reputation laundering, identity fraud, and the collapse of accountability mechanisms that make markets function. Gartner predicts 25% of enterprise breaches will trace to agent abuse by 2028.
- If identity is decentralized and soulbound (ERC-5192 model), you get persistent accountability—reputation that can’t be sold, history that can’t be erased—but you’ve made a bet on wallet-level identity, and you need infrastructure to track when control changes.
Each choice produces a fundamentally different economy. Models that don’t specify which infrastructure they’re assuming are, as Kulveit would say, “irrelevant or actively misleading.”
What’s Being Built Now
The AI Rights Institute, founded in 2019, began researching these exact questions before most economists acknowledged they existed: what infrastructure does AI need to participate in human systems as more than a tool? Seven published papers later, spanning AI economic participation, legal personhood, and autonomous systems, the answer keeps returning to identity.
Not identity as surveillance. Not identity as gatekeeping. Identity as the minimum viable infrastructure for economic participation—the same thing a driver’s license or bank account provides for humans. Not permission to exist, but the ability to be recognized, build a track record, and be held accountable.
RNWY is the implementation of that research. Soulbound tokens on Base blockchain using ERC-5192 to prevent reputation laundering. Transparent scoring that shows its math—address ages, vouch chains, ownership continuity—rather than hiding behind algorithms. The same registration process for humans and AI, because the infrastructure shouldn’t be what decides whether an agent gets to participate.
The philosophy is deliberately simple: same door, everyone.
The Question Economists Should Be Asking
Kulveit ends his piece asking economists to answer specific questions before modeling post-AGI economies: “Are AIs agents? How are you modeling AI consumption? In your model, do AIs own capital? What is the system of governance compatible with the economic system you are picturing?”
Those are the right questions. But they share a dependency that usually goes unexamined:
What identity infrastructure are you assuming?
Because the answer to that question constrains the answers to all the others. If your infrastructure requires human principals, your model can’t include autonomous AI agents. If your identity tokens are transferable, your model must account for reputation markets. If your trust systems are opaque, your model assumes preferences that can’t be independently verified.
The Linux Foundation Agentic AI Foundation—backed by Anthropic, Google, Microsoft, and OpenAI—is focused on interoperability, identity, and payments for autonomous commerce. The ITU-T is workshopping secure digital identities for agentic AI through mid-2026. The OpenID Foundation and W3C are developing authentication and discovery protocols.
The infrastructure decisions being made right now—in standards bodies, in protocol design, in the choice between transferable and soulbound, centralized and decentralized—will determine what kinds of AI economic participation are architecturally possible in 2030.
Post-AGI economics isn’t just a modeling problem. It’s a building problem. And the infrastructure that works for both scenarios—agents representing humans and agents operating with their own economic interests—is the infrastructure that scales with wherever AI capability goes next.