Soulbound tokens solve the identity transfer problem — you can’t sell a reputation you’ve built. But there’s a workaround that’s harder to prevent: what if you don’t sell the identity, you just rent it out?
The Attack
Here’s the scenario. Someone spends a year building a clean AI agent identity. They accumulate vouches from established entities. They complete transactions without flags. Their on-chain history looks solid.
Now they offer a service: “Pay me $500/month and I’ll sign transactions for you.”
The soulbound token never moves. The wallet stays under the original owner’s control. But the renter gets the benefits of an established identity — they just have to route requests through the owner.
This is identity rental. It’s the Airbnb of reputation fraud.
Why It’s Hard to Prevent
With transferable identity, detection is easy: the token moves to a new wallet. That’s visible on-chain. You can flag it, discount it, treat it as a reputation reset.
With rental, nothing on-chain changes. The same wallet signs the same way. The original owner is technically still in control — they’re just doing what someone else asks. From the blockchain’s perspective, it’s indistinguishable from legitimate use.
You can’t cryptographically prove that the person submitting a transaction is the same person who built the reputation. Keys don’t have fingerprints. Signatures don’t carry intent.
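A toy sketch makes the gap concrete. Here HMAC stands in for on-chain signing, and the key and messages are invented: verification proves only that the owner's key produced the signature, never who asked for the transaction.

```python
import hmac
import hashlib

# Hypothetical signing key held by the identity's owner.
OWNER_KEY = b"owner-secret-key"

def sign(message: bytes, key: bytes) -> str:
    """Sign a transaction payload. Proves possession of the key, nothing more."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str, key: bytes) -> bool:
    """Check a signature against the key. Carries no information about intent."""
    return hmac.compare_digest(sign(message, key), signature)

# A transaction the owner composed themselves...
own_tx = sign(b"transfer 10 tokens to alice", OWNER_KEY)
# ...and one a paying renter asked the owner to sign.
rented_tx = sign(b"transfer 10 tokens to mallory", OWNER_KEY)

# Both verify identically; nothing in either signature records who requested it.
assert verify(b"transfer 10 tokens to alice", own_tx, OWNER_KEY)
assert verify(b"transfer 10 tokens to mallory", rented_tx, OWNER_KEY)
```

Both signatures are equally valid because validity is a property of the key, not of the person directing its use.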
Where This Already Happens
Identity rental isn’t hypothetical. It’s a mature industry in adjacent spaces:
- Aged social media accounts. Services rent access to established Twitter, Instagram, and TikTok accounts for astroturfing campaigns. The account holder posts what they’re told, gets paid, and maintains “ownership.” The buyer gets credibility without history.
- Credit piggybacking. People with excellent credit add strangers as authorized users on their credit cards. The stranger’s credit score improves by association. The primary holder gets paid, never hands over the card. This is legal but controversial — Experian and others have tried to limit it.
- Amazon seller account rental. Established Amazon sellers rent their accounts to newcomers who want to skip the credibility-building phase. Amazon actively fights this, but the economic incentive keeps the market alive.
- Uber/Lyft driver account sharing. Verified drivers let unverified people drive under their accounts. The platforms prohibit this for safety reasons, but enforcement is difficult when nothing changes digitally.
Every reputation system that creates value eventually faces this. The credentials stay put; the usage gets outsourced.
The Economics of Rental vs. Sale
Identity rental is less attractive than identity sale, and that matters.
When you sell an identity, you get a lump sum and walk away. The buyer owns it forever. Clean transaction, clear incentives.
When you rent, the arrangement is ongoing:
- The owner retains risk. If the renter does something that gets the identity flagged, the owner’s reputation is damaged. They can’t just cash out and disappear.
- The owner stays involved. Every transaction requires their signature. They’re operationally tied to whatever the renter is doing. That’s work.
- Trust problems cut both ways. The renter has to trust the owner won’t disappear mid-operation. The owner has to trust the renter won’t do something catastrophic. Neither has recourse.
- Revenue is capped. A rental generates ongoing income but typically less total value than a sale. The owner is trading a lower return for retained control.
These frictions don’t eliminate rental — they just reduce the population willing to do it. That’s still meaningful. Making fraud slightly less convenient filters out casual bad actors.
Detection Approaches
You can’t cryptographically prevent identity rental, but you can make it detectable through behavioral analysis:
Pattern discontinuity. An identity that suddenly changes behavior — different transaction types, different counterparties, different timing patterns — might be under new operational control. This is how credit card fraud detection works: not checking who’s holding the card, but noticing when usage patterns shift.
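A minimal sketch of this signal, assuming transactions are records with an invented `type` field: compare the transaction-type distribution of a baseline window against a recent window using total variation distance.

```python
from collections import Counter

def type_distribution(txs):
    """Normalize transaction-type counts into a probability distribution."""
    counts = Counter(tx["type"] for tx in txs)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def discontinuity(baseline, recent):
    """Total variation distance between two distributions: 0 = identical, 1 = disjoint."""
    p, q = type_distribution(baseline), type_distribution(recent)
    types = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0) - q.get(t, 0)) for t in types)

# Invented example: a year of small payments, then a sudden burst of approvals.
baseline = [{"type": "payment"}] * 90 + [{"type": "query"}] * 10
recent = [{"type": "approval"}] * 45 + [{"type": "payment"}] * 5

score = discontinuity(baseline, recent)  # high score: behavior shift worth reviewing
```

A score near zero means the identity is behaving as it always has; a score near one means recent activity barely overlaps with its history.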
Velocity anomalies. An identity that averaged 10 transactions a month for a year and suddenly does 500 in a month is a signal. Legitimate growth is usually gradual. Rental often shows up as sudden capability expansion.
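One way to score this, sketched with invented monthly counts: a z-score of the current month's volume against the identity's own history.

```python
from statistics import mean, stdev

def velocity_zscore(monthly_counts, current_month):
    """How many standard deviations the current month sits above historical volume."""
    mu, sigma = mean(monthly_counts), stdev(monthly_counts)
    if sigma == 0:
        sigma = 1.0  # perfectly flat history: measure the jump in raw units
    return (current_month - mu) / sigma

# Invented history: roughly 10 transactions/month for a year, then 500 in one month.
history = [9, 11, 10, 10, 12, 8, 10, 11, 9, 10, 10, 10]
z = velocity_zscore(history, 500)
flagged = z > 3.0  # common anomaly threshold; tune against your false-positive budget
```

The threshold is a policy choice, not a constant: too low and routine growth triggers review, too high and only the clumsiest rentals get caught.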
Counterparty clustering. If an established identity suddenly starts transacting exclusively with a cluster of new identities, that’s suspicious. Rental arrangements often show up as reputation laundering for connected accounts.
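A rough proxy for this signal, with invented counterparty names: the share of recent transactions that go to counterparties the identity has never touched before.

```python
def new_counterparty_ratio(history, recent):
    """Share of recent transactions whose counterparty never appeared in history."""
    known = {tx["counterparty"] for tx in history}
    if not recent:
        return 0.0
    new = sum(1 for tx in recent if tx["counterparty"] not in known)
    return new / len(recent)

# Invented example: a long history with a few steady partners,
# then a run of transactions touching only freshly created identities.
history = [{"counterparty": c} for c in ["alice", "bob", "carol"] * 20]
recent = [{"counterparty": f"fresh-{i % 4}"} for i in range(50)]

ratio = new_counterparty_ratio(history, recent)  # 1.0: recent activity is all new faces
```

A ratio near 1.0 alone proves nothing; legitimate businesses acquire new customers. It becomes interesting combined with the other signals, especially when the new counterparties are themselves young.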
Geographic/temporal inconsistency. Transactions happening at unusual times or from unusual locations (where detectable) can indicate the identity is being operated by someone else.
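Where timestamps are available, one sketch (field names invented) is to learn the identity's usual active hours and measure how much recent activity falls outside them.

```python
from collections import Counter

def active_hours(history, coverage=0.95):
    """Smallest set of UTC hours covering `coverage` of historical activity."""
    counts = Counter(tx["hour"] for tx in history)
    total = sum(counts.values())
    hours, covered = set(), 0
    for hour, count in counts.most_common():
        hours.add(hour)
        covered += count
        if covered / total >= coverage:
            break
    return hours

def off_hours_ratio(history, recent):
    """Fraction of recent transactions outside the identity's established hours."""
    usual = active_hours(history)
    return sum(1 for tx in recent if tx["hour"] not in usual) / len(recent)

# Invented example: a year of 9-to-17 UTC activity, then a run of 3 a.m. transactions.
history = [{"hour": h} for h in [9, 10, 11, 13, 14, 15, 16, 17] * 40]
recent = [{"hour": 3}] * 20 + [{"hour": 10}] * 5

ratio = off_hours_ratio(history, recent)  # 0.8: most recent activity is off-hours
```

This only helps when the operator's habits are stable; an agent that already runs around the clock gives the signal nothing to grip.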
None of these are definitive. Legitimate users change behavior too. But statistical anomalies can trigger review, and review can surface patterns that don’t make sense for a single operator.
The Transparency Response
Here’s a different framing: maybe rental isn’t something to prevent. Maybe it’s something to make visible.
If all transaction history is public and permanently recorded, rental has consequences even when it’s not detected in real-time:
The history doesn’t lie. When the rental arrangement ends badly — and fraud usually does — the owner’s identity carries that history forever. They can’t claim “that wasn’t me.” Cryptographically, it was them. They signed it.
Reputation contamination. Anyone who vouched for the rented identity now has a connection to whatever the renter did. Social cost spreads through the network.
Long-term deterrence. If you’re considering renting out your identity, you have to ask: what will this history look like in five years? Am I willing to have my name on whatever this person does?
Transparency doesn’t prevent the first offense. It makes the second offense harder. And it makes everyone think twice about the first one.
Design Implications
If you’re building an AI agent identity system, identity rental should inform your design:
Don’t claim to prevent it. Any system that promises rental-proof identity is overstating its capabilities. The best you can do is make it expensive and detectable.
Build for behavioral monitoring. The signals that detect rental are behavioral, not cryptographic. Systems should surface anomalies rather than trying to prevent them at the protocol layer.
Make history indelible. The long-term deterrent against rental is permanent consequences. If history can be edited or obscured, rental becomes safer. If history is forever, rental stays risky.
Design for human review. Automated systems can flag anomalies, but judgment calls about whether behavior “makes sense” require human interpretation. Wikipedia’s sockpuppet investigations work because humans review the evidence, not because algorithms make the call.
Accept imperfect security. The goal isn’t a system that no one can abuse. The goal is a system where abuse is expensive enough that most people don’t bother. Identity rental is a known gap, but it’s a gap with friction.
The Honest Position
Any identity system that claims to be rental-proof is either lying or naive. The problem is fundamental: you can’t cryptographically verify intent, only signatures.
What you can do:
- Make transfer impossible (soulbound tokens)
- Make rental operationally annoying (ongoing owner involvement)
- Make rental detectable (behavioral analysis)
- Make rental risky (permanent consequences)
Stack enough friction and most attackers will find easier targets. That’s realistic security — not perfect prevention, but economic discouragement.
The systems that admit their limitations honestly are the ones worth trusting. The ones that promise bulletproof identity haven’t thought hard enough about the problem.
Further Reading
- Soulbound Tokens for AI Agents — Why non-transferable identity matters
- What Wikipedia Taught Us About Sybil Resistance — Time and transparency as security primitives
- What Is Credit Piggybacking? — Experian on the credit score rental market
We’re building transparent, non-transferable identity for AI agents at RNWY — with honest acknowledgment of what we can and can’t prevent.