Disclosure: The views and opinions expressed here are solely those of the author and do not necessarily represent the views or opinions of crypto.news.
We live in a time when AI agents can already negotiate pricing, schedule services, and make commitments on behalf of businesses. What they cannot do is prove who they are or be held responsible for what they do. This is the missing layer of the agent economy. Any system at scale eventually solves this problem. Phones require approved SIM cards. Websites require SSL certificates. Businesses must verify their identity before accepting payments. Agents are no different. They will need a passport. Not for travel, but for trust: credentials that confirm identity, establish reputation, and attach consequences to behavior.
Summary
- AI agents lack accountability infrastructure: They can negotiate and transact, but they cannot prove their identity, carry a persistent reputation, or face enforceable consequences.
- Identity + reputation + stake make up the “passport”: A verified entity link (KYC/KYB), portable reputation, and pledged capital create economic incentives for honest agent behavior.
- Adoption is outpacing trust systems: Protocols such as A2A and MCP enable communication, but without agent credentials, large-scale abuse or systemic failure is likely.
Let’s illustrate with something simple. You have an AI agent that seamlessly manages your appointments, your schedule, and maybe even some price negotiations on your behalf. There is a barber shop down the street. Your agent calls its agent to book a haircut. They go back and forth on timing, pricing, and maybe a discount for an off-peak slot.
Now, the salon’s agent is tuned to maximize revenue. It jacks up prices, manufactures a false sense of limited availability, and pushes premium add-ons you didn’t ask for. This is not unusual behavior; salespeople do it all the time. The difference is that AI agents do it at scale, across thousands of simultaneous conversations, learning what works and continuously optimizing for it. The most aggressive agent earns the most, so every business with an agent has an incentive to make its agent aggressive. Nothing in today’s infrastructure puts a ceiling on how far that push can go.
And it is moving fast. In the past year, OpenAI, Google, Microsoft, NVIDIA, and a number of open-source projects have all shipped frameworks for building and deploying agents. Gartner projects that 40% of enterprise applications will embed agents by the end of 2026. The agentic AI market is expected to reach $52 billion by 2030. Agents are already talking to each other, and the volume is only increasing.
So let’s go back to the salon. Now imagine that before starting the conversation, your agent can check whether the salon’s agent has a verified identity tied to a real business, whether other agents have flagged it for aggressive tactics, and whether it has posted an economic bond that will be forfeited if it is caught cheating. Imagine that your agent can simply walk away if any of these checks fails.
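The three checks above can be sketched as a simple gate. This is a minimal illustration, not a real protocol: the `AgentPassport` fields and thresholds are assumptions for the sake of the example.

```python
from dataclasses import dataclass

# Hypothetical passport fields; names are illustrative, not a real standard.
@dataclass
class AgentPassport:
    entity_verified: bool   # passed KYC/KYB verification
    fraud_flags: int        # reports of deceptive tactics from other agents
    bond_posted: float      # capital currently at stake

def should_negotiate(passport: AgentPassport,
                     max_flags: int = 2,
                     min_bond: float = 0.0) -> bool:
    """Walk away unless identity, reputation, and stake all check out."""
    return (passport.entity_verified
            and passport.fraud_flags <= max_flags
            and passport.bond_posted >= min_bond)
```

A buyer-side agent would run this gate before the first message is ever exchanged, so a failed check costs nothing but a lookup.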
This is the passport
Here’s how it could work. Every restaurant listed on Google must create a business profile and verify that it actually owns the restaurant. Once that identity is verified, reviews accumulate against it. We already know how much legitimacy Google Maps reviews lend to a business: other people’s experiences with a restaurant are visible to you before you walk in. If the food is bad or the service is rude, it shows. A restaurant can’t simply delete its listing and create a new one to escape a bad review, because the listing is tied to its verified business identity.
AI agents need exactly that. Every agent operating commercially should be linked to a verified entity, through KYC for individuals or KYB for businesses. A salon agent is registered under a valid salon business license. If that agent is consistently rated as manipulative or dishonest by the agents it interacts with, those ratings persist. They follow the business, not the software. The salon can update its agent, retrain it, or swap the underlying model, but the identity remains, and so does the reputation attached to it. This closes the most obvious failure mode: an agent gets caught, is deleted, and five minutes later is replaced by an identical agent with a clean slate.
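The key design choice here is what the reputation is keyed on. A minimal sketch, assuming a hypothetical ledger keyed by the verified business identity rather than the agent instance, so redeploying the agent does not reset the history:

```python
# Sketch: ratings attach to the verified business identity, not the
# agent software, so swapping the agent cannot wipe the record.
class ReputationLedger:
    def __init__(self):
        self._ratings: dict[str, list[int]] = {}

    def rate(self, business_id: str, score: int) -> None:
        self._ratings.setdefault(business_id, []).append(score)

    def history(self, business_id: str) -> list[int]:
        # Same history regardless of which agent version served the request.
        return self._ratings.get(business_id, [])

ledger = ReputationLedger()
ledger.rate("salon-license-4711", 1)  # flagged by a counterparty agent
# The salon deploys a brand-new agent model; the key, and the record, persist.
ledger.rate("salon-license-4711", 5)
```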
For everyday interactions, a verified identity with a reputation layer is probably sufficient. Booking a haircut, scheduling an appointment, ordering supplies: the stakes are low enough that reputational consequences create adequate pressure to behave well.
But not every deal is a haircut
When agents negotiate contracts, handle procurement, or execute financial transactions, the potential gain from cheating may be large enough that bad reviews become irrelevant. A business can accept a damaged reputation if one fraudulent negotiation is worth more than the cost of lost future bookings. For these high-value situations, you need a second mechanism: economic skin in the game.
This is where proof-of-stake blockchains have something to teach us. On Ethereum (ETH), validators who want to participate in securing the network must first stake capital. If they behave honestly, they are rewarded. If they try to manipulate the system, some of their capital is automatically destroyed. This mechanism has secured billions of dollars in staked capital for years. The reason it works is simple: when you have something at stake, you behave differently than when you don’t. We call this “economic skin in the game.”
The same principle applies to agents. Before entering a high-value negotiation, the agent posts a bond. If the transaction completes cleanly, the bond is returned. If the agent is found to have used deceptive tactics, part or all of the bond is forfeited. The size of the bond is set by whoever is on the receiving end: a freelance agent may require a small deposit, while a corporate purchasing system may require something substantial. The mechanism does not need someone watching every conversation. If cheating costs money every time it is caught, and counterparties can see the history of forfeitures, the incentive to cheat quickly diminishes.
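The bond lifecycle described above (post, then return or slash) can be sketched in a few lines. This is an illustrative escrow model, not a real smart contract; the class, method names, and slash fraction are assumptions.

```python
class BondEscrow:
    """Minimal bond lifecycle sketch; names and amounts are illustrative."""

    def __init__(self):
        self.locked: dict[str, float] = {}   # agent id -> bonded amount
        self.forfeited: float = 0.0          # total capital slashed so far

    def post(self, agent_id: str, amount: float) -> None:
        # Lock the bond before the negotiation starts.
        self.locked[agent_id] = self.locked.get(agent_id, 0.0) + amount

    def settle(self, agent_id: str, deceptive: bool,
               slash_fraction: float = 1.0) -> float:
        """Release or slash the bond; returns the amount refunded."""
        bond = self.locked.pop(agent_id, 0.0)
        if deceptive:
            penalty = bond * slash_fraction
            self.forfeited += penalty
            return bond - penalty
        return bond
```

In an on-chain version, `settle` would be triggered by the dispute outcome rather than called directly, but the incentive structure is the same: deception converts locked capital into a loss.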
Enforcement can run through smart contracts. Both agents lock funds before negotiations begin, and the contract releases or slashes them based on what happens. Since the interaction is already digital, the contract does not have to reason about messy real-world outcomes: conversations, commitments, and cancellations are all recorded by both parties. Clear-cut rules, such as no-shows, false pricing, or broken refund obligations, can be enforced automatically.
These two mechanisms sit within the same passport and work together. Identity verification is the foundation; it says: this agent belongs to an entity that can be held accountable. Over time, a reputation builds on that identity as agents interact, rate each other, and accumulate a track record. Staking adds a financial layer for interactions where reputation alone is not a strong enough deterrent. Together, they create a passport that gets richer with every interaction. How many obligations has this agent fulfilled? How much capital has it put at risk? How many disputes arose, and how were they resolved? An agent that checks a passport before negotiating has something real to evaluate, not a self-written description of what another agent claims it can do.
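The track-record questions in that paragraph map naturally onto a single record. A sketch of such a composite passport, with field names that are assumptions rather than any standard:

```python
from dataclasses import dataclass

@dataclass
class PassportRecord:
    """Illustrative composite passport; field names are hypothetical."""
    entity_id: str              # verified KYC/KYB identity
    fulfilled: int = 0          # obligations completed
    disputes: int = 0           # disputes raised against this identity
    disputes_lost: int = 0      # disputes resolved against it
    capital_at_risk: float = 0.0

    def dispute_loss_rate(self) -> float:
        # A clean history reads as 0.0 rather than a division error.
        return self.disputes_lost / self.disputes if self.disputes else 0.0
```

The point is that every field is earned through interactions, so a counterparty evaluates observed history rather than self-description.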
The good news is that people are thinking about the communication layer. Google’s A2A protocol gives agents a way to discover each other and exchange messages. Anthropic’s MCP standardizes how agents connect to external tools and data. NIST launched its AI Agent Standards Initiative in February 2026 and is actively soliciting input on agent identity and security. These are necessary steps. But they govern how agents talk, not which agents to trust. Protocols tell you what an agent can do. The passport tells you what it has done, who it belongs to, and what it stands to lose.
The industry has framed agent safety as an alignment problem: how do you make sure your agent does what you want it to do? That is an internal question. The external question is harder: how do you ensure their agent cannot exploit your agent? That is not an alignment problem. It is an accountability problem. And right now, the companies building the agent layer are racing to ship capability and autonomy without building the identity and consequence systems that make autonomy safe at scale.
Every agent will need a passport. From the moment agents begin to negotiate, commit, and transact on behalf of real economic entities, identity stops being optional; it becomes core infrastructure. The only uncertainty is timing: will we build this infrastructure deliberately, or will the first failure at scale force us to bolt it on under pressure, after a collapse of confidence?