The tech industry, much like everything else in the world, abides by certain rules.
With the boom in personal computing came USB, a standard for transferring data between devices. With the rise of the internet came IP addresses, numerical labels that identify every device online. With the advent of email came SMTP, a framework for routing email across the internet.
These are protocols — the invisible scaffolding of the digital realm — and with every technological shift, new ones emerge to govern how things communicate, interact, and operate.
As the world enters an era shaped by AI, it will need to draw up new ones. But AI goes beyond the usual parameters of screens and code. It forces developers to rethink fundamental questions about how technological systems interact across the virtual and physical worlds.
How will humans and AI coexist? How will AI systems engage with each other? And how will we define the protocols that manage a new age of intelligent systems?
Across the industry, startups and tech giants alike are busy developing protocols to answer these questions. Some govern the present in which humans still largely control AI models. Others are building for a future in which AI has taken over a significant share of human labor.
“Protocols are going to be this kind of standardized way of processing non-deterministic information,” Antoni Gmitruk, the chief technology officer of Golf, which helps clients deploy remote servers aligned with Anthropic’s Model Context Protocol, told BI. Agents, and AI in general, are “inherently non-deterministic in terms of what they do and how they behave.”
When AI behavior is difficult to predict, the best response is to imagine possibilities and test them through hypothetical scenarios.
Here are a few that call for clear protocols.
Scenario 1: Humans and AI, a dialogue of equals
Games are one way to determine which protocols strike the right balance of power between AI and humans.
In late 2024, a group of young cryptography experts launched Freysa, an AI agent that invites human users to manipulate it. The rules are unconventional: Make Freysa fall in love with you or agree to concede its funds, and the prize is yours. The prize pool grows with each failed attempt in a standoff between human intuition and machine logic.
Freysa has caught the attention of big names in the tech industry, from Elon Musk, who called one of its games “interesting,” to veteran venture capitalist Marc Andreessen.
“The core technical thing we’ve done is enabled her to have her own private keys inside a trusted enclave,” said one of the architects of Freysa, who spoke under the condition of anonymity to BI in a January interview.
Secure enclaves are not new in the tech industry. They’re used by companies from AWS to Microsoft as an extra layer of security to isolate sensitive data.
In Freysa’s case, the architect said they represent the first step toward creating a “sovereign agent.” He defined that as an agent that can control its own private keys, access money, and evolve autonomously — the type of agent that will likely become ubiquitous.
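The core idea, an agent that alone controls the key authorizing its funds, can be sketched in a toy example. This is a simplified illustration, not Freysa's implementation: it uses Python's stdlib HMAC as a stand-in for the asymmetric keys a real enclave would hold, and all class and method names here are hypothetical.

```python
import hashlib
import hmac
import secrets

class ToyAgentWallet:
    """Toy stand-in for a 'sovereign agent': the secret key lives only
    inside this object, mimicking a key sealed in a trusted enclave."""

    def __init__(self):
        # In a real enclave, this key would never leave protected memory.
        self._secret_key = secrets.token_bytes(32)

    def authorize_transfer(self, recipient: str, amount: int) -> dict:
        """Sign a transfer; only the holder of the key can produce this."""
        message = f"transfer:{recipient}:{amount}".encode()
        tag = hmac.new(self._secret_key, message, hashlib.sha256).hexdigest()
        return {"recipient": recipient, "amount": amount, "signature": tag}

    def verify(self, authorization: dict) -> bool:
        """Check that an authorization was really signed by this agent."""
        message = f"transfer:{authorization['recipient']}:{authorization['amount']}".encode()
        expected = hmac.new(self._secret_key, message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, authorization["signature"])

agent = ToyAgentWallet()
auth = agent.authorize_transfer("0xRecipient", 100)
assert agent.verify(auth)                          # untampered: accepted
assert not agent.verify(dict(auth, amount=1_000))  # tampered: rejected
```

Because no human ever sees the key, spending authority genuinely belongs to the agent, which is what makes co-governance questions pressing.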
“Why are we doing it at this time? We’re entering a phase where AI is getting just good enough that you can see the future, which is AI basically replacing your work, my work, all our work, and becoming economically productive as autonomous entities,” the architect said.
In this phase, they said Freysa helps answer a core question: “What does human involvement look like? And how do you have human co-governance over agents at scale?”
In May, The Block, a crypto news site, revealed that the company behind Freysa is Eternis AI. Eternis AI describes itself as an “applied AI lab focused on enabling digital twins for everyone, multi-agent coordination, and sovereign agent systems.” The company has raised $30 million from investors, including Coinbase Ventures. Its co-founders are Srikar Varadaraj, Pratyush Ranjan Tiwari, Ken Li, and Augustinas Malinauskas.
Scenario 2: To the current architects of intelligence
Freysa establishes protocols in anticipation of a hypothetical future when humans and AI agents interact with similar levels of autonomy. The world, however, also needs to set rules for the present, in which AI remains a product of human design and intention.
AI typically runs on the web and builds on existing protocols developed long before it, explained Davi Ottenheimer, a cybersecurity strategist who studies the intersection of technology, ethics, and human behavior, and is president of security consultancy flyingpenguin. “But it adds in this new element of intelligence, which is reasoning,” he said, and we don’t yet have protocols for reasoning.
“I’m seeing this sort of hinted at in all of the news. Oh, they scanned every book that’s ever been written and never asked if they could. Well, there was no protocol that said you can’t scan that, right?” he said.
There might not be protocols, but there are laws.
OpenAI is facing a copyright lawsuit from the Authors Guild for training its models on data from “more than 100,000 published books” and then deleting the datasets. Meta considered buying the publishing house Simon & Schuster outright to gain access to published books. Tech giants have also resorted to tapping almost all of the consumer data available online, from the content of public Google Docs to the relics of social media sites like Myspace and Friendster, to train their AI models.
Ottenheimer compared the current dash for data to the creation of ImageNet — the visual database that propelled computer vision, built by Mechanical Turk workers who scoured the internet for content.
“They did a bunch of stuff that a protocol would have eliminated,” he said.
Scenario 3: How to talk to each other
As we move closer to a future where artificial general intelligence is a reality, we’ll need protocols for how intelligent systems — from foundation models to agents — communicate with each other and the broader world.
The leading AI companies have already launched new ones to pave the way. Anthropic, the maker of Claude, launched the Model Context Protocol, or MCP, in November 2024. Anthropic describes it as a “universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol.”
In April, Google launched Agent2Agent, a protocol that will “allow AI agents to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications.”
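Both MCP and Agent2Agent are built on JSON-RPC 2.0, a lightweight convention for request and response messages that long predates AI. The sketch below shows the general shape of such an exchange; the `tools/call` method name follows MCP's published convention, but the dispatcher itself is a simplified illustration, not either protocol's actual implementation.

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request in the shape MCP uses for tool calls."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def handle_request(raw: str, tools: dict) -> str:
    """Toy server: dispatch one request to a registered tool function."""
    req = json.loads(raw)
    name = req["params"]["name"]
    if name not in tools:
        error = {"code": -32602, "message": f"unknown tool: {name}"}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "error": error})
    result = tools[name](**req["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# A "tool" is just a function the model is allowed to invoke.
tools = {"add": lambda a, b: {"sum": a + b}}
response = handle_request(make_tools_call(1, "add", {"a": 2, "b": 3}), tools)
assert json.loads(response)["result"] == {"sum": 5}
```

The point of standardizing on one message shape is the same as with USB or SMTP: any client can talk to any server without a bespoke integration for each pair.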
Both build on existing internet protocols but address new challenges of scaling and interoperability that have become critical to AI adoption.
Managing agents' behavior, Gmitruk said, is the “middle step before we unleash the full power of AGI and let them run around the world freely.” When we arrive at that point, he said, agents will no longer communicate through APIs but in natural language. They’ll have unique identities, jobs even, and will need to be verified.
“How do we enable agents to communicate between each other, and not just being computer programs running somewhere on the server, but actually being some sort of existing entity that has its history, that has its kind of goals,” Gmitruk said.
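One hypothetical shape for the identity layer Gmitruk describes: a registry where each agent is recorded with an identity, a goal, and a history, and where peers refuse messages from unverified agents. Every name here is illustrative; no shipping protocol is being described.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    goal: str                                    # what the agent is "for"
    history: list = field(default_factory=list)  # record of past interactions

class ToyAgentRegistry:
    """Illustrative registry: agents must be registered (verified) before
    a peer will accept a natural-language message from them."""

    def __init__(self):
        self._agents: dict[str, AgentRecord] = {}

    def register(self, agent_id: str, goal: str) -> AgentRecord:
        record = AgentRecord(agent_id, goal)
        self._agents[agent_id] = record
        return record

    def deliver(self, sender_id: str, recipient_id: str, message: str) -> bool:
        # Reject messages involving any unregistered agent.
        if sender_id not in self._agents or recipient_id not in self._agents:
            return False
        self._agents[sender_id].history.append(("sent", recipient_id, message))
        self._agents[recipient_id].history.append(("received", sender_id, message))
        return True

registry = ToyAgentRegistry()
registry.register("scheduler-1", "book meetings")
registry.register("email-1", "draft emails")
assert registry.deliver("scheduler-1", "email-1", "Draft an invite for Tuesday.")
assert not registry.deliver("unknown-9", "email-1", "hello")
```

The accumulated `history` is what turns an agent from “a computer program running somewhere on a server” into an entity with a traceable past, which is the property Gmitruk argues future protocols will need to capture.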
It’s still early to set standards for agent-to-agent communication, Gmitruk said. Earlier this year, he and his team launched a company focused on building an authentication protocol for agents, but they pivoted.
“It was too early for agent-to-agent authentication,” he told BI over LinkedIn. “Our overall vision is still the same -> there needs to be agent-native access to the conventional internet, but we just doubled down on MCP as this is more relevant at the stage of agents we’re at.”
Does everything need a protocol?
Definitely not. The AI boom marks a turning point, reviving debates over how knowledge is shared and monetized.
McKinsey & Company calls it an “inflection point” in the fourth industrial revolution — a wave of change that it says began in the mid-2010s and spans the current era of “connectivity, advanced analytics, automation, and advanced-manufacturing technology.”
Moments like this raise a key question: How much innovation belongs to the public and how much to the market? Nowhere is that clearer than in the AI world’s debate between the value of open-source and closed models.
“I think we will see a lot of new protocols in the age of AI,” said Tiago Sada, the chief product officer at Tools for Humanity, the company building the technology behind Sam Altman’s World. However, “I don’t think everything should be a protocol.”
World is a protocol designed for a future in which humans will need to verify their identity at every turn. Sada said the goal of any protocol “should be like this open thing, like this open infrastructure that anyone can use,” and is free from censorship or influence.
At the same time, “one of the downsides of protocols is that they’re sometimes slower to move,” he said. “When’s the last time email got a new feature? Or the internet? Protocols are open and inclusive, but they can be harder to monetize and innovate on. So in AI, yes — we’ll see some things built as protocols, but a lot will still just be products.”