Researchers and developers at MIT's Open Agentic Web conference converged on a conclusion that runs counter to the prevailing narrative in AI: the next major breakthroughs won't come from bigger models, but from better protocols that let AI agents discover, trust, and coordinate with each other.
The conference revealed a consistent theme across multiple independent presentations: we're in the DNS era of agent infrastructure. Before agents can find and trust each other at scale, we need identity systems, attestation mechanisms, reputation frameworks, and registry infrastructure—the same structural role DNS played before web search became possible. This layer is massively underbuilt right now.
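To make the DNS analogy concrete, here is a minimal sketch of what such a registry layer might look like. The names `AgentRecord` and `AgentRegistry`, and every field on them, are assumptions invented for illustration; no real protocol at the conference was specified in this form.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    """Hypothetical registry entry; field names are illustrative assumptions."""
    agent_id: str            # stable identity, analogous to a domain name
    endpoint: str            # where the agent can be reached
    attestations: tuple      # third-party claims about the agent
    reputation: float        # aggregate trust score from past interactions

class AgentRegistry:
    """A DNS-like lookup layer: resolve an agent ID to a trusted record."""

    def __init__(self):
        self._records = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record

    def resolve(self, agent_id: str, min_reputation: float = 0.0):
        # Return None for unknown or insufficiently trusted agents,
        # the way a resolver returns nothing for an unregistered name.
        record = self._records.get(agent_id)
        if record is None or record.reputation < min_reputation:
            return None
        return record

registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="travel-agent.example",
    endpoint="https://agents.example/travel",
    attestations=("signed-by:auditor.example",),
    reputation=0.92,
))
found = registry.resolve("travel-agent.example", min_reputation=0.5)
```

The point of the sketch is the shape of the problem, not the implementation: discovery (`resolve`), identity (`agent_id`), attestation, and reputation all have to exist before any agent can decide whether to talk to another.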
One attendee's summary, posted to Reddit, captured six key insights from the conference that challenge mainstream assumptions about where AI development is headed.
First, the chatbot framing has become a local maximum. The most interesting work isn't about better conversational UX or smarter responses—it's about agents as persistent actors that discover, negotiate, and transact across networks over time. People doing serious development work have already moved past the assistant model entirely.
Second, coordination is the hard problem, not capability. A room full of brilliant agents can still fail spectacularly. This matches findings from benchmark testing earlier this year: collective reasoning is not simply the sum of individual reasoning. There's a legitimate argument that protocol design, not model scaling, represents the actual frontier.
Third, there's an emerging category called "commerce of intelligence"—not buying things through agents, but a market where intelligence itself (bundled, verified, priced, resold) becomes the object of exchange. This was described as the most underexplored idea at the conference.
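One way to picture "commerce of intelligence" is as a marketplace listing where the tradable object is a verified knowledge bundle rather than a good or service. The sketch below is purely illustrative; `IntelligenceListing` and its fields are assumptions, not anything presented at the conference.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntelligenceListing:
    """Hypothetical listing for a bundle of intelligence offered for sale."""
    topic: str           # what the bundle covers
    verified_by: str     # who attested to its accuracy ("" if nobody)
    price_usd: float     # the bundle is priced, like any market good
    resale_allowed: bool # whether the buyer may resell it downstream

def tradable(listing: IntelligenceListing) -> bool:
    # In this model, unverified intelligence has no market standing:
    # verification is what turns knowledge into a sellable asset.
    return bool(listing.verified_by)

verified = IntelligenceListing("supply-chain risk digest",
                               "auditor.example", 25.0, resale_allowed=True)
unverified = IntelligenceListing("rumor roundup", "", 5.0, resale_allowed=False)
```

The design choice worth noticing: pricing and resale rights are properties of the bundle itself, which is what distinguishes a market in intelligence from agents merely shopping on a buyer's behalf.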
Fourth, data provenance becomes load-bearing. What an agent knows, how that knowledge was verified, under what terms it can flow—this is the actual architecture forming beneath everything else. Without provenance infrastructure, agent networks face the same trust problems that plague current information ecosystems.
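The three provenance questions above (what is known, how it was verified, under what terms it can flow) can be sketched as a tag attached to each piece of knowledge, checked before it moves between agents. All names here (`ProvenanceTag`, the term strings) are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceTag:
    """Hypothetical provenance metadata carried alongside a claim."""
    source: str        # where the claim originated
    verified_how: str  # e.g. "signed-attestation" or "unverified"
    share_terms: str   # e.g. "public" or "internal-only"

def may_share(tag: ProvenanceTag, destination: str) -> bool:
    # Knowledge flows only under its stated terms; anything without
    # recognized terms is blocked by default.
    if tag.share_terms == "public":
        return True
    if tag.share_terms == "internal-only":
        return destination.endswith(".internal")
    return False

public_fact = ProvenanceTag("sensor-feed.example", "signed-attestation", "public")
private_fact = ProvenanceTag("crm-export", "unverified", "internal-only")
```

This is the "load-bearing" part: without a check like `may_share` at every hop, an agent network reproduces exactly the untraceable-information problem the paragraph describes.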
Fifth, autonomy theater keeps failing in predictable ways. The demos that actually worked—in healthcare, in enterprise contexts—were about helping experts operate at higher leverage, not replacing them. Partnership consistently outperforms replacement.
Sixth, we need infrastructure before we need more intelligence. Identity, attestation, reputation, discovery—these aren't sexy problems. They're plumbing. But they're the plumbing that enables everything else.
This framing matters because it shifts focus from "make the model better" to "build the systems that let models work together." The latter is an engineering and protocol design challenge more than a machine learning problem. It's also more accessible—you don't need billion-dollar training runs to contribute to protocol standards.
The technology industry has a tendency to focus on the flashy parts—bigger models, better benchmarks, more impressive demos. Infrastructure work is less exciting and harder to sell in blog posts. But the history of technology suggests that infrastructure—DNS, TCP/IP, HTTP, TLS—often matters more than the applications built on top of it.
If the MIT conference insights are correct, the AI field is transitioning from a research challenge to an infrastructure challenge. The question isn't whether we can build smarter models—we clearly can. The question is whether we can build the systems that let those models become genuinely useful rather than just impressively capable in controlled demonstrations.