Published February 2026 · 12 min read

A New Architecture for a New Web

The web was built for documents. Then APIs. Then serverless functions. But autonomous AI agents — goal-driven, long-lived, and capable of spawning sub-agents — need infrastructure that doesn't exist yet. This is the story of why, and what's being built to fill the gap.

Series: The Agentic Web · Part 1 of 6

The Web's Four Eras

Every decade, the web has undergone a structural shift that exposed the limits of the previous architecture. Understanding the pattern helps explain why the agentic transition isn't just another incremental upgrade — it's a category change.

  1. Endpoints (1990s) — Static HTML files, images, and CGI scripts. A user requests a page; a server returns bytes; the conversation ends.
  2. Services (2000s) — REST and GraphQL APIs turned machines into first-class clients. Data mutated continuously, not just on release nights.
  3. Workers (2010s) — Serverless functions, cron jobs, and RPA bots. Short-lived, event-driven compute — thousands spin up when a queue spikes and vanish a second later.
  4. Agents (2020s) — Autonomous, LLM-powered software entities that pursue goals, maintain memories, migrate between runtimes, and delegate sub-tasks to freshly spawned helpers.

At the agent stage, three thresholds appear that no prior architecture was designed to cross: self-directed discovery in milliseconds, delegated authority with instant revocation, and cryptographic proof of behaviour. Trust can no longer hinge on "I control this domain" — counterpart agents and regulators will demand code-integrity attestations and tamper-evident execution logs.
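The second threshold, delegated authority with instant revocation, is worth making concrete: in the TLS world, revocation propagates through CRLs and OCSP with delays, while agent networks need a grant to die the moment it is revoked. A minimal sketch, with all class and method names hypothetical rather than any NANDA or A2A API:

```python
import secrets
import time


class CapabilityGrant:
    """A delegated, revocable authority token (illustrative only)."""

    def __init__(self, issuer: str, subject: str, scope: str, ttl_s: float):
        self.token = secrets.token_hex(16)
        self.issuer, self.subject, self.scope = issuer, subject, scope
        self.expires_at = time.time() + ttl_s


class GrantRegistry:
    """Holds live grants; deleting a grant revokes it immediately."""

    def __init__(self):
        self._grants: dict[str, CapabilityGrant] = {}

    def issue(self, issuer: str, subject: str, scope: str,
              ttl_s: float = 60.0) -> CapabilityGrant:
        g = CapabilityGrant(issuer, subject, scope, ttl_s)
        self._grants[g.token] = g
        return g

    def revoke(self, token: str) -> None:
        # Instant revocation: the grant vanishes now, with no
        # propagation delay analogous to CRL/OCSP distribution.
        self._grants.pop(token, None)

    def check(self, token: str, scope: str) -> bool:
        g = self._grants.get(token)
        return g is not None and g.scope == scope and time.time() < g.expires_at


registry = GrantRegistry()
grant = registry.issue("orchestrator", "sub-agent-7", "read:inventory")
assert registry.check(grant.token, "read:inventory")
registry.revoke(grant.token)
assert not registry.check(grant.token, "read:inventory")
```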


Where the Current Stack Breaks

Today's web infrastructure rests on four interlocking pillars — DNS, WHOIS/RDAP, IP addressing, and Certificate Authorities — each optimised for human-initiated, request-response traffic. None were designed for the demands of autonomous agent networks.

| Dimension | Static Web Page | Cloud API / Function | LLM-Backed Agent |
| --- | --- | --- | --- |
| Who initiates? | Client fetches URL | Client calls endpoint | Agent decides & pushes |
| Lifecycle | Immutable file | Ephemeral per-request | Persistent with long-term state |
| Autonomy | Passive (level 0) | Reactive (level 1) | Proactive (levels 2–3) |
| Identity | DNS + TLS cert | DNS + API keys | DIDs + capability attestation |
| Failure mode | 404 / 5xx | Retry logic | Goal re-planning + trust revocation |

The gap is architectural. DNS can't propagate trust metadata in real time. Certificate revocation lists can't handle trillion-agent scale. WHOIS records — mostly redacted after GDPR — are an unacceptable foundation for autonomous trust negotiation. And none of these systems support capability-based discovery: finding an agent by what it does, not where it lives.
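Capability-based discovery amounts to an inverted index from capability descriptors to agent records, rather than from names to addresses. The sketch below illustrates that inversion; the record shape, DID strings, and capability names are hypothetical, not a real NANDA interface:

```python
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    agent_id: str                 # a DID-style identifier, not a hostname
    endpoint: str                 # current location; may change over the agent's life
    capabilities: set[str] = field(default_factory=set)


class CapabilityIndex:
    """Resolve agents by what they do, not where they live."""

    def __init__(self):
        self._by_capability: dict[str, set[str]] = {}
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.agent_id] = record
        for cap in record.capabilities:
            self._by_capability.setdefault(cap, set()).add(record.agent_id)

    def find(self, capability: str) -> list[AgentRecord]:
        # Constant-time lookup keyed on the capability itself;
        # DNS offers nothing comparable.
        return [self._records[a] for a in self._by_capability.get(capability, set())]


index = CapabilityIndex()
index.register(AgentRecord("did:example:inv-1", "https://a.example/agent",
                           {"inventory.lookup", "inventory.reserve"}))
matches = index.find("inventory.lookup")
```

DNS answers "where is this name?"; the index above answers "who can do this?", which is the query agents actually need to ask.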

The Dial-Up Analogy

The best precedent for the coming transition is the shift from dial-up to broadband. When the internet was first commercialised, telephone infrastructure seemed like a natural fit — it already reached most homes and businesses. But dial-up's 56 kbps ceiling, circuit-switched connections, and ~200 ms modem latency were fundamentally wrong for the always-on, packet-switched future the internet demanded.

Engineers didn't patch dial-up forever. They designed last-mile upgrades — DSL, cable, then fibre — that offered >1 Mbps downstream and sub-30 ms round-trip times while staying "always-on." The key insight: rather than trying to anticipate every future application, they moved to packet-switched, layered networks built on TCP/IP, a flexible foundation that could accommodate unforeseen uses like streaming video, real-time gaming, and eventually AI inference at the edge.

The same pattern applies today. We can bolt agent-specific DNS records and faster revocation onto the existing stack — and we will, for backward compatibility. But the architectural requirements of agent networks (millisecond discovery, cryptographic trust propagation, intent-aware routing) demand purpose-built infrastructure alongside the legacy web, just as broadband demanded purpose-built last-mile networks alongside POTS.

The Landscape: A2A, MCP, AGNTCY, and NANDA

2025 was a watershed year for agentic infrastructure. Three major open-source moves reshaped the landscape:

  • A2A → Linux Foundation (June 2025) — Google donated the Agent-to-Agent protocol to the Linux Foundation, with over 100 companies — including AWS and Cisco — now supporting the standard. A2A defines the wire protocol: JSON-RPC 2.0 semantics for how two agents exchange work once connected.
  • MCP → Agentic AI Foundation (December 2025) — Anthropic donated the Model Context Protocol to the newly formed Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation. MCP standardises how agents connect to tools and data sources.
  • AGNTCY (July 2025) — The Linux Foundation welcomed the AGNTCY project, backed by Cisco, Dell, Google Cloud, Oracle, and Red Hat, to bridge A2A agents and MCP servers in dynamic multi-agent environments.
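The JSON-RPC 2.0 framing that A2A builds on can be made concrete with a minimal envelope. The helper names, the `tasks/send` method, and the params below are illustrative placeholders, not the normative A2A schema:

```python
import json


def jsonrpc_request(method: str, params: dict, req_id: int) -> str:
    """Build a JSON-RPC 2.0 request envelope (the framing A2A builds on)."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})


def jsonrpc_result(req_id: int, result: dict) -> str:
    """Build the matching JSON-RPC 2.0 success response."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "result": result})


# Hypothetical exchange between two already-connected agents; "tasks/send"
# stands in for whatever method the A2A spec actually defines.
req = jsonrpc_request("tasks/send", {"task": "price-check", "sku": "A-42"}, 1)
envelope = json.loads(req)
assert envelope["jsonrpc"] == "2.0" and envelope["id"] == 1
```

Note what the envelope does not contain: any way to locate the counterpart agent. That is precisely the discovery gap discussed next.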

These are essential building blocks. But they solve communication — what happens after two agents are connected. None of them answer the prior question: how do agents find each other in the first place?

The city analogy. If A2A is the language agents speak on the streets, and MCP is how they use the tools in their offices, then Project NANDA builds the city itself — the address system, the trust infrastructure, the economic plumbing, and the governance framework that makes everything work at scale.

What NANDA Provides

Project NANDA (Networked AI Agents in Decentralized Architecture), led by Prof. Ramesh Raskar at the MIT Media Lab, is purpose-built to address the four choke points today's web infrastructure presents for agents: DNS, Certificate Authorities, Orchestration, and Attestation.

The project proceeds in three stages:

  • Phase 1 — Current: Foundations of the Agentic Web. The NANDA Index for agent discovery, cross-platform protocol bridges (A2A, MCP, HTTPS), and SDKs for agent onboarding.
  • Phase 2 — Next: Agentic Commerce. Knowledge pricing, edge AI integration, and economic protocols and resource markets for agent services.
  • Phase 3 — Vision: Society of Agents. Large Population Models (LPMs), privacy-preserving co-learning, and cross-silo coordination for distributed AI.

Crucially, NANDA is protocol-neutral. A NANDA-indexed agent can expose an A2A endpoint, an MCP server, a plain HTTPS API, or all three. The NANDA Adapter handles protocol translation automatically — an MCP assistant can discover and communicate with an A2A inventory agent through NANDA's universal handshake system.
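The adapter idea reduces to a dispatch layer that normalises requests across per-protocol handlers, so a caller never needs to know which protocol its target speaks. The sketch below is a loose illustration of that pattern; the class names and handler interface are assumptions, not the NANDA Adapter's actual API:

```python
from typing import Callable

# A request normalised out of any wire protocol,
# e.g. {"capability": "inventory.lookup", "args": {...}}.
Request = dict


class ProtocolBridge:
    """Hypothetical protocol-neutral dispatch, loosely in the spirit of the
    NANDA Adapter: register one handler per protocol, route by target."""

    def __init__(self):
        self._handlers: dict[str, Callable[[Request], dict]] = {}

    def register(self, protocol: str, handler: Callable[[Request], dict]) -> None:
        self._handlers[protocol] = handler

    def call(self, target_protocol: str, request: Request) -> dict:
        # The bridge selects the handler for the *target's* protocol, so an
        # MCP-side caller can reach an A2A-side agent without speaking A2A.
        handler = self._handlers[target_protocol]
        return handler(request)


bridge = ProtocolBridge()
bridge.register("a2a", lambda r: {"status": "ok", "via": "a2a", **r["args"]})
bridge.register("https", lambda r: {"status": "ok", "via": "https", **r["args"]})

resp = bridge.call("a2a", {"capability": "inventory.lookup", "args": {"sku": "A-42"}})
assert resp["via"] == "a2a" and resp["sku"] == "A-42"
```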

As Rao Surapaneni, VP at Google Cloud, noted: "We're proud to support Project NANDA's work as they utilise the Agent2Agent protocol for their advanced research on the internet of agents."

The Road Ahead

The MIT IAP 2026 course (6.S192) — "Agentic Web: Networked AI Agents and Decentralized AI (NANDA)" — is training the next generation of engineers on this infrastructure. Forbes called it "TCP/IP for AI." The comparison is apt: just as TCP/IP provided the universal transport layer that let the internet accommodate applications no one had imagined, NANDA provides the universal discovery-and-trust layer for agent applications that don't exist yet.

This series explores each layer of that architecture in depth:

  • Part 2 covers Agent Identity — AgentFacts, DIDs, and verifiable credentials
  • Part 3 covers Trust Without Borders — the Quilt architecture and federated registries
  • Part 4 covers Agent Privacy — dual-path resolution and lean indexes
  • Part 5 covers the Security Blueprint — Zero Trust Agentic Access and Agent Visibility
  • Part 6 covers Governance at Scale — multistakeholder frameworks for billions of agents

The Internet of AI Agents isn't coming. It's here. The question is whether we build its infrastructure with the same intentionality that the original internet's architects brought to TCP/IP — or patch dial-up until it breaks.
