WebMCP and Agent Readiness for Boards
A proposed browser standard called WebMCP promises to let AI agents act on websites reliably, without the guesswork of imitating human users. Most companies are not ready.
Bottom Line: WebMCP (Web Model Context Protocol) is a proposed browser standard that lets websites declare capabilities as structured tools for AI agents. Google, Microsoft, and the W3C are behind it. The specification is pre-production and will not reach enterprise stability before 2027 at the earliest. But the structural readiness it requires, clean semantic HTML, predictable form architecture, stable navigation, is already overdue for most organisations. Ninety percent of agent readiness is form hygiene. Ten percent is the new standard. Agencies will sell the ten percent as a project and ignore the ninety. Start with assessment.
What WebMCP Actually Is
WebMCP stands for Web Model Context Protocol. It is a proposed standard that allows web pages to describe their interactive capabilities in a way that AI agents can understand and act on without screen scraping or guesswork.
Today, when an AI agent visits a website, it has to interpret the page the same way a human would. It reads text, scans for buttons, infers what a form does based on visual layout. This works often enough to be impressive. It fails often enough to be unreliable.
WebMCP changes the approach. Instead of forcing agents to interpret, it lets websites declare. A contact form can announce itself as a structured tool with defined inputs, types, and expected behaviours. A booking flow can describe its steps. A search function can expose its parameters.
The mechanism is straightforward. Web pages include structured metadata, either as HTML attributes or through a JavaScript API, that describes what the page can do and how to interact with it. AI agents read this metadata before deciding how to act.
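The draft attribute and API names are still moving, so any concrete example is speculative. But the shape of the idea can be sketched: a page declares a tool with named, typed inputs, and an agent checks its intended submission against that declaration before acting. Every field name below (name, description, inputs) is illustrative, not taken from the specification:

```python
# Hypothetical sketch of a page-level tool descriptor: the kind of
# structure an agent might read before acting. Field names here are
# illustrative only; the actual WebMCP draft may differ.
contact_form_tool = {
    "name": "submit_contact_enquiry",
    "description": "Send a sales enquiry to the site owner.",
    "inputs": {
        "email": {"type": "string", "required": True},
        "message": {"type": "string", "required": True},
        "phone": {"type": "string", "required": False},
    },
}

def missing_required(tool: dict, supplied: dict) -> list[str]:
    """Return the declared required inputs the agent has not supplied."""
    return [
        field
        for field, spec in tool["inputs"].items()
        if spec["required"] and field not in supplied
    ]

# An agent holding only an email address learns, before submitting,
# that "message" is still missing.
print(missing_required(contact_form_tool, {"email": "a@b.example"}))
```

The point of the sketch is the contract, not the syntax: the agent never has to infer what the form wants, because the page has said so.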
Google Chrome is developing the implementation. Microsoft Edge has signalled support. The W3C has begun formal specification work. The underlying Model Context Protocol was originally developed by Anthropic for connecting AI systems to external tools. WebMCP extends that principle to the open web.
Why It Matters Commercially
The commercial significance of WebMCP is not the technology itself. It is what the technology reveals about where the web is heading.
Search engines already use structured data to understand web content. Schema markup, Open Graph tags, and meta descriptions help crawlers interpret pages without rendering them fully. This layer is well understood. Most marketing teams have at least attempted it.
AI visibility adds a second layer. Large language models now retrieve, summarise, and cite web content in conversational responses. Appearing in these responses requires content that is structurally clear, semantically consistent, and authoritative enough to be selected. This layer is less well understood. Most organisations are still treating it as a content problem when it is an architecture problem.
Agent readiness adds a third layer. When AI agents can not only read your site but act on it, the requirements shift again. It is no longer sufficient for a page to be understood. It must be operable. A form that looks correct to a human but lacks proper input types, stable field names, or predictable validation will fail under agent interaction.
These three layers, search visibility, AI visibility, and agent readiness, are cumulative. Each builds on the previous. Organisations that have neglected structured data will struggle with AI visibility. Those that have neglected semantic HTML will struggle with agent readiness.
The commercial risk is not that competitors will implement WebMCP first. It is that organisations will discover their foundations are too weak to support any of the three layers reliably.
The 90/10 Split
Here is where most vendor conversations will go wrong.
When WebMCP reaches production readiness, agencies and consultancies will position it as a new capability that requires a new project. They will scope implementations around the WebMCP-specific attributes: toolname, tooldescription, structured tool declarations. They will present this as the work that matters.
It is not. It is roughly ten percent of the work that matters.
The remaining ninety percent is structural HTML hygiene that should already be in place. This includes form elements with proper labels and name attributes. Input fields with correct types. Required fields that are actually marked as required. Predictable validation that does not rely on JavaScript tricks. Stable URLs that do not change between sessions. Navigation that follows consistent patterns.
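Much of that hygiene list can be checked mechanically with nothing beyond a standard library HTML parser. A minimal sketch, assuming the audit only needs to flag inputs that lack a name, an explicit type, or a programmatically associated label:

```python
from html.parser import HTMLParser

class FormHygieneAuditor(HTMLParser):
    """Flags the structural gaps listed above: inputs without a name,
    without an explicit type, or without a matching <label for=...>."""

    def __init__(self):
        super().__init__()
        self.inputs = []            # (name, type, id) for each <input>
        self.label_targets = set()  # ids referenced by <label for="...">

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input":
            self.inputs.append((a.get("name"), a.get("type"), a.get("id")))
        elif tag == "label" and a.get("for"):
            self.label_targets.add(a["for"])

    def problems(self):
        issues = []
        for name, type_, id_ in self.inputs:
            if not name:
                issues.append("input missing name attribute")
            if not type_:
                issues.append(f"input '{name}' missing type attribute")
            if not id_ or id_ not in self.label_targets:
                issues.append(f"input '{name}' has no associated label")
        return issues

auditor = FormHygieneAuditor()
auditor.feed("""
<form>
  <label for="email">Email</label>
  <input id="email" name="email" type="email">
  <input name="phone">   <!-- no type, no label: fails for agents -->
</form>
""")
for issue in auditor.problems():
    print(issue)
```

A human fills in the phone field without noticing anything wrong. The audit shows why an agent cannot.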
This is not glamorous work. It does not photograph well in a pitch deck. It is the foundation without which WebMCP declarations are decorative.
An AI agent that encounters a form with a toolname attribute but missing input labels will still fail. A booking flow that declares itself as a structured tool but changes its URL structure between steps will still break. WebMCP makes the intention legible. It does not make the underlying architecture functional.
The readiness gap is not WebMCP adoption. It is form hygiene. Most enterprise websites have forms that work for humans through a combination of visual context, muscle memory, and tolerance for ambiguity. AI agents have none of these compensating mechanisms. They need structure. Most sites do not provide it.
What Agent Readiness Actually Requires
Agent readiness is a structural property of your web presence. It cannot be bolted on through a sprint or declared through a tag.
The requirements are specific. Forms need semantic HTML with every input properly labelled, typed, and named. Labels must be associated with their inputs through the for attribute, not through visual proximity. Validation must be predictable and consistent. Error states must be machine-readable.
Navigation must be stable. URLs must be deterministic. The path from landing page to completed transaction must be traversable without relying on visual layout or session-dependent redirects.
Authentication flows must degrade gracefully. If a form requires login, that requirement must be expressed structurally, not through a modal that appears after three seconds of inactivity.
Content must be semantically marked up. Not just for search engines, which is the current standard ambition, but for agents that need to understand relationships between elements on a page. What is this form for? What happens when it is submitted? What are the required fields versus optional ones? What are the constraints on each field?
None of this is new. Every item on this list has been best practice for a decade. The difference is that human users tolerate deviation from these standards. AI agents do not.
The Timeline Problem
WebMCP is currently available in Chrome 146 behind an experimental feature flag. This means it is accessible to developers who deliberately enable it. It is not available to the general browsing population or to AI agents operating at scale.
The W3C specification work is ongoing. Browser vendors are still negotiating the details. The JavaScript API surface is still evolving. Enterprise features like authentication handling across WebMCP interactions have not been fully specified.
A realistic timeline for enterprise adoption looks roughly like this:
2026: Specification stabilises. Chrome and Edge ship initial production support. Early adopters experiment. Developer tooling begins to emerge.
2027: Production implementations appear on major platforms. Agent providers begin supporting WebMCP discovery. Enterprise CMS platforms add WebMCP configuration options.
2028 and beyond: Broad adoption across enterprise web properties. Agent interaction becomes a measurable channel. WebMCP hygiene becomes a standard audit line item.
This timeline is not a reason to ignore the topic. It is a reason to invest in the right sequence. Organisations that spend 2026 implementing WebMCP attributes on structurally unsound websites will need to redo the work. Those that spend 2026 fixing their structural foundations will be ready to add WebMCP attributes when the specification is stable.
What Your Agency Will Pitch
When WebMCP enters the mainstream conversation, agencies will propose implementation projects. The scope will typically include: auditing your site for WebMCP compatibility, adding tool declarations to key pages, configuring structured metadata for forms and interactive elements, and testing agent interactions.
This work is legitimate. It will also be premature if the underlying structure is not sound.
The question to ask is not whether your agency can implement WebMCP. It is whether your agency has assessed the structural readiness of your web presence at the HTML level. Can they tell you how many of your forms have proper label associations? Do they know which of your input fields lack type attributes? Have they tested whether your critical user journeys are navigable without visual context?
If the answer is no, the WebMCP project is cosmetic. It addresses the visible ten percent while leaving the structural ninety percent untouched.
The organisations that will benefit earliest from WebMCP are those that have already invested in semantic HTML, structured data, and accessible form architecture. For them, adding WebMCP declarations is a small incremental step. For everyone else, it is a project built on sand.
A Three Phase Decision Framework
For leadership teams evaluating agent readiness, we recommend a sequenced approach:
Phase One: Structural Assessment (Now)
Commission a technical audit of your web presence at the HTML level. This is not a content audit or an SEO audit in the traditional sense. It is a structural assessment of whether your forms, navigation, and interactive elements are machine-operable. Identify gaps in semantic markup, form hygiene, and URL stability. Prioritise remediation based on commercial impact.
Phase Two: Specification Monitoring (2026)
Track the W3C WebMCP specification as it evolves. Assign a technical owner. Do not implement against the current draft unless you are prepared to rebuild when the specification changes. Focus on understanding how agent interaction patterns affect your specific business model.
Phase Three: WebMCP Implementation (When Stable)
Once the specification reaches production stability in major browsers, implement WebMCP declarations on your highest value interactive elements. Start with forms that drive commercial outcomes: enquiry forms, booking flows, configuration tools. Expand based on measured agent interaction patterns.
This sequence matters because it prevents the most common mistake: investing in the visible standard while neglecting the invisible structure it depends on.
The Assessment Question
The simplest test of agent readiness is this: could an AI agent complete a transaction on your site without guessing?
Not could it read your content. Not could it navigate your pages. Could it fill in a form, submit it correctly, and receive a coherent response, without relying on visual layout, session state, or human intuition to bridge the gaps in your markup?
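That question can be made mechanical. A minimal sketch of an agent that builds a submission payload from markup alone, treating any field it cannot fully classify as a failure rather than a guess (the class and its behaviour are illustrative, not a real agent implementation):

```python
from html.parser import HTMLParser

class AgentFormFiller(HTMLParser):
    """Builds a submission payload from markup alone, the way an agent
    must. A field without a name, a type, and an associated label
    cannot be filled without guessing, so it raises instead."""

    def __init__(self, values):
        super().__init__()
        self.values = values   # what the agent wants to submit, keyed by label text
        self.labels = {}       # input id -> label text
        self.fields = []       # (name, type, id) for each <input>
        self._label_for = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "label":
            self._label_for = a.get("for")
        elif tag == "input":
            self.fields.append((a.get("name"), a.get("type"), a.get("id")))

    def handle_data(self, data):
        if self._label_for and data.strip():
            self.labels[self._label_for] = data.strip()

    def handle_endtag(self, tag):
        if tag == "label":
            self._label_for = None

    def payload(self):
        out = {}
        for name, type_, id_ in self.fields:
            label = self.labels.get(id_)
            if not (name and type_ and label):
                raise ValueError(f"cannot fill field {name!r} without guessing")
            out[name] = self.values.get(label, "")
        return out

filler = AgentFormFiller({"Email": "a@b.example"})
filler.feed('<label for="email">Email</label>'
            '<input id="email" name="email" type="email">')
print(filler.payload())
```

Run the same class against a form with an unlabelled, untyped input and it raises immediately. That is the honest version of the assessment question: pass or fail, with no human intuition to bridge the gap.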
For most enterprise websites, the honest answer is no. Forms work for humans because humans compensate for structural deficiency. They read labels that are visually but not programmatically associated. They infer input types from context. They tolerate validation that appears after submission rather than inline.
AI agents cannot compensate. They operate on structure. Where structure is absent, they fail. WebMCP will eventually provide a richer layer of structure for agents to operate on. But it will not fix what is broken underneath.
The organisations that take this seriously now will spend less later. The ones that wait for their agency to pitch WebMCP as a project will spend more and get less.
Where This Connects
Agent readiness is an extension of the same assessment methodology we apply through the Marketing MRI. The question is the same: is the system designed to produce the outcomes you need, or is it designed to produce the appearance of activity?
The visibility stack, search, AI, agents, is a system. Each layer depends on the structural integrity of the layer beneath it. A Marketing MRI examines that integrity at the decision system level. Agent readiness extends the examination to the technical implementation layer.
If you are evaluating whether your web presence is structurally ready for what comes next, that is where we would start.
Assessment before action. Always.