How to Make Your Web App Agent Native
Agent Native Is a Product Decision
The web has a new class of user: AI agents.
They discover content, reason over it, and take actions for people and software workflows.
If agents are first class users of your product, your app needs to be understandable and operable for them too.
Agent native apps are built in layers. Discovery, content negotiation, action semantics, safety boundaries, and tests all have to line up.
There are competing standards and conventions right now. This is my best effort to cover the major ones that matter today, with concrete examples you can implement.
The 7-Layer Blueprint
Think of agent support as progressive enhancement:
I think of this as a 7-layer burrito, but for agent-native websites.
- Static discovery (llms.txt, agents.json)
- Head-level hints (<link rel="alternate">, capability meta)
- Declarative form semantics (tool attributes)
- Imperative tool registration (WebMCP JS API)
- Content negotiation (serve markdown when requested)
- SEO + structure (robots, sitemap, JSON-LD)
- Contract tests (prevent drift)
Each layer helps on its own. Together they make your app predictable for agents and robust for humans.
Layer 1: Publish Machine-Readable Discovery Files
Start with two static files in public/:
- /.well-known/agents.json
- /llms.txt
For agents.json, follow the Wildcard AI spec:
Keep chains task oriented and explicit. Avoid vague tools like "handle_user_request". Define narrow actions with clear parameter contracts.
{
"apiVersion": "0.1.0",
"name": "City Services Portal",
"description": "Residents report non-emergency city issues and track resolution status.",
"baseUrl": "https://api.example.com",
"chains": {
"report_streetlight_outage": {
"description": "Create a new outage report for a streetlight with optional photo evidence.",
"steps": [
{
"endpoint": "/issues/streetlight-outages",
"method": "POST",
"content_type": "multipart/form-data",
"parameters": {
"address_id": { "type": "string", "required": true },
"description": { "type": "string", "required": true },
"photo": { "type": "file", "required": false },
"contact_email": {
"type": "string",
"required": false,
"format": "email"
}
}
}
]
}
}
}
For llms.txt, keep it concise and operational. Think "runbook for an agent," not marketing copy:
- what your app does
- key public pages
- action endpoints
- auth requirements
- payload shape constraints
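Putting those pieces together, a minimal sketch of llms.txt for the example portal above might look like this, following the common llms.txt convention of an H1 title, a one-line blockquote summary, and H2 sections of links (all URLs and limits here are illustrative placeholders):

```text
# City Services Portal

> Residents report non-emergency city issues and track resolution status.

## Key pages
- [Report an issue](https://example.com/report): submit a new outage report
- [Check status](https://example.com/status): look up an issue by reference ID

## Actions
- POST https://api.example.com/issues/streetlight-outages
  (multipart/form-data; photo optional, JPG or PNG, max 10 MB)

## Auth
- Public pages require no auth; action endpoints rate-limit by IP.
```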
Layer 2: Add Agent Hints in <head>
Many agents parse HTML first; others, such as Claude Code, request markdown first when it is available. In both cases, give them immediate pointers.
<link rel="alternate" type="text/plain" href="/llms.txt" title="LLM-readable information" />
<link rel="alternate" type="application/json" href="/.well-known/agents.json" title="Agent actions" />
<meta
name="agent-capabilities"
content="Report streetlight outages, check issue status, and retrieve incident reference IDs. See /.well-known/agents.json and /llms.txt."
/>
This is simple and high leverage.
Layer 3: Use Declarative Tool Semantics on Forms
If a form maps to an agent action, annotate it.
<!-- progressive enhancement attributes -->
<form
  toolname="report_streetlight_outage"
  tooldescription="Report a streetlight outage with optional photo evidence"
>
<input
type="file"
name="photo"
toolparamtitle="Outage photo"
toolparamdescription="Optional JPG or PNG, max 10 MB"
/>
</form>
Even where these attributes are ignored today, they are harmless and future-friendly.
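You can also consume these attributes yourself, for example in an audit script that turns annotated forms into tool descriptors. A minimal sketch, assuming the attribute names from the form above; the descriptor shape is modeled on MCP tool schemas, not a published mapping:

```typescript
// Sketch: map declarative form annotations to an MCP-style tool descriptor.
interface AnnotatedField {
  name: string;
  type: string; // input type, e.g. "text" or "file"
  toolparamtitle?: string;
  toolparamdescription?: string;
  required?: boolean;
}

interface AnnotatedForm {
  toolname: string;
  tooldescription: string;
  fields: AnnotatedField[];
}

function toToolDescriptor(form: AnnotatedForm) {
  const properties: Record<string, object> = {};
  const required: string[] = [];
  for (const field of form.fields) {
    properties[field.name] = {
      // Files cannot travel through JSON Schema directly; describe them as
      // strings (e.g. a reference) and enforce size limits server-side.
      type: "string",
      title: field.toolparamtitle ?? field.name,
      description: field.toolparamdescription ?? "",
    };
    if (field.required) required.push(field.name);
  }
  return {
    name: form.toolname,
    description: form.tooldescription,
    inputSchema: { type: "object", properties, required },
  };
}
```

The same mapping can feed the imperative WebMCP registration in the next layer, so the form markup stays the single source of truth.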
Layer 4: Register Imperative Tools with WebMCP
For richer integrations, use the WebMCP proposal APIs directly:
Minimal pattern:
useEffect(() => {
  const mc = (navigator as any).modelContext;
  if (!mc) return;

  mc.registerTool({
    name: 'report_streetlight_outage',
    description: 'Report a streetlight outage from this page',
    inputSchema: { type: 'object', properties: {}, required: [] },
    execute: async (_params, agent) => {
      // Keep a human in the loop: route the click through a real
      // user interaction before anything is submitted.
      const confirmed = await agent.requestUserInteraction(async () => {
        document.getElementById('outage-photo-input')?.click();
        return true;
      });
      if (!confirmed) {
        return { content: [{ type: 'text', text: 'Cancelled by user.' }] };
      }
      const form = document.getElementById(
        'streetlight-outage-form',
      ) as HTMLFormElement | null;
      form?.requestSubmit();
      return { content: [{ type: 'text', text: 'Outage report submitted.' }] };
    },
  });

  return () => mc.unregisterTool('report_streetlight_outage');
}, []);
Important: keep human confirmation for sensitive operations. Never auto-submit file attachments without explicit user interaction.
Layer 5: Serve Markdown to Agents
Do not force every agent to parse your full HTML app shell.
Two good options:
- Build your own markdown route and rewrite when Accept: text/markdown is present.
- Use the Vercel reference approach: markdown-to-agents (Vercel Labs)
Simple rewrite example in Next.js:
async rewrites() {
return {
beforeFiles: [
{
source: '/:path*',
destination: '/markdown/:path*',
has: [{ type: 'header', key: 'accept', value: '(.*)text/markdown(.*)' }]
}
]
};
}
This keeps one source of truth while giving agents a low token, high signal representation.
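If you are not on Next.js, the same negotiation is a small header check in whatever server you run. A framework-agnostic sketch (the function name is mine; it matches the same signal as the rewrite above, and a production version would honor Accept q-values):

```typescript
// Sketch: decide whether a request prefers markdown, based on the
// Accept header. Mirrors the (.*)text/markdown(.*) matcher above.
function wantsMarkdown(acceptHeader: string | undefined): boolean {
  return (acceptHeader ?? "").toLowerCase().includes("text/markdown");
}

// Hypothetical usage inside a request handler:
//   if (wantsMarkdown(req.headers["accept"])) {
//     return serveMarkdownVersion(req.url);
//   }
```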
Layer 6: Keep SEO and Agent Discovery Aligned
Agent native does not replace SEO basics. It builds on them.
- Generate robots.txt
- Generate sitemap.xml with static and dynamic routes
- Set canonical base URL
- Add JSON-LD for org/site/page entities
When your SEO map and agent map disagree, both humans and models get worse results.
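As a sketch, a JSON-LD block for the example portal might look like this (the organization type and URLs are placeholders; adjust the schema.org types to your entity):

```json
{
  "@context": "https://schema.org",
  "@type": "WebSite",
  "name": "City Services Portal",
  "url": "https://example.com",
  "publisher": {
    "@type": "GovernmentOrganization",
    "name": "Example City",
    "url": "https://example.com"
  }
}
```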
Layer 7: Add Contract Tests So Docs Do Not Drift
This is where most teams drift. They ship discovery files once, then forget to update them when limits and fields change.
Add tests that validate:
- llms.txt has expected sections and valid URLs
- agents.json parses and includes required fields
- each chain step has endpoint + method
- sitemap generation includes expected dynamic pages
If your upload limit changes from 20 MB to 25 MB, your tests should force updates across UI, llms.txt, and agents.json.
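A minimal sketch of such a contract test, validating the agents.json shape from Layer 1 (the validation logic is mine; wire it into your test runner of choice and fail the build on any error):

```typescript
// Sketch: contract test for agents.json. Fails if a chain step loses its
// endpoint or method, so discovery docs cannot drift silently.
interface ChainStep {
  endpoint?: string;
  method?: string;
}

interface AgentsJson {
  apiVersion?: string;
  name?: string;
  chains?: Record<string, { description?: string; steps?: ChainStep[] }>;
}

function validateAgentsJson(doc: AgentsJson): string[] {
  const errors: string[] = [];
  if (!doc.apiVersion) errors.push("missing apiVersion");
  if (!doc.name) errors.push("missing name");
  for (const [chainName, chain] of Object.entries(doc.chains ?? {})) {
    const steps = chain.steps ?? [];
    if (steps.length === 0) errors.push(`${chainName}: no steps`);
    for (let i = 0; i < steps.length; i++) {
      if (!steps[i].endpoint) errors.push(`${chainName} step ${i}: missing endpoint`);
      if (!steps[i].method) errors.push(`${chainName} step ${i}: missing method`);
    }
  }
  return errors;
}
```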
Rollout Checklist
If you want to implement this in a week, this is the order I recommend:
- Add llms.txt and agents.json
- Add <head> discovery links and capability meta
- Add markdown content negotiation for high-value pages
- Add declarative form attributes
- Add WebMCP imperative tool registration where it improves UX
- Add robots/sitemap/schema alignment checks
- Add contract tests for all discovery surfaces
Companion Tool: Agent Native Auditor
If you want a fast way to debug this end to end, I built an open source companion app.
It checks the same practical signals covered in this post:
- /.well-known/agents.json contract shape and chain quality
- /llms.txt availability and structure
- Accept: text/markdown behavior
- robots.txt and sitemap.xml crawlability
- head discovery hints and JSON-LD coverage
- WebMCP surface signals
Use it as a repeatable loop: implement one layer, run audit, close gaps, rerun, and track score changes over time.
Common Mistakes
- One giant generic tool: split into specific chains.
- No user interaction boundary: keep consent in file upload and submit flows.
- HTML-only responses: provide markdown for agent consumption.
- No tests: discovery docs drift silently.
- Treating this as SEO-only: agent-native behavior is product and platform work.
Final Thought
The AI web will not be won by whoever has the fanciest model integration. It will be won by teams whose apps are easiest for agents to discover, understand, and operate safely.
You do not need to build everything at once. Start with static discovery plus markdown negotiation, then layer in tool semantics and execution APIs.
Build for both users: humans and agents.