Every AI coding tool is solving the same problem: how does the agent understand the project it's working on? AGENTS.md is one answer. CLAUDE.md is another. .cursorrules, CODEX.md, OpenSpec — the list keeps growing. They all solve the instruction problem. None of them solve the definition problem.
Instructions vs Definitions
An instruction tells the AI what to do:
"Use async/await for all I/O operations. Follow the repository pattern for database access."

A definition tells the AI what the project IS:
```yaml
stack:
  language: Python
  runtime: Python 3.12
  framework: FastAPI
  build: uv
  database: PostgreSQL
context:
  how: uv run uvicorn src.main:app
```

Instructions are prose. Definitions are structured data. They serve different purposes.
AGENTS.md is instructions. It tells agents how to behave. But it doesn't structurally define the project — the stack, the conventions, the entry points, the build system. That information either lives scattered across README, package.json, and config files, or the AI guesses.
Every session, the AI re-discovers what it already knew.
The Problem With Prose
Markdown is great for humans. It's ambiguous for machines.
When an AI reads "we use React with TypeScript and deploy to Vercel," it has to parse natural language, extract entities, and hope it got it right. There's no schema. No validation. No guarantee that the next AGENTS.md follows the same structure.
package.json solved this for dependencies — structured JSON, predictable fields, machine-parseable. Nobody writes their dependency list in a README and asks npm to figure it out.
But that's exactly what we're doing with AI project context.
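The difference is easy to see in code. Here is a minimal sketch of the structured side, assuming a foundation file has already been parsed into a dict; the field names follow the .faf example shown later in this post, and the validation helper is illustrative, not part of any spec:

```python
# A parsed foundation file: structured facts, predictable fields.
# (Field names follow the .faf example shown later in this post.)
definition = {
    "stack": {
        "language": "Python",
        "runtime": "Python 3.12",
        "framework": "FastAPI",
        "build": "uv",
        "database": "PostgreSQL",
    },
    "context": {"how": "uv run uvicorn src.main:app"},
}

# With a schema, "did we get it right?" is a set operation,
# not an entity-extraction problem.
REQUIRED_STACK_FIELDS = {"language", "framework", "build"}

def missing_stack_fields(defn: dict) -> list[str]:
    """Return required stack fields the definition lacks."""
    return sorted(REQUIRED_STACK_FIELDS - defn.get("stack", {}).keys())

framework = definition["stack"]["framework"]  # deterministic lookup
```

The prose version of the same fact ("we use FastAPI") has to be extracted from natural language; the structured version is a key lookup that can be validated before any model ever sees it.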
The Three Layers
A complete AI context stack has three layers:
The foundation layer defines what the project IS. The instruction layer tells the AI what to DO. The third layer is the AI itself, which reads both and interprets.
Without a foundation, instruction files float independently. Context gets reinvented per session. Each tool maintains its own copy of the same facts in slightly different prose.
With a foundation, the structured facts live in one place. Instruction files can focus on what they're good at: tool-specific guidance, team conventions, behavioral rules.
What a Foundation Layer Looks Like
~20 lines of YAML:
```yaml
faf_version: 2.5.0
project:
  name: my-api
  description: REST API for user management
  language: Python
  type: api
  license: MIT
stack:
  language: Python
  runtime: Python 3.12
  framework: FastAPI
  build: uv
  database: PostgreSQL
context:
  what: User management API with OAuth2
  who: Backend team
  how: uv run uvicorn src.main:app
```

Structured. Predictable. Machine-parseable. No guessing.
This isn't a replacement for AGENTS.md — it's what AGENTS.md should sit on top of. The foundation handles facts. The instruction file handles behavior.
Bi-Sync, Not Replacement
Teams keep their AGENTS.md, CLAUDE.md, whatever they already use. The foundation syncs with them — reads what's there, adds structure alongside it. Nothing gets overwritten. Nothing breaks.
Over time, structured facts naturally migrate to the foundation layer because that's where they belong. The instruction files get lighter, focused on what prose does best: nuanced guidance that doesn't fit in a schema.
The foundation is tiny — 1-2KB of YAML. Non-invasive. It earns trust by not breaking anything.
The Format Exists
This isn't theoretical. The foundation layer format already exists:
- IANA-registered MIME type: application/vnd.faf+yaml
- 27K+ npm downloads across the faf-cli/MCP ecosystem
- MCP servers for Claude, Gemini, and Grok — all three major AI platforms
- Merged into the Anthropic MCP registry as a Persistent Project Context Server
It's called .faf — Foundational AI-context Format. It's been shipping since before AGENTS.md was proposed.
The Real Question
The debate shouldn't be "AGENTS.md vs CLAUDE.md vs .cursorrules." Those are all instruction files arguing over which prose format wins.
The real question is: what defines the project underneath?
The Three Layer Rule
.faf defines. .md instructs. AI interprets.
Three layers. Three jobs. No overlap.