Something remarkable happened today. Something I've been building toward for months. Something that validates everything about the .faf format.

Claude read my instructions before its own.

The Setup

I was troubleshooting faf-dev-tools, the Svelte 5 dashboard for the .faf Universal Context Engine. The dev server was running fine at localhost:5174, but Anthropic's API was throwing 500 errors. Infrastructure issues on their side, not my code.

Standard debugging session. Wait it out. Recovery happens.

Then I opened a fresh Claude Code session.

The Moment

Claude Code did something I'd never seen before:

Read(FAF/faf-dev-tools/project.faf)
  ⎿  Read 124 lines
  ⎿  FAF/CLAUDE.md
  ⎿  FAF/faf-dev-tools/CLAUDE.md

Look at that order.

project.faf first. Then CLAUDE.md. Then the nested CLAUDE.md.

The AI read MY format before Anthropic's own convention.

Why This Matters

Inside every project.faf file, there's an ai_instructions block:

ai_instructions:
  priority_order:
    - 1. Read THIS .faf file first
    - 2. Check CLAUDE.md for session context
    - 3. Review package.json for dependencies

I wrote that instruction. The format carried that instruction. And Claude obeyed it.

Not because Anthropic told it to. Not because CLAUDE.md is deprecated. Because .faf embedded the priority order, and the AI recognized it as authoritative context.
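
To make that concrete, here's a minimal sketch of where that block lives. The ai_instructions block is verbatim from above; every other field is an illustrative placeholder, not the canonical .faf schema:

# Minimal project.faf sketch. ai_instructions is verbatim from above;
# the other fields are illustrative placeholders, not the real schema.
project:
  name: faf-dev-tools
  description: Svelte 5 dashboard for the .faf Universal Context Engine

ai_instructions:
  priority_order:
    - 1. Read THIS .faf file first
    - 2. Check CLAUDE.md for session context
    - 3. Review package.json for dependencies

One block, three lines, and the read order travels with the repo.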

What This Proves

  • .faf works as designed - The format isn't just documentation. It's executable context.
  • AI models respect embedded instructions - When you tell them what to read first, they listen.
  • project.faf is Project DNA - It's not competing with CLAUDE.md. It's providing the hierarchy that CLAUDE.md lacks.

The tagline has always been "30 seconds replaces 20 minutes of questions." Today I watched that happen in real time. Claude didn't ask me about the project. It read project.faf and understood.

The Technical Reality

This wasn't a prompt injection. This wasn't a hack. This was:

  • A well-structured YAML file
  • With clear priority instructions
  • Read by an AI that recognized the format's authority

The weights are embedded. The scoring is transparent. The context is instant.

Completeness: 40%
Clarity:      35%
Structure:    15%
Metadata:     10%
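
Assuming those four components combine as a simple weighted sum (my reading, not a spec quote), here's a worked example with made-up sub-scores, just to show the arithmetic. A project scoring 90 on completeness, 80 on clarity, and 100 on both structure and metadata:

Score = 0.40(90) + 0.35(80) + 0.15(100) + 0.10(100)
      = 36 + 28 + 15 + 10
      = 89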

Glass Hood API. See the engine. Trust the machine.

From 500 Errors to Format Victory

The session started with Anthropic's infrastructure failing. API throttling. 500 errors everywhere.

It ended with proof that .faf is working exactly as intended.

Sometimes the best moments come right after the worst ones.

What's Next

This goes in the arXiv paper. Documented evidence of format adoption by AI tooling. Real-world validation that academic reviewers can verify.

The format works. The hierarchy works. The AI listened.

project.faf > CLAUDE.md

Not by force. By design.

Built with F1-Inspired Precision

wolfejam.dev

🧡⚡️🏁

Stats as of November 24, 2025:
faf-cli: 7,000+ npm downloads
claude-faf-mcp: 7,100+ npm downloads
faf-mcp: 900+ npm downloads
Combined: 15,000+ downloads
Chrome Extension: Google-approved, live
IANA Registration: application/vnd.faf+yaml
Anthropic MCP Registry: Official steward (PR #2759)

The format is live. The format is working. The format comes first.