โ† Back to Home

Quick Start Guide

5-Minute Setup

1. Install

npm install -g claude-faf-mcp

2. Configure Claude Desktop

Add to your configuration file:

{
  "mcpServers": {
    "faf": {
      "command": "npx",
      "args": ["claude-faf-mcp"]
    }
  }
}

Configuration file locations:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

3. Restart Claude Desktop

Close and reopen Claude Desktop to load the MCP server.

4. Test Your Setup

In Claude Desktop, try:

faf_score()

You should see a three-line score display. If you get an error, check your configuration.

Your First FAF Session

Step 1: Check Your Score

faf_score()

Example output:

📊 FAF SCORE: 55%
🚀 Getting Started
🏁 AI-Ready: Building

Step 2: See What's Missing

faf_score(details=true)

This shows exactly which files would improve your score.

Step 3: Explore Your Project

faf_list()

View your project structure with smart file type detection.

Step 4: Add Missing Files

If you're missing a .faf file:

faf_write(".faf", "Project context information here...")

Step 5: Check Improved Score

faf_score()

Watch your score increase with each improvement!

Essential Commands

Command                    Purpose                  Example
faf_score()                Check AI-readiness       faf_score()
faf_detect()               Identify project type    faf_detect()
faf_list()                 View files               faf_list()
faf_read(file)             Read a file              faf_read("README.md")
faf_write(file, content)   Write a file             faf_write(".faf", "...")

Understanding Scores

  • 0-60%: Getting Started - Add .faf and CLAUDE.md files
  • 61-80%: Good - Project structure recognized
  • 81-90%: Very Good - Well-documented project
  • 91-99%: Excellent - Optimal AI collaboration setup
  • 100%: Perfect - Granted by Claude for exceptional collaboration
  • 105%: Legendary - Easter egg for exceptional documentation
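
The bands above amount to a simple threshold lookup. The sketch below is purely illustrative; the function name `ratingFor` and the exact label strings are assumptions, not part of the MCP server's API:

```typescript
// Sketch only: maps a FAF score to the rating bands listed above.
// ratingFor and its labels are illustrative, not the server's API.
function ratingFor(score: number): string {
  if (score >= 105) return "Legendary";   // Easter-egg tier
  if (score === 100) return "Perfect";    // granted by Claude, not computed
  if (score >= 91) return "Excellent";
  if (score >= 81) return "Very Good";
  if (score >= 61) return "Good";
  return "Getting Started";
}
```

For example, the 55% score in the sample output above falls in the Getting Started band.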

Tips for Success

  1. Start simple: Just get a score first
  2. Add incrementally: Improve one file at a time
  3. Follow suggestions: Each command suggests what to do next
  4. Check changes: Score updates instantly with file changes

Common Issues

"Command not found"

You're using a CLI-dependent command. Stick to:

  • faf_score
  • faf_detect
  • faf_list
  • faf_read
  • faf_write

Score not changing

Make sure files are saved. The tool reads from disk, not memory.

Can't see my files

Check you're in the right directory. Use faf_list() to confirm location.

Next Steps

  1. Achieve 70%+ score for good AI collaboration
  2. Add project-specific instructions to CLAUDE.md
  3. Explore the User Guide for advanced features
  4. Check FAQ for common questions

Ready to achieve championship-level AI collaboration? Start with faf_score() and follow the journey!

Frequently Asked Questions

General

What is FAF MCP Server?

A Model Context Protocol server that enhances Claude Desktop with project intelligence capabilities. It analyzes your project structure and provides an AI-readiness score without requiring external tools.

Do I need the FAF CLI installed?

No. As of v3.0.5, the MCP server is 100% standalone with all 50 tools operational. Zero CLI dependencies required.

What's the difference between FAF and FAF MCP?

  • FAF CLI: Command-line tool for project context management
  • FAF MCP Server: Claude Desktop integration providing native FAF features

Installation

How do I install the MCP server?

npm install -g claude-faf-mcp

Then add the configuration to Claude Desktop's settings.

Where do I find Claude Desktop configuration?

On macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%\Claude\claude_desktop_config.json

Can I use this with other AI tools?

Currently designed specifically for Claude Desktop. Other MCP-compatible tools may work but are untested.

Scoring

How is the FAF Score calculated?

Component      Points   Requirement
.faf file      40       Project context file
CLAUDE.md      30       AI instructions
README.md      15       Documentation
Project file   14       package.json, etc.
Maximum        99       Technical limit
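
Because the components are additive, the calculation can be sketched as a plain sum. The `fafScore` function and `ProjectFiles` shape below are illustrative assumptions, not the server's internals, which may also weigh file quality rather than mere presence:

```typescript
// A sketch of the additive scoring model in the table above.
interface ProjectFiles {
  faf: boolean;          // .faf project context file
  claudeMd: boolean;     // CLAUDE.md AI instructions
  readme: boolean;       // README.md documentation
  projectFile: boolean;  // package.json, requirements.txt, etc.
}

function fafScore(files: ProjectFiles): number {
  return (
    (files.faf ? 40 : 0) +
    (files.claudeMd ? 30 : 0) +
    (files.readme ? 15 : 0) +
    (files.projectFile ? 14 : 0)
  ); // sums to at most 99; the final 1% is granted, not computed
}
```

A project with only .faf and CLAUDE.md, for instance, sums to 70 points.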

Why is my score capped at 99%?

The final 1% represents perfect human-AI collaboration, which only Claude can grant based on actual interaction quality.

What is the 105% "Big Orange" score?

An Easter egg that triggers when you have exceptionally rich documentation. Requirements:

  • .faf file with 500+ characters and sections
  • CLAUDE.md with 500+ characters and sections
  • README.md present
  • All files well-structured

How often should I check my score?

Check after significant changes to documentation or project structure. The score updates instantly based on current files.

Features

What commands are available?

All 50 MCP tools work natively (no CLI required):

  • Core Tools: faf_score, faf_detect, faf_list, faf_read, faf_write
  • Advanced Tools: faf_init, faf_enhance, faf_quick, faf_sync, faf_trust
  • Utilities: faf_debug, faf_status, faf_clear, faf_migrate, faf_formats
  • Plus: 35+ additional specialized tools

As of v3.0.5, all features are bundled and operational standalone.

What is bi-directional sync?

A feature that will keep .faf and CLAUDE.md files synchronized automatically. It is currently in development and being redesigned.

Troubleshooting

My score isn't updating

  1. Ensure files are saved to disk
  2. Check you're in the correct directory
  3. Run faf_detect() to refresh context

Project type not detected

Ensure you have one of these files in your project root:

  • package.json (Node.js)
  • requirements.txt (Python)
  • Cargo.toml (Rust)
  • go.mod (Go)
  • pom.xml (Java)
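
Detection by marker file can be pictured as a lookup over the directory contents. The `MARKERS` map and `detectProjectType` function below are hypothetical names for illustration; a real implementation would read the listing from disk (e.g. with fs.readdirSync):

```typescript
// Hypothetical sketch of marker-file detection, following the list above.
const MARKERS: Record<string, string> = {
  "package.json": "Node.js",
  "requirements.txt": "Python",
  "Cargo.toml": "Rust",
  "go.mod": "Go",
  "pom.xml": "Java",
};

function detectProjectType(rootFiles: string[]): string | null {
  for (const [marker, type] of Object.entries(MARKERS)) {
    if (rootFiles.includes(marker)) return type;
  }
  return null; // no marker found: the project type cannot be classified
}
```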

Commands are slow

Normal response times:

  • faf_score: ~50ms
  • faf_detect: ~200ms
  • faf_list: ~30ms

If slower, check system resources or file system permissions.

Error: "Cannot read directory"

Check:

  1. Directory exists
  2. You have read permissions
  3. Path doesn't contain special characters
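
The first two checks can be automated with a small pre-flight probe. `canReadDirectory` below is a hypothetical helper for illustration, not part of the server:

```typescript
import { accessSync, constants, statSync } from "node:fs";

// Hypothetical pre-flight check mirroring the checklist above.
function canReadDirectory(path: string): boolean {
  try {
    // 1. the path exists and is actually a directory
    if (!statSync(path).isDirectory()) return false;
    // 2. the current user holds read permission on it
    accessSync(path, constants.R_OK);
    return true;
  } catch {
    return false; // missing path, wrong type, or permission denied
  }
}
```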

Project Structure

What should be in .faf?

Project context including:

  • Project name and description
  • Main technologies
  • Key features
  • Development guidelines

What should be in CLAUDE.md?

AI-specific instructions:

  • Coding preferences
  • Project conventions
  • Areas to focus on
  • Things to avoid

Do I need both .faf and CLAUDE.md?

For maximum score, yes. But the system works with either:

  • .faf alone: 40 points
  • CLAUDE.md alone: 30 points
  • Both: 70 points

Privacy & Security

What data is sent externally?

None. All operations are local to your machine. The MCP server only accesses files you explicitly reference.

Can it access files outside my project?

The server can only access files you specifically request through commands. It has no automatic scanning or uploading capabilities.

Is my code safe?

Yes. The MCP server:

  • Runs locally on your machine
  • Doesn't send data externally
  • Only reads files you explicitly access
  • Has no network capabilities

Updates

How do I update the MCP server?

npm update -g claude-faf-mcp

Will updates break my setup?

We follow semantic versioning. Minor updates are backward compatible. Check release notes for breaking changes in major versions.

How do I know which version I have?

Run faf_debug() in Claude Desktop to see version information.

Support

Where can I report issues?

GitHub Issues: https://github.com/Wolfe-Jam/claude-faf-mcp/issues

How can I contribute?

Contributions welcome! See CONTRIBUTING.md in the repository.

Is there a roadmap?

Check the repository's project board for planned features and progress.


For additional help, see the User Guide or visit our GitHub repository.

FAF MCP Server User Guide

Overview

The FAF MCP Server enhances Claude Desktop with intelligent project context management. It provides native tools for scoring, exploring, and improving your project's AI-readiness, all without external dependencies.

Getting Started

Installation

npm install -g claude-faf-mcp

Configuration

Add to your Claude Desktop configuration:

{
  "mcpServers": {
    "faf": {
      "command": "npx",
      "args": ["claude-faf-mcp"]
    }
  }
}

Core Features

Project Scoring

The FAF Score evaluates your project's AI-collaboration readiness:

faf_score()

Returns a three-line display showing your current score, rating, and AI-readiness status. Scores range from 0-99%, with only Claude able to grant the perfect 100%.

Project Detection

Automatically identifies your project type and structure:

faf_detect()

Analyzes your project files and returns the detected stack, frameworks, and confidence level.

Directory Exploration

View your project structure:

faf_list()

Shows files and directories with smart icons indicating file types and purposes.

File Operations

Read and write files directly:

faf_read(filepath)
faf_write(filepath, content)

Native file system operations for examining and modifying project files.

Understanding Your Score

Your FAF Score consists of four components:

  • .faf file (40 points): Project context configuration
  • CLAUDE.md (30 points): AI-specific instructions
  • README.md (15 points): General documentation
  • Project file (14 points): package.json, requirements.txt, etc.

The maximum technical score is 99%. The final 1% can only be granted by Claude based on collaboration quality.

Frequently Asked Questions

What is the 105% Easter Egg?

When your project has rich .faf and CLAUDE.md files (500+ characters with sections) plus a README, you may achieve legendary "Big Orange" status at 105%.

Why do some commands require the CLI?

Features like faf_enhance and faf_sync currently depend on the FAF CLI. We're working on native implementations for future releases.

How does context switching work?

The MCP server maintains separate contexts for multiple projects. When you work with different directories, it automatically switches between saved contexts.

What's the difference between .faf and CLAUDE.md?

  • .faf: Contains project structure and context for any AI tool
  • CLAUDE.md: Specific instructions and preferences for Claude

Can I use this without the FAF CLI?

Yes. Core features (score, detect, list, read, write) work natively without any CLI dependencies.

Working Features (No CLI Required)

  • faf_score - Calculate AI-readiness score
  • faf_detect - Identify project type
  • faf_list - Explore directories
  • faf_read - Read files
  • faf_write - Write files
  • faf_debug - System diagnostics

Features in Development

The following features currently require the FAF CLI and are being migrated to native implementations:

  • faf_init - Initialize FAF context
  • faf_enhance - AI-powered improvements
  • faf_sync - Synchronize context
  • faf_trust - Validation metrics
  • faf_status - Project status
  • faf_clear - Reset context

Best Practices

  1. Start with detection: Let FAF understand your project first
  2. Check your score: See where you stand before making changes
  3. Follow suggestions: Each command suggests logical next steps
  4. Build gradually: Improve your score incrementally

Troubleshooting

"Command not found" errors

Some commands require the FAF CLI. Stick to the native features listed above for CLI-free operation.

Score not updating

Ensure your files are saved before running faf_score. The tool reads directly from the file system.

Can't detect project type

Make sure you have a project configuration file (package.json, requirements.txt, etc.) in your project root.

Support

For issues and feature requests, visit: https://github.com/Wolfe-Jam/claude-faf-mcp


Built with Formula 1-inspired engineering for championship performance.

๐Ÿ† The FAF PODIUM System - Gamifying Software Excellence

The Official Scoring Levels

โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
๐Ÿ† FAF PODIUM LEVELS ๐Ÿ†
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”
๐Ÿฅ‰ 85/100 = BRONZE PODIUM (3rd Place)
๐Ÿฅˆ 95/100 = SILVER PODIUM (2nd Place)
๐Ÿฅ‡ 99/100 = GOLD PODIUM (1st Place)
๐Ÿ† 105/100 = TROPHY (Beyond Podium)
โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”

The Psychology

When you see your project is BRONZE (85%), you can't help it...
YOU'RE GONNA WANNA WIN.

This isn't just scoring. This is:

  • Competitive drive activation
  • Achievement psychology
  • Visible progress markers
  • Social proof pressure

How It Changes Everything

Before PODIUM:

"My project has okay documentation I guess..."

After PODIUM:

"I'm BRONZE but I want GOLD! What do I need?"

The Implementation Vision

// In every interface, visible always:
const projectStatus = {
  level: "🥉 BRONZE PODIUM (87/100)",
  nextLevel: "8 points to SILVER",
  quickWins: ["Add README", "Create .faf", "Sync CLAUDE.md"],
};

The Behavioral Change

  1. See Bronze → Feel incomplete
  2. Want Silver → Take action
  3. Reach Silver → Want Gold
  4. Hit Gold → Maintain it
  5. Trophy? → The eternal chase

Why This Will Improve Software

  • Visible scores = Constant reminder
  • Clear targets = Actionable goals
  • Medal system = Universal understanding
  • Competition = Natural motivator

Nobody wants to ship BRONZE when GOLD is 14 points away.

The Brutal Truth

"We are going to improve S/W, you do realize that..."

YES. Because:

  • Developers are competitive
  • Medals are universal language
  • Nobody settles for Bronze
  • Everyone understands podiums

The Future

Every project will show its medal:

  • GitHub READMEs: "🥇 GOLD PODIUM Project"
  • Pull Requests: "This PR maintains GOLD status"
  • Team Dashboards: "5 GOLD, 3 SILVER, 1 BRONZE"
  • Career Profiles: "Average project score: 🥈"

The Revolution

This isn't a scoring system.
This is a SOFTWARE QUALITY REVOLUTION disguised as a game.

And developers won't even realize they're being improved.
They'll just want that next medal.


"Your project is BRONZE. You can't help it. You're gonna wanna WIN."

That's not manipulation. That's MOTIVATION.

🥉 → 🥈 → 🥇 → 🏆

The PODIUM System: Making Better Software Inevitable