Your coding standards, always in context.
An open-source MCP server that gives your AI assistant semantic search over your personal documentation and coding guidelines. 100% offline.
Website · Documentation · Issues
Your AI assistant doesn't know your team's coding conventions. Every session starts from zero — your standards get lost, docs are scattered across wikis and READMEs, and pasting entire documents into prompts wastes tokens.
Context42 solves this by indexing your documentation locally and serving the most relevant chunks to your AI assistant via MCP — automatically, semantically, and without any data leaving your machine.
- Add your documentation directories as sources
- Index — Context42 chunks your docs and creates local vector embeddings
- Store — Vectors are saved locally in LanceDB
- Serve — Your AI assistant queries relevant content via MCP, weighted by priority
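The chunking step above can be sketched as a naive fixed-size splitter (an illustration only: `chunk_text` is a hypothetical helper, the real indexer may split on headings or sentence boundaries, and 500 mirrors the default `C42_CHUNK_SIZE`):

```python
def chunk_text(text: str, chunk_size: int = 500) -> list[str]:
    """Naive fixed-size character chunker (sketch, not Context42's actual splitter)."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# A 1200-character document yields chunks of 500, 500, and 200 characters.
chunks = chunk_text("x" * 1200)
```

Each chunk is then embedded and stored in LanceDB for similarity search.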
Requires Python 3.11+
```shell
# Using pipx (recommended)
pipx install context42-io

# Using uv
uvx context42-io

# Using pip
pip install context42-io
```

```shell
# 1. Add your documentation
c42 add ~/my-coding-standards --name standards --priority 0.9

# 2. Index the content
c42 index

# 3. Start the MCP server
c42 serve
```

Add to your Claude Desktop configuration file:
| Platform | Path |
|---|---|
| Linux | ~/.config/claude/claude_desktop_config.json |
| macOS | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Windows | %APPDATA%\Claude\claude_desktop_config.json |
```json
{
  "mcpServers": {
    "context42": {
      "command": "c42",
      "args": ["serve"]
    }
  }
}
```

Restart Claude Desktop after adding the configuration.
```shell
c42 add <path> --name <name>                          # Add a documentation source
c42 add <path> --name <name> --priority 0.9           # Add with priority (0.1–1.0)
c42 add <path> --name <name> --exclude "*.log,tmp/*"  # Exclude file patterns
c42 index                                             # Index all pending sources

c42 list                                              # List all sources with status
c42 set-priority <name> <value>                       # Change source priority
c42 remove <name>                                     # Remove a source
c42 status                                            # Show index statistics

c42 search "error handling"                           # Search from the CLI
c42 serve                                             # Start the MCP server
c42 --help                                            # Show all available commands
```

Set a higher priority for your personal instructions so they take precedence over reference documentation:
```shell
# Your rules — highest weight
c42 add ~/my-standards --name standards --priority 1.0

# Team guidelines — medium weight
c42 add ~/team-docs --name team --priority 0.7

# Reference docs — lower weight
c42 add ~/library-docs --name reference --priority 0.4
```

Search results are weighted by priority, so chunks from your high-priority personal sources rank above reference material.
| Format | Extensions | Notes |
|---|---|---|
| Markdown | `.md` | Full CommonMark support |
| reStructuredText | `.rst` | Sphinx-compatible |
More formats planned — see Roadmap.
Context42 exposes a single search tool via MCP:
```
search(query: string, top_k?: int) → SearchResult[]
```
Each result includes:
- `text` — The matching content chunk
- `source` — Name of the source
- `file` — Relative path to the file
- `score` — Similarity score (0–1, higher = more relevant)
- `priority` — Source priority weight
- `is_priority` — `true` if the source has priority ≥ 0.8
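Modeled in Python, a result with these fields might look like the following (a sketch: the field names come from the list above, but the class itself is illustrative and not part of the package's API):

```python
from dataclasses import dataclass, field

@dataclass
class SearchResult:
    text: str        # matching content chunk
    source: str      # name of the source
    file: str        # relative path to the file
    score: float     # similarity score, 0-1
    priority: float  # source priority weight
    is_priority: bool = field(init=False)

    def __post_init__(self) -> None:
        # Derived flag, per the rule above: priority >= 0.8.
        self.is_priority = self.priority >= 0.8

r = SearchResult("Wrap errors with context...", "standards", "errors.md", 0.91, 1.0)
```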
Context42 works out of the box with no configuration required. All settings below are optional:
| Variable | Default | Description |
|---|---|---|
| `C42_EMBEDDING_MODEL` | `BAAI/bge-small-en-v1.5` | Sentence-transformer embedding model |
| `C42_CHUNK_SIZE` | `500` | Characters per chunk |
| `C42_BATCH_SIZE` | `50` | Chunks per indexing batch |
| `C42_DATA_DIR` | Platform default | Data storage directory |
| `HF_TOKEN` | — | Hugging Face token for faster model downloads |
```shell
# Example: use a different embedding model
export C42_EMBEDDING_MODEL="BAAI/bge-base-en-v1.5"

# Optional: set a Hugging Face token for faster downloads
export HF_TOKEN="hf_your_token_here"
```

Note: Changing the embedding model requires re-indexing all sources (`c42 index`).

Get your HF token at https://huggingface.co/settings/tokens.
| Platform | Path |
|---|---|
| Linux | ~/.local/share/context42/ |
| macOS | ~/Library/Application Support/context42/ |
| Windows | %LOCALAPPDATA%\context42\ |
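The platform defaults above can be resolved with logic like this (a sketch assuming Context42 follows the usual per-platform conventions; `default_data_dir` is a hypothetical helper, not part of the package):

```python
import os
import sys
from pathlib import Path

def default_data_dir(app: str = "context42") -> Path:
    # Mirrors the table above; not the project's actual resolver.
    if sys.platform == "darwin":
        return Path.home() / "Library" / "Application Support" / app
    if sys.platform == "win32":
        return Path(os.environ["LOCALAPPDATA"]) / app
    # Linux: honor XDG_DATA_HOME, falling back to ~/.local/share.
    return Path(os.environ.get("XDG_DATA_HOME", Path.home() / ".local" / "share")) / app
```

Setting `C42_DATA_DIR` overrides the platform default entirely.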
Context42 processes everything locally. Your code and documentation never leave your machine.
- Zero data transmission — No outbound network calls
- Local-only embeddings — The embedding model runs on your CPU; nothing is sent to external APIs
- No telemetry — No analytics, no crash reports, no phone-home behavior
- Works air-gapped — After initial setup, no internet connection needed
Learn more at context42.io.
- Git clone sources — Add any Git repository as a source
- File watcher — Auto re-index when files change on disk
- Git sync — Detect new commits and re-index automatically
See all planned features and suggest new ones on GitHub Issues.
Contributions are welcome! Feel free to:
- Open a feature request
- Report a bug
- Submit a pull request
MIT