npcpy is a flexible agent framework for building AI applications and conducting research with LLMs. It supports local and cloud providers, multi-agent teams, tool calling, image/audio/video generation, knowledge graphs, fine-tuning, and more.
Install:

```bash
pip install npcpy
```

Create an agent and ask it a question:

```python
from npcpy.npc_compiler import NPC

simon = NPC(
    name='Simon Bolivar',
    primary_directive='Liberate South America from the Spanish Royalists.',
    model='gemma3:4b',
    provider='ollama'
)

response = simon.get_llm_response("What is the most important territory to retain in the Andes?")
print(response['response'])
```

For one-off calls without an agent, use `get_llm_response` directly:

```python
from npcpy.llm_funcs import get_llm_response

response = get_llm_response("Who was the Celtic messenger god?", model='qwen3:4b', provider='ollama')
print(response['response'])
```

Agents can call plain Python functions as tools:

```python
import os

from npcpy.npc_compiler import NPC

def list_files(directory: str = ".") -> list:
    """List all files in a directory."""
    return os.listdir(directory)

def read_file(filepath: str) -> str:
    """Read and return the contents of a file."""
    with open(filepath, 'r') as f:
        return f.read()

assistant = NPC(
    name='File Assistant',
    primary_directive='You help users explore files.',
    model='llama3.2',
    provider='ollama',
    tools=[list_files, read_file],
)

response = assistant.get_llm_response("List the files in the current directory.")
print(response['response'])

# Access individual tool results
for result in response.get('tool_results', []):
    print(f"{result['tool_name']}: {result['result']}")
```

Stream responses chunk by chunk:

```python
from npcpy.llm_funcs import get_llm_response

response = get_llm_response(
    "Tell me about the history of the Inca Empire.",
    model='llama3.2',
    provider='ollama',
    stream=True
)

for chunk in response['response']:
    msg = chunk.get('message', {})
    print(msg.get('content', ''), end='', flush=True)
```

Request structured JSON output:

```python
from npcpy.llm_funcs import get_llm_response

response = get_llm_response(
    "List 3 planets with their distances from the sun in AU.",
    model='llama3.2',
    provider='ollama',
    format='json'
)

print(response['response'])
```

Build a multi-agent team that mixes local and cloud providers:

```python
from npcpy.npc_compiler import NPC, Team

# Create specialist agents
coordinator = NPC(
    name='coordinator',
    primary_directive='''You coordinate a team of specialists.
Delegate tasks by mentioning @analyst for data questions or @writer for content.
Synthesize their responses into a final answer.''',
    model='llama3.2',
    provider='ollama'
)

analyst = NPC(
    name='analyst',
    primary_directive='You analyze data and provide insights with specific numbers.',
    model='~/models/mistral-7b-instruct-v0.2.Q4_K_M.gguf',
    provider='llamacpp'
)

writer = NPC(
    name='writer',
    primary_directive='You write clear, engaging summaries and reports.',
    model='gemini-2.5-flash',
    provider='gemini'
)

# Create the team - the coordinator (forenpc) automatically delegates via @mentions
team = Team(npcs=[coordinator, analyst, writer], forenpc='coordinator')

# Orchestrate a request - the coordinator decides who to involve
result = team.orchestrate("What are the trends in renewable energy adoption?")
print(result['output'])
```

Installing npcpy also installs two command-line tools:
- `npc` — CLI for project management and one-off commands
- `npcsh` — Interactive shell for chatting with agents and running jinxs
```bash
# Using npc CLI
npc init ./my_project

# Using npcsh (interactive)
npcsh
```

Inside the shell:

```
📁 ~/projects
🤖 npcsh | llama3.2
> /init directory=./my_project
> what files are in the current directory?
```

This creates:

```
my_project/
├── npc_team/
│   ├── forenpc.npc       # Default coordinator
│   ├── jinxs/            # Workflows
│   │   └── skills/       # Knowledge skills
│   ├── tools/            # Custom tools
│   └── triggers/         # Event triggers
├── images/
├── models/
└── mcp_servers/
```
Then add your agents:
```bash
# Add team context
cat > my_project/npc_team/team.ctx << 'EOF'
context: Research and analysis team
forenpc: lead
model: llama3.2
provider: ollama
EOF

# Add agents
cat > my_project/npc_team/lead.npc << 'EOF'
name: lead
primary_directive: |
  You lead the team. Delegate to @researcher for data
  and @writer for content. Synthesize their output.
EOF

cat > my_project/npc_team/researcher.npc << 'EOF'
name: researcher
primary_directive: You research topics and provide detailed findings.
model: gemini-2.5-flash
provider: gemini
EOF

cat > my_project/npc_team/writer.npc << 'EOF'
name: writer
primary_directive: You write clear, engaging content.
model: qwen3:8b
provider: ollama
EOF
```

A team directory looks like this:

```
npc_team/
├── team.ctx          # Team configuration
├── coordinator.npc   # Coordinator agent
├── analyst.npc       # Specialist agent
├── writer.npc        # Specialist agent
└── jinxs/            # Optional workflows
    └── research.jinx
```
team.ctx - Team configuration:
```yaml
context: |
  A research team that analyzes topics and produces reports.
  The coordinator delegates to specialists as needed.
forenpc: coordinator
model: llama3.2
provider: ollama
mcp_servers:
  - ~/.npcsh/mcp_server.py
```

coordinator.npc - Agent definition:

```yaml
name: coordinator
primary_directive: |
  You coordinate research tasks. Delegate to @analyst for data
  analysis and @writer for content creation. Synthesize results.
model: llama3.2
provider: ollama
```

analyst.npc - Specialist agent:

```yaml
name: analyst
primary_directive: |
  You analyze data and provide insights with specific numbers and trends.
model: qwen3:8b
provider: ollama
```

Load and run the team:

```python
from npcpy.npc_compiler import Team

# Load team from directory with .npc files and team.ctx
team = Team(team_path='./npc_team')

# Orchestrate through the forenpc (set in team.ctx)
result = team.orchestrate("Analyze the sales data and write a summary")
print(result['output'])
```

Skills are knowledge-content jinxs that provide instructional sections to agents on demand.
1. Create a skill file (npc_team/jinxs/skills/code-review/SKILL.md):
```markdown
---
name: code-review
description: Use when reviewing code for quality, security, and best practices.
---
# Code Review Skill

## checklist
- Check for security vulnerabilities (SQL injection, XSS, etc.)
- Verify error handling and edge cases
- Review naming conventions and code clarity

## security
Focus on OWASP top 10 vulnerabilities...
```

2. Reference it in your NPC (npc_team/reviewer.npc):
```yaml
name: reviewer
primary_directive: You review code for quality and security issues.
model: llama3.2
provider: ollama
jinxs:
  - skills/code-review
```

3. Use the NPC:
```python
from npcpy.npc_compiler import NPC

# Load NPC from file - skills are automatically available as callable jinxs
reviewer = NPC(file='./npc_team/reviewer.npc')

response = reviewer.get_llm_response("Review this function: def login(user, pwd): ...")
print(response['response'])
```

Skills let the agent request specific knowledge sections (like checklist or security) as needed during responses.
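To make the on-demand mechanics concrete, here is an illustrative stdlib sketch (not npcpy's actual loader) of how a SKILL.md file splits into frontmatter metadata and the named sections an agent can request:

```python
# Hypothetical sketch of a skills loader: split SKILL.md text into
# frontmatter metadata and '## '-titled sections.
skill_md = """---
name: code-review
description: Use when reviewing code.
---
# Code Review Skill
## checklist
- Check for security vulnerabilities
## security
Focus on OWASP top 10 vulnerabilities...
"""

# Everything between the first two '---' markers is frontmatter.
_, frontmatter, body = skill_md.split('---', 2)
meta = dict(line.split(': ', 1) for line in frontmatter.strip().splitlines())

# Group the remaining lines under their '## ' section headings.
sections = {}
current = None
for line in body.splitlines():
    if line.startswith('## '):
        current = line[3:].strip()
        sections[current] = []
    elif current is not None:
        sections[current].append(line)

print(meta['name'])       # code-review
print(sorted(sections))   # ['checklist', 'security']
```

A real loader would also handle nested YAML and edge cases, but the shape is the same: metadata tells the agent when the skill applies, and sections are served individually on request.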
Connect any MCP server to an NPC and its tools become available for agentic tool calling:
```python
from npcpy.npc_compiler import NPC
from npcpy.serve import MCPClientNPC

# Connect to your MCP server
mcp = MCPClientNPC()
mcp.connect_sync('./my_mcp_server.py')

# Create an NPC
assistant = NPC(
    name='Assistant',
    primary_directive='You help users with tasks using available tools.',
    model='llama3.2',
    provider='ollama'
)

# Pass MCP tools to get_llm_response - the agent handles tool calls automatically
response = assistant.get_llm_response(
    "Search the database for recent orders",
    tools=mcp.available_tools_llm,
    tool_map=mcp.tool_map
)
print(response['response'])

# Clean up when done
mcp.disconnect_sync()
```

Example MCP server (my_mcp_server.py):
```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My Tools")

@mcp.tool()
def search_database(query: str) -> str:
    """Search the database for records matching the query."""
    return f"Found results for: {query}"

@mcp.tool()
def send_notification(message: str, channel: str = "general") -> str:
    """Send a notification to a channel."""
    return f"Sent '{message}' to #{channel}"

if __name__ == "__main__":
    mcp.run()
```

MCPClientNPC methods:
- `connect_sync(server_path)` — Connect to an MCP server script
- `disconnect_sync()` — Disconnect from the server
- `available_tools_llm` — Tool schemas for LLM consumption
- `tool_map` — Dict mapping tool names to callable functions
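Conceptually, a tool map is just name-to-callable dispatch. A minimal self-contained sketch of the idea (hypothetical helper names, not MCP wire code):

```python
# Illustrative dispatch: the model emits a tool name plus keyword
# arguments, and the tool map routes the call to a local function.
def search_database(query: str) -> str:
    """Stand-in for a real MCP tool."""
    return f"Found results for: {query}"

tool_map = {"search_database": search_database}

def call_tool(name: str, arguments: dict) -> str:
    """Look up a tool by name and invoke it with the given arguments."""
    if name not in tool_map:
        raise KeyError(f"Unknown tool: {name}")
    return tool_map[name](**arguments)

print(call_tool("search_database", {"query": "recent orders"}))
# Found results for: recent orders
```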
Generate an image:

```python
from npcpy.llm_funcs import gen_image

images = gen_image("A sunset over the mountains", model='sdxl', provider='diffusers')
images[0].save("sunset.png")
```

- Agents (NPCs) — Agents with personas, directives, and tool calling
- Multi-Agent Teams — Team orchestration with a coordinator (forenpc)
- Jinx Workflows — Jinja Execution templates for multi-step prompt pipelines
- Skills — Knowledge-content jinxs that serve instructional sections to agents on demand
- NPCArray — NumPy-like vectorized operations over model populations
- Image, Audio & Video — Generation via Ollama, diffusers, OpenAI, Gemini
- Knowledge Graphs — Build and evolve knowledge graphs from text
- Fine-Tuning & Evolution — SFT, RL, diffusion, genetic algorithms
- Serving — Flask server for deploying teams via REST API
- ML Functions — Scikit-learn grid search, ensemble prediction, PyTorch training
- Streaming & JSON — Streaming responses, structured JSON output, message history
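One note on structured JSON output: depending on the provider, a JSON-formatted response may arrive as a parsed dict or as a raw JSON string, so a small defensive helper is handy (illustrative stdlib code, not part of npcpy):

```python
import json

def as_dict(payload):
    """Normalize a structured LLM response: accept a dict or a JSON string."""
    if isinstance(payload, str):
        return json.loads(payload)
    return payload

# Works the same whether the provider returned a string or a dict.
print(as_dict('{"Mercury": 0.39, "Venus": 0.72, "Earth": 1.0}')["Earth"])  # 1.0
print(as_dict({"Mars": 1.52})["Mars"])                                     # 1.52
```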
Works with all major LLM providers through LiteLLM: ollama, openai, anthropic, gemini, deepseek, airllm, openai-like, and more.
```bash
pip install npcpy          # base
pip install npcpy[lite]    # + API provider libraries
pip install npcpy[local]   # + ollama, diffusers, transformers, airllm
pip install npcpy[yap]     # + TTS/STT
pip install npcpy[all]     # everything
```

System dependencies:
Linux:
```bash
sudo apt-get install espeak portaudio19-dev python3-pyaudio ffmpeg libcairo2-dev libgirepository1.0-dev
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
```

macOS:
```bash
brew install portaudio ffmpeg pygobject3 ollama
brew services start ollama
ollama pull llama3.2
```

Windows: Install Ollama and ffmpeg, then run `ollama pull llama3.2`.
API keys go in a .env file:
```bash
export OPENAI_API_KEY="your_key"
export ANTHROPIC_API_KEY="your_key"
export GEMINI_API_KEY="your_key"
```

Full documentation, guides, and API reference at npcpy.readthedocs.io.
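Regarding those keys: once the .env file is loaded (by your shell, or by a loader such as python-dotenv), provider libraries read them as ordinary environment variables. A tiny stdlib startup check, using a placeholder value, might look like:

```python
import os

# Placeholder standing in for a real .env entry (do not hardcode keys).
os.environ["GEMINI_API_KEY"] = "your_key"

def missing_keys(required):
    """Return the names of any required API keys absent from the environment."""
    return [name for name in required if not os.environ.get(name)]

print(missing_keys(["GEMINI_API_KEY"]))  # []
```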
Works with local and cloud providers through LiteLLM (Ollama, OpenAI, Anthropic, Gemini, Deepseek, and more) with support for text, image, audio, and video generation.
- Incognide — GUI for the NPC Toolkit (download)
- NPC Shell — Command-line shell for interacting with NPCs
- Newsletter — Stay in the loop
- Quantum-like nature of natural language interpretation: arxiv, accepted at QNLP 2025
- Simulating hormonal cycles for AI: arxiv
Has your research benefited from npcpy? Let us know!
Monthly donation | Merch | Consulting: info@npcworldwi.de
Contributions welcome! Submit issues and pull requests on the GitHub repository.
MIT License.
