Lightweight Node client and Vercel AI SDK tool for ShellifyAI — secure, sandboxed shell execution for AI agents.
ShellifyAI runs shell commands in isolated sandboxes so your AI agents can execute code safely. Instead of giving models direct access to your machine, commands run in ephemeral containers with:
- Security isolation — No access to host system
- Streaming output — Real-time stdout/stderr
- File artifacts — Created files uploaded with signed URLs
- Session persistence — Maintain state across commands
```bash
npm install @shellifyai/shell-tool
# or
pnpm add @shellifyai/shell-tool
```

Peer dependencies: `ai` (^5.0.0), `zod` (^3.23.0)
The easiest integration — just add shellifyTool to your tools and the SDK handles execution automatically.
How it works: You provide a natural language prompt. The AI model decides when to run shell commands and generates the command parameter automatically. The shellifyTool executes it in a sandbox and returns the result to the model.
```ts
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { shellifyTool } from "@shellifyai/shell-tool";

const { text } = await generateText({
  model: openai("gpt-5.1"),
  prompt: "Create a Python file that prints Hello World and run it",
  tools: {
    // The model will call this tool with { command: "..." } when needed
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
    }),
  },
  stopWhen: stepCountIs(5), // Allow up to 5 tool calls
});

console.log(text);
```

The flow: prompt → model decides to use shell → model generates `{ command: "echo 'print(\"Hello World\")' > hello.py && python hello.py" }` → `shellifyTool` executes → result back to model.
```ts
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, stepCountIs } from "ai";
import { shellifyTool } from "@shellifyai/shell-tool";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-5.1"),
    messages,
    tools: {
      shell: shellifyTool({
        apiKey: process.env.SHELLIFYAI_API_KEY!,
      }),
    },
    stopWhen: stepCountIs(5),
  });

  return result.toDataStreamResponse();
}
```

For non-Vercel AI SDK projects, use `ShellifyClient` directly:
```ts
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({
  apiKey: process.env.SHELLIFYAI_API_KEY!,
});

// Execute and get result
const result = await client.execute({
  tool: "local_shell",
  payload: { command: "echo hello && ls -la" },
});

console.log(result.summary.stdout);
console.log(result.summary.artifacts); // Any files created
```

To stream output in real time, use `client.stream()`:

```ts
for await (const event of client.stream({
  tool: "local_shell",
  payload: { command: "pip install pandas && python script.py" },
})) {
  if (event.type === "log") {
    console.log(event.data); // Real-time output
  } else if (event.type === "artifact") {
    console.log("File created:", event.url);
  }
}
```

`shellifyTool` options:

| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | `process.env.SHELLIFYAI_API_KEY` | Your project API key |
| `baseUrl` | `string` | `https://shellifyai.com` | API endpoint |
| `description` | `string` | — | Override the tool description shown to the model |
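The option defaults in the table can be captured in a small resolver. This is a sketch, not an SDK export: `resolveToolOptions` and its `ToolOptions` shape are illustrative names that mirror the table above (`apiKey` falls back to the `SHELLIFYAI_API_KEY` environment variable, `baseUrl` to `https://shellifyai.com`).

```ts
// Hypothetical helper mirroring the documented defaults.
interface ToolOptions {
  apiKey?: string;
  baseUrl?: string;
  description?: string;
}

function resolveToolOptions(
  opts: ToolOptions,
  env: Record<string, string | undefined> = {}
): { apiKey: string; baseUrl: string; description?: string } {
  const apiKey = opts.apiKey ?? env.SHELLIFYAI_API_KEY;
  if (!apiKey) throw new Error("Missing ShellifyAI API key");
  return {
    apiKey,
    baseUrl: opts.baseUrl ?? "https://shellifyai.com",
    description: opts.description,
  };
}
```

Call it with `process.env` in Node, e.g. `resolveToolOptions({}, process.env)`.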
`ShellifyClient` options:

| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | — | Required. Your project API key |
| `baseUrl` | `string` | `https://shellifyai.com` | API endpoint |
| `fetchImpl` | `typeof fetch` | `globalThis.fetch` | Custom fetch implementation |
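`fetchImpl` is the hook for proxies, retries, or request logging. A minimal sketch of a logging wrapper you could pass in; `withRequestLogging` and `FetchLike` are generic fetch middleware written for this example, not ShellifyAI APIs.

```ts
// Count and log requests before delegating to the underlying fetch.
type FetchLike = (input: string | URL, init?: RequestInit) => Promise<Response>;

function withRequestLogging(base: FetchLike): FetchLike & { calls: () => number } {
  let count = 0;
  const wrapped: FetchLike = async (input, init) => {
    count += 1; // incremented synchronously, before the request resolves
    console.log(`[shellify] request #${count}: ${String(input)}`);
    return base(input, init);
  };
  return Object.assign(wrapped, { calls: () => count });
}
```

Usage would look like `new ShellifyClient({ apiKey, fetchImpl: withRequestLogging(globalThis.fetch) })`, assuming a runtime with a global `fetch`.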
`client.execute()` / `client.stream()` options:

| Option | Type | Description |
|---|---|---|
| `tool` | `string` | Tool to invoke (default: `local_shell`) |
| `payload.command` | `string` | Required. Shell command to run |
| `payload.intent` | `string` | Context for what you are trying to do |
| `payload.sessionId` | `string` | Reuse a session for file persistence |
| `payload.workingDirectory` | `string` | Working directory for the command |
| `payload.env` | `Record<string, string>` | Environment variables |
| `payload.timeoutMs` | `number` | Timeout in ms (default: `120000`) |
| `payload.systemMessage` | `string` | Custom system prompt |
| `sandboxId` | `string` | Target a specific sandbox |
| `signal` | `AbortSignal` | Abort controller signal |
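The documented defaults (`tool` of `local_shell`, `timeoutMs` of 120000) can be sketched as a small builder. The `ExecutePayload`/`ExecuteOpts` interfaces below are local mirrors of the table for illustration; the SDK exports its own `ExecuteOptions` type.

```ts
// Local mirror of the options table above (illustrative only).
interface ExecutePayload {
  command: string;
  intent?: string;
  sessionId?: string;
  workingDirectory?: string;
  env?: Record<string, string>;
  timeoutMs?: number;
  systemMessage?: string;
}

interface ExecuteOpts {
  tool?: string;
  payload: ExecutePayload;
  sandboxId?: string;
}

// Fill in the documented defaults so every call site is explicit.
function buildExecuteOptions(opts: ExecuteOpts): ExecuteOpts & { tool: string } {
  return {
    ...opts,
    tool: opts.tool ?? "local_shell",
    payload: { timeoutMs: 120_000, ...opts.payload },
  };
}
```

Passing the same `sessionId` across calls is what keeps files from one command visible to the next.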
`execute()` resolves to a `ShellifyResult`:

```ts
interface ShellifyResult {
  requestId: string;
  adapter: string;
  events: ShellifyEvent[];
  summary: {
    stdout: string;
    stderr: string;
    exitCode?: number;
    status?: string;
    sessionId?: string;
    artifacts: Array<{
      url?: string;
      filename?: string;
      contentType?: string;
    }>;
  };
}
```

Event types:

| Type | Description |
|---|---|
| `meta` | Request metadata (`requestId`, `adapter`) |
| `status` | Execution status changes (`running`, `completed`, `failed`) |
| `log` | stdout/stderr output with a `stream` field |
| `artifact` | File created, with `url` and `filename` |
| `error` | Error message |
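The `summary` on `ShellifyResult` can be thought of as a fold over the event stream. A sketch under assumed event field names (`stream`/`data` on `log`, `url`/`filename` on `artifact`, as suggested by the table; the SDK's actual event payloads may differ):

```ts
// Local mirror of the event union; variant fields are assumptions
// based on the table above.
type ShellifyEvent =
  | { type: "meta"; requestId: string; adapter: string }
  | { type: "status"; status: "running" | "completed" | "failed" }
  | { type: "log"; stream: "stdout" | "stderr"; data: string }
  | { type: "artifact"; url?: string; filename?: string }
  | { type: "error"; message: string };

// Fold events into a summary shaped like ShellifyResult.summary.
function summarize(events: ShellifyEvent[]) {
  let stdout = "";
  let stderr = "";
  let status: string | undefined;
  const artifacts: Array<{ url?: string; filename?: string }> = [];
  for (const event of events) {
    if (event.type === "log") {
      if (event.stream === "stdout") stdout += event.data;
      else stderr += event.data;
    } else if (event.type === "status") {
      status = event.status; // last status wins
    } else if (event.type === "artifact") {
      artifacts.push({ url: event.url, filename: event.filename });
    }
  }
  return { stdout, stderr, status, artifacts };
}
```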
```bash
# Required
SHELLIFYAI_API_KEY=your_api_key

# Optional overrides
SHELLIFYAI_BASE_URL=https://shellifyai.com
SHELLIFY_API_KEY=fallback_key # Legacy fallback
```

Get your API key from the ShellifyAI Dashboard.
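The fallback order above (primary variable first, then the legacy name) can be made explicit in app code. `resolveApiKey` is an illustrative helper, not an SDK export:

```ts
// Prefer SHELLIFYAI_API_KEY; fall back to the legacy SHELLIFY_API_KEY.
function resolveApiKey(env: Record<string, string | undefined>): string {
  const key = env.SHELLIFYAI_API_KEY ?? env.SHELLIFY_API_KEY;
  if (!key) {
    throw new Error("Set SHELLIFYAI_API_KEY (or legacy SHELLIFY_API_KEY)");
  }
  return key;
}
```

In Node you would call it as `new ShellifyClient({ apiKey: resolveApiKey(process.env) })`.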
```ts
try {
  const result = await client.execute({
    tool: "local_shell",
    payload: { command: "some_command" },
  });

  if (result.summary.exitCode !== 0) {
    console.error("Command failed:", result.summary.stderr);
  }
} catch (error) {
  // Network or API errors (the caught value is `unknown` in TypeScript)
  console.error("API error:", error instanceof Error ? error.message : error);
}
```

Full TypeScript support with exported types:
```ts
import type {
  ShellifyClient,
  ShellifyClientConfig,
  ShellifyResult,
  ShellifyEvent,
  ShellifySummary,
  Artifact,
  AdapterType,
  ExecuteOptions,
  ShellifyToolOptions,
} from "@shellifyai/shell-tool";
```

MIT