Type-safe tool integration for LLMs.
```bash
# Using npm
npm install @marrakesh/core zod

# Using pnpm
pnpm add @marrakesh/core zod

# Using yarn
yarn add @marrakesh/core zod
```

```ts
import { prompt, tool } from '@marrakesh/core';
import { z } from 'zod';
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Define a tool with a Zod schema
const getWeather = tool({
  description: 'Get weather for a location',
  parameters: z.object({
    city: z.string().describe('City name'),
    units: z.enum(['celsius', 'fahrenheit']).default('celsius')
  })
});

// Create a prompt with tools
const p = prompt('You are a helpful weather assistant')
  .tool(getWeather);

// Use with the Vercel AI SDK
export async function POST(req: Request) {
  const { messages } = await req.json();
  return streamText({
    model: openai('gpt-4'),
    ...p.toVercelAI(messages)
  });
}
```

The SDK is fully compatible with both AI SDK v4 and v5, supporting both simple CoreMessage and complex ModelMessage message types:
- v4: Uses `parameters` for tool definitions
- v5: Uses `inputSchema` for tool definitions

Our SDK automatically provides both properties, so you can use either version without any changes to your code.
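For illustration, both property names should resolve to the same Zod schema on a tool definition (a minimal sketch; it assumes the `getWeather` tool from the quick-start example and that both fields are exposed directly on the object returned by `tool()`):

```ts
// Assumption for illustration: tool() exposes the schema under both names,
// so AI SDK v4 consumers read `parameters` and v5 consumers read `inputSchema`.
console.log(getWeather.parameters);   // used by AI SDK v4
console.log(getWeather.inputSchema);  // used by AI SDK v5
```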
Tool results must use the structured LanguageModelV3ToolResultOutput format, as shown below:

- Text: `{ type: 'text', value: 'result string' }`
- JSON: `{ type: 'json', value: { ... } }`
- Error: `{ type: 'error-text', value: 'error message' }`
- Complex: `{ type: 'content', value: [...] }` for mixed text and media
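As an illustration, here is what each variant might look like when a tool result is placed into a tool message (a sketch only; the part shape follows the AI SDK v5 naming used elsewhere in this README):

```ts
// Sketch: the four structured output variants for a tool result.
const textOutput = { type: 'text', value: 'Sunny, 22°C in Paris' };
const jsonOutput = { type: 'json', value: { city: 'Paris', temperature: 22, units: 'celsius' } };
const errorOutput = { type: 'error-text', value: 'Weather service unavailable' };
const contentOutput = {
  type: 'content',
  value: [{ type: 'text', text: 'Here is the forecast summary.' }]
};

// One of them embedded in a tool message (field names assumed from the
// ToolResultPart.output naming described in the migration notes below).
const toolMessage = {
  role: 'tool',
  content: [
    {
      type: 'tool-result',
      toolCallId: 'call_123',
      toolName: 'getWeather',
      output: jsonOutput
    }
  ]
};
```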
Messages produced by `convertToModelMessages` can be passed straight to `toVercelAI`:

```ts
import { streamText, convertToModelMessages } from 'ai';
import { openai } from '@ai-sdk/openai';
import { prompt } from '@marrakesh/core';

// Works with convertToModelMessages output
// (myTool is any tool built with tool(), such as getWeather above)
const p = prompt('You are a helpful assistant').tool(myTool);

export async function POST(req: Request) {
  const { messages } = await req.json();
  // Pass ModelMessage[] directly from convertToModelMessages
  return streamText({
    model: openai('gpt-4'),
    ...p.toVercelAI(convertToModelMessages(messages))
  });
}
```

The SDK supports both message formats:
Simple messages (CoreMessage):

```ts
const simpleMessages = [
  { role: 'user', content: 'Hello' },
  { role: 'assistant', content: 'Hi there!' }
];
```

Complex messages (ModelMessage):

```ts
const complexMessages = [
  {
    role: 'user',
    content: [
      { type: 'text', text: 'Look at this image:' },
      { type: 'image', image: 'data:image/png;base64,...' }
    ]
  },
  {
    role: 'assistant',
    content: [
      { type: 'text', text: 'I can see the image!' },
      {
        type: 'tool-call',
        toolCallId: 'call_123',
        toolName: 'analyzeImage',
        args: { image: 'data:image/png;base64,...' }
      }
    ]
  }
];
```

Both formats work seamlessly with all SDK methods:

- `toVercelAI(messages)` - works with both CoreMessage[] and ModelMessage[]
- `toOpenAI(messages)` - works with both CoreMessage[] and ModelMessage[]
- `toAnthropic()` - no messages parameter needed
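For example, either array can be passed to the same converters (a sketch reusing `p`, `simpleMessages`, and `complexMessages` from the examples above):

```ts
// Both message shapes go through the same conversion calls.
const fromSimple = p.toVercelAI(simpleMessages);    // CoreMessage[]
const fromComplex = p.toVercelAI(complexMessages);  // ModelMessage[]

// toOpenAI accepts the same two shapes; toAnthropic takes no messages argument.
const openaiPayload = p.toOpenAI(simpleMessages);
const { system, tools } = p.toAnthropic();
```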
The SDK keeps your system prompt clean by handling tools and structured output separately.

Your code:

```ts
const p = prompt('You are a helpful weather assistant')
  .tool(getWeather);
```

Vercel AI SDK / OpenAI:

- System prompt: `"You are a helpful weather assistant"`
- Tools: passed as a separate `tools` parameter (Record format with `inputSchema`)

Anthropic:

- System prompt: `"You are a helpful weather assistant"`
The key insight: your system prompt stays clean and focused on behavior, while technical details (tools, schemas) are handled by the API integration layer.
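Viewed from the calling side, that separation looks like this (a minimal sketch; the exact return shapes are listed in the API reference below):

```ts
// The prompt you author carries only behaviour; tools are attached separately.
const p = prompt('You are a helpful weather assistant').tool(getWeather);

// Conversion keeps the split: the system prompt passes through as written,
// while the tool schemas are emitted in each provider's own tools format.
const vercel = p.toVercelAI();              // spread into streamText(): prompt + tools record
const { system, tools } = p.toAnthropic();  // system === 'You are a helpful weather assistant'
```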
Create a new prompt builder.

```ts
const p = prompt('You are a helpful assistant');
```

Add system instructions.

```ts
p.system('Always be polite');
```

Add tools with Zod schemas.

```ts
p.tool(getWeather, getLocation);
```

Add multiple tools from an array.

```ts
const myTools = [getWeather, getLocation];
p.tools(myTools);
```

Convert to Vercel AI SDK format.

```ts
const { messages, tools } = p.toVercelAI();
```

Convert to OpenAI format.

```ts
const { messages, tools } = p.toOpenAI();
```

Convert to Anthropic format.

```ts
const { system, tools } = p.toAnthropic();
```

The SDK includes optional analytics tracking to help you understand how your prompts and tools are being used. Analytics are completely opt-in and designed to have zero impact on your application's performance.
Set the MARRAKESH_API_KEY environment variable:
```bash
export MARRAKESH_API_KEY="your-api-key-here"
```

Once set, analytics will automatically start tracking without any code changes.
- Prompt Metadata: Content, tools, and version information
- Prompt Executions: When prompts are compiled and used
- Tool Calls: When tools are executed (Vercel AI SDK integration only)
- All data is sent securely to Marrakesh's analytics endpoint
- No sensitive information is collected
- Disable anytime: `MARRAKESH_ANALYTICS_DISABLED=true`
- Debug mode: `MARRAKESH_DEBUG=true` to see what data is being sent
For detailed information, see Analytics Documentation.
Test your prompts like you test code. Marrakesh provides a complete testing framework with CLI support and automatic analytics tracking.
For the CLI to automatically discover your prompts, use the .prompt.ts or .prompt.js file extension:
```ts
// weather.prompt.ts
import { prompt } from '@marrakesh/core'
import { openai } from '@ai-sdk/openai'
// getWeather: the weather tool from the quick-start example (defined or imported elsewhere)

export const weatherAgent = prompt('You are a weather assistant')
  .tool(getWeather)
  .test({
    cases: [
      { input: 'Weather in Paris?', expect: { city: 'Paris' } },
      { input: 'Is it raining in Tokyo?', expect: { city: 'Tokyo' } }
    ],
    executors: [
      { model: openai('gpt-4') }
    ]
  })
```

Tests can also be defined and run programmatically:

```ts
import { prompt, createVercelAIExecutor } from '@marrakesh/core'
import { openai } from '@ai-sdk/openai'

const weatherAgent = prompt('You are a weather assistant')
  .tool(getWeather)
  .test([
    { input: 'Weather in Paris?', expect: { city: 'Paris' } },
    { input: 'Is it raining in Tokyo?', expect: { city: 'Tokyo' } }
  ])

// Run tests
const results = await weatherAgent.run({
  executor: createVercelAIExecutor({ model: openai('gpt-4') })
})

console.log(`${results.passed}/${results.total} tests passed`)
```

```bash
# Run all tests (automatically finds *.prompt.ts files)
npx @marrakesh/cli test
# Watch mode - reruns on file changes
npx @marrakesh/cli test --watch
# Stop on first failure
npx @marrakesh/cli test --bail
# Custom pattern (override default)
npx @marrakesh/cli test "src/**/*.ts"
```

- Test Cases: Define test cases with expected outputs
- Watch Mode: Auto-rerun tests on file changes
- Agentic Support: Handles multi-step tool calling automatically
- Analytics: Test results automatically tracked to dashboard
- Assertions: Deep equality matching with partial object support (see the sketch below)
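For instance, an `expect` value only needs to name the fields you care about (a sketch; the exact matching rules are described in the Testing Guide):

```ts
// Sketch: deep-equality assertions with partial objects.
const cases = [
  // Pin down every expected argument of the tool call...
  { input: 'Weather in Paris in fahrenheit?', expect: { city: 'Paris', units: 'fahrenheit' } },
  // ...or only the fields that matter; other fields in the actual call are ignored.
  { input: 'Is it raining in Tokyo?', expect: { city: 'Tokyo' } }
]
```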
For complete documentation, see Testing Guide.
If you're upgrading from an earlier version, note that the SDK now uses AI SDK v5.0 naming conventions:
- `ToolCallPart.args` → `ToolCallPart.input`
- `ToolResultPart.result` → `ToolResultPart.output`
This ensures full compatibility with convertToModelMessages() output.
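For illustration, a hand-built tool-call part changes like this (a sketch; values are placeholders):

```ts
// Before (AI SDK v4 naming)
const toolCallV4 = {
  type: 'tool-call',
  toolCallId: 'call_123',
  toolName: 'getWeather',
  args: { city: 'Paris' }
};

// After (AI SDK v5 naming, matching convertToModelMessages() output)
const toolCallV5 = {
  type: 'tool-call',
  toolCallId: 'call_123',
  toolName: 'getWeather',
  input: { city: 'Paris' }
};
```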
- Basic Usage - Simple prompt building
- Tool Integration - Tools with Zod schemas
- Vercel AI SDK - Full integration example
- Prompt Testing - Testable prompts with the CLI
```bash
# Install dependencies
pnpm install

# Start development
turbo dev

# Build all packages
turbo build

# Run tests
turbo test
```