Open Retrieval-Based Inference Toolkit
Stop rewriting your app every time you switch LLMs. ORBIT unifies 20+ AI providers with your databases, vector stores, and APIs—all through one self-hosted gateway.
Ship faster. Stay portable. Keep your data private.
Video (orbit-full-web.mp4): See ORBIT's unified LLM workspace in action.
- Questions? Open an issue
- Updates: Check the changelog
- Commercial Support: OrbitPods.io
- Maintained by: Schmitech.ai
- One control plane for 20+ LLM providers, plus Ollama, llama.cpp, and vLLM for local inference.
- First-class RAG adapters for SQL, MongoDB, Elasticsearch, Pinecone, Qdrant, Chroma, Redis, HTTP APIs, and file uploads.
- Intent-aware routing that converts natural language to SQL queries, Elasticsearch DSL, Mongo filters, and custom API calls.
- Multimodal support across OpenAI, Gemini, Anthropic, Ollama and vLLM.
- Built-in security with API keys and moderation hooks.
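The bullets above boil down to one idea: your application targets ORBIT once, and providers become configuration. As a rough sketch of that (using the OpenAI-compatible endpoint described in the client options further down), switching providers means changing a string, not rewriting the app. The model names and the prompt below are placeholders for whatever providers and adapters you have actually configured.

```python
# Sketch of the "one control plane" idea via ORBIT's OpenAI-compatible API.
# Assumptions: ORBIT runs locally on port 3000, an API key exists, and the
# model names below are stand-ins for models configured in your instance.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",  # self-hosted ORBIT gateway
    api_key="<KEY>",                      # created with ./bin/orbit.sh key create
)

# Swapping providers is just a different model string against the same gateway.
for model in ["gpt-4o-mini", "claude-3-5-haiku-latest", "granite4:1b"]:  # placeholders
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Give me a one-line status summary."}],
    )
    print(f"{model}: {reply.choices[0].message.content}")
```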
Explore adapters, test prompts, and preview the dashboard—no install required.
```bash
docker pull schmitech/orbit:basic
docker run -d --name orbit-basic -p 5173:5173 -p 3000:3000 schmitech/orbit:basic
```

- Auto-detects CPU/GPU via `ORBIT_PRESET=auto` (override with `smollm2-1.7b-gpu` or the `-cpu` variant).
- Includes the ORBIT server, the orbitchat UI, and an Ollama preset so you can chat instantly.
- More options live in docker/README.md.
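Once the container is up, a quick way to confirm the server is ready is to poll the dashboard URL exposed on port 3000. This is only a convenience sketch built from the port and container name in the commands above; it assumes the dashboard answers over plain HTTP once ORBIT has started.

```python
# Readiness check for the Docker quickstart: wait until the dashboard responds.
import time
import urllib.request

URL = "http://localhost:3000/dashboard"

for attempt in range(30):
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            print(f"ORBIT is up (HTTP {resp.status}) at {URL}")
            break
    except Exception:
        time.sleep(2)  # not ready yet; retry
else:
    print("ORBIT did not come up; check `docker logs orbit-basic`.")
```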
- Prereqs: Python 3.12+, Node 18+, npm, and any AI-provider keys (OpenAI, Anthropic, Cohere, Gemini, Mistral, etc.).
- Optional deps: MongoDB, Redis, and a vector DB (Chroma, Qdrant, Pinecone, etc.).
```bash
curl -L https://github.com/schmitech/orbit/releases/download/v2.3.0/orbit-2.3.0.tar.gz -o orbit-2.3.0.tar.gz
tar -xzf orbit-2.3.0.tar.gz && cd orbit-2.3.0
cp env.example .env && ./install/setup.sh
source venv/bin/activate
./bin/orbit.sh start && cat ./logs/orbit.log
```

- To use local models with Ollama, first install it with `curl -fsSL https://ollama.com/install.sh | sh`, then pull a model such as `granite4:1b` (`ollama pull granite4:1b`) or another of your choice; a quick verification sketch follows this list.
- Default adapters live in `config/adapters/passthrough.yaml` and `config/adapters/multimodal.yaml`; update `config/ollama.yaml` for model changes.
- Visit `http://localhost:3000/dashboard` to monitor the ORBIT server.
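If you went the Ollama route, you can sanity-check that the daemon is reachable and the model was pulled before pointing ORBIT at it. This sketch talks to Ollama's local API on its default port 11434 (not to ORBIT); adjust the model name if you pulled something other than `granite4:1b`.

```python
# Verify Ollama is running and the pulled model is listed.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    models = [m["name"] for m in json.load(resp).get("models", [])]

print("Ollama models:", models)
if not any(name.startswith("granite4") for name in models):
    print("granite4:1b not found; run `ollama pull granite4:1b` "
          "or point config/ollama.yaml at a model you do have.")
```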
Video (dashboard.mp4): The ORBIT dashboard for adapter management and monitoring.
```bash
git clone https://github.com/schmitech/orbit.git
cd orbit && cp env.example .env
./install/setup.sh && source venv/bin/activate
./bin/orbit.sh start && ./bin/orbit.sh key create
```

- Bring your own API keys (OpenAI, Anthropic, Cohere, Gemini, Mistral, etc.).
- Works great with Ollama, vLLM, llama.cpp, Redis, MongoDB, and vector DBs.
- Check logs via `tail -f ./logs/orbit.log` and open `http://localhost:3000/dashboard`.
- orbit-chat CLI: `pip install schmitech-orbit-client && orbit-chat --api-key <KEY>`
- React web app: `npm install -g orbitchat && orbitchat --api-url http://localhost:3000 --api-key <KEY> --open`
- Embeddable widget: Drop `@schmitech/chatbot-widget` into any site for floating or inline chat.
- Node SDK: `npm install @schmitech/chatbot-api` and stream responses in TypeScript/JavaScript apps.
- OpenAI-compatible API: Point the official `openai` Python client at `base_url="http://localhost:3000/v1"` to reuse existing code (see the sketch below).
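A minimal sketch of that last option: the official `openai` Python client pointed at the gateway. The model name is a placeholder for whatever your adapters serve, the key comes from `./bin/orbit.sh key create`, and streaming is shown on the assumption that your route supports it; drop `stream=True` otherwise.

```python
# Reusing existing OpenAI-client code against a self-hosted ORBIT gateway.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="<KEY>")

stream = client.chat.completions.create(
    model="granite4:1b",  # placeholder: pick a model from your ORBIT config
    messages=[{"role": "user", "content": "What data sources can you query?"}],
    stream=True,          # assumes streaming is supported on this route
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```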
Video (orbit-cli-chat.mp4): Using the orbit-chat CLI. Run `orbit-chat -h` for options.
See the tutorial for step-by-step instructions on connecting your data, performing retrieval, and watching as ORBIT converts natural language into SQL queries and API calls.
Video (ev-analytics-example.mp4): Querying the Electric Vehicle Population Data set in natural language (Data Source: data.wa.gov).
- Installation guide
- Configuration reference
- Authentication & API keys
- RAG adapters
- Roadmap
- Contributing & Code of Conduct
- Open an issue or discussion.
- Commercial support + managed hosting: OrbitPods.io.
- Say hi on Schmitech.ai.
Apache 2.0 – see LICENSE.