Ask anything in plain text, get commands or answers instantly. No quotes needed.
A CLI tool that lets you interact with AI models using natural language, without the need for quotes around your questions.
- Natural input: Just type `ask how to list docker containers` - no quotes needed
- Flexible flags: Put options before or after your question - both work!
- Smart command injection: Commands are safely flattened into one-liners when possible and pasted directly to your terminal
- Smart intent detection: Automatically detects if you want a command or an answer
- Multiple providers: Supports Gemini (default), OpenAI, and Anthropic Claude
- Streaming responses: Real-time token-by-token output
- Thinking mode: Enable AI reasoning for complex tasks (`-t` flag or config)
- Context awareness: Optional conversation memory per directory
- Safe command execution: Detects and warns about destructive commands
- Sudo retry: Suggests retry with sudo on permission denied errors
- Flexible configuration: TOML config with environment variable overrides
- Custom commands: Define your own commands with custom system prompts
- Piping support: Works with `git diff | ask cm` style workflows
- Auto-update: Background update checks with notifications
- Shell completions: Bash, Zsh, Fish, PowerShell, Elvish support
# Linux/macOS
curl -fsSL install.cat/verseles/ask | sh

# Windows (PowerShell)
irm install.cat/verseles/ask | iex

# From source (Cargo)
cargo install --git https://github.com/verseles/ask

The installer will prompt you to configure your API keys automatically.
# Initialize configuration (set up API keys)
ask init
# Non-interactive init (for scripts/automation)
ask init -n -p gemini -k YOUR_API_KEY
# Ask questions naturally
ask how to list docker containers
ask what is the capital of France
# Commands are auto-detected and pasted to your terminal
ask delete old log files # Command appears ready to edit/run
ask -y delete old log files # Execute immediately
# Flags can go before OR after your question
ask -x delete old log files
ask delete old log files -x
# Enable thinking mode for complex reasoning
ask -t explain the theory of relativity
# Use context for follow-up questions
ask explain kubernetes -c
ask -c what about pods?
# Pipe input
git diff | ask cm
cat main.rs | ask explain this code

Flags can be placed before or after your question - whatever feels natural.
ask [OPTIONS] <your question here>
ask <your question here> [OPTIONS]
OPTIONS:
-c, --context[=MIN] Use context for current directory (default: 30 min, 0 = permanent)
Examples: -c (30 min), -c60 (1 hour), --context=0 (permanent)
-x, --command Force command mode (bypass auto-detection)
-y, --yes Auto-execute commands without confirmation
-t, --think[=VAL] Enable thinking mode with optional level (min/low/med/high)
Examples: -t, --think, --think=high, -tlow
-m, --model <MODEL> Override configured model
-p, --profile <NAME> Use named profile (e.g., -p work, --profile=local)
-P, --provider <NAME> Override configured provider
-k, --api-key <KEY> API key (for use with init -n)
-n, --non-interactive Non-interactive init (use with -P, -m, -k)
--no-fallback Disable profile fallback for this query
-s, --search Enable web search for this query
--citations Show citations from web search results
--json Output in JSON format
--markdown[=bool] Output rendered in Markdown (--markdown or --markdown=true)
--raw Output raw text without formatting
--no-color Disable colorized output
--color=bool Enable/disable colorized output
--no-follow Disable result echo after execution
--make-prompt Export default prompt template
--make-config Export example ask.toml template
--update Check and install updates
--help-env Show all environment variables
-v, --verbose Show verbose output (profile, provider, model info, debug flags)
-V, --version Show version
-h, --help Show help
SUBCOMMANDS:
init, config Initialize/manage configuration interactively
profiles List all available profiles
--clear Clear current directory context (use with -c)
--history Show context history (use with -c)
Run ask init or ask config to configure interactively:
? What would you like to do?
› View current config
Manage profiles
Exit
Interactive Menu Features
Main Menu:
- View current config - Display all settings in formatted output
- Manage profiles - Create, edit, delete, set default profiles
Profile Management:
- Create new profiles with custom provider, model, API key, base URL
- Edit existing profiles (provider, model, API key, thinking, web search, fallback)
- Delete profiles
- Set default profile
Configuration is loaded from multiple sources (in order of precedence):
- CLI arguments
- Environment variables (ASK_*)
- ./ask.toml or ./.ask.toml (project local)
- ~/ask.toml (home directory - legacy, still supported)
- ~/.config/ask/ask.toml (XDG config - recommended)
- Default values
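For example, a higher-precedence source overrides a lower one. In this sketch (model names are illustrative), the environment variable beats the config file, and the `-m` flag beats both:

```shell
# Suppose ask.toml sets model = "gemini-3-flash-preview"
ASK_MODEL=gemini-2.5-pro ask what is rust           # env var overrides the file
ASK_MODEL=gemini-2.5-pro ask -m gpt-5 what is rust  # -m flag overrides the env var too
```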
# All configuration lives in profiles
# First profile is used by default (set default_profile to change)
# Switch profiles with: ask -p <profile_name>
# Optional: explicitly set default profile
# default_profile = "work"
[profiles.main]
provider = "gemini"
model = "gemini-3-flash-preview"
api_key = "YOUR_API_KEY_HERE"
stream = true
# thinking_level = "low" # For Gemini 3: minimal, low, medium, high
# web_search = false # Enable web search by default
# fallback = "none" # Profile to use on errors: "any", "none", or profile name
# Example: Work profile with OpenAI
# [profiles.work]
# provider = "openai"
# model = "gpt-5"
# api_key = "sk-..."
# reasoning_effort = "medium" # For o1/o3/gpt-5: none, minimal, low, medium, high
# Example: Local profile with Ollama
# [profiles.local]
# provider = "openai"
# base_url = "http://localhost:11434/v1"
# model = "llama3"
# api_key = "ollama" # Dummy key for local servers
[behavior]
auto_execute = false
confirm_destructive = true
timeout = 30
[context]
max_age_minutes = 30
max_messages = 20
# Command-line aliases
[aliases]
# q = "--raw --no-color"
# fast = "-p fast --no-fallback"
# deep = "-t --search"
# Custom commands
[commands.cm]
system = "Generate concise git commit message based on diff"
type = "command"
auto_execute = false

All configuration options can be set via environment variables. Run ask --help-env for the complete reference.
Click to expand full environment variables list
# Profile/Provider selection
ASK_PROFILE=main # Select profile (like -p)
ASK_PROVIDER=gemini # Ad-hoc mode (like -P), mutually exclusive with ASK_PROFILE
ASK_MODEL=gemini-3-flash # Override model
# API Keys (used with ASK_PROVIDER or as fallback)
ASK_GEMINI_API_KEY=... # Gemini API key
ASK_OPENAI_API_KEY=sk-... # OpenAI API key
ASK_ANTHROPIC_API_KEY=sk-ant-... # Anthropic API key
# Custom base URLs (for proxies or compatible APIs)
ASK_GEMINI_BASE_URL=https://...
ASK_OPENAI_BASE_URL=https://... # e.g., for Ollama: http://localhost:11434/v1
ASK_ANTHROPIC_BASE_URL=https://...
# Behavior settings
ASK_AUTO_EXECUTE=false # Auto-execute safe commands
ASK_CONFIRM_DESTRUCTIVE=true # Confirm destructive commands
ASK_TIMEOUT=30 # Request timeout in seconds
# Context settings
ASK_CONTEXT_MAX_AGE=30 # Context TTL in minutes
ASK_CONTEXT_MAX_MESSAGES=20 # Max messages in context
ASK_CONTEXT_PATH=~/.local/share/ask/contexts # Custom storage path
# Update settings
ASK_UPDATE_AUTO_CHECK=true # Enable background update checks
ASK_UPDATE_INTERVAL=24 # Hours between checks (min 1h in aggressive mode)
ASK_UPDATE_CHANNEL=stable # Update channel
ASK_NO_UPDATE=1 # Disable all update checks
# Other
NO_COLOR=1 # Disable colors

Enable real-time web search to get current information beyond the LLM's training data:
# Enable web search for a single query
ask -s what happened in the news today
# Show citations from search results
ask --search --citations latest rust 1.85 features

Web Search Configuration
[profiles.research]
web_search = true
# Domain filtering (Anthropic only)
allowed_domains = ["docs.rs", "stackoverflow.com"]
blocked_domains = ["pinterest.com"]

Provider Notes:
- Gemini: Uses Google Search grounding
- OpenAI: Uses Responses API (only works with official API, not compatible endpoints)
- Anthropic: Uses the web_search_20250305 tool with optional domain filtering
Named profiles let you switch between different configurations quickly, like rclone:
# List all profiles
ask profiles
# Use work profile
ask -p work how to deploy to kubernetes
# Use local profile (Ollama)
ask --profile=local explain this error
# Ad-hoc mode: use provider without config (requires API key)
ask -P gemini -k YOUR_KEY what is rust
# Disable fallback for a single query
ask --no-fallback -p work critical query
# Verbose mode shows which profile is active
ask -v -p work what is kubernetes

Profile Configuration Examples
# Optional: explicitly set default profile (otherwise first profile is used)
default_profile = "work"
# Work profile with cloud provider
[profiles.work]
provider = "openai"
model = "gpt-5"
api_key = "sk-..."
fallback = "personal" # retry with personal on errors
# Personal profile with different provider
[profiles.personal]
provider = "anthropic"
model = "claude-haiku-4-5"
api_key = "sk-ant-..."
fallback = "none" # don't retry with another profile
# Local profile for Ollama/LM Studio
[profiles.local]
provider = "openai"
base_url = "http://localhost:11434/v1"
model = "llama3"
api_key = "ollama" # dummy key for local servers

Profile Resolution: The first profile is used by default. Set default_profile to explicitly choose a different default.
Fallback Options:
- fallback = "profile-name" - Use a specific profile on provider errors
- fallback = "any" - Try any available profile
- fallback = "none" - Disable fallback (fail immediately)
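The "any" option has no example above; as a sketch (profile name and model are illustrative), it looks like this:

```toml
[profiles.main]
provider = "gemini"
model = "gemini-3-flash-preview"
api_key = "..."
fallback = "any"   # on provider errors, try any other configured profile
```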
Google's Gemini models. Get your API key from Google AI Studio.
OpenAI's GPT models. Get your API key from OpenAI Platform.
Anthropic's Claude models. Get your API key from Anthropic Console.
Any OpenAI-compatible API (e.g., Ollama, LM Studio):
[profiles.local]
provider = "openai"
api_key = "ollama"
base_url = "http://localhost:11434/v1"
model = "llama3"

Enable AI reasoning/thinking for more complex tasks. Use -t/--think flag or configure in your config file:
# Enable thinking for a single query
ask -t explain quantum entanglement
ask how does RSA encryption work --think
# Disable thinking (if enabled in config)
ask --no-think what time is it

| Provider | Config Parameter | Values |
|---|---|---|
| Gemini | thinking_level | none, low, medium, high |
| OpenAI | reasoning_effort | none, minimal, low, medium, high |
| Anthropic | thinking_budget | Token count or level (low=4k, medium=8k, high=16k) |
Configure during ask init or manually in your config file.
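As a sketch, the table above maps onto per-profile settings like the following (profile names and values are illustrative):

```toml
[profiles.main]
provider = "gemini"
thinking_level = "high"          # Gemini: none, low, medium, high

# [profiles.work]
# provider = "openai"
# reasoning_effort = "medium"    # OpenAI: none, minimal, low, medium, high

# [profiles.claude]
# provider = "anthropic"
# thinking_budget = 8000         # Anthropic: token count, or low/medium/high
```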
The CLI includes safety detection for potentially destructive commands:
- Commands like rm -rf, sudo, dd, etc. require explicit confirmation
- Use -y to bypass confirmation (use with caution)
- Safe commands like ls, git status, docker ps can auto-execute
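In practice, using the flags documented above (queries are illustrative):

```shell
ask delete all docker volumes      # destructive: requires confirmation first
ask -y delete all docker volumes   # bypasses the confirmation (use with caution)
ask show running containers        # safe command: may auto-execute if configured
```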
The optional context system (-c flag) maintains conversation history per directory:
# Start a conversation
ask -c how do I set up nginx
# Continue the conversation
ask -c what about SSL?
# Clear context
ask -c --clear
# View history
ask -c --history

Context is stored locally and automatically cleaned up after 30 minutes of inactivity.
Define reusable commands in your config file with custom system prompts:
[commands.cm]
system = "Generate a concise git commit message based on the diff provided"
type = "command"
auto_execute = false
[commands.explain]
system = "Explain this code in detail, including what it does and how it works"
inherit_flags = true
[commands.review]
system = "Review this code for bugs, security issues, and improvements"
provider = "anthropic"
model = "claude-3-opus"

Usage:
git diff | ask cm # Generate commit message
cat main.rs | ask explain # Explain code
cat api.py | ask review # Code review

Define short aliases for common flag combinations:
[aliases]
q = "--raw --no-color"
fast = "-p fast --no-fallback"
deep = "-t --search"

Usage:
ask q what is rust # Expands to: ask --raw --no-color what is rust
ask deep explain quantum # Expands to: ask -t --search explain quantum

Customize the AI's behavior by creating ask.md files. These files completely replace the default system prompt.
# Export the default prompt template
ask --make-prompt > ask.md
# Edit ask.md to customize behavior
# The file will be used automatically

Custom Prompt Configuration
Search Order (first found wins):
- Recursive search for ./ask.md or ./.ask.md (traverses up from current directory to root)
- ~/ask.md (home directory)
- ~/.config/ask/ask.md (XDG config)
Command-Specific Prompts:
- ask.cm.md - Custom prompt for the cm command (also searched recursively)
- ask.explain.md - Custom prompt for the explain command (also searched recursively)
Available Variables:
| Variable | Description |
|---|---|
| {os} | Operating system (linux, macos, windows) |
| {shell} | User's shell (/bin/bash, /bin/zsh, etc.) |
| {cwd} | Current working directory |
| {locale} | User's locale (en_US.UTF-8, etc.) |
| {now} | Current date and time |
| {format} | Formatting instruction (markdown/colors/plain) |
Example ask.md:
You are a senior developer assistant. Respond in {locale}.
When asked for commands:
- Use {shell} syntax appropriate for {os}
- Consider the current directory: {cwd}
Current time: {now}
{format}

Generate shell completions for your preferred shell:
# Bash
ask --completions bash >> ~/.bashrc
# Zsh
ask --completions zsh >> ~/.zshrc
# Fish
ask --completions fish > ~/.config/fish/completions/ask.fish
# PowerShell
ask --completions powershell >> $PROFILE
# Elvish
ask --completions elvish >> ~/.elvish/rc.elv

The CLI automatically checks for updates in the background and notifies you on the next run when an update is available. To manually check and install updates:
ask --update

Set ASK_NO_UPDATE=1 to disable automatic update checks.
AGPL-3.0 - see LICENSE
Contributions are welcome! Please see the CODEBASE.md for project structure and ADR.md for architectural decisions.