Provider Setup Guides

This guide covers setup for the most common providers. AgentZero supports 37 providers — run agentzero providers for the full list.

OpenAI

  1. Get an API key from platform.openai.com/api-keys.
  2. Configure:

agentzero onboard --provider openai --model gpt-4o --yes
agentzero auth setup-token --provider openai --token sk-...

Or set the environment variable:

export OPENAI_API_KEY="sk-..."

TOML config:

[provider]
kind = "openai"
base_url = "https://api.openai.com/v1"
model = "gpt-4o"

Available models: gpt-4o, gpt-4o-mini, gpt-4-turbo, o1, o1-mini, o3-mini


Anthropic

  1. Get an API key from console.anthropic.com/settings/keys.
  2. Configure:

agentzero onboard --provider anthropic --model claude-sonnet-4-6 --yes
agentzero auth setup-token --provider anthropic --token sk-ant-...

Or set the environment variable:

export ANTHROPIC_API_KEY="sk-ant-..."

TOML config:

[provider]
kind = "anthropic"
base_url = "https://api.anthropic.com"
model = "claude-sonnet-4-6"

Available models: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5-20251001
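Under the hood, requests for this provider go to the Anthropic Messages API, which authenticates with an x-api-key header rather than a Bearer token. A rough sketch of the request shape (nothing is sent here; the version date is an assumption):

```python
# Build, but do not send, an Anthropic Messages API request.
import json
import os

base_url = "https://api.anthropic.com"
headers = {
    "x-api-key": os.environ.get("ANTHROPIC_API_KEY", "sk-ant-..."),
    "anthropic-version": "2023-06-01",  # required version header (assumed value)
    "content-type": "application/json",
}
body = json.dumps({
    "model": "claude-sonnet-4-6",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}],
})
print(base_url + "/v1/messages")  # https://api.anthropic.com/v1/messages
```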


OpenRouter

OpenRouter gives you access to hundreds of models through a single API key.

  1. Get an API key from openrouter.ai/keys.
  2. Configure:

agentzero onboard --provider openrouter --model anthropic/claude-sonnet-4-6 --yes
agentzero auth setup-token --provider openrouter --token sk-or-v1-...

Or set the environment variable:

export OPENROUTER_API_KEY="sk-or-v1-..."

TOML config:

[provider]
kind = "openrouter"
base_url = "https://openrouter.ai/api/v1"
model = "anthropic/claude-sonnet-4-6"

Model names use the format provider/model — e.g., openai/gpt-4o, google/gemini-pro, meta-llama/llama-3.1-70b-instruct.
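The provider/model format splits cleanly on the first slash, which is how a client might separate the upstream provider from the model name (a hypothetical helper, not AgentZero's API):

```python
# Split an OpenRouter model id into its provider prefix and model name.
def split_model_id(model_id: str) -> tuple[str, str]:
    provider, _, model = model_id.partition("/")  # split on the first '/'
    return provider, model

print(split_model_id("anthropic/claude-sonnet-4-6"))  # ('anthropic', 'claude-sonnet-4-6')
print(split_model_id("meta-llama/llama-3.1-70b-instruct"))  # ('meta-llama', 'llama-3.1-70b-instruct')
```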


Ollama

Ollama runs models locally. No API key needed.

  1. Install Ollama from ollama.com.
  2. Pull a model:

ollama pull llama3.1:8b

  3. Start Ollama (it runs on http://localhost:11434 by default):

ollama serve

  4. Configure AgentZero:

agentzero onboard --provider ollama --model llama3.1:8b --yes

TOML config:

[provider]
kind = "ollama"
base_url = "http://localhost:11434/v1"
model = "llama3.1:8b"

AgentZero can auto-discover local Ollama instances:

agentzero local discover
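Conceptually, discovery can be as simple as probing the default ports of common local servers and reporting which ones accept a TCP connection. The port list and approach below are assumptions about how such a feature could work, not AgentZero internals:

```python
# Probe well-known local-server ports and report which accept connections.
import socket

DEFAULT_PORTS = {
    11434: "ollama",
    1234: "lmstudio",
    8080: "llamacpp",
    8000: "vllm",
}

def discover(host: str = "127.0.0.1", timeout: float = 0.2) -> list[str]:
    found = []
    for port, kind in DEFAULT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(kind)
        except OSError:
            pass  # nothing listening on this port
    return found

print(discover())
```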

Other Local Servers

LM Studio, llama.cpp, and vLLM all expose OpenAI-compatible endpoints and are configured the same way, differing only in default port.

LM Studio:

[provider]
kind = "lmstudio"
base_url = "http://localhost:1234/v1"
model = "your-model-name"

llama.cpp:

[provider]
kind = "llamacpp"
base_url = "http://localhost:8080/v1"
model = "default"

vLLM:

[provider]
kind = "vllm"
base_url = "http://localhost:8000/v1"
model = "your-model-name"

Hosted Providers

These providers have built-in base URLs — you only need to set the API key:

Provider        Kind         Env Var
Groq            groq         GROQ_API_KEY
Mistral         mistral      MISTRAL_API_KEY
xAI (Grok)      xai          XAI_API_KEY
DeepSeek        deepseek     DEEPSEEK_API_KEY
Together AI     together     TOGETHER_API_KEY
Fireworks AI    fireworks
Perplexity      perplexity
Cohere          cohere
NVIDIA NIM      nvidia

Example for Groq:

agentzero onboard --provider groq --model llama-3.1-70b-versatile --yes
export GROQ_API_KEY="gsk_..."
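The env-var names in the table follow a simple pattern: the provider kind upper-cased plus _API_KEY. A hypothetical helper (not part of AgentZero) that captures the pattern:

```python
# Derive the environment-variable name from a provider kind string.
def env_var_for(kind: str) -> str:
    return kind.upper() + "_API_KEY"

print(env_var_for("groq"))      # GROQ_API_KEY
print(env_var_for("deepseek"))  # DEEPSEEK_API_KEY
```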

Custom Providers

For any OpenAI-compatible API not in the catalog:

[provider]
kind = "custom:https://my-api.example.com/v1"
model = "my-model"

For Anthropic-compatible APIs:

[provider]
kind = "anthropic-custom:https://my-proxy.example.com"
model = "claude-sonnet-4-6"

Transport Settings

Transport settings — timeout, retries, and circuit breaking — can be configured per provider:

[provider.transport]
timeout_ms = 30000 # request timeout (default: 30s)
max_retries = 2 # retry count on failure
circuit_breaker_threshold = 5 # failures before circuit opens
circuit_breaker_reset_ms = 60000 # time before half-open retry
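A minimal sketch of the circuit-breaker behavior these settings describe: after `circuit_breaker_threshold` consecutive failures the circuit opens and requests are rejected, and once `circuit_breaker_reset_ms` elapses a trial ("half-open") request is allowed again. Illustrative only, not AgentZero's implementation:

```python
# Toy circuit breaker: open after N consecutive failures, half-open after reset.
import time

class CircuitBreaker:
    def __init__(self, threshold: int = 5, reset_ms: int = 60000):
        self.threshold = threshold
        self.reset_s = reset_ms / 1000
        self.failures = 0
        self.opened_at = 0.0

    def allow_request(self) -> bool:
        if self.failures < self.threshold:
            return True  # circuit closed: requests flow normally
        # circuit open: permit a trial request once the reset time has elapsed
        return time.monotonic() - self.opened_at >= self.reset_s

    def record_failure(self) -> None:
        self.failures += 1
        if self.failures == self.threshold:
            self.opened_at = time.monotonic()  # circuit just opened

    def record_success(self) -> None:
        self.failures = 0  # any success closes the circuit

cb = CircuitBreaker(threshold=2, reset_ms=100)
cb.record_failure()
cb.record_failure()
print(cb.allow_request())  # False right after the circuit opens
```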

Verifying Your Setup
# List all supported providers (marks active one)
agentzero providers
# Check provider quota and API key status
agentzero providers quota
# Diagnose model availability
agentzero doctor models