# AI Provider Configuration
NAT's AI Assistant supports multiple AI providers. By default, NAT uses its own OpenAI integration, so the free tier needs no configuration. To use your own provider or a different model, set environment variables or add an ai_assistant block to your .natrc.
## Supported providers
| Provider | Flag value | Notes |
|---|---|---|
| OpenAI | openai | Default. GPT-4o recommended. |
| Anthropic | anthropic | Claude 3.5 Sonnet and Opus supported. |
| Ollama | ollama | Local inference; no data leaves your machine. |
| Azure OpenAI | azure-openai | Use your Azure subscription and deployment. |
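For example, pointing the assistant at Anthropic is just a matter of swapping the flag value, key, and model. A minimal sketch; the model identifier below is illustrative, so check Anthropic's model list for current names:

```bash
# Switch the assistant to Anthropic (model ID is an example;
# confirm current identifiers in Anthropic's docs)
export NAT_AI_PROVIDER=anthropic
export NAT_AI_API_KEY=sk-ant-...
export NAT_AI_MODEL=claude-3-5-sonnet-latest
```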
## Provider configuration
Environment variables:
```bash
export NAT_AI_PROVIDER=openai
export NAT_AI_API_KEY=sk-...
export NAT_AI_MODEL=gpt-4o  # optional, defaults to gpt-4o
```

.natrc config:

```yaml
ai_assistant:
  provider: openai
  model: gpt-4o
  api_key: ${NAT_AI_API_KEY}
```

## Environment variables reference
| Variable | Description | Default |
|---|---|---|
| NAT_AI_PROVIDER | One of openai, anthropic, ollama, azure-openai | openai |
| NAT_AI_API_KEY | API key for the chosen provider | NAT's built-in key (free tier) |
| NAT_AI_MODEL | Model name (or deployment name on Azure) | Provider default |
| NAT_AI_BASE_URL | Custom base URL (Ollama, Azure, proxies) | Provider default |
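Combining these for Azure OpenAI looks like the sketch below. Every value is a placeholder drawn from your own Azure resource, and note that on Azure the model variable carries your deployment name rather than a raw model name:

```bash
# Illustrative Azure OpenAI setup; all values are placeholders
# from your own Azure resource and deployment
export NAT_AI_PROVIDER=azure-openai
export NAT_AI_API_KEY=<your-azure-key>
export NAT_AI_MODEL=<your-deployment-name>   # Azure uses deployment names
export NAT_AI_BASE_URL=https://<your-resource>.openai.azure.com
```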
## Local / offline mode with Ollama
For air-gapped environments or when you need data to stay on-premises, use Ollama:
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model (no internet required after this step)
ollama pull llama3.2

# Configure NAT to use it
export NAT_AI_PROVIDER=ollama
export NAT_AI_MODEL=llama3.2

# Run AI commands as normal
nat ai plan --spec openapi.yaml
```

Ollama-based AI responses are typically slower and less capable than cloud providers, but all data stays local. For sensitive environments this trade-off is often worth it.
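If Ollama runs somewhere other than its default local endpoint, for instance on a shared GPU box, NAT_AI_BASE_URL from the reference table above should point NAT at it. A minimal sketch; the hostname is hypothetical, and 11434 is Ollama's default port:

```bash
# Use a remote Ollama instance (hostname is hypothetical;
# 11434 is Ollama's default port)
export NAT_AI_PROVIDER=ollama
export NAT_AI_MODEL=llama3.2
export NAT_AI_BASE_URL=http://ollama-box.internal:11434
```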
## Rate limits and quotas
| Scenario | Limit |
|---|---|
| NAT free tier (built-in key) | 5 AI queries / month |
| Your own OpenAI key | Your OpenAI account limits apply |
| Your own Anthropic key | Your Anthropic account limits apply |
| Ollama (local) | No limit; bound only by local hardware |
| Azure OpenAI | Your Azure quota applies |
To avoid hitting NAT's free-tier quota on large teams, configure your own provider key in .natrc or environment variables. Your key is never stored by NAT; it is used only for the duration of the command.
Want to just scan? Quick Scan guide →