
AI Provider Configuration

NAT's AI Assistant supports multiple AI providers. By default, NAT uses its own OpenAI integration; no configuration is needed for the free tier. To use your own provider or a different model, set environment variables or add an `ai_assistant` block to your `.natrc`.


Supported providers

| Provider | Flag value | Notes |
|---|---|---|
| OpenAI | `openai` | Default. GPT-4o recommended. |
| Anthropic | `anthropic` | Claude 3.5 Sonnet and Opus supported. |
| Ollama | `ollama` | Local inference; no data leaves your machine. |
| Azure OpenAI | `azure-openai` | Use your Azure subscription and deployment. |

Provider configuration

Environment variables:

```shell
export NAT_AI_PROVIDER=openai
export NAT_AI_API_KEY=sk-...
export NAT_AI_MODEL=gpt-4o        # optional, defaults to gpt-4o
```

.natrc config:

```yaml
ai_assistant:
  provider: openai
  model: gpt-4o
  api_key: ${NAT_AI_API_KEY}
```
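The `${NAT_AI_API_KEY}` reference pulls the key from the environment when the config is loaded. NAT's exact interpolation rules aren't documented here; a minimal sketch assuming common envsubst-style `${VAR}` substitution (the key value is a stand-in):

```shell
# Sketch of env-style interpolation for the .natrc example above.
# Assumes simple ${VAR} substitution; NAT's exact rules may differ.
export NAT_AI_API_KEY="sk-test-123"   # stand-in key for illustration
line='api_key: ${NAT_AI_API_KEY}'     # raw line as it appears in .natrc
expanded=$(eval "echo \"$line\"")     # expand ${NAT_AI_API_KEY} from the environment
echo "$expanded"
```

Keeping the literal `${NAT_AI_API_KEY}` in `.natrc` means the file can be committed without leaking the key itself.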

Environment variables reference

| Variable | Description | Default |
|---|---|---|
| `NAT_AI_PROVIDER` | Provider: `openai`, `anthropic`, `ollama`, `azure-openai` | `openai` |
| `NAT_AI_API_KEY` | API key for the chosen provider | NAT's built-in key (free tier) |
| `NAT_AI_MODEL` | Model name or deployment name | Provider default |
| `NAT_AI_BASE_URL` | Custom base URL (Ollama, Azure, proxies) | Provider default |
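The "Provider default" entries behave like shell-style fallbacks: an unset variable resolves to the provider's own default. A minimal sketch of that logic (illustrative only, not NAT's actual lookup code):

```shell
# Sketch: an unset variable falls back to a provider default.
unset NAT_AI_MODEL
provider_default_model="gpt-4o"   # assumed default for the openai provider
model="${NAT_AI_MODEL:-$provider_default_model}"
echo "$model"
```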

Local / offline mode with Ollama

For air-gapped environments or when you need data to stay on-premises, use Ollama:

```shell
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model (no internet required after this step)
ollama pull llama3.2

# Configure NAT to use it
export NAT_AI_PROVIDER=ollama
export NAT_AI_MODEL=llama3.2

# Run AI commands as normal
nat ai plan --spec openapi.yaml
```

Ollama-based AI responses are typically slower and less capable than those from cloud providers, but all data stays local. For sensitive environments, this trade-off is often worth it.
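If the Ollama server runs on a different machine (a shared GPU box, for example), `NAT_AI_BASE_URL` from the reference table is the knob to point NAT at it. A hedged sketch: 11434 is Ollama's default port, but the hostname and the exact URL format NAT expects are assumptions.

```shell
# Sketch: targeting a remote Ollama server.
# The hostname is hypothetical; 11434 is Ollama's default port,
# and the URL format NAT expects is an assumption.
OLLAMA_HOST="gpu-box.internal"
export NAT_AI_PROVIDER=ollama
export NAT_AI_BASE_URL="http://${OLLAMA_HOST}:11434"
echo "$NAT_AI_BASE_URL"
```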


Rate limits and quotas

| Scenario | Limit |
|---|---|
| NAT free tier (built-in key) | 5 AI queries / month |
| Your own OpenAI key | Your OpenAI account limits apply |
| Your own Anthropic key | Your Anthropic account limits apply |
| Ollama (local) | No limit; bound only by local hardware |
| Azure OpenAI | Your Azure quota applies |

To avoid hitting NAT's free-tier quota in large teams, configure your own provider key in `.natrc` or environment variables. Your key is never stored by NAT; it's used only for the duration of the command.
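One way to enforce this in a team setting is a CI guard that fails early when no key is configured. This is a hypothetical wrapper, not a NAT feature:

```shell
# Hypothetical CI guard: fail the job if no team API key is configured,
# so pipeline runs never silently burn the 5-query free tier.
unset NAT_AI_API_KEY   # simulate an unconfigured CI environment
if [ -z "${NAT_AI_API_KEY:-}" ]; then
  echo "NAT_AI_API_KEY is not set; refusing to fall back to the free tier" >&2
  guard_status=1
else
  guard_status=0
fi
echo "$guard_status"
```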


Want to just scan? Quick Scan guide →
