# LLM Test Plan Generator
The LLM Test Plan Generator turns an API specification into a structured, reviewable test plan in seconds. Upload your OpenAPI spec (or point NAT at your live API), and the LLM produces a complete set of test scenarios covering happy paths, edge cases, auth boundaries, and data-validation checks, ready for your review before execution.
The LLM Test Plan Generator is part of the AI Intelligence add-on. See feature availability for per-plan quotas.
## How it works
1. **Upload** – provide your OpenAPI spec (YAML or JSON) or a base URL for spec discovery.
2. **Generate** – NAT sends the spec to the configured LLM and receives a structured test plan as JSON.
3. **Review** – inspect each test scenario in the dashboard, edit descriptions, adjust expected responses, or remove irrelevant cases.
4. **Approve** – mark the plan as approved to unlock execution.
5. **Execute** – run the approved plan against your API; results are attached to the plan record.
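The workflow above hinges on a small set of plan statuses (`generating`, `draft`, `approved`, taken from the API examples on this page). A minimal illustrative sketch of the gate that execution enforces; the helper is not part of NAT, just a model of the documented rule that only approved plans can run:

```python
# Plan statuses observed in the API examples on this page.
# The enforcement logic is a sketch of documented behavior, not NAT source code.
PLAN_STATUSES = ("generating", "draft", "approved")

def can_execute(plan: dict) -> bool:
    """A plan can only be executed once it has been reviewed and approved."""
    return plan.get("status") == "approved"

plan = {"id": "plan_123", "status": "draft"}
assert not can_execute(plan)   # still awaiting review

plan["status"] = "approved"
assert can_execute(plan)       # execution unlocked
```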
## CLI usage
### Generate a test plan

```shell
nat plan --spec openapi.yaml --base-url https://api.example.com
```

This command:

- Reads `openapi.yaml`
- Sends it to the configured LLM provider
- Prints the generated plan ID and a summary of scenarios
- Saves the plan to your workspace (or prints JSON with `--format json`)
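A plan exported with `--format json` (or written via `--output plan.json`) can be post-processed with ordinary tooling. For example, a quick per-tag scenario count in Python; the plan schema here is inferred from the API examples on this page, so treat it as a sketch:

```python
import json
from collections import Counter

def tag_counts(plan: dict) -> Counter:
    """Count how many scenarios in a plan carry each tag."""
    counts = Counter()
    for scenario in plan.get("scenarios", []):
        counts.update(scenario.get("tags", []))
    return counts

# Shape inferred from this page's GET /api/v1/plans/{id} example.
plan = json.loads("""
{
  "scenarios": [
    {"id": "sc_001", "tags": ["happy-path", "pagination"]},
    {"id": "sc_002", "tags": ["happy-path"]}
  ]
}
""")
assert tag_counts(plan)["happy-path"] == 2
assert tag_counts(plan)["pagination"] == 1
```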
### Generate and save to file

```shell
nat plan --spec openapi.yaml --base-url https://api.example.com --output plan.json
```

### Generate, approve, and execute in one step

```shell
nat plan --spec openapi.yaml --base-url https://api.example.com --auto-approve --execute
```

`--auto-approve` skips the manual review step. Use with care: always review generated test plans before running them against a production environment.
### Available flags
| Flag | Description | Default |
|---|---|---|
| `--spec <path>` | Path to OpenAPI/Swagger YAML or JSON file | – |
| `--base-url <url>` | Base URL for the API under test | – |
| `--output <path>` | Write plan JSON to file instead of stdout | stdout |
| `--format json\|table` | Output format | `table` |
| `--auto-approve` | Approve the plan without manual review | `false` |
| `--execute` | Execute the plan immediately after generation | `false` |
| `--llm-provider <name>` | Override the default LLM provider | from `.natrc` |
## API endpoints
### Create a test plan
```http
POST /api/v1/plans
Content-Type: application/json
X-API-Key: $NAT_API_KEY

{
  "spec_url": "https://api.example.com/openapi.yaml",
  "base_url": "https://api.example.com"
}
```

Alternatively, upload a spec file:
```shell
curl -X POST https://api.nat-testing.io/api/v1/plans \
  -H "X-API-Key: $NAT_API_KEY" \
  -F "spec=@openapi.yaml" \
  -F "base_url=https://api.example.com"
```

Response:
```json
{
  "id": "plan_01HXY3Z7KQNVJG84WDPT6BCS9",
  "status": "generating",
  "scenario_count": null,
  "created_at": "2026-04-09T11:00:00Z"
}
```

Generation is asynchronous. Poll the plan record or listen for the `plan.generated` WebSocket event.
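Because generation is asynchronous, clients typically poll `GET /api/v1/plans/{id}` until `status` leaves `generating`. A hedged Python sketch of that loop; `fetch_plan` is a stand-in for your HTTP client, and the interval and timeout values are arbitrary choices, not documented recommendations:

```python
import time

def wait_for_plan(fetch_plan, plan_id: str,
                  interval: float = 2.0, timeout: float = 120.0) -> dict:
    """Poll until the plan leaves 'generating' or the timeout expires.

    `fetch_plan` is any callable that performs GET /api/v1/plans/{id}
    and returns the decoded JSON body as a dict.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        plan = fetch_plan(plan_id)
        if plan["status"] != "generating":
            return plan
        time.sleep(interval)
    raise TimeoutError(f"plan {plan_id} still generating after {timeout}s")

# Simulated client: returns 'generating' twice, then a finished draft.
responses = iter([
    {"status": "generating"},
    {"status": "generating"},
    {"status": "draft", "scenario_count": 24},
])
plan = wait_for_plan(lambda _id: next(responses), "plan_123", interval=0.0)
assert plan["status"] == "draft"
```

In production you would replace the lambda with a real request carrying the `X-API-Key` header, or subscribe to the `plan.generated` WebSocket event instead of polling.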
### Get a test plan
```http
GET /api/v1/plans/{id}
X-API-Key: $NAT_API_KEY
```

Response (generated):
```json
{
  "id": "plan_01HXY3Z7KQNVJG84WDPT6BCS9",
  "status": "draft",
  "scenario_count": 24,
  "scenarios": [
    {
      "id": "sc_001",
      "name": "GET /users – returns paginated list for authenticated user",
      "method": "GET",
      "path": "/users",
      "expected_status": 200,
      "auth_required": true,
      "tags": ["happy-path", "pagination"]
    }
  ],
  "created_at": "2026-04-09T11:00:00Z",
  "generated_at": "2026-04-09T11:00:08Z"
}
```

### Update a test plan (approve or edit)
```http
PATCH /api/v1/plans/{id}
Content-Type: application/json
X-API-Key: $NAT_API_KEY

{
  "status": "approved"
}
```

To edit individual scenarios, include a `scenarios` array with updated scenario objects. Only fields present in the request body are changed.
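The PATCH semantics described above ("only fields present in the request body are changed") amount to a shallow merge per scenario. A small Python sketch of that merge; matching scenarios by `id` is an assumption consistent with the examples here, not a documented guarantee:

```python
def apply_patch(scenario: dict, patch: dict) -> dict:
    """Shallow merge: fields present in the patch win, all others are kept."""
    return {**scenario, **patch}

original = {"id": "sc_001", "expected_status": 200, "tags": ["happy-path"]}
patched = apply_patch(original, {"expected_status": 204})

# Only the patched field changed; id and tags survive untouched.
assert patched == {"id": "sc_001", "expected_status": 204, "tags": ["happy-path"]}
```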
### Execute a test plan
```http
POST /api/v1/plans/{id}/execute
X-API-Key: $NAT_API_KEY
```

The plan must be in `approved` status. Returns a run record:
```json
{
  "plan_id": "plan_01HXY3Z7KQNVJG84WDPT6BCS9",
  "run_id": "run_01HY0Q5RCNM7KT3PWVX8FDB24",
  "status": "running",
  "started_at": "2026-04-09T11:01:00Z"
}
```

## Dashboard
Access the Test Plans section from the sidebar: Testing → Test Plans.
### Upload and generate
Click New Test Plan, then either upload an OpenAPI YAML/JSON file or paste a spec URL. Click Generate; the LLM produces a plan in a few seconds.
### Review scenarios
Each scenario is listed in a table with its HTTP method, path, expected status, auth requirement, and tags. Click any scenario to edit its description, expected response, or tags inline.
### Approve
When the plan looks correct, click Approve Plan. This unlocks the Execute button.
### Execute and view results
Click Execute to run all scenarios against the configured base URL. Results appear inline: each scenario shows pass/fail, the actual response status, and any assertion failures.
## Feature availability
| Plan | Test plans / month | Notes |
|---|---|---|
| Free | Not available | – |
| Pro | 5 | Quota shared across the workspace |
| Team | 20 | Quota shared across the workspace |
| Enterprise | Unlimited | – |
When your monthly quota is exhausted, `POST /api/v1/plans` returns `402 Payment Required` with error code `FEATURE_QUOTA_EXCEEDED`. Quota resets on the first day of the next billing period.
On Pro, unused plan quota does not roll over. If you need more plans, upgrade to Team or Enterprise, or contact hello@nat-testing.io for a custom add-on.
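A client can distinguish quota exhaustion from other plan-creation failures by checking both the 402 status and the `FEATURE_QUOTA_EXCEEDED` code. A hedged Python sketch; the error envelope shape `{"error": {"code": ...}}` is an assumption for illustration, so check the actual response body in your environment:

```python
def is_quota_exceeded(status_code: int, body: dict) -> bool:
    """True when plan creation failed because the monthly quota is used up."""
    # Assumed error envelope: {"error": {"code": "FEATURE_QUOTA_EXCEEDED", ...}}
    return (
        status_code == 402
        and body.get("error", {}).get("code") == "FEATURE_QUOTA_EXCEEDED"
    )

assert is_quota_exceeded(402, {"error": {"code": "FEATURE_QUOTA_EXCEEDED"}})
assert not is_quota_exceeded(402, {"error": {"code": "PAYMENT_FAILED"}})
assert not is_quota_exceeded(429, {"error": {"code": "FEATURE_QUOTA_EXCEEDED"}})
```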
## LLM provider configuration
Test plan generation requires a configured LLM provider. Set your provider in `.natrc` or via environment variables:
```shell
# OpenAI (default)
export OPENAI_API_KEY=sk-...

# Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
export NAT_LLM_PROVIDER=anthropic

# Azure OpenAI
export AZURE_OPENAI_API_KEY=...
export AZURE_OPENAI_ENDPOINT=https://my-resource.openai.azure.com/
export NAT_LLM_PROVIDER=azure_openai
```

Run `nat doctor` to verify your LLM configuration before generating plans.
## Related
- Natural Language Test Authoring – describe tests in plain English
- Root Cause Analysis – AI-powered failure diagnosis
- Pricing & Plans – plan comparison and quotas
- CLI Reference – all `nat` commands and flags