Natural Language Test Authoring
Natural Language Test Authoring lets you describe what you want to test in plain English. The LLM translates your description into a complete, executable test scenario — including HTTP method, headers, request body, and assertions — which you can review, adjust, and run directly from the CLI or dashboard.
Natural Language Test Authoring is part of the AI Intelligence add-on. See feature availability for per-plan quotas.
How it works
- Describe your test in plain English — e.g., "Create a new user with a valid email and password, then verify the response contains an ID and the status is 201".
- Generate — the LLM produces a structured test scenario with all HTTP details filled in.
- Preview — inspect the generated scenario before running it.
- Approve — confirm the scenario looks correct.
- Execute — run the scenario against your API and see pass/fail results immediately.
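The lifecycle above can be sketched as a small state machine. This is an illustrative model, not the tool's implementation: only the `draft` status and the approve/execute steps come from the API examples in this guide, and the `approved`/`executed` status names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical model of the authoring lifecycle: describe -> generate ->
# preview -> approve -> execute. "draft" mirrors the status shown in the
# API response examples; the other status names are illustrative.
@dataclass
class Scenario:
    description: str
    method: str = "GET"
    path: str = "/"
    status: str = "draft"  # generation always produces a draft

    def approve(self) -> None:
        if self.status != "draft":
            raise ValueError("only draft scenarios can be approved")
        self.status = "approved"

    def execute(self) -> str:
        if self.status != "approved":
            raise ValueError("approve the scenario before running it")
        self.status = "executed"
        return self.status

s = Scenario("Create a new user, expect 201", method="POST", path="/api/v1/users")
s.approve()
print(s.execute())  # executed
```

The point of the draft/approved split is that a generated scenario is never run until you have reviewed it.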
CLI usage
Generate a test scenario
```shell
nat test "Create a new user with email and password, expect 201 with an id field" \
  --base-url https://api.example.com
```

This prints the generated scenario in a human-readable table and saves it to your workspace.
Generate and execute immediately
```shell
nat test "Login with valid credentials and confirm the response contains an access token" \
  --base-url https://api.example.com \
  --execute
```

Adding --execute runs the scenario immediately after generation and prints pass/fail results.
Generate in JSON format
```shell
nat test "Delete a product by ID as an admin, expect 204" \
  --base-url https://api.example.com \
  --format json
```

Available flags
| Flag | Description | Default |
|---|---|---|
| --base-url <url> | Base URL for the API under test | — |
| --execute | Execute the scenario immediately after generation | false |
| --format json\|table | Output format | table |
| --output <path> | Write scenario JSON to file | stdout |
| --auth-header <value> | Authorization header to include in requests | — |
| --llm-provider <name> | Override the default LLM provider | from .natrc |
API endpoints
Generate a test scenario
```
POST /api/v1/test/generate
Content-Type: application/json
X-API-Key: $NAT_API_KEY

{
  "description": "List all orders for the authenticated user and confirm each order has an id, status, and total",
  "base_url": "https://api.example.com"
}
```

Response:
```
{
  "id": "scenario_01HXY8QMJRP4N7TDBWVK2CFG5",
  "description": "List all orders for the authenticated user and confirm each order has an id, status, and total",
  "method": "GET",
  "path": "/api/v1/orders",
  "headers": {
    "Authorization": "Bearer {{access_token}}"
  },
  "body": null,
  "assertions": [
    { "type": "status", "expected": 200 },
    { "type": "json_schema", "path": "$[*].id", "rule": "exists" },
    { "type": "json_schema", "path": "$[*].status", "rule": "exists" },
    { "type": "json_schema", "path": "$[*].total", "rule": "exists" }
  ],
  "status": "draft",
  "created_at": "2026-04-09T11:00:00Z"
}
```

Execute a generated scenario
```
POST /api/v1/test/execute
Content-Type: application/json
X-API-Key: $NAT_API_KEY

{
  "scenario_id": "scenario_01HXY8QMJRP4N7TDBWVK2CFG5",
  "base_url": "https://api.example.com",
  "variables": {
    "access_token": "eyJhbGciOiJIUzI1NiJ9..."
  }
}
```

Response:
```
{
  "scenario_id": "scenario_01HXY8QMJRP4N7TDBWVK2CFG5",
  "run_id": "run_01HY1R7SDPQ3MK5NXVT9GAC08",
  "status": "passed",
  "duration_ms": 312,
  "assertions": [
    { "type": "status", "expected": 200, "actual": 200, "passed": true },
    { "type": "json_schema", "path": "$[*].id", "rule": "exists", "passed": true },
    { "type": "json_schema", "path": "$[*].status", "rule": "exists", "passed": true },
    { "type": "json_schema", "path": "$[*].total", "rule": "exists", "passed": true }
  ]
}
```

Dashboard
The NL Testing panel is accessible from Testing → Scenarios in the sidebar.
Describe your test
Type a plain-English description of what you want to test into the input area. Be as specific as you like — the more detail you include, the better the generated scenario.
Examples of good descriptions:
- "Attempt to access another user's profile without authentication, expect 401"
- "Create a product with a negative price, expect 422 with a validation error message"
- "Paginate through the /orders endpoint using page and per_page query params, verify the total count matches"
Generate
Click Generate. The LLM produces a complete scenario in a few seconds and displays it in the preview panel, showing the method, path, headers, body, and assertions.
Preview and adjust
Review the generated scenario. Edit any field directly in the preview — method, path, headers, body, or individual assertions.
Approve
Click Approve to mark the scenario as ready for execution.
Run
Click Run to execute the scenario against your API. The results panel shows pass/fail for each assertion, the raw request, and the actual response.
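To make the pass/fail results concrete, here is a minimal sketch of how the two assertion types shown in the API examples (`status`, and `json_schema` with an `exists` rule) could be evaluated against a response. The assertion shapes come from this guide; the evaluation logic, including the deliberately tiny JSONPath handling, is an assumption, not the product's implementation.

```python
# Minimal sketch of evaluating the two assertion types from the API examples.
# Only paths of the form "$[*].field" are handled; real JSONPath support
# would be far more general.
def evaluate(assertion: dict, status_code: int, body) -> bool:
    if assertion["type"] == "status":
        return status_code == assertion["expected"]
    if assertion["type"] == "json_schema" and assertion["rule"] == "exists":
        field = assertion["path"].split(".")[-1]  # "$[*].id" -> "id"
        return isinstance(body, list) and all(field in item for item in body)
    raise ValueError(f"unknown assertion: {assertion}")

body = [{"id": 1, "status": "paid", "total": 9.5}]
results = [
    evaluate({"type": "status", "expected": 200}, 200, body),
    evaluate({"type": "json_schema", "path": "$[*].id", "rule": "exists"}, 200, body),
]
print(all(results))  # True
```

A run is reported as passed only when every assertion in the scenario evaluates to true.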
Writing effective descriptions
The quality of generated scenarios depends on the clarity of your description. Follow these guidelines for the best results:
| Tip | Example |
|---|---|
| Specify the HTTP action | "Create a new product" rather than just "product endpoint" |
| Include auth context | "as an authenticated admin user", "without a token" |
| State expected outcomes | "expect a 422 with a validation error" |
| Mention key fields | "verify the response contains an id and a created_at timestamp" |
| Describe edge cases explicitly | "with a missing required field email" |
Feature availability
| Plan | NL scenarios / month | Notes |
|---|---|---|
| Free | ❌ Not available | — |
| Pro | 10 | Quota shared across the workspace |
| Team | 50 | Quota shared across the workspace |
| Enterprise | Unlimited | — |
When your monthly quota is exhausted, POST /api/v1/test/generate returns 402 Payment Required with error code FEATURE_QUOTA_EXCEEDED.
Each call to /api/v1/test/generate counts as one scenario. Calling /api/v1/test/execute on an already-generated scenario does not consume additional quota.
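A client can detect quota exhaustion by checking for the documented 402 response. The status code and the FEATURE_QUOTA_EXCEEDED error code come from this guide; the response body shape (`{"error": {"code": ...}}`) and the handler below are assumptions for illustration.

```python
# Illustrative handling of the documented quota-exhaustion response:
# 402 Payment Required with error code FEATURE_QUOTA_EXCEEDED.
# The {"error": {"code": ...}} body shape is an assumed convention.
class QuotaExceeded(Exception):
    pass

def check_generate_response(status_code: int, body: dict) -> dict:
    if status_code == 402 and body.get("error", {}).get("code") == "FEATURE_QUOTA_EXCEEDED":
        raise QuotaExceeded(
            "monthly NL scenario quota exhausted; executing an "
            "already-generated scenario does not consume quota"
        )
    return body
```

Since /api/v1/test/execute does not consume quota, re-running previously generated scenarios is a safe fallback once generation starts failing.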
LLM provider configuration
Natural Language Test Authoring requires a configured LLM provider. See the LLM Test Plan Generator guide for environment variable details, or run nat doctor to verify your setup.
Related
- LLM Test Plan Generator — generate full test plans from an OpenAPI spec
- Root Cause Analysis — AI-powered failure diagnosis
- Pricing & Plans — plan comparison and quotas
- CLI Reference — all nat commands and flags