Functional Testing, Reimagined
NAT brings the same autonomous, multi-agent intelligence that powers its API security testing to the world of functional testing. A single run() call orchestrates a team of BDI (Belief-Desire-Intention) agents that execute real browser interactions, detect visual regressions, scan for accessibility violations, and measure Core Web Vitals — producing a unified report across all four dimensions.
NAT's functional testing agents run on real browsers via Playwright — no mocked browser APIs, no synthetic DOM emulation. What you test is what your users actually see.
Four Capabilities, One Test Run
🧪 Browser-Based Functional Testing
The BrowserExecutorAgent launches a headless Chromium browser via Playwright and executes your test scenarios as real user interactions. At each step it captures:
- DOM snapshots — full serialized document state for structural assertions
- Screenshots — pixel-perfect renders saved as PNG baselines or comparison targets
- Console logs & network traffic — surfaced in the unified report for debugging failures
- Interaction traces — click, fill, navigate, and wait actions are logged with timestamps
This gives you the same fidelity as a human tester without the manual overhead.
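For a sense of what the agent captures at each step, here is roughly the equivalent in raw Playwright. This is an illustrative sketch, not NAT's internal code; the URL and output path are placeholders.

```python
# What a capture step looks like in raw Playwright (illustrative only;
# NAT's BrowserExecutorAgent wraps this for you).
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.on("console", lambda msg: print(f"[console] {msg.text}"))  # console logs
    page.goto("https://your-app.example.com/")
    html = page.content()                  # DOM snapshot for structural assertions
    page.screenshot(path="homepage.png")   # PNG baseline / comparison target
    browser.close()
```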
👁️ Visual Regression Detection
The VisualRegressionAgent compares each screenshot captured during the run against a stored baseline using pixel-diff analysis. It reports:
- Exact pixel difference counts and percentages per page/component
- Side-by-side diff images highlighting changed regions
- Configurable thresholds — set a tolerance in pixels or percentage before a diff becomes a failure
- Automatic baseline creation on the first run; subsequent runs compare against the stored baseline
No more "it looks fine on my machine" — every deploy is checked against a visual contract.
♿ WCAG Accessibility Scanning
The AccessibilityScannerAgent audits every page against the following WCAG 2.1 rules, covering key Success Criteria:
| WCAG SC | Rule | What It Checks |
|---|---|---|
| 1.1.1 | Non-text Content | Missing alt attributes on <img> elements |
| 1.3.1 | Info & Relationships | Form inputs without associated <label> elements |
| 2.4.2 | Page Titled | <title> element present and non-empty |
| 2.4.4 | Link Purpose | Anchor elements with empty or non-descriptive text |
| 3.1.1 | Language of Page | <html lang> attribute present and valid |
| 4.1.1 | Parsing | Duplicate id attributes within the same document |
| 4.1.2 | Name, Role, Value | Interactive elements missing accessible names |
| — | Heading Order | Skipped heading levels (e.g., <h1> → <h3>) |
Violations are reported with element selectors, WCAG level (A/AA), and remediation guidance.
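As a sketch of what two of these rules check, here is a hand-rolled version of the 1.1.1 and 4.1.1 audits using BeautifulSoup. The snapshot path is a placeholder, and NAT's real rule engine additionally reports selectors, WCAG levels, and remediation guidance.

```python
# Hand-rolled versions of two checks from the table (illustrative only).
from collections import Counter
from bs4 import BeautifulSoup

html = open("snapshots/homepage.html").read()  # a captured DOM snapshot
soup = BeautifulSoup(html, "html.parser")

# 1.1.1 Non-text Content: <img> elements without an alt attribute
missing_alt = [img for img in soup.find_all("img") if not img.has_attr("alt")]

# 4.1.1 Parsing: duplicate id attributes within the same document
ids = Counter(el["id"] for el in soup.find_all(attrs={"id": True}))
duplicate_ids = [i for i, n in ids.items() if n > 1]

print(f"{len(missing_alt)} images missing alt, duplicate ids: {duplicate_ids}")
```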
⚡ Core Web Vitals Performance Testing
The PerformanceTestingAgent measures five key performance metrics against Google's recommended thresholds:
| Metric | Description | Good | Needs Improvement | Poor |
|---|---|---|---|---|
| LCP | Largest Contentful Paint | ≤ 2.5s | 2.5–4s | > 4s |
| FCP | First Contentful Paint | ≤ 1.8s | 1.8–3s | > 3s |
| TTI | Time to Interactive | ≤ 3.8s | 3.8–7.3s | > 7.3s |
| CLS | Cumulative Layout Shift | ≤ 0.1 | 0.1–0.25 | > 0.25 |
| TBT | Total Blocking Time | ≤ 200ms | 200–600ms | > 600ms |
Each metric receives a color-coded score (🟢 Good / 🟡 Needs Improvement / 🔴 Poor) that rolls up into a weighted performance score in the unified report.
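The color-coding follows directly from the table above. A minimal sketch of the rating logic (the rate function is illustrative, not NAT's API; the thresholds are Google's published values):

```python
# Rating a measured value against the thresholds in the table above.
THRESHOLDS = {           # metric: (good_max, needs_improvement_max)
    "LCP": (2.5, 4.0),   # seconds
    "FCP": (1.8, 3.0),   # seconds
    "TTI": (3.8, 7.3),   # seconds
    "CLS": (0.1, 0.25),  # unitless
    "TBT": (200, 600),   # milliseconds
}

def rate(metric: str, value: float) -> str:
    good, ni = THRESHOLDS[metric]
    if value <= good:
        return "🟢 Good"
    return "🟡 Needs Improvement" if value <= ni else "🔴 Poor"

print(rate("LCP", 2.1))  # 🟢 Good
print(rate("TBT", 310))  # 🟡 Needs Improvement
```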
How It Works
The FunctionalTestOrchestrator coordinates all four agents in a single, sequential pipeline. You call run() once and the orchestrator handles everything:
Configure your test scenarios
Define the URLs, user flows, and assertions in a simple config object — no separate test framework syntax to learn.
```python
# nat-engine is the PyPI package; the Python import uses the `mannf` namespace
from mannf.core.functional_orchestrator import FunctionalTestOrchestrator

orchestrator = FunctionalTestOrchestrator(
    base_url="https://your-app.example.com",
    scenarios=[
        {"name": "Homepage load", "url": "/"},
        {"name": "Login flow", "url": "/login", "actions": [
            {"type": "fill", "selector": "#email", "value": "user@example.com"},
            {"type": "fill", "selector": "#password", "value": "secret"},
            {"type": "click", "selector": "button[type=submit]"},
            {"type": "wait_for", "selector": ".dashboard"},
        ]},
    ],
)
```
Run the orchestrator
A single run() call launches all four agents sequentially, passing shared context between them.
```python
results = orchestrator.run()
```
Internally the pipeline executes:
1. BrowserExecutorAgent — runs all scenarios, captures DOM + screenshots
2. VisualRegressionAgent — diffs screenshots against baselines
3. AccessibilityScannerAgent — audits each DOM snapshot for WCAG violations
4. PerformanceTestingAgent — measures Core Web Vitals for each URL
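Conceptually, the hand-off between agents looks like the sketch below. The SharedContext fields and the execute interface are illustrative assumptions, not NAT's internal types.

```python
# Illustrative sketch of a sequential agent pipeline with shared context.
# These classes are assumptions for exposition, not NAT's internal API.
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    screenshots: dict = field(default_factory=dict)    # page -> PNG bytes
    dom_snapshots: dict = field(default_factory=dict)  # page -> serialized DOM
    findings: list = field(default_factory=list)       # accumulated results

def run_pipeline(agents, context: SharedContext) -> SharedContext:
    # Each agent reads what earlier agents produced and appends its findings.
    for agent in agents:
        agent.execute(context)
    return context
```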
Review the unified report
Results are written to an HTML dashboard and a machine-readable JSON file — both covering all four dimensions in a single report.
```python
print(results.summary())
# Tests passed: 12 / 14
# Visual diffs: 1 (homepage: 23px change in header)
# A11Y violations: 3 (2× missing alt text, 1× missing label)
# Perf score: 87 / 100 (LCP: 2.1s ✅ CLS: 0.04 ✅ TBT: 310ms ⚠️)
```
Unified Reporting
Every test run produces two output files: an HTML dashboard (report.html) and a machine-readable JSON file (report.json).
The UnifiedReportGenerator produces both files from the same result object:
```python
from mannf.core.reporting.unified_report import UnifiedReportGenerator

generator = UnifiedReportGenerator(results)
generator.write_html("report.html")
generator.write_json("report.json")
```
The HTML report includes:
- ✅ Passed / ❌ Failed status for each functional scenario
- Side-by-side baseline vs. current screenshots for visual diffs
- Accessibility violation table with WCAG references and element selectors
- Core Web Vitals gauge charts with Google threshold overlays
- A top-level pass/fail badge suitable for embedding in CI/CD summaries
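The JSON file is convenient for CI gating. A minimal sketch, assuming a hypothetical top-level passed boolean in report.json (check the generated file for the actual schema):

```python
# ci_gate.py: fail the build if the NAT report flags failures.
# Assumes a hypothetical top-level "passed" boolean in report.json;
# inspect your generated file for the actual schema.
import json
import sys

with open("report.json") as f:
    report = json.load(f)

if not report.get("passed", False):
    print("NAT functional run failed: see report.html for details")
    sys.exit(1)
```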
NAT vs. Other Testing Tools
| Feature | NAT | Mabl | Testim | Cypress | Playwright (raw) |
|---|---|---|---|---|---|
| AI/autonomous agents | ✅ | ✅ | ✅ | ❌ | ❌ |
| Visual regression | ✅ | ✅ | ✅ | ❌ | ❌ |
| WCAG accessibility scan | ✅ | ❌ | ❌ | ❌ | ❌ |
| Core Web Vitals | ✅ | ❌ | ❌ | ❌ | ❌ |
| API security testing | ✅ | ❌ | ❌ | ❌ | ❌ |
| Unified HTML + JSON report | ✅ | ✅ | ✅ | ❌ | ❌ |
| No-code test authoring | ❌ | ✅ | ✅ | ❌ | ❌ |
| Open-source | ❌ | ❌ | ❌ | ✅ | ✅ |
| Self-hosted | ✅ | ❌ | ❌ | ✅ | ✅ |
NAT is the only tool in this category that combines functional browser testing, visual regression, WCAG accessibility scanning, and Core Web Vitals measurement in a single orchestrated run — powered by autonomous BDI agents that coordinate via ECNP.
Quick Start
Python SDK
```bash
# nat-engine is the PyPI package name; the Python import uses the `mannf` namespace
pip install nat-engine
```
```python
from mannf.core.functional_orchestrator import FunctionalTestOrchestrator
from mannf.core.reporting.unified_report import UnifiedReportGenerator

# Configure
orchestrator = FunctionalTestOrchestrator(
    base_url="https://your-app.example.com",
    scenarios=[{"name": "Homepage", "url": "/"}],
)

# Run all four agents in one call
results = orchestrator.run()

# Generate reports
generator = UnifiedReportGenerator(results)
generator.write_html("report.html")
generator.write_json("report.json")
```
REST API
```bash
curl -X POST https://api.nat-testing.io/api/v1/functional/run \
  -H "X-API-Key: $NAT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "base_url": "https://your-app.example.com",
    "scenarios": [{"name": "Homepage", "url": "/"}]
  }'
```
CLI
```bash
nat functional --url https://your-app.example.com --report html
```
Get Started
Questions? See the FAQ or email hello@nat-testing.io.