Kernite uses Python by design for early iteration speed, but optimization decisions should be benchmark-driven.

Benchmark Harness

Run:

```shell
uv run python benchmarks/benchmark_execute.py --iterations 20000 --warmup 500
```
The script measures per-scenario latency and throughput for `evaluate_execute` without external dependencies.
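The harness's core loop can be sketched as below. This is a minimal illustration, not the actual contents of `benchmarks/benchmark_execute.py`: the `evaluate_execute` stub, the request shape, and the `bench` helper are all hypothetical stand-ins.

```python
import time
from statistics import quantiles

# Hypothetical stand-in for Kernite's evaluate_execute; the real function
# is exercised by benchmarks/benchmark_execute.py.
def evaluate_execute(request):
    return {"decision": "approved" if request.get("scope") == "governed" else "out_of_scope"}

def bench(fn, request, iterations=20_000, warmup=500):
    # Warmup iterations are discarded so caches and allocator state settle.
    for _ in range(warmup):
        fn(request)
    samples_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn(request)
        samples_ms.append((time.perf_counter() - start) * 1000.0)
    total_s = sum(samples_ms) / 1000.0
    # quantiles(n=100) yields 99 cut points: index 49 is p50, 94 is p95, 98 is p99.
    cuts = quantiles(samples_ms, n=100)
    return {
        "throughput_rps": iterations / total_s,
        "p50_ms": cuts[49],
        "p95_ms": cuts[94],
        "p99_ms": cuts[98],
    }

stats = bench(evaluate_execute, {"scope": "governed"})
print(stats)
```

Measuring each call individually (rather than dividing one wall-clock total by the iteration count) is what makes the tail percentiles meaningful.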

Latest Local Snapshot

Measured on February 19, 2026 with Python 3.11.10:
| Scenario | Throughput (req/s) | p50 (ms) | p95 (ms) | p99 (ms) |
| --- | --- | --- | --- | --- |
| governed_no_matching_policy | 74,683.11 | 0.013209 | 0.014667 | 0.017791 |
| governed_missing_field_denied | 44,207.49 | 0.022334 | 0.025002 | 0.031842 |
| governed_approved | 46,366.11 | 0.021292 | 0.024209 | 0.031425 |
| out_of_scope_approved | 72,028.81 | 0.013375 | 0.015085 | 0.022833 |
These values are environment-specific. Rerun the same harness in CI, staging, and prod-like environments so numbers remain objectively comparable over time.
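One way to make those comparisons actionable is to gate on drift against a stored snapshot. The sketch below is a hypothetical regression check, not part of the Kernite harness: the baseline values are copied from the table above, and the `check_regressions` helper and 20% tolerance are illustrative assumptions.

```python
# Hypothetical baseline: p95 latencies (ms) from a previously recorded snapshot.
BASELINE_P95_MS = {
    "governed_no_matching_policy": 0.014667,
    "governed_missing_field_denied": 0.025002,
    "governed_approved": 0.024209,
    "out_of_scope_approved": 0.015085,
}

def check_regressions(current_p95_ms, tolerance=0.20):
    """Return scenarios whose p95 latency grew more than `tolerance` vs baseline."""
    regressions = {}
    for scenario, baseline in BASELINE_P95_MS.items():
        current = current_p95_ms.get(scenario)
        if current is not None and current > baseline * (1 + tolerance):
            regressions[scenario] = {"baseline_ms": baseline, "current_ms": current}
    return regressions

# Example: governed_approved at 0.030 ms exceeds 0.024209 * 1.2, so it is flagged.
bad = check_regressions({"governed_approved": 0.030, "out_of_scope_approved": 0.015})
print(sorted(bad))
```

Comparing against a relative tolerance rather than absolute numbers keeps the check usable across machines with different baseline performance, as long as baseline and current runs come from the same environment.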