
Benchmarks

Transparent, reproducible benchmarks — framework comparisons and deep performance profiling.

Last updated: 2026-04-26

All figures below come from one lab setup (CPU, OS, Node, tool versions). Absolute RPS on your hardware will differ; use the same scripts for relative comparisons. In production, databases and external calls usually dominate latency — not the framework.

How to read this page

Published RPS are snapshots, not guarantees. Run apps/benchmark locally when you need numbers for capacity planning.

At a Glance

Hello World: 43,268 RPS (autocannon baseline with 100 connections and HTTP pipelining)

wrk baseline: 29,935 RPS (no pipelining, 60-second runs, five iterations per profile)

Cold start: host-specific (process start time varies by machine, imports, and adapter; measure your deploy target)

Core size: <3,000 LOC (kept intentionally small so you can audit the request path quickly)

Mean RPS difference vs other frameworks

Percent gap between NextRush's mean RPS and each framework's mean RPS in this lab run (not a guarantee on other hardware). These gaps use the other framework's mean as the denominator, e.g. (35,370 ÷ 20,987 − 1) × 100 ≈ +68.5% vs Express, so they read larger than the "-x% vs NextRush" figures further down, which divide by NextRush's mean instead.

NextRush mean RPS was higher than each framework listed here:

vs Express: +68.5% (largest spread in this suite)
vs Koa: +18.3% (same onion-style middleware; numbers are from this Node adapter setup)
vs Hono: +15.9% (compared through each project's Node.js HTTP entry in the benchmark harness)

Fastify recorded higher mean RPS than NextRush in this same run (see the tables below); it is omitted here because the direction flips.


Framework Comparison

Head-to-head comparison against popular Node.js frameworks using identical test scenarios.

Test Environment

Property     Value
Node.js      v25.1.0
Platform     Linux (x64)
CPU          Intel Core i5-8300H @ 2.30GHz
Cores        8
Memory       15 GB

Framework    Version
NextRush v3  3.0.4
Express      5.2.x
Fastify      5.7.x
Hono         4.12.x (via @hono/node-server)
Koa          3.1.x (with koa-router)

Setting      Value
Tool         autocannon v8
Duration     10 seconds per test
Connections  100 concurrent
Pipelining   10 requests
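
The autocannon settings above correspond to an invocation like the following (the port is illustrative; apps/benchmark drives this programmatically):

npx autocannon -c 100 -p 10 -d 10 http://127.0.0.1:3000/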

Overall Results

Average requests per second across all test scenarios:

Average RPS by Framework (higher is better):

Framework    Mean RPS  vs NextRush mean
Fastify      40,084    +13.3%
NextRush v3  35,370    (baseline)
Hono         30,525    -13.7%
Koa          29,889    -15.5%
Express      20,987    -40.7%

Key takeaways (this run only)

On the lab setup above, NextRush's mean RPS was higher than Express, Hono, and Koa, and lower than Fastify (which uses AOT JSON serialization via fast-json-stringify). Re-run apps/benchmark on your hardware before comparing frameworks for your workload.

Detailed Scenarios

Baseline: a JSON response from a single static route, with no route parameters or body parsing.

app.get('/', (ctx) => ctx.json({ message: 'Hello World' }));

Hello World Throughput (higher is better):

Framework    RPS     Latency p50  Latency p99
Fastify      48,045  15ms         37ms
NextRush v3  43,268  16ms         43ms
Hono         37,476  19ms         48ms
Koa          34,683  20ms         56ms
Express      23,739  50ms         69ms

Dynamic route — measures router performance.

app.get('/users/:id', (ctx) => ctx.json({ id: ctx.params.id }));

Route Parameter Throughput (higher is better):

Framework    RPS     Latency p50  Latency p99
Fastify      45,852  15ms         37ms
NextRush v3  38,983  19ms         45ms
Hono         35,144  20ms         50ms
Koa          33,893  21ms         52ms
Express      22,228  44ms         70ms

URL query parsing performance.

app.get('/search', (ctx) => ctx.json({ q: ctx.query.q }));

Query String Throughput (higher is better):

Framework    RPS     Latency p50  Latency p99
Fastify      36,618  20ms         51ms
NextRush v3  30,876  23ms         47ms
Hono         28,619  27ms         50ms
Koa          27,637  26ms         55ms
Express      20,769  47ms         58ms

POST with JSON body — body parser overhead.

app.post('/users', (ctx) => ctx.json(ctx.body, 201));

POST JSON Throughput (higher is better):

Framework    RPS     Latency p50  Latency p99
Fastify      21,412  45ms         71ms
NextRush v3  20,438  47ms         73ms
Koa          17,664  55ms         75ms
Express      14,417  66ms         104ms
Hono         12,625  75ms         114ms

NextRush is within 5% of Fastify on POST JSON — the gap narrows significantly when body parsing is involved.

Combined GET/POST operations — simulates realistic traffic.
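
The exact route mix lives in apps/benchmark; as an illustrative sketch of this scenario's shape (the specific endpoints are assumptions, reusing the handlers shown above):

app.get('/users/:id', (ctx) => ctx.json({ id: ctx.params.id }));
app.post('/users', (ctx) => ctx.json(ctx.body, 201));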

Mixed Workload Throughput (higher is better):

Framework    RPS     Latency p50  Latency p99
Fastify      48,493  15ms         32ms
NextRush v3  43,283  16ms         41ms
Hono         38,759  18ms         39ms
Koa          35,566  20ms         47ms
Express      23,783  51ms         63ms

Deep Performance Profile

Rigorous standalone benchmarking of NextRush using wrk (written in C, so the load generator adds no Node.js overhead) under realistic conditions: no HTTP pipelining, 60-second runs, and 5 iterations per configuration for statistical confidence.

Methodology

Setting      Value
Tool         wrk 4.2.0
Duration     60 seconds per test
Runs         5 per configuration (mean ± stddev reported)
Pipelining   Disabled (1 request at a time, as real clients behave)
Concurrency  1, 64, 256, 512 connections
Threads      8
Warmup       HTTP traffic warmup before measurement

Why wrk?

Unlike autocannon (Node.js-based), wrk is written in C and doesn't share the event loop with the server. This eliminates the "testing yourself with yourself" problem and produces more accurate numbers.
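
For reference, a single cell of the configuration matrix above can be reproduced with a plain wrk invocation (the URL and port are illustrative; the script in "Run Your Own Benchmarks" automates the full matrix):

wrk -t8 -c64 -d60s --latency http://127.0.0.1:3000/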

Scaling Under Load

How NextRush handles increasing concurrency levels:

Hello World baseline (CV% = stddev ÷ mean):

Connections  RPS (mean ± stddev)  CV%    Latency p50  Latency p99
1            25,928 ± 531         2.05%  32μs         105μs
64           29,935 ± 565         1.89%  2.04ms       3.09ms
256          29,539 ± 441         1.49%  8.42ms       12.59ms
512          29,145 ± 561         1.92%  16.95ms      23.90ms

Throughput stays flat from 64 → 512 connections — no degradation under load.

JSON serialization:

Connections  RPS (mean ± stddev)  CV%    Latency p50  Latency p99
1            26,502 ± 452         1.71%  33μs         79μs
64           29,463 ± 284         0.96%  2.05ms       2.98ms
256          29,244 ± 821         2.81%  8.43ms       12.59ms
512          28,855 ± 286         0.99%  17.06ms      23.86ms

Minimal overhead from JSON serialization — within 2% of raw Hello World.

Route parameters:

Connections  RPS (mean ± stddev)  CV%    Latency p50  Latency p99
1            26,095 ± 147         0.56%  34μs         77μs
64           27,732 ± 278         1.00%  2.22ms       3.08ms
256          27,213 ± 333         1.22%  9.11ms       12.66ms
512          26,680 ± 240         0.90%  18.53ms      26.60ms

Segment trie routing adds negligible overhead vs baseline.
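
A sketch of the idea behind segment-trie lookup (illustrative only, not NextRush's actual implementation): the path is split once, static children win over parameter edges, and :param values are captured without any regex work.

function createNode() {
  return { children: Object.create(null), param: null, paramName: '', handler: null };
}

function insert(root, path, handler) {
  let node = root;
  for (const seg of path.split('/').filter(Boolean)) {
    if (seg.startsWith(':')) {
      node.param ??= createNode(); // one parameter edge per level
      node.param.paramName = seg.slice(1);
      node = node.param;
    } else {
      node = node.children[seg] ??= createNode();
    }
  }
  node.handler = handler;
}

function lookup(root, path) {
  let node = root;
  const params = {};
  for (const seg of path.split('/').filter(Boolean)) {
    if (node.children[seg]) node = node.children[seg]; // static match first
    else if (node.param) { params[node.param.paramName] = seg; node = node.param; }
    else return null; // no route
  }
  return node.handler ? { handler: node.handler, params } : null;
}

// const root = createNode();
// insert(root, '/users/:id', usersHandler);
// lookup(root, '/users/42'); // { handler: usersHandler, params: { id: '42' } }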

POST JSON body:

Connections  RPS (mean ± stddev)  CV%    Latency p50  Latency p99
1            14,521 ± 164         1.13%  60μs         125μs
64           17,609 ± 334         1.90%  3.64ms       4.78ms
256          16,758 ± 397         2.37%  15.27ms      19.73ms
512          16,541 ± 224         1.36%  28.76ms      36.57ms

Body parsing is the single biggest cost — streaming parsers planned for v3 stable.

5-layer middleware stack (timing, logging, auth check, CORS, body parsing):

Connections  RPS (mean ± stddev)  CV%    Latency p50  Latency p99
1            21,767 ± 236         1.09%  41μs         88μs
64           31,296 ± 91          0.29%  1.97ms       2.94ms
256          30,373 ± 532         1.75%  7.96ms       11.65ms
512          29,579 ± 537         1.81%  16.55ms      23.92ms

The middleware stack actually outperforms single-route handlers at high concurrency — compose() pre-compiles the pipeline, so adding middleware has near-zero dispatch cost.
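
A minimal sketch of that pre-compilation idea (illustrative, not NextRush's exact internals): the registered middleware array is folded into a single dispatch function once, so per-request dispatch cost does not grow with stack depth.

// Fold Koa-style (ctx, next) middleware into one function at registration time.
function compose(middleware) {
  return middleware.reduceRight(
    (next, mw) => (ctx) => mw(ctx, () => next(ctx)),
    async () => {} // terminal no-op next
  );
}

// e.g. const handler = compose([timing, logging, authCheck, cors, bodyParser]);
// per request: await handler(ctx);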

Additional Scenarios

Peak RPS at 64 Connections

Additional wrk scenarios that stress specific framework paths (higher is better):

Scenario        RPS     Latency p99  Notes
Empty Response  36,436  2.45ms       204 No Content; pure framework overhead
Deep Route      27,548  3.20ms       5-segment nested route
Error Handling  16,156  5.05ms       thrown HttpError with JSON response
Large JSON      16,151  5.22ms       about 10KB response payload

Memory Profile

RSS peak: 224.2 MB (highest observed process memory during the benchmark harness run)

RSS average: 193.1 MB (average memory across the sampled wrk profile)

Samples: 1,020 (memory samples collected while the profile was running)

Idle footprint: <200 KB (framework-level footprint outside the benchmark harness)

Memory numbers cover the whole benchmark harness (wrk + server + monitoring); the idle NextRush footprint itself is under 200 KB.


Why Fastify Is Faster

In this suite, Fastify's mean RPS came out roughly 13% higher than NextRush's. Here's why:

Technique           Fastify                           NextRush
JSON serialization  AOT with fast-json-stringify      Native JSON.stringify
Router              find-my-way (radix tree, mature)  Custom segment trie (newer)
Schema validation   Compiled JSON Schema              User-supplied transforms
HTTP parsing        Custom optimized                  Standard Node.js parser
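
The serialization row is the biggest single lever. fast-json-stringify compiles a serializer from a JSON Schema ahead of time, so the hot path skips the per-call shape discovery JSON.stringify performs. A standalone illustration, independent of either framework:

// Compile once at startup from a schema...
const fastJson = require('fast-json-stringify');
const serialize = fastJson({
  type: 'object',
  properties: { message: { type: 'string' } },
});

// ...then each response reuses the precompiled function.
serialize({ message: 'Hello World' }); // '{"message":"Hello World"}'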

Trade-off

Fastify trades flexibility for speed. NextRush prioritizes zero dependencies, full TypeScript, and multi-runtime support. If raw single-runtime throughput is your only concern, Fastify is the right choice.


Run Your Own Benchmarks

Quick comparison (autocannon)

From the monorepo root:

cd apps/benchmark
pnpm install
pnpm bench:compare:quick

Deep profile (wrk)

cd apps/benchmark
pnpm install

# Requires wrk: sudo apt install wrk (or brew install wrk on macOS)
node scripts/run.js --profile full --tool wrk

Results are saved to results/ with timestamps. Each run generates a JSON file and a human-readable report.


How to use these numbers

In this benchmark suite, Fastify led on mean RPS; NextRush placed second. That ordering is useful for relative comparison on identical hardware, not as a universal ranking.

  1. Fastify prioritizes Node.js throughput and includes serialization choices that help in micro-benchmarks.
  2. A single database or HTTP client call often adds milliseconds per request, shrinking the gap between frameworks in real apps.
  3. NextRush keeps a small dependency-free core, ships adapters for several runtimes, and supports optional DI and decorators — different trade-offs, not a single “winner” column.

Pick based on runtime targets, team style, and operational needs; confirm with your own measurements.
