Ask the Agent to compare performance before and after optimizations so changes are proven, measurable, and safe. Use simple benchmarks and representative workloads to show real impact.
1. Measure first, change later. Never optimize without a baseline.
2. Use representative inputs and workloads similar to production.
3. Keep tests deterministic and run multiple iterations to reduce noise.
4. Focus on user-visible metrics: end-to-end latency, throughput, and memory usage.
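To keep tests deterministic (point 3), generate the workload from a fixed seed so every run benchmarks identical data. A minimal sketch, using mulberry32 as one illustrative small seeded PRNG (any seeded generator works):

```javascript
// Deterministic workload generation: the same seed yields the same data on every run.
// mulberry32 is a small seeded PRNG; the choice of algorithm is illustrative.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6d2b79f5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    // Map the 32-bit result to a float in [0, 1).
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Build a reproducible array of `size` pseudo-random numbers.
function makeWorkload(size, seed) {
  const rand = mulberry32(seed);
  return Array.from({ length: size }, () => rand());
}
```

Because the seed is fixed, baseline and optimized runs see byte-identical inputs, removing one source of noise.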
Step 1 — Define the target metric and workload
Step 2 — Create reproducible benchmark scripts
Step 3 — Run the baseline multiple times and collect statistics
Step 4 — Apply the optimization and run the same benchmark suite
Step 5 — Analyze and document results
A common pitfall: relying on a single run or a small synthetic input that does not reflect production. For example, running a one-off local request and claiming a 2x improvement based on a single measurement.
```javascript
// Node example using perf_hooks
const { performance } = require("perf_hooks");

function targetOperation(data) {
  // original logic that will be benchmarked (placeholder workload here)
  return data.reduce((sum, n) => sum + n, 0);
}

function runBenchmark(iterations, data) {
  const results = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    targetOperation(data);
    const end = performance.now();
    results.push(end - start); // per-iteration duration in milliseconds
  }
  return results;
}

// Representative input, identical for both runs
const sampleData = Array.from({ length: 100000 }, (_, i) => i);

// Run baseline
const baseline = runBenchmark(50, sampleData);
// Run optimized variant (ensure same data and environment)
const optimized = runBenchmark(50, sampleData);
// Compare statistics: mean, median, p95
```
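The comparison step above can be sketched with plain summary statistics; the helper names here are illustrative, not a library API:

```javascript
// Summary statistics over an array of timings (milliseconds).
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Nearest-rank percentile on a sorted copy (p in 0..100).
function percentile(xs, p) {
  const sorted = [...xs].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, idx))];
}

function stddev(xs) {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

function summarize(timings) {
  return {
    mean: mean(timings),
    median: percentile(timings, 50),
    p95: percentile(timings, 95),
    stddev: stddev(timings),
  };
}

// Compare two runs; meanSpeedup > 1 means the optimized run was faster.
function compare(baseline, optimized) {
  const b = summarize(baseline);
  const o = summarize(optimized);
  return { baseline: b, optimized: o, meanSpeedup: b.mean / o.mean };
}
```

Reporting p95 and standard deviation alongside the mean guards against a handful of fast outliers masquerading as a real improvement.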
Example prompts:
"Run this benchmark script N times and report mean, median, p95, and standard deviation."
"Compare baseline and optimized runs and highlight statistically significant differences."
"Profile the code during the slow path to identify hot functions and memory allocations."
"Generate a reproducible benchmark script for this endpoint using sample payloads."