
🧠 Cursor Rule — Request Performance Comparisons


Summary

Ask the Agent to compare performance before and after optimizations so changes are proven, measurable, and safe. Use simple benchmarks and representative workloads to show real impact.

Objectives

  • Measure baseline performance for the targeted code path
  • Run the optimized variant under the same conditions for a fair comparison
  • Surface measurable improvements (latency, throughput, memory) and regressions
  • Produce reproducible scripts that the Agent and humans can run

Principles

1. Measure first, change later. Never optimize without a baseline.

2. Use representative inputs and workloads similar to production.

3. Keep tests deterministic and run multiple iterations to reduce noise.

4. Focus on user-visible metrics: end-to-end latency, throughput, and memory usage.

Implementation Pattern

Step 1 — Define the target metric and workload

  • Choose metrics such as average latency, p95 latency, requests per second, or memory footprint.
  • Create input data that mirrors production shapes and sizes.
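As a sketch of that last point, one way to build synthetic payloads whose shapes and sizes mirror production (the field names, item counts, and the cart-like structure here are assumptions for illustration):

```javascript
// Hypothetical sketch: generate payloads with a production-like shape.
// Field names and size distribution are assumptions, not a real schema.
function makeSamplePayload(itemCount) {
  return {
    userId: "user-123",
    items: Array.from({ length: itemCount }, (_, i) => ({
      id: i,
      name: `item-${i}`,
      price: (i % 100) + 0.99,
    })),
  };
}

// Mirror production traffic: mostly small payloads, a few large ones.
const sampleData = [10, 10, 10, 50, 200].map(makeSamplePayload);
```

Reusing the same `sampleData` array for every run keeps baseline and optimized comparisons fair.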

Step 2 — Create reproducible benchmark scripts

  • Use small scripts that the Agent can run directly. Examples: Node microbenchmarks, browser harness, or synthetic load scripts.
  • Prefer simple measurement tools: console.time, process.hrtime, or perf_hooks in Node; performance.now in the browser.

Step 3 — Run the baseline multiple times and collect statistics

  • Run the baseline N times and compute mean, median, p95, and standard deviation.
  • Record environment details: CPU, memory, Node version, and any relevant config.

Step 4 — Apply the optimization and run the same benchmark suite

  • Ensure the same input data and environment conditions.
  • Compare distributions (not just single runs).
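A minimal sketch of comparing the two sample sets rather than single runs (here only means; a fuller version would compare medians and tails too):

```javascript
// Compare two latency distributions (arrays of samples in ms).
function compareRuns(baselineSamples, optimizedSamples) {
  const mean = (xs) => xs.reduce((s, x) => s + x, 0) / xs.length;
  const b = mean(baselineSamples);
  const o = mean(optimizedSamples);
  return {
    baselineMean: b,
    optimizedMean: o,
    speedup: b / o,
    improvementPct: ((b - o) / b) * 100,
  };
}
```

A `speedup` of 2 means the optimized variant took half the time on average; a negative `improvementPct` flags a regression.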

Step 5 — Analyze and document results

  • Report improvements and any regressions.
  • If results are mixed, investigate variance and consider larger sample sizes.
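One rough heuristic for "mixed" results, sketched below: treat a difference as inconclusive when it is smaller than the combined run-to-run noise. This is not a formal statistical test, just a cheap first filter before investing in larger sample sizes:

```javascript
// Heuristic, not a formal significance test: flag differences that
// are smaller than the combined standard deviations as inconclusive.
function verdict(baselineMean, optimizedMean, baselineStd, optimizedStd) {
  const diff = Math.abs(baselineMean - optimizedMean);
  const noise = baselineStd + optimizedStd;
  return diff > noise ? "likely real" : "inconclusive: increase sample size";
}
```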

Anti-Pattern (Before)

Providing a single run or small synthetic input that does not reflect production. Example: running a "one-off" local request and claiming a 2x improvement based on a single measurement.

Recommended Pattern (After)

```javascript
// Node example using perf_hooks
const { performance } = require("node:perf_hooks");

function targetOperation(data) {
  // original logic that will be benchmarked
}

function runBenchmark(iterations, data) {
  const results = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    targetOperation(data);
    results.push(performance.now() - start);
  }
  return results;
}

// sampleData: representative input, defined elsewhere

// Run baseline
const baseline = runBenchmark(50, sampleData);

// Swap in the optimized implementation, then rerun with the same
// data and environment
const optimized = runBenchmark(50, sampleData);

// Compare statistics: mean, median, p95
```

Benefits

  • Decisions are data driven and defensible.
  • Small regressions are detected early.
  • Optimizations are targeted to real bottlenecks.

Best Practices

  • Automate benchmarks to run in CI only on demand or in a dedicated performance pipeline.
  • Record environment metadata with each run to ensure comparability.
  • Use warmup iterations to eliminate cold-start effects.
  • Prefer relative improvements and confidence intervals over single-number claims.

Agent Prompts

"Run this benchmark script N times and report mean, median, p95, and standard deviation."

"Compare baseline and optimized runs and highlight statistically significant differences."

"Profile the code during the slow path to identify hot functions and memory allocations."

"Generate a reproducible benchmark script for this endpoint using sample payloads."

Notes

  • Avoid micro-optimizing irrelevant code paths; prioritize user-visible hotspots.
  • When results are inconclusive, increase sample size and control for noise.
  • Share benchmark scripts and raw results to make findings reproducible and reviewable.