

Performance Testing

Last reviewed: March 2026

Performance testing verifies that your application is fast enough. It's often invisible until it's broken: users don't notice a page that loads in 1 second, but they definitely notice one that takes 10 seconds. Performance testing helps you understand how your app behaves under load and catch bottlenecks before they reach users.

Why Performance Testing Matters

Performance issues are subtle. Your app might work fine with 10 users but fall apart with 1,000 concurrent users. You won't discover this until launch unless you test for it. Common performance problems:

  • Database queries: Unoptimized queries that are fast with 1,000 records but slow with 1 million.
  • N+1 problems: Fetching data in a loop instead of in bulk. Fast with small datasets, catastrophically slow with large ones.
  • Memory leaks: The app slowly consumes more memory until it crashes.
  • Resource exhaustion: Running out of database connections, file handles, or CPU.
  • Frontend slowness: Rendering a huge list without virtualization. Fast with 10 items, unusable with 10,000.
  • Network latency: Users on slow connections experience very different performance than users on fast ones, and testing only on a fast local network hides this.

These issues don't show up in normal testing. You need to simulate load to discover them.
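To make the N+1 problem concrete, here is a small sketch. The `db` object is a hypothetical stand-in for a real database client that simply counts how many queries it receives:

```javascript
// Hypothetical in-memory "database" that counts queries.
const db = {
  queries: 0,
  getPosts() { this.queries++; return [{ id: 1, authorId: 10 }, { id: 2, authorId: 11 }]; },
  getAuthor(id) { this.queries++; return { id, name: `author-${id}` }; },
  getAuthors(ids) { this.queries++; return ids.map((id) => ({ id, name: `author-${id}` })); },
};

// N+1: one query for the posts, then one query per post for its author.
function nPlusOne() {
  const posts = db.getPosts();
  return posts.map((p) => ({ ...p, author: db.getAuthor(p.authorId) }));
}

// Bulk: one query for the posts, one query for all authors.
function bulk() {
  const posts = db.getPosts();
  const authors = db.getAuthors(posts.map((p) => p.authorId));
  const byId = new Map(authors.map((a) => [a.id, a]));
  return posts.map((p) => ({ ...p, author: byId.get(p.authorId) }));
}

db.queries = 0; nPlusOne(); console.log(db.queries); // 3 queries for 2 posts (1 + N)
db.queries = 0; bulk();     console.log(db.queries); // 2 queries, regardless of post count
```

With 2 posts the difference is 3 queries versus 2; with 10,000 posts it is 10,001 versus 2, which is why the pattern is invisible in small test datasets.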

Types of Performance Testing

Different performance tests serve different purposes:

| Test type | Purpose | Example |
| --- | --- | --- |
| Load testing | Does the system handle expected load? | 1,000 concurrent users browsing the site |
| Stress testing | Where does the system break? What's the breaking point? | Keep increasing users until response time becomes unacceptable |
| Soak testing | Does the system maintain performance over time? Memory leaks? | Run moderate load for 24 hours, watch for degradation |
| Spike testing | How does the system handle sudden traffic spikes? | 1,000 users suddenly become 10,000 users |
| Endurance testing | How does the system perform over a long period? | Run the app for a week, monitor for slowdown or crashes |
| Volume testing | How much data can the system handle? | Fill the database with millions of records, test performance |

Most teams start with load testing (can we handle normal traffic?) and stress testing (where do we break?). The others are useful but less common.

Load Testing Tools

Popular tools for load testing:

  • k6: Modern, developer-friendly. Write tests in JavaScript. Great for quick load tests.
  • Locust: Python-based. Write complex load scenarios in Python code.
  • Apache JMeter: Java-based, open source, powerful but older feel. Steep learning curve.
  • Artillery: JavaScript-based, easy to set up. Good for HTTP load testing.
  • Gatling: Scala-based, excellent reporting, great for complex scenarios.
  • AWS Load Testing, Google Cloud Load Testing: Managed services that handle massive scale.

For most projects, k6 or Artillery is a great starting point: both are modern, developer-friendly, and can simulate thousands of users.

Writing a Load Test

A simple load test written for k6:

import http from 'k6/http'
import { check, sleep } from 'k6'

export const options = {
  vus: 100, // 100 concurrent virtual users
  duration: '30s', // run for 30 seconds
}

export default function () {
  // Each virtual user repeatedly requests the posts endpoint
  const response = http.get('https://api.example.com/posts')

  // Verify the response succeeded and was fast enough
  check(response, {
    'status is 200': (r) => r.status === 200,
    'response time < 200ms': (r) => r.timings.duration < 200,
  })

  sleep(1) // think time: pause between iterations, like a real user
}

This test simulates 100 users making requests for 30 seconds. The results show requests per second, response times, error rates, and whether the checks passed, giving you concrete data on whether your app can handle the load.
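A constant load of 100 users is a blunt instrument. For stress and spike tests, k6's `stages` option ramps the virtual-user count up and down over time. A sketch of an alternative `options` block (durations and targets are illustrative):

```javascript
export const options = {
  stages: [
    { duration: '2m', target: 100 },  // ramp up to 100 virtual users
    { duration: '5m', target: 100 },  // hold steady at normal load
    { duration: '2m', target: 1000 }, // stress: ramp toward the breaking point
    { duration: '2m', target: 0 },    // ramp back down
  ],
}
```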

Performance Budgets

A performance budget is a target for how fast your app should be. Examples:

  • Page load time: < 3 seconds on a 4G network
  • API response time: 95th percentile < 500ms (95% of requests respond within 500ms)
  • First Contentful Paint (FCP): < 1.8 seconds (time before user sees content)
  • Core Web Vitals: Industry standards for performance (LCP, FID, CLS)
  • Throughput: At least 1,000 requests per second on the API

Setting budgets makes performance concrete. A vague complaint like "the app is slow" is hard to act on; "the API should respond in under 500ms" is measurable and testable. Tools like Lighthouse CI automatically check budgets in CI, preventing performance regressions.
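A budget check is, at its core, a comparison of measured metrics against limits. A minimal sketch (the metric names and numbers here are hypothetical, not a real tool's schema):

```javascript
// A hypothetical performance budget: limits per metric, in ms.
const budget = {
  pageLoadMs: 3000,
  apiP95Ms: 500,
  fcpMs: 1800,
};

// Return a human-readable violation for every metric over its limit.
function checkBudget(budget, measured) {
  return Object.entries(budget)
    .filter(([metric, limit]) => measured[metric] > limit)
    .map(([metric, limit]) => `${metric}: ${measured[metric]} exceeds budget of ${limit}`);
}

const violations = checkBudget(budget, { pageLoadMs: 2400, apiP95Ms: 720, fcpMs: 1500 });
console.log(violations); // ["apiP95Ms: 720 exceeds budget of 500"]
```

In CI, a non-empty violations list would fail the build, which is exactly what tools like Lighthouse CI automate for frontend metrics.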

Tip
Percentiles matter: Don't just look at average response time. If the average is 100ms but the 99th percentile is 5 seconds, your slowest users have a bad experience. Look at p50, p95, and p99.
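The tip above can be demonstrated with a minimal nearest-rank percentile sketch:

```javascript
// Nearest-rank percentile: sort the samples, take the value at rank ceil(p/100 * n).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// 100 requests: 95 fast ones (100ms) and 5 pathologically slow ones (5000ms).
const timings = [...Array(95).fill(100), ...Array(5).fill(5000)];
const average = timings.reduce((a, b) => a + b, 0) / timings.length;

console.log(average);                 // 345 -- looks tolerable
console.log(percentile(timings, 50)); // 100 -- median looks great
console.log(percentile(timings, 95)); // 100
console.log(percentile(timings, 99)); // 5000 -- the slowest users are suffering
```

The average and median both hide the 5-second tail; only p99 exposes it.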

Core Web Vitals

Google defines Core Web Vitals as metrics that matter most to user experience:

  • LCP (Largest Contentful Paint): How long until the main content loads? Target: < 2.5 seconds.
  • INP (Interaction to Next Paint): How responsive is the page to user input? Target: < 200ms. (INP replaced FID, First Input Delay, as a Core Web Vital in March 2024; FID's target was < 100ms.)
  • CLS (Cumulative Layout Shift): How much does the page layout shift during loading? Target: < 0.1 (minimal shifting).

Good Core Web Vitals scores improve SEO (Google ranks fast sites higher) and user experience. Tools like Lighthouse measure them. Aim for "green" (good) scores; "orange" (needs improvement) should trigger investigation.

Database Query Performance

Many performance problems are in the database. Slow queries can be identified and optimized:

  • EXPLAIN: Most databases have an EXPLAIN command that shows how a query is executed. It reveals missing indexes, full table scans, etc.
  • Slow query logs: Configure your database to log queries that take longer than a threshold (e.g., > 1 second). Review them regularly.
  • Profiling: Use database profiling tools to see which queries are slowest.
  • Indexes: The most common fix. Adding an index on a frequently filtered column can speed queries 100x.
  • Query optimization: Rewrite queries to avoid N+1 problems, fetch only needed columns, use appropriate joins.

A single slow query can bring down your entire app under load. Regularly review slow query logs and optimize the worst offenders.

Profiling Backend Code

Profiling measures how much time and memory each function uses. Profilers identify bottlenecks:

  • CPU profiling: Which functions consume the most CPU time?
  • Memory profiling: Which functions allocate the most memory?
  • Tracing: Detailed timeline of what the app is doing, microsecond by microsecond.

Every language has profiling tools (Python: cProfile, Node.js: clinic.js, Java: JProfiler). When your app is slow, profiling tells you exactly which function to optimize.

Performance Testing in CI

Running performance tests in your CI pipeline catches regressions before they reach production. Common approaches:

  • Lighthouse CI: Runs Lighthouse on every commit, checks against your performance budget. Fails the build if budget is exceeded.
  • Load tests on schedule: Run a load test nightly or weekly against staging. Alert if response time degrades.
  • Benchmark comparisons: Compare performance of a new version against the previous version. Flag significant regressions.

Not all performance tests need to run on every commit (too slow). Run fast checks (Lighthouse) on every commit; run full load tests on schedule. If you catch a regression early, it's cheap to fix.
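The benchmark-comparison approach reduces to a tolerance check. A sketch, assuming a 20% tolerance (the threshold is a policy choice, not a standard):

```javascript
// Flag a regression when the new build's latency is more than
// `tolerance` (default 20%) worse than the baseline's.
function isRegression(baselineMs, currentMs, tolerance = 0.2) {
  return currentMs > baselineMs * (1 + tolerance);
}

console.log(isRegression(500, 540)); // false: 8% slower, within tolerance
console.log(isRegression(500, 650)); // true: 30% slower, flag it
```

The tolerance absorbs normal run-to-run noise; without it, every benchmark run would flap between pass and fail.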

Interpreting Load Test Results

Load test results include:

  • Throughput: Requests per second. Higher is better.
  • Response time: How long requests take. Look at average, p50, p95, p99.
  • Error rate: Percentage of requests that fail or timeout.
  • Resource utilization: CPU, memory, database connection count. If CPU maxes at 50% while response time is acceptable, you have headroom.

Key insight: when does performance degrade? If 100 users = 100ms response time, but 1,000 users = 5 seconds, something doesn't scale. This points to the bottleneck (usually the database or a resource limit).
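Finding the degradation point can be mechanized: run the test at several load levels and look for the first level where latency blows past a multiple of the baseline. A sketch (the 2x threshold and the numbers are illustrative):

```javascript
// Return the load level where p95 latency first exceeds 2x the baseline,
// or null if it never does.
function findKnee(results) {
  const baseline = results[0].p95Ms;
  const knee = results.find((r) => r.p95Ms > baseline * 2);
  return knee ? knee.users : null;
}

const results = [
  { users: 100,  p95Ms: 110 },
  { users: 500,  p95Ms: 140 },
  { users: 1000, p95Ms: 180 },
  { users: 2000, p95Ms: 900 }, // latency blows up here
];
console.log(findKnee(results)); // 2000
```

Here the system scales gracefully to 1,000 users and falls over somewhere between 1,000 and 2,000; that interval is where to hunt for the bottleneck.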

Capacity Planning

Load test results inform capacity planning: what infrastructure do you need to handle growth?

If your load test shows you can handle 1,000 concurrent users with 4 servers (250 users per server), and you expect to grow to 5,000 concurrent users, plan for at least 20 servers (5,000 ÷ 250), plus extra for headroom and redundancy. Capacity planning prevents the infrastructure from becoming a bottleneck as your user base grows.
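The arithmetic behind that estimate, as a sketch (linear scaling is an assumption, and the 25% headroom figure is illustrative):

```javascript
// Servers needed to handle targetUsers, given measured per-server capacity
// and a headroom multiplier for spikes and redundancy.
function serversNeeded(targetUsers, usersPerServer, headroom = 1.25) {
  return Math.ceil((targetUsers * headroom) / usersPerServer);
}

const usersPerServer = 1000 / 4; // 250, measured in the load test above

console.log(serversNeeded(5000, usersPerServer, 1));    // 20: bare linear scale-up
console.log(serversNeeded(5000, usersPerServer, 1.25)); // 25: with 25% headroom
```

In practice scaling is rarely perfectly linear (shared resources like the database saturate first), so treat the result as a floor and re-run the load test at the new scale.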

Note
Environment for load testing: Test in an environment that mimics production. If you load test against a tiny staging database while production has 100 million records, results won't be realistic. Load test with realistic data size and infrastructure.

When to Do Performance Testing

Timing matters:

  • Before launch: Test with expected initial load. Prevent a slow launch that damages reputation.
  • Before known traffic spikes: Holiday shopping season, Black Friday, major events. Test that you can handle it.
  • During development: Regularly profile code to catch performance regressions early.
  • When adding features: A new feature might introduce N+1 queries or memory leaks. Test performance after major changes.
  • When investigating slowness: Users complaining of slowness? Load test to see if it's the app or infrastructure.

Key Takeaways

Performance testing discovers how your app behaves under load. Use load testing tools (k6, Locust, Artillery) to simulate realistic traffic. Set performance budgets (page load time, API response time). Monitor Core Web Vitals for user experience metrics. Optimize database queries using EXPLAIN and slow query logs. Profile code to find bottlenecks. Use Lighthouse CI to catch regressions. Test before launch and before known traffic spikes. Remember: performance issues are invisible until they break. Test for them proactively.