Easy Load Testing: Peace of Mind on a Budget

Zayd Simjee

Jun 21, 2021


Everyone should be load testing. Here's how to do it using freely available command line tools.

You had an awesome private beta running on Digital Ocean, Heroku, or even just an AWS EC2 instance. Your customers are impressed by your product and how quickly you're able to deliver the features they need.

But now, you're trying to take your beta public.

How do you know if you’re able to deal with 200 customers instead of 20?

Usually...you don't. You have a hunch that you can probably handle all 200 customers at once, but you don't have confidence in that answer. That's okay, you think: I'll figure it out as I go along.

But why risk an outage if you can know the answer to that question within a few minutes?

Load testing quickly

The best way to know is by testing!

You've no doubt heard of load testing. But onboarding a full-featured open source tool like JMeter, Gatling, or Locust can seem like a big investment.

These tools are great to have and make it easy to fire millions of requests against your endpoints. When you're just starting out, though, they're overkill.

Fortunately, we can use some simple command line utilities to run lighter load tests!

Before testing

Before you start testing, it's important to lay the proper groundwork. You need to make two critical decisions:

  • Which environment will you test?
  • Where will you run the tests from?

First, figure out which running environments you're going to run your load tests against. Don't test against your main production environment. Instead, spin up an identical parallel environment. This way, you won't accidentally take down the service your customers are using.

Next, figure out where you'll run the tests from. The simplest approach is just to use your own desktop or laptop. If your computer can't handle the number of parallel requests you need to run, you can always have a teammate or friend pitch in and run the commands at the same time.

You can also get fancier if you need to. For example, you can spin up a virtual machine to run the tests. If you need to make requests from multiple unique IP addresses, you'll need more machines. Some cloud providers, such as Azure, offer simple load testing services that run parallel requests on your behalf.

What to test

I recommend testing three load cases in particular.

Exceed your expected requests. Think of how many requests you expect at once, and multiply that number by 10. Then run that many parallel curl commands against your endpoints. You'll know pretty quickly how much load your servers can handle once you start getting HTTP 500 errors or timeouts back from your service. If the APIs are authenticated, try to use different users.
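For example, if you expect about 20 simultaneous customers, a quick sketch of the 10x test might look like this. The endpoint is a placeholder for your own staging URL, and the concurrency numbers are assumptions to adjust:

```shell
# Fire 200 requests (10x an expected 20 concurrent users), up to 50 in flight at once.
# staging.example.com is a placeholder -- point this at your parallel environment, not prod!
ENDPOINT="https://staging.example.com/api/health"
seq 200 | xargs -P 50 -I{} curl -s -o /dev/null -w "%{http_code}\n" "$ENDPOINT" \
  | sort | uniq -c   # tally of status codes: watch for 500s and 000 (connection failures)
```

The `-w "%{http_code}\n"` write-out prints one status code per request, so the final tally gives you a quick pass/fail picture without saving any response bodies.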

Test large request bodies. This is relevant for APIs that take inputs. I've seen cases where large requests have seriously bogged down or totally imploded a service.

I recommend doubling the request body size on each attempt until you find the largest request your service can actually handle. Then, limit request sizes on your server, giving yourself some headroom.
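A sketch of that doubling loop, assuming a hypothetical JSON ingest endpoint (the URL and the 16 MB ceiling are placeholders for illustration):

```shell
# Double the POST body each round until the service starts rejecting it.
# staging.example.com and the 16 MB stopping point are assumptions.
ENDPOINT="https://staging.example.com/api/ingest"
size=1024
while [ "$size" -le $((16 * 1024 * 1024)) ]; do
  head -c "$size" /dev/zero | tr '\0' 'a' > payload   # dummy body of $size bytes
  code=$(curl -s -o /dev/null -w "%{http_code}" \
           -H 'Content-Type: application/json' \
           --data-binary @payload "$ENDPOINT" || true)
  echo "$size bytes -> HTTP ${code:-000}"   # 413 means the body was rejected as too large
  size=$((size * 2))
done
```

The size where responses flip from 2xx to 413s (or to timeouts) is your practical ceiling; set your server limit comfortably below it.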

Most server frameworks have a native way to do this. Here's an example using Express with body-parser:

app.use(bodyParser.json({ limit: '[max request size]mb' }))

Verify your throttling. If you have throttling limits in place, make sure they work! Throttling limits are frequently configured incorrectly, and you often won't find out the config is wrong until your service goes down.

Run the same parallelized curl commands, but shape the request pattern so it triggers your intended throttling rules. If throttling is not triggered, change the rules and try again.
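For instance, if you believe a limit of roughly 100 requests per minute is configured, a sketch like this should surface HTTP 429 responses once the limit trips (the endpoint and the limit are assumptions):

```shell
# Send more requests than the (assumed) 100/min limit from a single client.
# staging.example.com is a placeholder for your test environment.
ENDPOINT="https://staging.example.com/api/items"
seq 150 | xargs -P 20 -I{} curl -s -o /dev/null -w "%{http_code}\n" "$ENDPOINT" \
  | sort | uniq -c   # a working limit shows a mix of 200s and 429s
```

If every response comes back 200, the throttle never fired and the rules need another look.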


Running parallel curl requests

To test, you'll want to run a number of requests in parallel. The curl command line utility provides an easy way to do this with the --parallel flag (available since curl 7.66.0).

Here are a few ways to run it. The stuff you need to change below is in square brackets [].

# 1. single-line command in one step
curl --parallel --parallel-immediate --parallel-max [number of requests] [endpoint repeated once for each request you want to make]

# Example:
curl --parallel --parallel-immediate --parallel-max 2 google.com google.com

# 2. Using a config file
# first, create a config file with entries that look like
# url = [endpoint]
# Optionally, you can write the output of each request to a file (and analyze those files afterwards) by adding a line after each url entry:
# output = [filename]
# then run your curl commands
curl --parallel --parallel-immediate --parallel-max 2 --config [config file]

# Example:
echo 'url = google.com' > urls   # '>' truncates any existing config file
echo 'output = req1.txt' >> urls
echo 'url = google.com' >> urls
echo 'output = req2.txt' >> urls
curl --parallel --parallel-immediate --parallel-max 2 --config urls

Keep track of all of these lightweight load tests in scripts. They can take you to a pretty decent scale. When you do decide to onboard a tool later on, you'll find these scripts useful for porting tests over to your new framework.


© 2022 TinyStacks, Inc. All rights reserved.