Avoiding Fuzzy Math When Load Testing

No one likes fuzzy math. It’s especially problematic when you’re conducting a load test and can’t accurately gauge concurrency.

In this final post of a series on load testing, here are some tips on avoiding the fuzzy math that can distort your expectations of how a website will perform.

Shaky math often creeps in when using Apache JMeter, the most commonly used tool for load testing the performance of an open source site.

JMeter breaks a test down into three settings: the number of threads running at once, the ramp-up period for climbing from zero requests to that maximum, and the number of iterations. Expected concurrency is then typically estimated from the request rate, that is, the total number of requests spread over the length of the test, multiplied by the average response time.
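To make that arithmetic concrete, here is a minimal sketch of the estimate, with made-up numbers standing in for your own traffic figures:

```python
# Rough sketch of the usual concurrency estimate. All numbers are
# made-up examples, not JMeter defaults.
total_requests = 60_000        # requests expected over the whole test window
test_duration_s = 600          # length of the test window, in seconds
avg_response_time_s = 0.5      # average response time you expect

request_rate = total_requests / test_duration_s              # 100 requests/second
expected_concurrency = request_rate * avg_response_time_s    # ~50 requests in flight

print(f"request rate: {request_rate:.0f} req/s")
print(f"expected concurrency: {expected_concurrency:.0f} simultaneous requests")
```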

There’s a problem with this method, though. When a site becomes overwhelmed, response times increase, and because each thread waits for a response before sending its next request, the load the test actually generates drops. Real visitors do the opposite: they keep arriving no matter how slow the site gets. So with such a test you’re simulating the exact opposite of a real site under stress, and you’re nowhere near seeing probable concurrency.
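A small sketch of why that happens, assuming a fixed pool of threads where each thread waits for its response before firing the next request (illustrative numbers only):

```python
# With a fixed thread count, the request rate a test can generate is
# roughly threads / response_time, so slower responses mean less load.
threads = 50

for response_time_s in (0.5, 1.0, 2.0, 4.0):
    achieved_rate = threads / response_time_s   # requests per second actually sent
    print(f"response time {response_time_s:.1f}s -> ~{achieved_rate:.0f} req/s delivered")
```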

But there is a proper way to solve this and get a true glimpse of likely concurrency: throughput shaping, which you can find on jmeter-plugins.org. With this tool you simply say, “I want a thousand requests at once,” and it holds you to that. It’s not only more accurate, it also saves the time ordinarily lost in JMeter while you try to figure out things like how many requests will hit your site at once.
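The flip side of the same arithmetic shows what throughput shaping has to do under the hood: to hold a request rate steady, the number of in-flight requests has to grow as responses slow down. This is a sketch of the idea, not the plugin’s actual configuration:

```python
import math

# Holding the request rate constant means letting concurrency float.
# The target rate and response times below are illustrative only.
target_rate = 1000  # requests per second to sustain

for response_time_s in (0.2, 0.5, 1.0, 2.0):
    concurrency_needed = math.ceil(target_rate * response_time_s)
    print(f"at {response_time_s:.1f}s responses: ~{concurrency_needed} requests in flight")
```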

Another speed bump to consider is how difficult it’s becoming to properly determine how many requests hit your site. That’s because not every bot, and not all of the “noise” on the Web, crosses the path of Google Analytics and Omniture. Likewise, looking at the number of requests to your Web servers on Acquia Cloud, for example, doesn’t take into account whether you’re using a CDN. (But if you are using a CDN, that’s the most likely place to find an accurate number.)

So here’s the last lesson of this series: don’t extrapolate results. If you’re testing with 150 connections, don’t assume 300 will require exactly twice the resources. Your test only tells you what 150 did. Load testing should be conducted in an environment that’s exactly the same size as your production environment will be. Sure, it will cost a bit more, but it will tell you how things will actually behave.
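As a toy illustration of why extrapolation misleads (the figures below are invented, not measurements):

```python
# Toy numbers only: why "twice the connections = twice the resources" fails.
cpu_at_150 = 40                        # percent CPU observed at 150 connections (hypothetical)
linear_guess_at_300 = cpu_at_150 * 2   # the extrapolation says 80 percent

# Contention (locks, database connections, cache misses) adds overhead that
# only shows up at the higher load, so the real figure stays unknown until
# you run the test at 300 in a production-sized environment.
print(f"linear guess at 300 connections: {linear_guess_at_300}% CPU")
print("actual usage at 300 connections: unknown until you test it")
```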

If your site faces contention, when many visitors simultaneously compete for your application’s attention, it may well perform the exact same way whether you have 150, 300 or even 600 connections. But the point is, you don’t actually know that; extrapolating results won’t give you an accurate number. Most often people test exactly to the numbers they have today, and they’re missing more than just surges.

Consider the must-visit websites for special occasions like big sporting events and live award shows. They have one or two huge days of traffic a year, and a totally different number of visitors the rest of the year. So, when load testing, it’s not just a matter of looking at numbers. It’s understanding what those numbers actually mean and recognizing your end goal.

Once you have your numbers, I recommend always testing about 50 percent above them. Not necessarily because you’ll see that kind of traffic at launch, but because if the site is successful and growing over time, it’s rare that you’ll take the time to go back and run more load tests. By initially testing well above what you’re expecting, you’ll have a buffer and won’t have to worry about how the site is going to behave six months from now.
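In practice that headroom is just a multiplier on whatever peak you trust, whether it comes from your CDN logs or your analytics; for example:

```python
# Simple headroom calculation: test at roughly 1.5x the peak you expect.
expected_peak_rps = 400                   # hypothetical peak from CDN logs or analytics
test_target_rps = expected_peak_rps * 1.5
print(f"plan the load test around ~{test_target_rps:.0f} req/s")
```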

Hopefully this series has prompted you to think carefully about the nuances of load testing and will help as you prepare to launch a site. Load testing done right can help achieve optimal site performance. And, as I mentioned earlier in the series, your users define where load testing should take place, so you can’t go wrong. If you have any questions or suggestions, please drop a note in the comment box. Thanks for reading.