How fast

How fast should we run load tests?

This is just one example of the sorts of things you might want to investigate to work out where to pitch your load tests. It is primarily aimed at services or low-level tests where you have access to log files. That is not always the case, and sometimes you have to come at it from the other end and talk to the business users to find out how they use the application and what levels of load you can expect in live.

This approach also shows you the exact details of the calls, often enough to fully design a load test - see here. But be careful: this is not always the case. You need to be sure you are seeing the bigger picture. And watch out for the various caching levels and where you are pointing your load tests...

So, in this case, I want to design a simple load test that hits the application server directly with current peak production load. This is for regression performance testing.

This page simply shows how I find the correct load to run (transactions per second) in this simple case. We can also see how the load balancing is performing and the ratios of the various calls.

And often you can go up a level and look at log files on load balancers and cache servers to see how they are doing. Sometimes you may want to test at these higher levels.

My report:

Summary

  • A typical day's log files are used here: Wednesday 4th March 2015
  • An absolute peak rate of 100 transactions per second per server can be observed in these files, at roughly the same time on each server = 300tps
  • If we look across the three app servers, we definitely need to support 50 (app01) + 40 (app02) + 100 (app03) tps = 190tps
  • It is suggested that, since this is a typical day's log, the higher load of 300tps is used, with perhaps some contingency on top (see the sketch after this list)
  • Requests are grouped into about 15 transaction types (e.g. asset_ios, asset_xbox, currentTime)
  • POST requests for /licence and /prlicence are dealt with using separate POST data files, provided by the devs
  • NOTE: Tests must ONLY be run with the licence server mocking in place.
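
A minimal sketch (in Python, not part of the original report) of the arithmetic behind the two figures above; the 20% contingency is just an assumed example value:

    # Peak requests per second observed on each app server (from the logs below)
    observed_peaks = {"app01": 50, "app02": 40, "app03": 100}

    # Minimum we must support: the observed per-server peaks added together
    combined_observed = sum(observed_peaks.values())   # 190 tps

    # Suggested target: every server at the absolute peak of 100tps at once,
    # since the peaks occurred at roughly the same time of day
    absolute_peak = 100 * len(observed_peaks)          # 300 tps

    # Example contingency of 20% on top (assumed value, pick your own)
    target_tps = absolute_peak * 1.2                   # 360 tps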

And below are screenshots showing how I get these numbers. The basic idea is to load the log files into Excel (or similar) and roughly plot the transactions per second. Often I do this by averaging over hundreds of lines in Excel, because Excel graphs are very rough and easily hide details: intermittent low-level values can be lost to the eye between higher-level values simply because the lines drawn are very thick. In this case I kept it quite simple as I don't need high accuracy, just ballpark figures.

So, to get the numbers, I split the log file by spaces and colons when opening it in Excel. This splits the timestamp into hours, minutes and seconds. I then work on the seconds column.

I count the number of sequential seconds that are the same, so the count column (H) is set to:

    H3 =IF(G3=G2,H2+1,1)

and then the rate column (I) is set to:

    I3 =IF(H4=1,H3,NA())
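
So, as an illustration (values made up, matching the "second 7" example below), three requests logged in the same second work out like this:

    G (second)   H (count)   I (rate)
    6            1           1
    7            1           NA
    7            2           NA
    7            3           3
    8            1           ...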

The graph is then based on the rate column. So you can see that if we get 3 requests in the same second in the log file (second 7 in this example), we get a value of 3 on the graph:

ScreenShot285
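
For bigger logs the same per-second count can be scripted instead of done in Excel. This is just a sketch of the idea, not part of the original analysis - it assumes an access-log style timestamp in square brackets (e.g. [04/Mar/2015:09:15:07 +0000]) and a file called access.log, so the parsing will need adjusting to your own log format:

    from collections import Counter
    import re

    # Matches the timestamp inside the leading square bracket,
    # e.g. [04/Mar/2015:09:15:07 +0000] -> 04/Mar/2015:09:15:07
    TIMESTAMP = re.compile(r"\[([^ \]]+)")

    per_second = Counter()
    with open("access.log") as log:   # assumed file name
        for line in log:
            match = TIMESTAMP.search(line)
            if match:
                # Count requests per second - the equivalent of the
                # Excel count/rate columns over the seconds values
                per_second[match.group(1)] += 1

    peak_second, peak_rate = max(per_second.items(), key=lambda kv: kv[1])
    print(f"peak: {peak_rate} requests in second {peak_second}")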

Details used for the summary above

app01:

ScreenShot559

Peak:

ScreenShot562

app02:

ScreenShot561

Peak:

ScreenShot563

app03 (had to split into two files for Excel):

ScreenShot564
ScreenShot565

Peak:

ScreenShot566

High load also seen on this server around the peaks of the other servers:

ScreenShot567
