wP Results

Analysing Results and Outputting Graphs for CI

So the test has finished; now we need to get at the raw results data file so we can do our own analysis before presenting the results to Jenkins.

The resultUrl expression extractor digs out the Summary page url. Once on the Summary page we dig out the raw results url, download the results csv file, and finally analyse it in the raw results processor:
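In JMeter this chain is Regular Expression Extractors plus HTTP samplers, but the two extraction steps can be sketched outside JMeter. The snippet below is only an illustration: the response fragments, the "summary" field and the csv link markup are assumptions, not WebPageTest's exact output.

```python
import re

# Sketch of the two extraction steps done in JMeter by Regular Expression
# Extractors. Both sample responses below are assumptions for illustration.

status_json = '{"statusCode": 200, "data": {"summary": "https://www.webpagetest.org/result/ABC123/"}}'

# Step 1: resultUrl-style extraction - dig out the Summary page url.
summary_url = re.search(r'"summary":\s*"([^"]+)"', status_json).group(1)

# Step 2: on the Summary page, dig out the raw results csv link,
# which is then downloaded and fed to the raw results processor.
summary_html = '<a href="/result/ABC123/page_data.csv">Raw page data</a>'
csv_path = re.search(r'href="([^"]+\.csv)"', summary_html).group(1)

print(summary_url, csv_path)
```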


At the top of the results processor we set up the jtl files. These are used by the Jenkins JMeter plugin to draw the graphs, which are displayed on the page in the order of the file names set up here.

Each line in our input data csv file is processed in turn, so each webPageTest script that is run has its own jtl files. They are named using the url id and the script description:
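The naming scheme could be sketched like this; the numeric prefix is one way to control graph order on the Jenkins page. The url id and description values are placeholders, and the real names are built inside the JMeter processor:

```python
# Hypothetical sketch of the jtl naming scheme: one uncached and one
# cached file per input data row, ordered by a numeric prefix.
def jtl_names(order, url_id, description):
    safe = description.replace(" ", "_")  # keep file names shell-friendly
    base = f"{order:02d}_{url_id}_{safe}"
    return (f"{base}_uncached.jtl", f"{base}_cached.jtl")

print(jtl_names(1, "home", "load home page"))
# -> ('01_home_load_home_page_uncached.jtl', '01_home_load_home_page_cached.jtl')
```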


Next we start processing the raw results csv file. It is split by comma, and each line is processed for the values we are after (there are about 90 columns in this file). We use the first row in the file to dig out the indexes based on the labels, so for example loadEventEnd is in column 79 (CA in Excel):
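The label-to-index lookup could be sketched like this; the three-column sample stands in for the ~90 real columns:

```python
import csv
import io

# Sketch of using the header row to find column indexes by label rather
# than hard-coding positions. The sample data here is an assumption.
raw = "URL,cached,loadEventEnd\nhttp://example.com,0,2310\n"
rows = list(csv.reader(io.StringIO(raw)))

idx = {label: i for i, label in enumerate(rows[0])}

# dataindex drives the later calculations; point it at another label
# (e.g. a document-complete column) if that suits your pages better.
dataindex = idx["loadEventEnd"]
print(dataindex, rows[1][dataindex])  # -> 2 2310
```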


LoadEventEnd is also the field we do our analysis with: for my current site we have decided to take load event end as the point where the webpage is ready for the user to use. This was actually quite a tricky decision, and we based it on the way our particular pages feel when we load them up. It is probably a good exercise to go through while your site is new and slow, so you can see how things progress through the page load cycle. The classic point to take is document complete, but for our pages that is quite late in the rendering, and the user can start to use the page before then. You may want to change this for your purposes, in which case you just have to move the setting of dataindex in the script.

You can see here, we are setting dataindex to load event end. The script uses dataindex for any calculations later on:


If we are not on the first row, then process some data. If we are on a data row for an uncached page (cached = 0), do the uncached calculations:

    1. sum up the uncached timings
    2. if the timing is 0 or we have a 404 (the request count is small), flag this as a fail
    3. if the captured timing is over our limit, flag this as a fail:
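The three checks above can be roughly sketched as follows; the limit, the minimum request count and the sample rows are assumptions for illustration:

```python
# Sketch of the per-row checks for uncached rows (cached = 0).
LIMIT_MS = 3000      # assumed budget for the captured timing
MIN_REQUESTS = 3     # below this we assume a 404 / failed load

rows = [
    {"cached": 0, "loadEventEnd": 2310, "numRequests": 25},
    {"cached": 0, "loadEventEnd": 0,    "numRequests": 1},   # 404-ish row
    {"cached": 1, "loadEventEnd": 900,  "numRequests": 25},  # cached: other branch
]

uncached_sum, uncached_count, fails = 0, 0, 0
for row in rows:
    if row["cached"] != 0:
        continue  # only the uncached calculations in this sketch
    timing = row["loadEventEnd"]
    if timing == 0 or row["numRequests"] < MIN_REQUESTS:
        fails += 1                 # 2. zero timing or a 404: flag a fail
    elif timing > LIMIT_MS:
        fails += 1                 # 3. over our limit: flag a fail
    else:
        uncached_sum += timing     # 1. sum up the uncached timings
        uncached_count += 1

print(uncached_sum, uncached_count, fails)  # -> 2310 1 1
```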


Whilst we are processing the data rows, output the fields we are interested in to the html formatted log file (for the workspace click through):
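One row of that click-through log might be shaped like this; the chosen fields and the red highlight for fails are assumptions about layout, not the script's exact markup:

```python
# Sketch of one html table row of the formatted log for the workspace
# click-through: the fields we care about, with fails highlighted.
def html_row(url, cached, timing_ms, failed):
    style = ' style="color:red"' if failed else ""
    return (f"<tr{style}><td>{url}</td><td>{cached}</td>"
            f"<td>{timing_ms}</td></tr>")

print(html_row("http://example.com", 0, 2310, False))
```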


After we've processed all the data rows, do any maths we want. In this case I'm taking the average of values per row, not counting failed rows:
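The end-of-file maths reduces to an average that skips the failed rows; a minimal sketch (with fails marked as 0 here purely for illustration):

```python
# Average the captured timing per row, not counting rows flagged as
# fails (marked 0 in this sketch).
timings = [2310, 2480, 0, 2150]
good = [t for t in timings if t > 0]
average = sum(good) / len(good) if good else 0
print(round(average))  # -> 2313
```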


As I have implemented the fails, any fail sets 'runStatus' to false. This will show up on the Jenkins graph and will fail the build; it can then be chased up via the click-through logging (the formatted HTML file). But you may want to change this behaviour depending on how robust your site is: you may not want to fail the build on perf issues at all, or you may only want to fail on successful tests that have not met their requirements, rather than on failed tests (I have found the tests themselves are sometimes not too robust):


We then build the jtl files that drive the Jenkins graphs:
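As a sketch, a jtl file the Jenkins JMeter plugin can graph holds one sample entry per result. The attribute set below follows JMeter's XML (version 1.2) results format as far as I know; treat the exact fields your plugin version needs as something to verify:

```python
# Minimal sketch of writing one jtl file: t = elapsed ms, ts = epoch
# millis timestamp, s = success flag, lb = label.
import os
import tempfile
import time

def write_jtl(path, label, elapsed_ms, success):
    ts = int(time.time() * 1000)
    with open(path, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        f.write('<testResults version="1.2">\n')
        f.write(f'  <httpSample t="{elapsed_ms}" ts="{ts}" '
                f's="{str(success).lower()}" lb="{label}" rc="200" rm="OK"/>\n')
        f.write('</testResults>\n')

# one file per script / cached combination, as named earlier
path = os.path.join(tempfile.gettempdir(), "01_home_uncached.jtl")
write_jtl(path, "home uncached", 2313, True)
```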



When we have scripts that simply navigate to a single page and we run 3 iterations of the webPageTest script for example, the above results analysis works well. We get an average of the test runs aimed at loading this one page.

However, if our script involves going to the home page, logging in and moving on through the site, the data file has rows for each step in the workflow. So the above analysis gives an average across each of those steps. This may still be acceptable, particularly for following trend graphs.

If this is not what you are after there are a few approaches you can take:

    1. You can use 'logData 0' and 'logData 1' in the webPageTest script to determine what is logged and what is not; for example, you could miss out the home page from the timings.

    2. You could adjust the analysis in the JMeter raw processor, potentially even building jtl files for each step in the script. You could still average over those steps as done in the simple case above. The steps can be pulled out of the raw data file based on page title or there is a setEventName option in webPageTest scripts that should come through to this log file in the event name column.
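For the first approach, the logData switch (and the setEventName command mentioned in the second) sits directly in the webPageTest script. A hypothetical multi-step script might look like this, with only the last navigation logged; the urls and event name are placeholders:

```
// home page left out of the timings
logData 0
navigate https://www.example.com/
logData 1
// only this step is recorded; setEventName names its rows in the results
setEventName products
navigate https://www.example.com/products
```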

For the moment the solution here is doing what I need it to do so I will leave it to the reader to adjust any of the points raised here for their use. I will post updates to the script if and when I develop any.

Hope this is all of some use to you. If you use it, perhaps an acknowledgment would be good.


In the Cartesian Elements Ltd group of companies