13th Sept 2013
Whilst running full-scale tests I have found that more load average information is useful, for the injectors in particular, so I now print out the 1-minute, 5-minute and 15-minute values rather than just the 1-minute value.
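For reference, the three values can be read straight from /proc/loadavg on each injector. A minimal sketch of the idea (the variable names are just illustrative, not the ones in jmeter-ec2.sh):
read one five fifteen rest < /proc/loadavg
echo "load average (1/5/15 min): $one / $five / $fifteen"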
10th Sept 2013
Fixed results-to-workspace.sh to correctly work out the timestamp even when project names have ‘-’s in them.
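One way to handle this is to take the timestamp from the end of the file name rather than counting ‘-’ delimiters from the front. A rough sketch, not necessarily how results-to-workspace.sh does it, assuming results files named <project>-<timestamp>.jtl (your naming may differ):
f="my-multi-part-project-20130910123456.jtl"
base="${f%.jtl}"
timestamp="${base##*-}"    # strip everything up to and including the last '-'
echo "$timestamp"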
26th March 2013
Fixed kill-jmeter.sh and am now using it for a fail-safe STOP, aimed at Jenkins runs.
I had a Jenkins build that didn’t catch the JMeter stop signal for some reason, so I’ve added a fail-safe STOP command-line argument to jmeter-ec2.sh:
echo "[STOP] -optional, stop after this time in seconds. this can override the script setting and can be used as a failsafe."
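The mechanics behind this are straightforward: start a background timer when the test kicks off and, if it ever fires, force the run to stop. A rough sketch of the idea, not the exact code in jmeter-ec2.sh (the STOP variable and the call to kill-jmeter.sh here are illustrative):
if [ -n "$STOP" ] ; then
    ( sleep "$STOP" && ./kill-jmeter.sh ) &
    failsafe_pid=$!
fi
# ... run the test as normal ...
# cancel the fail safe if the test finished in time
[ -n "$failsafe_pid" ] && kill "$failsafe_pid" 2>/dev/null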
18th March 2013
Added kill-jmeter.sh. If you stop a Jenkins build with the red X against a running build, it can leave the jmeter sessions still running on the injectors, so I’ve written this script to kill those processes. For this to work there is a small update to the main script (jmeter-ec2.sh) to output a list of the injector IPs that jmeter is started on. If you are going to automate the call to this, allow a wait time between the end of the test run and calling this file; under normal circumstances the jmeter processes do take some time to close down gracefully after the end of a successful test, so monitor a typical run to determine the wait time required. NEW: the general tear-down script on this page is a more robust solution: AWS bash
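In outline, kill-jmeter.sh just walks that list of injector IPs and kills any remaining jmeter java processes over ssh. A minimal sketch, assuming the IP list is a plain text file with one host per line and that the usual pem key and remote user are to hand (the file and variable names here are illustrative):
while read -r host ; do
    ssh -i "$PEM_FILE" "$REMOTE_USER@$host" "pkill -f ApacheJMeter.jar"
done < injector_ips.txt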
Small change to results-to-workspace.sh: changed ‘rm *’ to ‘rm -f *’. I had a Jenkins project that cleared the workspace itself, so this was throwing an unnecessary error.
I’m not really aiming to keep versions on here; I’ll just provide the latest one at the moment. But I do have a list of things to be looked at since I first went live. I put the solution up a bit early, but points 2 and 3 below have now been addressed, so for me it is fully operational. Point 1, errors, may need looking at for individual projects.
There are a few things that need doing before this becomes an enterprise solution:
1. Check the logs for errors when deciding failure; don’t just rely on perf limits. I had some errors on one run but still got good timings. Need to look for ‘Exception’ in the jmeter log (at least). This could be a separate test (separate jtl file) from the 95th percentile test.
It turns out this is not so simple. I have in my jmeter log file for example:
jmeter.threads.JMeterThread: Stop Thread seen: org.apache.jorphan.util.JMeterStopThreadException: End of file detected
which is acceptable because I am using the csv option to stop the thread at end of file. This is convenient for throwing data files at the test, but of course I can’t now fail the test just on finding ‘Exception’.
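If you do want a simple log check, the acceptable end-of-file case can be filtered out before failing on everything else. A minimal sketch, assuming jmeter.log is the log to check and that JMeterStopThreadException is the only exception you want to ignore:
if grep "Exception" jmeter.log | grep -v "JMeterStopThreadException" | grep -q . ; then
    echo "Unexpected exceptions found in jmeter.log"
    exit 1
fi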
The next point is one I am working on now. I’ll leave this note here just to highlight that there are different ways of achieving this depending on your needs. I have used another solution; see the JM Assertion page for details:
2. The script needs to check for text on the pages and report failures, and these need picking up by Jenkins as a pass/fail, perhaps with an acceptable percentage. One way to do this may be to use an If Controller to get data lines in the output file for both pass and fail (to find the text) and count occurrences of both. Again, this could be a separate test (separate jtl file) from the 95th percentile test.
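Counting the two outcomes from the jtl is then a one-liner each. A sketch, assuming the If Controller writes samples labelled TextFound and TextNotFound into a csv-format jtl with the label in the third column, and an illustrative 5% threshold for failing the Jenkins build (adjust the labels, column and threshold to your own layout):
pass=$(awk -F, '$3=="TextFound"' results.jtl | wc -l)
fail=$(awk -F, '$3=="TextNotFound"' results.jtl | wc -l)
total=$((pass + fail))
if [ "$total" -gt 0 ] && [ $((fail * 100 / total)) -gt 5 ] ; then
    echo "Too many text check failures: $fail of $total"
    exit 1
fi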
After running a bit more, I find I need more specific run-time data on screen. I want specific rates and response times rather than the general summary results:
3. Output specific transaction rates and response times to screen during run time. The general summary results may not be needed.
Point 3 is being tackled now. See here.
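For anyone rolling their own version of this, the raw data is already in the jtl files coming back from the injectors, so a per-label summary can be pulled out with awk. A sketch assuming the default csv jtl layout (elapsed time in column 2, label in column 3); the numeric test on column 2 just skips any header line, and this is not the code behind the link above:
awk -F, '$2 ~ /^[0-9]+$/ { count[$3]++ ; sum[$3]+=$2 }
    END { for (l in count) printf "%-30s %6d samples  avg %6.0f ms\n", l, count[l], sum[l]/count[l] }' results.jtl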
4. Time stamp and scenario elapsed time output to screen - DONE.
5. Are there any better JMeter / Jenkins graphs out there? I did look into this a few years ago and didn’t come up with anything.
6. The 95th percentile at runtime includes pass and fail timings. This should probably filter on pass results only. I don’t see this as urgent, as pass and fail counts are also shown so the user can see how significant this is. A bit more work is needed to do the filtering and I’m trying to keep that to a minimum for the runtime analysis. A sketch of the filtering is shown after this list.
7. The 95th percentile post-run analysis could be more efficient if it used the bespoke assertions files, and it could offer more options - pass/fail percentiles and not just the 95th. - DONE. See JM 95th v2.
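For point 6, the filtering only needs the success flag adding to the selection before the percentile is taken. A sketch, assuming the default csv jtl layout (elapsed time in column 2, success flag in column 8):
awk -F, '$8=="true" { print $2 }' results.jtl | sort -n \
    | awk '{ a[NR]=$1 } END { if (NR) { i=int(NR*0.95); if (i<1) i=1; print "95th percentile (pass only): " a[i] " ms" } }'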
I haven’t yet run this solution under heavy load or over extensive test periods. No doubt other issues will surface during use.
I hope this is all of some use out there. If you do use it, an acknowledgment would be good.