JM Versions

Version history

Latest updates:

23rd January 2015
An issue has come up recently as I try to run very large tests (long, high TPS, many injectors): I am running out of disc space on the injectors and controllers. In AWS I do get ephemeral storage on /mnt but haven’t been using it because it is temporary storage. I have now had to employ that disc space.

For the injectors it is straightforward but did need a change to a couple of files.

  1. In jmeter.properties, set: REMOTE_HOME="/mnt"
  2. In jmeter-ec2.sh (update included in the project), I have added a line to allow use of this space:

         ssh -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oLogLevel=quiet -i $PEM_PATH/$PEM_FILE $USER@${hosts[$y]} 'sudo chmod 777 "'$REMOTE_HOME'"'

For the controller, I have made changes external to the solution, in the controlling script in Jenkins. In this case, I redirect the working directory on the controller to /mnt. So before calling the project file in Jenkins I do this (with some debug steps included):

    #run the test from /mnt
    sudo chmod 777 /mnt
    cd "${LOCAL_HOME}"
    pwd
    echo copy pml over to /mnt
    sudo cp -r pml /mnt
    sudo chmod -R 777 /mnt/pml
    echo backup pml to pml_store
    sudo mv pml pml_store
    echo make link
    sudo ln -s /mnt/pml pml
    echo list what we have under ${LOCAL_HOME}
    ls -all
    echo list what we have under /mnt
    ls -all /mnt

The project is then run:

    project="pml" setup="false" terminate="TRUE" count=45 STOP="22000" ./jmeter-ec2.sh
    (followed by all the post analysis steps etc., as usual)

At the end, I restore the project files (discarding the working directory):

    #put back store and remove project from /mnt
    cd "${LOCAL_HOME}"
    sudo rm pml
    mv pml_store pml
    ls -all
    cd /mnt
    pwd
    ls
    sudo rm -rf pml
    ls

One more thing I have added to the solution is to stop collating the general results summary report if I am not using it. This has saved me several gigabytes of space in Jenkins for large test runs. You’ll see in jmeter-ec2.sh extra lines where I decide not to perform various steps, for example:

    if [ -z "$REPORT_SUMMARY" ] ; then

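For illustration only, a minimal sketch of how such a guard might be used; the echo and the collation command here are placeholders, not the actual contents of jmeter-ec2.sh:

    # sketch only: skip the summary report collation when REPORT_SUMMARY is not set
    if [ -z "$REPORT_SUMMARY" ] ; then
        echo "REPORT_SUMMARY not set - skipping summary report collation"
    else
        ./collate-summary-report.sh   # placeholder for the real collation step
    fi
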
With all these changes in place, I have been able to run much larger projects. And I have used much larger AWS instances to run more threads in my scripts (all the code and assertions do require high CPU usage).

19th December 2014
Allowed for dollar signs in file paths in the main bash script. This means we can code for data files per thread, for example, something I wanted in my new bespoke general model.

17th December 2014
Added the shell scripts for the bespoke general model: rate-check-bespoke.sh, assertion-check-bespoke.sh and jmeter-percentiles-v2-bespoke.sh.

5th August 2014
Added ‘file-percentiles-v2.sh’, which you can point directly at a CSV file to create jtl files independently. Added ‘tear-down-ec2.sh’, which can be used to tear down AWS instances based on server names. I use this to capture any servers that are still up, perhaps after failed Jenkins jobs.

9th July 2014
Improved the random timer I use for staggering injector startups and included it in the example project script. With multiple cloud injectors reading from the same data files, you end up with bunched sets of calls, one copy of the same call from each injector. I introduced a random timer but this didn’t work very well because each injector would seed it very similarly. So I now use the injector number to calculate an offset. (I also had to set the injector number to zero if the file wasn’t present.) The random timer is placed in the initialization thread:

    ${__javaScript(parseInt(${__P(injectorNo)}) * 20000)}

This code gives a 20 second delay between each injector startup: 8 injectors = 160 seconds to ramp them all up.
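
As a side note, a minimal sketch of how the injector number could be fed in as the injectorNo property, defaulting to zero when the file is not present; the file name and the JMeter invocation here are assumptions, not the project’s actual convention:

    # sketch only: pass the injector number to JMeter as a property, defaulting to 0
    INJECTOR_NO_FILE="injector_no.txt"    # assumed file name
    if [ -f "$INJECTOR_NO_FILE" ] ; then
        INJECTOR_NO=$(cat "$INJECTOR_NO_FILE")
    else
        INJECTOR_NO=0                     # file not present on this injector
    fi
    ./jmeter -n -t test.jmx -JinjectorNo="$INJECTOR_NO"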

4th July 2014
Added a few more log lines to the percentile_v2 script, so the log gives the percentile number directly.
Changed the jmeter example script slightly to allow for comments in the environment variables file. This was useful for me for a local test run.

20th May 2014
Can now load JMeter variables from the Jenkins build command line. See here, e.g.:

    project="programmes-4od" ./file_variables.sh scala=s env=perf check=yes
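
For illustration only, a minimal sketch of how key=value arguments like these could be turned into JMeter properties; this is an assumption about the mechanism, not the actual contents of file_variables.sh:

    # sketch only: convert key=value arguments into -J JMeter properties
    JMETER_PROPS=""
    for kv in "$@" ; do
        JMETER_PROPS="$JMETER_PROPS -J$kv"
    done
    echo "extra JMeter properties:$JMETER_PROPS"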

6th May 2014
Just realized I had been updating the various other project scripts in Windows, so some of them had \r\n line endings and this can cause issues on some machines. This update fixes that.

25th March 2014
Fixed yesterday’s fix. And found another issue that users need to be aware of: all the Linux bash scripts MUST use \n line endings and not the default Windows \r\n. I found an issue with one of my scripts I updated on a Windows machine; bash didn’t know what to do with the \r (assertions.properties: line 8: $'\r': command not found). Instead of this:

[screenshot: the script saved with Windows \r\n line endings]

You should have this:

[screenshot: the same script with Unix \n line endings]
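
As a quick check outside the project itself, something like this can find and strip stray \r characters (GNU sed shown):

    # list any shell or properties files that still contain Windows \r characters
    grep -l $'\r' *.sh *.properties
    # strip the trailing \r from a named file
    sed -i 's/\r$//' assertions.properties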

24th March 2014
Found a small bug in jmeter-ec2.sh. The iteration count was being ignored for outputting runtime data, so details were being calculated every iteration. I picked this up as I tried to get better figures during runtime that more closely matched the final analysis. I have been finding that TPS rates during runtime are not very close to the final calculations, which means they are not so useful. Now I have changed some of my settings and, with this bug fix, runtime data is more useful. I now output runtime calculations every 3 minutes, by setting RUNNINGTOTAL_INTERVAL="12" in the project’s ‘jmeter-ec2.properties’ file and ASSERTIONS_SAMPLE=40 in the project’s ‘assertions.properties’ file. This way, because ‘summariser.interval’ is set to 15 seconds in the main jmeter.properties file, we get summaries every 3 minutes (15 seconds x 12). And using 40 samples per injector rather than 10 gives better analysis.
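
Pulled together, those settings look like this (file locations as described above):

    # <project>/jmeter-ec2.properties
    RUNNINGTOTAL_INTERVAL="12"     # output runtime calculations every 12 summariser intervals

    # <project>/assertions.properties
    ASSERTIONS_SAMPLE=40           # samples per injector used in the analysis

    # main jmeter.properties
    summariser.interval=15         # seconds; 15 x 12 = summaries every 3 minutes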

12th Feb 2014
I have been using two controllers, running two projects concurrently, using Jenkins to time them. This has resulted in AWS slowing down in getting all the injectors launched and ready. So in this release I have increased the wait time for AWS launch to 300 seconds. If you have issues, you may want to increase this further. If AWS is fast, this will not hold things up.

9th Dec 2013
Allow for project specific jmeter.properties files. I have a project that needs ‘#CookieManager.save.cookies=false’ changing to true. I didn’t want this setting on for all projects, hence this update. If you put a jmeter.properties file under your project directory, that version will be used in preference to the top level one.
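
For example, the project-level copy just needs that one line uncommented and flipped:

    # <project>/jmeter.properties - used in preference to the top-level file
    CookieManager.save.cookies=true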

29th Nov 2013
Added a quick report csv file which lists all the major assertion results for quick reference.
Changed the way we find the latest results on the controller machine and allow for project names with ‘-’ in them (2nd attempt at this, it looks like). Do not use project names containing ‘^’!

24 Sept 2013
Added a disc space monitor to keep an eye on controller disc space as jobs build up on the Jenkins slave.

I add this command right at the end of the Jenkins batch:

    # keep an eye on controller disk space
    stepname="22." workspace="${WORKSPACE}" ./disk_space.sh

And then you get a graph in Jenkins with errors flagged up if you go above the limit you set (optional command line setting):

[screenshot: the disc space monitor graph in Jenkins]

13th Sept 2013
Whilst running full scale tests I have found that more load average information is useful, for the injectors in particular. So I now print out the 1 minute, 5 minute and 15 minute metrics rather than just the 1 minute value.
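
On Linux all three values are available from /proc/loadavg; a minimal way to print them (the exact command the scripts use may differ):

    # print the 1, 5 and 15 minute load averages
    awk '{print "load averages (1m 5m 15m): " $1, $2, $3}' /proc/loadavg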

10th Sept 2013
Fixed results-to-workspace.sh to correctly work out the timestamp even when project names have ‘-’s in them.

26th March 2013
Fixed kill-jmeter.sh and am now using it for a fail-safe STOP, aimed at Jenkins runs.

I had a Jenkins build that didn’t catch the JMeter stop signal for some reason, so I’ve added a fail-safe STOP command line argument to jmeter-ec2.sh:

usage:

    echo "[STOP]         -optional, stop after this time in seconds. this can override the script setting and can be used as a failsafe."

18th March 2013:
Added kill-jmeter.sh. If you stop a Jenkins build with the X on a build, it can leave the jmeter sessions still running on the injectors, so I’ve got this script to kill those processes. In order for this to work, there is a small update to the main script (jmeter-ec2.sh) to output a list of injector IPs that jmeter is started up on. If you are going to automate the call to this, have a wait time between the end of the test run and calling this file. Under normal circumstances the jmeter processes do take some time to close down gracefully after the end of a successful test, so monitor a typical run to determine the wait time required. NEW: the general tear down script on this page is a more robust solution: AWS bash
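
A minimal sketch of that automated call; the wait value here is an assumption to be tuned by monitoring a typical run:

    # give JMeter time to close down gracefully after the run, then kill any leftovers
    sleep 120            # assumed wait time - tune per project
    ./kill-jmeter.sh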

Small change to results-to-workspace.sh. Just changed ‘rm *’ to ‘rm -f *’. I had a Jenkins project that cleared the workspace itself so this was throwing an unnecessary error.

Earlier changes:

I’m not really aiming to keep versions on here. I’ll just provide the latest one at the moment. But I have got a list of things to be looked at since I first went live. I put the solution up a bit early but now points 2 and 3 below have been addressed so for me it is fully operational. Point 1, errors, may need looking at for individual projects.

There are a few things that need doing before this becomes an enterprise solution:

    1. Check for errors in the logs, don’t just rely on perf limits. I had some errors on one run but still got good timings. Need to look for ‘Exception’ in the jmeter log (at least). This could be a separate test (separate jtl file) from the 95th percentile test.

    It turns out this is not so simple. I have in my jmeter log file for example:

      jmeter.threads.JMeterThread: Stop Thread seen: org.apache.jorphan.util.JMeterStopThreadException: End of file detected

    which is acceptable because I am using the csv option to stop the thread on end of file. This is convenient for throwing data files at the test but of course I can’t now fail the test just on finding ‘Exception’ (see the sketch after this list).

    For the next point I am working on one solution now. I’ll leave this note here just to highlight that there are different ways of achieving this depending on your needs. I have used another solution; see the JM Assertions page for details:

    2. The script needs to check for text on the pages and report failures and these need picking up by Jenkins - pass/fail. Perhaps with an acceptable percentage. One way to do this may be to use an if controller to get data lines in the output file for both pass and fail (to find the text) and count occurrences of both. Again, this could be a separate test (separate jtl file) from the 95th percentile test.

    After running a bit more, I find I need more specific run time data to screen. I want specific rates and response times rather than the general summary results:

    3. Output specific transaction rates and response times to screen during run time. The general summary results may not be needed.

    Point 3 is being tackled now. See here.

    4. Time stamp and scenario elapsed time output to screen - DONE.

    5. Are there any better JMeter / Jenkins graphs out there? I did look into this a few years ago and didn’t come up with anything.

    6. The 95th percentile at runtime includes pass and fail timings. This should probably filter on pass results only. I don’t see this as urgent as pass and fail counts are also shown so the user can see how significant this is. A bit more work is needed to do the filtering and I’m trying to keep that to a minimum for the runtime analysis.

    7. 95th percentile post run analysis could be more efficient if it used the bespoke assertions files and it could offer more options - pass fail percentiles and not just 95th. - DONE. See JM 95th v2.
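
On point 1, a minimal sketch of the kind of log check involved, assuming the standard jmeter.log file name; the expected end-of-file stop exception is filtered out before failing:

    # flag unexpected exceptions, ignoring the expected stop-thread exception
    if grep 'Exception' jmeter.log | grep -qv 'JMeterStopThreadException' ; then
        echo "unexpected exceptions found in jmeter.log"
        exit 1
    fi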

I haven’t yet run this solution under heavy load or over extensive test periods. No doubt other issues will surface during use.

I hope this is all of some use out there. If you do use it an acknowledgment would be good.
