Our legacy site: www.webwob.com   

BASH scripting methods (page 5)

This page will cover more BASH scripting methods, following on from the legacy site pages.


Areas covered on this page

1. CI Steps in order

2. Shell script outlines

3. Some shell details

This page outlines some details of my CI/CD BASH scripts.

I won't publish all the details here because my current employer has allowed me a lot of time and effort in developing these solutions, so there is some commercial confidence to consider. But I do have permission to outline my ideas on the net. Specific BASH techniques can be found around this website if you want to develop your own shell scripts.


CI Steps in order


  1) . /cygdrive/Z/GitHub/PerformanceTesting/jenkins/LR_CI_tidy_up.sh
  
  2) . /cygdrive/Z/GitHub/PerformanceTesting/jenkins/analysis_reset.sh OR
      #start_time=1 end_time=3 . /cygdrive/Z/GitHub/PerformanceTesting/jenkins/analysis.sh
  
  3) . /cygdrive/Z/GitHub/PerformanceTesting/jenkins/LR_CI_MAIN_jenkins_caller.sh
  
  (any extra analysis needed)
  4) use_times=1 start_time=30 end_time=40 template="Perc90SLA3" . /cygdrive/Z/GitHub/PerformanceTesting/jenkins/analyse.sh
  
  5) evaluate(new File("Z:\\github\\PerformanceTesting\\jenkins\\postbuild.groovy"));
[ postbuild_no_slas.groovy ]


Shell script outlines


The Scripts


  LR_CI_tidy_up.sh - Check the controller is ready to run a test
  
  analysis_reset.sh - Reset the analysis options to factory defaults

  analysis.sh - Or set the analysis options to our own requirements
  
  LR_CI_MAIN_jenkins_caller.sh - Call the main routine shell
  
  analyse.sh - Analyse the results with any extras we require (can be called several times)
  
  postbuild.groovy OR postbuild_no_slas.groovy - Set the Jenkins job status
  

Some shell details


Each script, outlined in some detail


  LR_CI_tidy_up.sh - Check the controller is ready to run a test
  
The first step is to exit the whole job under certain conditions. I don't just carry on, because I don't want to interrupt anything that might already be using this machine. It is better for us to manage our resources separately from the automation and to make sure the machine is free when we want to use it.

exit job if controller already running on this machine
exit job if Fiddler already running on this machine
exit job if Analysis already running on this machine
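These checks can be sketched as below. This is a hypothetical outline for a Cygwin-on-Windows setup (the process names and the tasklist approach are my assumptions, not the actual script):

```shell
# Hypothetical sketch: abort the job if a given Windows process is running.
# Under Cygwin, tasklist lists native Windows processes.
check_not_running() {
    local proc="$1"
    if tasklist 2>/dev/null | grep -qi "${proc}"; then
        echo "${proc} is already running on this machine. Stopping this job."
        exit 1
    fi
}

check_not_running "Wlrun.exe"       # the Controller (process name is an assumption)
check_not_running "Fiddler.exe"
check_not_running "AnalysisUI.exe"  # LR Analysis (process name is an assumption)
```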

exit job if disk space is low on this machine
I do use things like this to check specific disk drives that we need for example:
z=$(df -h | awk '/Z:/ { sub(/%/, "", $5); print $5 }')

  
Then I would make a decision based on this value:
if [ "${z}" -ge "90" ]
then
    echo "z drive running low on disk space. Must stop this test run"
    exit 1
else
    echo "z drive disk space check: less than 90% in use. Carrying on."
fi

Make sure AWSCLI is in the path variable
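For the PATH step, something like this works (the install directory here is just an example; check where your AWS CLI actually lives):

```shell
# Hypothetical sketch: make sure the aws command is reachable from this shell.
if ! command -v aws >/dev/null 2>&1; then
    export PATH="${PATH}:/cygdrive/c/Program Files/Amazon/AWSCLI"
fi
```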

Tidy up the current Jenkins workspace (if we are running from Bamboo, we will have set up a workspace folder as well, using the same variable, so this still works): delete .jtl files etc.
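The tidy-up itself can be as simple as the sketch below (the .jtl pattern comes from the text above; ${WORKSPACE} is the variable Jenkins sets, and the guard is my addition):

```shell
# Hypothetical sketch: clear old result files out of the job workspace.
if [ -n "${WORKSPACE:-}" ]; then
    find "${WORKSPACE}" -maxdepth 1 -name '*.jtl' -delete
fi
```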

Copy over the analysis templates from git. These are also set up with our interactive menus in mind, with their configurable analysis options...
cp -rf /cygdrive/Z/GitHub/PerformanceTesting/jenkins/AnalysisTemplates /cygdrive/c/loadrunner/program/HP/loadrunner
cp -f /cygdrive/Z/GitHub/PerformanceTesting/jenkins/LRAnalysis80.ini /cygdrive/c/loadrunner/program/HP/loadrunner/config


---
---

  analysis_reset.sh - Reset the analysis options to factory defaults

filter="FilterEdge=0"
REPLACEMENT_TEXT_STRING="FilterEdge=0"
sed -i "/FilterEdge/c $REPLACEMENT_TEXT_STRING" ${analysis_ini}
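The sed 'c' (change) command replaces the whole matching line, which suits these single-key ini edits. A self-contained demonstration on a throwaway file (the ini content here is invented for the example):

```shell
# Demonstrate the sed 'c' technique used above, on a temporary ini file.
analysis_ini=$(mktemp)
printf '%s\n' '[General]' 'FilterEdge=1' > "${analysis_ini}"

REPLACEMENT_TEXT_STRING="FilterEdge=0"
sed -i "/FilterEdge/c $REPLACEMENT_TEXT_STRING" "${analysis_ini}"

grep 'FilterEdge' "${analysis_ini}"    # prints: FilterEdge=0
rm -f "${analysis_ini}"
```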

---
---

  analysis.sh - Set the analysis options to our own requirements
  
var1=60
start_secs=$(echo "${start_time} * ${var1}" | bc | awk 'BEGIN {FS="."} {print $1}')

REPLACEMENT_TEXT_STRING="FSartTime=${start_secs}"
sed -i "/FSartTime/c $REPLACEMENT_TEXT_STRING" ${analysis_ini}

[
Adjust end time in the same manner
]

filter="FilterEdge=1"
REPLACEMENT_TEXT_STRING="FilterEdge=1"
sed -i "/FilterEdge/c $REPLACEMENT_TEXT_STRING" ${analysis_ini}

---
---

  LR_CI_MAIN_jenkins_caller.sh - Call the main routine shell

This is the main calling shell. It needs to set the results location, sort out the injectors, depending on what type is needed and then run the test.
  
Results locations are fixed relative to the workspace and contain a date representation, created here.

my_date=$(date +"%Y-%m-%d-%H-%M")
subdir="res${my_date}"
echo subdir is ${subdir}
    
subdir is res2017-08-13-17-59


The main shell does contain sections like this:

if [ "${new_injector_method}" == "classic" ]; then
    echo "\"new_injector_method\" set to \"classic\""
    . /cygdrive/Z/GitHub/PerformanceTesting/jenkins/LR_CI_new_aws_machines.sh
fi

And inside the 'new injectors' shell scripts, we find the current injector ips in the scenario with:

     grep -o '{[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' ${scenario} | cut -c 2-
    (note: the leading '{' is not part of the IP address itself; it is in the scenario code, and we need to grep for it)
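You can check that grep against a throwaway file standing in for the scenario (the fragment below is invented; real scenario files have more structure around the IPs):

```shell
# Demonstrate the IP extraction on a fake scenario fragment.
scenario=$(mktemp)
printf '%s\n' 'Agents={10.0.1.15,localhost {10.0.1.16,remote' > "${scenario}"
grep -o '{[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' "${scenario}" | cut -c 2-
rm -f "${scenario}"
```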

We then use the AWS API to build injector machines, based on images, placed in our VPCs or not, depending on what's needed for this test. Then we can replace the injector IPs in the scenario with our new machines. If you look under lr_ci_in_bash1 you can see there are four different options for the new_injector_method.

    A typical AWS API call to build an injector is as follows
    [this is for the VPC machines so must contain the subnet id]:

    
aws ec2 run-instances --image-id ami-ae1152xx --count ${old_inj_count} --instance-type c3.2xlarge --subnet subnet-7d32xx25 --key-name marksKeyPair --security-group-ids sg-2783xx41 >>aws_file.txt 2>&1


Some effort is then spent in the script waiting for the injectors to all be up and running and pass their AWS checks, looping through them, and getting their internal and external IPs (different ones are needed by LoadRunner, depending on whether or not they are in a VPC, etc.). Finally, the LR scenario is updated with the new injector IPs.
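That waiting and IP-gathering stage can be sketched with standard AWS CLI calls. This is a hypothetical simplification (the function, its argument, and the single wait call are mine; the real script loops and checks more than this):

```shell
# Hypothetical sketch: wait for the new injectors to come up, then print
# their internal and external IPs for the scenario rewrite.
wait_for_injectors() {
    local ids="$1"    # instance IDs captured from the run-instances output
    aws ec2 wait instance-running --instance-ids ${ids}
    aws ec2 describe-instances --instance-ids ${ids} \
        --query 'Reservations[].Instances[].[PrivateIpAddress,PublicIpAddress]' \
        --output text
}
# usage: wait_for_injectors "i-0aaaa1111 i-0bbbb2222"
```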

This is just building the injectors! Then we can prepare to run the test...

    
/cygdrive/c/loadrunner/program/HP/loadrunner/bin/wlrun.exe -Run -TestPath "${scenario}" || true


Notice my path to LoadRunner. I worked out early on that Windows is a real pain!! Spaces in directory names are just too much for BASH in several situations, so I deliberately install to a 'normal' location.

Also, notice that since we have done all our preparation work already, and much of it now lives in the actual scenario file (because we edit that file directly with BASH), we don't have anything else to do at this stage. Results are already set to go where we want (so we don't need the command line option), and we KNOW the injectors are up and running, so actually running the test is straightforward now.

---
---

  analyse.sh - Analyse the results with any extras we require (can be called several times)
  
This is a main area where I have significantly improved upon the standard Jenkins LoadRunner plugin AND added some real value.
I have fully automated the results analysis. This allows you, for example, to consider different percentiles AND to automate the analysis of peaks - even very short ones - which brings proper automation to your full performance testing. This is critical for our current web applications here, where we have very high, short-lived peaks triggered by television events. So, for example, you can make several calls, one after the other, such as:

    
    use_times=0 start_time=0 end_time=60 template="Perc95SLA2" . /cygdrive/Z/.../analyse.sh

    use_times=1 start_time=30 end_time=45 template="Perc95SLA3" . /cygdrive/Z/.../analyse.sh

    use_times=1 start_time=52 end_time=53 template="Perc95SLA4" . /cygdrive/Z/.../analyse.sh


The shell script also builds dynamic results menus with links to all your results, including the graphs. See below for a typical example menu, built on the fly as results sessions are instigated:

[Screenshot: dynamic results menu with links to each analysis session]
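As a flavour of the technique (this toy version is entirely mine, not the production generator): each analysis session appends one link to a simple HTML menu in the workspace:

```shell
# Toy sketch: grow an HTML menu of results links, one per analysis session.
add_result_link() {
    local menu="$1" session="$2"
    [ -f "${menu}" ] || printf '<h3>Results sessions</h3>\n' > "${menu}"
    printf '<a href="%s/index.html">%s</a><br>\n' "${session}" "${session}" >> "${menu}"
}
# usage: add_result_link "results_menu.html" "res2017-08-13-17-59"
```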


---
---

  postbuild.groovy OR postbuild_no_slas.groovy - Set the Jenkins job status
  
Code on the top-level page shows you how I decide what to include when deciding a Jenkins job Pass or Fail. For example, we may or may not want to consider the SLAs, depending on whether we want this test run to influence the full build plan. Scroll down this page for example code used in the
postbuild.groovy
script file: *_LR_CI_*

This can be an important last step in deciding the Pass/Fail status in Jenkins, as the job itself is not very discriminating.