Our legacy site: www.webwob.com   

VUgen scripting methods (page 3)

This page covers more BASH scripting methods, following on from the earlier pages on this legacy site.


Areas covered on this page

Step 1: Configure a new user account for specific use with S3

Step 2: Download the S3 files FROM the correct location TO the local directory

Step 3: logs_run_now.sh -- example

Step 4: Concatenate all the downloaded S3 log files


Configure a new user account for specific use with S3

Command line usage:  

  aws configure --profile user_s3

Screenshot of manual input (image not reproduced here): running the command below prompts in turn for the access key, secret key, default region and output format.

  aws configure
  
OR you can use a pure BASH script,
BUT do be aware of how security-sensitive this data is (the keys are scrambled here for our own security)

  Note the '\n's for new lines -- printf answers the four prompts (access key, secret key, default region, output format) in order
  printf 'AK.............3RA\njwF....................biLAnc\neu-west-1\ntext\n' | aws configure --profile user_s3
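
To check the profile took hold (a quick sanity check, not from the original page), two standard AWS CLI commands will print the stored settings with the secret partially masked and confirm the credentials actually authenticate:

  aws configure list --profile user_s3            # stored settings, secret partially masked
  aws sts get-caller-identity --profile user_s3   # verifies the credentials against AWS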





Download the S3 files FROM the correct location TO the local directory

Drill down to the correct AWS S3 source directory, with the project name supplied as an input variable. The script is designed to be sourced (note the use of return rather than exit):

  #project=allweb
  
  echo "Time: $(date -Iseconds). Starting now: ">> logs_now
  
  if [[ $project = *[!\ ]* ]]; then
      echo "\$project now set to \"${project}\""
  else
      echo "\$project is empty or consists of spaces only. Must stop here. List of projects you can use:"
      dir_a=prod
      echo
      echo "--------------- TOP ----------------"
      aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize
      echo
      echo "--------------- TOP ----------------"
      echo
      echo "-------------------- project not defined. Exiting here --------------------"
      echo
      echo Run like this:
      echo
      echo "project=allweb . ./s3_ls.sh"
      echo
      return 1  # sourced script: use return (not exit); status must be 0-255
  fi
  
  echo "First we'll delete all the local ${project} log files:"
  
  for i in $( ls | grep "${project}" | grep "\.log"); do
       echo "item: $i"
       rm -f "$i"
  done
  echo "Finished deleting all the local ${project} log files"
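  # (Sketch, not in the original script) the ls | grep pipeline above breaks on
  # file names containing spaces; a plain glob does the same delete more safely:
  #     rm -f -- *"${project}"*.log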
  
  dir_a=prod
  echo
  echo "--------------- TOP ----------------"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize
  echo
  echo "--------------- TOP ----------------"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/
  dirb=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize | grep "${project}" | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dirb is: ${dirb}"
  echo
  dir0=prod/${dirb}/AWSLogs
  #dir0=prod/my4od/AWSLogs
  echo dir0
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/
  dir1=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir1 is: ${dir1}"
  echo
  echo dir1
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/
  dir2=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir2 is: ${dir2}"
  echo
  echo dir2
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/
  dir3=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir3 is: ${dir3}"
  echo
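  # (Sketch, not in the original script) each "list, grab a prefix, descend" step
  # here repeats the same pipeline, so it could be wrapped in a small helper:
  #     next_prefix() {   # usage: next_prefix <key-prefix> [head|tail]
  #         aws --profile user_s3 s3 ls "s3://my-clb-logs/$1/" \
  #             | grep PRE | "${2:-head}" -1 | awk '{print $2}' | tr -d '/'
  #     }
  #     # e.g. dir1=$(next_prefix "${dir0}")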
  in_eu=$(echo ${dir3} | grep -o "eu-west-1" | wc -l)
  if [ "${in_eu}" -ge "1" ]
  then

    echo "Overall in_eu status: PASS"
    echo dir3
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/"
    aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/
    
    dir4=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/ --human-readable --summarize | grep "PRE" | tail -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
    echo "dir4 is: ${dir4}"
    echo
    
    echo dir4
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/"
    dir5=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/ --human-readable --summarize | grep "PRE" | tail -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
    echo "dir5 is: ${dir5}"
    echo
  
    echo dir5
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/"
    dir6=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/ --human-readable --summarize | grep "PRE" | tail -2 | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
    echo "dir6 is: ${dir6}"
    echo
    
    echo dir6
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/"
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/" >> logs_now
    
    copy_top="aws --profile user_s3 s3 cp s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/"

    
    echo
    echo "The next line specifies how many logs to download. Set the tail number a few higher than the number of logs required, i.e. tail -25 typically brings back 22 log files."
    echo
    aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/ --human-readable --summarize | tail -25 | grep "\.log" > my_logs2.txt
    
    
    echo
    
    # let's try downloading a file from here
    
    if [ -f my_logs2.txt ]; then

      echo "my_logs2.txt file found"
      declare -a myarray
      i=0

      rm -f logs_run_now.sh

      while IFS=$'\n' read -r line_data; do

        if echo "${line_data}" | grep -q '\.log'
        then

          echo "got a line - ${line_data}"
          log1=$(echo ${line_data} | awk 'BEGIN {FS=" "} {print $5}')
          echo "Current location is: \"s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/\""
          echo " CAN RUN: \"aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/ --human-readable --summarize\""
          echo "log file ${i}: is: ${log1}"
          echo "${log1}" >> logs_now
          
          
          echo 'echo \"Time: $(date -Iseconds). Starting now: \"' >> logs_run_now.sh
          echo -n "${copy_top}" >> logs_run_now.sh
          echo -n "${log1}" >> logs_run_now.sh
          echo " ." >> logs_run_now.sh

          
          echo " CAN RUN (2): \"aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/${log1} --human-readable --summarize\""
          
          temp_start=${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}
          echo " CAN RUN (3): \"aws --profile user_s3 s3 ls s3://my-clb-logs/${temp_start}/${log1} --human-readable --summarize\""
          
          echo "let's try 4"
          temp2_start="${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}"
          echo " CAN RUN (4): \"aws --profile user_s3 s3 ls s3://my-clb-logs/${temp2_start}/${log1} --human-readable --summarize\""
          echo "end of let's try 4"
          
          myarray[i]=$(echo "${line_data}" | awk '{print $5}') # populate the array with the file name (field 5)
          ((++i))
          
          echo "Download command for file ${i} (the actual cp is run later via logs_run_now.sh):"
          #aws --profile user_s3 s3 cp s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/${log1} .
          echo "aws --profile user_s3 s3 cp s3://my-clb-logs/${temp2_start}/${log1} ."
        
        else
          echo "no '.log' found on this line"
        fi

      done < my_logs2.txt

    fi

  else
     echo "Overall in_eu status: FAIL"
  fi
  
  
  tr -d '\r' < logs_run_now.sh > temp_tr && mv -f temp_tr logs_run_now.sh   # strip any Windows CR characters (Cygwin)
  
  . ./logs_run_now.sh
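
The tr -d '\r' step above matters because a file assembled under Cygwin can pick up Windows CRLF line endings, which break sourcing. If the dos2unix package happens to be installed (an assumption -- it is not mentioned on this page), it converts the file in place:

  dos2unix logs_run_now.sh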

  


logs_run_now.sh -- example

This script is created and then run by the script above
(edited here for security... )

  echo \"Time: $(date -Iseconds). Starting now: \"
  aws --profile user_s3 s3 cp s3://my-clb-logs/prod/project/AWSLogs/003358414754/elasticloadbalancing/eu-west-1/2017/04/10/003358414754_elasticloadbalancing_eu-west-1_project-prod-v2p1p1-23_20170410T2000Z_52.16.3.65_5ebajyzb.log .
  echo \"Time: $(date -Iseconds). Starting now: \"
  aws --profile user_s3 s3 cp s3://my-clb-logs/prod/project/AWSLogs/003358414754/elasticloadbalancing/eu-west-1/2017/04/10/003358414754_elasticloadbalancing_eu-west-1_project-prod-v2p1p1-23_20170410T2000Z_52.30.162.178_16gjj8f4.log .
  echo \"Time: $(date -Iseconds). Starting now: \"
  aws --profile user_s3 s3 cp s3://my-clb-logs/prod/project/AWSLogs/003358414754/elasticloadbalancing/eu-west-1/2017/04/10/003358414754_elasticloadbalancing_eu-west-1_project-prod-v2p1p1-23_20170410T2000Z_52.30.162.178_bdke44st.log .

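As an alternative sketch (not how the scripts above work), the AWS CLI can also copy everything under a prefix in one call with its include/exclude filters; the prefix below is illustrative. Note this pulls every matching file under the prefix, so the generated per-file script gives finer control when only the last few logs are wanted:

  aws --profile user_s3 s3 cp "s3://my-clb-logs/prod/project/AWSLogs/" . --recursive --exclude "*" --include "*.log"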


Concatenate the downloaded S3 log files

Command line usage:  

  . /cygdrive/z/github/PerformanceTesting/jenkins/project_data_files.sh

project_data_files.sh  

  project=my_project . /cygdrive/z/github/PerformanceTesting/jenkins/s3_ls.sh
  # [ s3_ls.sh is the script shown above ]

  
  wc -l my_project_prod_log.dat
  rm -f my_project_prod_log.dat
  
  for i in $( ls | grep my_project | grep log); do
      echo "item: $i"
      if echo "${i}" | grep --quiet "mb97ky3j.log"
      then
          echo "got last log file"
          cat "${i}" | awk '{print $13}' | awk 'BEGIN {FS="my_project.server:80"} {print $2}' | grep ^/my_project >> my_project_prod_log.dat
      else
          echo "not last log file"
          cat "${i}" | awk '{print $13}' | awk 'BEGIN {FS="my_project.server:80"} {print $2}' | grep ^/my_project >> my_project_prod_log.dat
      fi
  done
  
  echo ----------------------------------
  wc -l my_project_prod_log.dat
  echo ----------------------------------
  
  yes | cp -f my_project_prod_log.dat /cygdrive/z/github/PerformanceTesting/common_data
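
With the .dat file copied into common_data, a quick frequency count (a suggestion, not part of the original script) shows which request paths dominate the captured traffic:

  sort my_project_prod_log.dat | uniq -c | sort -rn | head -20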