
VUgen scripting methods (page 3)

This page covers more BASH scripting methods, following on from the previous pages.


Areas covered on this page

Step 1: Configure a new user account for specific use with S3

Step 2: Download the S3 files FROM the correct location TO the local directory

Step 3: logs_run_now.sh -- example

Step 4: Concatenate all the downloaded S3 log files

Step 5: Download the S3 files as above BUT: First, find the busiest day from the last 3 weeks


Configure a new user account for specific use with S3

Command line usage:  

  aws configure --profile user_s3

(Screenshot of the manual input at the aws configure prompts: access key ID, secret access key, default region name and default output format.)
  
Or you can use a pure BASH script, BUT do be aware of the security sensitivity of this data (the keys are scrambled here for our security):

  # Note the '\n's for new lines - each one answers the next aws configure prompt
  printf 'AK.............3RA\njwF....................biLAnc\neu-west-1\ntext\n' | aws configure --profile user_s3
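
If you'd rather not rely on the prompt order, each value can also be set explicitly with the AWS CLI's aws configure set subcommand. A minimal sketch (same scrambled placeholder keys as above):

  # Sketch: write each profile value directly instead of piping answers to the prompts
  aws configure set aws_access_key_id 'AK.............3RA' --profile user_s3
  aws configure set aws_secret_access_key 'jwF....................biLAnc' --profile user_s3
  aws configure set region eu-west-1 --profile user_s3
  aws configure set output text --profile user_s3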




    
    

Download the S3 files FROM the correct location TO the local directory

Drill down FROM the top of the bucket TO the correct S3 source directory, given the project as an input variable


NOTE: this script first builds the download code, then calls it.
I found through trial and error that it was more reliable to collate all my download calls into a new shell script and call that after I'd finished my logfile analysis. Hence this script builds a new shell script to do just that, and calls it at the end. I'm not sure why this is needed, but in real use it made a significant difference. And it is neater anyway, I guess...

Also:
The second version of this script might be better for you to use: it finds the largest data day in the past 3 weeks before deciding what to download. See further down this page for that version.
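
In outline, the generate-then-run pattern is just this (a minimal sketch with made-up file names, not the full script below):

  # Sketch of the pattern: build up a script of cp commands, run it at the end.
  rm -f fetch_logs.sh                      # hypothetical name for the generated script
  for f in one.log two.log; do             # stand-ins for the real log file names
      echo "aws --profile user_s3 s3 cp s3://my-clb-logs/some/prefix/${f} ." >> fetch_logs.sh
  done
  # ...logfile analysis happens here...
  . ./fetch_logs.sh                        # then all the downloads run in one go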

  #project=allweb
  
  echo "Time: $(date -Iseconds). Starting now: ">> logs_now
  
  if [[ $project = *[!\ ]* ]]; then
      echo "\$project now set to \"${project}\""
  else
      echo "\$project consists of spaces only. Must stop here. List of projects you can use:"
      dir_a=prod
      echo
      echo "--------------- TOP ----------------"
      aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize
      echo
      echo "--------------- TOP ----------------"
      echo
      echo "-------------------- project not defined. Exiting here --------------------"
      echo
      echo Run like this:
      echo
      echo "project=allweb . ./s3_ls.sh"
      echo
      return 1   # sourced script, so return rather than exit
  fi
  
  echo "First we'll delete all the local ${project} log files:"
  
  for i in $( ls | grep "${project}" | grep ".log"); do
       echo "item: $i"
       rm -f "${i}"
  done
  echo "Finished deleting all the local ${project} log files."
  
  dir_a=prod
  echo
  echo "--------------- TOP ----------------"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize
  echo
  echo "--------------- TOP ----------------"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/
  dirb=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize | grep "${project}" | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dirb is: ${dirb}"
  echo
  dir0=prod/${dirb}/AWSLogs
  #dir0=prod/my4od/AWSLogs
  echo dir0
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/
  dir1=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir1 is: ${dir1}"
  echo
  echo dir1
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/
  dir2=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir2 is: ${dir2}"
  echo
  echo dir2
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/
  dir3=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir3 is: ${dir3}"
  echo
  in_eu=$(echo ${dir3} | grep -o "eu-west-1" | wc -l)
  if [ "${in_eu}" -ge "1" ]
  then

    echo "Overall in_eu status: PASS"
    echo dir3
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/"
    aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/
    
    dir4=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/ --human-readable --summarize | grep "PRE" | tail -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
    echo "dir4 is: ${dir4}"
    echo
    
    echo dir4
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/"
    dir5=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/ --human-readable --summarize | grep "PRE" | tail -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
    echo "dir5 is: ${dir5}"
    echo
  
    echo dir5
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/"
    dir6=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/ --human-readable --summarize | grep "PRE" | tail -2 | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
    echo "dir6 is: ${dir6}"
    echo
    
    echo dir6
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/"
    echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/" >> logs_now
    
    copy_top="aws --profile user_s3 s3 cp s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/"

    
    echo
    echo "The next line specifies how many logs to download. Set the tail number as a few more than the logs required. ie tail -25 brings back 22 log files typically."
    echo
    aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/ --human-readable --summarize | tail -25 | grep ".log" > my_logs2.txt
    
    
    echo
    
    # let's try downloading a file from here
    
    if [ -f my_logs2.txt ]; then
      echo "my_logs2.txt File found"
      declare -a myarray
      i=0
      ready="no"
      rm -f logs_run_now.sh
      
      while IFS=$'\n' read -r line_data; do
        if echo "${line_data}" | grep -q '.log'
        then
          echo "got a line - ${line_data}"
          log1=$(echo ${line_data} | awk 'BEGIN {FS=" "} {print $5}')
          echo "Current location is: \"s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/\""
          echo " CAN RUN: \"aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/ --human-readable --summarize\""
          echo "log file ${i}: is: ${log1}"
          echo "${log1}" >> logs_now
          
          # append one timestamp line and one cp line to the generated script
          echo 'echo \"Time: $(date -Iseconds). Starting now: \"' >> logs_run_now.sh
          echo -n "${copy_top}" >> logs_run_now.sh
          echo -n "${log1}" >> logs_run_now.sh
          echo " ." >> logs_run_now.sh
          
          echo " CAN RUN (2): \"aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/${log1} --human-readable --summarize\""
          
          temp_start=${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}
          echo " CAN RUN (3): \"aws --profile user_s3 s3 ls s3://my-clb-logs/${temp_start}/${log1} --human-readable --summarize\""
          
          echo "let's try 4"
          temp2_start="${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}"
          echo " CAN RUN (4): \"aws --profile user_s3 s3 ls s3://my-clb-logs/${temp2_start}/${log1} --human-readable --summarize\""
          echo "end of let's try 4"
          
          myarray[i]=$(echo "${line_data}" | awk 'BEGIN {FS=" "} {print $5}') # Populate array.
          ((++i))
          
          echo "Downloading file ${i} to the current directory"
          #aws --profile user_s3 s3 cp s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/${log1} .
          echo "aws --profile user_s3 s3 cp s3://my-clb-logs/${temp2_start}/${log1} ."
        else
          echo "did not find '.log' on this line"
        fi
      done < my_logs2.txt
    fi
  else
     echo "Overall in_eu status: FAIL"
  fi
  
  # strip any stray CR characters so the generated script runs cleanly
  cat logs_run_now.sh | tr -d '\r' > temp_tr; rm -f logs_run_now.sh; cp temp_tr logs_run_now.sh
  
  . ./logs_run_now.sh
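
Note that this script is written to be sourced rather than executed (hence the return instead of exit above), with the project passed on the same line, exactly as the usage message says:

  project=allweb . ./s3_ls.sh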

  

    
    

logs_run_now.sh -- example

This script is created and run by the script above  
(edited for security here... )

  echo \"Time: $(date -Iseconds). Starting now: \"
  aws --profile user_s3 s3 cp s3://my-clb-logs/prod/project/AWSLogs/003358414754/elasticloadbalancing/eu-west-1/2017/04/10/003358414754_elasticloadbalancing_eu-west-1_project-prod-v2p1p1-23_20170410T2000Z_52.16.3.65_5ebajyzb.log .
  .
  echo \"Time: $(date -Iseconds). Starting now: \"
  aws --profile user_s3 s3 cp s3://my-clb-logs/prod/project/AWSLogs/003358414754/elasticloadbalancing/eu-west-1/2017/04/10/003358414754_elasticloadbalancing_eu-west-1_project-prod-v2p1p1-23_20170410T2000Z_52.30.162.178_16gjj8f4.log .
  .
  echo \"Time: $(date -Iseconds). Starting now: \"
  aws --profile user_s3 s3 cp s3://my-clb-logs/prod/project/AWSLogs/003358414754/elasticloadbalancing/eu-west-1/2017/04/10/003358414754_elasticloadbalancing_eu-west-1_project-prod-v2p1p1-23_20170410T2000Z_52.30.162.178_bdke44st.log .
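
The literal \" marks come from the generator's single-quoted echo line: the quotes are written into the file as-is, and $(date -Iseconds) is left unexpanded, so the date is evaluated only when this script runs - which is the point, as it timestamps each download. When sourced, the output looks something like this (illustrative time, truncated paths):

  "Time: 2017-04-10T21:05:32+01:00. Starting now: "
  download: s3://my-clb-logs/prod/project/.../..._5ebajyzb.log to ./..._5ebajyzb.log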


    
    

Concatenate all the downloaded S3 log files

Command line usage:  

  . /cygdrive/z/github/PerformanceTesting/jenkins/project_data_files.sh

project_data_files.sh  

  project=my_project . /cygdrive/z/github/PerformanceTesting/jenkins/s3_ls.sh
  # [ s3_ls.sh is the script shown above ]

  
  wc -l my_project_prod_log.dat
  rm -f my_project_prod_log.dat
  
  for i in $( ls | grep my_project | grep log); do
      echo "item: $i"
      # the two branches below are identical; the check just flags the last log file
      if echo "${i}" | grep --quiet "mb97ky3j.log"
      then
          echo "got last log file"
          cat "${i}" | awk '{print $13}' | awk 'BEGIN {FS="my_project.server:80"} {print $2}' | grep ^/my_project >> my_project_prod_log.dat
      else
          echo "not last log file"
          cat "${i}" | awk '{print $13}' | awk 'BEGIN {FS="my_project.server:80"} {print $2}' | grep ^/my_project >> my_project_prod_log.dat
      fi
  done
  
  echo ----------------------------------
  wc -l my_project_prod_log.dat
  echo ----------------------------------
  
  yes | cp -f my_project_prod_log.dat /cygdrive/z/github/PerformanceTesting/common_data
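
For reference: in a classic ELB access log line the quoted request field is "GET http://host:port/path HTTP/1.1", which lands in whitespace-separated fields 12-14, so $13 is the full request URL; splitting that on my_project.server:80 then leaves just the path. A worked example on a made-up log line:

  # Hypothetical ELB log line; $13 is the URL inside the quoted request field
  line='2017-04-10T20:00:00.123456Z my-elb 1.2.3.4:56789 10.0.0.1:80 0.00004 0.02 0.00003 200 200 0 512 "GET http://my_project.server:80/my_project/home HTTP/1.1"'
  echo "${line}" | awk '{print $13}' | awk 'BEGIN {FS="my_project.server:80"} {print $2}'
  # prints: /my_project/home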
  
  


    
    

Download the S3 files as above BUT: First, find the busiest day from the last 3 weeks

  #project=allweb
  
  echo "Time: $(date -Iseconds). Starting now: ">> logs_now
  
  if [[ $project = *[!\ ]* ]]; then
      echo "\$project now set to \"${project}\""
  else
      echo "\$project consists of spaces only. Must stop here. List of projects you can use:"
      dir_a=prod
      echo
      echo "--------------- TOP ----------------"
      aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize
      echo
      echo "--------------- TOP ----------------"
      echo
      echo "-------------------- project not defined. Exiting here --------------------"
      echo
      echo Run like this:
      echo
      echo "project=allweb . ./s3_ls.sh"
      echo
      return 1   # sourced script, so return rather than exit
  fi
  
  echo "First we'll delete all the local ${project} log files:"
  
  for i in $( ls | grep "${project}" | grep ".log"); do
       echo "item: $i"
       rm -f "${i}"
  done
  echo "Finished deleting all the local ${project} log files."
  
  dir_a=prod
  echo
  echo "--------------- TOP ----------------"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize
  echo
  echo "--------------- TOP ----------------"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/
  dirb=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir_a}/ --human-readable --summarize | grep "${project}" | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dirb is: ${dirb}"
  echo
  dir0=prod/${dirb}/AWSLogs
  #dir0=prod/my4od/AWSLogs
  echo dir0
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/
  dir1=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir1 is: ${dir1}"
  echo
  echo dir1
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/
  dir2=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir2 is: ${dir2}"
  echo
  echo dir2
  echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/"
  aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/
  dir3=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/ --human-readable --summarize | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
  echo "dir3 is: ${dir3}"
  echo
  in_eu=$(echo ${dir3} | grep -o "eu-west-1" | wc -l)
  if [ "${in_eu}" -ge "1" ]
  then
      echo "Overall in_eu status: PASS"
      echo dir3
      echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/"
      aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/
      
      dir4=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/ --human-readable --summarize | grep "PRE" | tail -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
      echo "dir4 is: ${dir4}"
      echo
      
      echo dir4
      echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/"
      dir5=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/ --human-readable --summarize | grep "PRE" | tail -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
      echo "dir5 is: ${dir5}"
      echo
  
      echo dir5
      echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/"
      dir6=$(aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/ --human-readable --summarize | grep "PRE" | tail -2 | head -1 | xargs echo | awk 'BEGIN {FS=" "} {print $2}'| awk 'BEGIN {FS="/"} {print $1}')
      echo "dir6 is: ${dir6}"
      echo
      
      echo dir6
      echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/"
      echo "aws --profile user_s3 s3 ls s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/" >> logs_now
      copy_top="aws --profile user_s3 s3 cp s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${dir6}/"
      echo ---
      echo "copy_top is ${copy_top}"
      echo
  
      largest=1
      largest_path=""
      
      for ((i=1; i<=21; i++)); do
          echo "Inside loop: dir6 = ${dir6}"
          minus=$(( 10#${dir6} - i ))   # base 10: a leading zero (e.g. "09") would otherwise be read as octal
          echo "minus = ${minus}"
          if [ "${minus}" -lt "1" ]
          then
           echo "need to go back one sub-folder level"
           if [ "${dir5}" -gt "1" ]
           then
              minus_minus=$(( 10#${dir5} - 1 ))   # previous month number, base 10 again
              echo "minus_minus = ${minus_minus}"
              
              # add the number of days in the previous month (February assumed to have 28)
              if [ "${minus_minus}" -eq "4" ] || [ "${minus_minus}" -eq "6" ] || [ "${minus_minus}" -eq "9" ] || [ "${minus_minus}" -eq "11" ]
              then
                  echo "minus (1) = ${minus}"
                  minus=$(echo $((${minus} + 30)))
                  echo "minus (2) = ${minus}"
              elif [ "${minus_minus}" -eq "1" ] || [ "${minus_minus}" -eq "3" ] || [ "${minus_minus}" -eq "5" ] || [ "${minus_minus}" -eq "7" ] || [ "${minus_minus}" -eq "8" ] || [ "${minus_minus}" -eq "10" ] || [ "${minus_minus}" -eq "12" ]
              then
                  echo "minus (1) = ${minus}"
                  minus=$(echo $((${minus} + 31)))
                  echo "minus (2) = ${minus}"
              else
                  echo "minus (1) = ${minus}"
                  minus=$(echo $((${minus} + 28)))
                  echo "minus (2) = ${minus}"
              fi
              
              echo "minus now = ${minus}"
              if [ "${minus_minus}" -lt "10" ]
              then
                   echo -n "JUMP: size of s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/0${minus_minus}/${minus}/ : "
                   dir6_path="s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/0${minus_minus}/${minus}/"
                   dir6_m_size=$(aws --profile user_s3 s3 ls ${dir6_path} --human-readable --summarize | grep 'Total Size')
              else
                   echo -n "JUMP: size of s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${minus_minus}/${minus}/ : "
                   dir6_path="s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${minus_minus}/${minus}/"
                   dir6_m_size=$(aws --profile user_s3 s3 ls ${dir6_path} --human-readable --summarize | grep 'Total Size')
              fi
              echo "${dir6_m_size}"
              m_size=$(echo ${dir6_m_size} | awk 'BEGIN {FS=" "} {print $3}' | awk 'BEGIN {FS="."} {print $1}')
              echo "m_size is now ^${m_size}^"
              echo "largest is now ^${largest}^"
              if [ "${m_size}" -gt "${largest}" ]
              then
                   echo "check largest"
                   largest=${m_size}
                   largest_path=${dir6_path}
              fi
           fi
          else
           echo "can run with this value"
           if [ "${minus}" -lt "10" ]
           then
              echo -n "        size of s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/0${minus}/ : "
              dir6_path="s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/0${minus}/"
              dir6_m_size=$(aws --profile user_s3 s3 ls ${dir6_path} --human-readable --summarize | grep 'Total Size')
           else
              echo -n "        size of s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${minus}/ : "
              dir6_path="s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${dir4}/${dir5}/${minus}/"
              dir6_m_size=$(aws --profile user_s3 s3 ls ${dir6_path} --human-readable --summarize | grep 'Total Size')
           fi
           echo "${dir6_m_size}"
          
          m_size=$(echo ${dir6_m_size} | awk 'BEGIN {FS=" "} {print $3}' | awk 'BEGIN {FS="."} {print $1}')
          echo "m_size is now ^${m_size}^"
          echo "largest is now ^${largest}^"
          if [ "${m_size}" -gt "${largest}" ]
          then
               echo "check largest"
               largest=${m_size}
               largest_path=${dir6_path}
          fi
          
          fi
       done
      
      echo
      echo "NOTE: largest folder is ${largest}"
      echo "NOTE: largest_path is now ${largest_path}"
      echo
      echo "The next line specifies how many logs to download. Set the tail number as a few more than the logs required. ie tail -25 brings back 22 log files typically."
      echo
      aws --profile user_s3 s3 ls ${largest_path} --human-readable --summarize | tail -25 | grep ".log" > my_logs2.txt
      
      echo "after my_log2.txt constructed"
      echo
      
      if [ -f my_logs2.txt ] ; then
          echo "my_logs2.txt File found"
          declare -a myarray
          i=0
          ready="no"
          rm -f logs_run_now.sh
          while IFS=$'\n' read -r line_data; do
              if echo "${line_data}" | grep -q '.log'
              then
                  echo "got a line - ${line_data}"
                  log1=$(echo ${line_data} | awk 'BEGIN {FS=" "} {print $5}')
                  
                  # append one timestamp line and one cp line to the generated script
                  echo 'echo \"Time: $(date -Iseconds). Starting now: \"' >> logs_run_now.sh
                  echo -n "aws --profile user_s3 s3 cp ${largest_path}" >> logs_run_now.sh
                  echo -n "${log1}" >> logs_run_now.sh
                  echo " ." >> logs_run_now.sh
                  
                  myarray[i]=$(echo "${line_data}" | awk 'BEGIN {FS=" "} {print $5}') # Populate array.
                  ((++i))
                  
                  echo "Downloading file ${i} to the current directory"
                  echo "aws --profile user_s3 s3 cp ${largest_path}${log1} ."
              else
                  echo "did not find '.log' on this line"
              fi
          done < my_logs2.txt
      fi
  else
       echo "Overall in_eu status: FAIL"
  fi
  
  echo "We are here:"; pwd
  # strip any stray CR characters so the generated script runs cleanly
  cat logs_run_now.sh | tr -d '\r' > temp_tr; rm -f logs_run_now.sh; cp temp_tr logs_run_now.sh
  
  . ./logs_run_now.sh
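
As an aside, the month-length if/elif chain above could be avoided where GNU date is available (it is under Cygwin, which these scripts run from): date will do the calendar arithmetic, including month, year and leap-year rollover, and hand back the year/month/day path segments directly. A sketch of the idea, not the original method, assuming dir4/dir5/dir6 hold year/month/day as in the listings above:

  # Sketch: let GNU date step back one day at a time (handles month lengths and leap years)
  for ((i=1; i<=21; i++)); do
      day_path=$(date -d "${dir4}-${dir5}-${dir6} -${i} days" +%Y/%m/%d)
      dir6_path="s3://my-clb-logs/${dir0}/${dir1}/${dir2}/${dir3}/${day_path}/"
      echo -n "size of ${dir6_path} : "
      aws --profile user_s3 s3 ls ${dir6_path} --human-readable --summarize | grep 'Total Size'
  done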