Our legacy site: www.webwob.com   

BASH scripting methods (page 2)

This page will cover more BASH scripting methods, following on from the legacy site pages.


Areas covered on this page

1. wget timeouts and tries

2. wget view headers for a URL

3. wget save headers to the response output file

4. Reverse proxy with ssh

5. Grep for ip addresses

6. Trim lines at both ends

7. Check number of items before looping

8. Loop over terms in a string

9. Process the lines from a file

10. Report back main ip addresses found in a file

11. Replace an ip address in a file

12. Use arrays and loops to store series of data

13. Loop over lines in a text file

14. Delete random grepped lines from file


 

    

wget timeouts and tries

# clear the injectors' disc space (the timeout and tries limits are there in case the box has gone away)

wget --no-check-certificate --timeout=10 --tries=1 http://54.xxx.yyy.123/job/clear_z_drive/build?delay=0sec

wget --no-check-certificate --timeout=10 --tries=1 http://54.yy.xx.202/job/clear_z_drive/build?delay=0sec



wget --connect-timeout=900 --dns-timeout=900 --read-timeout=900 --timeout=900 https://bamboo.adm.server.net/deploy/viewEnvironment.action?id=59604996
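If an injector box has gone away, the short 10-second timeout calls above simply fail; a minimal sketch (reusing the placeholder address from above) of checking wget's exit status so the rest of the script carries on regardless:

    if ! wget --no-check-certificate --timeout=10 --tries=1 -q -O /dev/null "http://54.xxx.yyy.123/job/clear_z_drive/build?delay=0sec"; then
        echo "injector did not respond - carrying on anyway"
    fi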

wget view headers for a URL

wget -S --spider www.google.com

This command downloads nothing (spider mode), but the headers it prints (to stderr) look like:

    HTTP/1.1 302 Found
    Cache-Control: private
    Content-Type: text/html; charset=UTF-8
    Location: http://www.google.co.uk/?gfe_rd=cr&ei=C18xWNq......8we57a7gCA
    Content-Length: 261
    Date: Sun, 20 Nov 2016 08:30:03 GMT
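A hedged aside: wget writes those headers to stderr, so if you only want the final status line you can capture it like this (sketch only):

    status=$(wget -S --spider www.google.com 2>&1 | grep "HTTP/" | tail -n1)
    echo status line is ${status}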

wget save headers to the response output file

wget -E --save-headers http://${controller}/job/${project} -O test.html

$ wget -E --save-headers http://localhost:9091/job/191%20reboot%20controller/build?delay=0sec -O build3.html

That last line also downloads nothing, but the saved output file looks like:

HTTP/1.1 201 Created
Date: Sat, 15 Oct 2016 06:36:37 GMT
X-Content-Type-Options: nosniff
Location: http://localhost:9091/queue/item/566/
Content-Length: 0
Server: Jetty(9.2.z-SNAPSHOT)
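Because --save-headers puts the response headers at the top of the output file, the queue item URL can be pulled back out of it afterwards; a minimal sketch, assuming the build3.html file from above (the tr strips the carriage return that HTTP headers carry):

    queue_url=$(grep "^Location:" build3.html | awk '{ print $2 }' | tr -d '\r')
    echo build queued at ${queue_url}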

Reverse proxy with ssh

(I'll add more detail later; for now I just want to capture this working solution:)

ssh -o "strictHostKeyChecking no" -i /cygdrive/c/curl/Graylog_openSSH2.ppk Administrator@54.yy.xxx.50 -R 9091:localhost:9091

Had to do this as well to get it working:

$ chmod og-r Graylog_openSSH2.ppk

$ chmod og-wx Graylog_openSSH2.ppk
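For reference, -R is standard OpenSSH remote port forwarding: requests arriving on port 9091 of the remote box are sent back to port 9091 on this machine. A sketch of the same command with -N added, so it only holds the tunnel open and never starts a remote shell, assuming the same key and host as above:

    # hold the tunnel open without running a remote command
    ssh -N -o "StrictHostKeyChecking no" -i /cygdrive/c/curl/Graylog_openSSH2.ppk -R 9091:localhost:9091 Administrator@54.yy.xxx.50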

Grep for ip addresses

(the last one starts with a '{', but that character is not part of the search term itself; it is just what appears on the lines we are actually looking for)

grep -Eo '\{([0-9]{1,3}\.){3}[0-9]{1,3}' bhbs_apigee_soak.lrs

grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' bhbs_apigee_soak.lrs

grep -o '{[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' bhbs_apigee_soak.lrs

    

Trim lines at both ends

Sometimes we want to knock off a character or two from a line:

grep -o '{[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' ucs_one_hour.lrs | cut -c 2-

echo "somestring1" | rev | cut -c 2- | rev

The 'rev' reverses the string so that 'cut' can trim from the other end; the second 'rev' restores the original order. Experiment to find what you need.
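A pure-bash alternative to rev/cut, kept here as a sketch, is parameter expansion; it avoids spawning any extra processes:

    s="somestring1"
    echo "${s:1}"            # drop the first character
    echo "${s%?}"            # drop the last character
    echo "${s:1:${#s}-2}"    # drop one character from each end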

    

Check number of items before looping

TERMS=$(echo ${graphs} | awk '{ print NF }')

echo TERMS = $TERMS

if [ ! "${graphs}" ]; then

    echo TERMS empty. Will include all graphs

fi

if [ "$TERMS" != "0" ]; then

    echo TERMS test. Will include specified graphs

fi
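A hedged aside: if awk feels heavy just for a count, wc -w returns the same number of whitespace-separated terms:

    TERMS=$(echo ${graphs} | wc -w)
    echo TERMS = $TERMS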

    

Loop over terms in a string

for ((i=1;i<=TERMS;i++)); do

    val=$(echo ${graphs} | awk -v i="$i" '{ print $i }')

    echo $val

    echo ____

done

echo after awk loop

    
Got this off the net

You are getting confused between $i the shell variable and $i the ith field in awk. You need to pass the value of the shell variable into awk using -v:

#!/bin/bash

for i in {1..5}; do

    <command1> | awk -v i="$i" '{print $i}' | <command2>

done

This will let command2 process each column from the output of command1 separately.

(the question was titled "how to use awk to print columns in a loop")
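Another way to walk the terms, sketched here, is to read them straight into a bash array and loop over that, which avoids calling awk once per term:

    read -ra terms <<< "${graphs}"
    for val in "${terms[@]}"; do
        echo ${val}
        echo ____
    done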

    

Process the lines from a file

if [ ! -f rep85.txt ]; then

    echo "rep85.txt File not found!"

    exit 0

fi

echo after rep85 check

--------------------------------------------------------

declare -a myarray

let i=0

ready="no"

while IFS=$'\n' read -r line_data; do

            

    if echo "${line_data}" | grep -q ':'

    then

        echo got a line - ${line_data}

        myarray[i]="${line_data}" # Populate array.

        ((++i))

    else

        echo not found a ':' on this line

    fi

    

    if [ "${line_data}" = "================================" ] ; then

        #echo "yes"

        ready="yes"

    fi

    

    if [ "${line_data}" = "analysis report generator finished" ] ; then

        #echo "no"

        ready="no"

    fi

    

    if [ "$ready" = "yes" ] ; then

        myarray[i]="${line_data}" # Populate array.

        ((++i))

    fi

    

done < rep85.txt
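To see what actually got collected, a short sketch that prints the array back out after the loop:

    echo collected ${#myarray[@]} lines
    for ((j=0; j<${#myarray[@]}; j++)); do
        echo "line $j: ${myarray[$j]}"
    done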


    

Report back main ip addresses found in a file

grep -o '{[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' ${scenario} | cut -c 2- > injectors_original.txt

cat injectors_original.txt

injectors_now=$(cat injectors_original.txt)

TERMS=$( echo ${injectors_now} | awk 'BEGIN {OFS="\n"} {FS=" "} { print NF }')

for ((i=1;i<=TERMS;i++)); do

    val=$(echo ${injectors_now} | awk -v i="$i" 'BEGIN {OFS="\n"} {FS=" "} { print $i }')

    echo injector $i currently is $val

    echo ____

done

echo after awk loop


    

Replace an ip address in a file

NOTE: This is a technique I've used when working with other special characters in 'sed'. Working with dots is tricky, so replace all dots with '@', then do your sed, then put the dots back. Do this in the file AND in the variables used.

########################## replace an injector ############################

#NEW directory update to cygwin...

#scenario1="Z:\GitHub\PerformanceTesting\scenarios\allweb\allweb_run_ci6.lrs"

scenario1=${scenario}

echo $scenario1

scenario2=$(echo ${scenario1} | sed -e 's~\\~/~g' | sed -e 's~:~~g')

echo $scenario2

scenario3=$(echo /cygdrive/${scenario2})

echo $scenario3



old1=$(echo ${old_inj[$i]} | sed 's#\.#@#g')

echo ${old1}

new1=$(echo ${new_inj[$i]} | sed 's#\.#@#g')

echo ${new1}



sed -i 's#\.#@#g' ${scenario3}

sed --in-place "s#${old1}#${new1}#g" ${scenario3}

sed -i 's#@#\.#g' ${scenario3}

########################## replace an injector ############################
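An alternative, noted here as a sketch only, is to escape the dots in the search pattern instead of swapping them out of the whole file; the scenario file is then only touched by the real replacement:

    old_esc=$(echo ${old_inj[$i]} | sed 's#\.#\\.#g')    # e.g. 10.0.0.1 becomes 10\.0\.0\.1
    sed --in-place "s#${old_esc}#${new_inj[$i]}#g" ${scenario3}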


    

Use arrays and loops to store series of data

grep -o '{[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' ${scenario} | cut -c 2- > injectors_original.txt

cat injectors_original.txt

injectors_now=$(cat injectors_original.txt)

TERMS=$( echo ${injectors_now} | awk 'BEGIN {OFS="\n"} {FS=" "} { print NF }')

for ((i=1;i<=TERMS;i++)); do

    val=$(echo ${injectors_now} | awk -v i="$i" 'BEGIN {OFS="\n"} {FS=" "} { print $i }')

    echo injector $i currently is $val

    old_inj[$i]="${val}"

    echo ____

done

echo after first awk loop

num_machines=$( echo ${injectors} | awk 'BEGIN {OFS="\n"} {FS=" "} { print NF }')

for ((i=1;i<=num_machines;i++)); do

    val=$(echo ${injectors} | awk -v i="$i" 'BEGIN {OFS="\n"} {FS=" "} { print $i }')

    echo val is ${val}

    new_inj[$i]="${val}"

    echo ___________

done

echo after SECOND awk loop
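A quick sanity check after both loops, sketched here on the assumption that the two lists are the same length, prints the pairs side by side:

    for ((i=1;i<=TERMS;i++)); do
        echo "will replace ${old_inj[$i]} with ${new_inj[$i]}"
    done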

    

Loop over lines in a text file


shell script to run from within Jenkins

I did have this set higher up in the script but I'm not sure if it's needed for this section:
    SAVEIFS=$IFS
    IFS=$'\n'

    i=0
    filename="${WORKSPACE}/dir_list"
    echo Start
    while read p
    do 
        echo $p
        ((++i))
        echo item count is $i
        rm -rf $p
    done < $filename

If you include the top part, also add this line at the end:
    IFS=$SAVEIFS
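A variant of the same loop, as a sketch: setting IFS= on the read itself means there is nothing to save and restore, and quoting $p means paths with spaces survive the rm:

    i=0
    filename="${WORKSPACE}/dir_list"
    echo Start
    while IFS= read -r p
    do
        echo "$p"
        ((++i))
        echo item count is $i
        rm -rf "$p"
    done < "$filename"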

    
    

Delete random grepped lines from file


shell script to run from within (cygwin) BASH

I wrote this to trim a log file down by line count, but with the deleted lines chosen at random. It can be used to keep the number of matching lines in a log file consistent from day to day, for regular performance-testing data input counts. In this case it deletes lines containing "twochicken" until only 25 of them are left in the file; the lines it deletes are picked at random from the whole file.

value=$(echo "twochicken")
echo -n "lines count currently: "
grep "${value}" items.log | wc -l

c0=$(grep "${value}" items.log | wc -l)

while [ "${c0}" -gt "25" ]; do

    echo "c0 is ${c0}"

    c1=$(grep "${value}" items.log | wc -l)
    
    echo $((1 + RANDOM % $c1))

    c2=$(echo $((1 + RANDOM % $c1)))

    echo $c2

    #grep -m$c2 "${value}" items.log | tail -n1
    #grep -n -m$c2 "${value}" items.log | tail -n1

    #grep -n -m$c2 "${value}" items.log | tail -n1 | awk 'BEGIN {FS=":"} {print $1}'
    c3=$(grep -n -m$c2 "${value}" items.log | tail -n1 | awk 'BEGIN {FS=":"} {print $1}')
    echo c3 is $c3
    echo "line to delete is ${c3}:"
    cat items.log | sed -n "${c3},${c3} p"
    sed -i "${c3}d" items.log

    echo -n "final check - lines left now: "
    grep "${value}" items.log | wc -l

c0=$(grep "${value}" items.log | wc -l)

done
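A hedged alternative: if shuf is available (it ships with GNU coreutils, which cygwin provides), the random matching line number can be picked in a single step instead of the grep -m / tail pair:

    c3=$(grep -n "${value}" items.log | shuf -n 1 | awk 'BEGIN {FS=":"} {print $1}')
    sed -i "${c3}d" items.log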