Tuesday, August 13, 2013

Simple load test script with email alert

Today's simple Bash script is for those situations where you want to keep an eye on a Linux server's load average and have it email you if the load exceeds a value you specify, without setting up full-blown monitoring (Nagios, Zabbix, Zenoss, etc.).

Here's an overview of the script and what it does (and as always, the most up-to-date version of the script will be in my GitHub repo):

#!/bin/bash

# --------------------------
# Simple Load Test Utility
# --------------------------
# Written by James Berger
# Last updated: August 13th 2013


# Description:
# ------------
# This grabs the five minute load average via the uptime command.
# It compares it to the max_load variable, and if the current load
# average is higher than the max_load variable, it sends you an
# email alert.


# Setting our variables
#----------------------
# (note that the max load variable is set to a very low value by
#  default, 0.001. This is for testing purposes, so you can verify
#  that the script works. If you're using it for real, you should
#  set it to a more reasonable value, like 1 or 0.75)
max_load=0.001
email_destination=generic-email-address@somewhere.com

# We call uptime and use awk to grab the second-to-last field,
# which is the five minute load average (counting fields from the
# end is more reliable than counting from the front, since the
# uptime portion of the output changes width over time), then we
# use translate (tr) to strip out the trailing comma so we have a
# nice float value instead of a string.
current_load=$(uptime | awk '{print $(NF-1)}' | tr -d ',')


# Comparing the current load to the max load
#--------------------------------------------
# Now we'll use an if statement to see if the current load exceeds
# what was set for the max load. Bash can't compare floating point
# numbers on its own, so we pipe the comparison out to bc to do a
# simple 'greater than?' check. If it passes the check, we email the
# person specified in our email_destination variable, and if not,
# we simply exit.
if [ $(echo "$current_load > $max_load" | bc) -eq 1 ]
then
echo -e "ALERT.\n This is a message from the Simple Load Test Utility. The current load of $current_load has exceeded the maximum defined load of $max_load on the server $(hostname).\n This alert was created on $(date) (local time on $(hostname))." | mail -s "High load alert on $(hostname)" $email_destination
else
  exit 0
fi
exit 0
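
Since the script only takes a single reading each time it runs, the natural way to use it is from cron. Here's a minimal sketch of a crontab entry, assuming you've saved the script as /root/load-check.sh and made it executable (the path is just an example):

# Hypothetical crontab entry: take a load reading every five minutes
*/5 * * * * /root/load-check.sh >/dev/null 2>&1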

Monday, May 13, 2013

Bash One Liner to Get a Frequency Count for IP Connections with Netstat

So you want to see how many connections a given IP has open, using netstat? Here's a quick Bash one-liner to get a frequency count for each IP. Note that this counts by the external IP (the foreign address) and ignores the port being connected to, so 8.8.8.8:443 and 8.8.8.8:80 will be treated as two connections from 8.8.8.8.

So we run netstat with the -pan flags (word-ish sounding and easy to remember: show the Program that's using the connection, All connections, and Numeric addresses instead of hostnames / URLs). We pipe that to awk to print the 5th field (Foreign Address), then to cut, where we discard the port numbers by keeping only the first field before the colon that separates the IP from the port. Then we pipe it to sort to group the IPs together, then to uniq with the -c flag to get a count of how many times each IP shows up, and then to sort again with the -n (sort numeric, very important) and -r flags so the highest count is at the top. And we're done!



netstat -pan | awk '{print $5}' | cut -d ":" -f 1 | sort | uniq -c | sort -nr 


You'll get something like this (IPs changed to protect the guilty): 



[root@example ~]# netstat -pan | awk '{print $5}' | cut -d ":" -f 1 | sort | uniq -c | sort -nr 
    483 127.0.0.1
    119 8.8.8.8
    104 8.8.8.9
     84 8.8.8.10
     70 8.8.8.11
...
[root@example ~]#

You can see the connection count on the left: 483 current connections involving 127.0.0.1, 119 from 8.8.8.8, and so on.
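
If you only care about connections in a particular state, say ESTABLISHED, you can filter with grep before the counting stages kick in. It's the same pipeline with one extra step:

netstat -pan | grep ESTABLISHED | awk '{print $5}' | cut -d ":" -f 1 | sort | uniq -c | sort -nr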

Wednesday, May 8, 2013

A Bash Script for Logging Into a Website and Checking a Page

Let's say you want to monitor a page. And to get to the page, you have to log into the website. And you just want something quick without having to set up proper monitoring. 

That was the problem that I faced this afternoon. So I came up with this handy Bash script:

(Note: You can also view the most up-to-date version of this script on my GitHub page here)

 
#!/bin/bash

# --------------------------
# Simple Page Check Utility
# --------------------------
# Written by James Berger
# Last updated: May 22nd 2013


# Description:
# ------------
# This runs a quick check to see if our user can log in
# Curl logs into the site and places the result into a file,
# Then we grep the file and see if a string that shows up
# when you're logged in is present. The results with a time
# stamp are echoed out and stored in a log file as well
# so you can run this with a cron job every x number of
# minutes to see if the site goes down at a specific time.
 
# Setting our variables
results_file="/home/YourUserNameHere/page-check-results.html"
target_url="https://www.YourSiteHere.com/logon"
phrase_to_check_for="PhraseOnLoggedInPage"
log_file="/home/YourUserNameHere/page-check-server-status.txt"

# A function for checking the contents of the results file, this
# is for testing purposes to make sure that the bits of the script
# that work with that file are functioning correctly. If the script
# isn't working like you think it should, you can call this at
# different points in the script to see if the file is being
# updated, cleared out, etc.
check_results_file() {
  echo -e "HERE ARE YOUR RESULTS:"
  echo -e "--------------------------------------------"
  cat "$results_file"
  echo -e "--------------------------------------------\n"
}

# Quick function to create a new line, little easier than typing
# all of the below out each time.
newline() {
  echo -e "\n"
}

# Quick function to print out a separator line
dashline() {
  echo -e "---------------------------------"
}

# Run clear to keep things tidy
clear


echo -e "#############################"
echo -e "# Simple Page Check Utility #"
echo -e "#############################\n"

echo -e " Current variables:"
dashline;
echo -e "The target URL is: "$target_url
echo -e "The phrase to check for is: \""$phrase_to_check_for"\""
echo -e "The file the results are stored in is: "$results_file
echo -e "The status of each check is being logged to: "$log_file
dashline;
newline;


echo -e " Results file:"
dashline;
# Create our results file if it doesn't already exist
echo -e "Checking to see if there's a file to store the results in."
if [ ! -f "$results_file" ]
  then
    echo -e "No results file found."
    echo -e "Creating results file" $results_file
    touch "$results_file"
  else
    echo -e "Results file currently exists."
    # Remove our previous results if they exist
    echo -e "Clearing out previous contents of results file."
    echo -e "" > "$results_file"
fi
dashline;
newline;


#check_results_file;
echo -e " HTTP status code of target URL"
dashline;
# This just makes sure that the page shows up to begin with by getting the HTTP status code of the login page.
# If it's something other than 200, then we won't be able to log in anyway.

echo -e "Running HTTP status check... "
http_status="$(curl -sL -w "%{http_code}\\n" $target_url -o /dev/null)"
echo -e "HTTP status code for the page at the target URL is: "$http_status
dashline;
newline;


echo -e " Phrase check:"
dashline;
# This logs into the site using the creds they've provided for us and the form fields from their login form.
echo -e "Running curl on the target URL."
# You can set the user agent differently if need be.
curl -A "Mozilla/4.73 [en] (X11; U; Linux 2.2.15 i686)" \
--cookie cjar --cookie-jar cjar \
--data "userName=YourUserNameHere" \
--data "password=YourPasswordHere" \
--data "login=Login" \
--location $target_url > "$results_file"

newline;
#check_results_file;
echo -e "Checking the contents of the results file."

echo -e "Checking for the phrase \""$phrase_to_check_for"\""
if grep -q "$phrase_to_check_for" "$results_file"
  then
    dashline;
    newline;
    echo -e " Result:"
    dashline;
    echo -e "Found the specified phrase in the results file.\n"
    echo -e "Able to successfully log in."
    echo -e "\r\nLogin successful for $(date) with the HTTP status code $http_status" >> $log_file
    echo -e "Updating the log file now with the result for $(date)."
  else
    echo -e " Result:"
    dashline;
    echo -e "Did not find it in" $results_file
    echo "Failed to log in."
    echo -e "\r\nLogin failed for $(date) with the HTTP status code $http_status" >> $log_file
    echo -e "Updating the log file now with the result for $(date)."
fi
dashline;
newline;
newline;
newline;
newline;
newline;


There are a few small changes I've made that aren't in this version (the latest version will be on my GitHub page here) and a few more things that I have planned for this.

1. Log the results to a file with a time and date stamp (done).
2. Add in a section that will email you if it registers a failure (in progress).
3. Allow the user to set the variables interactively from the command line so they don't actually have to edit the script, making it a bit more user friendly.
4. Possibly setting it so that the variables can be specified as command line arguments.
5. Sticking in a bit that will tell you what the usage is for the script if you don't specify any arguments (a rough sketch of 4 and 5 follows below).
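
Here's roughly what I have in mind for items 4 and 5 — just a sketch with hypothetical option letters, not the final version:

# Rough sketch for items 4 and 5: getopts-based argument handling so
# the variables don't have to be edited into the script by hand.
usage() {
  echo "Usage: $0 -u target_url -p phrase_to_check_for -r results_file -l log_file"
  exit 1
}

# Print the usage if no arguments were given at all
[ $# -eq 0 ] && usage

while getopts "u:p:r:l:" opt; do
  case $opt in
    u) target_url=$OPTARG ;;
    p) phrase_to_check_for=$OPTARG ;;
    r) results_file=$OPTARG ;;
    l) log_file=$OPTARG ;;
    *) usage ;;
  esac
done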

The one thing I don't like about the email option is that it requires you to have a working mail server on the box that you're running this script on. Not every server out there will have one, so I may need to see if there's some alternative mechanism that can be used to email basic alerts to people.
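
One candidate is curl's built-in SMTP support, which hands the message to an external mail server instead of relying on a local one. Here's a minimal sketch — the server, credentials and addresses are all placeholders:

# Hypothetical sketch: send a basic alert through an external SMTP
# server with curl, so no local mail server is needed. The server,
# user and addresses below are placeholders.
echo -e "Subject: Page check failed on $(hostname)\n\nLogin check failed at $(date)." > /tmp/page-check-alert.txt
curl --ssl-reqd \
  --url "smtps://smtp.example.com:465" \
  --user "alerts@example.com:YourPasswordHere" \
  --mail-from "alerts@example.com" \
  --mail-rcpt "you@example.com" \
  --upload-file /tmp/page-check-alert.txt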

Monday, April 29, 2013

Find Unused MySQL Databases

Here's a quick MySQL trick. Let's say you've inherited a legacy MySQL database server and you have no idea what databases are in use.

One method of determining what databases are old and dusty is to do the following:



SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(MAX(UPDATE_TIME))) as last_update FROM information_schema.tables WHERE TABLE_SCHEMA='YourDatabaseNameHere' GROUP BY TABLE_SCHEMA;


That should get you something like this:




mysql> SELECT FROM_UNIXTIME(UNIX_TIMESTAMP(MAX(UPDATE_TIME))) as last_update FROM information_schema.tables WHERE TABLE_SCHEMA='blah' GROUP BY TABLE_SCHEMA;
+---------------------+
| last_update         |
+---------------------+
| 2011-02-15 08:24:31 | 
+---------------------+
1 row in set (0.05 sec)

mysql> 



That will let you see when the database was last updated, and then you can remove the ones that haven't been touched in ages. (One caveat: UPDATE_TIME is only maintained for MyISAM tables, so InnoDB tables will show up as NULL here.)
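
And if you'd rather survey every database on the server in one pass instead of checking them one at a time, you can drop the WHERE clause and let the GROUP BY do the work. Same idea, just broader:

SELECT TABLE_SCHEMA, MAX(UPDATE_TIME) AS last_update FROM information_schema.tables GROUP BY TABLE_SCHEMA ORDER BY last_update;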

Friday, April 5, 2013

DU - Show Directories Over A Certain Size


Space seems to be something there's never enough of, and when you need to find out what's using it up, du is your friend. Getting useful output from it is half the battle though, and you'll often say to yourself "I only want to find directories over a certain size!" Let's say you only care about directories that are over 2 gigs - how do we filter the output from du to show that?

Here's my favorite method:


du --max-depth=5 --block-size=GiB | grep "^[2-9]" | sort -n

So what's going on here? 

First, we're telling du to scan no deeper than five levels of sub-directories - this will help speed things up. We can always increase this number later if we need to.

Next, we're telling it to show the results in gigabytes, which will be much more human-friendly for us to read.

After that, we pipe it into grep, where we use a simple regular expression to include only lines that start with a digit from two to nine. This excludes things that are smaller than two gigs (with one gotcha, which we'll get to below).

The final step is running it through sort with the -n (numeric) option, so we can see the results sorted in order by smallest to largest.
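
Speaking of that gotcha: a directory weighing in at, say, 12GiB starts with a '1', so ^[2-9] skips right over it (same story for anything from 100-199GiB). If you want to be thorough, a slightly fancier pattern also matches anything that starts with two or more digits:

du --max-depth=5 --block-size=GiB | grep -E "^([2-9]|[0-9][0-9])" | sort -n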

We'll run this on my local test machine so you can see what the results will look like. I've switched the block size to megabytes, because it's a very small test server - there's nothing even in the 2 gig range! We're using a folder depth of two as well, so we can have a nice compact example.


[root@localhost /]# du --max-depth=2 --block-size=MiB | grep "^[2-9]" | sort -n
2MiB ./etc/pki
2MiB ./lib/udev
2MiB ./var/www
3MiB ./lib64/security
3MiB ./lib/kbd
8MiB ./bin
8MiB ./usr/include
9MiB ./etc/gconf
25MiB ./lib64
27MiB ./boot
28MiB ./usr/sbin
33MiB ./lib/firmware
33MiB ./usr/libexec
35MiB ./etc
48MiB ./usr/src
65MiB ./var/lib
83MiB ./var/cache
99MiB ./lib/modules
304MiB ./var/log
316MiB ./usr/lib
355MiB ./usr/lib64
452MiB ./var
3836MiB .
[root@localhost /]# 



And that's all there is to it! Short, sweet and simple.