Copyright (C) 2000 Paul Ritchey <pritchey@arl.army.mil>


This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.




This document's purpose is to give you the basic steps necessary to install and configure the Snorticus collection of scripts.


Version Covered:

This document pertains to the following scripts and version numbers:

hourly_wrapup.sh version 1.0

retrieve_wrapup.sh version 1.0


Future Plans:

I want to add a script that will push rule files out to the sensors, allowing the analyst to easily maintain generic parts (applying to all sensors), site-specific parts (applying to a specific site) and subnet-specific parts (applying to a specific subnet at a specific site). This will significantly ease rule file maintenance and ensure that all sites share the same basic rules.


About Snorticus:

Snorticus is a collection of useful scripts that support the automatic retrieval and processing of Snort data collected from multiple sensors. The basic concept is to have multiple deployed sensors collecting data. That data is 'wrapped up' once an hour and pulled back to a box where it is further analyzed (via SnortSnarf) and then viewed by analysts through a web interface. Snorticus gives you the ability not only to manage data from multiple sites, but also to monitor multiple subnets at a time with the same sensor (accomplished by launching multiple instances of Snort on that sensor). While individual sensor (or 'site') data is kept separated, if a sensor is monitoring multiple subnets, that data is automatically combined so that the multiple Snort instances on the same sensor appear as one. Snorticus supports sites across time zones - it determines the proper date/time to retrieve from the sensor so that all data residing on the analyst box is at most 1 hour old.


System Requirements:

As far as hardware/OSes go, the scripts don't require much. The scripts have been tested and work under Solaris (2.6 - 7) and Linux (relatively recent RedHat). Your mileage may vary under other flavors. If you do deploy using an OS other than Solaris or Linux, please let me know. If you need to make any changes (like the ones I've had to make between Linux and Solaris), please pass them along so that I can include them in future updates.


Software Requirements:

Before continuing, please obtain the required software (Snort, SnortSnarf, ssh and GNU 'date'; optionally a web daemon), and install and configure it. During the installation of Snorticus it is assumed that this software is properly installed, configured and working. The installation and configuration of that software will not be covered in this document except for any special requirements Snorticus may have.


Please note that gnu's 'date' command comes standard with Linux (at least with RedHat). If you are deploying to multiple sensors running different flavors of unix, make sure the retrieve_wrapup.sh script can find it in the SAME directory on each box. (You can configure the directory at the top of the retrieve_wrapup.sh script.)
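As a quick sanity check, gnu 'date' supports relative date arithmetic (which the stock Solaris 'date' does not). The previous hour's timestamp, in the format the scripts use, can be produced like this:

```shell
# GNU date computing the previous hour's yyyymmdd.hh stamp. If this command
# errors out, you are probably running a non-GNU date; install GNU date and
# point retrieve_wrapup.sh at its directory.
date -d '1 hour ago' '+%Y%m%d.%H'
```

Running this on each box is an easy way to confirm the correct 'date' is first in the configured path.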


Ssh is used for scp'ing the wrapup files between the sensor and the analyst box. Please make sure that the account used to scp from the analyst box to the sensors can connect without being prompted for a password; otherwise the cron jobs won't work properly.
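One common way to get password-less scp is key-based authentication. A sketch, assuming OpenSSH-style files (the key string and the home directory below are placeholders, not real values):

```shell
# On the analyst box, generate a key pair with an empty passphrase:
#   ssh-keygen -t rsa
# Then append the PUBLIC key to the retrieval account's authorized_keys file
# on each sensor. /tmp/demo-sensor-home stands in for that account's real
# home directory, and the key string is a placeholder.
home=/tmp/demo-sensor-home
mkdir -p "$home/.ssh"
echo "ssh-rsa AAAA...placeholder analyst@box" >> "$home/.ssh/authorized_keys"
# ssh refuses keys kept in group/world-accessible locations:
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
```

After setting this up, test with a manual scp from the analyst box; it should complete with no password prompt.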


Sensor Installation:

These are the steps needed to install the hourly_wrapup.sh script on the sensor box. This box should have gnu 'date', ssh and Snort.


  1. Build and install Snort, gnu 'date' (if non-Linux box) and ssh.

  2. Create a directory where the scripts and Snort log files will reside. Example: /home/snort.

  3. In the directory created in step #2, create 'LOGS', 'rules' and 'scripts' directories. The 'LOGS' directory is where the output of Snort will reside, the 'rules' directory is where the per-subnet rule files will reside, and the 'scripts' directory is where the hourly_wrapup.sh script will reside.

  4. Copy the hourly_wrapup.sh script into the 'scripts' directory created in step #3.

  5. For Linux users, change the first line of the hourly_wrapup.sh script to '#!/bin/csh' instead of '#!/usr/bin/csh'. Solaris users should leave it as '#!/usr/bin/csh'.

  6. Open the hourly_wrapup.sh script in your favorite editor.

  7. The top of the script contains a few items that can be custom tailored to your sensor/site. The user editable section is clearly marked and contains only a few items.

      - sensor_site: This is the name of the site being monitored by this sensor. The name is used not only on the sensor but also on the analyst box to keep the data from each individual sensor separated. Examples: MySite1, MySite2

      - network_interface: Set this to the interface Snort is to use for monitoring. On most Linux boxes this will be 'eth0', and for most Solaris boxes this will be 'hme0'.

      - data_expires: Unfortunately, we all have limited hard drive space. Set this value to the number of days you want to keep data on the sensor. The hourly_wrapup script will automatically clean up old data files so that you hopefully won't run out of space on your sensor.

      - rules_directory: This is the directory where the rules files for each of the subnets Snort is to monitor reside. This should be set to the path to the 'rules' directory created in step #3. Rules files and their configuration will be covered later.

      - log_directory: This is the directory where Snort is to log all of its data. This should be set to the path to the 'LOGS' directory created in step #3.

      - network_list_file: Set this to the path and name of the network configuration file which is used to tell the hourly_wrapup.sh script what subnet(s) are to be monitored on this sensor. The network configuration file will be covered later.

  8. After making the necessary changes to the hourly_wrapup.sh script, save it and close the file.

  9. Open the network.cfg file. You specified its name and location in the hourly_wrapup.sh script in step #7.

  10. Add each subnet this sensor is to monitor on a separate line. To specify a subnet, use CIDR notation but leave the '/xx' ending off. For example, to monitor the 192.168 subnet you would specify '192.168.0.0' (without the single quotes) on a line by itself. This would specify a Class B network. Individual lines can be marked as comments by starting the line with a pound sign ('#').

  11. After making the proper entries to your network.cfg file, save it and close the file.

  12. Change directories into the directory you specified in the hourly_wrapup.sh script where the rules files for each subnet are to reside.

  13. Create a normal Snort rule file for each subnet specified in your network.cfg file. The name of the file should be in the format 'rules.xxx.xxx.xxx.xxx', substituting the appropriate numbers for 'xxx' for the subnet the rule file applies to. Example using the subnet specified in step #10: rules.192.168.0.0

  14. Create an entry in root's crontab file similar to the following:

    01 * * * * /data/home/snort/scripts/hourly_wrapup.sh > /dev/null 2>&1

    This runs the hourly_wrapup.sh script 1 minute after the start of every hour. The past hour's data will be tar'ed and gzip'ed up. An instance of Snort will be launched for each subnet to be monitored. Any data in the logging directory aged beyond the specified retention period will automatically be deleted so the sensor is kept clean and space remains available for new data.
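The sensor-side setup described above - the LOGS/rules/scripts directories, the network.cfg file and the per-subnet rule file names - can be sketched as follows (using /tmp/snorticus-demo in place of a real base directory such as /home/snort):

```shell
# Create the base directory and its three subdirectories.
base=/tmp/snorticus-demo
mkdir -p "$base/LOGS" "$base/rules" "$base/scripts"

# network.cfg: one subnet per line (CIDR notation without the '/xx' ending),
# lines starting with '#' are comments.
cat > "$base/scripts/network.cfg" <<'EOF'
# subnets monitored by this sensor
192.168.0.0
EOF

# One Snort rule file per subnet, named rules.<subnet>.
touch "$base/rules/rules.192.168.0.0"
ls "$base/rules"
```

The hourly_wrapup.sh script is then copied into the 'scripts' directory and pointed at these paths in its user-editable section.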


Analyst Box Installation:

These are the steps necessary to install the analysis part of the Snorticus scripts. This box requires SnortSnarf, ssh and a web daemon if you want to view the SnortSnarf output remotely via a web browser.


  1. Build and install ssh and a web daemon (if you desire). Make sure this box can connect to the sensor box(es) without being prompted for a password.

  2. Create a directory where the scripts and Snort logs will reside. Example: /home/snort.

  3. In the directory created in step #2, create a 'LOGS' and 'scripts' directory. The 'LOGS' directory is where the hourly Snort wrapup data pulled down from the sensors via scp will reside. The 'scripts' directory is where the Snorticus scripts will reside.

  4. Copy the retrieve_wrapup.sh and snortsnarf.pl scripts into the 'scripts' directory created in step #3.

  5. For Linux users, change the first line of the retrieve_wrapup.sh script from '#!/usr/bin/csh' to '#!/bin/csh'. Please note that the retrieve_wrapup.sh script has NOT been tested under Linux yet. If anyone finds any problems, please let me know so that I can fix them (and if possible please provide the fix).

  6. Open the retrieve_wrapup.sh script in your favorite editor.

  7. The top of the script contains a few items that can be custom tailored to your sensor/site. The user editable section is clearly marked and contains only a few items.

      - log_directory: This is the directory created in step #3 where the retrieved data from the sensor(s) will reside. This is also where the resulting web pages from running SnortSnarf will reside. This path must be the SAME as the path on ALL sensor(s) where the hourly wrapup data is to be scp'ed from. HINT: Configure both the sensor(s) and the analyst box with the same directory structures.

      - data_expires: This determines how long retrieved data (and the generated SnortSnarf output) will remain on the analyst box before automatically being deleted. Tailor this to suit your requirements.

      - snortsnarf_path: This is the path to the snortsnarf.pl script.

      - gnudate_path: This is the path to the gnu 'date' command. Make sure that it is accessible from the same path on ALL sensors.

      - retrieve_account: This is the account used to scp data from the sensor(s). Make sure that this account can connect via ssh WITHOUT being prompted for a password.

  8. After making the necessary changes to the retrieve_wrapup.sh script, save it and close it.

  9. Create an entry for each sensor in root's crontab file using the following format:

    15 * * * * /home/snort/scripts/retrieve_wrapup.sh MySite1 mysensor.blah.blah > /dev/null 2>&1

    This runs the retrieve_wrapup.sh script 15 minutes after the top of every hour. It will retrieve the previous hour's data (previous hour for the sensor, not the analyst box, in case of time zone differences) for MySite1, which is located on the sensor named mysensor.blah.blah. Remember: the path where it looks for the MySite1 data on the sensor is derived from the log_directory setting. Be sure you configure both the sensor and analyst boxes with the same directory structure.

  10. Finally, if you are going to use a web server, create a link in your web document directory pointing to the LOGS directory (the value of the log_directory setting in the script). From your web browser you will then be able to select the site, followed by the day you want to look at. After selecting the day, you will see a bar at the top allowing you to select the specific hour you want to see (similar to Shadow).
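The web-server link in the last step might look like this (both paths are hypothetical stand-ins; substitute your real document root and log_directory value):

```shell
# Stand-in directories for the web document root and the LOGS directory.
mkdir -p /tmp/demo-htdocs /tmp/demo-snort/LOGS
# Link an entry in the document directory to the LOGS directory so the
# SnortSnarf output is browsable.
ln -s /tmp/demo-snort/LOGS /tmp/demo-htdocs/snorticus
readlink /tmp/demo-htdocs/snorticus
```

Make sure the web daemon is configured to follow symbolic links for this to work.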



Running retrieve_wrapup.sh Manually:

If there are network problems and, as a result, the analyst box misses retrieving data, you can retrieve it manually. The parameters for the script are as follows:


retrieve_wrapup.sh <site_name> <hostname | ip> [yyyymmdd.hh]


The first parameter is the site name. This is the directory name in the logging directory that contains the hourly wrapup files to be retrieved. (Again, make sure your directory structures are the same on both the sensor and analyst boxes.)


The second parameter is the hostname or ip address of the sensor where the data is to be pulled down from.


The third parameter is used to specify the year, month, day and hour of the data you want to have retrieved and processed. Replace 'yyyy', 'mm', 'dd' and 'hh' with the year (4 digits), month (2 digits), day of the month (2 digits) and hour (2 digits) you want to retrieve. Make sure to include the period ('.') between the day digits and hour digits.


As you can see, the final parameter is optional and is only needed when you need to retrieve and process a specific hour's worth of data. If the date/time parameter is not specified the script will automatically retrieve the previous hour's data relative to the sensor, NOT the analyst box. This allows support of sensors across multiple time zones (and is the reason gnu's 'date' command is needed).
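For example, to backfill the 09:00 hour of 14 Nov 2000 for the site and sensor names used in the cron example above (both hypothetical), the yyyymmdd.hh stamp can also be generated with gnu 'date':

```shell
# Hypothetical backfill invocation (commented out; site and host are examples):
#   /home/snort/scripts/retrieve_wrapup.sh MySite1 mysensor.blah.blah 20001114.09
# The same yyyymmdd.hh stamp produced with GNU date:
date -d '2000-11-14 09:00' '+%Y%m%d.%H'
```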