Saturday, May 06, 2006

How to take back your inbox

 

Take Back Your Inbox


Not too long ago, email was a wonderful thing. It provided a fast and easy method to communicate with family, friends, and co-workers, regardless of timezone or location. Unfortunately, due to spam and viruses, many people now find email almost unusable. In this month's Tech Support, let's take back that inbox.

Spam

The first item on the agenda is eliminating spam. Spam, or unsolicited commercial email (UCE), is not only a nuisance, it's a productivity killer.

SpamAssassin (SA), which is distributed under the same license as Perl, helps put an end to this problem. Using its rule base, SA performs a wide range of heuristic tests on email headers and body text to identify and score spam. SA can also use blacklists and optional modules such as Razor, Pyzor, and a built-in Bayesian filter that learns new spam characteristics.

One of SpamAssassin's greatest assets is its flexibility. You can install SA in a wide variety of configurations, from a local install in your home directory (on a machine where you do not have root access), to a system-wide install that affects all users. You can also configure SA to allow each individual user to set their own rules, thresholds, and settings. And, because SpamAssassin tags messages by adding additional headers, it allows you to control what happens to each message.

To install SA, do the following as root:

# perl -MCPAN -e shell
cpan> install Mail::SpamAssassin

Alternatively, if you don't have root access, you can download the source from http://www.spamassassin.org and do the following after unpacking the tarball:

% cd Mail-SpamAssassin-*
% perl Makefile.PL PREFIX=~/sausr SYSCONFDIR=~/saetc
% make
% make install

After you install SA, look at the configuration file called local.cf. This file allows you to whitelist certain addresses, tweak rules, add custom rules, enable/disable specific tests, and change a variety of other options.
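As a sketch of what local.cf can contain, the snippet below shows a few common directives (the addresses and scores are placeholders, and option names may differ slightly between SA versions):

```
# Mark mail as spam at this score (5.0 is the usual default)
required_score 5.0

# Never mark mail from these senders as spam
whitelist_from boss@example.com *@family-domain.example

# Tag the subject line of detected spam
rewrite_header Subject *****SPAM*****

# Adjust the score of an individual test
score DEAR_FRIEND 3.5
```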

You can also choose how you'd like to integrate SA into your MTA (if site-wide), or how you'd like to process your mail with SA (local install). SA works well with sendmail, qmail, Postfix, Exim, and most others. It can even be called via procmail, milter, AMaViS, MIMEDefang, or QMAILQUEUE.

If you installed SA in your home directory, you can put the following two rules in your procmailrc file to run SA on your mail and sort spam into a folder named caughtspam:

:0fw: spamassassin.lock
| /home/user/sausr/bin/spamassassin
:0:
* ^X-Spam-Status: Yes
caughtspam

While running SpamAssassin as above is fine for small setups, most large or system-wide configurations should consider running spamd/spamc, which improves performance by avoiding the overhead of starting Perl for each message.
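With spamd running, the procmail recipe above changes only in the program it pipes through; a minimal sketch, assuming spamc was installed to /usr/bin:

```
:0fw: spamassassin.lock
| /usr/bin/spamc
```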

Viruses

You may be thinking, "I use Linux, why do I need a virus scanner?" While it's true that not many viruses have targeted Linux, as Linux's popularity grows, it's likely that the number of viruses will increase. Beyond that, many people who run a Linux machine may have a mail server set up for a few friends and family. Some of those users likely run an operating system that is more prone to viruses. By scanning for viruses, you're not only doing them a favor, but also helping stop the spread of viruses. After all, if everyone had an up-to-date virus scanner, the outbreaks that we've come to accept would be much less common.

Luckily, there is a free GPLed virus scanner called ClamAV (available from http://clamav.sourceforge.net/) that keeps updated definitions. Like SpamAssassin, ClamAV can be run in both system-wide and local configurations, and allows easy integration with many MTAs. It can also be called via procmail, milter, AMaViS, MIMEDefang, or QMAILQUEUE, and allows you to either reject or quarantine infected messages.

As ClamAV integration can be quite specific to your environment, detailed installation and configuration instructions are beyond the scope of this article, but the install is the standard ./configure && make && make install. After installation, become acquainted with the configuration file clamav.conf, and choose between using clamscan or clamd/clamdscan. ClamAV also comes with freshclam, which can be run as a daemon or via cron to keep virus definitions up-to-date.
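For example, a nightly definition update could be scheduled from cron like this (the freshclam path is an assumption; check where your install put it):

```
# Update ClamAV virus definitions every night at 3 a.m.
0 3 * * * /usr/local/bin/freshclam --quiet
```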

Fast Compilation

 

Faster and Faster Compilation


Perl and Python may be popular scripting languages, but a great deal of software, including the Linux kernel and Samba among many others, is still written in C and C++. Accordingly, a wide variety of tools are available to boost C/C++ programmer productivity. This month, let's explore ccache and distcc, two C/C++ tools that take different approaches to saving time. Both tools were written by members of the Samba team and are licensed under the GNU General Public License.

Written by Andrew Tridgell and available from http://ccache.samba.org, ccache is a compiler cache. It acts as a caching pre-processor to C/C++ compilers, using the -E compiler switch and a hash to detect when a compilation can be satisfied from cache. Incorporating ccache into your builds should result in a five- to ten-fold increase in speed. You'll gain the most from ccache if you're continually having to rebuild the same source tree (via make clean && make) or if you perform a lot of RPM rebuilds. ccache produces exactly the same output as the real compiler, including the same object files and the same compiler warnings. The only difference is that ccache is faster.

Installing ccache is the typical ./configure && make && make install. Once installed, there are two ways to use ccache. First, you can prefix your compile commands with ccache. For example, change the CC=gcc line in your Makefile to CC=ccache gcc. Use this method if you'd like to test ccache or if you only plan to use it for some projects.
Alternatively, you can create symbolic links to ccache from the names of your compilers, which lets you use ccache without any changes to your build system. Make sure that the symlinks appear in your PATH before the actual compilers.
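The symlink approach can be sketched like this (the ccache path and the directory used to shadow the compilers are assumptions; adjust to your system):

```shell
# Assumes ccache is installed at /usr/bin/ccache; adjust to taste.
CCACHE_BIN=/usr/bin/ccache
mkdir -p "$HOME/bin"
for compiler in gcc g++ cc c++; do
    # Each symlink makes "gcc" (etc.) resolve to ccache, which then
    # invokes the real compiler found later in PATH.
    ln -sf "$CCACHE_BIN" "$HOME/bin/$compiler"
done
# Prepend the directory so the symlinks shadow the real compilers.
export PATH="$HOME/bin:$PATH"
```

With this in place, any Makefile that calls gcc picks up ccache transparently.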

While ccache uses caching to speed up compilation, distcc achieves its speed increase by distributing builds across several machines on a network. Like ccache, distcc always generates the same results as a local build. Written by Martin Pool, distcc is available from http://distcc.samba.org. distcc works by sending each job's preprocessed source code across the network; it is essentially a frontend for gcc that takes advantage of make's -j parallel build feature. Compilation is driven by a client machine, which runs distcc, make, the preprocessor, the linker, and other stages of the build process. The compile jobs are then distributed to any number of machines running the distccd daemon. One nice thing about distcc is that it scales nearly linearly, at least for a small number of machines, so you do not need a lot of hardware to see a benefit.

Installation of distcc is also the normal ./configure && make && make install. Install distcc on every machine that you want to distribute compilation jobs to. After installation, run distccd on each machine as follows:

$ distccd --daemon --allow 192.168.1.0/24

Replace 192.168.1.0/24 with the IP address range and CIDR mask of the machines that should be allowed to connect. You're now ready to distribute compiles. First, add the names of the machines you'd like to harness to the DISTCC_HOSTS environment variable:

$ export DISTCC_HOSTS="localhost dev1 dev2 dev3"

Always put the machines in order from fastest to slowest. If you're using a large number of machines, you can omit "localhost" from the list, allowing the client to focus on preprocessing. You can now build over the distributed system using the following command:

$ make -j8 CC=distcc

Why 8? As a rule, double the number of CPUs in the build system and use that number for -j. You may be thinking that it'd be great if these tools worked together, allowing you to cache what can be cached and distribute the rest. You'll be happy to find out that the tools are completely compatible. Even better, getting them to work together is extremely easy. To do so, simply set the CCACHE_PREFIX environment variable to distcc, as in export CCACHE_PREFIX="distcc".
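Putting the two together looks like this (dev1 through dev3 are placeholder build hosts; substitute your own machines):

```shell
# Cache what can be cached, distribute the rest: route every
# ccache cache miss through distcc.
export CCACHE_PREFIX="distcc"
export DISTCC_HOSTS="localhost dev1 dev2 dev3"

# Then build as usual, e.g.:
#   make -j8 CC="ccache gcc"
```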

Using ccache and distcc, either separately or together, can save you a large amount of time during tedious rebuilds. Hopefully, it's enough time for a latte.

First task of Linux system administrator

Linux System Administration: First Tasks

Linux system administration has a place of its own in the hierarchy of information technology specializations. Some people excel in special areas of free software technology but haven't needed to learn system administration. For example, you may specialize in configuring e-mail or writing applications using Apache and MySQL. You may focus only on the Domain Name Service (DNS) and know esoteric ways of setting up servers on provider lines that frequently change IP addresses. But if I asked you to babysit a busy server or servers, you might not have the temperament or have learned the plethora of skills required to do so.

The above does not mean that good system administrators do not excel in areas such as configuring Apache, maintaining DNS zone files or writing Perl scripts. It simply means that if you want to work as a system administrator in the Linux world, you need to know how to do everything from installing a server to securing the filesystem from mischievous crackers on the Internet. In between, you need to prepare your system to recover from the myriad ways a server can fail.

Consider, for example, a case in which you find that one of the Web sites you manage has gone down; the server has locked up and nothing works. How do you recover in the fastest possible way? Such an event happened to me two weeks ago. One of my articles wound up on Slashdot.org, Digg.com, NewsForge and other sites at the same time. None of my colleagues had seen that much traffic on a Linux site before. Aside from the several million hits on our server, we had a quarter of a million unique visitors concentrated in a five-hour period.

When you see that kind of traffic, you don't want the server to go down or you'll miss new readers. In our situation, a reboot allowed the system to return to service for a few minutes, but then it locked up again. Normally, we used less than ten percent of our system resources, so we thought we had prepared for the hottest day of the year.

Knowing the server and all the running processes, we could shut some down and focus on allowing a massive increase in simultaneous connections to our database. Although we have several thousand subscribers, we turned off processes such as those that restricted comments to registered readers. In the end, we made it through the day with only a short period of down time. But the surge of traffic rocked our boats.

Service outages such as the one described above can happen in the confines of a private network. Many services experience peak usage at specific times. For example, administrators know that one of the heaviest loads they'll have during the day occurs first thing in the morning, when people check their e-mail. People arrive at work about the same time, crank up their e-mail clients and read mail while drinking coffee.

The mail server might experience 75% of its use between 8 and 10 AM. Gateway traffic also increases and bandwidth on the network bogs down. Should you provide separate dedicated servers for mail, routing, proxy and gateway services? The majority of IT shops do that.

What if those computers averaged only 10% of CPU and memory capacity during the course of the day, but required 75% of resources for only a couple of hours a day, five days a week? Rather than buying individual computers, vendors have started recommending higher capacity machines and creating virtual servers.

You might want to configure slightly larger hardware to provide virtual machines for e-mail and related applications. Then, using Xen for example, you could let each application run in its own space. In that case, you might find server capacity utilization running around 50%, which helps maximize your resources and reduces server sprawl.

A system administrator should know how to climb a learning curve quickly. If a new technology arrives, such as virtualization, you need to master it before it masters you. You also need to know how to apply it in your environment.

What kinds of tasks occupy a system administrator's day? That depends on the environment in which he or she works. You may find yourself managing dozens or even hundreds of Web servers. In contrast, you might find yourself running a local area network that supports knowledge workers and/or developers.

Regardless of your environment, you will find that some tasks are common to all system administration functions. For example, monitoring system services and starting and stopping them takes on a role of its own. Your Linux box might appear to be running smoothly while one or more processes have stopped. A Linux server might seem happy on the outside, for example, while the database serving Web pages has failed.

When services to users become critical needs, you need to be prepared and stay ahead of problems. Imagine a failed print job locking up a queue, keeping users from getting their documents printed. Do you wait to act until you hear from irate users, or do you have a way to stay ahead of the problem?

Most system administrators have to face the fact that something will happen at some point that causes down time. Such events usually occur outside of our control. Perhaps your system incurs a power outage or spike. Sometimes a system bug pops up due to a combination of factors that exist only on your server; it's something that never occurred during project testing. In reality, sysadmins never know when a problem will occur; they only know that eventually one will arise.

Administrators need to monitor their systems in an efficient and effective manner. To this end, many administrators have discovered a plethora of monitoring and alert tools within the Free Software community. Some require you to log into a remote system by SSH and run command-line tools such as pstree, lsof, dstat and chkconfig.

Another useful monitoring tool is Checkservice, which provides the status of services on (remote) hosts. It provides results by way of logs, a PHP status page or output to other tools. Some administrators like tiger, which performs a thorough check of a system and reports the results to a log file. A list and explanation of monitoring tools for Debian is available online.

When you have to monitor a larger server farm and do not want to spend all your time logging into remote servers and running command-line tests, look for free software tools you can use with a browser. I like a tool called monit. This monitoring and alert system works on a number of Linux-type systems. Monit provides a system administrator with the ability to define, manage and monitor processes, the filesystem and even devices. You also can configure monit to restart processes if they fail.
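As a rough sketch of what a monit control file looks like (the service name, pidfile path and init scripts below are illustrative; adjust them to your system):

```
set daemon 120                      # poll services every two minutes

check process apache with pidfile /var/run/apache2.pid
    start program = "/etc/init.d/apache2 start"
    stop program  = "/etc/init.d/apache2 stop"
    if failed port 80 protocol http then restart
```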

Stanford University keeps an updated list of network monitoring tools and sponsors a working group called the Internet End-to-End Performance Monitoring Group. Be sure to check out the latest tools at the top of the Stanford list. Cacti, for example, has become one of the more popular tools among system administrators.

Professional Linux system administration requires you to know a broad number of tasks associated with networking and providing services to users. It takes a special breed of person to work in this capacity. Obviously, many people have both the character and the interest to do the job. Over the next few months, we will explore the tasks that make up Linux system administration. I hope you'll join me for the ride.

Thursday, May 04, 2006

Monitor Hard Drive usage

Monitoring hard drive usage automatically

If you maintain a lot of servers with multiple hard drives, you need to know how to manage and watch those drives. You want to know that a drive is getting full and needs cleaning before it's too late and your users can't complete their work because they're out of disk space. Nothing is worse than frantically trying to reclaim disk space after you have run out of it. Hopefully, this guide will aid you in a time-saving manner.

First of all: check the usage of your hard-drive(s)!


$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda3             4.4G  3.4G  764M  82% /
/dev/hda1              14G  4.5G  9.3G  33% /mnt/win32


As you can see, my hda3 is at 82%. With the help of scripts, you can have this check run at given times and get an e-mail notification if the percentage of used disk space reaches a certain threshold.

First, we will make a basic bash script that reports any partitions that are over 80% full.


#!/bin/bash
df | egrep "(100%|[89][0-9]%)"


The egrep statement will match any usage between 80% and 100%.
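If you'd rather not hand-edit the regular expression every time the threshold changes, a small awk-based variant can take the limit as a parameter (this is a sketch; the function name is my own):

```shell
#!/bin/bash
# check_disk LIMIT - print filesystems whose Use% is at or above LIMIT.
check_disk() {
    # Skip df's header line, strip the % sign, compare numerically.
    awk -v limit="$1" 'NR > 1 { p = $5; gsub(/%/, "", p); if (p + 0 >= limit) print $1, $5 }'
}

df -P | check_disk 80
```

The -P flag keeps df's output to one line per filesystem, so the column positions stay predictable.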

Now let's do this as a timed event:
Type crontab -e to edit your crontab file, where you can add all your cron jobs. (For more info, type man 5 crontab.)

Now let's make a basic cron job that will run every day at 10 p.m.


0 22 * * * df | egrep "(100%|[89][0-9]%)"


The first number, 0, indicates the minute; the second, 22, is the hour at which your job runs. The next three asterisks are the day of month, the month, and the day of week, respectively.
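The five fields line up like this (a sketch of the standard crontab layout):

```
# +---------- minute (0-59)
# | +-------- hour (0-23)
# | | +------ day of month (1-31)
# | | | +---- month (1-12)
# | | | | +-- day of week (0-7; both 0 and 7 mean Sunday)
# | | | | |
  0 22 * * *  command-to-run
```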

Finally, to have this cron job email you if any of your partitions are filled 80% or more, just add the mail command, like so:


0 22 * * * df | egrep "(100%|[89][0-9]%)" | mail -s "Warning..." you@emailaddr.com


The -s flag sets the subject; replace you@emailaddr.com with your email address.

HOWTO install NVIDIA drivers on Linux

HOWTO install NVIDIA drivers on FC3

I had one heck of a time installing the nVidia drivers on my system, so I thought I would post the steps that worked for me, so others could be spared the pain. I got these instructions from several sources. For the record, these instructions worked on a GeForce4 MX440 and a GeForce FX5200. They were performed on a clean install of FC3, and all actions were performed as root. The latest nVidia driver (NVIDIA-Linux-x86-1.0-6629-pkg1.run) was used. I have an AMD Sempron 2600. These instructions will not work unless you have the development packages installed. It has been reported that these instructions do not work on TNT2 cards; I have heard there is a workaround, but I'm not sure what it is.
Do not type the quotes around the command line commands. These are only there to separate what is to be typed from the rest of the instructions.

READ AND PRINT OUT THIS PAGE BEFORE STARTING

1) Download the latest nVidia drivers to a directory of your choice.

2) Edit /etc/inittab using a text editor. Change the line that reads "id:5:initdefault:" to read "id:3:initdefault:". Certain tutorials will instruct you to use the telinit 3, init 3, or even runinit 3 command. These commands didn't work for me, though they may work for other people.

3) Reboot your system. You will end up at a command prompt. DO NOT PANIC. This is what it is supposed to do. Use the "cd" command to move to the directory you downloaded the driver to. For example, if you downloaded the driver to the /root/ folder, run the command "cd /root/".

4) Now type "sh NVIDIA-Linux-x86-1.0-6629-pkg1.run".

5) Accept the license agreement. The installer will probably say that it could not find the kernel source. You can allow the installation program to look for the source online, but it won't work unless the servers are updated. It will then say that it needs to compile its own kernel module. Please allow it to do so. This is the part of the installation where the development tools are required.

6) Now run the command "cd /etc/X11".

7) Next, type "vim xorg.conf". This opens the text editor. Use the arrow keys to move the cursor. Press the Insert key; at the bottom, either "INSERT" or "REPLACE" will be shown. In INSERT mode, the text you type is inserted before the selected character; in REPLACE mode, what you type replaces the selected character(s). Look for a line that reads "Driver "nv"" or "Driver "vesa"". Change it to read "Driver "nvidia"". Now scroll up until you get to "Section "Module"". Make sure that one of the lines between "Section "Module"" and "EndSection" says "Load "glx"". If the lines "Load "dri"" or "Load "GLcore"" are present, comment them out by placing a # at the beginning of the line.

8) To exit and save changes, push esc. Now type ":wq" and hit enter. If you mess up you can quit without saving by hitting esc and then typing ":q!".

9) Now that you are back at the command prompt, type "rpm -e --nodeps xorg-x11-Mesa-libGL".
NOTE: If you update the "xorg-x11" package with up2date or yum, you will have to run the above command again.

10) Now run "modprobe nvidia".

11) Now run "cp -a /dev/nvidia* /etc/udev/devices".
Allow it to overwrite what is there.

12)Now run "chown root.root /etc/udev/devices/nvidia*"

13) Almost there, all that's left is to edit /etc/inittab back to what it was. To do this, type "cd /etc/". Now type "vim inittab" . Change the "3" back to a "5".

14)Reboot by hitting ctl-alt-delete

15)You're done!!!

I hope this helps a lot of people; feedback and more information are certainly welcome.
A special thanks to perfect_circle for helping me with this. Also, parts of this tutorial were taken from http://www.fedoraforum.org/forum/showthread.php?t=26260

How to configure sound card on linux

Basic sound card hardware debugging

This article is meant to explain how to troubleshoot hardware on Linux, more or less based on how to troubleshoot a sound card.
You should know how to open a console, how to run commands as root, and not be afraid to compile the kernel. Compiling the kernel is outside the scope of this document.


1. What is the hardware?
Before attempting to troubleshoot your hardware, you must know what it is. Is it USB? PCI? These details are very important to know!
A lot of information can be gained by typing /sbin/lspci and /sbin/lsusb. Make sure your hardware is firmly plugged in and installed.


2. Is it working out of the box?
Does the hardware work out of the box? Have you tried rebooting? Some distros autoconfigure themselves to work with the devices they detect at boot time. Plugging in a device doesn't automatically mean it'll work immediately, but sometimes it does.

3. Still doesn't work? Let's troubleshoot.
Assuming you have rebooted and the card is properly installed, let's go on troubleshooting.

First step: syslog!
A lot of errors are caught and reported to syslog. Depending on your distribution, your log may (or may not) be located at /var/log/messages, /var/log/dmesg, /var/log/syslog, and so on; please consult whatever reference manual is available with your distribution.

The commands you should run as root are cat /var/log/messages | less or perhaps cat /var/log/messages | tail.
less shows the log in a scrollable way, and tail prints the last few messages.

These error messages are important, both to you and to us. If this article is not sufficient and you can't find the solution by yourself, we at LQ need to know the error messages and the specs of your hardware in order to help you optimally.

If you see an IRQ error, you will need to play with the IRQ settings in your BIOS. To enter the BIOS, press F1, F2, ESC, or a similar key at the first splash screen of your system. This can be tricky for inexperienced users; if you are comfortable, go right ahead and modify the settings. How to fix IRQ problems is outside the scope of this document.

If nothing obvious pops out at you while reading the syslog or dmesg messages, let's go on to the next steps.


4. Is it a classic error? Permission, module loaded, card in use, muted, external amplifier?
The most classic error is the permission denied error, IMHO.
To quickly test whether root can access the sound card, test the card as root (to test a card, type cat /dev/urandom > /dev/dsp and press Ctrl+C to stop). If it works, congratulations! You are closer to making your sound card work. If you can also test it as a normal user, you should have no problem making the sound card work.

type these commands:

ls -l /dev/dsp
ls -l /dev/audio
ls -l /dev/mixer

The owner would naturally be root, and the group (on some distros) would be "audio".
In that case, using any available GUI tool, add your user to the audio group, or, using the console, type gpasswd -a user audio. You will need to log in again for the group change to take effect.

If the permissions read rwx------, or don't permit "group" or "others" to read and write, then we have a permission issue.
As root, type chmod 660 /dev/dsp or chmod 666 /dev/dsp. Do that for /dev/audio and /dev/mixer too.
You can change the user and group by typing chown root:audio /dev/dsp. Substitute the name of your device for /dev/dsp.


If /dev/dsp doesn't exist, it's probably because your modules aren't loaded properly. Assuming the drivers are modules, and not built into the kernel, you can check whether they are loaded by typing /sbin/lsmod. If you aren't sure which modules your sound card needs, search for it at http://www.google.com.

If they aren't loaded, load them using modprobe or insmod... like this, as root :
modprobe atiixp

The module name will vary.

If it is loaded... we move on to the other classical error.


If the card is in use... it's in use! You can find out what is using it by typing /sbin/fuser /dev/dsp.
You can also restart the alsa service using the commands provided to you by your distro.
(gentoo: /etc/init.d/alsasound restart)
(Mandrake: service alsa restart)


You may type alsamixer to look at and modify the mixer's settings. Use the left and right arrow keys to move from one bar to the other, and the up and down keys to raise and lower the volume. Use M to mute and unmute. Make sure the master volume is high enough to hear and not muted (muted channels show "MM" below their bar). Make sure that PCM is also high enough and not muted. If you have a channel called external amplifier, unmute it.

5. Testing
To test a card, type cat /dev/urandom > /dev/dsp, type CTRL+C to stop the action.

How to SSL-encrypt syslog

SSL Encrypting Syslog with Stunnel


Abstract


In this paper, I describe how to encrypt syslog messages on the network. Encryption is vital to keep the confidential content of syslog messages secure. I describe the overall approach and provide a HOWTO, using rsyslogd and stunnel.


Background


Syslog is a clear-text protocol. That means anyone with a sniffer can have a peek at your data. In some environments, this is no problem at all. In others, it is a huge setback, probably even preventing deployment of syslog solutions. Thankfully, there is an easy way to encrypt syslog communication. I will describe one approach in this paper.

The most straightforward solution would be for the syslogd itself to encrypt messages. Unfortunately, encryption is only standardized in RFC 3195, and there is currently no syslogd that implements RFC 3195's encryption features, so this route leads to nothing. Another approach would be to use vendor- or project-specific syslog extensions. There are a few around, but the problem here is that they have compatibility issues. However, there is one surprisingly easy and interoperable solution: though not standardized, many vendors and projects implement plain tcp syslog. In a nutshell, plain tcp syslog is a mode where standard syslog messages are transmitted via tcp and records are separated by newline characters. This mode is supported by all major syslogds (both on Linux/Unix and Windows) as well as log sources (for example, EventReporter for Windows Event Log forwarding). Plain tcp syslog offers reliability, but it does not offer encryption in itself. However, since it operates on a tcp stream, it is now easy to add encryption. There are various ways to do that. In this paper, I will describe how it is done with stunnel (another alternative would be IPsec, for example).

Stunnel is open source and it is available both for Unix/Linux and Windows. It provides a way to use ssl communication for any non-ssl aware client and server - in this case, our syslogd.

Stunnel works much like a wrapper. Both on the client and on the server machine, tunnel portals are created. The non-ssl aware client and server software is configured not to talk directly to the remote partner, but to the local (s)tunnel portal instead. Stunnel, in turn, takes the data received from the client, encrypts it via ssl, and sends it to the remote tunnel portal, which passes it on to the recipient process on the remote machine. The transfer to the portals is done via unencrypted communication. As such, it is vital that the portal and the respective program that is talking to it are on the same machine; otherwise, data would travel partly unencrypted. Tunneling, as done by stunnel, requires connection-oriented communication. This is why you need to use tcp-based syslog. As a side-note, you can also encrypt a plain-text RFC 3195 session via stunnel, though this definitely is not what the protocol designers had in mind ;)

In the rest of this document, I assume that you use rsyslog on both the client and the server. For the samples, I use Debian. Interestingly, there are some annoying differences between stunnel implementations. For example, on Debian a comment line starts with a semicolon (';'). On Red Hat, it starts with a hash sign ('#'). So you need to watch out for subtle issues when setting up your system.


Overall System Setup


In this paper, I assume two machines, one named client and the other named server. It is obvious that, in practice, you will probably have multiple clients but only one server. Syslog traffic shall be transmitted via stunnel over the network. Port 60514 is to be used for that purpose. The machines are set up as follows:

Client

  • rsyslog forwards messages to the stunnel local portal at port 61514
  • local stunnel forwards data via the network to port 60514 to its remote peer


Server

  • stunnel listens on port 60514 to connections from its client peers
  • all connections are forwarded to the locally-running rsyslog listening at port 61514



Setting up the system

For Debian, you need the stunnel4 package. The stunnel package is the older 3.x release, which will not support the configuration I describe below. Other distributions might have other names. For example, on Red Hat it is just stunnel. Make sure that you install the appropriate package on both the client and the server. It is also a good idea to check if there are updates for either stunnel or openssl (which stunnel uses) - there are often security fixes available and often the latest fixes are not included in the default package.

In my sample setup, I use only the bare minimum of options. For example, I do not make the server check client certificates. Also, I do not talk much about certificates at all. If you intend to really secure your system, you should probably learn about certificates and how to manage and deploy them. This is beyond the scope of this paper. For additional information, http://www.stunnel.org/faq/certs.html is a good starting point.

You also need to install rsyslogd on both machines. Do this before starting with the configuration. You should also familiarize yourself with its configuration file syntax, so that you know which actions you can trigger with it. Rsyslogd can work as a drop-in replacement for stock sysklogd. So if you know the standard syslog.conf syntax, you do not need to learn any more to follow this paper.
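For instance, on the client, a single line in rsyslog.conf can forward all messages to the local stunnel portal (the @@ prefix selects plain tcp forwarding; a single @ would mean udp):

```
*.*   @@127.0.0.1:61514
```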

Server Setup

At the server, you need a digital certificate. That certificate enables SSL operation, as it provides the cryptographic keys used to secure the connection. Many versions of stunnel ship with a default certificate, often found in /etc/stunnel/stunnel.pem. If you have it, it is good for testing only. If you use it in production, it is very easy to break into your secure channel, as everybody can get hold of your private key. I didn't find an stunnel.pem on my Debian machine; I guess the Debian folks removed it because of its insecurity.

You can create your own certificate with a simple openssl command. You must do so if you have none, and I highly recommend creating your own in any case. To create it, cd to /etc/stunnel and type:

Quote:
openssl req -new -x509 -days 3650 -nodes -out stunnel.pem -keyout stunnel.pem


That command will ask you a number of questions. Provide some answer for them. If you are unsure, read http://www.stunnel.org/faq/certs.html. After the command has finished, you should have a usable stunnel.pem in your working directory.
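If you prefer to skip the interactive questions, you can answer them up front with -subj and then sanity-check the result. This is a sketch run in a scratch directory; on the real server you would work in /etc/stunnel instead, and the subject fields below are placeholders for your own site's values:

```shell
# Sketch: generate the certificate non-interactively (-subj answers the
# prompts up front) and inspect the result. Run here in a scratch
# directory; on the real server you would work in /etc/stunnel instead.
cd "$(mktemp -d)"
openssl req -new -x509 -days 3650 -nodes \
    -out stunnel.pem -keyout stunnel.pem \
    -subj "/C=US/O=Example/CN=logserver.example.com"

# The file should contain both the private key and the certificate:
grep -c "BEGIN" stunnel.pem        # 2 PEM blocks: PRIVATE KEY, CERTIFICATE

# Show the subject and validity window of the new certificate:
openssl x509 -in stunnel.pem -noout -subject -dates
```

The -nodes switch leaves the private key unencrypted, which is what stunnel needs to start without a passphrase prompt.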

Next, create a configuration file for stunnel. It tells stunnel what to do. You can use the following basic file:

Quote:

; Certificate/key is needed in server mode
cert = /etc/stunnel/stunnel.pem

; Some debugging stuff useful for troubleshooting
debug = 7

foreground=yes

[syslog]
accept = 60514
connect = 61514


Save this file to e.g. /etc/stunnel/syslog-server.conf. Please note that the debug = 7 and foreground = yes settings are for debugging only. They run stunnel in the foreground with a lot of debug output. This is very valuable while you set up the system, and quite useless once everything works well. So be sure to remove these lines before going to production.

Finally, you need to start the stunnel daemon. Under Debian, this is done via stunnel4 /etc/stunnel/syslog-server.conf. If you have enabled the debug settings, you will immediately see a lot of nice messages.

Now you have stunnel running, but it is obviously unable to talk to rsyslogd, because that is not yet running. If you have not already done so, configure rsyslogd to do everything you want. If in doubt, you can simply copy /etc/syslog.conf to /etc/rsyslog.conf and you probably have what you want. The really important thing in the rsyslogd configuration is that you must make it listen on tcp port 61514 (remember: this is where stunnel sends the messages to). Thankfully, this is easy to achieve: just add -t 61514 to the rsyslogd startup options in your system startup script. After doing so, start (or restart) rsyslogd.
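Where exactly the startup options live varies by distribution. On Debian-style systems they are commonly set as a shell variable in a defaults file; the path and variable name below are assumptions, so check your init script to see where it actually picks up its options. A sketch:

```shell
# Sketch: make rsyslogd listen on tcp port 61514 at startup.
# On Debian-style systems this line often goes into a defaults file
# such as /etc/default/rsyslog; the file name and variable name are
# assumptions - check your init script for the real location.
RSYSLOGD_OPTIONS="-t 61514"
```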

The server should now be fully operational.

Client Setup

The client setup is simpler. Most importantly, you do not need a certificate (of course, you can use one if you would like to authenticate the client, but this is beyond the scope of this paper). So the basic thing you need to do is create the stunnel configuration file.

Quote:

; Some debugging stuff useful for troubleshooting
debug = 7
foreground=yes


client=yes

[ssyslog]
accept = 127.0.0.1:61514
connect = 192.0.2.1:60514


Again, the debug = 7 and foreground = yes lines are for debugging purposes only. I suggest you leave them in during your initial testing and remove them afterwards. The most important difference from the server configuration outlined above is the client=yes directive: it is what makes this stunnel instance behave as a client. The accept directive binds stunnel to the local host only, so that it is protected from receiving messages from the network (somebody might otherwise spoof the local sender). The address 192.0.2.1 is the address of the server machine; you must change it to match your configuration. Save this file as /etc/stunnel/syslog-client.conf.

Then, start stunnel via stunnel4 /etc/stunnel/syslog-client.conf. Now you should see some startup messages. If no errors appear, you have a running client stunnel instance.

Finally, you need to tell rsyslogd to send data to the remote host. In stock syslogd, you do this via the @host forwarding directive. The same works with rsyslog, which also supports an extension for tcp delivery. Add the following line to your /etc/rsyslog.conf:

Quote:
*.* @@127.0.0.1:61514


Please note the double at-sign (@@). This is not a typo: it tells rsyslog to use tcp instead of udp delivery. In this sample, all messages are forwarded to the remote host. Obviously, you may want to limit this via the usual rsyslog.conf settings (if in doubt, see man rsyslog.conf).
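If you only want certain messages to travel over the tunnel, the usual selector syntax applies. For example, to forward only mail and authentication messages (the selectors here are chosen purely for illustration):

```
# forward only mail and auth/authpriv messages over the tunnel;
# everything else stays in the local log files as before
mail.*          @@127.0.0.1:61514
auth,authpriv.* @@127.0.0.1:61514
```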

You do not need to add any special startup settings to rsyslog on the client. Start or restart rsyslog so that the new configuration takes effect.

Done

After following these steps, you should have a working secure syslog forwarding system. To verify, you can run logger test or a similar command on the client; the message should show up in the respective server log file. If you dig out your sniffer, you should see that the traffic on the wire is actually encrypted. With the debug configuration used above, the two stunnel endpoints are quite chatty, so you can follow the action going on in your system.

If you have only basic security needs, you can probably just remove the debug settings and take the rest of the configuration to production. If you are security-sensitive, you should have a look at the various stunnel settings that help you further secure the system.


Preventing Systems from talking directly to the rsyslog Server


It is possible that remote systems (or attackers) talk to the rsyslog server by directly connecting to its port 61514. Currently (July 2005), rsyslog does not offer the ability to bind to the local host only. This feature is planned, but as long as it is missing, rsyslogd must be protected via a firewall. This can easily be done via, e.g., iptables. Just be sure not to forget it.
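Until that bind option exists, a pair of firewall rules on the server can close the gap. A minimal sketch using the port from this paper; the rules are illustrative only and should be integrated into whatever firewall setup you already run:

```
# Sketch: allow rsyslogd's tcp port 61514 from the local host only,
# so remote systems cannot bypass stunnel. Adapt to your firewall.
iptables -A INPUT -p tcp --dport 61514 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 61514 -j DROP
```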


Conclusion


With minimal effort, you can set up a secure logging infrastructure employing ssl-encrypted syslog message transmission. As a side note, you also gain reliable tcp delivery, which is far less prone to message loss than udp.

tar and rpm

Install a package
rpm -ivh packagename
Upgrade a package
rpm -Uvh packagename

Create a tar file
tar -cvf myfiles.tar mydir/
(add z if you are dealing with or creating .tgz (.tar.gz) files)
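For example, creating a compressed archive and listing its contents (the directory and file names are just examples):

```shell
# Sketch: create a gzip-compressed tarball and list what it contains.
# 'mydir' and its contents are example data.
mkdir -p mydir && echo "hello" > mydir/a.txt
tar -cvzf myfiles.tar.gz mydir/
tar -tzf myfiles.tar.gz        # lists mydir/ and mydir/a.txt
```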

Standard install from source
tar -xvzf Apackage.tar.gz
cd Apackage
./configure
make
make install