Linux Monitoring System
Process Management:
The basic Linux monitoring commands such as pstree, ps -auxw and top will inform you of
the processes running on your system. Sometimes a process must be terminated. To terminate
a process: kill <process-id-number>
This will perform an orderly shutdown of the process. If it hangs, give a stronger signal with:
kill -9 <process-id-number>. This method is not as clean and thus less preferred.
A signal may be given to the process. The program must be written to handle the given
signal. See /usr/include/bits/signum.h for a full list. For example, to have a process re-read
its configuration file after updating it, issue the command kill -HUP <process-id-number>
In the previous example, the HUP signal was sent to the process. The software was written to
trap for the signal so that it could respond to it. If the software (command) is not written to
respond to a particular signal, then the sending of the signal to the process is futile.
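As a sketch, one can signal a running daemon by name (sshd is used here only as an
illustration; any daemon written to trap HUP would do):
    kill -HUP $(pidof sshd)          # signal all sshd processes to re-read their configuration
    # or look up the PID first and signal it explicitly:
    ps -C sshd -o pid,cmd
    kill -HUP <process-id-number>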
QPS:
Also see the GUI tool QPS. (Handles MOSIX clusters) This tool is outstanding for
monitoring processes, adjusting nice values (priorities), issuing signals to a process, and
viewing the files, memory, environment variables and sockets the process is using.
An RPM is available from this site. It is so simple to use that no instructions are necessary.
It can monitor a program to make sure it isn't doing something bad. It can also be used to
reverse engineer what applications are doing and the environments under which they run.
I love this tool!!
QPS home page:
o Downloads
o QPS: 1.9.8-1.9.14
o Download RPMs for Fedora 4, 5, SuSE, Mandriva
(SuSE version 9.3 ships with a broken QPS. Download a working version at the link above.)
Note: The RPM provided was compiled for RedHat 7.x. For RedHat 8.0+ one must
install the appropriate QT library RPMs to satisfy dependencies:
Description: GDB
Command Line: xterm -T "GDB %C" -e gdb -d /directory-where-source-code-is-
located --pid=%p
Description: gdb
Command Line: xterm -T "gdb %c (%p)" -e gdb /proc/%p/exe %p &
(As issued in RPM)
gdb man page
Description: strace
Command Line: xterm -T "strace %c (%p)" -e sh -c 'strace -f -p%p; sleep
10000'&
(show process system calls and signals. Try it with the process qps itself.)
Show output written by process:
xterm -T "strace %c (%p)" -e sh -c 'strace -f -q -e trace=write -p%p; sleep
10000'&
strace man page
Description: truss (Solaris command)
Command Line: xterm -T "truss %C (%p) -e sh -c 'truss -f -p %p; sleep 1000'&
Note that some processes may use Linux InterProcess Communication or IPC
(semaphores, shared memory or queues) which may need to be cleaned up manually:
1. Identify the IPC resources: ipcs
   ipcs -q : List message queues.
   ipcs -m : List shared memory segments.
   ipcs -s : List semaphores.
2. Remove the semaphores: ipcrm -s <ipcs id>
Example: if you are running Apache, ipcs may list semaphore sets owned by the apache (or httpd) user.
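A minimal sketch of cleaning up a stale semaphore set left behind by a crashed process
(the ID shown is hypothetical):
    ipcs -s                  # list semaphore sets with their IDs and owners
    ipcrm -s 123456          # remove the stale set by its ID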
List all open files on system: lsof
(Long list)
List all files opened by user: lsof -u user-id
The commands netstat -punta and socklist will list open network connections.
Use the command lsof -i TCP:port-number to see the processes attached to the
port.
Example:
[root@node DIR]# lsof -i TCP:389
COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME
slapd 5927 ldap 6u IPv4 7560023 TCP *:ldap (LISTEN)
slapd 5928 ldap 6u IPv4 7560023 TCP *:ldap (LISTEN)
slapd 21185 ldap 6u IPv4 7560023 TCP *:ldap (LISTEN)
slapd 21186 ldap 6u IPv4 7560023 TCP *:ldap (LISTEN)
slapd 21193 ldap 6u IPv4 7560023 TCP *:ldap (LISTEN)
This shows that the command slapd, running under user id ldap, has five processes
listening on port 389.
Memory Usage:
...
Major (requiring I/O) page faults: 24
Minor (reclaiming a frame) page faults: 11271
Voluntary context switches: 302
Involuntary context switches: 3689
...
Explanation of terms:
Major Page Fault (MPF): occurs when a requested page is not resident in physical memory,
so a request must be made to the disk subsystem to retrieve the page from virtual memory
(swap or the file system) and buffer it in RAM. MPFs occur most often when an application
is started.
Minor Page Fault (MnPF): occurs when a page already in memory can be reused (reclaiming a
frame) rather than being read back in from disk.
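The counters shown above are of the kind reported by GNU time's verbose mode. A quick way
to collect them for a single command (the command shown is only an example):
    /usr/bin/time -v ls -lR /usr > /dev/null     # page fault and context switch counts go to stderr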
Adding an extra hard drive: (See commands and dialog of adding a second IDE hard drive)
Where the drive is /dev/hdb or some other device conforming to the Linux device names:
IDE drives are referred to as hda for the first drive, hdb for the second, etc. IDE uses separate
ribbon cables for the primary and secondary IDE channels. The partitions on each drive are
referred to numerically. The first partition on the first drive is referred to as hda1, the second
as hda2, the third as hda3, etc.
Note: SCSI disks are labeled /dev/sda, sdb, etc... For more info see SCSI info.
Use the command cat /proc/partitions to see full list of disks and partitions that your system can
see.
To make the drive a permanent member of your system and have it mount upon system boot,
add it to your /etc/fstab file which holds all the file system information for your system. See man
page for fstab.
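A minimal /etc/fstab entry for the new partition might look like the following (the device name,
mount point and file system type are assumptions for illustration):
    /dev/hdb1    /data    ext3    defaults    1 2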
At this point one may optionally check the file system created with the command: fsck /dev/sdc1
Note that fsck is NOT run against a mounted file system. Unmount it first if necessary. (umount)
Also see the man page for:
This command should work for a Red Hat installation. Other distributions may require
the following set-up:
cd /mnt
mkdir cdrom
mount -t iso9660 -o ro /dev/cdrom /mnt/cdrom
o Unix floppy: See YoLinux Tutorial - Linux Recovery and Boot Disk Creation
Ramdisk: Using a portion of RAM memory to act like a superfast disk.
[Potential Pitfall]: I've never actually tried this. Use at your own risk!
This example refers to a swap file. One may also use a swap partition. Make an entry
in /etc/fstab to use the swap file or partition permanently.
Note: To stop using swap space, use the command swapoff. If using a swap partition,
swapoff deactivates it; swap partitions are not mounted in the usual sense.
Man pages:
o swapon/swapoff
o mkswap
o fstab
See:
o proc man page - process information pseudo-filesystem
o Local file, kernel 2.2 (RH 7.0 and earlier): /usr/src/linux/Documentation/proc.txt
Pertains to Red Hat systems using the EXT2 filesystem (RH 7.2+ uses EXT3)
After 20 reboots of the system, Linux will perform a file system check using fsck. This is
annoying for systems with many file systems because they will all be checked at once.
The individual file system's mount count may be changed so that they will be checked
on a different reboot.
Use tune2fs -l on each of the filesystems to obtain their mount counts. Next
change the mount counts for some of them.
umount /dev/sdb6
tune2fs -C 9 /dev/sdb6
mount /dev/sdb6
Now the filesystems will have an fsck performed on them on different system boots
rather than all at the same time.
For home users who routinely shut down and boot their systems, one can increase the
maximum mount count: tune2fs -c 40 /dev/device
This feature can also be disabled: tune2fs -c -1 /dev/device
Check based on time instead, e.g. every 7 days: tune2fs -i 7 /dev/device
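To inspect a filesystem's current and maximum mount counts (the device name follows the
example above):
    tune2fs -l /dev/sdb6 | grep -i 'mount count'    # shows "Mount count" and "Maximum mount count"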
Pertains to Red Hat 7.1 EXT2 filesystems and earlier which require an integrity check.
(RH 7.2+ uses EXT3 which is a journaled file system which maintains file system
integrity even with a crash.)
If the system crashes (due to a power outage, etc.) then upon boot the system will check
whether the disk was unmounted cleanly. If it was not, a full fsck is forced during boot.
Also see:
tune2fs Man Page
Linux today: EXT3 info
Other journaled file systems: SGI XFS, IBM JFS and reiserfs. For files larger than 2Gb
use SGI XFS and the SGI Linux Red Hat RPM or Red Hat ISO CD install image.
Raw Devices: Commercial databases such as Oracle and IBM DB2 can maximize performance
by using raw I/O. One may use the raw command for both IDE and SCSI devices. This will bind
a raw character device to a block device for an entire disk partition. To see if your system is
using raw I/O, issue the command: raw -a
raw man page
Configuration file: /etc/sysconfig/rawdevices
Add entries to this file to invoke raw I/O upon system boot.
Devices: /dev/raw/raw??
Raw device controller: /dev/rawctl
Sample use of command: raw /dev/raw/raw1 /dev/hdb5
One must be of group disk to use the raw device or change permissions:
o chmod a+r /dev/rawctl
o chmod a+r /dev/hdb5
o chmod a+rw /dev/raw/raw1
Note: The above information applies to Red Hat distributions. This info may be different for other
distributions, e.g. SuSE uses /dev/raw1 as a device and /dev/raw as the controller.
You can mimic Red Hat behavior with a symbolic link: ln -s /dev/your_raw_controller /dev/rawctl
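A sample /etc/sysconfig/rawdevices entry to establish the binding above at boot (format
assumed from the Red Hat convention of one raw-device/block-device pair per line):
    # raw device bindings: <raw device>  <block device>
    /dev/raw/raw1   /dev/hdb5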
Client - File: /etc/fstab
    ...
    server:/directory-to-export   /mnt/mount-point   nfs   rw,hard,intr  0 0
    server1:/shared/images        /mnt/srv1-images   nfs   rw,hard,intr  0 0
    ...
    Hard mount, read/write. The mount can be interrupted by the kill command.

Server - File: /etc/exports
    ...
    /shared/images   176.168.1.0/255.255.255.0(rw)
    ...
    List of directories to export and access restrictions. For more see the exports man page.
Options:
Option        Description
ro            Mounts of the exported file system are read-only.
rw            Mounts of the exported file system are read-write.
hard          The program accessing a file on an NFS mounted file system will hang when the
              server crashes.
intr          If an NFS file operation has a major time-out and it is hard mounted, then allow
              signals to interrupt the file operation and cause it to return.
async         If the exported file system is read/write and hosts are making changes to the file
              system when the server crashes, data can be lost.
sync          By specifying the sync option, all file writes are committed to the disk before the
              write request by the client is completed. The sync option, however, can lower
              performance.
wdelay        Causes the NFS server to delay writing to the disk if it suspects another write
              request is imminent. This can improve performance by reducing the number of
              times the disk must be accessed by separate write commands, reducing write
              overhead. The no_wdelay option turns off this feature, but is only available when
              using the sync option.
root_squash   Prevents root users connected remotely from having root privileges and assigns
              them the user ID of the user nfsnobody. This effectively "squashes" the power of
              the remote root user to the lowest local user, preventing unauthorized alteration
              of files on the remote server. Alternatively, the no_root_squash option turns off
              root squashing.
Pitfalls:
o Server must run services: portmap, nfslock, netfs, nfs
o Restart server service to pick up file changes: service nfs restart
(or: /etc/init.d/nfs restart)
o Iptables may block port. Clear iptables rules with iptables -F to test. Keep ports
111 and 2049 clear.
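A quick sketch of testing the export from the client side by hand (the server name and paths
follow the examples above):
    mkdir -p /mnt/srv1-images
    mount -t nfs -o rw,hard,intr server1:/shared/images /mnt/srv1-images
    df -h /mnt/srv1-images        # verify the mount
    umount /mnt/srv1-images
On the server, re-export after editing /etc/exports with: exportfs -ra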
NIS (Network Information Systems) is often used in NFS clusters to manage authentication. See
the YoLinux.com NIS tutorial.
User Info:
Commands:
User Greetings:
The three most common methods of defining a Linux user and authenticating their logins are:
Default directory configuration and files for a new user are copied from the
directory /etc/skel/. The default shell is bash (the Bourne Again SHell), which combines
features of the UNIX ksh and csh command shells. The user's personal bash shell
customizations are held in $HOME/.bashrc.
GUI Method:
o system-config-users: GUI admin tool for managing users and groups. (Fedora
Core 2+, RHEL4)
o redhat-config-users: GUI admin tool for managing users and groups. (Fedora
Core 1)
o linuxconf: (Note: Linuxconf is no longer included with Red Hat Linux 7.3+)
Start linuxconf:
RH 5.2: Select Start + Programs + Administration + linuxconf .
RH 6+: Select Gnome Start icon (located lower left corner) +
System + Linuxconf .
Add the user: Select options Config + User accounts + Normal + User
accounts + select button Add. There is also the option of adding the
user to additional groups (e.g. enter floppy under the heading
Supplementary groups and then Accept). For a list of groups, the group
names should be separated by a single space. This tool will allow you
to set default directories and shells, add rules about passwords, and set e-mail
aliases, group membership and disk quotas. One can modify or delete
users from linuxconf as well.
Set user password: After creating the user, use options Config + User
accounts + Normal + User accounts. Select the user from the list, then
select the Passwd button. This will allow you to enter an initial
password for the account.
File Editing Method: - (as root) Edit files to add/remove a user
o Create user entry in /etc/passwd
user:x:505:505:Mr. Dude User:/home/user:/bin/bash
o Create group: /etc/group
user:x:505:
o Create home directory:
cd /home
mkdir user
o Copy default files:
cp -pR /etc/skel/. /home/user
chown -R user.user /home/user
o The creation of /etc/shadow and /etc/gshadow requires the execution of a
program to encrypt passwords. Use the commands pwconv and grpconv to
synchronize the shadow files.
o Assign a password: passwd user
o Also see:
Shadow integrity verification: grpck [-r] [group shadow]
File editor: vipw.
Note:
For every user ID text string there is an associated UID integer. See the third ":"
delimited field in the file /etc/passwd.
The "Linux Standard Base" pecification states that IDs 0 to 99 should be statically
allocated by the system and that user IDs from 100 to 499 should be reserved for
dynamic allocation by system administrators and post install scripts using useradd.
[LSB chapter 21] This is of course not completely realistic as it would limit Linux to 400
users. Red Hat/Fedora Linux distributions begin incrementing user UIDs from 500. By
default the useraddcommand will increment by one for each new ID.
Large organizations need to think ahead when creating a new user. Autonomous
systems are often eventually linked together to share files using NFS at a later date and
have synchronization problems. The same user ID (text string) on two different systems
may have different UIDs. The problem this creates is when a file with one system can
not be edited when accessed from the second system as the second system regard him
as a different user because the system has a different UID. It is best to use
the useradd "-u" option to assign users a UID integer associated with the text string ID.
Many systems administrators use the employee ID as they know it will be unique across
the corporation. Group GIDs can be assigned to department or division numbers. This
will allow smooth operation of connected systems.
NFS: For systems which will use NFS to share files, one can administer user accounts
to make creation, editing and ownership of files seamless and consistent. Look at the
file /etc/passwd on the file server which you will mount to determine the user ID
number and group ID number.
user1:x:505:505:Joe Hacker:/home/user1:/bin/bash
User-ID:password:User-ID-Number:Group-ID-Number:comment:/home/User-ID-Home-Directory:default-shell
Add a user to the system which matches. This will allow files generated on the file
server to match ownership of those generated on the client system.
[root]# useradd -u User-ID-Number -g Group-ID-Number User-ID
Ideally you would configure an NIS or LDAP authentication server so that login IDs and
group IDs reside on one server. This tip is for separate autonomous systems, or
for systems using different authentication servers, which share files using NFS.
This tip can also apply to smbmount-ed MS/Windows shares.
Default settings for new users are stored in /etc/skel/. To modify default .bash_logout
.bash_profile .bashrc .gtkrc .kde/ configuration files for new users, make the changes
here.
Also see the YoLinux tutorial on Managing groups
Security Goals:
selinux-policy-strict
selinux-policy-strict-sources: Configuration files
selinux-policy-targeted
selinux-policy-targeted-sources: Configuration files
libselinux: Library which provides a set of interfaces for security-aware applications to
get and set process and file security contexts.
selinux-doc
Configuration file: /etc/selinux/config
Enforce:
o Use command: setenforce 1
(Alter SELinux enforcement while kernel is running.)
or
o echo 1 > /selinux/enforce
or
o Specify in /etc/grub.conf on the "kernel" command line: enforcing=1
(Sets enforcement during boot.)
Disable:
o Use command: setenforce 0
or
o echo 0 > /selinux/enforce
or
o Specify in /etc/grub.conf on the "kernel" command line: selinux=0
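To verify which mode SELinux is currently running in (standard SELinux utilities, assuming
the packages above are installed):
    getenforce        # prints Enforcing, Permissive or Disabled
    sestatus          # detailed status, including the loaded policy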
Security contexts:
Also see YoLinux Tutorials on Web Site configuration and SELinux policies.
For tar backups which preserve SELinux file and directory policies, see star discussed in Linux
backups and archiving below.
File: /etc/security/limits.conf :
o core - limits the core file size (KB)
o data - max data size (KB)
o fsize - maximum filesize (KB)
o memlock - max locked-in-memory address space (KB)
o nofile - max number of open files
o rss - max resident set size (KB)
o stack - max stack size (KB)
o cpu - max CPU time (MIN)
o nproc - max number of processes
o as - address space limit
o maxlogins - max number of logins for this user
o priority - the priority to run user process with
o locks - max number of file locks the user can hold
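A sample /etc/security/limits.conf entry using the items above (the group name and values
are illustrative only):
    # <domain>    <type>   <item>    <value>
    @developers   soft     nofile    4096
    @developers   hard     nofile    8192
    *             hard     core      0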
File: /etc/security/access.conf :
Limit access by network or local console logins.
File: /etc/security/group.conf :
Grant/restrict group device access.
Also see the YoLinux tutorial on Managing groups
File: /etc/security/time.conf :
Restrict user access by time, day.
Also see:
If you are planning to administer the system, you would log in as root to perform the tasks. In
many instances you are logged in as a regular user and wish to perform some "root" sys-admin
tasks. Here is how:
Some systems may be configured so that only the switch user (su) command may be required
without all of the X-window configuration.
dpkg:
apt-get:
## Major bug fix updates produced after the final release of the
## distribution.
deb http://us.archive.ubuntu.com/ubuntu/ dapper-updates main restricted
deb-src http://us.archive.ubuntu.com/ubuntu/ dapper-updates main restricted
## Uncomment the following two lines to add software from the 'universe'
## repository.
## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## universe WILL NOT receive any review or updates from the Ubuntu security
## team.
deb http://us.archive.ubuntu.com/ubuntu/ dapper universe
deb-src http://us.archive.ubuntu.com/ubuntu/ dapper universe
## Uncomment the following two lines to add software from the 'backports'
## repository.
## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
#deb http://us.archive.ubuntu.com/ubuntu/ dapper-backports main restricted universe multiverse
#deb-src http://us.archive.ubuntu.com/ubuntu/ dapper-backports main restricted universe multiverse
Command Description
apt-cache search package-name Query repositories to see if package is available.
Also see the man pages for: dpkg, dselect, apt-get, apt-cache, apt-cdrom (add CD-Rom to
sources list), apt-config
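Typical apt-get usage on a Debian/Ubuntu system (the package name is only an example):
    apt-get update                    # refresh package lists from the configured repositories
    apt-cache search openssh          # search for a package
    apt-get install openssh-server    # install a package and its dependencies
    apt-get upgrade                   # upgrade all installed packages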
The rpm command is used to manage software applications and system modules for Red Hat,
Fedora, CentOS, Suse and many other Linux distributions.
Step One: Import Red Hat and Fedora GPG signature keys:
Install the public key, e.g.: rpm --import /usr/share/rhn/RPM-GPG-KEY
(Red Hat package up2date - now deprecated. Use YUM.)
Do this once to configure RPM so that you won't constantly get the warning message
that the signature is "NOKEY".
The purpose is to protect you from using a corrupt or hacked RPM.
Once these commands are performed, you are ready to use the rpm command. (This is
also required for the YUM commands below.)
Note: Many GPG public keys for other RPM packages (i.e. MySQL: 0x5072E1F5), can
be obtained from http://www.keyserver.net/.
(The following RPM installation warning will inform you of the key to obtain: warning:
MySQL-XXXX.rpm: V3 DSA signature: NOKEY, key ID 5072e1f5)
Importing a new key from key server:
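One way to pull a key from a public key server and make it known to RPM (a sketch; the key
server and the MySQL key ID mentioned above are used for illustration):
    gpg --keyserver pgp.mit.edu --recv-keys 0x5072E1F5    # fetch the key
    gpg -a --export 0x5072E1F5 > mysql-gpg-key.asc        # export it ASCII-armored
    rpm --import mysql-gpg-key.asc                        # import it into the RPM keyring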
Notes:
This is because a package is doubly listed. (Often due to dual 32/64 bit architectures
such as the AMD Athlon/Opteron and Intel EM64T - Extended Memory 64
Technology)
[root]# rpm -q package-name
package-name-X.X.X-X
package-name-X.X.X-X
Fix: rpm -e --allmatches package-name
[Potential Pitfall]: You try and install an RPM but you can not get the appropriate version
of the run time libraries because they are too old and not present on your system or you
get a runtime error:
/usr/bin/ld: cannot find /lib/libxx.so.1.0.4
Here is how to install some old libraries on your newer system without corrupting your
current installation.
1. First force the installation of the RPM without the dependency requirement: rpm
--nodeps -ivh xxxx-...rpm
2. Next download an old RPM of the appropriate library, i.e. glibc-x.x.x.rpm
3. Extract the libraries from the RPM: rpm2cpio glibc-x.x.x.rpm | cpio -idv
This will install to your current directory: ./usr/lib/.. and ./lib/...
4. Manually copy the library file to the library directory or path accessible by
LD_LIBRARY_PATH or ldconfig: i.e.
cp ./lib/libxx.so.1.0.4 /lib/libxx.so.1.0.4
Also see:
RPM HowTo.
RPM.org Home Page
Alien - package converter between rpm, dpkg, stampede slp, and slackware tgz file
formats.
CheckInstall - Create packages for RPM (Red Hat, Fedora, Suse), Debian or Slackware
for install and uninstall.
Execute the following commands (in order given) to perform an automatic system update:
1. /usr/bin/rhn_register :You must first register your system with the Red Hat database.
This command will perform a hardware inventory and reporting of your system so that
Red Hat knows which software to load to match your needs.
2. /usr/bin/up2date-config :This allows you to configure the "up2date" process. It allows
you to define directories to use, actions to take (i.e. download updates, install or not
install, keep RPM's after install or not), network access (i.e. proxy configuration), use of
GPG for package verification, packages or files to skip, etc. Use of GPG requires the
Red Hat public key: rpm --import /usr/share/rhn/RPM-GPG-KEY
3. /usr/sbin/up2date :This command will perform an audit of RPM's on your system and
discover what needs to be updated. It gives you a chance to unselect packages
targeted for upgrade. It will download RPM packages needed, resolve dependencies
and perform a system update if requested.
[Potential Pitfall]: This works quite well but it is not perfect. The Red Hat 7.1 Apache upgrade to
1.3.22 changed the configuration completely. (Beware: manual clean-up and re-configuration is
required.) When up2date finds the first broken dependency it stops to tell you. You then
have to unselect the package. It then starts again from the beginning.
Option          Description
--nox           Do not display the GUI interface.
-u, --update    Completely update the system.
-h, --help      Display command line arguments.
-v, --verbose   Print more info about what up2date is doing.
--showall       Show a list of all packages available for your release of Red Hat Linux,
                including those not currently installed.
up2date-gnome
rhn_register-gnome
Notes:
YUM (Yellowdog Updater, Modified) is a client command line application for updating an RPM
based system from an internet repository (YUM "yum-arch" server) accessible by URL
(http://xxx, ftp://yyy or even file://zzz local or NFS). The YUM repository has a directory of the
headers with RPM info and directory path information. YUM will resolve RPM package
dependencies and manage the importation and installation of dependencies.
YUM is also capable of upgrading across releases. One can upgrade Red Hat Linux 7 and 8 to
9. Red Hat 8 and 9 can be upgraded to Fedora Core. See Red Hat YUM upgrades.
[main]
cachedir=/var/cache/yum
debuglevel=2
logfile=/var/log/yum.log
pkgpolicy=newest
distroverpkg=redhat-release
tolerant=1
exactarch=1
retries=20
obsoletes=1
gpgcheck=1
exclude=firefox mozplugger gftp
File: /etc/yum.repos.d/fedora.repo (Fedora Core 3)
[base]
name=Fedora Core $releasever - $basearch - Base
#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/core/$releasever/$basearch/os/
mirrorlist=http://fedora.redhat.com/download/mirrors/fedora-core-$releasever
enabled=1
gpgcheck=1
File: /etc/yum.repos.d/fedora-updates.repo (Fedora Core 3)
[updates-released]
name=Fedora Core $releasever - $basearch - Released Updates
#baseurl=http://download.fedora.redhat.com/pub/fedora/linux/core/updates/$releasever/$basearch/
mirrorlist=http://fedora.redhat.com/download/mirrors/updates-released-fc$releasever
enabled=1
gpgcheck=1
Terms:
Fedora Extras:
Create file: /etc/yum.repos.d/extras.repo
[extras]
name=Fedora Extras $releasever - $basearch
baseurl=http://mirrors.kernel.org/fedora/extras/$releasever/$basearch/
        http://www.mirrorservice.org/sites/download.fedora.redhat.com/pub/fedora/linux/extras/$releasever/$basearch/
http://fr2.rpmfind.net/linux/fedora/extras/$releasever/$basearch/
gpgcheck=1
[freshrpms]
name=Fedora Linux $releasever - $basearch - freshrpms
baseurl=http://ayo.freshrpms.net/fedora/linux/$releasever/$basearch/freshrpms
enabled=0
gpgcheck=1
[rpmforge]
name = RHEL $releasever - RPMforge.net - dag
baseurl = http://apt.sw.be/redhat/el6/en/$basearch/rpmforge
mirrorlist = http://apt.sw.be/redhat/el6/en/mirrors-rpmforge
enabled = 1
protect = 0
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-rpmforge-dag
gpgcheck = 1
[flash]
name=Macromedia Flash plugin
baseurl=http://macromedia.mplug.org/apt/fedora/$releasever
http://sluglug.ucsc.edu/macromedia/apt/fedora/$releasever
http://ruslug.rutgers.edu/macromedia/apt/fedora/$releasever
http://macromedia.rediris.es/apt/fedora/$releasever
enabled=0
#gpgcheck=1
To directly enable a particular repository which is currently disabled (enabled=0):
yum -y --enablerepo=flash install flash-plugin
Fedora examples (more repositories: Jpackage, ...)
Commands:
rhn_register: GUI to enter user account and "Installation Number". Must purchase a
license to get this.
rhnreg_ks: Register a login/user account
Update:
o List packages which will be updated: yum check-update
(Does not perform an update)
o Update all packages on your system: yum update
o Update a package: yum update package-name
o Update all with same prefix: yum update package-name-prefix\*
This command will update your system. It will interactively ask permission. i.e.
"Is this ok [y/N]:"
o To avoid the prompt/questions use the command: yum -y update
  Sample session:
# yum -y update
Setting up Update Process
Setting up Repos
base 100% |=========================| 1.1 kB 00:00
updates-released 100% |=========================| 951 B 00:00
Reading repository metadata in from local files
base : ################################################## 2852/2852
primary.xml.gz 100% |=========================| 367 kB 00:02
MD Read : ################################################## 927/927
updates-re: ################################################## 927/927
Excluding Packages in global exclude list
Finished
Resolving Dependencies
--> Populating transaction set with selected packages. Please wait.
---> Downloading header for mod_dav_svn to pack into transaction set.
mod_dav_svn-1.1.4-1.1.x86 100% |=========================| 8.9 kB 00:00
---> Package mod_dav_svn.x86_64 0:1.1.4-1.1 set to be updated
---> Downloading header for initscripts to pack into transaction set.
initscripts-7.93.7-1.x86_ 100% |=========================| 87 kB 00:00
---> Package initscripts.x86_64 0:7.93.7-1 set to be updated
---> Downloading header for gtk2 to pack into transaction set.
...
...
Dependencies Resolved
Transaction Listing:
Install: aqhbci.x86_64 0:1.0.2beta-0.fc3 - updates-released
Install: aqhbci-devel.x86_64 0:1.0.2beta-0.fc3 - updates-released
Install: kernel.x86_64 0:2.6.11-1.14_FC3 - updates-released
...
...
o [Potential Pitfall]: Many times I have found that I can get errors during an update.
o I find that the error is traced to having two versions of a package installed at
once. The following command will reveal if this is true: rpm -q package-name. If
there are two versions of the same package installed, I find that removing the
newer version and re-running YUM to install an upgrade gets past these errors.
To install a single package: yum -y install package-name
This will also resolve package dependencies.
Remove a package: yum remove package-name
Info:
o List available packages, version and state (base, installed, updates-
released): yum list
o List the packages installed which are not available in repository listed in config
file: yum list extras
o List packages which are obsoleted by packages in yum repository: yum list
obsoletes
Clean local cache of headers and RPM's: yum clean all
(See: /var/cache/yum/)
Yum Commands:
rhn_register Register to a Red Hat Network hosted server. Typically useful for
licensed Red Hat Enterprise Linux.
See yum man page for a full listing of commands and command arguments.
Notes:
#!/bin/sh
if [ -f /var/lock/subsys/yum ]; then
/usr/bin/yum -R 10 -e 0 -d 0 -y update yum
/usr/bin/yum -R 120 -e 0 -d 0 -y update
fi
Links:
YUM Homepage
YUM Guides - YUM download, install and YUM server configuration.
YumEx will allow you to manage the RPM packages on your system. It allows the administrator
to install/update packages from internet repositories as well as un-install RPMs from the system.
The command rdist helps the system administrator install software or update files across many
machines. The process is launched from one computer.
Command: rdist -f instruction-file
Instruction file:
files=(
/fully-qualified-path-and-file-name
/next-fully-qualified-path-and-file-name
)
dest = ( computer-node-name )
install /fully-qualified-directory-name-of-destination;
For more info see the rdist man page and rdistd man page (section 8: "man 8 rdistd").
Also see the rsync man page to migrate file changes.
File: files-to-sync.txt
+index.html
-README
+webpage-1.html
+webpage-2.html
+webpage-3.html
Files to include (+) and files which are excluded from synchronization (-).
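One way to use such a list with rsync (a sketch only; the paths and host are hypothetical,
and the exact rule syntax expected in the file is described under FILTER RULES in the
rsync man page):
    # push only the included files to the web server
    rsync -av --filter='merge files-to-sync.txt' ./site/ user@server1:/var/www/html/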
Links:
Note: The lastlog command prints time stamp of the last login of system users. (Interprets
file: /var/log/lastlog)
Also see last command.
Many system and server application programs such as Apache, generate log files. If left
unchecked they would grow large enough to burden the system and application.
The logrotate program will periodically backup the log file by renaming it. The program will also
allow the system administrator to set the limit for the number of logs or their size. There is also
the option to compress the backed up files.
Configuration file: /etc/logrotate.conf
Directory for logrotate configuration scripts: /etc/logrotate.d/
The configuration file lists the log file to be rotated, the process kill command to momentarily
shut down and restart the process, and some configuration parameters listed in the logrotate
man page.
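A sample stanza of the kind dropped into /etc/logrotate.d/ (the log path and the reload
command are illustrative only):
    # rotate weekly, keep four compressed copies, and signal the daemon afterwards
    /var/log/example-app.log {
        weekly
        rotate 4
        compress
        missingok
        postrotate
            /etc/init.d/example-app reload > /dev/null 2>&1 || true
        endscript
    }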
Form of command: find path operators
Examples:
Search and list all files from current directory and down for the string ABC:
find ./ -name "*" -exec grep -H ABC {} \;
find ./ -type f -print | xargs grep -H "ABC" /dev/null
egrep -r ABC *
Find all files of a given type from current directory on down:
find ./ -name "*.conf" -print
Find all user files larger than 5Mb:
find /home -size +5000000c -print
Find all files owned by a user (defined by user id number. see /etc/passwd) on the
system: (could take a very long time)
find / -user 501 -print
Find all files created or updated in the last five minutes: (Great for finding effects
of make install)
find / -cmin -5
Find all users in group 20 and change them to group 102: (execute as root)
find / -group 20 -exec chown :102 {} \;
Find all suid and setgid executables:
find / \( -perm -4000 -o -perm -2000 \) -type f -exec ls -ldb {} \;
find / -type f -perm +6000 -ls
Note: suid executable binaries are programs which switch to root privileges to perform
their tasks. These are created by setting the setuid/setgid bits: chmod +s. These programs
should be watched as they are often the first point of entry for hackers. Thus it is
prudent to run this command and remove the setuid/setgid bits from executables which either
won't be used or are not required by users: chmod -s filename
Directive              Description
-name                  Find files whose name matches the given pattern
-print                 Display the path of matching files
-user                  Search for files belonging to a specific user
-exec command {} \;    Execute a Unix/Linux command for each matching file.
-atime (+t,-t,t)       Find files accessed more than +t days ago, less than -t, or precisely t
                       days ago.
-ctime (+t,-t,t)       Find files changed ...
-perm                  Find files set with specified permissions.
-type                  Locate files of a specified type:
                       c: character device files
                       b: block device files
                       d: directories
                       p: pipes
                       l: symbolic links
                       s: sockets
                       f: regular files
-size n                Find files whose size is larger than "n" 512-byte blocks (default), or
                       specify a different unit with a letter following "n":
                       nb: 512-byte blocks
                       nc: bytes
                       nk: kilobytes
Also see:
Finding/Locating files:
locate/slocate   Find the location/list of files which contain a given partial name
which            Find the executable file location of the command given. The command must
                 be in your path.
whereis          Find the executable file location of the command given and related files
rpm -qf file     Display the name of the RPM package from which the file was installed.
File Information/Status/Ownership/Security:
ls List directory contents. List file information
chmod Change file access permissions
chmod ugo+rwx file-name :Change file security so that the user, group and
all others have read, write and execute privileges.
chmod go-wx file-name :Remove file access so that the group and all others
have write and execute privileges revoked/removed.
chown Change file owner and group
chown root.root file-name :Make file owned by root. Group assignment is also
root.
fuser    Identify processes using files or sockets
         If you ever get the message "error: cannot get exclusive lock",
         then you may need to kill a process that has the file locked. Either terminate the
         process through the application interface or use the fuser command:
         fuser -k file-name
file Identify file type.
file file-name
Uses /usr/share/magic, /usr/share/magic.mime for file signatures to identify file
type. The file extension is NOT used.
Add shell script to have run hourly, daily, weekly or monthly into the appropriate directory:
/etc/cron.hourly/
/etc/cron.daily/
/etc/cron.weekly/
/etc/cron.monthly/
These are preconfigured schedules. To assign a very specific schedule add a line to
the /etc/crontab file. Cron entries may also be added to a crontab formatted file located in the
directory /var/spool/cron/.
The administrator can allow users to use this facility with specific control by using the
/etc/cron.deny and /etc/cron.allow files.
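A sample /etc/crontab entry (the script path is hypothetical); the fields are minute, hour,
day of month, month, day of week, user and command:
    # m    h    dom  mon  dow  user   command
    30     2    *    *    *    root   /usr/local/bin/nightly-backup.sh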
The at facility may be controlled with the /etc/at.deny and /etc/at.allow files.
Man pages:
cron
crontab
The at command will schedule single jobs. (cron is for re-occurring jobs) The
daemon /usr/sbin/atd will run jobs scheduled with the at command. Access control to the
command is controlled using the files /etc/at.allow (list of user id's permitted to use
the at command) and /etc/at.deny.
The at command will respond with its "at>" prompt, upon which you enter the command you
wish to execute followed by "Enter". More commands may be entered. When done, enter
"control-d".
[prompt]$ atq
1 2002-03-07 12:00 a user-id
[prompt]$ atrm 1
Man pages:
Managing Time:
The BIOS computer clock stores hardware time while the OS keeps track of system time. The
system time is initialized during boot by syncing OS time to the hardware time. It is common for
web servers to set their clocks to GMT0 time as their audience is worldwide and GMT is the
only true standard time. Your local office server would most likely be set to local time.
Read time:
Read system time (Linux OS time):
o date
Read hardware clock (BIOS clock):
o /sbin/hwclock
(Same as /sbin/hwclock --show)
o /sbin/hwclock --utc
Note that the time zone setting is a soft link from /etc/localtime to a file
under /usr/share/zoneinfo/ (or /usr/lib/zoneinfo/ on older systems). To set the default time zone
to US CST, generate a new link manually with the command: ln -sf
/usr/share/zoneinfo/US/Central /etc/localtime
Try: /usr/sbin/ntpdate -q time.ucla.edu
Note: Typically many web servers set their time to GMT due to the worldwide nature of their
service. Internally, UNIX systems keep time as the number of seconds since Jan 1, 1970 00:00
UTC (Coordinated Universal Time). "Calendar Time" is then calculated based on your time zone
and whether you are on Standard or Daylight Savings time (Second Sunday of March to First
Sunday of November, beginning March 2007).
The timed (time server daemon) allows one to synchronize the host's time with the
time of another host. This is a master-slave configuration. See
the timed and timedc man pages.
TZ Environment Variables:
TZ Variable          GMT Offset   Description
GMT0                 0            Greenwich Mean Time
UTC0                 0            Universal Coordinated Time
FST2FDT              2            Fernando De Noronha Std
GST3                 3            Greenland Standard Time
BST3                 3            Brazil Standard Time
EST3EDT              3            Eastern Brazil Standard Time
NST3:30NDT           3.5          Newfoundland Standard Time/Newfoundland Daylight Time
AST4ADT              4            Atlantic Standard Time/Atlantic Daylight Time
EST5EDT              5            USA Eastern Standard Time/Eastern Daylight Time
EST6CDT              5            USA Eastern Standard Time/Central Daylight Time
CST6CDT              6            USA Central Standard Time/Central Daylight Time
MST7                 7            USA Mountain Standard Time
MST7MDT              7            USA Mountain Standard Time/Mountain Daylight Time
PST8PDT              8            USA Pacific Standard Time/Pacific Daylight Time, 8 hrs from GMT
AKS9AKD              9            USA Alaska Standard Time/Alaska Daylight Time
YST9YDT              9            Yukon Standard Time/Yukon Daylight Time
HST10                10           USA Hawaiian Standard Time/Hawaiian Daylight Time
NZST-12NZDT          -12          New Zealand Standard Time/New Zealand Daylight Time
EST-10               -10          Australian Eastern Standard Time
EST-10EDT            -10          Australian Eastern Standard Time/Australian Eastern Daylight Time
CST-9:30             -9.5         Australian Central Standard Time
CST-9:30CDT          -9.5         Australian Central Standard Time/Australian Central Daylight Time
JST-9                -9           Japan Standard Time
KST-9KDT             -9           Korean Standard Time
WST-8:00, WAS-8WAD   -8           Australian Western Standard Time
CCT-8                -8           China Coast Time
HKT-8                -8           Hong Kong Time
JST-7:30             -7.5         Java Standard Time
NST-7                -7           North Sumatra Time
IST-5:30             -5.5         Indian Standard Time
IST-3:30IDT          -3.5         Iran Standard Time
MSK-3MSD             -3           Moscow Time
SAST-2SADT           -2           South Africa Standard Time/South Africa Daylight Time
EET-2EEST            -2           Eastern European Time/Eastern European Time Daylight Savings Time
MET-2METDST          -2           Middle European Time/Middle European Time Daylight Savings Time
CET-1CEST            -1           Central European Time/Central European Time Daylight Savings Time
WAT-1                -1           West Africa Time
WET0WETDST           0            Western European Time/Western European Time Daylight Savings Time
See /usr/share/zoneinfo/.
The daemon ntpd will continually monitor time and synchronize your system clock with
that of a known accurate time system (atomic clock). Corrections are implemented in
small steps to correct the clock over time. An error of over 1000 seconds causes ntpd to
abort the correction. The init script /etc/rc.d/init.d/ntpd issues the
command /usr/sbin/ntpdate to set the time.
Time servers:
time.nist.gov
ns.arc.nasa.gov
tick.usno.navy.mil
Configuring NTP:
Client Configuration:
1. Edit the NTP configuration file /etc/ntp.conf to list your time servers and access restrictions:
server time1.ntpServer.gov
server time2.ntpServer.gov
restrict time1.ntpServer.gov mask 255.255.255.255 nomodify notrap noquery
restrict time2.ntpServer.gov mask 255.255.255.255 nomodify notrap noquery
restrict 127.0.0.1
2. This will synchronize your system clock with the time servers listed.
Note that using IP addresses instead of fully qualified domain names will provide a
faster response.
restrict options:
Option      Description
mask        Limits the remote NTP server to a single IP address (255.255.255.255);
            default mask 0.0.0.0.
nomodify    Run time configuration can not be modified by the remote NTP server.
notrap      Do not log remote messages.
noquery     Do not allow remote ntpq or ntpdc queries.
notrust     Deny cryptographically un-authenticated NTP queries.
3. Synchronize time with the NTP server: ntpdate -u time1.ntpServer.gov
4. Start NTP daemon: service ntpd start
(or: /etc/init.d/ntpd start)
5. Configure NTP daemon to start during boot: chkconfig ntpd on
6. Check time: date
Note:
NTP uses UDP on port 123 for inbound and outbound communication.
Check /var/log/messages for errors.
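Once ntpd is running, its peer status can be checked (a quick sanity check):
    /usr/sbin/ntpq -p      # list configured peers with stratum, delay and offset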
PHP has an independent setting in /etc/php.ini
[Date]
; Defines the default timezone used by the date functions
;date.timezone = GMT-0
date.timezone = Europe/London
[mysqld_safe]
timezone = Europe/London
Command: /usr/bin/system-config-time
[root]# yast2 ntp-client
Links:
When you log in, the message "You have new mail" may greet you. The system will often send
a mail message to the "root" user after the completion of some cron jobs, software installation
or as an error message meant to alert the system administrator. Type the console command
"mail". The following simple commands will help you navigate this simple mail client.
The "mail" command is included with the package "mailx". This is included with the default
Fedora and Red Hat installations. Ubuntu users must include the "universe" repository to get
access to the package "mailx".
[prompt]$ tty
/dev/pts/4
stty: Text Terminal configuration commands.
Description                             Control Character   C format   ASCII (decimal)
Linefeed                                ctrl-j              \n         10
Carriage Return                         ctrl-m              \r         13
Escape Character                        ctrl-v                         22
Stop screen scroll                      ctrl-s                         19
Resume screen scroll                    ctrl-q                         17
Backspace (and delete) one character    ctrl-h              \b         8
Backspace (and delete) one word         ctrl-w                         23
Delete line                             ctrl-u                         21
End of file (ctrl-z on DOS/VAX)         ctrl-d                         4
Interrupt signal SIGINT                 ctrl-c                         3
Suspend signal SIGSTOP                  ctrl-z                         26
Quit signal SIGQUIT                     ctrl-\                         28
Typically repaint screen                ctrl-r                         18
  (In bash: reverse search of command history) (Non POSIX)
Note:
Typing "ctrl-m" is just like hitting the "Enter" key. If you want to enter a "ctrl-m"
as part of the input to the stty command, prefix it with "ctrl-v" so that the "ctrl-m" is
"escaped" from acting as a terminal directive and instead acts as command input.
Check terminal type: echo $TERM
Set terminal type: export TERM=xterm
This is a very common fix for many remote terminal problems.
Gnome Terminal:
Terminal configuration to handle the annoying backspace problems associated with telnet-ing to
a different system. For example, how to configure the Linux gnome-terminal for use with an
SGI/IRIX system:
Start /usr/bin/gnome-terminal
Select: "File" + "New Profile..."
Enter profile name: SGI
Base on: Default
Select: "Create"
Select: "Edit" + "Profiles..."
Select profile: "SGI" and select "Edit" button.
Select the tab: "Compatibility"
Backspace key generates: change from "ASCII DEL" to "Control-H"
Select: "Close"
Select: "Terminal" + "Profile" + "SGI"
Man Pages:
Also see /usr/include/bits/termios.h
Directory Listings and Terminal Colors for "ls": If you alter your terminal background color,
you will quickly find that the display from the command "ls" may obscure some of the results.
There are three options for setting the colors applied to the results of the "ls" command:
1. The color scheme can be ignored and all output displayed in the foreground color.
Set an alias in your $HOME/.bashrc file: alias ls='ls -F'
The output will use symbols instead of colors to identify the types:
o A closing "/" will denote a directory.
o A "@" denotes a symbolic link.
o An "*" denotes execute permissions.
2. Use the command dircolors to list the system default. Change and assign new colors
using the environment variable "LS_COLORS". This can be set in
your $HOME/.bashrc file.
LS_COLORS='no=00:fi=00:di=01;34:ln=01;36:pi=40;33:so=01;35:do=01;35:
bd=40;33;01:cd=40;33;01:or=40;31;01:ex=01;32:*.tar=01;31:
*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:
*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.deb=01;31:
*.rpm=01;31:*.jar=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:
*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:
*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:
*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.avi=01;35:*.fli=01;35:
*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.ogg=01;35:*.mp3=01;35:*.wav=01;35:';
export LS_COLORS
System defaults shown. (Fedora Core 3)
3. Specify colors used in the system configuration file: /etc/DIR_COLORS or in your local
file $HOME/.dir_colors
Hardware Info:
/usr/bin/lsdev List devices and info on system hardware. Also IRQ's.(RPM package
procinfo)
Also cat /proc/devices
/sbin/lspci list all PCI devices (result of probe) Also lspci -vvx and cat /proc/pci
cat /proc/interrupts List IRQ's used by system and the device using the interrupt.
cat /proc/ioports List I/O ports used by system.
cat /proc/dma List DMA channels and device used by system.
cat /proc/cpuinfo List info about CPU.
Also See:
PERL Administration/Maintenance:
At some point you will be required to administer the installation of PERL modules.
Installation can be done:
Manually:
o Un-zip/Un-tar module: tar xzf yourmodule.tar.gz
o Build with PERL makefile:
perl Makefile.PL
make
o Install: make install
Automatically: (preferred)
Start the CPAN shell with: perl -MCPAN -e shell
cpan> help
This method rocks! It connects to a CPAN server and ftp's a gzipped tarball and installs
it. First time through it will ask a bunch of questions. (Answer "no" to the first question
for autoconfigure.) Defaults were good for me. The only reason to manually configure
this is if you are using a proxy. It then asks for your location (i.e. North America) and
country. I entered a number for the first CPAN server but after that the actual URL was
cut and pasted in whole.
If it fails, you must load the appropriate RPMs and retry using "force install module-name".
File: testAuthenNIS.pl
#!/usr/bin/perl
BEGIN{push @INC, "/usr/lib/perl5/site_perl/5.8.5/Apache";}
eval "use Apache::AuthenNIS"; $hasApacheAuth = $@ ? 0 : 1;
printf "Apache::AuthenNIS". ($hasApacheAuth ? "" : " not") . " installed";
printf "\n";
Test: [root]# ./testAuthenNIS.pl
o Good: Apache::AuthenNIS installed
o Not good: Apache::AuthenNIS not installed
(Installation)
Most PERL modules are now available as RPMs. See:
o RpmForge.org (Also available via YUM)
o Search RpmFind.net
File packing/archiving:
It should be noted that automated enterprise wide multi-system backups should use a system
such as Amanda. (See Backup/Restore links on YoLinux home page) Simple backups can be
performed using the tar command:
    tar -cvf /dev/st0 /home /opt
This will back up the files, directories and all their subdirectories and files of the
directories /home and /opt to the first SCSI tape device (/dev/st0).
#!/bin/bash
tar -cz -f /mnt/BackupServer/user-id/backup-weekly-`date +%F`.tar.gz -C /home/user-id dir-to-back-up
SELinux Tar:
"Security Enhanced" Linux archive backup, "star", will save and restore the SELinux attributes.
Note that the "tar" command will not operate with the "star" archive.
star -xattr -H=exustar -c -f archive-file.star /directory/path/to/backup/
Notes:
Backup using compression to put more on SCSI tape device: tar -z -cvf /dev/st0
/home /opt
List contents of tape: tar -tf /dev/st0
List contents of compressed backup tape: tar -tzf /dev/st0
Backup directory to a floppy: tar -cvf /dev/fd0 /home/user1
When restored it requires root because the root of the backup is "/home".
For more on Linux floppy devices see the YoLinux tutorial: Using floppies with Linux.
Backup sub-directory to floppy using a relative path: tar -cvf /dev/fd0 src
First execute this command to go to the parent directory: cd /home/user1
Backup sub-directory to floppy using a defined relative path: tar -cvf /dev/fd0 -C
/home/user1 src
Restore from floppy: tar -xvf /dev/fd0
Backup directory to a compressed archive file:
tar -z -cvf /usr/local/Backups/backup-03212001.tar.gz -C /home/user2/src project-x
List contents: tar -tzf /usr/local/Backups/backup-03212001.tar.gz
Restore:
cd /home/user2/src
tar -xzf /usr/local/Backups/backup-03212001.tar.gz
Also see:
System Fixes:
Fix the error: "Failed to activate 'OAFID:GNOME_SettingsDaemon" This annoying
dialog box may appear after one logs in. Themes, sounds or background may cease to
operate properly. You may also get the error message "The Settings Daemon restarted
too many times."
Admin Tips:
Unix command line output is sent to the screen (default) but you would also like the
output to print to a file (bash shell):
command 2>&1 | tee output-file.txt
Red Hat Enterprise 4/Fedora Core (2+) GUI system configuration tool commands begin
with "system-config-". Type this in a bash shell and press tab twice to view all the GUI
configuration tool commands available.
Links:
Process Monitoring HowTo - Alavoor Vasudevan
SysAdmin Magazine - Journal for Unix System Administrators
LinuxConf (Solucorp)
Shell Script Resources:
o Bash: Linux terminal command guide
o http://theory.uwinnipeg.ca/UNIXhelp/scrpt/index.html
o Regular Expressions - By Peter Benjamin
SysAdmin Tools:
Webmin
Alternate configurations:
Diskless-HOWTO
Diskless-root-NFS-HOWTO