Monitoring Commands for a Linux Server
That depends on what you want to use the log for. Do you want to use it only for troubleshooting purposes, or do you want to capture everything that's happening? Is it a legal requirement to capture what each user is running or viewing?
If you are using logs for troubleshooting purposes, save only errors, warnings, or fatal messages. There's no reason to capture debug messages, for example.
Benefits of Centralizing Logs
Centralizing your logs makes them faster to search, which can help you solve production issues faster. You don't have to guess which server had the issue because all the logs are in one place. Additionally, you can use more powerful tools to analyze them, including log management solutions. These solutions can transform plain text logs into fields that can be easily searched and analyzed.
Which Protocol: UDP, TCP, or RELP?
There are three main protocols that you can choose from when you transmit log data over the network. The most common choices are UDP for your local network and TCP for the Internet. If you cannot afford to lose logs, use the more advanced RELP protocol.
UDP sends a datagram, which is a single packet of information. It's an outbound-only protocol, so it doesn't send you an acknowledgement of receipt (ACK), and it makes only one attempt to send the packet. UDP can be used to smartly degrade or drop logs when the network gets congested. It's most commonly used on reliable networks like localhost.
TCP sends streaming information in multiple packets and returns an ACK. TCP makes multiple attempts to send a packet, but is limited by the size of the TCP buffer. This is the most common protocol for sending logs over the Internet.
RELP is the most reliable of these three protocols, but it was created for rsyslog and has less industry adoption. It acknowledges receipt of data in the application layer and will resend if there is an error. Make sure your destination also supports this protocol.
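In rsyslog, the choice of transport is a one-character difference in the forwarding rule. A minimal sketch, assuming a hypothetical collector host logs.example.com (the host and ports are placeholders):

```conf
# /etc/rsyslog.conf forwarding sketch (logs.example.com is hypothetical)
*.*  @logs.example.com:514      # single @  = UDP
*.*  @@logs.example.com:514     # double @@ = TCP

# RELP needs the omrelp output module (rsyslog-relp package)
module(load="omrelp")
*.*  action(type="omrelp" target="logs.example.com" port="2514")
```

In practice you would pick one of these rules rather than forwarding the same messages three times.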
If rsyslog encounters a problem when storing logs, such as an unavailable network connection, it can queue the logs until the connection is restored. The queued logs are stored in memory by default. However, memory is limited, and if the problem persists, the logs can exceed memory capacity.
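To avoid losing queued logs when memory fills up, rsyslog can spill the queue to disk. A sketch using the legacy directive syntax (the queue file name and size limit are arbitrary examples):

```conf
# Disk-assisted queue: spill to disk instead of dropping logs
$ActionQueueType LinkedList
$ActionQueueFileName fwdq          # spool file prefix on disk
$ActionQueueMaxDiskSpace 1g        # cap the on-disk queue at 1 GB
$ActionResumeRetryCount -1         # retry forever instead of discarding
```

These directives apply to the forwarding action that follows them in the configuration file.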
Common Linux Log Files: Names and Usage
/var/log/messages : General message and system-related log
/var/log/auth.log : Authentication logs
/var/log/kern.log : Kernel logs
/var/log/cron.log : Crond logs (cron jobs)
/var/log/maillog : Mail server logs
/var/log/qmail/ : Qmail log directory (more files inside this directory)
/var/log/httpd/ : Apache access and error logs directory
/var/log/lighttpd/ : Lighttpd access and error logs directory
/var/log/boot.log : System boot log
/var/log/mysqld.log : MySQL database server log file
/var/log/secure or /var/log/auth.log : Authentication log (location varies by distribution)
/var/log/utmp or /var/log/wtmp : Login records file
/var/log/yum.log : Yum command log file
Depending on the distribution, the MySQL error log may be /var/log/mysqld.log or /var/lib/mysql/hostname.err.
/var/log/boot.log – This file contains the boot and reboot information of this Linux box.
/var/log/btmp – This file records failed login attempts.
/var/log/utmp – This file has the current login state of each user.
/var/log/wtmp – This file contains the full login and logout history.
/var/log/cron – This file contains all cron-related messages, such as when the cron daemon started a job, failure messages, etc.
/var/log/dmesg – This file contains messages related to device drivers. There is also a command, dmesg, which can be used to view the messages in this file.
/var/log/lastlog – This file contains information about the last logins of all users. It is a binary file which can be read with the lastlog command.
/var/log/maillog – This log file has the messages from the mail server running on the system, e.g. sendmail.
/var/log/messages – This is the general system activity log. Everything is logged to this file, including logins, authentication failures, anonymous logins, network connections, FTP sessions, etc. If you need to debug a process running on your system, this is the log file to go to.
/var/log/mysqld.log – This is the MySQL log file. MySQL logs all debug, failure, and success messages to this file. It also has information about the starting, stopping, and restarting of the MySQL daemon, mysqld.
/var/log/pureftp.log – This log file is of the pureftp process that listens for FTP connections. All connections, FTP logins, and authentication failures are logged to this file.
/var/log/secure – This file contains all security-related messages on the system. This includes authentication failures, possible break-in attempts, SSH logins, failed passwords, sshd logouts, invalid user accounts, etc.
/var/log/spooler – This file is rarely used and is empty on my server. This log file used to contain messages from USENET.
/var/log/xferlog – This file contains all FTP file transfer sessions, including the file name and the user who initiated the FTP transfer.
/var/log/yum.log – This file contains the activity of the yum package installer. All packages installed, updated, and deleted are logged to this file, as are any errors or warnings.
/var/log/httpd/ – This directory contains the error_log and access_log files of the Apache httpd daemon. As the names suggest, the error_log contains all errors encountered by httpd, including memory issues and other system-related errors, while the access_log contains a log of all requests received over HTTP.
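As a quick example of putting these files to use, you can count failed SSH logins. The sample log below is fabricated so the commands are reproducible; on a real server you would point grep at /var/log/auth.log or /var/log/secure instead:

```shell
# Fabricated sample log in /tmp (real data lives in /var/log/auth.log or /var/log/secure)
cat > /tmp/sample_auth.log <<'EOF'
Mar 20 12:01:01 host sshd[201]: Failed password for invalid user admin from 203.0.113.9 port 4321 ssh2
Mar 20 12:01:05 host sshd[201]: Failed password for root from 203.0.113.9 port 4322 ssh2
Mar 20 12:02:10 host sshd[230]: Accepted password for alice from 198.51.100.7 port 5000 ssh2
EOF

# Count the failed login attempts
failures=$(grep -c "Failed password" /tmp/sample_auth.log)
echo "$failures failed attempts"
```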
To see when someone last logged in to the system, use the lastlog command:

lastlog
1. locate and find
locate and find will both find files, but they work differently. locate simply looks in its prebuilt database and reports the file location. find does not use a database; it traverses all the directories and their subdirectories and looks for files matching the given criteria. Because find searches the real filesystem, it is slower but always up to date, and it has more options (size, modification time, ...).
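A minimal sketch of the difference, using a hypothetical path under /tmp (locate is shown only in a comment, since it needs its database rebuilt before it can see new files):

```shell
# Build a small tree to search (names are hypothetical)
rm -rf /tmp/demo
mkdir -p /tmp/demo/etc/ssh
touch /tmp/demo/etc/ssh/sshd_config

# find walks the real filesystem, so it sees the brand-new file immediately
found=$(find /tmp/demo -name "sshd_config")
echo "$found"

# locate would only see it after its database is refreshed, e.g.:
#   sudo updatedb && locate sshd_config
```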
2. tail, head, and cat commands
The cat command is useful for displaying an entire file's content, but in some cases we only need part of a file. The head and tail commands are very useful when you want to view a certain part at the beginning or at the end of a file, especially when you are sure you want to ignore the rest of the file content.
jobs : an alternate way of listing your own processes
bg : put a process in the background
fg : put a process in the foreground
A Basic Example of the wc Command
The wc command without any parameters displays a basic result for the tecmint.txt file. The three numbers shown below are 12 (number of lines), 16 (number of words), and 112 (number of bytes) of the file.

# wc tecmint.txt
12 16 112 tecmint.txt
wc -w : number of words in a file
wc -c : count of bytes in a file
wc -m : count of characters in a file
wc -l : number of lines in a file
wc -L : length of the longest line in a file
tail -f will keep watching the file, refreshing every 1 second by default. If you wish to control this, use the -s option ("sleep"), which should always be used together with -f, to specify the sleep interval between checks.
# tail is a command which prints the last few lines (10 by default) of a given file.

tail /var/log/messages
Mar 20 12:42:22 hameda1d1c dhclient[4334]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x280436dd)
## head, on the contrary to tail, prints the first 10 lines of the file.

head /etc/passwd
root:x:0:0:root:/root:/bin/bash
When head or tail is given several files, it prints a header with each file name before its output. If you want to remove this header, use the -q option for quiet mode.
To list all files open under a directory:

lsof +D /etc/httpd/logs/
To list all files opened by a specific process name:

[root@server ~]# lsof -c httpd | head -n 10
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
httpd 3858 root cwd DIR 144,166 4096 140085666 /
head and tail commands combined in Linux

head -n 20 /etc/passwd | tail -n 5
Output:
syslog:x:101:104::/home/syslog:/bin/false
messagebus:x:102:106::/var/run/dbus:/bin/false
usbmux:x:103:46:usbmux
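The same head-piped-to-tail trick can slice out an arbitrary range of lines. A sketch on a generated file, extracting lines 16 through 20:

```shell
# Generate 30 numbered lines, then slice out lines 16-20:
# head keeps the first 20, tail keeps the last 5 of those
seq 1 30 > /tmp/slice_demo.txt
slice=$(head -n 20 /tmp/slice_demo.txt | tail -n 5)
echo "$slice"
```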
3. The difference between grep and find
grep searches inside the contents of files; it does not match files or directories by name. find searches the filesystem itself and works with both files and directories, matching them by name and attributes.
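A sketch of the distinction, using hypothetical files under /tmp: find matches the file by its name, while grep (with -r to recurse and -l to list matching files) matches it by what it contains.

```shell
rm -rf /tmp/gf_demo
mkdir -p /tmp/gf_demo
echo "PermitRootLogin no" > /tmp/gf_demo/sshd_config
echo "hello" > /tmp/gf_demo/notes.txt

# find matches by file name/attributes
by_name=$(find /tmp/gf_demo -name "sshd_config")

# grep matches by file contents (-r recurse, -l list matching file names)
by_content=$(grep -rl "PermitRootLogin" /tmp/gf_demo)
echo "$by_name"
echo "$by_content"
```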
4. The difference between less and more
less is a more modern version that has the capabilities of more along with additional new capabilities. less, more, and pg are utilities for reading a very large text file in small sections at a time. less allows you to view long files one page at a time, instead of scrolling them off the top of your screen, and with less you can navigate both up and down through the pages, which is not possible with more.
If you know that the issue happened between 4 and 5 pm, you can use this:

grep "2013-08-26 16:" sample.log | less
Execute the previous command that starts with a specific word:

# !ps
ps aux | grep yp
root 16947 0.0 0.1 36516 1264 ? Sl 13:10 0:00 ypbind
Examples of commands used together with pipes

Command : What it does
ls -lt | head : Displays the 10 newest files in the current directory.
du | sort -nr : Displays a list of directories and how much space they consume, sorted from the largest to the smallest.
find . -type f | wc -l : Displays the total number of files in the current working directory and all of its subdirectories.
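The file-count pipe can be verified on a small tree with a known number of files; the /tmp paths below are made up for the example:

```shell
# Build a small tree containing exactly three regular files
rm -rf /tmp/count_demo
mkdir -p /tmp/count_demo/sub
touch /tmp/count_demo/a /tmp/count_demo/b /tmp/count_demo/sub/c

# find lists every regular file recursively; wc -l counts the lines
total=$(find /tmp/count_demo -type f | wc -l)
echo "$total files"
```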
Using the sar utility you can do two things: 1) monitor system real-time performance (CPU, memory, I/O, etc.), and 2) collect performance data in the background on an ongoing basis and analyze the historical data to identify bottlenecks.
sar -b 1 3 (report I/O statistics, 3 samples at 1-second intervals)

Among other things, sar can report:
- Collective CPU usage
- Individual CPU statistics
- Memory used and available
- Swap space used and available
- Overall I/O activities of the system
- Individual device I/O activities
- Context switch statistics
- Run queue and load average data
- Network statistics
- sar data from a specific time
mpstat :- mpstat reports processor statistics. The -A option displays all the information that mpstat can show.
The pmap command displays the memory map of a given process, and pmap -x gives some additional information about the memory maps. The following example displays the memory map of the current bash shell; here, 5732 is the PID of the bash shell.

pmap 5732
Socket Statistics – ss
ss stands for socket statistics. It displays information similar to the netstat command.

ss -l
When monitoring system performance, the w command will help you see who is logged on to the system.

w, uptime
lsof
lsof stands for "list open files"; it will list all the open files in the system.
tcpdump is a network packet analyzer. Using tcpdump you can capture packets and analyze them for any performance bottlenecks.

tcpdump -A -i eth0
When I was looking up how to search for text recursively, I came across this solution twice:

find / -type f -exec grep -H 'text-to-find-here' {} \;

Along with these, grep's --exclude, --include, --exclude-dir, and --include-dir parameters can be used for efficient searching.
In the find -exec form above, put the text to search for inside quotes, followed by {} \; so that grep is run on each file that find produces.
Simply running

grep -RIl "" .

will print out the path to all text files, i.e. those containing only printable characters.
grep -Erni "text you wanna search" .
To scan a log for common problem keywords, it is quicker and easier to do something like this:

egrep '(fail|denied|segfault|segmentation|reject|oops|warn)' /var/log/messages
- -atime +7: all files that were last accessed more than 7 days ago
- -atime 7: all files that were last accessed exactly 7 days ago
- -atime -7: all files that were last accessed less than 7 days ago

find /home -atime +7
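A reproducible sketch of the -atime tests, assuming GNU touch (its -a and -d options let us backdate a file's access time; the paths are made up):

```shell
rm -rf /tmp/atime_demo
mkdir -p /tmp/atime_demo
touch /tmp/atime_demo/recent.log                     # accessed just now
touch -a -d "10 days ago" /tmp/atime_demo/old.log    # backdate access time (GNU touch)

old=$(find /tmp/atime_demo -type f -atime +7)    # accessed more than 7 days ago
new=$(find /tmp/atime_demo -type f -atime -7)    # accessed within the last 7 days
echo "$old"
echo "$new"
```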
Finding all files owned by a user
Find all files owned by user vivek:

# find / -user vivek
Finding files modified within a specified time – display a list of all files in the /home directory that were last modified within the last 7 days:

# find /home -mtime -7
The asterisk metacharacter (*) is interpreted as meaning "zero or more of the preceding element".
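For example, in a regular expression b* means "zero or more b characters", so ab*c matches ac, abc, and abbc, but not axc. A quick check on a fabricated sample file:

```shell
# Four candidate lines; only three match ab*c
printf 'ac\nabc\nabbc\naxc\n' > /tmp/regex_demo.txt

# -c counts the matching lines
hits=$(grep -c 'ab*c' /tmp/regex_demo.txt)
echo "$hits"
```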