Linux daily skills (continuously updated)

Clean up the process list

Kill processes that are zombies or stopped.

# kill the parents of zombie processes (a zombie itself cannot be killed)
ps -A -ostat,ppid | grep -e '[zZ]' | awk '{ print $2 }' | xargs kill -9

# kill stopped (state T) processes; tail skips the "STAT PID" header line matched by the grep
ps -A -ostat,pid | grep -e '[T]' | tail -n +2 | awk '{ print $2 }' | xargs kill -9

Forcibly log out a user session

Tip: when too many users are logged in to a server and others cannot get in, the administrator can forcibly kick users off.

Use w to list the current login sessions, then kill one with pkill -kill -t [tty]:

pkill -kill -t pts/2
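The two steps can be combined into a tiny helper. This is only a sketch (the script name kickuser.sh and its interface are made up here) that kicks every session of a given user:

#!/bin/bash
# kickuser.sh -- forcibly log out every session of one user (illustrative sketch)
user=$1
[ -z "$user" ] && { echo "usage: $0 <username>"; exit 1; }

# "w -h <user>" lists that user's sessions without the header; column 2 is the TTY
for tty in $(w -h "$user" | awk '{print $2}'); do
    echo "killing session on ${tty}"
    pkill -KILL -t "$tty"
done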

Make Tab auto-completion case-insensitive

# If ~/.inputrc doesn't exist yet, first include the original /etc/inputrc so we don't override it
if [ ! -a ~/.inputrc ]; then echo "\$include /etc/inputrc" > ~/.inputrc; fi

# Add option to ~/.inputrc to enable case-insensitive tab completion
echo "set completion-ignore-case On" >> ~/.inputrc

Note: to make this change for all users, edit /etc/inputrc
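To confirm the option took effect without opening a new terminal, you can re-read the file and query readline's variables (a quick check, assuming bash):

# re-read ~/.inputrc into the current bash session
bind -f ~/.inputrc

# list readline variables and confirm the setting
bind -v | grep completion-ignore-case   # expect: set completion-ignore-case on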

Get network connection statistics

netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'

TIME_WAIT 22
ESTABLISHED 3254   # data transfer state
LAST_ACK 236
FIN_WAIT_1 648
FIN_WAIT_2 581
CLOSING 7
CLOSE_WAIT 4916
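On newer distributions where netstat is deprecated, the same per-state tally can be produced with ss (a sketch; with ss the state is the first column, and NR > 1 skips the header):

ss -ant | awk 'NR > 1 {++s[$1]} END {for (k in s) print k, s[k]}'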

Kill the process bound to a specific port

GistURL: https://gist.github.com/allex/8b399a93749703acd780

#!/bin/bash

port=$1
while [[ -z "${port}" ]]; do
    read -p "Please type the port you want to kill (q to exit): " port
done
[ "${port}" = "q" ] && exit 0

PID=""
case "$(uname)" in
    Linux*)  PID=$(lsof -t -i:"${port}") ;;
    Darwin*) PID=$(lsof -i -n -P | grep ":${port} (LISTEN)" | awk '{print $2}') ;;
esac

if [ -n "$PID" ]; then
    kill -9 $PID && echo "process with pid ${PID} on port ${port} killed successfully!"
else
    echo "No process found listening on port '${port}'. exit"
fi

Usage: ./killport.sh [port]
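On Linux, if the psmisc package is installed, fuser can do the lookup and the kill in one step (an alternative to the script above):

# kill whatever is using TCP port 8080 (example port)
fuser -k 8080/tcp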

Find IP address in shell

# works on Linux ("hostname -I" is not available on OS X)
ip=$(hostname -I | cut -d' ' -f1)

# or parse the ifconfig output (Linux)
ip=$(ifconfig | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p' | grep -v '127.0.' | head -n 1)

Example:

#!/bin/sh
ip=`ifconfig | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p' | grep -v '127.0.' | head -n 1`
wget -q -O - --header 'Host: st.comm.miui.com' "http://${ip}:8092/yn/build.hash" | base64 -d
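On modern Linux systems, the iproute2 ip command avoids the fragile ifconfig parsing above (a sketch; the sed pattern assumes the usual "inet A.B.C.D/NN" output format):

# first global (non-loopback) IPv4 address
ip=$(ip -4 addr show scope global | sed -n 's/.*inet \([0-9.]\+\).*/\1/p' | head -n 1)
echo "$ip"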

Compare differences between directories

cp -R $local $bak                  # snapshot the local copy first
rsync -a $server:$remdir/ $local/  # pull changes down from the server
rsync -a $local/ $server:$remdir/  # or push local changes up to the server
diff -wur $local $bak              # compare the updated copy against the snapshot
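If you only want to see the differences without copying anything, rsync's dry-run mode does the comparison in one step (a sketch; -n is dry-run, -c compares by checksum):

# list files that differ between the local copy and the server, changing nothing
rsync -avnc $local/ $server:$remdir/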

Use a cron job to clean up log files

A Linux system generates all kinds of logs and temporary files under /var/log/ and /tmp. How can we clean these files up automatically?

Using tmpwatch to automate temporary file cleanup

First, install the third-party tool tmpwatch:

yum install tmpwatch -y

Once tmpwatch is installed, run:

/usr/sbin/tmpwatch -am 12 /tmp

This deletes files under /tmp that have not been modified in the last 12 hours.

Next, configure the server to do this automatically.

From an SSH session, run crontab -e,

then go to the very bottom and paste:

0 4 * * * /usr/sbin/tmpwatch -am 12 /var/log

For a fuller daily-job script, see /etc/cron.daily/tmpwatch:

$ cat /etc/cron.daily/tmpwatch

flags=-umc
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
    -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix 240 /tmp
/usr/sbin/tmpwatch "$flags" 720 /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
  if [ -d "$d" ]; then
    /usr/sbin/tmpwatch "$flags" -f 720 "$d"
  fi
done

The -x option marks an entry to be excluded from the cleanup operation.


Using a shell script to do the same thing when tmpwatch is not available

find /var/log -type f -name "*.tmp" -exec rm {} \+

Normally we can run this as find /path -name "*.tmp" -exec rm {} \;, which spawns one rm per file. Passing the whole file list to a single rm (for example via a shell glob or command substitution) can fail because the argument list may grow larger (in bytes) than the maximum the kernel allows (getconf ARG_MAX). Piping through xargs solves this by batching the arguments into appropriately sized chunks.

This can also be configured as a cron job to run automatically:

# delete files under /var/log not modified in the last 12 days, at most 5000 per rm invocation
find /var/log -type f -mtime +12 -print0 | xargs -0 -n 5000 rm
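A matching crontab entry, reusing the 4 AM schedule from the tmpwatch example above, might look like this (a sketch):

0 4 * * * find /var/log -type f -mtime +12 -print0 | xargs -0 -n 5000 rm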

Git: recover from git reset --hard

How do you recover uncommitted changes to the working directory after a git reset --hard HEAD?

You can try git fsck --lost-found to see if your changes are still in the lost-found:

$ git fsck --lost-found
Checking object directories: 100% (256/256), done.
Checking objects: 100% (54/54), done.
dangling blob 15f9af8379f13672ca0e75d56df100edfd67fe6b
dangling commit 18fc9548f20eb8938dde68ab4a3dd0b7a0212dc3
dangling commit 33a832866e3855e300504ea6b584732e9c3c286c
dangling blob 568ca393d5e21cdc9eda2824111a5429a70d5113
dangling blob 89cdac4d3fc03546b5ab485aa8a9905b34702a4a
dangling blob abf03d6c84484a2b096a4d7f0ee5a85361f8a3d6 <- it's this one
dangling commit bc05be5eac21134b63ca51fbd20fee5c8782a640
dangling commit c0fa59cfaa0bad5f8ca8a1a845ba1673bb207b2d
dangling commit d140d6f693d8ef83d040d483bec3db95db084cd9
dangling blob e9c3eb31aa0589ab59f46630f7926681f7a14476  <- it's this one

Then you can view a dangling blob with git show:

git show e9c3eb31aa0589ab59f46630f7926681f7a14476

which will give you back the file content that was lost by the reset.

To find unreferenced commits, I found a tip somewhere suggesting this:

gitk --all $(git log -g --pretty=format:%h)

I also found files in another directory, <path to repo>/.git/lost-found/. From there, I could see the uncommitted files, copy out the blobs, and rename them.

Note: This only works if you added the files you want to save to the index (using git add .). If the files weren’t in the index, they are lost.
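Once you have identified the right objects, you can make them reachable again. A sketch using hashes from the listing above (the branch name recovered-work is arbitrary):

# re-attach a dangling commit to a new branch so it becomes reachable
git branch recovered-work bc05be5eac21134b63ca51fbd20fee5c8782a640

# or write a dangling blob's content back out to a file
git cat-file -p e9c3eb31aa0589ab59f46630f7926681f7a14476 > recovered-file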

Rotate Nginx log files

Nginx is a great web server; however, a default install will not rotate log files for you. There is no size limit, so a log keeps growing until your disk is full.

This is a problem especially on busy sites, as the access log can eat up disk space quite quickly.

In this tutorial, I will show you how to rotate Nginx log files automatically. My version is nginx/1.4.3, but any modern distribution should work in a similar way.

Manually rotating Nginx log files via cron

First, we need to create the bash script that cron will run to do the log rotation.

sudo vi /usr/local/sbin/rotate_nginx_log.sh

Here are the contents of the script (based on the example from the Nginx wiki):

#!/bin/bash
# <https://gist.github.com/allex/10360845>

# Set variables
logs_path="/var/log/nginx"
old_logs_path=${logs_path}/old
nginx_pid=`cat /var/run/nginx.pid`

time_stamp=`date -d "yesterday" +%Y-%m-%d`

mkdir -p ${old_logs_path}

# Main
for file in `ls $logs_path | grep log$ | grep -v '^20'`
do
    if [ ! -f ${old_logs_path}/${time_stamp}_$file ]
    then
        dst_file="${old_logs_path}/${time_stamp}_$file"
    else
        dst_file="${old_logs_path}/${time_stamp}_$file.$$"
    fi
    mv $logs_path/$file $dst_file
    gzip -f $dst_file  # compress the archived log
done

# tell Nginx to re-open its log files
kill -USR1 $nginx_pid

Note:

First, we move the current log to a new file for archiving. A common scheme is to name the most recent log file with a suffix of the current time stamp, e.g. $(date "+%Y-%m-%d").

The command that actually rotates the logs is kill -USR1 $(cat /var/run/nginx.pid). This does not kill the Nginx process, but instead sends it a SIGUSR1 signal causing it to re-open its logs.

Then execute sleep 1 to allow the process to finish re-opening its logs. We can then zip the old files or do whatever post-rotation processing we would like.

Next, make sure that the script file is executable by running:

chmod +x /usr/local/sbin/rotate_nginx_log.sh

In the final step, we will create a crontab entry to run the script we just created.

sudo crontab -e

In this file, let's create a cron job that runs every day at 1 AM.

Add the following line to the file:

00 01 * * * /usr/local/sbin/rotate_nginx_log.sh > /dev/null 2>&1

We can also suppress the cron job's status email notifications; see Suppressing Cron Job Email Notifications.


Log Rotation With Logrotate

The logrotate application is a simple program to rotate logs.

sudo vim /etc/logrotate.d/nginx

Put this content inside, modifying the first line to match your Nginx log path:

/var/log/nginx/*.log {
    daily
    missingok
    rotate 30
    dateformat .%Y-%m-%d
    compress
    delaycompress
    notifempty
    create 640 nginx adm
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}
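Before waiting for the daily cron run, you can validate the configuration with logrotate itself (both flags are standard logrotate options):

# dry run: show what would be rotated without touching any files
logrotate -d /etc/logrotate.d/nginx

# or force an immediate rotation to test the whole pipeline
logrotate -f /etc/logrotate.d/nginx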

Wait 24 hours until cron.daily runs, then check whether there are any .gz files inside your logs directory. If you see some gzipped files, your Nginx rotation is working fine. :D



CURL usage

Here are some powerful features of curl that you may not have known about.

Use curl to grab all response headers with '-I'

$ curl -I "http://iallex.com:8080"
HTTP/1.1 200 OK
X-Powered-By: PHP/5.5.1
X-Pingback: http://iallex.com:8080/xmlrpc.php
Content-Type: text/html; charset=UTF-8
Date: Mon, 24 Mar 2014 02:04:01 GMT
Server: lighttpd/1.4.34

Send a header to the web server with '-H'

  • The curl command supports the -H or --header option to pass an extra HTTP header when getting a web page from a web server.
  • This option can be used multiple times to add/replace/remove headers; the syntax is:

curl -H 'HEADER-1' -H 'HEADER-2' ... <URL>

E.g., check whether the Apache node at 10.237.110.22:8080 is working:

curl -I -H 'Host: iallex.com' 'http://10.237.110.22:8080/'

Checking gzip/deflate server responses with curl

Curl provides a simple tool for checking server responses.

First, a few curl arguments that will come in handy:

-s, --silent prevents curl from showing the progress meter

-w, --write-out 'size_download=%{size_download}\n' instructs curl to print out the download size

-o, --output instructs curl to throw away the output, sending it to /dev/null

Using these arguments, we can make a simple request for a path on the server:

curl -s -w "size_download=%{size_download}\n" -o /dev/null \
     "http://code.jquery.com/jquery-2.1.0.min.js"
size_download=83615

Here, you can see the response was 83615 bytes. Next up, let's make the same request, this time adding the Accept-Encoding header to ask for compressed content.

curl -s -w "size_download=%{size_download}\n" -o /dev/null \
     -H "Accept-Encoding: gzip,deflate" \
     "http://code.jquery.com/jquery-2.1.0.min.js"
size_download=34151

Nice! This downloaded only 34151 bytes of data, so the data is definitely being compressed. Next, let's make the request a third time, now as an HTTP/1.0 request.

curl -s -w "size_download=%{size_download}\n" -o /dev/null \
     -H "Accept-Encoding: gzip,deflate" \
     --http1.0 \
     "http://code.jquery.com/jquery-2.1.0.min.js"
size_download=83615

This time, the response is the same size as the first request, which makes sense when using Nginx with the gzip_http_version directive (http://wiki.nginx.org/NginxHttpGzipModule#gzip_http_version) set to 1.1: gzip is then disabled for HTTP/1.0 requests.

Specify the user name and password to use for server authentication

Referenced from the manual:

-u, --user

Specify the user name and password to use for server authentication. Overrides -n,
--netrc and --netrc-optional.

If you simply specify the user name, curl will prompt for a password.

The user name and password are split on the first colon, which makes it impossible to use a colon in the user name with this option. The password still can contain one.

curl -u allex:d9e871f "http://10.200.11.128:7890?do=syncx"
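To keep the password out of shell history and ps output, curl can also read the credentials from a ~/.netrc file (a sketch using the same host and user as above):

# store the credentials once, readable only by the owner
cat > ~/.netrc <<'EOF'
machine 10.200.11.128 login allex password d9e871f
EOF
chmod 600 ~/.netrc

# -n tells curl to look the credentials up in ~/.netrc
curl -n "http://10.200.11.128:7890?do=syncx"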


Git: squashing commits with rebase

git rebase - Forward-port local commits to the updated upstream head

We can use git rebase --interactive (or -i) mode to squash multiple commits.

git rebase -i HEAD~2

pick b76d157 b
pick a931ac7 c

Changing b's pick to squash will result in an error (the first commit in the list has nothing earlier to squash into), but you can squash c into b by changing the text to:

pick b76d157 b
s a931ac7 c
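After saving and closing the editor, Git combines the two commits and opens the editor again for the merged commit message. An end-to-end sketch:

git rebase -i HEAD~2        # change the second 'pick' to 'squash' (or 's'), save and quit
# ...editor opens once more for the combined commit message...
git log --oneline -n 3      # verify: a single commit now replaces the former two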


Get the absolute path of a shell script

When coding a shell script, we normally get the directory of the script file like this:

sh_dir=`cd -P -- "$(dirname "$0")" && pwd -P`

But if the script is run via source, as in source foo.sh, we may get an unexpected result.

Here is a safer and more readable way to do this job:

# get current script directory
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )

# get current executing script full path
sh_path=$( unset CDPATH && cd "$(dirname "$0")" && echo $PWD/$(basename "$0") )

Notes:

  • If $0 is a bare filename with no preceding path, the original script will fail but the one given here will work. (Not a problem with $0 but could be in other applications.)
  • Either approach will fail if the path to the file doesn’t actually exist. (Not a problem with $0, but could be in other applications.)
  • The unset is essential if your user may have CDPATH set.
  • Unlike readlink -f or realpath, this will work on non-Linux versions of Unix (e.g., Mac OS X).

A shorter (less defensive) variant:

DIR=$(cd `dirname "${BASH_SOURCE[0]}"` && pwd)/

Using ${BASH_SOURCE[0]} instead of $0 produces the same behaviour regardless of whether the script is invoked as name.sh or as source name.sh.
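A tiny demo makes the difference visible (a sketch; save it as demo.sh):

#!/bin/bash
# demo.sh -- compare $0 with ${BASH_SOURCE[0]}
echo "\$0             = $0"
echo "BASH_SOURCE[0]  = ${BASH_SOURCE[0]}"

# ./demo.sh        -> both lines print ./demo.sh
# source ./demo.sh -> $0 prints the shell (e.g. bash or -bash),
#                     while BASH_SOURCE[0] still prints ./demo.sh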

For more details, look up the difference in Linux between source and sh: source executes the script in the current shell process, while sh runs it in a new child shell.