Tag Archives: shell

Bash array tutorial

Like other advanced programming languages, Bash also has array data structures. Some basic array tutorials can be found in The Ultimate Bash Array Tutorial with 15 Examples.

declare -a Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');

The declare -a keyword can also be omitted.

Here are some more tips for day-to-day shell scripting:

Declaring an array more simply

An array is created automatically whenever a variable is assigned in the form name[index]=value.
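For instance, a minimal sketch (the element values here are just examples):

```shell
Unix[0]='Debian'    # no declare needed; this creates the array
Unix[1]='Red hat'
echo "${Unix[1]}"   # => Red hat
```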


Length of the Array vs Length of the nth Element

We can get the length of an array using the ${#...} parameter expansion.

${#arrayname[@]} gives you the length of the whole array.

But if the @ sign is replaced with the index of an element (starting at 0), you get the length of that element instead. If the [index] is omitted, it defaults to the first element of the array (index 0).

echo ${#Unix[@]} # Number of elements in the array => 7
echo ${#Unix}    # Number of characters in the first element (index 0), i.e. Debian => 6
echo ${#Unix[1]} # Length of the element at index 1, 'Red hat' => 7

Difference between @ and * when referencing array values

This Bash guide says:

If the index number is @ or *, all members of an array are referenced.

LIST=(1 2 3)
for i in "${LIST[@]}"; do
  echo "example.$i"
done
Gives: example.1 example.2 example.3 (desired result).

But if you use "${LIST[*]}", the loop runs only once and prints example.1 2 3 instead.

When using echo without quotes, @ and * actually give the same results:

echo ${LIST[@]}
echo ${LIST[*]}

both echos get the desired result: 1 2 3

The difference is subtle; $* creates one argument, while $@ will expand into separate arguments, so:

for i in "${LIST[@]}"

will treat the list as multiple arguments (printing each one), while


for i in "${LIST[*]}"

will treat the list as a single argument.
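The difference is easiest to see with printf, which repeats its format string once per argument:

```shell
LIST=(1 2 3)
printf '[%s]' "${LIST[@]}"; echo   # three arguments: [1][2][3]
printf '[%s]' "${LIST[*]}"; echo   # one argument:    [1 2 3]
```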

Read Content of a File into an Array

You can load the content of a file into an array with cat, for example:

$ cat loadcontent.sh

filecontent=( `cat "logfile"` )  # note: splits on any whitespace, not just newlines
for t in "${filecontent[@]}"; do
  echo $t
done
echo "Read file content!"

You can also use read for more flexible parsing in a loop, such as reading columns:

var="one two three"
read -r col1 col2 col3 <<< "$var"
printf "col1: %s, col2: %s, col3 %s\n" "$col1" "$col2" "$col3"

Dump the last column value of each line

while read -r -a line; do
  i=$((${#line[@]} - 1));
  [ $i -eq -1 ] || echo "${line["$i"]}";
done <~/.ssh/config
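If bash arrays are overkill here, the same result can be sketched with awk, which prints the last field of each non-empty line (sample input shown inline):

```shell
printf 'Host myhost\n\nUser apps\n' | awk 'NF {print $NF}'
# => myhost
#    apps
```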

Parse predefined config entries in ~/.ssh/config, e.g.:

$ cat ~/.ssh/config
#def USER_NAME apps
#def HOST_PREFIX 10.200.51
Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
while read -r x k v; do
  if [ "$x" == "#def" ]; then
    echo "{$k/$v}";
  fi
done <~/.ssh/config

In the above example, the k/v pairs prefixed with #def are printed by the while loop.

Linux daily skills (continuous updating)

Cleanup process list

Kill processes that are zombie or stopped.

# kill the parents of zombie processes (a zombie cannot be killed directly;
# killing its parent lets init reap it)
ps -A -ostat,ppid | grep -e '[zZ]' | tail -n +2 | awk '{ print $2 }' | xargs kill -9

# cleanup stopped process list
ps -A -ostat,pid | grep -e '[T]' | tail -n +2 | awk '{ print $2 }' | xargs kill -9

Forcibly log out a user session

Tips: when a server has too many logged-in users and others cannot log in, the administrator can forcibly kick users out.

w lists the current logon sessions; then kill one with pkill -kill -t [tty]:

pkill -kill -t pts/2

Make Tab auto-completion case-insensitive

# If ~/.inputrc doesn't exist yet, first include the original /etc/inputrc so we don't override it
if [ ! -e ~/.inputrc ]; then echo "\$include /etc/inputrc" > ~/.inputrc; fi

# Add option to ~/.inputrc to enable case-insensitive tab completion
echo "set completion-ignore-case On" >> ~/.inputrc

Note: to make this change for all users, edit /etc/inputrc

Get the networking connection statistics

netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'

ESTABLISHED 3254   # data transfer state
FIN_WAIT_1 648
FIN_WAIT_2 581
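On newer distributions netstat may be absent; assuming ss from iproute2 is available, a similar aggregation can be sketched (the state is the first column of `ss -tan`, so we skip the header row):

```shell
# Count TCP connections by state using ss (iproute2)
if command -v ss >/dev/null; then
  ss -tan | awk 'NR > 1 {++S[$1]} END {for (a in S) print a, S[a]}'
fi
```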

Kill the process that bind the specific port

GistURL: https://gist.github.com/allex/8b399a93749703acd780


port="$1"
while [[ -z "${port}" ]]; do
    read -p "Please type the port you wanna kill (q to exit): " port
done
[ "${port}" = "q" ] && exit 0;

case "`uname`" in
    Linux*)  PID=`lsof -t -i:${port}` ;;
    Darwin*) PID=`lsof -i -n -P | grep ":${port} (LISTEN)" | awk '{print $2}'` ;;
esac

if [ -n "$PID" ]; then
    kill -9 $PID && echo "process with pid: ${PID} on port ${port} was killed successfully!"
else
    echo "No process listening on port '${port}' found. exit";
fi

Usage: ./killport.sh [port]

Find IP address in shell

# works on Linux; `hostname -I` is not available on OS X
ip=`hostname -I | cut -d' ' -f1`

# or parse it from ifconfig output (Linux-style output assumed)
ip=`ifconfig | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p' | grep -v '127.0.' | head -n 1`


# example: query a service on the detected IP
ip=`ifconfig | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p' | grep -v '127.0.' | head -n 1`
wget -q -O - http://${ip}:8092/yn/build.hash --header='Host: st.comm.miui.com' | base64 -d

Compare differences between directories

cp -R $local $bak                  # keep a backup of the local copy
rsync $server:$remdir/ $local/     # pull remote changes into local
rsync $local/ $server:$remdir/     # or push local changes to the server
diff -wur $local $bak              # compare the updated copy against the backup

Use cron job to cleanup log files

A Linux system generates various kinds of logs and temporary files in /var/log/ and /tmp. How can we clean these files up automatically?

Using tmpwatch to automate temporary file cleanup

First we need to install the third-party tool tmpwatch:

yum install tmpwatch -y

Once tmpwatch is installed, run:

/usr/sbin/tmpwatch -am 12 /tmp

This will delete all files in /tmp over 12 hours old.

Next, we will configure the server to do this automatically.

From an SSH session, type: crontab -e

Go to the very bottom and paste:

0 4 * * * /usr/sbin/tmpwatch -am 12 /var/log

For a more complete daily-job script:

$ cat /etc/cron.daily/tmpwatch

/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
    -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix 240 /tmp
/usr/sbin/tmpwatch "$flags" 720 /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
  if [ -d "$d" ]; then
    /usr/sbin/tmpwatch "$flags" -f 720 "$d"
  fi
done

-x marks an entry to be excluded from the cleanup operation.

Using a shell script to do the same thing if tmpwatch is not available

find /var/log -type f -name "*.tmp" -exec rm {} \+

Normally we would execute find /path -name "*.tmp" -exec rm {} \;, but that runs rm once per file, which is slow when there are many files. The \+ form instead batches file names into as few rm invocations as the system argument limit allows (see getconf ARG_MAX). Piping to xargs with the -L option achieves similar batching with an explicit limit per invocation.

Also configure as a cron job to run automatically.

find /var/log -type f -mtime +12 -print0 | xargs -0 -L 5000 rm
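Putting the two together, a daily cron entry (added via crontab -e) might look like this; the schedule and path are illustrative:

```shell
# At 04:30 every day, delete /var/log files older than 12 days
30 4 * * * find /var/log -type f -mtime +12 -print0 | xargs -0 -r rm
```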


CURL usage

Here are some powerful features of curl you may not have known before.

Use curl to grab all response headers with ‘-I’

$ curl -I "http://iallex.com:8080"
HTTP/1.1 200 OK
X-Powered-By: PHP/5.5.1
X-Pingback: http://iallex.com:8080/xmlrpc.php
Content-Type: text/html; charset=UTF-8
Date: Mon, 24 Mar 2014 02:04:01 GMT
Server: lighttpd/1.4.34

Send a header to web server with ‘-H’

  • The curl command supports the -H or --header option to pass an extra HTTP header when getting a web page from a web server.
  • This option can be used multiple times to add/replace/remove multiple headers; the syntax is:

curl -H 'HEADER-1' -H 'HEADER-2' ... <URL>

E.g., check whether an Apache node is working or not:

curl -I -H 'Host: iallex.com' ''

Checking gzip/deflate server responses with curl

Curl provides a simple tool for checking server responses.

First, a few curl arguments that will come in handy:

-s, --silent prevents curl from showing progress meter

-w, --write-out 'size_download=%{size_download}\n' instructs curl to print out the download size

-o, --output instructs curl to throw away the output, sending it to /dev/null

Using these arguments, we can make a simple request for a path on the server:

curl -s -w "size_download=%{size_download}\n" -o /dev/null \

Here, you can see the response was 83615 bytes. Next up, let's make the same
request, this time adding the Accept-Encoding header to ask for compressed content:

curl -s -w "size_download=%{size_download}\n" -o /dev/null \
     -H "Accept-Encoding: gzip,deflate" \

Nice! This downloaded only 34151 bytes of data, so the data is definitely
being compressed. Next, let's make the request a third time, now as an
HTTP/1.0 request (curl's -0 flag):

curl -0 -s -w "size_download=%{size_download}\n" -o /dev/null \
     -H "Accept-Encoding: gzip,deflate" \

This time, the response is the same size as the first request, which makes sense when using Nginx
with [gzip_http_version](http://wiki.nginx.org/NginxHttpGzipModule#gzip_http_version) set to 1.1: gzip is disabled for HTTP/1.0 requests.

Specify the user name and password to use for server authentication

Reference from manual

-u, --user

Specify the user name and password to use for server authentication. Overrides -n,
--netrc and --netrc-optional.

If you simply specify the user name, curl will prompt for a password.

The user name and password are split at the first colon, which makes it
impossible to use a colon in the user name with this option. The password still can.

curl -u allex:d9e871f ""


Get the absolute path of a shell script

When writing a shell script, we normally get the directory of the script file like:

sh_dir=`cd -P -- "$(dirname "$0")" && pwd -P`

But if the script is run via source, as in source foo.sh, we may get an unexpected result, because $0 then names the invoking shell rather than the script.

Here is a safer and more readable way to do this job:

# get current script directory
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )

# get current executing script full path
sh_path=$( unset CDPATH && cd "$(dirname "$0")" && echo $PWD/$(basename "$0") )


  • If $0 is a bare filename with no preceding path, the original script will fail but the one given here will work. (Not a problem with $0 but could be in other applications.)
  • Either approach will fail if the path to the file doesn’t actually exist. (Not a problem with $0, but could be in other applications.)
  • The unset is essential if your user may have CDPATH set.
  • Unlike readlink -f or realpath, this will work on non-Linux versions of Unix (e.g., Mac OS X).
A variant that keeps a trailing slash:

DIR=$(cd `dirname "${BASH_SOURCE[0]}"` && pwd)/

Using ${BASH_SOURCE[0]} instead of $0 produces the same behaviour
regardless of whether the script is invoked as

name.sh or source name.sh
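A quick way to observe this difference, using a hypothetical throwaway script at /tmp/name.sh:

```shell
# Create a small script that reports both variables
cat > /tmp/name.sh <<'EOF'
echo "\$0=$0  BASH_SOURCE=${BASH_SOURCE[0]}"
EOF
bash /tmp/name.sh     # $0 and BASH_SOURCE both point at /tmp/name.sh
source /tmp/name.sh   # $0 is the current shell's name; BASH_SOURCE is still the file
```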

For more details, read up on the difference in Linux between ‘source’ and ‘sh’.

libpcre.so.1: cannot open shared object file: No such file or directory

After installing nginx, I got an error message when launching it as `/opt/nginx/sbin/nginx`:

/opt/nginx/sbin/nginx: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory

Still, I'm sure I've installed the latest pcre, and I can find libpcre.so.1 with:

find /usr/ -name "libpcre.so.1"

OK, so why can't it find libpcre.so.1 in `/usr/local/lib`??

strace /opt/nginx/sbin/nginx

So how does the dynamic loader know where to look for executables? As with many things on Linux, there is a configuration file in /etc. In fact, there are two configuration files, /etc/ld.so.conf and /etc/ld.so.cache. Note that /etc/ld.so.conf specifies that all the .conf files from the subdirectory ld.so.conf.d should be included.
Dynamic library configuration

ldconfig -p | grep "libpcre.so.1"

No matches found.

So the problem is that the dynamic loader does not search my lib dir /usr/local/lib.

So how do we make it use shared libraries in /usr/local/lib?

For the current session you can

export LD_LIBRARY_PATH=/lib:/usr/lib:/usr/local/lib

or to make the change permanent you can add /usr/local/lib to /etc/ld.so.conf (or something it includes) and run `ldconfig` as root.

If that makes sense to you, keep reading; if not, read about ldconfig first.

After adding the path and re-running ldconfig, grep again:

ldconfig -p | grep "libpcre.so.1"
libpcre.so.1 (libc6,x86-64) => /usr/local/lib/libpcre.so.1

OK, got it!
Run /opt/nginx/sbin/nginx again and it starts fine.

That's all.

Here are some keywords to read up on for the details: `strace`, `ldconfig`, `/etc/ld.so.conf`, `/etc/ld.so.cache`.

How to: Check the bash shell script is being run by root or not

Sometimes it is necessary to find out whether a shell script is being run as the root user or not.

When a user account is created, a user ID is assigned to it. The BASH shell stores the user ID in the $UID variable, and your effective user ID in the $EUID variable. You can use either to check for root.

Old way…

You can easily add a simple check at the start of a script:


# Make sure only root can run our script
if [ "$(id -u)" != "0" ]; then
   echo "This script must be run as root" 1>&2
   exit 1

New way: Using EUID

# Make sure only root can run our script
if [[ $EUID -ne 0 ]]; then
   echo "This script must be run as root" 1>&2
   exit 1
fi

Example: Mount /dev/sdb1 only if you are root


if [[ $EUID -ne 0 ]]; then
  echo "You must be a root user" 1>&2
  exit 1
fi
mount /dev/sdb1 /mnt/disk2

Reference from http://www.cyberciti.biz/tips/shell-root-user-check-script.html

The final version: a checkRoot function

checkRoot() {
    if [ "x$EUID" = "x" ] ; then
      EUID=`id -u`
    fi
    if [ "$EUID" != 0 ] ; then
      root=f
      case "`uname 2>/dev/null`" in
        CYGWIN*)
          # Cygwin: Assume root if member of admin group
          for g in `id -G 2>/dev/null` ; do
            case $g in
              0|544) root=t ;;
            esac
          done ;;
      esac
      if [ $root != t ] ; then
        echo "$self: You must run this as root" >&2
        exit 1
      fi
    fi
}

Backing up the MBR

Just another note about restoring the boot loader for dual boot systems,
after Windows messes it up. In Linux, the “dd” command can read and
write to/from raw disks and files. If you have a floppy drive, creating
a boot disk is as simple as putting a floppy in the drive and typing

$ su
<type password>
# dd if=/dev/hda of=/dev/fd0 bs=512 count=1

This makes an exact copy of the MBR of the first hard drive, copying it
to a floppy disk. You can boot directly from this floppy, and see your
old boot menu. You can restore it by switching the “if=” and “of=”
(input file, output file) parameters.

If you don’t have a floppy drive, you can back it up to a file with

# dd if=/dev/hda of=/home/john/boot.mbr bs=512 count=1

Then you can boot into a CD-ROM distribution such as Knoppix, or often
use your Linux distribution’s installation CD to boot into rescue mode,
and restore it with:

$ su
# dd if=/mnt/hda5/john/boot.mbr of=/dev/hda bs=512 count=1

(you’ll need to find and mount the partition containing the directory
where you backed up the MBR for the “if” parameter–this is an example).


John Locke

@see http://www.brunolinux.com/01-First_Things_To_Know/Backing_Up_the_MBR.html

@see http://www.brunolinux.com/01-First_Things_To_Know/Restoring_the_XP_MBR.html