Tag Archives: tips

Avoid Getting Redirected to Country-Specific Versions of Google

Some tips for getting more out of Google search.

When you’re in another country, Google.com usually redirects to a local version of the Google page—like Google.fr for France, or Google.de for Germany. Here’s how to easily visit the regular Google.com in another country.

All you need to do is head to www.google.com/ncr. The NCR stands for “No Country Redirect”, and it’ll take you back to the regular, English-speaking Google.com without all the local results. Note that it will redirect to Google.com, so if you don’t see the /ncr after you press Enter, that’s normal.

You can also use the pws=0 parameter to disable personalized results (e.g. google.com/#q=google&pws=0).

You can force another country's geolocation using the gl parameter (e.g. google.com/#q=google&gl=us).

Git: get back a commit from (no branch)

Sometimes `git branch` will show that you're not on any branch, as described here. This usually happens when you're using a submodule inside another project: you make some changes to the submodule, commit them, and then try to push them to a remote repository (for more details see here):

git branch
* (no branch)

But when you check out master, you'll lose the commits made on that detached "(no branch)" HEAD:

git branch
* master

So, how can we get them back?

As long as you haven't run a git gc, nothing is lost. All you need to do is find it again :) Start with:

git reflog # (the commit-ish will be on the first line)

That should show you what happened, and the id of the missing node(s).

To get back to it, just run git checkout <commit-ish> and your old pseudo-branch is restored. You can also merge the specific <commit-ish> into your master:

git checkout master
git merge <commit-ish>
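The whole recovery round trip can be sketched as a self-contained demo in a throwaway repository (the repo path, file names, and commit messages are made up for illustration; in real life you'd find the lost id via git reflog):

```shell
#!/bin/bash
# Sketch: simulate losing a detached-HEAD commit, then recover it by id
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo

echo a > file && git add file && git commit -qm "initial"
branch=$(git symbolic-ref --short HEAD)

git checkout -q --detach HEAD              # now "git branch" shows "(no branch)"
echo b >> file && git commit -qam "detached work"
lost=$(git rev-parse HEAD)                 # in real life: look this up in git reflog

git checkout -q "$branch"                  # the detached commit is now unreferenced
git merge -q "$lost"                       # ...but still recoverable by its id
git log --oneline -1                       # the "detached work" commit is back
```

Giving the commit a real name with `git branch recovered <commit-ish>` works just as well if you'd rather keep it on its own branch.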

CentOS how to tips

How to remove RPM packages with several dependencies

On Fedora/CentOS, simply use the following command, but read carefully before answering y/N:

yum remove $(rpm -qa | grep PACKAGENAME)

  • Replace PACKAGENAME with your package name
  • To disable plugins, add --disableplugin=PLUGIN-NAME
  • If you can’t access the Internet, add --disablerepo=* to the line above

Find out what files are in my rpm package

Use the following syntax to list the files of an already INSTALLED package:

The -v (verbose) option gives you more information on the files when used with the various query options.

rpm -ql package-name

Use the following syntax to list the files of an RPM package file (not yet installed):

rpm -qlp package-file.rpm

Type the following command to list the files for gitlab*.rpm package file:

rpm -qlp gitlab-7.1.1_omnibus-1.el6.x86_64.rpm

See also: HowTo: Extract an RPM Package Files Without Installing It

Update yum repositories for CentOS, RHEL Systems

Get the latest RPMforge repo release package matching your host’s architecture:

# CentOS/RHEL 6, 64 Bit (x86_64):
rpm -Uvh http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

Then run yum update yum-updatesd to bring it up to date.

Change CentOS language

vi /etc/sysconfig/i18n

Check that LANG is set to your expected locale, such as:

LANG="en_US.UTF-8"  <<-----

Then log out and back in with your user/passwd, and verify with the locale command.

Yum install/update with specific repository

# update git with rpmforge-extras repository
yum --disablerepo=base,updates --enablerepo=rpmforge-extras update git

vimdiff ignore white space

To ignore whitespace while using vimdiff:

set diffopt+=iwhite

From the command line use:

vimdiff -c 'set diffopt+=iwhite' ...

To have vimdiff ignore whitespace while normal vim doesn’t, simply put this into your .vimrc:

if &diff
    " diff mode
    set diffopt+=iwhite
endif

After upgrade to OSX Mavericks

After upgrading to OS X Mavericks and reinstalling Xcode, building a project may fail with an error like this:

“Agreeing to the Xcode/iOS license requires admin privileges, please re-run as root via sudo.”

– What?

Ah! OK, Xcode was obviously re-installed with OS X Mavericks.

sudo xcodebuild -license

This allowed me to view the Xcode license, and then agree to the terms. Voila, that’s it. Everything worked just fine after that.

Linux daily skills (continuous updating)

Cleanup process list

Kill processes that are zombies or stopped.

# Zombies can't be killed directly: list each zombie's parent PID (ppid)
# and kill the parent so init can reap the zombie
ps -A -ostat,ppid | grep -e '[zZ]' | tail -n +2 | awk '{ print $2 }' | xargs kill -9

# cleanup stopped process list
ps -A -ostat,pid | grep -e '[T]' | tail -n +2 | awk '{ print $2 }' | xargs kill -9
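Before wiring kill -9 into the pipeline, it's worth dry-running the selection logic. Here is the same grep/tail/awk chain on canned ps output (the STAT/PID values are made up), with echo substituted for kill:

```shell
# Canned `ps -A -ostat,pid` output: a header, one running and two stopped processes.
# grep '[T]' also matches the "STAT" header line, which tail -n +2 then drops.
printf 'STAT   PID\nSs       1\nT       77\nT       88\n' |
  grep -e '[T]' | tail -n +2 | awk '{ print $2 }' | xargs echo kill -9
# prints: kill -9 77 88
```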

Mandatory logged out user session

Tip: when too many user sessions pile up on a server and others can't log in, an administrator can forcibly kick users out.

Run w to list current login sessions, then kill one with pkill -kill -t [tty]:

pkill -kill -t pts/2

Make Tab auto-completion case-insensitive

# If ~/.inputrc doesn't exist yet, first include the original /etc/inputrc so we don't override it
if [ ! -e ~/.inputrc ]; then echo "\$include /etc/inputrc" > ~/.inputrc; fi

# Add option to ~/.inputrc to enable case-insensitive tab completion
echo "set completion-ignore-case On" >> ~/.inputrc

Note: to make this change for all users, edit /etc/inputrc

Get the networking connection statistics

netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'

ESTABLISHED 3254   # data transfer state
FIN_WAIT_1 648
FIN_WAIT_2 581
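The awk idiom tallies the last field of each tcp line, which is the connection state. You can see it in isolation on canned netstat-style lines (the addresses are made up):

```shell
printf '%s\n' \
  'tcp  0  0 10.0.0.1:80 10.0.0.2:5001 ESTABLISHED' \
  'tcp  0  0 10.0.0.1:80 10.0.0.3:5002 TIME_WAIT' \
  'tcp  0  0 10.0.0.1:80 10.0.0.4:5003 ESTABLISHED' |
  awk '/^tcp/ {++S[$NF]} END {for (a in S) print a, S[a]}'
# prints "ESTABLISHED 2" and "TIME_WAIT 1" (in unspecified order)
```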

Kill the process that bind the specific port

GistURL: https://gist.github.com/allex/8b399a93749703acd780


#!/bin/bash
port="$1"

while [[ -z "${port}" ]]; do
    read -p "Please type the port you wanna kill (q to exit): " port
done
[ "${port}" = "q" ] && exit 0

case "`uname`" in
    Linux*)  PID=`lsof -t -i:${port}` ;;
    Darwin*) PID=`lsof -i -n -P | grep ":${port} (LISTEN)" | awk '{print $2}'` ;;
esac

if [ -n "$PID" ]; then
    kill -9 $PID && echo "process with pid ${PID} on port ${port} was killed successfully"
else
    echo "No process listening on port '${port}' found. exit"
fi

Usage: ./killport.sh [port]

Find IP address in shell

# works on Linux (hostname -I is not available on OS X)
ip=`hostname -I | cut -d' ' -f1`

# or parse ifconfig output on Linux
ip=`ifconfig | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p' | grep -v '127.0.' | head -n 1`


# e.g. use the resolved IP with a custom Host header
ip=`ifconfig | sed -n 's/.*inet addr:\([0-9.]\+\)\s.*/\1/p' | grep -v '127.0.' | head -n 1`
wget -q -O - "http://${ip}:8092/yn/build.hash" --header 'Host: st.comm.miui.com' | base64 -d

Compare differences between directories

# $local, $bak, $server and $remdir are placeholders for your own paths
cp -R $local $bak                # keep a backup of the local copy
rsync $server:$remdir/* $local/  # pull remote files into local
rsync $local/ $server:$remdir/   # or push local files to remote
diff -wur $local $bak            # compare local against the backup, ignoring whitespace
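A self-contained version of the same idea, using throwaway directories instead of a remote server (the paths and file contents are invented for the demo):

```shell
# Snapshot a directory, change it, then diff the snapshot against the result
src=$(mktemp -d) && bak=$(mktemp -d)
echo "hello" > "$src/a.txt"
cp -R "$src/." "$bak/"               # keep a backup (stands in for $bak)
echo "world" >> "$src/a.txt"         # simulate changes pulled from the server
diff -wur "$bak" "$src" || true      # diff exits non-zero when files differ
```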

Use cron job to cleanup log files

Linux systems generate various logs and temporary files under /var/log/ and /tmp. How can we clean these files up automatically?

Using tmpwatch to automate temporary file cleanup

First we need to install the third-party tool tmpwatch:

yum install tmpwatch -y

Once tmpwatch is installed, run:

/usr/sbin/tmpwatch -am 12 /tmp

This deletes all files under /tmp that are more than 12 hours old.

Next, we'll configure the server to do this automatically.

From SSH, type: crontab -e

Go to the very bottom and paste:

0 4 * * * /usr/sbin/tmpwatch -am 12 /var/log

For a more complete daily job, see the stock script:

$ cat /etc/cron.daily/tmpwatch

/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \
    -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix 240 /tmp
/usr/sbin/tmpwatch "$flags" 720 /var/tmp
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do
  if [ -d "$d" ]; then
    /usr/sbin/tmpwatch "$flags" -f 720 "$d"
  fi
done

The -x option excludes an entry from the cleanup operation.

Using a shell script to do the same thing if tmpwatch is not available

find /var/log -type f -name "*.tmp" -exec rm {} \+

Normally we could execute find /path -name "*.tmp" -exec rm {} \;, but that spawns one rm process per file, which is slow. The \+ form batches many files into each rm invocation while staying within the kernel's argument-list limit (getconf ARG_MAX). Piping to xargs gives explicit control over the batch size via its -L (or -n) option.

Also configure as a cron job to run automatically.

find /var/log -type f -mtime +12 -print0 | xargs -0 -L 5000 rm
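To see the batching behaviour without touching real files, substitute echo for rm (a tiny made-up example; -n caps arguments per invocation, the way -L caps input lines):

```shell
# Five null-terminated names, at most two per (echoed) rm invocation
printf 'a\0b\0c\0d\0e\0' | xargs -0 -n 2 echo rm
# prints:
# rm a b
# rm c d
# rm e
```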

Git recover from git reset --hard

How to recover uncommitted changes to the working directory from a

git reset --hard HEAD?

You can try git fsck --lost-found to see if your changes are still among the dangling objects:

$ git fsck --lost-found
Checking object directories: 100% (256/256), done.
Checking objects: 100% (54/54), done.
dangling blob 15f9af8379f13672ca0e75d56df100edfd67fe6b
dangling commit 18fc9548f20eb8938dde68ab4a3dd0b7a0212dc3
dangling commit 33a832866e3855e300504ea6b584732e9c3c286c
dangling blob 568ca393d5e21cdc9eda2824111a5429a70d5113
dangling blob 89cdac4d3fc03546b5ab485aa8a9905b34702a4a
dangling blob abf03d6c84484a2b096a4d7f0ee5a85361f8a3d6 <- it's this one
dangling commit bc05be5eac21134b63ca51fbd20fee5c8782a640
dangling commit c0fa59cfaa0bad5f8ca8a1a845ba1673bb207b2d
dangling commit d140d6f693d8ef83d040d483bec3db95db084cd9
dangling blob e9c3eb31aa0589ab59f46630f7926681f7a14476  <- it's this one

Then inspect a dangling blob with git show:

git show e9c3eb31aa0589ab59f46630f7926681f7a14476

This gives you back the file content that the reset threw away.

To find unreferenced commits, I found a tip somewhere suggesting this:

gitk --all $(git log -g --pretty=format:%h)

I also found them in the <path to repo>/.git/lost-found/ directory. From there, I could see the uncommitted files, copy out the blobs, and rename them.

Note: This only works if you added the files you want to save to the index (using git add .). If the files weren’t in the index, they are lost.
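The round trip can be sketched end to end in a throwaway repository (paths, file names, and contents are invented for the demo):

```shell
#!/bin/bash
# Stage a file, lose it to reset --hard, recover it from the dangling blob
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo

echo base > base.txt && git add base.txt && git commit -qm "initial"
echo "precious uncommitted work" > draft.txt
git add draft.txt                  # indexed, but never committed
git reset -q --hard HEAD           # the working-copy change is gone...

blob=$(git fsck --lost-found | awk '/dangling blob/ {print $3; exit}')
git show "$blob"                   # ...but the blob content survives
```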

Rotate Nginx log files

Nginx is a great web server, however a default install will not rotate log files for you: there's no size limit, so they keep growing until your disk is full.

This is a problem especially on busy sites, as the access log can eat up disk space quite quickly.

In this tutorial, I will show you how to rotate nginx log files automatically. My version is nginx/1.4.3, but any modern distribution should work in a similar way.

Manually rotating nginx log files via cron

First we need to create the job bash script for cron that will do the log rotation.

sudo vi /usr/local/sbin/rotate_nginx_log.sh

Here are the contents of the script (this is based off the example from the Nginx wiki):

#!/bin/bash
# <https://gist.github.com/allex/10360845>

# Set variables (adjust the paths to your setup)
logs_path="/var/log/nginx"
old_logs_path="${logs_path}/old"
nginx_pid=`cat /var/run/nginx.pid`
time_stamp=`date -d "yesterday" +%Y-%m-%d`

mkdir -p ${old_logs_path}

# Main
for file in `ls $logs_path | grep log$ | grep -v '^20'`; do
    dst_file="${old_logs_path}/${time_stamp}_$file"
    if [ ! -f $dst_file ]; then
        mv $logs_path/$file $dst_file
        gzip -f $dst_file  # compress the rotated log
    fi
done

kill -USR1 $nginx_pid


First, we move the current log to a new file for archiving. A common scheme is to name the most recent log file with a suffix of current time stamp. e.g, $(date "+%Y-%m-%d").

The command that actually rotates the logs is kill -USR1 $(cat /var/run/nginx.pid). This does not kill the Nginx process, but instead sends it a SIGUSR1 signal causing it to re-open its logs.

Then execute sleep 1 to give the process time to finish switching over. We can then gzip the old files or run whatever post-rotation processing we like.

Next please make sure that the script file is executable by running

chmod +x /usr/local/sbin/rotate_nginx_log.sh

In our final step we will create a crontab file to run the script we just created.

sudo crontab -e

In this file, create a cron job that runs every day at 1 AM by adding the following line:

00 01 * * * /usr/local/sbin/rotate_nginx_log.sh &> /dev/null

We can also suppress the cron job's status email notifications; see Suppressing Cron Job Email Notifications.

Log Rotation With Logrotate

The logrotate application is a simple program to rotate logs.

sudo vim /etc/logrotate.d/nginx

Put this content inside, modifying the first line to match your nginx log file path:

/var/log/nginx/*.log {
    daily
    rotate 30
    compress
    dateext
    dateformat .%Y-%m-%d
    create 640 nginx adm
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}

Wait 24 hours until cron.daily runs, then check whether any .gz files appear in your logs directory. If you see some gzipped files, your nginx rotation is working fine :D

How to get the URL hash state safely

Normally, we can get the hash string from location.hash. But I found that the value is not always correct in Firefox: Firefox automatically decodes an encoded hash string in the URL.

So the safe method is to avoid location.hash, and use location.href.split('#')[1] instead.

Indeed location.href.split('#!')[1] does not get decoded by FF automatically (at least today).

var currentUrl = '';
var getHashPath = function() {
    return location.href.split('#!')[1];
};
$(window).on('hashchange', function(e) {
    var url = getHashPath();
    if (url !== currentUrl) {
        currentUrl = url;
        // do some business logic...
    }
});