Git Basics: branch cleanup

Our projects are usually developed on feature branches, so as the development cycle lengthens and features keep iterating, the remote repository accumulates a large number of branches.
Ideally, when a feature is finished and submitted for testing, the Merge Request is merged with the Delete source-branch option ticked, which prevents a pile-up of useless branches. This post records how to clean up branches manually:

Delete a branch both locally and remotely

To remove a local branch from your machine:

git branch -d <branch_name>

NOTE: If using -d (lowercase d), the branch will only be deleted if it has been merged. To force the delete to happen, you will need to use -D (uppercase D).
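A quick sandbox run makes the difference concrete (throwaway repo; the branch names are invented for the demo):

```shell
# Demo in a throwaway repo: -d only deletes merged branches, -D forces.
cd "$(mktemp -d)" && git init -q .
git config "t" && git config "t@e.c"
git commit -q --allow-empty -m init
git branch merged-work                  # same commit as HEAD, so it is "merged"
git branch -d merged-work               # succeeds
git checkout -q -b wip && git commit -q --allow-empty -m wip
git checkout -q -                       # back to the original branch
git branch -d wip 2>/dev/null || echo "refused"   # -d refuses: wip is unmerged
git branch -D wip                       # -D deletes it anyway
```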

To remove a remote branch:

git push origin :<branch_name>

The colon refspec is not an obvious way to express a delete. As of Git 1.7.0, you can use this clearer alternative syntax to delete remote branches:

git push origin --delete <branch_name>

[Tips]: if you want to complete both these steps with a single command, you can create an alias for it by adding the following under the [alias] section of your ~/.gitconfig:

  rmbranch = "!f(){ git branch -d ${1} && git push origin --delete ${1}; };f"

Alternatively, you can add this to your global config from the command line using

git config --global alias.rmbranch '!f(){ git branch -d ${1} && git push origin --delete ${1}; };f'

Sync local refs with remote repository

git remote prune <upstream>

will remove any remote refs you have locally that have been removed from your remote.

You can also fetch and prune in one step with git fetch -p (or --prune), or with git remote update [upstream] -p

The -p argument prunes deleted upstream branches. Thus, if the foo branch is deleted in the origin repository, git remote update -p will automatically delete your origin/foo ref.
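You can watch this in action with two throwaway repositories (paths and branch names below are invented for the demo):

```shell
# Create an "upstream" repo with branch foo, clone it, delete foo upstream,
# then prune the stale origin/foo ref in the clone.
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git config "t" && git config "t@e.c"
git commit -q --allow-empty -m init && git branch foo
cd .. && git clone -q upstream clone && cd clone
git branch -r | grep foo                # origin/foo exists locally
cd ../upstream && git branch -D foo && cd ../clone
git fetch -q -p                         # prunes the deleted upstream branch ref
```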

Cleanup branches already merged

To delete local branches which have already been merged into master:

git checkout dev
git fetch -p # sync with the remote, pruning refs to branches deleted there
git branch --merged master | grep -vE '^\*|^[[:space:]]*master$' | xargs -n 1 git branch -d

Note: if the master argument is omitted from the command above, it lists the local branches that have already been merged into the current HEAD.

Next, remove all remote branches that have already been merged into dev:

git branch -r --merged origin/dev |grep -v '\*\|master\|dev' |sed 's#\s*origin/##' |xargs -n 1 echo

Make sure the listed branches are the ones you expect, then delete them from the remote by replacing the echo with git push origin --delete

An alternative one-liner for the cleanup:

git push origin $(git branch -r --merged origin/master | sed "s/origin\\//:/" | egrep -v "HEAD|master|dev")

Note: please make sure you understand what these commands do before you execute them.

Now we can easily write a bash script (or ruby, or whatever) that goes through (maybe as a cron job) and deletes merged branches.
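As a sketch of such a script (assuming an origin remote and a master base branch; swap in your own names), wrapped in a throwaway repo so it can be run safely:

```shell
# Delete local branches already merged into $base; never touch $base itself.
cleanup_merged() {
  base=${1:-master}
  git fetch -p origin 2>/dev/null || true          # tolerate repos with no remote
  git branch --merged "$base" \
    | grep -vE "^\*|(^|[[:space:]])${base}\$" \
    | while read -r b; do git branch -d "$b"; done
}

# sandbox demo
cd "$(mktemp -d)" && git init -q .
git config "t" && git config "t@e.c"
git commit -q --allow-empty -m init && git branch -M master
git branch old-feature                             # merged by definition (same commit)
cleanup_merged master
```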


Bash array tutorial

Like other advanced programming languages, Bash has array data structures. Some basic array tutorials can be found in The Ultimate Bash Array Tutorial with 15 Examples

declare -a Unix=('Debian' 'Red hat' 'Ubuntu' 'Suse' 'Fedora' 'UTS' 'OpenLinux');

The declare -a keyword can also be omitted.

Here are some tips that come up in real day-to-day shell scripts:

Declaring arrays more simply

An array is created automatically the first time a variable is assigned to with a subscript, in the form name[index]=value.
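For example (the distro names are just sample data):

```shell
# Assigning to a subscript creates the array implicitly -- no declare needed
Unix[0]='Debian'
Unix[1]='Red hat'
echo "${Unix[1]}"    # → Red hat
echo "${#Unix[@]}"   # → 2
```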


Length of the Array vs Length of the nth Element

We can get the length of an array using the ${#...} parameter expansion.

${#arrayname[@]} gives you the length of the whole array.

But if the @ sign is replaced with an index n, you get the length of the nth element instead. If the [n] is omitted entirely, it defaults to index 0, the first element of the array.

echo ${#Unix[@]} # Number of elements in the array => 7
echo ${#Unix}    # Number of characters in the first element (index 0), i.e. 'Debian' => 6
echo ${#Unix[1]} # Length of the second element 'Red hat' => 7

Difference between @ and * when referencing array values

This Bash guide says:

If the index number is @ or *, all members of an array are referenced.

LIST=(1 2 3)
for i in "${LIST[@]}"; do
  echo "example.$i "
done
Gives: example.1 example.2 example.3 (desired result).

But if you use ${LIST[*]}, the loop prints example.1 2 3 instead.

When using echo, however, @ and * give the same result:

echo ${LIST[@]}
echo ${LIST[*]}

Both print the desired result: 1 2 3

The difference is subtle; $* creates one argument, while $@ will expand into separate arguments, so:

for i in "${LIST[@]}"

iterates over the list as separate words, one loop pass per element, while

for i in "${LIST[*]}"

treats the whole list as a single word, giving one loop pass in total.
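Counting arguments with a small helper function makes the distinction concrete:

```shell
LIST=(1 2 3)
count_args() { echo $#; }    # prints the number of arguments it received
count_args "${LIST[@]}"      # → 3 (each element is a separate word)
count_args "${LIST[*]}"      # → 1 (all elements joined into one word)
```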

Read Content of a File into an Array

You can load the content of a file into an array using cat (note that without adjusting IFS, the content is split on any whitespace, not strictly line by line):

filecontent=( `cat "logfile"` )
for t in "${filecontent[@]}"; do
  echo $t
done
echo "Read file content!"

You can also use read for more flexible input handling, such as reading columns:

var="one two three"
read -r col1 col2 col3 <<< "$var"
printf "col1: %s, col2: %s, col3 %s\n" "$col1" "$col2" "$col3"

Dump the last column value of each line

while read -r -a line; do
  i=$((${#line[@]} - 1));
  [ $i -eq -1 ] || echo "${line["$i"]}";
done <~/.ssh/config
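A variant of the loop above prints the first field of each line instead; here it runs against a small generated sample file rather than a real ~/.ssh/config:

```shell
printf '%s\n' 'Host gw1' 'Host gw2' > /tmp/sample.conf   # sample input
firsts=()
while read -r -a line; do
  [ "${#line[@]}" -gt 0 ] && firsts+=("${line[0]}")      # keep the first field
done < /tmp/sample.conf
printf '%s\n' "${firsts[@]}"    # → Host (printed twice)
```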

Parse predefined config entries in ~/.ssh/config, like:

$ cat ~/.ssh/config
#def USER_NAME apps
#def HOST_PREFIX 10.200.51
Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p

while read -r x k v; do
  if [ "$x" == "#def" ]; then
    echo "{$k/$v}";
  fi
done <~/.ssh/config

In the example above, the key/value pairs prefixed with #def are printed by the while loop.

Nodejs memory leak detector

I wrote a simple module for tracking down memory leaks.

Create a standalone module

cat memoryUsageInfo.js

/**
 * Memory usage info detection.
 * @author Allex Wang
 */
module.exports = function() {
  // output memory usage info every 2 seconds
  var min = 0, last = 0, interval = 2 * 1000
  var pid =
  process.nextTick(function f() {
    var o = process.memoryUsage()
    var percent = ~~((o.heapUsed / o.heapTotal) * 100) + '%'
    if (!min || o.heapUsed < last) {
      min = o.heapUsed
      console.warn([pid, (min / 1048576) + 'M/' + (o.heapTotal / 1048576) + 'M', percent])
    }
    last = o.heapUsed
    setTimeout(f, interval)
  })
}

Create a simple web server for memory leak detections

var http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(7070);

console.log('Server running at http://localhost:7070/, process PID:',;


If we hit this continuously with

while true; do curl -s http://localhost:7070/; done

in one shell, and in another shell watch the process with top -pid <process pid>; you will see very high and erratic memory usage for this node process.
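If you would rather script the observation than watch top, you can sample a process's resident set size with ps (a sketch; ps -o rss reports kilobytes on Linux and macOS):

```shell
# Print a PID's resident memory in MB
mem_rss_mb() {
  ps -o rss= -p "$1" | awk '{printf "%.1f", $1/1024}'
}
mem_rss_mb $$    # sample the current shell as a demo
```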

For more details about memory detections there are two awesome node modules – memwatch and heapdump.


Add cron jobs to delete logs periodically

From Wikipedia:

cron is the time-based job scheduler in Unix-like computer operating systems. cron enables users to schedule jobs (commands or shell scripts) to run periodically at certain times or dates. It is commonly used to automate system maintenance or administration.

The final cron job entry looks like this:

sudo crontab -e
2 0 * * * /opt/tools/ /var/log >> /var/log/cron_job.log 2>&1

Crontab format

The basic format for a crontab is:

minute hour day_of_month month day_of_week command [args]
  • 1: minute (0-59)
  • 2: hour (0-23)
  • 3: day_of_month (1-31)
  • 4: month (1-12 [12 == December])
  • 5: day_of_week (0-7 [7 or 0 == Sunday])
  • 6: /path/to/command – script or command name to schedule

Easy to remember format:

* * * * * command to be executed
- - - - -
| | | | |
| | | | ----- Day of week (0 - 7) (Sunday=0 or 7)
| | | ------- Month (1 - 12)
| | --------- Day of month (1 - 31)
| ----------- Hour (0 - 23)
------------- Minute (0 - 59)

Multiple times may be specified with a comma(,), a range can be given with a hyphen(-), and the asterisk symbol(*) is a wildcard character. Spaces are used to separate fields. For example, the line:

*/5 9-16 * 1-5,9-12 1-5 ~/bin/

Will execute the script at five-minute intervals from 9:00 AM to 4:55 PM on weekdays, except during the summer months (June, July, and August). More examples and advanced configuration techniques can be found below.

Basic commands for cron management

Crontabs should never be edited directly; instead, users should use the crontab program to work with them. To be granted access to this command, a user must be a member of the users group (see the gpasswd command).

To edit their crontabs, they may use:

$ crontab -e

Note: By default the crontab command uses the vi editor. To change it, export EDITOR or VISUAL, or specify the editor directly: EDITOR=vim crontab -e.

To view their crontabs, users should issue the command:

$ crontab -l

To remove their crontabs, they should use:

$ crontab -r

Remove or delete a single cron job from the command line:

crontab -l | grep -v '/var/crontab/' | crontab -

crontab -l lists the current crontab jobs,

grep -v filters out the line(s) matching the given pattern, and

crontab - installs whatever it reads from stdin as the new crontab.
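You can dry-run the same pipeline against a plain file before touching the live crontab (the paths and job lines below are examples):

```shell
# Simulate `crontab -l | grep -v ... | crontab -` using files
printf '%s\n' \
  '0 1 * * * /opt/tools/other_job' \
  '2 0 * * * /var/crontab/ /var/log' > /tmp/crontab.txt
grep -v '/var/crontab/' /tmp/crontab.txt > /tmp/crontab.new
cat /tmp/crontab.new    # only the other_job entry survives
```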


Avoid Getting Redirected to Country-Specific Versions of Google

Some tips for getting more out of Google:

When you’re in another country, usually redirects you to a local version of the Google page, like for France or for Germany. Here’s how to keep using the regular while abroad.

All you need to do is head to The NCR stands for “No Country Redirect”, and it takes you back to the regular, English-language without the local results. Note that it will redirect to plain, so if you don’t see the /ncr after you press Enter, that’s normal.

You can use the pws=0 query parameter to disable personalized results (e.g. append &pws=0 to the search URL).

You can request another country's geolocation using the gl parameter (e.g. &gl=us).

VIM skills

Merge multiple blank lines


Note: in Vim regexes, \_s matches whitespace including end-of-line, so it can match across blank lines

Grep by unicode character

# replace back slash(\) to front slash(/)

See :help regexp for details about the syntax for matching specific Unicode characters in a Vim regular expression:

|/\%d|  \%d  match specified decimal character (eg \%d123)
|/\%x|  \%x  match specified hex character (eg \%x2a)
|/\%o|  \%o  match specified octal character (eg \%o040)
|/\%u|  \%u  match specified multibyte character (eg \%u20ac)
|/\%U|  \%U  match specified large multibyte character (eg \%U12345678)

Git: get back commits from (no branch)

Sometimes git branch will show that you’re not on any branch at all. This usually happens when you’re working inside a submodule of another project: you make some changes to the submodule, commit them, and then try to push them to a remote repository:

git branch
* (no branch)

But when you then check out master, you lose that pseudo branch named (no branch):

git branch
* master

So, how can we get it back?

As long as you’ve not run git gc, you’ve not lost anything. All you need to do is find it again :) Start with:

git reflog # (the commit-ish will be on the first line)

That should show you what happened, and the id of the missing node(s).

To get back to it, just run git checkout <commit-ish> and your old pseudo branch is restored. You can also merge the specific <commit-ish> into your master:

git checkout master
git merge <commit-ish>
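The whole recovery can be reproduced in a throwaway repo (commit messages are invented for the demo):

```shell
cd "$(mktemp -d)" && git init -q .
git config "t" && git config "t@e.c"
git commit -q --allow-empty -m base
main=$(git rev-parse --abbrev-ref HEAD)    # master or main, depending on git version
git checkout -q --detach
git commit -q --allow-empty -m orphan      # this commit lives on no branch
lost=$(git rev-parse HEAD)
git checkout -q "$main"                    # "orphan" now looks lost...
git reflog | head -n 3                     # ...but the reflog still records it
git merge -q "$lost"                       # fast-forward it back onto the branch
```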

Fix nginx ssi unsafe URI was detected error

If your front-end project has the SSI (Server Side Includes) module enabled, you may see errors like this, possibly a lot of them:

[error] 1788#0: *149 unsafe URI “xxx” was detected while sending response to client

Normally, nginx supports SSI syntax like:

<!--# include virtual="include/foo.html"-->

But if the include path climbs up a directory with ../, you get the error:

<!--# include virtual="../include/foo.html"-->

Searching the nginx source code turns up the SSI-related check:

ngx_http_parse_unsafe_uri in src/http/ngx_http_parse.c

if (ngx_path_separator(ch) && len > 2) {

    /* detect "/../" and "/.." */

    if (p[0] == '.' && p[1] == '.'
            && (len == 3 || ngx_path_separator(p[2])))
    {
        goto unsafe;
    }
}

For a development environment only, we can work around the issue by commenting out the check in the include handler:

ngx_http_ssi_include in src/http/modules/ngx_http_ssi_filter_module.c

if (ngx_http_parse_unsafe_uri(r, uri, &args, &flags) != NGX_OK) {
    return NGX_HTTP_SSI_ERROR;
}

GitLab Installation on CentOS with nginx integration

Installing GitLab

S1. Follow the omnibus installation guideline:

yum install openssh-server
yum install postfix # Select 'Internet Site', using sendmail instead also works, exim has problems
rpm -ivh gitlab-7.1.1_omnibus-1.el6.x86_64.rpm

S2. Set up the basic GitLab config in /etc/gitlab/gitlab.rb

For troubleshooting and configuration options please see the Omnibus GitLab readme

# Change the external_url to the address your users will type in their browser
external_url ''
#git_data_dir '/home/git/git-data'

S3. Generate the GitLab service configuration:

gitlab-ctl reconfigure

That’s all if your server is a standalone GitLab box.

You can login as an admin user with username root and password 5iveL!fe

Separating the Nginx server from the GitLab omnibus suite

Stop gitlab service first:

gitlab-ctl stop

Give nginx access to the git group:

Ensure Nginx runs as a dedicated user (www here, as configured in /etc/nginx/nginx.conf), then add that user to the git group:

usermod -a -G git www

Change some gitlab permissions:

# ensure gitlab-rails is owned by the git group
chown git.git /var/opt/gitlab/gitlab-rails/ -R

# ensure `/var/opt/gitlab/gitlab-rails/tmp/sockets/gitlab.socket` and `uploads` are accessible by nginx
chmod g+rwx /var/opt/gitlab/gitlab-rails/ -R

# disable gitlab's internal nginx service and symlink the gitlab nginx config
# into the global nginx conf.d
rm -f /opt/gitlab/service/nginx
ln -sf "/var/opt/gitlab/nginx/etc/gitlab-http.conf" /etc/nginx/conf.d/

# test permission
sudo -u www ls "/var/opt/gitlab/gitlab-rails/tmp/sockets/gitlab.socket"

Restart gitlab services and nginx

gitlab-ctl start
/etc/init.d/nginx restart



Running compass watch in the background

For a long time I have wanted a script to auto-compile Sass to CSS. A slight disadvantage (besides the ton of advantages) of using Sass over plain CSS is that you need to compile your Sass files to CSS with the compass command before loading them in your browser.

In a bash shell script, the ampersand (&) is used to fork a process and run it in the background:

compass watch [path/to/project] &

This causes the compass command to be forked and run in the background (you can always kill it by its PID).

The problem with using the & (ampersand) for forking a process in the shell is that whenever you close the shell the process is going to be killed because the parent process is killed.

& runs the whole thing in the background, giving you your prompt back immediately.

nohup allows the background process to continue running even after the user logs out (or exits the initiating shell).

nohup compass watch &


Every Linux process opens three I/O channels, an input “stdin”, a standard output “stdout” and a standard error output “stderr”. They can be used for binary but are traditionally text. When most programs see stdin close, they exit (this can be changed by the programmer).

When the parent shell exits, stdin is closed on the children, and (often, usually) the children exit as well. In addition the children receive a software signal, SIGHUP, indicating the user has “hung up” (formerly, the modem) and the default here is to exit as well. (Note, a programmer can change all of this when writing the program).

So, what nohup does is give the child process a separate I/O environment, tying up the ins and outs to something not tied to the parent shell, and shielding the child from the SIGHUP signal. Once the user disconnects, you will see the nohup background process owned by init (process 1), not the user’s shell.

Save the following script as ~/bin/sass-watch and chmod +x it to make it executable.

#!/bin/sh
# pidfile/logfile locations are examples; adjust as needed
pidfile=/tmp/
logfile=/tmp/sass-watch.log

if [ -f $pidfile ]; then
    kill -9 `cat $pidfile` >/dev/null 2>&1
    [ "$1" = "stop" ] && echo 'shutdown!' && exit 0
fi
nohup compass watch >$logfile 2>&1 &
echo $! >$pidfile
echo "compass watch success, pid: $!"


Run sass-watch in your Sass project directory to launch the daemon.

Some helpful notes on Bash internal variables:

$! represents the PID of the last process executed.

$$ means the process ID that the script file is running under. For any given script, when it is run, it will have only one “main” process ID. Regardless of how many subshells you invoke, $$ will always return the first process ID associated with the script.
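A tiny demo shows both variables side by side:

```shell
sleep 1 &                 # fork a background process
bg=$!                     # $! is its PID
echo "script pid: $$, background pid: $bg"
wait "$bg"                # reap it so the demo exits cleanly
```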