Git: recover from git reset --hard

How do you recover uncommitted changes to the working directory after a git reset --hard HEAD?

You can try git fsck --lost-found to see if your changes are still in the lost-found:

$ git fsck --lost-found
Checking object directories: 100% (256/256), done.
Checking objects: 100% (54/54), done.
dangling blob 15f9af8379f13672ca0e75d56df100edfd67fe6b
dangling commit 18fc9548f20eb8938dde68ab4a3dd0b7a0212dc3
dangling commit 33a832866e3855e300504ea6b584732e9c3c286c
dangling blob 568ca393d5e21cdc9eda2824111a5429a70d5113
dangling blob 89cdac4d3fc03546b5ab485aa8a9905b34702a4a
dangling blob abf03d6c84484a2b096a4d7f0ee5a85361f8a3d6 <- it's this one
dangling commit bc05be5eac21134b63ca51fbd20fee5c8782a640
dangling commit c0fa59cfaa0bad5f8ca8a1a845ba1673bb207b2d
dangling commit d140d6f693d8ef83d040d483bec3db95db084cd9
dangling blob e9c3eb31aa0589ab59f46630f7926681f7a14476  <- it's this one

Then you can inspect a dangling blob with git show:

git show e9c3eb31aa0589ab59f46630f7926681f7a14476

which will give you back the file content that was lost in the reset.

To browse unreferenced commits, I found a tip somewhere suggesting this:

gitk --all $(git log -g --pretty=format:%h)

I found them in the "other" directory within <path to repo>/.git/lost-found/. From there, I could see the uncommitted files, copy out the blobs, and rename them.

Note: This only works if you added the files you want to save to the index (using git add .). If the files weren’t in the index, they are lost.
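
If there are many dangling blobs, checking them one at a time gets tedious. Here is a minimal sketch (the recovered_* file names are just an illustration) that dumps every dangling blob into its own file using git cat-file:

# dump each dangling blob into a file named after its abbreviated hash
for sha in $(git fsck --lost-found | awk '/^dangling blob/ { print $3 }'); do
    git cat-file -p "$sha" > "recovered_${sha:0:7}.txt"
done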

Rotate Nginx log files

Nginx is a great web server; however, a default install will not rotate log files for you. There is no size limit, so a log will keep growing until your disk is full.

This is a problem especially on busy sites, as the access log can eat up disk space quite quickly.

In this tutorial, I will show you how to rotate nginx log files automatically. My version is nginx/1.4.3, but any modern distribution should work in a similar way.

Manually rotating nginx log files via cron

First we need to create the bash script that cron will run to do the log rotation.

sudo vi /usr/local/sbin/rotate_nginx_log.sh

Here are the contents of the script (based on the example from the Nginx wiki):

#!/bin/bash
# <https://gist.github.com/allex/10360845>

# Set variables
logs_path="/var/log/nginx"
old_logs_path="${logs_path}/old"
nginx_pid=$(cat /var/run/nginx.pid)

time_stamp=$(date -d "yesterday" +%Y-%m-%d)

mkdir -p "${old_logs_path}"

# Move the current logs aside for archiving
moved=()
for file in $(ls "$logs_path" | grep 'log$' | grep -v '^20')
do
    if [ ! -f "${old_logs_path}/${time_stamp}_${file}" ]
    then
        dst_file="${old_logs_path}/${time_stamp}_${file}"
    else
        dst_file="${old_logs_path}/${time_stamp}_${file}.$$"
    fi
    mv "${logs_path}/${file}" "${dst_file}"
    moved+=("${dst_file}")
done

# Ask nginx to re-open its log files, give it a moment to finish,
# then compress the archived logs
kill -USR1 "${nginx_pid}"
sleep 1
[ ${#moved[@]} -gt 0 ] && gzip -f "${moved[@]}"

Note:

First, we move the current logs to new files for archiving. A common scheme is to name the most recent log file with a time stamp suffix, e.g. $(date "+%Y-%m-%d").

The command that actually rotates the logs is kill -USR1 $(cat /var/run/nginx.pid). This does not kill the Nginx process; it sends it a SIGUSR1 signal, causing it to re-open its log files.

Then we sleep 1 to allow the process to complete the re-open. We can then zip the old files or do whatever post-rotation processing we would like.

Next, make sure the script file is executable by running

chmod +x /usr/local/sbin/rotate_nginx_log.sh

In our final step we will create a crontab file to run the script we just created.

sudo crontab -e

In this file, let's create a cron job that runs every day at 1 AM. Add the following line:

00 01 * * * /usr/local/sbin/rotate_nginx_log.sh &> /dev/null

We can also suppress the cron job's status email notifications; see Suppressing Cron Job Email Notifications.
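
One common approach is an empty MAILTO at the top of the crontab, which disables the status mail for every job below it:

# suppress cron status emails for the jobs below
MAILTO=""
00 01 * * * /usr/local/sbin/rotate_nginx_log.sh &> /dev/null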


Log Rotation With Logrotate

The logrotate application is a simple program to rotate logs.

sudo vim /etc/logrotate.d/nginx

Put this content inside, adjusting the first line to match your Nginx log file path:

/var/log/nginx/*.log {
    daily
    missingok
    rotate 30
    dateformat .%Y-%m-%d
    compress
    delaycompress
    notifempty
    create 640 nginx adm
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}

Wait 24 hours for the daily cron run, then check whether any .gz files show up inside your logs directory. If you see some gzipped files, your Nginx rotation is working fine :D
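
You don't actually have to wait for cron to validate the configuration; logrotate can be run by hand:

# debug mode: show what would happen without touching any files
sudo logrotate -d /etc/logrotate.d/nginx

# force an immediate rotation to test the whole cycle end to end
sudo logrotate -f /etc/logrotate.d/nginx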



CentOS security optimizations

Kernel optimization: edit /etc/sysctl.conf

We can view the current kernel settings with sysctl -a.

# Not accept ICMP redirects (prevent MITM attacks)
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0

# Do not send ICMP redirects (we are not a router)
net.ipv4.conf.all.send_redirects = 0

Tune the kernel to block SYN flood attacks (view the current SYN-related settings with sysctl -a | grep syn):

# Enable SYN cookies:
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.ipv4.tcp_max_syn_backlog=3072
sysctl -w net.ipv4.tcp_synack_retries=0
sysctl -w net.ipv4.tcp_syn_retries=0
sysctl -w net.ipv4.conf.all.send_redirects=0
sysctl -w net.ipv4.conf.all.accept_redirects=0
sysctl -w net.ipv4.conf.all.forwarding=0
sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1

# Ignore all pings:
sysctl -w net.ipv4.icmp_echo_ignore_all=1
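
Note that sysctl -w only changes the running kernel; the settings are lost on reboot. To make them permanent, add the keys to /etc/sysctl.conf and reload, e.g.:

# persist the settings, then reload them from the file
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 3072
EOF
sysctl -p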

Add iptables rules to mitigate SYN flood attacks

# Mitigate SYN floods by shortening the SYN timeout (--limit 1/s caps new SYN packets at one per second; adjust to your needs)
iptables -A FORWARD -p tcp --syn -m limit --limit 1/s -j ACCEPT
iptables -A INPUT -i eth0 -m limit --limit 1/sec --limit-burst 5 -j ACCEPT

# Block various port scans
iptables -A FORWARD -p tcp --tcp-flags SYN,ACK,FIN,RST RST -m limit --limit 1/s -j ACCEPT

# Mitigate Ping of Death attacks
iptables -A FORWARD -p icmp --icmp-type echo-request -m limit --limit 1/s -j ACCEPT

# Allow at most 3 SYN packets in per second
iptables -N syn-flood
iptables -A INPUT -p tcp --syn -j syn-flood
iptables -A syn-flood -p tcp --syn -m limit --limit 1/s --limit-burst 3 -j RETURN
iptables -A syn-flood -j REJECT

# Block a specific IP range (e.g. 10.0.0.0/8)
iptables -A INPUT -s 10.0.0.0/8 -i eth0 -j DROP

Turn on the firewall and block all the unused ports:

iptables -F
iptables -A INPUT -p tcp -i vnet0 --dport ssh -j ACCEPT
iptables -A INPUT -p tcp -i vnet0 --dport 80 -j ACCEPT
iptables -A INPUT -i vnet0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p ICMP -j DROP
iptables -A INPUT -i vnet0 -j DROP

After making the changes, restart iptables:

/etc/init.d/iptables restart
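
On CentOS you should also save the rules so they survive the next reboot:

# writes the active rules to /etc/sysconfig/iptables
service iptables save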

How to get the URL hash state safely

Normally, we can get the hash string via location.hash. But I found the value is not always correct in Firefox: Firefox automatically decodes an encoded hash string in the URL.

So the safe method is to avoid location.hash; it is better to use location.href.split('#')[1] instead.

Indeed, location.href.split('#!')[1] is not decoded automatically by Firefox (at least today).

var currentUrl = '';
var getHashPath = function() { 
    return location.href.split('#!')[1];
};
$(window).on('hashchange', function(e) {
    var url = getHashPath();
    if (url !== currentUrl) {
        currentUrl = url;
        // do some business logic...
    }
});

MySQL DBA commands

Create a user and grant privileges:

mysql> create user wp_blog_usr@localhost identified by '123qwe';
Query OK, 0 rows affected (0.05 sec)

mysql> grant all privileges on wp_blog.* to wp_blog_usr@'localhost';
Query OK, 0 rows affected (0.02 sec)

mysql> flush privileges;

By default, remote access to the MySQL server is disabled. To give a user remote access:

  1. Comment out this line in /etc/my.cnf:
    # bind-address = 127.0.0.1

  2. Grant privileges to the user:
    GRANT ALL PRIVILEGES ON *.* TO 'USERNAME'@'IP' IDENTIFIED BY 'PASSWORD';
    where IP is the IP address you want to allow access from and USERNAME is the user you use to connect. To allow access from any IP, put % in place of the IP.

  3. Restart the MySQL server.

To tell the server to reload the grant tables, perform a flush-privileges operation. This can be done by issuing a FLUSH PRIVILEGES statement or by executing a mysqladmin flush-privileges or mysqladmin reload command.
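
To verify the grant, try connecting from another machine (the host and user below are placeholders):

# connect to the remote MySQL server as the newly granted user
mysql -h 192.0.2.10 -u wp_blog_usr -p wp_blog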

Change root password

mysqladmin -u root password 123qwe

See the full query in show processlist

show full processlist;

So how many of those processes or connections are actually doing anything right now? For that we must check Threads_running:

mysql> SHOW GLOBAL STATUS LIKE 'Threads_running';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Threads_running | 24    |
+-----------------+-------+
1 row in set (0.00 sec)

Related status counters include Threads_cached, Threads_connected and Max_used_connections.
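
These can also be pulled from the shell with mysqladmin:

mysqladmin -u root -p extended-status | grep -iE 'threads_|max_used_connections'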

Reset Forgotten MySQL Root Password

# start up the mysql daemon and skip the grant tables which store the passwords.
mysqld_safe --skip-grant-tables &

# connect to mysql without a password.
mysql --user=root mysql

# update password
UPDATE user SET password=PASSWORD('new-password') WHERE user='root';
flush privileges;
exit;
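
When you are done, shut down the temporary instance and start MySQL normally again (the service name may differ per distro):

# stop the --skip-grant-tables instance, then start the normal service
mysqladmin -u root shutdown
service mysqld start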

Some related links:

MySQL terminology: processes, threads & connections

Maven pom.xml settings

Adding a main class to the manifest

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <archive>
      <manifest>
        <mainClass>com.yahoo.platform.yui.compressor.Bootstrap</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>

Customize the Java source directory

Some projects' source directories do not follow the Maven convention of living inside src/main/java.

Just add this to the build section of your POM:

<sourceDirectory>
  ${basedir}/src
</sourceDirectory>

Here’s the relevant section of the POM doc on configuring the directories.

Including local JAR files as dependency in a Maven project

To avoid fetching a dependency from a remote repository, use Maven's system scope and systemPath feature:

<dependency>
  <groupId>net.sf</groupId>
  <artifactId>jargs</artifactId>
  <version>1.0</version>
  <scope>system</scope>
  <systemPath>${basedir}/lib/jargs-1.0.jar</systemPath>
</dependency>

This will reference a dependency from the local filesystem, which means you do not have to install the JAR into the repository in order to use it. This is particularly useful when you’re doing some prototyping or research into a new technology.
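
Alternatively, if you would rather keep a normal scope, you can install the local JAR into your local repository once via the maven-install-plugin:

mvn install:install-file -Dfile=lib/jargs-1.0.jar \
    -DgroupId=net.sf -DartifactId=jargs \
    -Dversion=1.0 -Dpackaging=jar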

Skipping Tests

If you want to skip tests by default but want the ability to re-enable tests from the command line, you need to go via a properties section in the pom:

<project>
  [...]
  <properties>
    <skipTests>true</skipTests>
  </properties>
  [...]
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-failsafe-plugin</artifactId>
        <version>2.16</version>
        <configuration>
          <skipTests>${skipTests}</skipTests>
        </configuration>
      </plugin>
    </plugins>
  </build>
  [...]
</project>

This will allow you to run with tests disabled by default and to run them with this command:

mvn install -DskipTests=false

If you absolutely must, you can also use the maven.test.skip property to skip compiling the tests. maven.test.skip is honored by Surefire, Failsafe and the Compiler Plugin.

mvn install -Dmaven.test.skip=true

For details see Skipping Tests

Feedly

The best thing about Feedly is that you can sync it with your Google Reader so any subscriptions you want to hang on to, just transfer them over with one click. This is the frontrunner and most talked about reader that may very well take the place of Google Reader. Why? Because it’s easy to navigate, everything is in a familiar format but with a nice hipster like spin to make it easier on the eyes.

I’ve heard of only one downside to using Feedly: if the Feedly server is down, then you can’t access the reader. How often is that really going to happen, though?

It only took me minutes to get up and running with Feedly. I also love their apps, as I tend to consume most of my RSS feeds on my mobile. I’m pretty impressed, and excited to see how Feedly improves once they reach larger audiences.


curl usage

Here are some powerful features of curl that you may not have known about.

Use curl to grab all the response headers with -I

$ curl -I "http://iallex.com:8080"
HTTP/1.1 200 OK
X-Powered-By: PHP/5.5.1
X-Pingback: http://iallex.com:8080/xmlrpc.php
Content-Type: text/html; charset=UTF-8
Date: Mon, 24 Mar 2014 02:04:01 GMT
Server: lighttpd/1.4.34

Send a header to the web server with -H

  • The curl command supports the -H or --header option to pass an extra HTTP header when getting a web page from a web server.
  • This option can be used multiple times to add/replace/remove multiple headers; the syntax is:

curl -H 'HEADER-1' -H 'HEADER-2' ... <URL>

E.g. check whether the Apache node at 10.237.110.22:8080 is working or not:

curl -I -H 'Host: iallex.com' 'http://10.237.110.22:8080/'

Checking gzip/deflate server responses with curl

Curl provides a simple tool for checking server responses.

First, a few curl arguments that will come in handy:

-s, --silent prevents curl from showing the progress meter

-w, --write-out 'size_download=%{size_download}\n' instructs curl to print out the download size

-o, --output instructs curl to throw away the output, sending it to /dev/null

Using these arguments, we can make a simple request for a path on the server:

curl -s -w "size_download=%{size_download}\n" -o /dev/null \
     "http://code.jquery.com/jquery-2.1.0.min.js"
size_download=83615

Here, you can see the response was 83615 bytes. Next up, let's make the same request, this time adding the Accept-Encoding header to ask for compressed content.

curl -s -w "size_download=%{size_download}\n" -o /dev/null \
     -H "Accept-Encoding: gzip,deflate" \
     "http://code.jquery.com/jquery-2.1.0.min.js"
size_download=34151

Nice! This downloaded only 34151 bytes of data, so the data is definitely being compressed. Next, let's make the request a third time, now as an HTTP/1.0 request.

curl -s -w "size_download=%{size_download}\n" -o /dev/null \
     -H "Accept-Encoding: gzip,deflate" \
     --http1.0 \
     "http://code.jquery.com/jquery-2.1.0.min.js"
size_download=83615

This time, the response is the same size as the first request, which makes sense when using Nginx with the [gzip_http_version](http://wiki.nginx.org/NginxHttpGzipModule#gzip_http_version) directive set to 1.1 (the default): responses to HTTP/1.0 requests are not compressed.
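
Besides comparing download sizes, you can confirm the compression directly from the response headers:

# -D - dumps the response headers to stdout while -o discards the body;
# a compressed response should include "Content-Encoding: gzip"
curl -s -D - -o /dev/null \
     -H "Accept-Encoding: gzip,deflate" \
     "http://code.jquery.com/jquery-2.1.0.min.js" | grep -i 'content-encoding'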

Specify the user name and password to use for server authentication

From the curl manual:

-u, --user

Specify the user name and password to use for server authentication. Overrides -n,
--netrc and --netrc-optional.

If you simply specify the user name, curl will prompt for a password.

The user name and passwords are split up on the first colon, which makes it
impossible to use a colon in the user name with this option. The password can, still.

curl -u allex:d9e871f "http://10.200.11.128:7890?do=syncx"


Git daily tips

Retrieve a single file from specific revision in Git

git show somebranch:path/to/your/file

We can also retrieve multiple files and have them concatenated: git show branchA~10:fileA branchB^^:fileB

NOTE:

If you want to get the file into your working directory (revert just one file), you can check it out: git checkout somebranch^^^ -- path/to/file

Remove local (untracked) files from my current Git branch

git-clean

git clean -f -d

If you need to remove untracked files from a particular subdirectory:

git clean -f {dir_path}

And a combined way to delete untracked directories/files and ignored files:

git clean -fxd {dir_path}

git status can give its output in an easy-to-parse format for scripts:

git status --porcelain
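
For example, a script can use the two-letter status codes to list just the untracked files:

# each --porcelain line is "XY path"; "??" marks untracked entries
git status --porcelain | awk '$1 == "??" { print $2 }'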

Pull with rebase instead of merge

$ git pull --rebase

# e.g. if on branch "master": performs a `git fetch origin`,
# then `git rebase origin/master`

When histories diverge, a plain git pull creates a merge commit with a message reading something like Merge branch 'master' of 'origin/master'. We can avoid these unnecessary micro-merges on a regular git pull with the --rebase option.

Rebasing ensures that the commits are always re-applied so that the history stays linear. Git will move your local commits aside, synchronise with the remote, and then try to apply your commits from the new state.

You can configure certain branches to always do this without the --rebase flag:

# make `git pull` on master always use rebase
$ git config branch.master.rebase true

You can also set a global option to enable this for every new tracking branch:

# setup rebase for every tracking branch
$ git config --global branch.autosetuprebase always

You can also configure every pull to rebase with git config --global pull.rebase true, and use git pull --no-rebase to disable the behaviour for a single pull.

I usually use a fetch/rebase combination so my current (local) work stays at the top:

git fetch
git rebase origin/release-1.0.0

Get a specific branch's head revision

git ls-remote --heads git@git.n.xiaomi.com:yp-develweb/devel-web-home.git release-1.0.0 | awk '{print $1}' | cut -c1-10

NOTE:

If we are already inside a git working tree, we can use these commands instead:

last_commit=$(git rev-parse --short HEAD)
last_commit=$(git log --pretty=format:'%h' -n 1)

Stashing changes in a dirty working directory away

Stashing is a great way to pause what you’re currently working on and come back to it later.

Read from Stashing your changes.
Normally, we could use separate local branches (git branch xxx) for different jobs, but this can leave a lot of unwanted noise in the commit log.
Use git stash when you want to record the current state of the working directory and the index, but want to go back to a clean working directory. The command saves your local modifications away and reverts the working directory to match the HEAD commit.

For example, when we need to get back to a clean HEAD to fix some bugs:

  1. git stash -u saves the current state (including untracked files, the same ones git clean would touch), leaving the working directory in a very clean state.

  2. Do the bugfixes and commit…

  3. git stash apply (like pop, but it does not remove the state from the stash list) restores the previous work.

As of git 1.7.7, git stash accepts the --include-untracked option (or the shorthand -u). To include untracked files in your stash, use either of the following commands:

git stash --include-untracked
git stash -u

Tip: force git stash to overwrite added files

Use git checkout instead of git stash apply:

git checkout stash -- .
git commit

This will restore all the files to their stashed version.

If there are changes to other files in the working directory that should be kept, here is a less heavy-handed alternative:

git merge --squash --strategy-option=theirs stash

Note: for more details about stash, see the docs: git stash

Git Tags

# Create a new tag from your current HEAD (i.e. the HEAD of your current branch)
git tag <TAGNAME>

With git tag <TAGNAME> <COMMIT> you can even specify which commit to use for creating the tag.
Regardless, a tag is still simply a “pointer” to a certain commit (not a branch).

Rename a Git tag

# build an alias of the old tag name:
git tag new_tag_name old_tag_name

# Then you need to delete the old one locally:
git tag -d old_tag_name

# delete the tag on you remote location(s)
# can be simplified to `git push origin :old_tag_name`
git push origin :refs/tags/old_tag_name

# add your new tag to the remote location
git push origin --tags
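
Note that git push origin --tags pushes every local tag. To publish only the renamed tag:

git push origin new_tag_name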

Other useful commands

Bypass the git hooks with -n, e.g. git commit -n [...]

-n, --no-verify
This option bypasses the pre-commit and commit-msg hooks.

# rebase the commits since dd61ab32 (counting from HEAD)
git rebase -i dd61ab32^
# Deleting the last commit
git push mathnet +dd61ab32^:master

Here git interprets x^ as the parent of x, and + as a forced non-fast-forward push.
This command is the same as:

# do it in two simpler steps: First reset the branch to the parent of the current commit, then force-push it to the remote.
git reset HEAD^ --hard
git push mathnet -f

Using a pre-push git hook to run unit tests on every push

Read from git pre-push.
Below is an example pre-push script that lets us specify a branch to ‘protect’, so that our tests only run if there are commits to push and we are on ‘master’. Also, because pre-push executes regardless of whether there are commits to push or not, the script ensures we don’t fire off a lengthy test command only to find out we actually didn’t need to.

#!/bin/bash 

CMD="ls -l" # Command that runs your tests
protected_branch='master'

# Check if we actually have commits to push
commits=`git log @{u}..`
if [ -z "$commits" ]; then
    exit 0
fi

current_branch=$(git symbolic-ref HEAD | sed -e 's,.*/\(.*\),\1,')

if [[ $current_branch = $protected_branch ]]; then
    $CMD
    RESULT=$?
    if [ $RESULT -ne 0 ]; then 
        echo "failed $CMD"
        exit 1
    fi
fi
exit 0
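
To enable the hook, copy the script (assuming it is saved as pre-push.sh) into .git/hooks and make it executable:

cp pre-push.sh .git/hooks/pre-push
chmod +x .git/hooks/pre-push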

For more about git hooks, we can read the githooks manual.


Make websafe colors

Convert a normal hex color to a web-safe color, for use in web color plugins.

var round = Math.round;
var floor = Math.floor;

function get_hex(dec) { return dec.toString(16); }
function get_dec(hex) { return parseInt('0x' + hex, 16); }

function rgb_to_hex(r, g, b) {
    var c1 = get_hex(floor(r / 16));
    var c2 = get_hex(floor(r % 16));
    var c3 = get_hex(floor(g / 16));
    var c4 = get_hex(floor(g % 16));
    var c5 = get_hex(floor(b / 16));
    var c6 = get_hex(floor(b % 16));
    return c1 + c2 + c3 + c4 + c5 + c6;
}
function hex_to_rgb(hex) {
    var i = 0, arr = [], c1, c2;
    while (i < 6) {
        c1 = get_dec(hex.substring(i, ++i));
        c2 = get_dec(hex.substring(i, ++i));
        arr.push((c1 * 16) + c2 * 1);
    }
    return arr;
}

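// Snap each channel to the nearest multiple of 51 (0x33), the spacing of the
// 216 web-safe values, then emit each channel as a doubled hex digit.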
function web_safe(r, g, b) {
    var t;
    t = r % 51; if (t > 25) { t = r + 51 - t; } else { t = r - t; }
    var c1 = get_hex(round(t / 17));
    t = g % 51; if (t > 25) { t = g + 51 - t; } else { t = g - t; }
    var c2 = get_hex(round(t / 17));
    t = b % 51; if (t > 25) { t = b + 51 - t; } else { t = b - t; }
    var c3 = get_hex(round(t / 17));
    return c1 + c1 + c2 + c2 + c3 + c3;
}

function get_safe_color(c) {
    if (c.charAt(0) === '#') c = c.substring(1);
    var rgb = hex_to_rgb(c), r = rgb[0], g = rgb[1], b = rgb[2];
    return '#' + web_safe(r, g, b);
}

console.log(get_safe_color('#1255FF')); // #0066FF

For the full table of the 216 web-safe colors, see also http://websafecolors.info/