Tag Archives: snippet

How to get URL hash state safely

Normally we can get the hash string via location.hash. But I recently found that the value is not always correct in Firefox: Firefox automatically decodes the encoded hash string in the URL.

So the safe approach is to avoid location.hash; it is better to use location.href.split('#')[1] instead.

Indeed, location.href.split('#!')[1] does not get decoded by Firefox automatically (at least today).
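
For example, with a hypothetical URL like http://example.com/page#!/users/a%2Fb (the URL is just for illustration), the difference looks like this:

// location.hash                 -> "#!/users/a/b"   (Firefox has decoded the "%2F")
// location.href.split('#!')[1]  -> "/users/a%2Fb"   (still percent-encoded)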

var currentUrl = '';
// Read the hash path from the raw href so Firefox does not decode it for us
var getHashPath = function() {
    return location.href.split('#!')[1];
};
$(window).on('hashchange', function(e) {
    var url = getHashPath();
    if (url !== currentUrl) {
        currentUrl = url;
        // do some business logic...
    }
});
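
Note that hashchange only fires when the hash actually changes, so the handler above does not run for the hash present on the initial page load. A common pattern (a minimal sketch, assuming the handler is already bound as above) is to trigger it once when the page is ready:

$(function() {
    // fire the handler once for the initial hash state
    $(window).trigger('hashchange');
});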

Make websafe colors

A snippet for web color plugins.

Convert a normal hex color to its nearest web-safe color. The web-safe palette uses six values per channel (00, 33, 66, 99, CC, FF), so the conversion rounds each channel to the nearest multiple of 51.

var round = Math.round;
var floor = Math.floor;

function get_hex(dec) { return dec.toString(16); }
function get_dec(hex) { return parseInt('0x' + hex, 16); }

function rgb_to_hex(r, g, b) {
    // Each channel becomes two hex digits (high nibble + low nibble)
    var c1 = get_hex(floor(r / 16));
    var c2 = get_hex(floor(r % 16));
    var c3 = get_hex(floor(g / 16));
    var c4 = get_hex(floor(g % 16));
    var c5 = get_hex(floor(b / 16));
    var c6 = get_hex(floor(b % 16));
    return c1 + c2 + c3 + c4 + c5 + c6;
}
function hex_to_rgb(hex) {
    // Parse a six-digit hex string two characters (one channel) at a time
    var arr = [];
    for (var i = 0; i < 6; i += 2) {
        arr.push(get_dec(hex.substring(i, i + 2)));
    }
    return arr;
}

function web_safe(r, g, b) {
    // Round each channel to the nearest multiple of 51 (0x33), the step of the
    // web-safe palette, then map 0..255 to a single hex digit (0,3,6,9,c,f) via t / 17
    var t;
    t = r % 51; if (t > 25) { t = r + 51 - t; } else { t = r - t; }
    var c1 = get_hex(round(t / 17));
    t = g % 51; if (t > 25) { t = g + 51 - t; } else { t = g - t; }
    var c2 = get_hex(round(t / 17));
    t = b % 51; if (t > 25) { t = b + 51 - t; } else { t = b - t; }
    var c3 = get_hex(round(t / 17));
    return c1 + c1 + c2 + c2 + c3 + c3;
}

function get_safe_color(c) {
    if (c.charAt(0) === '#') c = c.substring(1);
    var rgb = hex_to_rgb(c), r = rgb[0], g = rgb[1], b = rgb[2];
    return '#' + web_safe(r, g, b);
}

console.log(get_safe_color('#1255FF')); // #0066FF
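
As a quick check (a small helper sketched on top of the functions above, not part of the original snippet), a color is already web-safe when every channel is a multiple of 51:

function is_web_safe(hex) {
    if (hex.charAt(0) === '#') hex = hex.substring(1);
    var rgb = hex_to_rgb(hex);
    return rgb[0] % 51 === 0 && rgb[1] % 51 === 0 && rgb[2] % 51 === 0;
}

console.log(is_web_safe('#0066FF')); // true
console.log(is_web_safe('#1255FF')); // false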

For the full table of the 216 web-safe colors, see also http://websafecolors.info/

Convert thrift java bean to JSON object

Today we have a Java bean generated by Thrift, with a schema like:

struct T {
   1: string address
}

Normally we can translate the Java bean to an org.json.JSONObject like:

...

T bean = new T();
JSONObject json = new JSONObject();
Map<String, Object> dataMap = org.apache.commons.beanutils.BeanUtils.describe(bean);
for (Entry<String, Object> entry : dataMap.entrySet()) {
    json.put(entry.getKey(), entry.getValue());
}
...

// or translate directly with the JSONObject constructor
...
JSONObject json = new JSONObject(bean);

But for beans generated by Thrift, both approaches produce some unwanted extra properties prefixed with set* (from the generated isSet* helper methods):

{"address":"abc","setAddress":true}


After some googling, we found the TSerializer approach on the Thrift wiki; see also http://wiki.apache.org/thrift/ThriftUsageJava

/**
 * Convert the generic TBase<?, ?> entity to a JSON object.
 *
 * @param tobj the Thrift-generated bean to convert
 * @author Allex Wang
 * @return the converted JSONObject, or null if the conversion fails
 */
public JSONObject convertBeanToJSON(final TBase<?, ?> tobj) {
    TSerializer serializer = new TSerializer(new TSimpleJSONProtocol.Factory());
    try {
        String json = serializer.toString(tobj, "utf8");
        return new JSONObject(json);
    } catch (TException ex) {
        LOGGER.error("Convert TBase object to JSON fails: " + ex.getMessage());
    } catch (JSONException ex) {
        LOGGER.error("Parse serialized JSON fails: " + ex.getMessage());
    }
    return null;
}

Now we get the JSON entity as expected:

{"address":"abc"}

git squashing commits with rebase

git rebase – Forward-port local commits to the updated upstream head

We can use git rebase --interactive (or -i) mode to squash multiple commits into one.

git rebase -i HEAD~2

# git opens an editor with the todo list:
pick b76d157 b
pick a931ac7 c

Changing b's pick to squash would fail, since there is no earlier commit to squash it into. Instead, squash c into b by changing the text to (s is short for squash):

pick b76d157 b
s a931ac7 c

After saving the todo list, git combines the two commits into one and opens the editor for the combined commit message.

For more details reference:

mysqldump --where option

MySQL provides a great tool, mysqldump, to dump databases. It is the official SQL dump utility for MySQL and makes a DBA's life easy: you can back up and restore a database with just two commands. But sometimes, due to infrastructure limitations, you cannot dump and restore that easily, especially when dealing with a huge amount of data. A database grows over time; a few hundred GBs are quite common, and if you run the software long enough it might get into the terabyte range. The problems start when the size is this huge.

Split the dump by tables.

mysqldump db1 table1 table2 table3 > table1-3.sql

Split big tables by rows with --where.

-w, --where=name    Dump only selected records. Quotes are mandatory.

If your table has an auto_increment column, you can split it into any number of chunks.

mysqldump --where "id%2=0" db1 table1 > table1_even.sql
mysqldump --where "id%2=1" db1 table1 > table4_odd.sql

Split with a LIMIT clause.

# Dump the first 100000 rows from the table named table5 into the file dump.sql.
# The --where value is appended after WHERE, so it can carry a LIMIT [offset,] row_count clause:
mysqldump --where "1 LIMIT 100000" database1 table5 > dump.sql

Others: as the --where switch accepts any SQL condition, you can use any criteria.

mysqldump --where "year(date) <= 2008" db1 payment > payment_prior_2009.sql

Some useful options:

--skip-add-drop-table    Do not generate DROP TABLE statements.
--skip-create-options    Do not include MySQL-specific options in CREATE TABLE statements.
--replace                Use REPLACE INTO instead of INSERT INTO.

Fast, Effective PHP Compression

Normally, we just need to configure Apache to enable gzip compression.

#####
# Enable compression (gzip compression) – Apache Server (httpd)
# (start)
#
<IfModule mod_deflate.c>
    SetOutputFilter DEFLATE
    # file-types indicated will not be compressed
    SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png|rar|zip|pdf)$ no-gzip dont-vary
    <IfModule mod_headers.c>
        Header append Vary User-Agent
    </IfModule>
</IfModule>
<IfModule mod_log_config.c>
    <IfModule mod_deflate.c>
        DeflateFilterNote Input instream
        DeflateFilterNote Output outstream
        DeflateFilterNote Ratio ratio
        SetEnvIf Request_URI \.(?:gif|jpe?g|png|rar|zip|pdf)$ ignore-log
        SetEnvIf Request_URI \.html image-request
        LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate
        CustomLog /var/log/apache2/deflate.log deflate env=!ignore-log
    </IfModule>
</IfModule>

<IfModule mod_expires.c>
    # enable expirations
    ExpiresActive On
    # expire GIF images after a month in the client's cache
    ExpiresByType image/gif "access plus 1 month 15 days 2 hours"
    # test that cached version is not used for the request
    ExpiresByType text/html "access plus 30 seconds"
    # check with cached version
    #ExpiresByType text/html "access plus 30 days"
</IfModule>

##### gzip configuration - (end)

Alternate Method

Place the following code before the (X)HTML content in any PHP script:

<?php
// Serve gzipped output only to clients that advertise gzip support
if (isset($_SERVER['HTTP_ACCEPT_ENCODING']) && substr_count($_SERVER['HTTP_ACCEPT_ENCODING'], 'gzip')) {
    ob_start('ob_gzhandler');
} else {
    ob_start();
}
?>

In this case, the ob_flush() command is unnecessary as PHP inherently flushes the buffer. The script delivers gzipped content to capable browsers and uncompressed content to incapable browsers.