As of 2016-02-26, there will be no more posts for this blog. s/blog/pba/

A week ago, I removed all packages related to Java from my system. I thought I had absolutely no need for them, until I tried to minify CSS, via Makefile, using YUI Compressor, which is written in Java. The same situation would happen when I needed to minify JavaScript files: I use Google Closure Compiler, which is, again, written in Java.

Oh, right, that's why I still kept the Java runtime environment, the open source one, IcedTea, I thought to myself.

At this point, you can bet that I wanted to get rid of Java for real. The IcedTea binary package is about 30+ MB, plus nearly 9 MB for YUI Compressor and Google Closure Compiler. To be fair, they don't use a lot of space, but I just don't like having Java on my system when only two programs need it. Besides, installing IcedTea pulls in two virtual packages and two more packages for configuration.

So, my choice was to use the online ones, as I already knew Google hosts one, and there is also a popular Online YUI Compressor hosted by Mike Horn.

Achieving Java-less is only a matter of commands, using curl; the first one is for YUI Compressor, the second one for Google Closure Compiler:

curl -L -F type=CSS -F redirect=1 -F 'compressfile[]=@input.css;filename=input.css' -o output.min.css
curl http://closure-compiler.appspot.com/compile --data output_info=compiled_code --data-urlencode 'js_code@input.js' > output.min.js

Replace the input and output filenames with yours. If you need to pipe, use - instead of the input file; - indicates the content comes from standard input.

YUI Compressor can also minify JavaScript, but I found Google Closure Compiler does a slightly better job. If you use YUI Compressor for JavaScript as well, simply change to type=JS.

Both the Online YUI Compressor and Google Closure Compiler have some options you can simply add to the command. It shouldn't be hard, since you have a command template to work from. I only use the default compression options; they are good enough for me.

Recently, I had a thought about Disqus loading. Instead of loading via a button, automatically loading when the comments section comes (or is about to come) into view might be a nice idea. But I dropped the idea when I thought about my own browsing behavior: most of the time, I scroll down to the bottom of the page. If so, simply loading Disqus is the better method, in my opinion.

Anyway, I still made a quick sample code:

The important part is the detection:

if ($w.scrollTop() + $w.height() >= $t.offset().top - $w.height()) {
    inview = true;
}

$w = $(window) and $t = $('#target'). When the bottom of the view area reaches the target's top, with a margin of $w.height(), the target is detected. Much like sticking an element at the top, this is as simple as that; nothing much to explain.

You can do many things with such detection, such as infinite scroll (which I don't like), loading images or external resources, or loading Disqus comments to reduce bloating.

This only demonstrates the Y-axis and trigger-from-above detection. In practice, it should only be triggered once as long as the state has not reverted. It's not hard to do, but I don't need it, so the post ends here.
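The detection and the trigger-once behavior can be factored into pure functions; a minimal sketch (function names are mine, not from the post's sample code):

```javascript
// Is the target in view? bottom of the view area, plus one extra
// viewHeight of margin, must reach the target's top.
function inView(scrollTop, viewHeight, targetTop) {
  return scrollTop + viewHeight >= targetTop - viewHeight;
}

// Trigger-once wrapper: fire the callback only the first time.
function once(fn) {
  var fired = false;
  return function () {
    if (!fired) { fired = true; fn(); }
  };
}
```

Wiring once(loadDisqus) into a scroll handler that calls inView($w.scrollTop(), $w.height(), $t.offset().top) gives the lazy-loading described above.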


As you may have noticed, there are some marks on the right edge, which are the "Right-side navigation", as I'd call it. They serve like a Table of Contents.

For now, it's three levels:
  1. Post title,
  2. First heading level in the post, and
  3. Second heading level.
Here is a screenshot:

I hope this will encourage people to go through the headings, and maybe they will be interested in reading more when they don't have to scroll down a page in order to skim the content, but can simply hover their cursors over these marks.

At first, I was planning a list-like navigation at the top-right corner, opposite the settings at the top-left corner. It would look like an automatic Table of Contents, which I'd wanted for a long time but never started to write.

However, I got this idea of visualizing the logical position of each heading in the page on the right side.
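The visualization boils down to a proportional mapping: a heading at 30% of the document's height gets its mark at 30% of the navigation's height. A minimal sketch (function and parameter names are mine, not from the actual changesets):

```javascript
// Map a heading's vertical offset in the document onto the
// right-side navigation strip, proportionally.
function markTop(headingOffset, docHeight, navHeight) {
  return Math.round(headingOffset / docHeight * navHeight);
}
```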

I wrote the first changeset for post titles only, because I was afraid that subheadings might crowd the navigation. Later, subheadings were added in a second changeset, and the navigation looks fine, not over-crowded at all.

I didn't add a switch for this new navigation yet; I might or might not. If you think this navigation doesn't look good, please complain in a comment, so I can be motivated to add a switch so you can shut it off.

When you click on a mark, the page scrolls to the heading smoothly, not just jumping to it directly. This reminds me of my jQuery jk navigation plugin, which also uses smooth scrolling and was briefly installed on this blog a long time ago. I seriously want to add it back, so visitors can not only click on the right-side navigation with the mouse, but also navigate with the keyboard.

I believe that I am not the first one to implement such navigation style, if you know someone has implemented similar feature for a page, or as a plugin or script, please post the link.

I think this it worth to make it as a jQuery plugin, if you would like to use it, please do let me know. I may create one based on these code and generalize it for broad usages.

I have only tested it on Firefox, so if you see any bugs, I am sure you will, please leave a comment with what browser you are using. Also, any feedback are welcome!

Updated on 2012-05-21T09:40:04Z

It has problems with Google Chrome, and since no one even commented about this navigation, I decided to remove it. However, you may still see it on this page.

Right now, every place should be past April 1 in its local time. On March 31, I decided to create my own April Fools' Day joke, and here is a screencast of what would happen when you visited my blog on April 1 local time, in case you missed it:

Within 24 hours, the code will be removed. You may still be able to see it if you reset your system date to the 1st.

I even posted a fake post for luring feed readers in, because I knew they wouldn't see anything in their favorite readers. Also a post on Google+; I just did my best to spread the fun.

I don't know if I should paint some eggs next.

A recent engagement with a reader reminded me of a long-existing issue with threaded comments.


The content in this post is no longer reflecting the status of this blog. (2015-12-10T02:50:47Z)

I like threaded more than flat. They both have advantages and disadvantages. In threaded comments, as the depth increases, you can get lost in the discussion, just as you may in flat comments.

One reason I prefer threaded is that flat comments make it very easy to get confused when reading someone's reply to another comment, which could have been posted a few comments earlier. An experienced commenter would include the comment # or the commenter's name, but that's still problematic.

Another good reason to use threaded comments is that some comments ask for help; with threaded comments, I can reply to the question specifically. The asker can clearly see which comment is my answer to his or her question. This is not something you can easily work around with flat comments.

Threaded could easily resolve such confusion, but it has its own issues. As a discussion grows, it can still get as complicated as flat comments do.

But another issue has bothered me more than the one mentioned above: the layout. The comments section has a fixed-width design; when there are too many levels of replies, the widths of deeper comments get smaller and smaller. Eventually, it becomes too narrow, as you can see in the comments of this post.

In Disqus, you can switch to flat or limit the maximal depth of comments. But I don't want to use flat or to limit the depth.

Right now, there is a new option at the top-left corner to switch between variable width and fixed width. It is not meant to resolve the issue, but is simply a workaround. I don't really have any good idea for a brilliant layout for threaded comments.

I took the chance to add a notice just before the comments; it says:

Please keep your comments relevant to this post, and try not to post something like only "Thanks" as the entire comment; use the Like button of Disqus instead. You can use some HTML tags if you like.

This was something I had wanted to do a year ago; finally, I added it. This blog was created with a fixed width of 640px, and I have to break that, which I don't like, but I don't have a better way to deal with it.

Okay, I confess that I might have some tabby issue. I wrote Keep Last Two Tabs extension for Chrome to prevent accidental quit.

For a long time, I have always manually created a tab beside the pinned tabs in Firefox. Somehow, I can't feel comfortable without an unpinned tab. It has to look like

There must be one unpinned tab! (Any good shrink? xD)

Actually, it makes sense: you don't want to change the URL of a pinned tab; that's why you pinned it in the first place, isn't it? As a Pentadactyl user, I can always use the keybinding for opening a new tab, so I open URLs in a new tab directly.

But every time when I look at the tabs bar, it just doesn't feel right without an unpinned tab.

So, I added a few lines of code to my Pentadactyl configuration. I needed to implement it against Firefox's tab events, not on top of Pentadactyl's autocmd, which does not have a tab-removal event. You could check tabs with LocationChange instead, but that would do many unnecessary checks.

Note that the closing animation causes the tab to remain for a short time after the TabClose event is fired, but that can be checked, as you can see in my code. The code also handles the case where an open tab gets pinned: by attaching to TabPinned, it will immediately create a new tab.
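The decision logic itself is tiny; here is a sketch with tabs modeled as plain {pinned, closing} objects (my own modeling — the real Firefox tab objects are richer, and the closing flag stands in for the closing-animation check mentioned above):

```javascript
// Should we create a new tab? Yes, when no unpinned tab remains.
// Tabs still animating away after TabClose are ignored via `closing`.
function needsNewUnpinnedTab(tabs) {
  return tabs.filter(function (t) {
    return !t.pinned && !t.closing;
  }).length === 0;
}
```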

I think the code could be used directly in Vimperator as well.

I am glad Chromium is not my main browser, or I would also need to add a new option to KLTT for a similar thing.

I had this idea at the end of September 2010, when I was playing with Google Analytics' tracking code. I wrote some code for rating blog posts using the event value; the code did stay on my blog for a day or two before I took it down, as it wasn't too useful for me. But a function that allows visitors to report page issues could be very helpful, if someone is willing to click some buttons.

I have finished a simple script, and it's at the bottom of this blog:

Well, it doesn't look pretty. Here is the code in that HTML/JavaScript gadget:

<script src=""></script>
<script>
function init_page() {
  var gawr_options = {
    target: 'ga-wr',
    report_options: [
      {title: 'Image is not loaded'},
      {title: 'Link is broken'},
      {title: 'Other'}
    ]
  };
  new GAWR(gawr_options);
}
</script>
<div id="ga-wr"></div>

For the issue reports, I could write my own program to get a daily report, using my current daily report as a base. But I don't think I will trouble myself, not yet anyway. Right now, I can see the reports with a custom report in Google Analytics:

It works great for me for now. Note that you need to use Alert/Total Events instead of Pageviews; it's an event, not a page. The report gets updated very quickly, probably a few minutes after a report is made. I would say that's almost instant.

Now a little technical background on this script. Basically, you should use a different profile. The script tracks the page when a report is submitted, and the report is recorded as an Event. The event action is the issue name, and the event label is the additional information, as you see in the image above.
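With the classic ga.js command queue, recording a report as an Event boils down to a _gaq.push; a sketch (the 'Alert' category matches the custom report above, but the function name and arguments are my own):

```javascript
// Before ga.js loads, _gaq is just a plain array that queues commands;
// ga.js later replaces it with an object that executes pushes directly.
var _gaq = _gaq || [];

// Record one issue report as a GA Event:
// category 'Alert', action = issue name, label = additional information.
function reportIssue(issueTitle, extraInfo) {
  _gaq.push(['_trackEvent', 'Alert', issueTitle, extraInfo]);
}
```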

The event value only accepts an integer; a custom variable probably could do the trick, but I put the data in the event label. Another way to record is to rewrite the page URL when tracking the page, but I don't like that. It could be a benefit, though: rewriting the URL to /original-page-url/issue and still sending the event. That way, if you watch the Real-time tab, you can see a report come in, if you don't use a separate profile.

And remember, when a visitor reports, the page URL is recorded by page tracking; the user's browser, system, and everything Google Analytics collects by default is already in the data, too. Isn't this awesome and brilliant? I don't even need to write code to collect such data; if I need to check a visitor's browser, it's just there for me to read.

The Google Analytics API can do more than just website access statistics; you can set up a poll or something more. Imagine you let people vote, and you use the visitors metric or something similar to prevent some degree of voting spam.

The only catch is that the data isn't public without some coding, and it requires processing.


The project is dead and some links have been removed from this post. (2015-12-02T00:25:07Z)

ItchApe is a simple solution that enables you to show off your ape's current itch to the world. You scratch your ape, and its itch can be read.

1   Features

  • An itch can be described in up to 140 characters. (It's not a bird, it's an ape!) Every character will be shown literally; no HTML will take effect.

2   Notes

  • An itch can be kept up to an hour, but there is no guarantee, since itches are stored in a memory cache.
  • Itches are never stored in a database; once they are gone from the memory cache, they are gone.

3   Get started

3.1   Adopt an Ape

You need to adopt an ape first; you will get a Secret Key and Ape ID after you submit your Secret Phrase. Make sure you write down these three pieces of information.

3.2   Install the code

Once you have your Ape ID, you can install the following HTML code,

<script src=""></script>
<script src=""></script>
<script>get_itch('<YOUR_APE_ID>', 'itchdiv')</script>
<div>My ItchApe: <span id="itchdiv"></span></div>

The itch will be shown in itchdiv. It may read like:

My ItchApe: This is my itch (3 minutes ago)

3.3   Scratch your ape

To scratch your ape, enter the description of the itch along with the phrase, key, and ID.

3.4   Scripts

There are two basic Bash scripts for scratching and for getting the itch; you can download them from Google Code.

4   Developer Information

4.1   Rendered code

The HTML code rendered by /itchape/itchape.js looks like

<span class="itch">The description of itch.</span> <span class="itch_timesince">(3 minutes ago)</span>

4.2   /itchape/getitch.json API

If you want to write your own script, here is how you get the itch. Send a GET request to /itchape/getitch.json with your <APE_ID>; the returned data is JSON, or JSONP if you also give callback in the query string:

"ape_says": "...",
"itch": "...",
"scratched_at": 123456789.123
  • ape_says is actually the error message; it may have the values listed in the ape_says section below.
  • itch is the description of the itch.
  • scratched_at is the time the ape got scratched, in seconds since the Unix epoch; it's a float number.
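Given scratched_at, the "(3 minutes ago)" part of the rendered code can be derived like this (the helper name is mine, and pluralization is ignored for brevity):

```javascript
// Both arguments are seconds since the Unix epoch;
// scratched_at is the float returned by getitch.json.
function timeSince(scratched_at, now) {
  var minutes = Math.floor((now - scratched_at) / 60);
  return '(' + minutes + ' minutes ago)';
}
```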

4.3   /itchape/scratch API

If you request using the GET method, you get a normal page. If you request using the POST method, it's the API for scratching.

You need to supply secret_phrase, secret_key, ape_id, and itch. If it's a successful call, the data will be sent back as if you made a getitch.json call; if not, you will get this JSON: {"ape_says":"I'm not your ape"}.

You can also supply callback for JSONP.

4.4   ape_says (error message)

  • "Yeah, I was itching for that!": An itch description is retrieved successfully.
  • "Not itching, yet!": There is no data in memory cache for that Ape ID.
  • "I'm not your ape!": The phrase, key, and ID do not match, there you cant scratch this ape.
  • "Oooh... that feels good!": Scratch is successful and wonderful.

You have to parse these messages; there are no error codes or a simple true/false to tell whether a call succeeded. Apes don't know what an API is; they say what they want.
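A tiny helper for that parsing might look like this (the function and its mapping are my own sketch, based on the messages listed above):

```javascript
// Map ape_says to a success/failure boolean.
// Success messages: retrieval and scratch confirmations.
function apeSuccess(ape_says) {
  return ape_says === 'Yeah, I was itching for that!' ||
         ape_says === 'Oooh... that feels good!';
}
```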

5   Support

If you have anything you want to report or request, please submit an issue to the issue tracker.

You probably have noticed (really, you have already?) that if you open a link using any mouse button or press Enter on it, the link gets highlighted with a blue background.


This blog is no longer using this, nor is YouTube. (2015-12-25T05:11:08Z)

The idea came from YouTube's newly designed homepage; I noticed the link feature when it was still experimental. I feel it's a nice idea, especially when you are selectively watching videos from a list in new tabs. You know the position of the one you just finished watching, and then you can move on.

I stole the idea but not the code; I didn't even use a tool to read their code, JavaScript or CSS, as it's simple stuff. You only need to have "I get an idea!", and that's what I didn't have.

You can read the code I made in the diffs: the JavaScript diff and the CSS diff. If you know these languages, it's simple code, as I said. The only special thing worth mentioning is that margin compensates for the extra space that padding uses. The reason for adding padding is that the edge of the text would otherwise touch the edge of the blue box.

Here are sample link styles with padding and without padding.

Since it's possible to open a link using the mouse or the keyboard, two events need to be handled.
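Sketched as a single predicate, the two handlers might share logic like this (the names and the mousedown/keydown split are my assumptions, not the actual diffs linked above):

```javascript
// Decide whether an event should trigger the blue highlight:
// any mouse button press on the link, or the Enter key.
function shouldHighlight(eventType, key) {
  return eventType === 'mousedown' ||
         (eventType === 'keydown' && key === 'Enter');
}
```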

My original, more elaborate idea was an indicator flying to the new link the user just clicked: the indicator takes off from the previous click, then flies to the new link. But that seemed too much, overdone. How about a simple background color fading in and out? Using a CSS transition?

Try to press right mouse button with any link, like it?

1   Update on 2011-07-21T22:24:21Z

Some months after I posted this, it didn't work anymore. Yesterday, I finally decided to fix it. The JavaScript code is actually being evaluated, but the prompt from :bmark doesn't show up. If I run it manually, it still doesn't show up, yet the command has been entered into the history.

I read some plugins' source code, but nothing really gave me any hint; they just work with virtually the same code. But then I realized: theirs is actually executed after an XMLHttpRequest().

So, the fix is

# map a :js my_bookmark_adder()<CR>
map a :js setTimeout(my_bookmark_adder, 0)<CR>

Or you can do it in the function.

I don't know when and what actually caused this, and I don't really care. If you find out which commit caused it, feel free to tell me.

2   Original post

If you use bookmarks to organize your to-read list, or you bookmark a certain website a lot, you might want to tag with readlater or remove some words from bookmark titles, such as the website's name.

I have a project called BRPS, which has an old client script, brps.js. Whenever this client requests data from the BRPS server, the server increases the request count, and it has a statistics page showing that count. Recently, a new client was implemented, gas.js. This new client doesn't communicate with the BRPS server, so I need another way to get a statistic on how many requests have been made. I don't want to write more code on my server to log those requests, so Google Analytics is the best option for me.

1   Non-asynchronous method

function _track() {
  try {
    var pageTracker = _gat._getTracker("UA-#######-#");
    pageTracker._setDomainName('none');
    pageTracker._setAllowLinker(true);
    pageTracker._trackPageview();
  } catch(err) {
  }
}

if (window._gat) {
  _track();
} else {
  $.getScript(('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js', function(){
    _track();
  });
}

With the code above, my script can track requests from different domains[1], which are not mine. I didn't assign a path via pageTracker._trackPageview('/path/something'), because I want to see exactly where the requests are made from. The UA-#######-# is only used by this script, and I don't need to log statuses such as /status/success or /status/failed.

2   Filters

I created a new profile, and two more based on the first one. The last two each use a filter. The first filter is:

Custom filter: Advanced
Field A:       Hostname     (.*)
Field B:       Request URI  (.*)
Output:        Request URI  $A1$B1

The profile with this filter sees results with the hostname and request path joined together. A sample output:

The second one is

Custom filter: Advanced
Field A:       Hostname     (.*)
Field B:       (unused)
Output:        Request URI  $A1

The profile with this filter sees results with the hostname only; I would like to know which websites are the top users. A sample output:

3   Asynchronous method

I knew there was a method called asynchronous tracking. But I wasn't getting it when I saw the code using a JavaScript array, _gaq[], to store commands. At first, I thought that's kind of bad. Does the embedded ga.js read that array every time? Does the script clean it up?

I realized I was wrong when I read this:

When Analytics finishes loading, it replaces the array with the _gaq object and executes all the queued commands. Subsequent calls to _gaq.push resolve to this function, which executes commands as they are pushed.

So, my _track() needs a little modification:

function _track() {
  var _gaq = window._gaq || [];
  _gaq.push(['_setAccount', 'UA-#######-#']);
  _gaq.push(['_setDomainName', 'none']);
  _gaq.push(['_setAllowLinker', 'true']);
  _gaq.push(['_trackPageview']);
  if (!window._gaq)
    window._gaq = _gaq;
}

4   Updates

  • 2010-09-25T23:48:40+0800: Add Asynchronous method section

[1] is gone.

A standard asynchronous Google Analytics tracking code would look like:

<script type="text/javascript">

  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-#######-#']);
  _gaq.push(['_trackPageview']);

  (function() {
    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
  })();

</script>

I didn't like how it looks, so I decided to re-write it with jQuery:

if (window._gat) {
  _track();
} else {
  $.ajaxSetup({cache: true});
  $.getScript(('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js', function () {
    _track();
  });
  $.ajaxSetup({cache: false});
}
It checks if a Google Analytics script is already included. The script is the same and reusable; some websites might have multiple tracking codes being executed, and there is no need to create many <script> elements. If the script isn't included, it loads it using jQuery's getScript(). Within the callback, it logs the pageview. You might also want to put the _gat... call into a try {} catch ...; the older non-asynchronous tracking code does that.

You can also see it uses $.ajaxSetup() to set up cache use. By default, jQuery appends a timestamp like _=1234567890 as a query parameter to the URL of the script you want to load. That timestamp is called a cachebuster, and it causes the web server to send the same content to the client even when the content isn't modified. I discovered this behavior when I was adding new code to this blog.

In a normal request, your web browser checks with the server. If the server returns 304, the browser uses the ga.js it already has in hand. With a cachebuster, that won't happen; the browser receives the same content again and again. Using ajaxSetup() ensures the cache is in use.
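To make the cachebuster concrete, here is roughly what happens to the script URL with and without caching (a simplified imitation of jQuery's behavior, not its actual code; the example URL is made up):

```javascript
// With cache enabled the URL is untouched, so 304 responses work;
// with cache disabled a unique "_=<timestamp>" parameter is appended.
function scriptUrl(url, useCache, now) {
  if (useCache) return url;
  var sep = url.indexOf('?') === -1 ? '?' : '&';
  return url + sep + '_=' + now;
}
```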

The only part I don't like in my code is how it decides the script link; it doesn't look pretty to me.

It's slow, and it looks like an Ishihara color test, but it's not wise to use it as a medical examination for color blindness. The code is a mess; don't read it, it's like a cheap movie on this IMDb list. The best and fastest generation is on Chromium. Firefox is kind of slow, but it's not its fault; you can blame my code.

Oh! the font is Comic Sans MS!

1   Firefox 4 and Opera

Just heard[3] about this new benchmark, Kraken[4], so I tried it with FF4 beta and Opera 10.61. I only have two browsers on my system currently; it's too bad that I don't have Chromium to compare, because the results are quite interesting.

TEST                         COMPARISON            FROM                 TO               DETAILS
                                                  (Opera 10.61)   (FF 4.0b7pre 20100914)

** TOTAL **:                 1.50x as fast     19627.2ms +/- 2.3%   13061.5ms +/- 1.1%     significant


  ai:                        1.41x as fast      3344.5ms +/- 13.0%    2371.8ms +/- 6.4%     significant
    astar:                   1.41x as fast      3344.5ms +/- 13.0%    2371.8ms +/- 6.4%     significant

  audio:                     1.55x as fast      6657.7ms +/- 1.2%    4307.2ms +/- 1.0%     significant
    beat-detection:          1.071x as fast     1301.8ms +/- 3.7%    1215.3ms +/- 2.0%     significant
    dft:                     2.54x as fast      2844.6ms +/- 2.8%    1119.4ms +/- 2.7%     significant
    fft:                     1.051x as fast     1137.7ms +/- 1.3%    1082.4ms +/- 3.0%     significant
    oscillator:              1.54x as fast      1373.6ms +/- 1.8%     890.1ms +/- 0.9%     significant

  imaging:                   2.30x as fast      7628.8ms +/- 1.4%    3318.3ms +/- 1.6%     significant
    gaussian-blur:           3.46x as fast      5256.8ms +/- 1.8%    1517.7ms +/- 0.8%     significant
    darkroom:                1.67x as fast       989.6ms +/- 1.9%     592.7ms +/- 0.8%     significant
    desaturate:              1.144x as fast     1382.4ms +/- 1.3%    1207.9ms +/- 3.8%     significant

  json:                      *1.32x as slow*     302.4ms +/- 1.4%     399.1ms +/- 0.7%     significant
    parse-financial:         *1.88x as slow*     134.9ms +/- 1.9%     253.9ms +/- 0.8%     significant
    stringify-tinderbox:     1.154x as fast      167.5ms +/- 2.1%     145.2ms +/- 1.0%     significant

  stanford:                  *1.57x as slow*    1693.8ms +/- 2.8%    2665.1ms +/- 0.6%     significant
    crypto-aes:              *1.93x as slow*     377.8ms +/- 9.8%     729.6ms +/- 0.6%     significant
    crypto-ccm:              *1.109x as slow*    473.2ms +/- 10.5%     524.7ms +/- 1.9%     significant
    crypto-pbkdf2:           *1.85x as slow*     634.3ms +/- 4.0%    1176.0ms +/- 0.7%     significant
    crypto-sha256-iterative: *1.126x as slow*    208.5ms +/- 1.1%     234.8ms +/- 1.6%     significant

As you can see, FF4 is faster in three categories of tests: ai, audio, and imaging; and slower in the json and stanford categories. Last month, from the results of SunSpider, Opera 10.61 was faster than FF4.0b5pre. Now, with the tests above, FF4.0b7pre is faster.

[3]The original link was, but it returns 410 GONE.
[4]The original link was, the content was gone.

2   Firefox 3.6.9

RESULTS (means and 95% confidence intervals)
Total:                       27035.3ms +/- 1.2%

  ai:                         4529.0ms +/- 5.3%
    astar:                    4529.0ms +/- 5.3%

  audio:                      9842.9ms +/- 1.3%
    beat-detection:           2299.4ms +/- 2.2%
    dft:                      3488.6ms +/- 1.8%
    fft:                      2212.1ms +/- 3.0%
    oscillator:               1842.8ms +/- 4.0%

  imaging:                    7502.1ms +/- 1.5%
    gaussian-blur:            3481.2ms +/- 2.8%
    darkroom:                  834.8ms +/- 0.8%
    desaturate:               3186.1ms +/- 2.5%

  json:                        520.9ms +/- 1.2%
    parse-financial:           350.7ms +/- 1.4%
    stringify-tinderbox:       170.2ms +/- 1.9%

  stanford:                   4640.4ms +/- 1.0%
    crypto-aes:               1367.0ms +/- 0.9%
    crypto-ccm:               1028.7ms +/- 1.4%
    crypto-pbkdf2:            1676.7ms +/- 1.8%
    crypto-sha256-iterative:   568.0ms +/- 0.6%

I ran it four times; two of them crashed Firefox, and this test used a lot of memory, more than 1 GB. I am also compiling Chromium for this benchmark; the result will be added later.

3   Firefox ESR 17.0.2 with Kraken 1.1

RESULTS (means and 95% confidence intervals)
Total:                        6688.6ms +/- 1.4%

  ai:                          236.7ms +/- 3.7%
    astar:                     236.7ms +/- 3.7%

  audio:                      2528.1ms +/- 3.7%
    beat-detection:            610.8ms +/- 1.1%
    dft:                      1047.5ms +/- 6.1%
    fft:                       444.9ms +/- 0.6%
    oscillator:                424.9ms +/- 18.1%

  imaging:                    2505.4ms +/- 2.4%
    gaussian-blur:            1438.1ms +/- 4.1%
    darkroom:                  550.3ms +/- 0.2%
    desaturate:                517.0ms +/- 4.3%

  json:                        276.8ms +/- 2.7%
    parse-financial:           152.6ms +/- 3.6%
    stringify-tinderbox:       124.2ms +/- 4.4%

  stanford:                   1141.6ms +/- 1.4%
    crypto-aes:                263.0ms +/- 1.5%
    crypto-ccm:                199.5ms +/- 1.5%
    crypto-pbkdf2:             509.0ms +/- 1.7%
    crypto-sha256-iterative:   170.1ms +/- 3.7%

4   Chromium 7.0.517.5

RESULTS (means and 95% confidence intervals)
Total:                        22962.1ms +/- 0.6%

  ai:                          1220.5ms +/- 0.5%
    astar:                     1220.5ms +/- 0.5%

  audio:                       8739.2ms +/- 0.8%
    beat-detection:            2288.9ms +/- 1.1%
    dft:                       3169.9ms +/- 2.1%
    fft:                       2370.6ms +/- 0.5%
    oscillator:                 909.8ms +/- 0.6%

  imaging:                    11067.4ms +/- 0.9%
    gaussian-blur:             5549.8ms +/- 1.9%
    darkroom:                  2748.7ms +/- 1.7%
    desaturate:                2768.9ms +/- 1.1%

  json:                         890.2ms +/- 0.3%
    parse-financial:            507.0ms +/- 0.4%
    stringify-tinderbox:        383.2ms +/- 0.4%

  stanford:                    1044.8ms +/- 0.7%
    crypto-aes:                 229.3ms +/- 0.9%
    crypto-ccm:                 190.3ms +/- 0.5%
    crypto-pbkdf2:              435.0ms +/- 0.8%
    crypto-sha256-iterative:    190.2ms +/- 1.4%

5   Summary
Browser   Version     Total Time   To FF4
Firefox    4.0b7pre   13061.5 ms  --------
Opera     10.61       19627.2 ms  + 50.27%
Chromium   7.0.517.5  22962.1 ms  + 75.80%
Firefox    3.6.9      27035.3 ms  +106.98%

I seem to find more features of Firefox 4 every time I use it. I was wandering through the menus (believe me, I did that when I first installed it), and somehow I hadn't noticed there were two new things: Inspect and Web Console. You can activate them by pressing Ctrl+Shift+I and Ctrl+Shift+K.

The Inspect (the top windows on the left, right, and bottom) is really what you'd call a very pre-alpha feature; I got a crash once by just hovering my cursor around. If you try to compare it with Firebug[1] or the Developer Tools in WebKit-based browsers, you will be very disappointed. It just shows you the values, nothing fancy; you cannot edit or tweak your HTML on the fly. FF's Inspect is like a blackboard updated by hand; the others are like a 60-inch HD plasma TV updated by automatic, intelligent programs.

The Web Console (the frame above the Mozilla webpage) seems more mature, and it supports console.log(). And don't forget the JavaScript Console (Ctrl+Shift+J); that's the place to read JavaScript errors, which the Web Console is not for. I think Firefox will have integrated developer tools someday; Firebug is great, but I would like to have them built in.

[1]The latest 1.6b1 still failed for six tests in Firefox 4.0b4.


BRPS is dead. (2015-12-02T02:44:34Z)

After 40 posts, I decided to add my own BRPS[1] (Blogger Related Posts Service) gadget to this blog. I modified the current brps.js and embedded it into the source; the original uses jQuery 1.3, which would cause some problems on this blog.

Here is the code:

<div id='related_posts'></div>
// GPL'ed, version 3 or later
function BRPS_watchdog() {
  if (window.brps_start == undefined)
    return;
  diff = (new Date()).valueOf() - brps_start;
  if (diff >= 30 * 1000) {
    $('#related_posts').append('<p style="color:#f00">Something went wrong with BRPS server!</p>');
    window.brps_start = undefined;
  } else {
    window.setTimeout('BRPS_watchdog()', 5000);
  }
}

function BRPS_get() {
  var key = '%%%%% YOUR BRPS KEY %%%%%';
  var blog_id = '%%%%% YOUR BLOGGER BLOG ID %%%%%';

  // Get Post ID
  var links = $("link[rel='alternate']");
  var post_id = '';
  for (var i = 0; i < links.length; i++) {
    m = /.*\/feeds\/(\d+)\/comments\/default/.exec($(links[i]).attr('href'));
    if (m != null && m.length == 2) {
      post_id = m[1];
    }
  }
  var $rps = $('#related_posts');
  if (blog_id != '' && post_id != '') {
    if (window.brps_start == undefined)
      window.setTimeout('BRPS_watchdog()', 5000);
    window.brps_start = (new Date()).valueOf();
    $.getJSON("" + blog_id + "&post=" + post_id + "&key=" + key + "&callback=?",
        function (data) {
          window.brps_start = undefined;
          if (data.error) {
            $('<p>' + data.error + '</p>').appendTo($rps);
            if (data.code == 3)
              // Need to retry in 5 seconds
              window.setTimeout('BRPS_get()', 5000);
          } else {
            if (data.entry.length > 0) {
              var $rps_ul = $('<ul></ul>').appendTo($rps).hide();
              $.each(data.entry, function (i, entry) {
                // entry.link assumed: the related post's URL
                $('<li/>')
                  .append($('<a/>').attr('href', entry.link).attr('title', 'Score: ' + entry.score.toString()).text(entry.title))
                  .appendTo($rps_ul);
              });
              $rps_ul.slideDown().fadeIn();
            } else {
              $('<p>No related posts found.</p>').appendTo($rps);
            }
          }
        });
  } else {
    $('<p>Only available in single post.</p>').appendTo($rps);
  }
}
Since my blog always requires jQuery, I didn't put the embedding code in the code above. You might need the following code before the code above:

<script src=''></script>

A few changes:

  • The Blog ID[2] and Key[1] are hard-coded in the script.
  • Removed the BRPS options, because I know what I want.
  • No longer renders the title; if the page is not a single-post page, it shows a message instead of emptying the gadget to show nothing.
  • The list slides down and fades in.

Generally, it should run faster, though you wouldn't feel it.
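The post-ID extraction in BRPS_get can be exercised on its own; a sketch (the wrapper function and the example feed hrefs are mine — Blogger's alternate links follow this /feeds/<id>/comments/default shape):

```javascript
// Extract the Blogger post ID from a per-post comments feed href,
// using the same regex as BRPS_get; returns '' when there is no match.
function extractPostId(href) {
  var m = /.*\/feeds\/(\d+)\/comments\/default/.exec(href);
  return (m != null && m.length == 2) ? m[1] : '';
}
```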

[1](1, 2) The brps.js is no longer available to new users, see Using BRPS new method to get related posts list.
[2]Search your blog HTML source for blogID, you will see it.

When I was a kid (around 10, almost 20 years ago, in the 90s), one day my father brought home an IBM 5550. It had a program whose command name I have absolutely no idea of. You could draw a line on a plane, and it would generate a 3D image as follows.

I didn't know English at the time; I doubt I could even spell "English" correctly. I wrote a similar version using JavaScript + HTML5 Canvas. I am not sure if it gives the same result, but it should be pretty close.

1   Play it!

Only tested with Chromium 6.0 and Firefox 3.6, so please don't swear if it doesn't work in your favorite browser.

Hold Shift key and move mouse, or simply click, click, and click.

2   Your Masterpiece

Your creativity will be placed below, so you can save your work if you like. Note: the image doesn't actually have a background color.

3   Do you know it?

So, do you know the name of this command? The only things I can still remember are that the computer ran MS-DOS, I think, and that the monitor was a yellow monochrome CRT. (That's the reason I made the drawing yellow.) The computer was an auction item, so the program was probably installed by the previous owner.

And do you know what you can call this kind of process?