
Three years ago, I listed the programs I ran. Since then, 10 of the 31 have been removed or replaced: a 32% turnover in three years. I only list the removed or replaced programs here; the remaining 21 are still in use.

No more of these

Five programs I stopped using after the initial post, for different reasons.

CenterIM and Pidgin: I think I quit instant messaging around May 2010, along with Twitter and all the so-called social networking websites. I went back to the old-school tool, something called email, for better communication.

Shell.FM: probably removed after I canceled my Last.fm subscription. I still use Last.fm scrobbling, I just don't listen to its radio. Grooveshark might have something to do with why I canceled the subscription.

Arch Linux: well, I still have it on another computer, but I haven't booted that computer up for years. So I didn't actually remove it, I simply don't use it anymore. However, I still read its forums and post replies.

Xpdf: it all started when I wanted to reduce the packages on my system, so a script for downloading and serving a local copy of a PDF to feed to Google Docs Viewer was born. Then I finally found a better way to do it, with Google Docs Viewer right in the browser.

Switched

OpenOffice.org to LibreOffice: the reason couldn't be simpler. LibreOffice is free, and OpenOffice.org was doomed once Oracle started doing to it what Oracle has done to other projects it acquired. Even though it is now called Apache OpenOffice, it's never going to return to the days OO.o once had. And after LibreOffice was born, I could tell more improvements were being made.

BashPad to Bashrun, which has a lot of features, so I switched. Its 1.0+ version has even more features, but I never updated to the latest version. The years-old one has served me well: it runs the programs I ask for.

Conky to dzen-status. In the beginning I started with Bash scripts, then ported them to C for better performance. It doesn't show as much information as Conky did (for lack of space, it only has one line), but it's more than enough for a glimpse of the current system status and resource usage.

Vimperator to Pentadactyl. I made the decision to switch after I read about the split among the developers. It'd be two years this year; so far I like Pentadactyl, it's stable and works nicely.

FluxBox to dwm: stacking to tiling, moving windows around to not moving them around. It's hard to believe that I haven't even been using dwm for two years; it feels like ages since the switch. I used to think tiling wasn't really my thing, but now I don't feel there is a huge difference. After a week of use, you will feel the charm of tiling, and even more the power of the keyboard: managing windows with ten fingers without leaving the keyboard for the mouse. Efficient.

Three more years?

After three years, there have been some significant changes, such as the window manager. But I don't think there will be much change three years from now, because I really feel I have the right composition of programs. I know them well and keep making them better for me.

Now?

The current header and footer are shown below, and they will probably last forever:

New blog header and footer


It may look insignificant at first glance, or be considered too small to notice, but it's not. Why not? Because we tend to know what to expect at the top and bottom of a page. You know what's on the first line of a page whether the text is 72pt or 8pt; size doesn't matter when the information is already known. When you need to read the blog's name, you scroll to the top and there you have it.

Even though the current header looks very simple, I want to simplify it more, because "Outputs directly from me" doesn't mean a lot to me now that I read it and think about it. I could remove it and move the navigation menu up, next to the title. But let's leave that for next time.

As for the footer, same thing. I only kept that sort of tag line because it has been on my blogs for more than four years.

Final thoughts

There is no standard defining what has to be in a header and footer, and there shouldn't be one. For me, they have to be simple. I've learned that a lengthy sentence doesn't equal being informative. Keep them simple and essential; that's useful and practical.

You do not need a 100-word blog description below the blog title, nor a 100-word disclaimer in your footer. It's situational and depends on the type of website, but less truly is more.

You also do not need a fancy header image. Yes, such an image may showcase your artistic designs if your website is about design. However, I have seen a few websites, such as photographers', where the only images are the photos they shoot. Sometimes, no, most of the time, text is even more powerful than an image, just as black-and-white photos can have more impact on the viewer's mind than color photos do. Fewer complicated elements is a trade for more focus.

Nonetheless, there are times when more is more. I just can't think of one at this moment.

Good header and footer are good when they are.

Weird behavior

Around last May[1], I started to notice a weird behavior: some people like to clone a repository on GitHub, Google Code, or, I believe, any other source code hosting provider, without ever adding any commits. A few people even forked my dotfiles, which really loses me.

For example, the following screenshot shows that only one of 18 clones has made any changes:

Nothing has ever been done with the other 17 clones.

Rationale?

I have long tried to understand, or to explain to myself, why they do that. Whenever I see cloned repositories with no commits under someone's account, I ask myself:

Why?

What's the point of cloning a repository under one's account and making no commits to it? Why not just clone the original repository locally, even if you do plan to make some commits locally without pushing to a publicly accessible repository?

Maybe they will commit or contribute some changes later?

That still makes no sense at all. You can always push to another repository later on, you do not need to clone to your account first.

If you are this type of cloner, or you know their reasons, please leave a comment; I'd like to understand.

My cloning

Whenever I want to contribute, I:

  • Clone the original repository locally, then work on it.
  • Commit, and commit again.
  • Go to the hosting provider and fork or clone (whatever they call it) the original repository under my account.
  • Edit the local repository configuration, changing the default push path to the one under my account.
  • Push the changes.
  • Create a pull request.
  • Once pulled, delete the forked or cloned repository, unless I am still going to make more contributions.
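For the fork-then-push part of that list, a rough sketch of the git commands could look like this (the repository URLs and the myfork remote name are made up for illustration):

git clone https://github.com/someuser/someproject.git   # work on a local clone of the original
cd someproject
# ... edit, then commit, possibly several times
git commit -a -m "Fix something"
# after forking on the hosting provider, add my fork as a second remote
git remote add myfork git@github.com:myuser/someproject.git
git push myfork master
# open the pull request on the website; once it is merged, the fork can be deleted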

The reason I don't fork or clone before working on it is that sometimes you realize your idea may not be good enough once you seriously read the source code in order to make changes; you may drop the idea. I make sure I have changes committed before I fork or clone, and only do that when I am sure I have something to contribute or to push.

I don't keep a repository whose commit history is just part of the original's. I really don't understand why so many people keep a clone, a super-outdated clone, under their account. Well, not just one: I have seen a few people whose accounts contain dozens of outdated clones, and that's all those accounts have, they are full of outdated clones.

Use star or watch

One reason I can remotely guess is that they use a fork or clone to track the repository. If so, then please:

  • If you like a repository, star it; don't fork or clone it.
  • If you are interested in changes to a repository, just watch it!

You won't lose them. For example, on GitHub you have a special Stars page that lists all repositories you have starred. A similar list can also be found on your Google Code profile page.

Once, I tried to search for a repository on GitHub via a search engine. There are some real clones, but when they are mixed with those without additional commits, it's really hard to see which one is the original or a good fork. You need to understand that forks can sometimes get even higher rankings in search results if more people use the forked ones.

And there are also people who fork or clone by creating new repositories, not by using the fork or clone buttons. This adds another dimension of trouble. Things can sometimes get a little messy.

From my perspective, and my habit of keeping my online accounts in as clean a state as possible, those people stuff trash or useless repositories into their accounts. I am by no means saying those repositories are literally trash, just that they are useless since they are outdated. It's like hoarding online; those forkers or cloners are repository hoarders in my opinion. That'd be how I describe them, since I don't know why they do it.

Please do understand that I am not trying to criticize those people's behavior, but simply to let them know there is no need to do this, for themselves or for others. There is no absolutely right or wrong way to use these services, but there are better ways to do things when the criteria are met.

[1] I first noticed it at 2012-05-23T02:38:45Z.

Over the years, I have always felt uneasy when I read the news. As Internet technology improves, receiving news gets much easier, whenever and wherever you are. You can read breaking news on mobile and watch live broadcasts, much as you would have watched television in the living room a decade ago.

It's amazing but also frightening.

No more delay, it's delivered straight to you. You no longer have time to think about or comprehend what is going on; it hits you as it happens. The flood of news can easily overwhelm you.

A few hours ago, I learned about the Sandy Hook Elementary School shooting via Photo Blog. The word shooting is the reason I don't read news often and try not to visit Google News. I have subscribed to some photo journals, and I skip entries when I know the content is about violence, crime, or anything that could turn your day into darkness.

I still remember the Aurora and Virginia Tech shootings.

And I will remember this photo, remember the faces of these scared schoolchildren, remember that even as they were being escorted out, protected by the policemen, they were still in terror, remember that a first-grade teacher hid 15 students in a bathroom and barricaded the door, telling them to be completely quiet in order to keep them safe.

I had put a link to Dynamic Views (simply append /view/ to the URL) in the top navigation for a very long time. Dynamic Views isn't too bad as an alternative way to view a blog if you have a nice computer, though it's kind of a resource hog in my opinion. Anyhow, I'd like readers to have the option to choose when it's within arm's reach already.

But two days ago, I noticed it now shows comments and a comment form. I may be wrong since I don't use Dynamic Views, but I don't recall them being in Dynamic Views before; a popup window or something was the way to access comments.

I went to change Comment Location to Hide, thinking it did what it said, hiding the comments. And it did hide the comments section, but it also disallowed commenting. This caused my blog to show no comment section for almost two days, until I found out moments ago.

Why do I mention disallowing when it seems implied once you choose to hide comments? Because I use Disqus with comment synchronization enabled, so comments are posted back from Disqus to Blogger's comment system for backup purposes.

The first problem is that it hides not only the comments in Dynamic Views but also the comments in the normal Blogger layout; that is a simple template design issue I can bypass. The real problem is that Hide means Disallow, so Disqus can no longer synchronize the comments back. I have no choice but to turn it back on if I want to back up comments.

Why don't I want comments to show up in Dynamic Views in the first place? Because I can only make comments synchronize back to Blogger, not the other way around, let alone both ways. Therefore, I must not let regular readers have a chance to use the Blogger comment form.

If you want to suggest just enabling Disqus in Dynamic Views: we can't, maybe in the future. Dynamic Views is a unique interface, I totally agree, but it should be a little more flexible about turning options on and off.

There may be a way around it; I think you can add the Disqus embed code to each post, and I saw some project for having Gists embedded in Dynamic Views. However, it's utterly impractical given how much effort you must put in just for Disqus to show up.

I tried to add a message to Comment Form Message asking people not to comment in Dynamic Views, but that setting doesn't work for Dynamic Views. The same goes for Embedded/Full page/Popup window; comments are always embedded in the post.

Anyway, since I have no control over the comment section in Dynamic Views, I decided to take that link down, even though people can still append /view/ to access it.

I have to say, I hardly see any blogs using Dynamic Views, not even Google's blogs. It seems the Blogger team developed a product that doesn't get much use. Much the same goes for Blogger Stats, in terms of the value bloggers give it. The worst part is not the lack of customization, but that there is no option to turn them off completely.

I began to use Block unwanted websites in late February. Just a month later it broke, and it has been broken for almost four months since; a discussion was started back in late March. As with some Google products, if it's not the current hot product like Google+, you often get a late response, or nothing at all, from Google's staff. Luckily, this time we did get a couple of replies from a Google employee.

On March 18, the OP posted about the issue; three months later, on June 19, a Google employee finally replied to acknowledge it. Better late than never, right?

Half a month later, on July 4, a second reply from the same employee said the team was working on the issue and provided a bare workaround for unblocking, which I don't need since I already know about the unblocking function. Most importantly, there is still no mention of the actual problem, why the function doesn't work.

When I first noticed the issue, it looked as if someone had deliberately pulled a minified JavaScript file from Google's server.


To me, it doesn't look like something is actually broken because of a coding error. Like I said, it looks like the script was pulled, and therefore the functions are not available.

I don't know what the real cause is, only Google knows, but I guess we, the users, must have blocked a lot of sites. 500 sites are allowed per user; that's a lot. I think maybe Google can't handle that kind of per-user filter. But that's only my guess.

The thing is, Google must have known from the beginning if they did pull the JavaScript, and I really hate Google for the late response and lack of proper handling. Besides, when suddenly no one was blocking websites, they must have noticed the database had stopped growing. There is no way on Earth they didn't know when the issue appeared. They are Google; this blocking data is worth compiling statistics on, even if they never planned to use it in the ranking algorithm.

If they want to pull the functionality, that's fine by me. But they need to tell people; just put up a notification saying the function is temporarily disabled, and that's really okay with most people. Disappointing, yes, but much better than not knowing the cause.

I sincerely feel I have come to dislike Google's way of managing things more and more over recent years, and this case is just one of the reasons. They keep talking about government transparency, but they aren't even transparent enough to tell us the cause. They don't need to give the technical details, most people wouldn't understand them anyway. A simple summary would satisfy those of us who have been waiting for an answer and a resolution for nearly four months.

I just saw a referral from adf.ly:


I didn't click on that link, because I didn't know what it actually was, even though it's apparently some sort of shortened URL. I googled it and read a few pages of results.

Clearly, I dislike adf.ly.

Making money out of your shortened link is fine, but it's not fair (because the owner of the linked content gets nothing if they differ from the link creator), and you can bet that attracts bad people, who make money by cheating.

(Please forgive me for not linking to what I've read; I intentionally do not want to link to those pages.)

In one discussion, the OP said his Google AdSense account got banned because of adf.ly traffic, which he had bought. So you see where this is going. People buy traffic; although I have no idea how this kind of thing works, it must be different from AdWords' method, and that might be why he got banned.

One reply was very interesting: how about doing that to a competing website to get them banned? I have to say this is devious. You buy them some traffic or make tons of shortened links for them; they would be astonished by the sudden traffic and ecstatic about the upcoming paycheck, which would never come.

From the search results, there are a lot of bots and scripts, for taking the cheating to the next level, I believe. One can automatically convert all links in your content to adf.ly links.

Some people really use every measure to make money online, and it's pretty ugly in my eyes. Why can't they write good content and simply hope for good feedback? If you have that, people will naturally link to, refer to, and, at worst, steal your content, whether you like it or not. Wasting time on those cheats and SEO tricks is just pathetic.

By the way, I still don't know where the link in the screenshot would lead me. I didn't click, because it could be a fake referrer and probably links to some other site. So that's another way to cheat.

If you own a domain name and have searched using it as a keyword, you may have seen the kind of results shown in the screenshot on the right.

The last one on this page is totally legit and the first one is okay, but between them are websites I categorize as garbage. This kind of website is a variation on the content farm. It's not the usual content generation; instead, it uses domain-related data to produce content to fill up the page, so it looks like something in search bots' eyes. They grab all sorts of results via the APIs of other services and give you some whois information.

That's really for noobs to read, those who don't know where to look for information about a domain from the original sources, mainly for their own domain. I don't think anyone generally wants to read other people's domain information, at least not from this sort of trash website.

Unfortunately, Googlebot crawls and indexes this kind of website. The time range in the screenshot above was set to within 24 hours, and the search hit 75 results (it has increased to 80 while I am writing). Sadly, I haven't been able to use blocking of unwanted sites, which is a feature of Google Search. That page has some JavaScript error and has been broken for days since I noticed. I don't know where to report a bug except on the community support forums, and I do not want to use those. Just another typical Google support method nowadays.

An interesting point is that my domain is not even the focus of the matched results. These websites put a list of domain names next to basically unrelated domain names, so they can somewhat increase their search engine hit ratio. It's cheating, I would say, and Googlebot isn't smart enough to know it.

I have been experimenting with a new reward system which should keep me writing and doing things. The purpose is somewhat like Getting Things Done (GTD), making sure things eventually get done, but using a different method which has a reward factor.

It's very simple and only involves one number and two categories. One is the Plus category, which includes things that earn you one point when you finish a task from that category. The other is the Minus category, the opposite of the previous one: its items cost you one point when you do them, that is, -1 point in other words.

The points add up to the only number in this system. There is no strict rule about what you should do next, unlike GTD. You can do only the stuff categorized as Minus if you want; however, you will see the number head toward the negative.
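Purely as an illustration of how little bookkeeping this needs, a tiny shell helper could keep the number in a text file (the file name and script are made up for illustration, not what I actually use):

#!/bin/sh
# score.sh: keep the running total in ~/.reward-score
f=~/.reward-score
[ -f "$f" ] || echo 0 > "$f"
case "$1" in
  plus)  n=$(($(cat "$f") + 1)) ;;   # finished a Plus task: +1
  minus) n=$(($(cat "$f") - 1)) ;;   # collected a reward: -1
  *)     n=$(cat "$f") ;;            # no argument: just show the score
esac
echo "$n" > "$f"
echo "current score: $n"

So ./score.sh plus after publishing a post, ./score.sh minus after an hour of TV.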

One important key is to try to keep the balance, namely, to keep the number near zero. Unfortunately, I wasn't able to do so:


It has reached two digits and hasn't yet come down. The image was taken days ago; 16 is the record so far, 14 at this moment. Somehow, I am reluctant to collect my rewards, which are usually an hour of TV watching or reading a page of fun stuff. If I write a blog post or a piece of code, I get one point.

I think the system is good, because it's not strict and you are free to decide what counts as a reward and what earns a point. The problem is just like before, only this time on the other side: I am still procrastinating, and I am still the problematic component.

I will really need to get some games so I can burn off the reward points. But that may result in more blog posts. Right now, after I press the publish button, I get another point.

Even with all these systems and people working to stop bad ads, there still can be times when an ad slips through that we don't want. There are many malicious players who are very persistent; they seek to abuse Google's advertising system in order to take advantage of our users. When we shut down a thousand accounts, they create two thousand more using different patterns. It's a never-ending game of cat and mouse.

This sums up how bad those abusive people on the Internet are; even Google confirms it and has to put in a lot of effort to keep them away. I recommend reading the post, just to know how serious the situation is and what steps Google is taking to fight scammers, though I don't feel Google is winning, but it's trying.

I read it because I need to read something; I am forcing myself to read every day. It was not that I was interested in this topic, but I was glad I read it patiently. Google has a system which involves three different aspects to decide and co-judge whether a subject is bad or good, whether they (machine or human) need to take further steps or just block it, and so on.

You will find that humans truly are the last line of defense in all three aspects; they make the final call when the machine can't tell. Like other modern systems, decisions aren't made by fixed rules; they can evolve and learn from mistakes (errors, which humans point out). Still, the human is the most crucial component.

If I recall correctly, I have only reported a Google ad once. That ad tricked me into clicking on text saying "Close," and I did click on it. It was an instinctive reaction as the ad kept flashing strong colors, somewhere between red and yellow. It was so annoying; I rarely see that kind of trash ad via Google's advertising system, but I did that time.

Out of natural instinct, I clicked on that "Close," but of course that would never close the ad. Instead, I was taken to the ad's website. I was fooled, clearly.

That was the only time I saw a bad ad via Google; I think Google does a good job of protecting its profitable advertising system. There might have been more, but I hardly pay attention to ads, and my eyes and brain are a well-trained ad filter, though some slip through. I care more about email spam, since email is the main place I see spam or phishing.

I would like to believe that Google has actually brought those scammers to court; they have done so with spammers. They have all the information on all advertisers, so there should be no problem filing a lawsuit against a scammer. Name, address, phone number, credit card, everything they need is already in Google's database. Some of it could be fake, but it can't all be. Identity theft? Come on, this is not a drama.

Anyway, I am looking forward to more posts like this.

As of 2012-04-20, pageviews reached a record high of 1,056 views and 671 views according to Blogger Stats and Google Analytics, respectively. One day before, on 2012-04-19, they were 722 views and 566 views.

It is coming down; it was just a peak because someone listed Three years with Gentoo on some website, and then the post got reddit'd and dugg.

The last time this kind of thing happened, it was via StumbleUpon, which I neither dislike nor like. For that kind of submission website, I believe submissions with LOL content suit it better; for slightly deeper reading material, it's just not right. The site doesn't actually benefit from it in any way except a burst of pageviews, and that's all there is to it.

People don't really read these; they just skim and look for images for a good laugh. The average visit duration for my blog is around 1 minute. However, I did get one nice comment, and the average visit duration for that blog post is around 3 minutes, but that post has 685 views (in Blogger Stats, as of 2012-04-21T17:54:06Z), so you know how things pan out from that kind of listing website.

It's only short-term, a temporary excitement, a stimulation, which never lasts long. With luck you get new subscribers, which is long-term and good for your blog. From what I see, my subscriber count only increased by one in the last few days, and I cannot conclude that it was from that post at all.

Sooner or later, the pageview count will go back to what it used to be and everything will become normal again; I bet it will be in just a couple of days.

I still couldn't believe it had been three years when I checked the installation date of my Gentoo; I thought it was only two. Time flies, I guess. During this period, Gentoo has hardly disappointed me, and it has never been broken by updating. By broken, I mean the system becoming unbootable or something going so wrong that it needs special hacks to fix. It works nicely for me.

I won't argue that Gentoo is suitable for everyone, especially people who don't RTFM and/or STFW, or who don't update their system on a regular basis. Gentoo requires your attention, but that is the same for all systems in my opinion.

From time to time, there are posts about leaving Gentoo or suggesting it is dying. Leaving is really up to the user; when he or she feels it is not the distribution they like, then that's what it is. I never understood what those leaving posts were for. If you want to leave, then just do it; why show up and mumble things that add little constructive value to the community? It's not like you will be giving more feedback, since you won't be using Gentoo anymore.

Furthermore, if you've used Gentoo for a year and want to leave, I don't think it's wise, unless you never actually used or utilized it well. The flexibility of merging is one of its major attractions; you can't find such a thing in a binary-based distribution. The only price you pay is compilation time, and I don't agree that is really an issue if you are already a one-year user. The fact is that merging doesn't take that long, so you can't argue that compilation eats a lot of your time. Nowadays one CPU has multiple cores, and you only need to spare one core for compilation. Your computer is old? My laptop is more than five years old; long compilation time is just an excuse and nothing more.

You may have heard of USE flags; they are the main reason I like Gentoo. If you have compiled from a source tarball on your own and checked out the build script, you know there are usually switches to enable or disable features, which determine the dependencies you need to compile the source. Generally, a well-written ebuild passes those switches on to Gentoo users as USE flags. That is not something a binary-based distribution can give you.
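As a small illustration (the package and flags below are only an example, not a recommendation), USE flags can be set globally in /etc/portage/make.conf or per package in /etc/portage/package.use:

# /etc/portage/make.conf: global flags applied to every package
USE="X alsa -gnome -kde"

# /etc/portage/package.use: per-package overrides, e.g.
# build vim with Python scripting support but without X support
app-editors/vim python -X

Running emerge -pv app-editors/vim shows which flags would take effect before anything is compiled.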

Stability is another point, but this varies with the environment and the packages you merge. I dare say that as long as you use stable packages, i.e. no ~arch, your system should be 99% stable; 0.5% of the instability is bugs, and the other 0.5% is your own stupidity. (These numbers are just my feeling.)

This brings up the question: do you really need the up-to-date version?

To be honest, the stable packages in the Portage tree aren't that out of date. They get updated very often, certainly for common packages, and almost immediately after a significant security hole is exposed and upstream has a quick fix.

If you really need the latest version, then ~arch the package or use those straight-out-of-the-source-repository 9999 ebuilds. From my experience, I rarely needed the latest version, simply because you usually don't even know what features the latest version provides that don't exist in the previous one. The latest version is just for feeding your ego, not your actual need.

As I learned more and more about Gentoo, I developed my own way to do updates and I stick with it; it has never gotten me into trouble. One very important thing about maintenance for Gentoo users is to update on a regular basis, I would say at least once a month; weekly is best in my opinion.
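My exact routine aside, a typical stable-branch update round looks something like this sketch (emerge comes with Portage, revdep-rebuild with gentoolkit; treat it as an illustration rather than a prescription):

emerge --sync              # refresh the Portage tree
emerge -avuDN @world       # ask, verbose, update, deep, rebuild on changed USE flags
emerge --ask --depclean    # remove packages nothing depends on anymore
revdep-rebuild             # rebuild anything with broken reverse dependencies
dispatch-conf              # review and merge updated configuration files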

If you think using Gentoo will make you learn more about a package or about Linux, well, I don't think that's entirely true. Configuration (RC) files are just the same as on other distributions; they are processed by the programs, not by Gentoo. You hardly learn from Gentoo rather than from the manpage or other documentation, and you still need to read documentation on other distributions too. Which distribution you use isn't very important for learning a package. The important factor in learning Linux is you, not the distribution.

And Gentoo actually has some custom programs or scripts to help packages run smoothly, e.g. for multiple slots, webapp setups, Bash completion files, fontconfig, etc. You don't actually need to touch some configuration files; just run the helper and press 1 or 2 or 3 or whatever the option is numbered.
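Most of those numbered helpers are eselect modules; a couple of illustrative invocations (which modules exist depends on what is installed):

eselect modules list      # show the helper modules available on this system
eselect python list       # e.g. list the installed Python interpreters
eselect python set 2      # then pick one by its number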

Nevertheless, you still have a chance to learn a little when you first install Gentoo. But you have the Gentoo documentation to help you out, and you only need to read it once and do it once. At least, I only did it once, because I have never broken my Gentoo and never had a chance to do it a second time.

The fact is that it's not much different from other distributions; they just have some fancy GUI or TUI to help with those things. Some of it is basically still configuration files that you can edit with a text editor. If you really want to learn, then do it from the source tarball; that's how you learn the most.

For a Gentoo newbie, the biggest obstacle in my opinion is actually configuring the kernel, if you don't want to use the default/safe settings. There are like a million options to choose from in the kernel. But once you successfully compile your first working kernel, you won't have any more problems. And you can do it on the first try as long as you RTFM, which here means the famous Gentoo Handbook.

I haven't checked the newly added options in newer kernel versions for a very long time. Every time a new version is marked stable, I just copy the old configuration to the newer kernel and compile it. There is too much new stuff, so I stopped checking out the new options and just go straight to compiling with the old configuration file. That 0.5% comes from this kind of behavior, the user's stupidity, but it only counts when something breaks because of this kind of laziness.
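For what it's worth, reusing the old configuration is the usual make oldconfig dance; the version number below is only a placeholder:

cd /usr/src/linux                           # symlink pointing at the new kernel sources
cp /usr/src/linux-3.2.12-gentoo/.config .   # copy the old configuration over
make oldconfig                              # prompts only for options new in this release
make && make modules_install install        # build and install the kernel and modules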

As for Gentoo dying, I would say that's unlikely. You can't find another one like it; it's unique, which means it has a market others can't take from it. It owns a portion, and that will always be there, even though it's just a small portion.

What actually dies is a derivative of a distribution without uniqueness. They come and disappear. For a derivative to live long, you just can't cut it with only a fancy default background or pre-selected packages; fundamentally, it's still the same as the original.

In three years, I've learned a lot more about Linux. Even though it's not directly about Gentoo, Gentoo was with me and will still be for many years. I can tell you this: Gentoo is getting better, and it's already awesome.

Wanna try Gentoo? Do you read?

As you may have noticed, the posting count has been down for a few days. I have a few things to sort out, and this will probably last for another week. Not sure how this will go; hopefully I will come back at full power soon.

This doesn't mean I will stop for the week, only slow down on purpose. The week after next has some significance for this blog, and I don't want to miss that.

So, whether you care or not, that's probably all for this post.

I just sent a feedback:


You can read my thoughts about the "Reading list": it is useless to me and I really don't want it. I wonder if any Blogger blogger really reads others' blogs in the dashboard. Isn't a feed reader more efficient?

I think such a feature is only for those who don't know what a "feed" means or is for. Of course, you can show your readers what blogs you are following, but you can achieve that with a Links gadget (or whatever that is called). And indeed, following a blog shows your appreciation, but I think linking to one specific post from a related post of yours is much more meaningful to that blog.

Anyway, I just want an option to disable or hide it, so that no data for that section is transmitted from Blogger. It's very interesting that you have a lot of options for an individual Blogger blog, but not for this dashboard. You can only turn on or off the important notifications from Blogger.

Added at 2012-04-05T09:43:12Z.

After replying to the first comment, I think I need to clarify more. First of all, I do not follow any blogs, even though you see "Blogger Buzz" in the screenshot above. Blogger thinks you need something to read, so they put their Buzz blog there for you. If you read the description of my feedback in the screenshot, you would know I have already subscribed to Buzz in Google Reader.

Secondly, you may ask why I don't follow any blogs. I want to, but I had to un-follow them all because I don't want plenty of data I don't read in this dashboard; I read them in Google Reader. I un-followed all those great blogs when Draft still had that old orangish-background interface, so that must have been a year or two ago.

I wanted to let people know about those blogs when they read my old Blogger profile, I really did. Now I have connected my Blogger profile to my Google+ profile, so this part doesn't really matter anymore. So basically, this blog-following feature is no longer useful to me, since it can't automatically show the list in a Google+ profile; you need to add the links manually.

Once again, manually mentioning or +1'ing a post or a blog is much more meaningful and emphasizes your appreciation greatly. But this is just me and my humble opinion.

I have a project called jquery-jknav, which was not hosted on GitHub until two months ago. I also set up a Google Alert for monitoring, because I want to see if anyone makes good use of it, and I just got another notification.

But every few months, I see people post the exact source code of jknav; here is a screenshot of a well-used keyword:


The most bizarre thing, which I can't understand, is not why people duplicate the code, but why they duplicate a very old version, 0.1.0.2. The latest version is 0.5.0.1.

The strange thing is that they all duplicated the same old version. To be honest, I would have to dig for a while to find that version in the old Hg repo on Google Code; I didn't tag versions when I made a release back then.

I can only guess there must be a website that provides the code of that version, and people don't even bother to check for updates. The release date is noted in the source, as you can see in the screenshot above. Almost two years old, which is very old.

It's funny: when you search for "jknav," the project website is listed in fourth place, and the first two results are my blog posts about jknav, in which I have updated the links.

As for duplicating, I don't understand the need to put exactly the same code on Pastebin if no modification is involved. Isn't a link to where you read it a better option?

I believe no developer has fully used their own creation. I do mean using, not testing.

I recently wanted to watch a Mass Effect 1/2 walkthrough, because I wanted to understand the storyline. So I searched for playlists for those two games, but that soon reminded me of an annoying situation I had already seen and forgotten.

The default quality is either 360p or 480p (if you enlarge the player and have the auto option set in your account playback settings). If you watch without fullscreen, there is no problem at all, but if you switch to fullscreen, two annoyances are waiting for you.

First, the auto option will switch to 720p for you (if you have checked the checkbox for HD quality), which is great and what I want. But after this point, you need to try not to exit fullscreen mode. Once you do exit fullscreen mode, two things happen:
  1. The page is reloaded (this only happens after the player has played other videos from the playlist, which will definitely happen; that's the reason you watch a playlist). This actually makes sense, since the page has not been updated even though the player has played other videos. It can be annoying, but I can let this one go.
  2. As a side effect of #1, the video's buffer is discarded and the video is rebuffered once the page reloads, because the default quality is 360p or 480p, not 720p, so rebuffering is required and the old buffer is wasted. I was watching in 720p and I want to continue watching in 720p.
There are some related questions on the help forums; I didn't look into them, because if there were anything to be done, this kind of annoyance would already have been resolved.

If you have used playlists for real, I mean watching a playlist with 50+ videos, then you can understand. But the chance to watch such a playlist doesn't come up very often.

The introduction of automatic video quality switching is meant to save bandwidth, I believe, only loading the higher-quality stream when it is requested. But think about this case: the video is fully buffered, then the player exits fullscreen. The 720p buffer is wasted, because it gets discarded if a page reload also occurs.

There are some Firefox add-ons for HD; I don't know whether they would help, or whether it is possible to append a parameter to the URL to force HD quality. Like I said, it doesn't happen often, and I survived watching those videos without exiting fullscreen too many times before I wrote this post.

I would call it stupid, but I just don't think the developers have thought about it. If they did, then it's even worse: they don't care.

Every day, some new services are born and some have to be shut down. There is no eternity for many things; websites certainly don't have it.

So, here is the one-liner for Gists:
page=0; while let page++; wget -q -O - "https://api.github.com/users/$USER/gists?page=$page&per_page=100" | grep -o 'git://.*\.git'; do :; done | while read git_url; do git clone $git_url; done

and for public repos:
page=0; while let page++; wget -q -O - "https://api.github.com/users/$USER/repos?page=$page&per_page=100" | grep -o 'git://.*\.git'; do :; done | while read git_url; do git clone $git_url; done

You may need to edit $USER to match your username on GitHub.

This is only for a one-time run and doesn't have any error handling. You should run it in a directory created specifically for storing the repos. It won't update anything if you add new repos afterwards, but you can add a condition to update when a repo is already present in the filesystem, as sketched below.
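A rough way to add that check, keeping the same wget and grep approach as above (an untested sketch, with the same assumptions about the clone URLs in the API output):

page=0
while let page++; wget -q -O - "https://api.github.com/users/$USER/repos?page=$page&per_page=100" | grep -o 'git://.*\.git'; do :; done |
while read git_url; do
  dir=$(basename "$git_url" .git)
  if [ -d "$dir" ]; then
    (cd "$dir" && git pull)    # already cloned: just update it
  else
    git clone "$git_url"       # new repo: clone it
  fi
done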

It's not yet time for me or anyone else to use it, since GitHub is alive and probably won't go out of business or get shut down anytime soon.

I had this thought because another service I used when I was still on Twitter has closed, due to being merged into a bigger company.

Six or seven years ago, I lost data on a hard disk. Since then, I have been trying not to store data on local disks. I am lazy and never want to do backups. It's not that it's hard, the script is easy to write; I just don't like keeping the backup hard drive connected when the system only needs it once in a while, when a backup is in progress.

Certainly, you can pay some money for so-called cloud storage, or just remote backup storage. It doesn't seem to matter to me: someday they will be gone, and making a backup of a backup is like, well, WTH was I doing in the first place?

Anyway, backing up public stuff on GitHub is easy.

Updated on 2012-03-15: If you also use Bitbucket, here is the one-liner.

On February 24, a few hours after I posted Google Search's Blocked Sites, I noticed ad clicks increasing at an abnormal rate.
  • 2012-02-25T01:10:00Z - 0 clicks - Blog post posted.
  • 2012-02-25T07:06:07Z - 25 clicks - The first time I noticed the situation.
  • 2012-02-25T08:28:11Z - 31 clicks
  • 2012-02-25T08:44:19Z - 34 clicks
  • 2012-02-25T08:59:54Z - 35 clicks
  • 2012-02-25T09:14:10Z - 39 clicks
  • 2012-02-25T20:47:37Z - 62 clicks
I knew this could cause problems when there were only 25 clicks. Yes, only; the final count was 62 clicks. I googled "Someone is clicking on my ads" right away and found the Keeping my account in good standing help page about what to do when invalid clicks show up. I reported those invalid clicks when there were only 25 of them.

Needless to say, this might have done some damage if I hadn't reported it, and I was really scared, because I had never thought someone would target my blog, and AdSense has no real way to tell whether the clicks were made by the account holder or not.

I believe these clicks were made with malicious intent. I have a few possible theories about why the person did this:
  1. Someone wants to get my AdSense account terminated.
  2. Someone thinks this can make money for me because he or she extremely appreciates my work. (I extremely doubt this.)
  3. Someone wants to probe AdSense's response by clicking on someone else's AdSense ads, then reading and checking that person's website to see if it posts any updates.
  4. Someone is a competitor of this blog. (My blog isn't popular at all.)
  5. Someone hates me. (I am sure I didn't piss anyone off recently.)
  6. Someone is a sociopath who just wants to make trouble for people.
I believe #1, #2, #4, #5, and #6 are not the case. Since I believe it's #3, I deliberately postponed publishing this post for a week. I do not want the clicker to get the information I am about to reveal. Hopefully, one week is long enough.

When I was filling out that report form, I was prompted for any helpful information about the clicker, such as date/time, location, or IP.

I have my Google Analytics and AdSense accounts linked, so I can see the revenue in Google Analytics reports. From there, I knew the exact number of ad clicks on that blog post and the city the clicker was from, which was Horsham, Pennsylvania. I also checked all visits from that city: there were only three in January, and the rest were those invalid clicks. So I was 100% sure the clicker was located in Horsham.

Unfortunately, I didn't have an IP address, since Google Analytics does not provide such information. This is a Blogger blog, so I have no webserver access log. Later, I realized I could get IPs from the Google App Engine access log, because the JavaScript, stylesheet, and webfonts are hosted on App Engine.

I wrote some simple code to look up the locations of the IPs from GAE (a rough stand-in is sketched after the list below), but none of them was from Horsham. This only makes me believe more that someone was not only clicking the ads on my blog but also using a computer program to emulate the clicks, because:
  • There didn't seem to be any requests for other resources (JavaScript, CSS, etc.).
  • There seemed to be two ad clicks per pageview; that blog post has two ad units.
  • The clicking rate was not that irregular; it doesn't seem possible that a human sat in front of a computer, hit refresh, and then clicked on the ads.
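That quick lookup code was a throwaway; a minimal stand-in looks something like the following, assuming one IP per line in a text file and a public geolocation service such as ip-api.com (just an example of such a service, not necessarily what I used):

# ips.txt: one IP address per line, pulled from the App Engine access log
while read ip; do
  echo "== $ip"
  wget -q -O - "http://ip-api.com/json/$ip"   # returns country, region, and city as JSON
  echo
done < ips.txt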

So, what did the AdSense team respond with? That's one thing I am not going to tell, but remember there is a note on that form: the AdSense team may not respond. I am neither confirming nor denying anything here.

Besides the fear of being terminated, I wanted AdSense to remove that revenue from my account, since it's not money I earned. No contest from me; I would like some pocket money, but not this money.

This post is written for people who might encounter the same situation. I hope it helps you find out the clicker's information and know where to report the invalid clicks in order to protect yourself.

Yesterday, I was watching a stream. The broadcaster received a Twitter follow request via a Facebook post (or something; I am not sure what that is called on Facebook). The screen showed the caster clicking on the link to the requester's Twitter page. For the next ten or twenty seconds, the viewers watched the caster trying hard to find the "Follow" button, which had been under the cursor most of the time.

As you can imagine, the viewers were laughing as hard as I was, ROFL or LMAO for short. Even though I stopped using Twitter almost two years ago and the layout has changed quite a lot, I could still spot that button right away.

Recently, I noticed a very interesting fact. Someone in their sixties may be good at using Facebook but unable to take care of their Windows operating system. They may tap away like a pro on an iPad, with some awesome apps they installed themselves, but have never heard of or used an RSS feed.

People seem to get used to being in a certain circle or on a certain website, for example Facebook. They are so used to it that somehow they can't grasp the concept that they could also follow or subscribe to the same group of people on other websites. Everything has to be provided or accessed via Facebook. If someone mentions something, some will ask for a link.

Day after day, you will see the same people asking for the same thing again and again. They never learn to receive new updates from the other website that is the original source. It looks to me as if everything has to be on Facebook and only Facebook, or it remains completely unknown to those people.

Do people really have a problem using another website they are not familiar with? Certainly not; they are just too lazy to click.

I used Facebook for some time and deactivated and reactivated my account at least twice. I just couldn't get the idea of Facebook. It's not social; in fact, no social networking website really is about socializing. They are social networking, but not social.

Well, maybe they are not for general purposes but, it feels to me, for people looking to hook up with someone. Nevertheless, it's still possible to use them to find your high school classmates or long-lost friends.

I also tried Google+ a few times, but never got into it, never even tried to add someone to my circles. The more I used these websites, the more I felt they are a sea of messages or posts or updates or whatever you call them.

We are buried in those messages. For my part, I am afraid I may be missing something important, something with real value, not just the chit-chat or j/k or LOL messages. It's not that I dislike those, life without them would be boring, but I don't want them mixed up with the real messages either.

On Google+, you have a slider to filter the amount of messages. I know I definitely wouldn't want to try it, because of the chance of missing a message. I guess you can make sure someone's messages will always be shown, but you never know whether you will remember to maintain the whitelist. In many cases, it's the user's human error; they forget to set or update a certain setting.

You must either have a crystal-clear mind or not care at all about what you may miss if you want to use social networking properly, or it will just be an obstacle to your social life.

Email, phone, and letters are better than those.