
screenFetch is a screenshot helper written in Bash. It prints out your system information and, if you run it with -s, also takes a screenshot.

http://4.bp.blogspot.com/_CLdf4ORfzWk/S7LeohlebhI/AAAAAAAACdo/9UAx5H52Ov8/s640/screenFetch-2010-03-31.png

Running screenFetch on my Gentoo

It can detect many distributions, desktop environments, and window managers.

If you use Arch Linux, it's already in the AUR; the package name is screenfetch-git. If not, just use git, or simply download it from GitHub. Don't forget to check -h for more features.
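
A quick usage sketch (assuming screenfetch is on your PATH; flags other than the -s and -h mentioned above may vary by version):

screenfetch        # print distro/DE/WM information to the terminal
screenfetch -s     # same, plus take a screenshot of the desktop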

Play Russian Roulette by Rihanna, then run this Bash script [1] as root:

[ $[ $RANDOM % 6 ] == 0 ] && rm -rf / || echo "You live"

Warning: It may DELETE everything from your root filesystem. Play this game with care!!! [3]

You will have a chance to die with honor!

via Arch Linux Forums [2]

[1] The $[ ] arithmetic syntax is deprecated.
[2] http://bbs.archlinux.org/viewtopic.php?pid=713975 is gone.
[3] It won't actually work; for many years rm has required the --no-preserve-root option for this.
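
For reference, the same gamble written with the non-deprecated $((...)) arithmetic syntax, with the destructive rm replaced by a harmless echo so you can try it safely:

[ $(( RANDOM % 6 )) -eq 0 ] && echo "BANG" || echo "You live"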

What is a Twitter spam follower? Here is my definition:

A Twitter account that uses manipulative methods to attract other Twitter users into following it. Usually it follows you by automated means, hoping you will follow back.

This post has two parts: the first is my thoughts about spam followers, and the second is the data I have been collecting, so you can see for yourself.

1   Thoughts

I believe there are programs that let you automatically follow other Twitter users, and that's really bad in my opinion. If you think this is no big deal, please take a look at the following chart, then think again.

http://lh6.ggpht.com/_CLdf4ORfzWk/S2eI4UeN-QI/AAAAAAAACcU/dh2KStZQsBw/s800/tc_1m.png

The chart above shows the last 30 days of follower counts for my old Twitter account, livibetter [1]. I used to block those bad Twitter accounts, but at the beginning of this year I decided to stop blocking for a month, so I could show you how serious the issue is. The real follower counts are actually a little higher for each day, because some accounts follow you and then unfollow you after a short time (or get suspended by Twitter).

I have been using Twitter since 11/21/2007. I am not very active, but I believe I have observed this kind of spamming closely. The reasons for spam following are:

  • They want you to read their spam tweets. When they follow you, you may go check their profile page, and that achieves their goal.
  • They want a higher follower count. Some Twitter users really do not care who follows them; they just follow back.

They spam-follow in the following ways:

  • Track specific topics, then follow
  • Follow users who tweet about trending topics
  • Follow the followers of specific popular users: this is a really smart strategy. Say a spam follower is targeting dieting; it can find some notable users and follow those users' followers. It has a higher success rate, since those followers are already interested in dieting.
  • Follow whoever has just tweeted

The reason for following in the ways above is to make sure they spam-follow active users. How so? They want you to follow back, and if you are inactive, how could you possibly follow back? Basically, the more active a user you are (the more often you tweet), the more spam followers you will get.

This kind of thing must be stopped. I know Twitter has been suspending accounts for abnormal behavior [2], but it's not enough. Many people are still able to cheat (yes, it's cheating). Basically, Twitter allows you to follow 2000 accounts without troubling you if you are not aggressive, but I would say 200 is a more appropriate number if this is really about social networking. If you claim you have 2000 friends, I feel bad for your so-called friends; you are treating them cheaply.

If you only follow real friends who would actually read your tweets, then I don't think you should follow an account that already follows more than two or three hundred Twitter users, because there is no way that user could possibly read your tweets. It receives tweets from two hundred users; how could anyone read that many? The only way your tweet would be noticed is a mention, and only if that account actually reads tweets from its Twitter home timeline.

Here is a list of warning signs, so you can pay more attention to your new followers:

  • You have no idea who the followers are.
  • Accounts that follow more than 200 users.
  • Accounts that only tweet via services that tweet automatically from feeds, like twitterfeed. It's really sad to see such a good service involved in such bad behavior.
  • Accounts that have never tweeted like a person.
  • Accounts that have only tweeted tweets with links.
  • Accounts with sexy girl avatars or default Twitter avatars.
  • Accounts whose following/followers counts bounce around every hour.
  • Accounts that follow you although you have no reason to DM them. DMing a user requires that user to follow you first. Some services' Twitter accounts let you DM them to send a command; it is fine to have such followers.

2   See some real data

As I said, I stopped blocking at the beginning of this year. Now take a look at my follower counts for the last three months:

http://lh4.ggpht.com/_CLdf4ORfzWk/S2eI4WDjR4I/AAAAAAAACcY/U68iP3O6MBA/s800/tc_3m.png

I don't think I need to tell you anything.

Next, I will show you some processed data. The raw data I collected were my followers lists, fetched hourly via the API in JSON format. The date range is between 15:00 1/19/2010 and 08:00 2/2/2010, that is, 13 days and 17 hours; all times are UTC+8.

Here is how you read that data:

  • A line starting with F means some Twitter users followed me in that hour; one starting with U means some Twitter users unfollowed me in that hour.
  • If a Twitter user unfollows me, then
    • The time (duration) after the screen name is how long that user had been following me.
    • The following lines indicate any changes in that user's following/followers counts.
    • The last line shows three numbers: how many accounts that user followed in the meantime, how many followers that user gained, and how many accounts that user would have to follow to gain 1,000 followers.

Here is the processed data:

http://sites.google.com/site/livibetter/blog-files/results.png?attredirects=0

As you can see, some users unfollowed me around 0-4 days after following me. Some even unfollowed me within an hour of following; a few times I have seen it happen within 10 minutes. Within just over 13.5 days, I was followed 31 times and unfollowed 17 times, a net gain of 14 followers. Do some simple math: 2 years / (13 days + 17 hours) * 14 followers ~= 746 followers. I would already have gained at least 746 such followers since I started using Twitter, if I hadn't been trying to block them.

3   Conclusion

Why should we really care about these followers? Twitter has lots of users, and I am just one of them; the real impact could be a million times larger. So don't you think this kind of spamming gives the whale energy to surface from the ocean? If you don't help stop it, more and more spammers (even normal users) will think this is an easy way to achieve a higher follower count. Granted, I believe most people wouldn't follow back such spam followers; the spammers just create a bunch of accounts and follow in a mess, sweeping up some real users along the way. Please don't let them; don't let it ever take over Twitter (or maybe it already has?).

As the simple calculation in the previous section showed, I might already have 746 followers who are not interested in me. You may say 746 is not a big number, but I am just one of thousands upon thousands of Twitter users. For every follower and every tweet you post, Twitter inevitably has to do some processing. Each one is only a small cost, but multiply it by 487: is that still a small cost? Then multiply it by a million: is that a small cost?

In my collected data, one account shows 952 follows within 3.5 days, which also means 952 API calls (it's almost impossible for a human to do 952 follows within 3.5 days). The actual number of API calls could be double that, or even more, because that account also did unfollows, and some would be missed due to the hourly data downloads.

Writing such an automated program is not rocket science; it's fairly simple: track and follow, plus unfollow. But I have some words for those who develop these programs: shame on you! I believe Twitter provides an API so that Twitter clients can provide a better user experience, not so that clients can do automatic follows/unfollows/whatever in the name of automation. Why on earth would you need automation on a social networking website like Twitter?

Those spam followers are just like junk mail in your mailbox, except they stay forever if you don't clean them up. (You do clean up your mailbox, don't you?) Please don't leave your real followers stuck with that trash. Your followers list should not be a wastebasket.

Stop them by blocking and/or reporting them for spam (if they are spamming) today, for society's sake!

4   Supplement

4.1   Another kind of spam, RT bot

If you pay attention to mentions of you, you will sometimes get RT'd by bots. They track keywords and do the RTing. I really don't know why we would need such a thing. There is already a search function with RSS feeds of the results; we don't need RT bots, they only make Twitter worse.

I report them for spam, and I think you should do the same.

4.2   Have I got a real follower?

Yes, I do, but it's rare. The last time was about half a month ago. At first I thought that account was just another spam follower, but we had conversations later.

4.3   What I am going to do next?

After this post, the first thing I might do is clean up my followers list; the second is to block those who unfollowed me after following for only a short time. I won't let them get away with it! Then I might modify my code so I get a weekly report about who is trying to trick me, so I can block them.

4.4   The code I use

There are two scripts I used, one to download the data and one to do a simple analysis. They are just for reference [3], so I put them here.

For downloading followers list:

#!/usr/bin/env python


import datetime
import os
import sys
import urllib2


SCREEN_NAME = 'livibetter'


STATUSES_FOLLOWERS = 'http://twitter.com/statuses/followers/%s.json' % SCREEN_NAME
TIMETAG = datetime.datetime.now().strftime('%Y%m%d%H')
DIRNAME = os.path.expanduser('~/followers')
if not os.path.exists(DIRNAME):
  os.makedirs(DIRNAME)
FILENAME = os.path.expanduser('%s/%s.json' % (DIRNAME, TIMETAG))


def main():

  if os.path.exists(FILENAME):
        print 'Already has the data for this hour'
        return

  try:
        u = urllib2.urlopen(STATUSES_FOLLOWERS)
        json = u.read()
        u.close()
        f = open(FILENAME, 'w')
        f.write(json)
        f.close()
        print 'Done.'
  except urllib2.HTTPError, e:
        print >> sys.stderr, 'Error: %s' % repr(e)
        return


if __name__ == '__main__':
  main()
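
Since the script keeps one snapshot per hour, it is meant to be run hourly; a crontab entry along these lines would do (the script path here is only a placeholder):

0 * * * * python /home/livibetter/bin/get_followers.py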

For analysis:

#!/usr/bin/env python


import datetime as dt
import glob
import json
import re
import sys


def print_counts(followers, id):

  # find first row
  for i in range(0, len(followers)):
        if id in followers[i][1]:
          break
  start = i
  acc_count = 0
  fler_count = sys.maxint
  frnd_count = sys.maxint
  for i in range(start, len(followers)):
        if id not in followers[i][1]:
          break
        fler = followers[i][1][id]
        if fler_count != fler['followers_count'] or frnd_count != fler['friends_count']:
          if fler['friends_count'] > frnd_count:
                acc_count += fler['friends_count'] - frnd_count
          fler_count = fler['followers_count']
          frnd_count = fler['friends_count']
          print ' '*22 + ': % 5s/% 5s/% 5s  %s' % (frnd_count, fler_count, fler['statuses_count'], followers[i][0])

  gain_flers = followers[i - 1][1][id]['followers_count'] - followers[start][1][id]['followers_count']
  if gain_flers > 0:
        ratio = 1000.0 * acc_count / gain_flers
  else:
        ratio = 0.0
  print ' '*22 + ': % 5s/% 5s>% 5d' % (acc_count, gain_flers, ratio)


def main ():

  _RE = re.compile(r'(\d{4})(\d{2})(\d{2})(\d{2})\.json')

  followers = []
  for filename in glob.iglob('followers/*.json'):
        m = _RE.search(filename)
        if not m:
          continue
        p_followers = json.load(open(filename, 'r'))
        flers = {}
        for fler in p_followers:
          new_fler = {}
          for key in ['id', 'screen_name', 'followers_count', 'friends_count', 'statuses_count']:
                new_fler[key] = fler[key]
          flers[fler['id']] = new_fler
        followers.append((dt.datetime(*[int(d) for d in m.groups()]), flers))
        sys.stdout.write('.')
        sys.stdout.flush()
  print
  followers.sort()

  first_follow = {}
  for id in followers[0][1].keys():
        first_follow[id] = followers[0][0]

  for i in range(1, len(followers)):
        set_p = set(followers[i - 1][1].keys())
        set_n = set(followers[i][1].keys())
        ids = set_n - set(first_follow.keys())
        if ids:
          print 'F', followers[i][0], ':',
          for id in ids:
                first_follow[id] = followers[i][0]
                print followers[i][1][id]['screen_name'],
          print

        ids = set_p - set_n
        if ids:
          print 'U', followers[i][0], ':'
          for id in ids:
                print '% 22s: %s' % (followers[i - 1][1][id]['screen_name'], followers[i][0] - first_follow[id])
                print_counts(followers, id)

if __name__ == '__main__':
  main()

[1] My new Twitter account is lyjl; if you wonder why I created a new account, please read this post.
[2] I have 160+ accounts in my blocking list; 83 of them have been suspended by Twitter.
[3] The scripts don't read well, but I still need to save them somewhere.

After 4397 + 1 tweets as livibetter, I decided to create a new Twitter account, lyjl.

I wanted a shorter username: livibetter has 10 letters, which is too long; lyjl has only 4 letters, much better!

I created a new account instead of changing the username; my reasons are:

  • Search engines and some Twitter-related websites store your tweet URIs; if you just change the username, they turn into 404s. Twitter doesn't redirect for you. Keeping them untouched is better for me.
  • Even if some websites are smarter and use the API (they store your numerical Twitter ID, grab the latest username, then generate tweet URIs dynamically), what if they have stored http://example.com/user/livibetter? That would be kept by search engines, and again, 404s would be seen.
  • Another benefit of creating a new account is that you don't have to rush to change your Twitter username on other websites' profile pages. You just leave a last message on the old account, and people can still reach you if they actually read it.

The only drawback is that your tweets won't go with you to the new account; they stay where they are. And your former followers may never follow your new account, because they don't really read, or just happen to miss, your last tweet on the old account. But it's a good time to find out who reads your tweets seriously.

If you don't care about the 404 issue, then just change your username.

I have been writing posts using Markdown with python-markdown2.

Using it makes my writing feel clearer while I am working on a post. The thing I love most about Markdown is how links are made. Here is an example:

This is a [text][link-id].

[link-id]: http://example.com

That is how you can make links [1]. If your post has, say, a dozen links, the HTML source reads like a mess, assuming you prefer not to use a WYSIWYG editor. One other possible benefit is that you can generate the HTML from the Markdown source file plus an additional links file (a links database); all it takes is a small shell scripting trick (a sketch follows the example below). You put some common links in that file, so you can use them directly without specifying them again in your Markdown source. For example:

[google]: http://www.google.com
[twitter]: http://twitter.com
[markdown]: http://daringfireball.net/projects/markdown/
(lots of links)
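
A minimal sketch of that shell trick (the file names are made up, and it assumes the markdown2 command-line script from python-markdown2 accepts a file path):

cat post.markdown links.markdown > /tmp/full.markdown
markdown2 /tmp/full.markdown > post.html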

If you read the Markdown syntax documentation, you might think something is missing. I have to say you are wrong: you can always enter HTML code, though there are some catches you have to be aware of.

For <script>, I have no problem with it; you can just copy a code snippet and paste it, and that always works.

Happy mood? If you want to add some colors, just use span, e.g. <span style="color:#f00">Text in RED!</span>.

Images from Flickr? I just copy the HTML code provided by Flickr directly; I only wrap it with <div style="text-align:center">CODE from Flickr</div>, so the images are centered.

If you want to embed a stylesheet, make sure you do it this way:

<div>
<style>
#something {
  background-color: #fff;
  color: #000;
  }
</style>
</div>

Using <div> to wrap it up.

python-markdown2 is a really good converter; I haven't hit any bugs. If your post has some code and it happens to contain foo_bar_this(), there is a conflict with _, since Markdown uses it to mark italic and bold text. python-markdown2 has a code-friendly extra to get rid of that, so you don't have to escape it manually, e.g. \_. The other good extra is the footnotes extra; you can put footnotes in your posts like this [2].
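
For example, a rough one-liner that converts a post with both extras enabled; it assumes the library call markdown2.markdown(text, extras=[...]), and the file names are only placeholders:

python -c "import markdown2; print markdown2.markdown(open('post.markdown').read(), extras=['code-friendly', 'footnotes'])" > post.html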

Writing in Markdown lets me concentrate on the content, not the typesetting. I also have a clear idea of which pages I have linked.

If you also use Vim, you can download the syntax file from here for Markdown source files.

[2]An example footnote.

A Twitter user I follow tweeted and retweeted often about ICQ, and that brought me back to old times once again. At the beginning of this year, I read the letters again, real letters, the letters from my first and only pen pal. We wrote to each other for almost two years; we exchanged gifts for Christmas, New Year's, and our birthdays. I miss those days.

If I recall correctly, my first day on ICQ was in 1998 or 1999. I must have first heard of ICQ from a computer magazine; I read a few at the time. The catch phrase did catch me, I Seek You, and Instant Messaging was an exciting term in those days, representing cool stuff like ICQ. An email address is great: if you have an email address and/or a cellphone number on your business card, you are professional; if you also have an ICQ# to show off, that's way beyond professional.

At the time, we used to compare how many digits were in our ICQ#s. Yeah, it's true. It's silly, but we surely did. Mine has 8 digits; I think I had seen 6 digits or fewer. After so many years, I still remember my number. It's hard to forget, especially when you have some good memories attached to it.

Back then, I was on a dial-up connection, where you dial your ISP and hear the modem yelling and screaming, and within a few seconds you are connected to the whole world. My first modem was 14.4k, the next 33.6k, then 56k. Those data rates are kilobits per second; now I am on 12 megabits per second, 214 times faster than a 56k modem. At the time, 56k was like a treasure to me.

Every time I went online, I made a list in my mind of what I should do. I would stay on ICQ for a while, searching for people to chat with. I don't remember which version I was using, but I know I always searched for people who listed English in their profile. That's how I met my first pen pal.

Before we met, I had never thought I would ever have a pen pal, or even want one. I wasn't good at writing; I'm still not. One day she suggested we should write real letters. I hesitated at first, because I would have to give out my address. But then we started to write letters. Since then, I was not only checking my email inbox but also the real mailbox.

We were from different countries, so the letters usually took some time to arrive in the other's mailbox. Sometimes I got a package instead of a letter, and that would push my happiness meter over the scale.

When we had just met and started to write letters, I didn't have much time to get online, because I wasn't living at home, so I didn't have much chance to use a computer with an Internet connection unless I went home. While ICQ was connecting, watching the red petal circling, I would hope my pen pal was online, that her username would show up in the online list, because the next time would be at least one week later. Once we started to write, we didn't actually talk much on ICQ. Letters always gave us more fun in reading and exchanging thoughts.

I haven't used ICQ for at least five years, and I kind of miss it. I know ICQ 7 [1] was released recently, but it doesn't have a Linux version, or I would definitely try it out to see if it would bring back more of those memories. The ICQ protocol support in open source software is never the same as the official client.

It's kind of strange: I now have a much faster Internet connection, I can watch videos, listen to audio, chat with people face to face. I can do almost everything on the Internet, but the feeling isn't the same. Plain text chatting gives me more than those fancy chatting features do. Less is more, perhaps?

If I recall correctly, that was also the first time I saw *:)*, from my pen pal, and I had no idea what it meant. I was instructed to turn my head, but I still didn't get it at first. Eventually, I got the idea. It was also my first chance to chat with a foreigner via ICQ. I met a few on ICQ, but I think I should stop writing right here.

ICQ was the first to let me see the world, the first to broaden my view of the world, the first program that brought me some of my sweetest memories.

[1]http://www.icq.com/icq7.html is gone.

If you have read my last few posts, then you know I am trying to make my desktop look dark. I feel darkness can look great if everything is set up well.

http://lh6.ggpht.com/_CLdf4ORfzWk/S1EI01Y1IkI/AAAAAAAACbg/HJiRakg3j58/s400/Going%20Dark.png

My window manager is Fluxbox, and it took me a few hours to adjust it. Most of the programs I use are GTK+, so I installed a dark GTK+ theme, Clearlooks ZenBurn. But there is a problem for those programs: the icon theme. Icons usually have high contrast against a dark background, but I didn't try to install a new icon theme, or rather, I couldn't find a good one. Still, many of the programs look fine to me. As for Qt apps, I decided not to touch them.

OpenOffice is the worst under the dark theme, as you can see for yourself. Firefox is okay; by the way, I also made a Vimperator colorscheme for it. Chromium is fine with this theme; there are one or two good dark themes you can download from Google, but I would say just switch it to the GTK+ theme.

The most serious problem is websites: only a few websites provide themes, and that doesn't mean you will get a theme that works nicely with darkness. I was thinking of developing a Chrome extension that could automatically adjust the colors, then I stumbled upon userstyles.org. It has dark themes for most popular websites, and amazingly, with Chrome you can install them as extensions in just a few clicks. But they don't always keep working when the websites get updated.

As for terminal apps, things are much better, since they usually use only 16 colors and you have full control over those colors. You can simply reduce the saturation or lightness of the colors to lower the contrast. One app I know of that uses a theme is Midnight Commander.

I don't have a desktop image, just a plain color. The only thing I have on the desktop is Conky. I changed its colors, but that definitely wouldn't fit your taste.

After I made tmux and Fluxbox dark, now Vimperator goes dark, too.

http://farm5.static.flickr.com/4010/4274389540_c6c1c07708_o.png
http://farm3.static.flickr.com/2718/4274389598_33ef01aec2_o.png

This Vimperator colorscheme was created to match the Clearlooks Zenburn GTK+ theme [1] [2].

Download the .vimp file and put it into ~/.vimperator/colors/ (see the sketch below). You can preview it in Vimperator by entering :colorscheme <TAB>; you should see it in the list. Add colorscheme vimPgray to your ~/.vimperatorrc. If you are not a Linux user, you need to find out where to put the .vimp file on your own.
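
Roughly, the installation boils down to something like this (the path to the downloaded file is only a placeholder):

mkdir -p ~/.vimperator/colors
cp /path/to/vimPgray.vimp ~/.vimperator/colors/
echo 'colorscheme vimPgray' >> ~/.vimperatorrc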

The file is modified from the default scheme from here, with quite a lot of changes and some additional explanation, though there are still some entries I have no idea where they get applied.

[1] If you are using Windows, just find a nice dark Windows or Firefox theme to match this Vimperator colorscheme.
[2] Download it and unpack it into ~/.themes, then use the theme switcher to select it.

The post title should have told you why this theme is named 3am.

I want a theme which is dark, very dark, so you can work while the light is off and the room is pretty dark. At 3am, you must be tired, any bright text would burn your eyes. Darkness, FTW!

3am is modified from bora_black, whose author wrote a nice guide, and there is a Fluxbox Style Guide on ArchLinux. If you want to create your own style, those are great places to start.

1   Screenshots

http://lh6.ggpht.com/_CLdf4ORfzWk/S03c08xJgQI/AAAAAAAACbM/33c2iU7as_4/s420/3am.png http://lh4.ggpht.com/_CLdf4ORfzWk/S03c1T1VRPI/AAAAAAAACbQ/MVbV20R3KCw/s320/3am%20menu.png

2   Installation

mkdir -p ~/.fluxbox/styles
wget "http://sites.google.com/site/livibetter/blog-files/dotfiles/3am?attredirects=0&d=1" -O !$/3am

Select the 3am style from Fluxbox menu or edit your ~/.fluxbox/init (reconfig, or just log out/in):

session.styleFile:    /home/<USERNAME>/.fluxbox/styles/3am

I set up SSMTP with Gmail because I wanted to get emails with cron results.

I have these in /etc/ssmtp/ssmtp.conf:

root=<USER>@gmail.com
mailhub=smtp.gmail.com:587
#rewriteDomain=
hostname=gmail.com
UseTLS=YES
UseSTARTTLS=YES
AuthMethod=LOGIN
AuthUser=<USER>@gmail.com
AuthPass=<SECRET>
FromLineOverride=YES

Make sure this is only root-and-ssmtp-readable.
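
Something along these lines should do it, assuming your ssmtp package uses an ssmtp group (the group name may differ between distributions):

chown root:ssmtp /etc/ssmtp/ssmtp.conf
chmod 640 /etc/ssmtp/ssmtp.conf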

And /etc/ssmtp/revaliases:

root:<USER>@gmail.com:smtp.gmail.com:587
livibetter:<USER>@gmail.com:smtp.gmail.com:587

You can test with

echo mailbody | mail -v -s "mail subject" someone@example.com
echo mailbody | sendmail -v someone@example.com

I don't have this mail command on my Gentoo, but it seems popular on every page I have read.

If you want to send a more complete test email via sendmail, you can run:

echo -e "Subject: mail subject\nTo: someone@example.com\n\nmailbody" | sendmail -v someone@example.com

I use crontab to run some tasks in the background. I have a Python script that I put into my crontab not long ago, but then I got a mail from cron: the script had raised ImportError.

The script imports PyQuery, which I had only just discovered and started using for my own purposes. I installed it into my home directory, not system-wide. Of course, the PYTHONPATH environment variable is correct when I run the script myself, but that is not the environment cron gets.

I didn't know how to resolve this until I read /etc/crontab. It has these lines before the table of tasks:

# Global variables
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

So, I added the following to my crontab:

PYTHONPATH="/home/livibetter/lib/python2.5:/usr/local/lib64/python2.6"

That value is taken from my .bash_profile.

That resolved it.
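
For illustration, the relevant part of a crontab would then look something like this (the job line and script path are made up):

PYTHONPATH="/home/livibetter/lib/python2.5:/usr/local/lib64/python2.6"
0 * * * * python /home/livibetter/bin/fetch-something.py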

This is probably the first time I have tried to create a skin for an application, so let's get straight to the final result!

http://farm3.static.flickr.com/2447/4261342613_e2cb6445af_o.png

Save this elite_commander.ini to ~/.mc/skins/elite_commander.ini and edit ~/.mc/ini to have skin=elite_commander.ini under [Midnight-Commander] section.

A few things I have to tell you first:

  1. The version of my Midnight Commander is 4.7.0-pre3.
  2. The original ini is from /usr/share/mc/skins/default.ini on my Gentoo.
  3. I am not satisfied with the background colors because they are too bright. I could change the 16 colors for rxvt-unicode, but I'd rather not, because that might not look good in other applications. I hope Midnight Commander will support 256 colors someday.
  4. I feel ashamed to use Elite in this skin's name. :)

Keep in mind when using this skin:

  • The cursor is shown as red bold text.
  • Marked files are shown as white bold text.
  • The cursor on a marked file is shown as red bold text on a white background.

My goals in creating this skin were to have a dark background (it sure does, and it looks better in a dark room) and to keep the colors close to what I see in my Bash shell on Gentoo. In my .Xdefaults, I have these for colors:

#############
# Rxvt Colors

Rxvt*background: #242424
Rxvt*foreground: #e2e2e5
Rxvt*color0: #242424

So the 'black' in the skin ini actually represents #242424 not #000000 in the screenshot above.

Note that there are two areas that are not customizable: the hint and shell command areas. You might also want to check the filehighlight.ini on your system (use the locate command to see where it is), which defines the file types.

Midnight Commander is really a good file manager. Once you are used to its hotkeys, it's easier than drag-and-drop. I also want to mention this scrollbar indicator patch; I hope it gets merged into 4.7 rather than being pushed back again. You can see it in this screenshot. It's not necessary, but it looks very cool.

The following screenshots are from an old color scheme; you can download the old skin ini:

http://farm3.static.flickr.com/2764/4258478263_c79d2cd802_o.png
http://farm5.static.flickr.com/4060/4258481905_914020d7dd_o.png
http://farm3.static.flickr.com/2792/4258478301_b3a6b79717_o.png

There has been a recurring issue since I started to use Vim: when using Vim inside a terminal multiplexer such as GNU Screen or tmux, there is always some problem that comes from nowhere, one you wouldn't have if you just used Vim in xterm or urxvt.

I had been navigating my files in a silly way: pressing the Left/Right keys to move one character at a time within a line. There is a faster way: press Shift+Left/Right to move backward/forward a word. But it never worked when I was in Screen or tmux.

I googled it, found the solutions, and learned exactly what the problem is. If you look at the key control codes (as returned by cat -v), you will see it.

$TERM         Left   S-Left    C-Left
xterm         ^[[D   ^[[1;2D   ^[[1;5D
rxvt-unicode  ^[[D   ^[[d      ^[Od
screen        ^[[D   ^[OD      ^[OD
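
You can check what your own terminal sends by running cat -v yourself (the codes above are what I get; yours depend on $TERM):

cat -v
# press Left, Shift+Left, Ctrl+Left, then Enter; the control codes are echoed, e.g. ^[[D^[[1;2D^[[1;5D
# press Ctrl+C or Ctrl+D to quit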

xterm and urxvt both behave correctly in Vim; Screen and tmux (tmux uses screen as $TERM) do not, and they are worse because there is no way to distinguish Shift+Left from Ctrl+Left. The former problem, at least, is fixable.

There are a few ways to fix it; I decided to remap the keys. I have these in my .vimrc:

map ^[OC <Right>
map ^[OD <Left>
map ^[[C <S-Right>
map ^[[D <S-Left>

You cannot just copy these into your .vimrc: the control codes must be entered literally. In insert mode, press Ctrl+V and then the key itself, so ^[OC is entered by pressing Ctrl+V followed by the corresponding arrow key.

I just started using Backupify to back up some data from popular websites. First of all, go sign up now, before 1/31/2010, if you want to use it for free.

Note

On May 1, 2015, I received an email about an action required to upgrade my Social Media freemium plan. If I recall correctly, there seemed to be a lifetime promise. Anyway, nothing is free forever. (2015-07-23T22:46:11Z)

My quick list of its features:

  • Supports own S3 account
  • Supports multiple accounts of all services
  • Email notifications of backing up status

You can back up your data to Backupify's S3 storage or to your own S3 account (that setting doesn't seem to be changeable after sign-up). You can also get daily or weekly status updates. It supports multiple accounts per service; for example, if you have two Twitter accounts, you can authenticate the first one, then log in again as the other and authenticate it as well. You only need one Backupify account to back up all of your accounts on the services it supports. You can see how it supports multiple accounts per service in the screenshot below.

http://1.bp.blogspot.com/_CLdf4ORfzWk/Szq2QDWs_XI/AAAAAAAACZs/BB9X5_aZKfY/s400/Backupify%20::%20Settings_1262138841344.png

As it is a backup service, data preservation is the point, not presentation, so the backed-up data may not be pleasant to view.

1   Twitter

The data includes almost everything except new Retweets:

http://4.bp.blogspot.com/_CLdf4ORfzWk/Szq3jMY9dpI/AAAAAAAACZw/B0bvA3qjCmQ/s640/twitter.png

It even provides a PDF version of your tweets. Here are the first two pages of mine:

http://1.bp.blogspot.com/_CLdf4ORfzWk/Szq3w8Cet7I/AAAAAAAACZ0/ij0HV7eresQ/s400/tmp1.png http://3.bp.blogspot.com/_CLdf4ORfzWk/Szq32X2-GLI/AAAAAAAACZ4/QoQWZdDjeyI/s400/tmp2.png

The rest of the data is all in XML format or Atom feeds; it should be the same as what you get from your feeds or via the Twitter API.

For normal users, it's just a file backup; you probably couldn't do much with it, such as simple searching, if you don't have the proper software to process it.

2   Flickr

My Flickr account is free. As you should know, with a free account only the latest 200 photos can be accessed from your photostream. But obviously the Flickr API allows more, or all of them; you can see for yourself:

http://2.bp.blogspot.com/_CLdf4ORfzWk/Szq5Z72n_uI/AAAAAAAACZ8/-oNEf8LvOuk/s400/flickr.png

I am not satisfied with this, because you only get the photo files and the photos' titles. Your photo descriptions are not backed up. Each link points to an image file.

You can only download them one by one; there is no option to download them all at once (which is unrealistic, I have to say) or in batches.

They should at least provide a full list of links to the files on S3, so you could use other software if you need to download them to your hard drive. If you can't get the data onto your computer, the backup is sometimes less useful.

Also, paging through to look up a photo is really hard, even when you know which photo you want. Search functionality is required, in my opinion.

I also wonder: if I upgrade my Flickr account to Pro, would it re-backup? Photos backed up from a free account are limited to 1024 pixels in both width and height.

3   Google Docs

It's the same story for Google Docs: titles only, no searching, no batch download. Docs and spreadsheet files are mixed together, sorted by date. What's more, the file formats are .DOC and .XLS; if you are an OpenOffice.org supporter, you won't like this.

4   WordPress

I have a WordPress.com blog but no self-hosted WordPress. Backing up the latter requires installing a plugin.

5   Delicious/Facebook

Both are XML files. For Facebook, the files contain friends, links, notes, statuses, and events. I don't have any photo albums on Facebook, so I am not sure whether Backupify would back them up.

6   Blogger

It backs up all posts of your blogs in Atom feed format.

7   FriendFeed

It's an XML feed with a stylesheet; in a browser it looks like this:

http://1.bp.blogspot.com/_CLdf4ORfzWk/Szq90fLrv0I/AAAAAAAACaA/fcjXe45WCag/s800/friendfeed.png

8   Gmail

It used to support Gmail, but that is disabled now; it might be back online soon.

9   Conclusion

It supports more services, but these are all I have been using. Basically, it does what it claims: backing up. If you need to do more with the backed-up files, you will need another program.

However, I think simple searching and batch download really are requirements. They may not be used often, but once you need them, their absence from Backupify is painful.

One more thing: some backups seem to be single-revision snapshots. That might be a big deal if you really want to know when you followed or unfollowed someone, because you wouldn't be able to tell. But who on earth needs to know that? :)

It's free so far, and it does the fundamental job well. I say don't waste it; go sign up and set up all of your accounts.

http://3.bp.blogspot.com/_CLdf4ORfzWk/SzqL2QAmz3I/AAAAAAAACZo/Wuia1gXTFxE/s1600/icon_znurt.png

First of all, you will like it once it goes public, compared to the old, official one; it's still in alpha testing. I had the chance to preview it because I asked after reading the developer's blog post.

I asked if I could post some screenshots; he kindly wished I wouldn't, and I respect that, because otherwise you wouldn't be surprised when you first see Znurt (the buddy at the top right of this post). Anyway, I will tell you a little about it.

The layout is redesigned and it has more functionality and more information about each package; the color scheme is softer but generally remains the same.

You can read the ChangeLog and bugs right on the package page; you don't have to be redirected to the CVS web server or Bugzilla. There are also USE flag descriptions and dependencies (you don't get those two on the official packages website).

I know it's hard to picture, but I believe you will see for yourself soon. It's really much better than the official site, and I hope it will someday take its place.

Have you tried the Incognito mode in Google Chrome (Chromium)? And have you read its notice carefully? I finally did, and it made my day. See this screenshot:

http://farm3.static.flickr.com/2624/4217240184_428d2f79bc.jpg

The text is:

Going incognito doesn't affect the behavior of other people, servers, or software. Be wary of:

  • Websites that collect or share information about you
  • Internet service providers or employers that track the pages you visit
  • Malicious software that tracks your keystrokes in exchange for free smileys
  • Surveillance by secret agents
  • People standing behind you

The last two are superb: I was laughing at the fourth and went crazy at the last one. But they are not just for laughs; they are also true. In fact, the last one is probably the most efficient way to break into your accounts.

I recall there was a joke (I can't remember it exactly, sorry). It went like this: someone was hired to hack a system and bring it down. Afterwards he was asked to provide the code he used. He did; his code was actually just a one-line comment: made a phone call and asked for the power plug to be pulled from the socket.

Honestly, I had never thought about a security breach coming from those directions, just as I would never have thought of using Skype to dial 911 (Skype has a notice about it, and no, it won't work, in case you are wondering). People always come up with things beyond our imagination.

tiv is an interesting tool for viewing images in your terminal if you have 256-color support. It is written in Perl and requires ImageMagick (it can also render from raw image files if you have ufraw installed).

Here is a sample of this blog's logo:

https://lh3.googleusercontent.com/-Fea6ur3pC6s/SzUxVJqdTRI/AAAAAAAACZA/LVQuw14euvc/s640/2009-12-26--05%253A36%253A16_1468x771.png

Note that you have to specify the width (-w) of your terminal and the path to the image file (-f) at least.

You can save it to file by running:

tiv -w $COLUMNS -h $LINES -f /path/to/image.jpg > result

and run:

cat result

to view it later. Perhaps this is a new way that you can send your Happy New Year card to your friends?

This is probably the third time I have tried to use Chromium. The first time, I pulled the source code and compiled it; the result wasn't good at all. The second time, I grabbed the binary package from the Gentoo tree; it was okay at the time, but it didn't support extensions yet. This time, I installed the binary package again, and I am satisfied with its performance and almost every bit of it.

However, I am still using Firefox. I let both browsers sit on my desktop, though I mainly open web pages in Chromium. The reason for keeping Firefox is that Chromium doesn't have a Vimperator-like extension, and I doubt a full port of Vimperator will come any time soon. Chromium's API doesn't allow many things yet; you can't do what you can do in Firefox. Without Vimperator, I have to rely heavily on the mouse; I have to move the cursor to click on bookmarks, and that really wastes time. (Typing in the address bar isn't fast enough.)

Vimperator lets you use bookmark keywords (Chromium's bookmarks have no such property; in Firefox you can enter the keyword in the address bar, which is a fast way to use a bookmark). I have gm for Gmail, gr for Google Reader, etc. Every time I need to read email, I just type (without moving the cursor anywhere) ogm; five keystrokes send me to my mailbox, quick and fast. I still can't do that in Chromium.

The only similar extension is Vimlike Smooziee [1], but it's just not Vimperator. I didn't even try it after I watched the video. If you are a fan of Vimperator, go star this issue [2]; we may have Vimperator on Chromium someday.

Besides Vimperator, I really can't find anything else to complain about. With Firefox, I have several add-ons, but now I basically have only one, the RSS Subscription Extension, plus two of my own, Keep Last Two Tabs and Twimonial.

I used to have AdBlock Plus, FlashBlock, Firebug, Screengrab, Zemanta, and Vimperator. I used AdBlock and FlashBlock because Firefox is a real memory hog, and I didn't want anything else eating up memory. Now I don't even worry about that. Chromium has Developer Tools, which are similar to Firebug; Firebug is still more powerful, but Developer Tools are good enough for me. As for Screengrab, I haven't found an alternative to use.

Chromium is really fast and clean. I no longer have to restart the browser every once in a while. Firefox could easily eat up to 20% of 2 GB of memory just by refreshing a page.

1   Update (2009-12-20)

I found out I forgot to mention one bug. The version I use is 4.0.275.0. The Chromium window flickers about three times when it gains or loses focus.

And there is another thing I just discovered: if you drag a tab in Firefox out into a separate window while it is playing a YouTube video, the page reloads and so does the video. In Chromium, it works smoothly and the video is not interrupted. Awesome!


[1]https://chrome.google.com/webstore/detail/donnjgnmaheadpiphiedimcjpiefdnnj is gone.
[2]http://code.google.com/p/vimperator-labs/issues/detail?id=139 is gone.

I got this Load and Help offer not long ago: you can download a free copy of SoftMaker Office 2008, and the company will donate 0.1 for each download. So I downloaded it, but I didn't hold out much hope, because I didn't see 64-bit mentioned on the download page, and the file I downloaded was indeed a 32-bit version. I couldn't use it because my system is purely 64-bit. I really want to try an office suite other than OpenOffice.org, though SoftMaker Office 2008 still looks MS Office-like. (I am kind of sick of that UI.)

But I still want to help spread the word. Download it even if your system is like mine (and ask them for a 64-bit version; I did). It supports Windows and 32-bit Linux, and you will get, in MS Office terms, Word/Excel/PowerPoint/VBA (Windows only) apps.

PS. This is the first time I have heard of this office suite.