As of 2016-02-26, there will be no more posts for this blog. s/blog/pba/

tl;dr: Choose your Flickr uploader carefully and always make sure permissions are set correctly, especially for private photos. It is best not to upload any private content to the Internet at all.

The following is a Flickr mail I sent to Flickr support. Unfortunately, they didn't understand what I meant; the reply is just a standard support response. I will explain more after the mail content.

From: Flickr Customer Care
Subject: Re: [Flickr Case 2201518] Re: Other issues

Hello, livibetter!

This is a copy of a help case reply:

Thank you for contacting Flickr Customer Care.

I understand that you have a concern regarding the privacy and safety
levels of photos on Flickr. I apologize for any inconvenience this has
caused you. Let me provide you this information.

Both the privacy and safety settings of your photos can affect the
visibility of your photos. Photos can be public (anyone can see them),
private (only you can see them), or viewable only to other Flickr
members that you have designated as Friends and/or Family. If someone
can't see a photo make sure that it has the permissions you want. For
example, if it is a Friends/Family photo, make sure that person is
listed as a Friend/Family in your Contacts.

Moreover, Flickr is a global community made up of many different kinds
of people. What's OK in your backyard may not be OK in theirs. Each one
of us bears the responsibility of categorizing our own content within
this landscape. So we've introduced some filters to help everyone try to
get along.

When you upload content to Flickr, you need to choose what Safety Level
it fits in:

* Safe - Content suitable for a global, public audience

* Moderate - If you're not sure whether your content is suitable for a
global, public audience but you think that it doesn't need to be
restricted per se, this category is for you

* Restricted - This is content you probably wouldn't show to your mom
and definitely shouldn't be seen by kids

A good rule of thumb is, bare breasts and bottoms are "moderate." Full
frontal nudity is "restricted."

If you are seeing abusive items on Flickr, please use the "Report Abuse"
link at the bottom of all Flickr pages. This is the best and fastest way
to get this information to the correct team.

I hope the information I have provided to you above will help address
your concern. If you need further assistance, please let me know by
replying to this email with these details:

- Please give us a detailed description of the problem you are having,
including the exact steps you would like to do.

- The exact steps you are taking.

- Any other information that you feel is important in helping us resolve
your issue.

Flickr values your input on making our service experience better. We
appreciate your time and patience for letting us work on this issue.

Thank you again for contacting us.



Flickr Customer Care

To get the most out of Flickr, please upgrade your browser to the latest
version.


Original Message Follows:


My flickr account name is "livibetter"

Over the years, I have seen some private photos on

Originally, I believed it was the uploaders' fault for not
setting photo permissions when uploading.

The situation goes like this:

1. I see a newly uploaded photo,
2. Click on it,
3. Flickr tells me it's a private photo; or I see the
photo page, but it disappears from the user's photo stream,
and when I go back one page, Flickr tells me that page is
now private.

When I first saw this, I didn't pay much attention to it,
because I believed it could be the user's fault: they set
the photo private after uploading. But as time went on, I
began to see more and more of this kind of situation.

Sometimes it turns private right away; sometimes it takes a
few seconds, which provides enough time if someone wants to
download a larger version.

And more than once, I saw intimate moments between couples,
the kind of moments where two people are naked or even
having sex.

If you check my flagging logs, you will see I flag a lot of
nude photos from spam uploaders whenever I see one. So, I am
pretty good at telling whether a photo was uploaded by
spammers.

In those few cases, it did not occur to me that the accounts
were spammers, because the photos looked like they were from
normal people. Their photo streams are just normal family
photos: kids, pets, etc.

So, I started to think there is an issue with private photos.

Somehow, they are leaked for an extremely short moment, only
a few seconds. But that could be enough for people to
download them, even manually without using scripts.

Just a few hours ago, I saw another incident, though those
were normal photos. You can watch the recording in which I
manually showed that I could even get to the photo pages.

I have contacted the uploader and exchanged mails with him,
and got a possible explanation for the leaking; that's why I
believe this could be the uploader's fault.

Since I have seen this more than once, I feel this issue
must be addressed. Just one intimate photo can ruin
someone's life, even a small thumbnail. The last time I saw
one was probably a few days ago, and I recall I had enough
time to get to the photo page. I am sure another time was in
April, because I was thinking about proving it by uploading
a sequence of test private photos using the web uploader,
but I didn't do it.

I want to know exactly where the problem lies and to warn
people about it. (Which could mean advising people to choose
their uploaders carefully.)

I attach the mails between me and audiere. He provided the
code he used; he wrote his own upload code, and I believe
some uploaders have the same issue, unless there is some
timing delay for photo permissions between Flickr servers.

Sincerely yours,
Yu-Jie Lin


To: audiere Christopher Brown
Subject: About private photos


My name is Yu-Jie Lin. I know this may sound strange, but I
need your private photos to show Flickr my concern about
private photos.

I saw a thumbnail of one of your private photos, and I took
this chance to record it because you had been constantly
uploading private photos.
Private photos shouldn't be seen, but there is a very short
window during which people might see them, which I have
noticed for more than a year; I just didn't have a chance to
make a record of the issue.

Anyway, please check out this video:

*** Please don't share the link to anyone else, I want to
get Flickr's response first and wait for them to fix it ***

I demonstrated two successful attempts at viewing them on
the photo page, although seeing the thumbnails already shows
the issue.

1st success: 0:39
2nd success: 3:18

I don't know if this is caused by the app or by the timing
of photo processing; only Flickr can be sure about it.

I want to let you know about this issue and get your
agreement to use your photos as demonstrations. (Even those
private photos are licensed under CC.)

I will need to know which uploader you were using: the new
Flickr web uploader, the old Flash one, or something else?

After I get your response, I will contact Flickr privately
with more details and send you a copy.

Please don't let anyone else know about this, I don't want
to cause an insecure feeling among Flickr users and make
some people attack Flickr on this issue.

Once Flickr fixes this, it's fine if you want to let people
know about it. I plan to write about this and I will make
that video accessible to public.


To: audiere Christopher Brown
Subject: About private photos [2]

I think I need more info.

Besides the uploader, how exactly do you do the uploading?
Do you set uploads private beforehand, or set the permission
after the photo is uploaded?
Yu-Jie Lin


From: audiere Christopher Brown
Subject: Re: About private photos

Oh, wow, that's very revealing, thanks for the info! The
photos I'm uploading aren't super private; if someone saw
the thumbnails of a few of them I wouldn't be too
distraught, but yes, it's definitely worth looking into!

Actually, I think it's mostly my fault; here's the code I
was using (I wrote my little backup script myself--so, it's
not surprising that there are a few bugs):

photo = flickr_api.upload(photo_file=self.fullpath,
                          description='flickr-store', tags='flickr-store', hidden=2)
photo.setPerms(is_public=0, is_friend=0, is_family=0,
               perm_comment=0, perm_addmeta=0)

And here's the code I'm about to use:

photo = flickr_api.upload(photo_file=self.fullpath,
                          description='flickr-store', tags='flickr-store',
                          is_public=0, is_friend=0, is_family=0, hidden=2)
photo.setPerms(is_public=0, is_friend=0, is_family=0,
               perm_comment=0, perm_addmeta=0)

Now, we'll see if that prevents my photos from hitting the
public stream.

It's probably the way Flickr processes incoming pictures:
because there is a gap between the upload and the privacy
settings in the former lines of code, it thinks they're
public until someone actually requests to see a picture,
which is likely after the setPerms call hits the Flickr
servers.
Hope my little fix didn't ruin your plans, but you're
totally welcome to show them that video, if you like.
They're all marked CC-attribution because that's my account
default. But thanks for the bug report!

I'll probably open source my little uploader at some point, but I just haven't yet.


From: audiere Christopher Brown
Subject: Re: About private photos [2]

Yeah, and those 10 or so that were sitting at the top of my
stream were probably at that halfway point, just after
upload and just before setPerms, when my script crashed or
I killed it or something.

So, really, stupid user error on my part. Thanks again!


To: audiere Christopher Brown
Subject: Re: About private photos

Thanks for the code; that's actually one possibility I had
thought of, and that's why I sent you a second email for
details. I wasn't entirely sure where the problem lay.

The issue is that I have seen this kind of situation many
times over the years, and some of them were intimate moments
between couples. (I am obsessed with Flickr's recent
uploads page.)

Since you wrote your own code, I began to think there might
be some problematic uploaders.

Anyway, I will still send a mail to Flickr with copies of
our mails, so I can get confirmation that the problems lie
entirely with uploaders.


From: audiere Christopher Brown
Subject: Re: About private photos

Ha, I don't think I'd send any intimate moments to Flickr,
private or not.

Here's my code, which seems to have fixed the privacy issue:

At least, I haven't seen any more of my photos on the recent
uploads page.

Thanks again for the heads-up!


I've been using Flickr for years and I am obsessed with recent uploads. When I am bored, I go refresh a few pages. That's how I get my Flickr contacts: every time I see cats and home-cooked dishes (especially bread baking in recent times), I add the poster as a contact.


Recent photos have not been available since July 2013. (2015-12-07T07:28:59Z)

But it's not all good or safe photos; there is a lot of nudity or even pornography being uploaded as you are reading this. Just head over and keep refreshing for 10 minutes and you are most likely to see a couple of those photos. There is nothing terribly wrong with that as long as they don't break the law or violate Flickr's terms; the only problem is why I was seeing them at all.

The possible reasons are:

  • They are uploaded by spammers, who don't care if the photos fall into the publicly viewable area, which is actually what they want.
  • Uploaders forget to set permissions.
  • Something is mishandled during the uploading process, which is the case this post is about.

Since I spend a lot of time on that page, I am quite good at telling whether an account belongs to a spammer. (I can even tell whether a photo was uploaded via Instagram from that small square thumbnail; because of that, I even wrote about Instagram photos on Flickr.)

It's not uncommon for me to see a photo become private after I click on its thumbnail on that page. At first, I didn't pay much attention to this, because I thought the uploader had corrected the photo's permission afterwards.

Then, I started to see some photos that should never be seen by anyone other than the two people in them. They are intimate photos, and I know the photos were uploaded by the real account users, not hackers, because the same faces show up in their other photos: normal family photos with lovely kids or pets.

They disappear from the account's photostream after a few seconds, and that only indicates they are indeed private photos, never intended to be seen by others.

I realized this is an issue, a serious one; it must be made known. But it took me more than a year to write this post, because I didn't really know the cause. However, I did form some ideas.

I saw another incident just a few days ago, but I didn't have the means to prove it or to find the real problem; besides, I couldn't use someone else's intimate photo to demonstrate it. It was a woman sitting on a chair; forgive me for omitting the details. Even though I only saw the thumbnail, I knew what it was about. The other photos were with her young son, probably only four or five years old, and her husband. Imagine if I knew this family in person; that would be very awkward. Hopefully, I will forget the details soon.

Back in April (2012-04-02T05:08:23Z), I was determined to find out what was really going on, but as usual, I never did. At the time, I suspected it might be the official uploader that leaks; even now, I still don't have an answer.

Less than 24 hours later, I saw another incident. Luckily, this time the photos were normal ones, just set to be private, and I took the opportunity. Here is a recording showing that I even got to the photo page in two successful attempts by manually refreshing and clicking:

(1st success: 0:39. 2nd success: 3:18)

After contacting the Flickr user audiere, I confirmed one of my suspected theories: flawed uploading code. You can read our mail exchange above, but I quote the code here:

photo = flickr_api.upload(photo_file=self.fullpath, title=self.title)
photo.setPerms(is_public=0, is_friend=0, is_family=0, perm_comment=0, perm_addmeta=0)

He wrote his own uploader with a flawed uploading process: it takes two API calls to upload a photo and to set its permissions. Between the two calls there is a very short gap, a few seconds, during which others may be able to access the photo if they act quickly.
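The race can be sketched with a toy model. This is hypothetical code, not the real Flickr API or backend; the in-memory photo store and function names only illustrate the timing pattern described above:

```python
# Toy model of the two-call race; everything here is made up for
# illustration, it is not Flickr's actual server logic.
photos = {}

def upload(photo_id):
    # Step 1: the photo now exists with default (public) visibility,
    # because no permissions were passed along with the upload.
    photos[photo_id] = {'is_public': 1}

def set_perms(photo_id, is_public):
    # Step 2: permissions arrive in a second call, a moment later.
    photos[photo_id]['is_public'] = is_public

upload('photo-1')
# <-- The leak window: anyone refreshing the recent uploads page
#     right now sees a public photo.
assert photos['photo-1']['is_public'] == 1
set_perms('photo-1', is_public=0)
assert photos['photo-1']['is_public'] == 0  # private, but too late
```

In real life the window is the latency between the two API calls, a few seconds at most, which matches what the recording shows.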

Since I have seen this many times, I can tell you that sometimes I even have time to download the largest or original version, just using my mouse, click by click.

Imagine if I were a very bad person: I could spread those intimate photos in ways that no one could trace back to the source. That could hurt people who upload to Flickr believing the uploader takes care of permissions perfectly, i.e. that a photo is never leaked, not even for a few seconds.

To use the Flickr API correctly, the permissions can be set along with the upload, as in audiere's fixed code:

photo = flickr_api.upload(photo_file=self.fullpath, title=self.title, is_public=0, is_friend=0, is_family=0, hidden=2)

This is not Flickr's fault but the uploaders' flaw, though I couldn't get confirmation about whether their servers process the permission setting before making the photo accessible.

I am certain there are more flawed uploaders around, because I have seen this many times, though only a few involved intimate photos. If your favorite uploader is open source, please review its source code to see whether it handles permissions correctly. File a bug report if it doesn't.

I am writing this post because I hope you will be careful with your private photos, and not only on Flickr but with everything private you put online. Please choose your uploader (including official ones) or any third-party app carefully; they may make coding mistakes. Even if it's just a thumbnail, it can ruin your life.

Unfortunately, I was unable to get confirmation from Flickr support about the official uploader and whether there is any chance that their process can leak even when permissions are set with the upload. The support staff clearly didn't read through my mail, sadly. I was hoping they would at least confirm that some known uploaders may have this kind of issue, but I basically got nothing informative from support.

The best way is not to upload anything you don't want people to see or read. If you insist and entirely trust a website, you are at your own risk, as I have already demonstrated that I can just hit refresh and get a larger photo because of a flaw in how an uploader uses the API. If I wrote a program to monitor an account whose user uploads private material via a flawed uploader, I am certain my code could manage to download a few full-size photos.

Actually, the best way is not to create anything you may regret other people seeing. I never understand why some guys want to photograph themselves down there, or why girls take braless photos, or even just ones in underwear. And yes, I do see those via that page, and I am certain they are just normal people like those you see every day in your life.

To give you a worse case: one time, just one time, I had seen a teen girl braless or naked, I can't remember which. (Glad I can't.) I know she was a regular teen girl because of her other photos. When I got to the photo page, planning to flag the photo, I immediately felt like a pedo when I saw the face; in the thumbnail it had looked like a spammer's photo. (Certain spammers upload the same sets of naked photos of normal-looking people over and over again; I have no idea where they get those photos.) I wasn't sure if I should flag it, because then she would know the photo must have been seen. I believe you get a notification if your photo gets flagged, though I wasn't sure. But the view count certainly can tell.

Anyway, there are a lot of strange photos on that page: pornography, all sorts of fetishes, cross-dressers, an infant fresh out of the womb (this is fine, just a little bloody), adult comics, two naked dolls placed in sexually suggestive positions, and even more. If your mind can't comprehend it, you had better keep yourself away from that page. If you can, flag as much as you can whenever you see something that isn't suitable for the public.

Years ago, I even used Report Abuse to report an entire account. One time, I saw a suspicious photostream filled with young girls wearing very little clothing, not nude, but very strange to look at; some were probably not even old enough to be teens. I reported that account as a suspected pedophile account.

I think Flickr should offer an option that lets users decide whether their photos can be seen via that page, and/or only show photos from users who have been members for at least a month. New users may make mistakes with settings, and the delay would also allow more time to identify spammer accounts.

Honestly, I definitely don't think that page is okay for everyone to view, especially minors. In my experience, it's quite common to see those photos.

But on the other hand, it's how I get my awesome contacts. You can see silly cats, inspiring home cooking, places and cultures around the world, or learn what holiday it is today from those recent photos. Most photos are still great.

Again, it's best not to create any private material digitally; there is no absolute way to keep it safe.

Once in a while, I loop a song and keep listening to it over and over again. That's what I am doing now, and I was curious which songs I had done that with before. I wrote a quick Bash script that uses the user.getRecentTracks API to get a listening timeline; here is the script:

You will need to obtain an API key in order to use this script, which is run with this syntax:
It will retrieve the last 1,000 tracks you have listened to (using 5 API calls) and check for consecutive plays of the same tracks. Only runs of three or more consecutive plays are printed in the final results. The following screenshot is a sample output:

Each number means how many times the track was played consecutively, including the starting play. I use the URL to group the results; it serves as a unique key and is fine to use in the final results. It's not really hard to read the artist's and track's names from it; a simple step could be added to perfect the output format, but it's not necessary. The first two entries with the same URL are not a glitch in this script; some other tracks were played between them.

The script only retrieves 1,000 tracks, because every 200 tracks is about 140K of XML. Quite a waste of bandwidth for just the URLs of the tracks. Of course, I could pipe it through YQL to select the <url> elements, but that is a bit overkill for this quick script. Although the API page lists JSON as an alternative format, it only returns an empty response. This is not the first time I have seen the JSON format return an empty response; I guess their API documentation is pretty out-of-date or just inaccurate.
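The consecutive-play counting itself is simple enough to sketch. Here is the core logic in Python rather than the original Bash; the track URLs below are made up, standing in for the real ones returned by the API:

```python
from itertools import groupby

def consecutive_plays(urls, min_run=3):
    # Collapse the chronological play history into (count, url) runs,
    # then keep only tracks played min_run or more times in a row.
    runs = [(len(list(g)), url) for url, g in groupby(urls)]
    return [(n, url) for n, url in runs if n >= min_run]

# Made-up play history, one URL per play, in order:
history = [
    '/music/A/_/song1', '/music/A/_/song1', '/music/A/_/song1',
    '/music/B/_/song2',
    '/music/C/_/song3', '/music/C/_/song3', '/music/C/_/song3',
    '/music/C/_/song3',
]
for n, url in consecutive_plays(history):
    print(n, url)
```

Grouping by URL is exactly the trick described above: the same URL appearing in two separate runs produces two separate entries, which explains the duplicate in the sample output.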

Here is a text version of the sample output, in case you wonder what these songs are:

I realized that the Yahoo! Search BOSS API was gone last month; now Bing is going to shut down its Search API and is asking developers to move to the Windows Azure Marketplace, which, unfortunately, is just like Yahoo's new API: not free. This is the email I just received:

Dear Bing API Developer:

For the past several years, the Bing Search API has made search data available for developers to innovate and build upon. Today we are announcing that the Bing Search API will transition to an offering made available on the Windows Azure Marketplace. The Windows Azure Marketplace is a one stop shop for cloud data, apps, and services, including the Microsoft Translator API. Through this platform, developers can access hundreds of data sets and APIs and distribute their applications through the marketplace.

A few important things to note regarding the upcoming transition:
  • With the transition, Bing Search API developers will have access to fresher results, improved relevancy, and more opportunities to monetize their usage of the Search API. To offer these services at scale, we plan to move to a monthly subscription model. Developers can expect subscription pricing to start at approximately $40 (USD) per month for up to 20,000 queries each month.
  • The transition will begin in several weeks and will take a few months to complete. Developers will be encouraged to try the Bing Search API for free on the Windows Azure Marketplace during the transition period, before we begin charging for the service.
  • At this time, you can continue using Bing Search API 2.0 free of charge. After the transition period, Bing Search API 2.0 will no longer be available for free public use.
Details regarding the transition timeline, pricing structure, and other changes will be announced in upcoming weeks. In the meantime, we encourage you to explore the Windows Azure Marketplace and read the documentation. As a Bing Search API developer, you can expect the transition to involve targeting a new API end point, moderate changes to the request and response schemas, and a new security requirement to authenticate your application key. Developers using approximately 3 to 4 million queries and above can expect to transition through a separate process (details will be provided shortly).

We understand that many of you are using the API as an important element in your websites and applications, and we will continue to share details with you through the Bing Developer Blog as we approach the transition. We appreciate your patience during this time.

Bing Developer Team

If we're lucky, we will still have a few months of free Bing Search API.

Years ago, when you talked about an API, you just assumed it was free to use. The only thing you needed to care about was not violating the terms of use. Today, what you care about is how much it charges and which authentication model and library you should use.

APIs are getting more restricted; mind you, it's not security as the starting point, not entirely, but rather identifying the app which is accessing them.

One thing I really don't like is that you have a secret and/or key, then a hash, all for the sake of security. Even when accessing public data, you may still be asked to sign your request. All I really want to do is fetch the results
and process them in a shell script. But no, you have to go through a process in order to make sure everything is safe. It's getting harder and harder for shell scripts, even just to get a number via an API.
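The key-plus-secret-plus-hash dance usually looks something like the following generic HMAC sketch. The parameter names and base-string format here are made up for illustration; no particular provider's signing scheme is implied:

```python
import hashlib
import hmac
from urllib.parse import quote, urlencode

def sign_request(url, params, api_key, secret):
    # Add the key identifying the app, build a canonical base string
    # from the sorted parameters, then HMAC it with the shared secret.
    params = dict(params, api_key=api_key)
    base = '&'.join('%s=%s' % (k, quote(str(params[k]), safe=''))
                    for k in sorted(params))
    sig = hmac.new(secret.encode(), ('%s?%s' % (url, base)).encode(),
                   hashlib.sha1).hexdigest()
    return '%s?%s&sig=%s' % (url, urlencode(sorted(params.items())), sig)

signed = sign_request('https://api.example.com/search',
                      {'q': 'livibetter'}, 'my-key', 'my-secret')
```

Compare that with HTTP Basic Authentication, which is a single `-u user:password` flag away in cURL; that is the gap this complaint is about.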

I am sure that in the near future, someone will ask, "Does anyone still remember HTTP Basic Authentication?" Maybe someday, OAuth will be supported by Wget or cURL.

Well, Bing Search isn't important to me; I only need a number from it, and I can live without it.

Updated on 2012-05-20T06:26:05Z

I received another email and noticed this part:

For up to 5,000 queries per month, developers can access the API for free on the Windows Azure Marketplace. At this level, the large majority of our existing developers including non-profits, educational institutions, and smaller scale applications can continue using the service for free.

It sounds great for my usage of Bing Search, since I only need a few API calls. However, after I signed up and got the new Account Key, I found out there is no total-results figure in the returned Atom (XML) or JSON. In the end, I will still need to remove Bing Search, which will be gone on August 1, 2012.


The site is dead and some links have been removed. (2015-12-14T06:43:25Z)

Yesterday, I was looking into an old project I created on Google App Engine, which evolved from a Bash script back in December 2009. Here is a screenshot:

Yes, I shamelessly Google/Yahoo/Bing my username, and yes, I unblushingly keep a record of the number of search results with a chart. (Cough, two charts, and I have this.)

When I opened that page, I noticed the records stopped on October 3, 2011. At first, I thought it might be the result-fetching limit in my program, so I went to update the code, but I realized that was not it.

Something had gone wrong: I saw a huge number of unsuccessful tasks in the task queue, with retry counts in the thousands. I looked into the logs and found the problem was with the Yahoo Search BOSS API. The domain of the v1 API is gone.

So I googled and found the announcement for the v2 API. v1 was scheduled to shut down on July 20, 2011, but it lasted until August 27. Then on September 20 it came back and lasted until October 3, and it has been gone for good since then.

My program requires all three APIs to return successfully before it writes data into the datastore. I should have checked the task retry count and, if it was too high, dropped the task and sent myself an email. But I didn't code it that way, because I didn't think there was any chance of thousands of retries, until yesterday. I lost about 6 months of data from Google and Bing. Will I add email notification for something like this? Nah.

Yahoo Search BOSS v1 is not the only one gone; soon the Google Web Search API will shut down, around 2013. It was declared deprecated before Yahoo's, but it has a longer transition time for developers: 3 years.

I think they have both moved to paid versions of their APIs, v2 and Custom Search respectively. I don't use APIs to make money, so when they are gone, my program simply stops updating.

I had this idea at the end of September 2010 when I was playing with Google Analytics' tracking code. I wrote some code for rating blog posts using the option value; the code stayed on my blog for a day or two before I took it down, as it wasn't too useful for me. But a function that allows visitors to report page issues could be very helpful, if someone is willing to click a few buttons.

I have finished a simple version and it's at the bottom of this blog:

Well, it doesn't look pretty. Here is the code in that HTML/JavaScript gadget:

<script src=""></script>
<script>
function init_page() {
  var gawr_options = {
    target: 'ga-wr',
    report_options: [
      {title: 'Image is not loaded'},
      {title: 'Link is broken'},
      {title: 'Other'}
    ]
  };
  new GAWR(gawr_options);
}
</script>
<div id="ga-wr"></div>

For the reported issues, I could write my own program to get a daily report, using my current daily report script as a base. But I don't think I will trouble myself with that, not yet anyway. Right now, I can see the reports with a custom report in Google Analytics:

It works great for me for now. Note that you need to use Alert/Total Events instead of Pageviews; it's an event, not a page. The report gets updated very quickly, probably a few minutes after an issue is reported. I'd call that almost instant.

Now a little technical background on this script. Basically, you should use a different profile. It tracks the page when a report is submitted, and the report is recorded as an Event: the event action is the issue name, and the option label carries the additional information, as you can see in the image above.

The option value can only accept an integer; a custom value could probably do the trick, but I put the data in the option label. Another way to record it is to rewrite the page URL when tracking the page, but I don't like that. It could be a benefit, though: rewriting the URL to /original-page-url/issue and still sending the event. That way, if you watch the Real-Time tab, you can see a report come in, if you don't use a separate profile.

And remember, when a visitor reports, the page URL is recorded by page tracking; the visitor's browser, system, and everything else Google Analytics collects by default is already in the data. Isn't this awesome and brilliant? I don't even need to write code to collect such data; if I need to check a visitor's browser, it's just there for me to read.

The Google Analytics API can do more than just website access statistics; you can set up a poll or something more. Imagine letting people vote and using the visitors metric or something similar to prevent some degree of vote spam.

The only downside is that the data isn't public without coding, and it requires some process.

After I posted my first try at using the Google Analytics Data Export API, I realized that I didn't need to send two requests to calculate the change in visits; one is enough. Moreover, I could also make a chart.

=== General ===

125 |                                                          #
    |                      #           ##      ##            ####
    |                      #           ###     ##     ##    #####
    | ##                  ###   ##  # ######   ## #  ####   #####
    |####  # #    # #  # #####  ############  ##### ###### ######
    |#### #####   # ## ##########################################
    |###########  ###############################################
    |########### ################################################
  0 +------------------------------------------------------------

  116 visits (  -7.20%)
  Average time on site: 122.655172414 seconds (  55.75%)

I wonder if there is a popular Python library or common CLI tool to make an ASCII chart.

import datetime as dt
import sys

import gdata.analytics.client

DAYS = 60
CHART_HEIGHT = 7  # rows above the baseline

# table_id and my_client (an authorized AnalyticsClient) are set up
# as in the previous post
date = (dt.date.today() - dt.timedelta(days=1)).strftime('%Y-%m-%d')
date_start = (dt.date.today() - dt.timedelta(days=DAYS)).strftime('%Y-%m-%d')

# General
data_query = gdata.analytics.client.DataFeedQuery({
    'ids': table_id,
    'start-date': date_start,
    'end-date': date,
    'dimensions': 'ga:date',
    'sort': 'ga:date',
    'metrics': 'ga:visits,ga:avgTimeOnSite'})
feed = my_client.GetDataFeed(data_query)
visits = [int(entry.metric[0].value) for entry in feed.entry]
max_visits = max(visits)
print '=== General ==='
VISIT_WIDTH = len(str(max_visits))
for y in range(CHART_HEIGHT, -1, -1):
  if y == CHART_HEIGHT:
    sys.stdout.write('%d |' % max_visits)
  else:
    sys.stdout.write('%s |' % (' ' * VISIT_WIDTH))
  for x in range(-DAYS, 0):
    vst = visits[x]
    # Mark the cell when vst / max_visits >= y / CHART_HEIGHT
    if vst * CHART_HEIGHT >= y * max_visits:
      sys.stdout.write('#')
    else:
      sys.stdout.write(' ')
  sys.stdout.write('\n')
print '%s0 +%s' % (' ' * (VISIT_WIDTH - 1), '-' * DAYS)
visits_change = 100.0 * (visits[-1] - visits[-2]) / visits[-2]
avg_time = float(feed.entry[-1].metric[1].value)
avg_time_before = float(feed.entry[-2].metric[1].value)
avg_time_change = 100.0 * (avg_time - avg_time_before) / avg_time_before
print '  %s visits (%7.2f%%)' % (visits[-1], visits_change)
print '  Average time on site: %s seconds (%7.2f%%)' % (avg_time, avg_time_change)
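The scaling rule in the chart loop above (mark a cell when vst / max_visits >= y / CHART_HEIGHT, compared with integers to avoid float rounding) can be tried standalone. This is a minimal sketch of the same idea; the ascii_chart helper name is mine, not from the original script:

```python
def ascii_chart(values, height=7):
    """Render values as rows of '#' columns; the peak value fills the height."""
    peak = max(values)
    rows = []
    for y in range(height, -1, -1):
        # A column gets a mark when value / peak >= y / height,
        # rewritten as value * height >= y * peak to stay in integers
        rows.append(''.join('#' if v * height >= y * peak else ' '
                            for v in values))
    return rows

for row in ascii_chart([1, 3, 7, 4, 2]):
    print(row)
```

The baseline row (y == 0) is always full, which is why the real script draws the `0 +---` axis as a separate final line.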

Last month, I used HC to draw a heart. Now, I am using Google Chart API to do the same thing, and the heart is even prettier!

Image source url:|-1&chs=157x157&chfd=0,t,0,6.283,0.001,(2-2*sin(t)%2Bsin(t)*sqrt(abs(cos(t)))%2F(sin(t)%2B1.4))*cos(t)|1,t,0,6.283,0.001,(2-2*sin(t)%2Bsin(t)*sqrt(abs(cos(t)))%2F(sin(t)%2B1.4))*sin(t)&chds=-3,3,-4.5,1.5&chm=D,FF0000,0,0,1|B,FF0000,0,0,0
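Incidentally, the parametric curve encoded in the URL's chfd parameter can be sampled locally, with no Chart API involved, to see that it stays inside the chds=-3,3,-4.5,1.5 ranges. A quick sketch:

```python
import math

def heart_point(t):
    # The radial part shared by both coordinates, as encoded in chfd:
    # 2 - 2*sin(t) + sin(t)*sqrt(abs(cos(t))) / (sin(t) + 1.4)
    r = (2 - 2 * math.sin(t)
         + math.sin(t) * math.sqrt(abs(math.cos(t))) / (math.sin(t) + 1.4))
    return r * math.cos(t), r * math.sin(t)

# Sample t from 0 to 6.283 in steps of 0.001, matching the URL
points = [heart_point(i * 0.001) for i in range(6284)]
xs, ys = zip(*points)
```

The bottom tip of the heart sits near y = -4, which is why the URL scales the y axis down to -4.5.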

And you can play with it in Live Chart Playground[1]:


Have hearty!

[1] is gone.


Now I use Google Docs to record the numbers; see this doc. It's easier to add a data point; the only problem is it's too ugly. (2010-06-03)

The chart below is the weekly users of Keep Last Two Tabs, a Chrome Extension of mine. This extension does what it claims; it's silly, and it's amazing to see the number of users actually growing.

A week ago, I stumbled upon Flot, a pure JavaScript plotting library for jQuery. It produces nicer and prettier charts compared to Google Visualization API's Annotated Time Line, which is Flash-based. Here is a post about how I used it.

I feel Annotated Time Line is easier (or less customizable) to use because you don't need to write much code, while Flot is a more comprehensive library (and it's open source): you get more chart types and can make the chart do whatever you want if you know JavaScript.

This is another not-so-useful script of mine. It is a Bash script that gathers search result counts via the Google/Yahoo/Bing APIs to make a historical chart of a specified keyword, using the Annotated Time Line (with no annotations, :-)) of Google Visualization API.

I made this script because I wanted a historical chart of the keyword livibetter. Yep, I search my nickname regularly, I admit it! I like watching the number of results climb, which makes me feel better. :-D

I wasn't planning on using Yahoo and Bing because their APIs require AppIDs, and the IDs would be exposed to the public if I wrote this in Bash. I don't like that, but I couldn't resist seeing the result counts from them.

Because the script is still new, I don't have much data to show you. The following chart covers about two weeks of collection. (Bing results were not included.)

The following is a screenshot of the rendered HTML page:

I googled myself and made a chart.

Please be aware of a few things if you want to use this script:
  • You can use cron to run it regularly, several times a day. Don't worry, it will only update the data file once a day.
  • It will only update when all three counts from the three search engines are available. If any of them fails to return a result, you may have missing data. But that should be okay; the counts do not change much from day to day.
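The once-a-day guard described above boils down to checking whether the data file already has an entry dated today. The script itself is Bash, but the idea can be sketched in a few lines of Python; the should_update name and the one-line-per-day, date-first file format are my assumptions, not the script's actual format:

```python
import datetime as dt
import os

def should_update(data_file):
    """Return True when data_file has no entry for today yet."""
    today = dt.date.today().strftime('%Y-%m-%d')
    if not os.path.exists(data_file):
        return True
    with open(data_file) as f:
        lines = f.read().splitlines()
    # Entries are appended in order, so only the last line's date matters
    return not (lines and lines[-1].startswith(today))
```

Cron can then invoke the script as often as you like; runs after the first successful one of the day become no-ops.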

I just tried to add two entity counts to my app's statistics page. Then I found out that the statistics API (released on 10/13/2009, in version 1.2.6) is not available on the development server.

You can run the following code without errors:
from google.appengine.ext.db import stats
global_stat = stats.GlobalStat.all().get()

But global_stat is always None.

So I ended up with code as follows:
from google.appengine.api import memcache

db_blog_count = memcache.get('db_blog_count')
if db_blog_count is None:
  blog_stat = stats.KindStat.all().filter('kind_name =', 'Blog').get()
  if blog_stat is None:
    db_blog_count = 'Unavailable'
  else:
    db_blog_count = blog_stat.count
  memcache.set('db_blog_count', db_blog_count, 3600)

The documentation didn't explicitly mention whether the statistics are available on the development server or not (maybe I didn't read carefully), and neither did the Release Notes.

PS. I know the code is awful, str / int types mixed, terrible. But I am too lazy to add an if clause in the template file to check whether db_blog_count is None, or something like -1, or anything else that represents unavailable data.

PS2. The inner condition should be just if blog_stat: (fourth line), with the next two statements swapped, if you know what I mean.
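For illustration, the tidier shape PS2 hints at could be factored like this. The cached_kind_count helper and its injected cache_get/cache_set/kind_stat_get parameters are hypothetical, standing in for memcache.get, memcache.set, and the KindStat query so the sketch runs outside App Engine:

```python
def cached_kind_count(kind_name, cache_get, cache_set, kind_stat_get):
    """Return the entity count for kind_name, cached for an hour.

    cache_get/cache_set stand in for memcache.get/memcache.set;
    kind_stat_get stands in for the stats.KindStat query.
    """
    key = 'db_%s_count' % kind_name.lower()
    count = cache_get(key)
    if count is None:
        stat = kind_stat_get(kind_name)
        # The positive test first, as PS2 suggests
        count = stat.count if stat else 'Unavailable'
        cache_set(key, count, 3600)
    return count
```

On App Engine proper you would pass the real memcache functions and a lambda wrapping stats.KindStat.all().filter('kind_name =', kind_name).get().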