As of 2016-02-26, there will be no more posts for this blog. s/blog/pba/

You have probably noticed (really, you have already?) that if you click a link with any mouse button, or follow it with Enter, the link gets highlighted with a blue background.

Note

This blog no longer uses this effect, and neither does YouTube. (2015-12-25T05:11:08Z)

The idea came from YouTube's newly designed homepage; I noticed the link feature while it was still experimental. I feel it's a nice idea, especially when you are selectively watching videos from a list, each in a new tab. You know the position of the one you just finished watching, then you can move on.

I stole the idea but not the code; I didn't even use a tool to read their code, JavaScript or CSS, since it's simple stuff. All you really need is the "I've got an idea!" moment, and that's what I didn't have.

As for the code I made, you can read the diffs: the JavaScript diff and the CSS diff. If you know these languages, it's simple code, as I said. The only special thing I want to mention is that the margin compensates for the extra space the padding takes up. The padding is there because, without it, the edge of the text would touch the edge of the blue box.

Here are sample link styles with padding and without padding.

Since it's possible to open a link using either the mouse or the keyboard, two events need to be handled.

My original, modified idea was an indicator flying to the link the user just clicked: the indicator takes off from the previously clicked link, then flies to the new one. But that seemed too much, overdone. How about a simple background color fade in and out, using a CSS transition?

Try pressing the right mouse button on any link. Like it?

I was reading the UTF-8 and Unicode FAQ for Unix/Linux and found that many of its links are dead. That's how I came to write this script, linkckr.sh.

http://farm6.static.flickr.com/5100/5416743679_3bb1f5e404_z.jpg

Give it a filename or a URL:

./linkckr.sh test.html
./linkckr.sh http://example.com

It does the rest for you. You might want to pipe it through tee, because there is no user interface; it just prints results, and if a page has many links, you may flood the scrollback buffer. The script is simple; actually, it does too much for me. (Ha, who needs coloring.)
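For example (the log filename here is only an illustration):

./linkckr.sh http://example.com | tee linkckr.log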

I don't grep the links out of the HTML source; a regular expression always misses something, or it ends up looking like the Hulk. I decided to see if I could use xmllint to get valid links. That means only links from normal <a/> elements, not ones hidden somewhere or opened via JavaScript, nor URLs that only show up when you read the HTML plainly without interpreting it as HTML. It only takes HTTP(S) URLs to check.

The checking uses cURL with HEAD requests only, so you might get a 405, and this script does not re-check with a normal GET request. You may also see 000, which usually means a timeout after waiting 10 seconds for a response. If a URL is redirected with a 3xx, cURL is instructed to follow it, and the final URL is shown to you.

There were a few interesting points while I wrote this script. Firstly, I learned that xmllint can select nodes with XPath:

xmllint --shell --html "$1" <<<"cat //a[starts-with(@href,'http')]"

And standard input is treated as command input in xmllint's shell.
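Pulling the href values out of that shell output is then just text processing. Here is a hedged sketch of the idea, not the exact code in linkckr.sh:

# list the matching <a> nodes, then keep only the URL values
xmllint --shell --html "$1" <<<"cat //a[starts-with(@href,'http')]" \
    | grep -o 'href="[^"]*"' \
    | sed 's/^href="//; s/"$//'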

Secondly, cURL supports a custom output format via -w:

curl -s -I -L -m 10 -w '%{http_code} %{url_effective}\n' "$url"

Note that even when you specify a format, the response headers are still printed; the formatted output is appended at the end. The script retrieves that last line using sed '$q;d'. If you are not familiar with such syntax, you should learn it; sed is quite interesting. It then parses the line with the shell built-in read, another interesting thing I learned by myself long ago. Using cut is not necessary and not as good, though read would have problems with additional spaces if those had significant meaning.
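Put together, the parsing step looks roughly like this; a sketch only, and the variable names are mine, not necessarily the ones in linkckr.sh:

# keep only the last line (the -w output), then split it with read
line=$(curl -s -I -L -m 10 -w '%{http_code} %{url_effective}\n' "$url" | sed '$q;d')
read -r code effective_url <<< "$line"
echo "$code $effective_url"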

The rest is boring Bash. There is one bug I have noticed: an HTML entity inside a link would cause an issue.
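For instance, an href containing &amp; would be handed to cURL with the raw entity still in it. A hypothetical one-line workaround, not something linkckr.sh currently does, would be to decode that entity before checking:

# decode a literal "&amp;" back to "&" before passing the URL to cURL
url=${url//&amp;/&}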

A screenshot is the best living proof:


It was taken on this post on Blogger Buzz.

Long story short, the links in this screenshot are all unrelated to that post, except for one blog, which is an awful splog stealing Google's content. They are listed because these three blogs (which I have checked, as you can tell from the different link text color) use the Blog List gadget. One thing they have in common is that their blog lists are quite long. I don't know if that's a working trick to get your own blog exposed on an exponential scale.

On this blog, after I updated to the HTML5 template, I removed "Links to this post" from the template. Well, now I see I happened to make the right decision.

This kind of situation is not limited to Blogger, but here it's a publicly visible issue. For probably a year or two, I had a blog that received lots of inbound links, reported by Google Webmaster Tools, all from one blog. That blog owner had kindly listed my blog in his sidebar; it wasn't spam, actually, since both our blogs are about Linux.

The fundamental problem is that Blogger uses Google Search results for this, if I recall correctly. There is a flaw: it looks at the whole page, not just the blog post content. It's not like a trackback, where linking to another blog's post means you deliberately link back with related content.

I am not recommending that you remove "Links to this post" or the Blog List gadget, but you should reconsider them if you care about quality, in your blog and in others' blogs.

Maybe we should rename "Links to this post" to "Probably (definitely not) related links to this post," as I did for my related posts list, shown below.

Do you know the Next Blog button on Blogger.com's navigation bar, or WordPress.com's Next button? Or even Delicious.com's Randomizer button?



Now you can have them all (and even more) with just one button on your bookmark toolbar!



Build Next Button helps you create your own Next button, as long as you have random sources.



Let's kill your free time by clicking the NEXT button!



If you have another random source, leave me a comment with the link; I might add it as a default random source.

I already noticed nine months ago that Googlebot parses things that look like links. Once again, they still have a lot to do with their search algorithm.

Two days ago, I started seeing a strange request to my Google App Engine application, one which resulted in a 404:


At first, I had no idea where this came from, but I knew it must have something to do with that application's JavaScript. Later I checked the log and got:


I didn't notice anything until I saw the IP, 66.249.85.129, which is quite familiar: it's from Google. That JavaScript has a block like:


I think that's clear enough. However, I won't obfuscate the JavaScript just to get rid of this. Even though I will keep seeing 404 reports, at least I know when Googlebot comes. Hit me, Google!