New API: Expand URL

Today, the Internet is flooded with shortened URLs. All links shared on Twitter are shortened. There is nothing wrong with that; I just don't feel comfortable clicking a shortened URL without knowing where it will take me. How paranoid I am! There are several reasons I don't trust shortened URLs:

1. They might contain tracking or referral codes. SlickDeals uses them heavily (to make money, of course). Again, nothing wrong with that: they provide you a service and they deserve to earn some money from it.
2. They might hide a malicious URL, such as an XSS payload. I hate those!

There are several expand-URL services out there, but they only support a limited set of URL-shortening services. What I want is something more: I want to be able to track the final URL after a whole chain of redirections. So I wrote this API: <your_url_here_no_escape_needed> Run a shortened link through it and you will be amazed how many hops it can take to reach the final destination. Suggestions welcome!
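The same redirect-chasing idea can be sketched from the shell with plain curl (a sketch, not the API itself; the local file URL below is just an offline stand-in for a real shortened link):

```shell
# -L follows every redirect, -o /dev/null discards the body, and -w prints
# the final URL plus the hop count once the chain ends
expand_url() {
  curl -sL -o /dev/null -w '%{url_effective} (%{num_redirects} redirects)\n' "$1"
}
# demoed offline with a local file URL, which expands to itself in 0 hops;
# point it at any bit.ly or t.co link to see the real chain
expand_url "file:///etc/hosts"
```

`%{url_effective}` is the URL curl actually ended up at after all redirects, which is exactly the "final destination" the API reports.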

Two websites are likely to be the same company

Normally I don't care who owns what. However, when it comes to spam email, it's a completely different problem. I hate spam. Yes, I set up a dedicated domain to act as a honeypot for spam, and I generate a unique email address every time I sign up for a website. This week I received more than 3 emails from one company at an address I had only ever submitted to the other; I don't remember ever giving my email to the sender. Surprised? On their websites there is no link between the two, and they don't claim to be the same company either. So who the hell gave them permission to spam me? They share the same address and the same phone number, and they used to be hosted on the same server. In my opinion, they are likely to be the same company, or under the same owner. Some information:

Speedtest for your Linux server

Have you ever wondered how to test the network speed (Internet speed, specifically) of your server? With a GUI you can use one of the browser-based tests, but what about a CLI server, where you only have the command line? There are indeed several options:

1. speedtest-cli
Install: easy_install speedtest-cli
Use: speedtest

2. wget
You first need to find a "big" file; my favorite is the Ubuntu image.
Use: wget -O /dev/null your_link
This will not actually save anything on your system, so you don't have to deal with cleanup after you are done.
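curl can do the same trick as wget and report the average transfer speed directly; this is a sketch, demonstrated offline with a local file, and the real test URL (a big file on a fast mirror) is up to you:

```shell
# download to /dev/null (nothing is saved) and print the average download
# speed in bytes/sec; substitute your big test file for the file:// URL
net_speed() {
  curl -sL -o /dev/null -w '%{speed_download}\n' "$1"
}
net_speed "file:///etc/hosts"
```

`%{speed_download}` is curl's built-in average-speed counter, so no separate timing arithmetic is needed.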

DigitalOcean droplets (at least in the NYC2 region) are having trouble connecting to one of their default DNS servers

I noticed a significant degradation in my droplets' network performance: it took forever to open a connection. It started around last week, I guess. Restarting the server did not help, and I thought it was just temporary. However, today I noticed that DigitalOcean by default assigns 2 DNS servers to every droplet in the NYC2 region:

nameserver
nameserver

Here is the result of pinging both servers from my droplet:

tuananh@codepie:~$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=46 time=13.7 ms
64 bytes from icmp_req=2 ttl=46 time=13.8 ms
64 bytes from icmp_req=3 ttl=46 time=13.8 ms
64 bytes from icmp_req=4 ttl=46 time=13.8 ms
64 bytes from icmp_req=5 ttl=46 time=13.7 ms
64 bytes from icmp_req=6 ttl=46 time=13.7 ms
64 bytes from icmp_req=7 ttl=46 time=13.7 ms
64 bytes from icmp_req=8 ttl=46 time=13.7 ms
64 bytes from icmp_req=9 ttl=46 time=13.7 ms
64 bytes from icmp_req=10 ttl=46 time=13.7 ms
^C
--- ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9014ms
rtt min/avg/max/mdev = 13.705/13.774/13.883/0.147 ms

tuananh@codepie:~$ ping
PING ( 56(84) bytes of data.
^C
--- ping statistics ---
167 packets transmitted, 0 received, 100% packet loss, time 167318ms

Performing a dig query shows the same problem:

tuananh@codepie:~$ dig @
; <<>> DiG 9.8.1-P1 <<>> @
;; global options: +cmd
;; connection timed out; no servers could be reached

As you can see, my droplet somehow cannot reach that DNS server at all. A simple switch to another resolver as the main DNS server, and things were back to normal.
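For reference, switching resolvers is just a two-line edit to /etc/resolv.conf. The post doesn't say which resolver I switched to; Google's well-known public DNS servers below are simply a common example:

```conf
# /etc/resolv.conf -- example using Google's public resolvers
nameserver 8.8.8.8
nameserver 8.8.4.4
```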

Free course: The Complete iOS 7 Course – Learn by Building 14 Apps (was $499)

I have always wanted to learn how to build an iOS app, and I have some ideas in mind. However, I'm just lazy and keep procrastinating. Today I found a free course (via SlickDeals) on building iOS apps, and I have already registered (and you should too). It's free (was $499). Link:

Markdown is available for self-hosted WordPress through Jetpack

Yay! I've just noticed this. It's funny that searching for Markdown on WordPress returns this article: Write (More) Effortlessly With Markdown. Basically, it mentions that Markdown is only available for WordPress.com blogs, via a simple switch in the configuration. There are many Markdown plugins out there, but I am skeptical of them, so I always try to use the "official" version. When I checked my Jetpack installation, the feature was already there! So if you want to write in Markdown style (like on GitHub and Stack Overflow), just enable it (why not?).

MySQL bug prevents you from connecting to a custom port on a MySQL server

It took me a great deal of time and effort to figure this out. With the MySQL client you can specify a hostname and port to connect to a MySQL instance on a different machine and/or a different port, rather than the default localhost instance on your machine. For example, I have 2 MySQL instances running on two different machines, and one of them is behind a firewall; therefore, I need an SSH tunnel to forward requests to port 3306 on the firewalled machine. Things got a little complicated when I tried to connect using --port or -P. Since I used the same password for both MySQL servers (which I shouldn't), it took me a while to figure out that I was still connecting to the localhost instance. The reason is that when you specify only -P (without a TCP host), mysql uses the local unix socket and silently ignores the port. Here is what you need to do:

mysql -P port --protocol TCP

Adding --protocol TCP forces mysql to use a TCP connection, and it will therefore connect to the remote instance instead. Hope that helps!
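A sketch of the whole workflow, assuming the firewalled box is reachable as user@remote-host and using a hypothetical local forward port 3307 (both placeholders; the tunnel line is commented out for that reason):

```shell
# step 1: forward local port 3307 to port 3306 on the firewalled machine
#   ssh -f -N -L 3307:localhost:3306 user@remote-host

# step 2: build the client invocation; --protocol=TCP stops mysql from
# silently using the local unix socket when only -P is given
mysql_cmd() {
  echo "mysql -h $1 -P $2 --protocol=TCP -u $3 -p"
}
mysql_cmd 127.0.0.1 3307 root
```

The function just prints the command to run; the important part is that --protocol=TCP is always present, so -P can never be silently ignored.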

Dropbox-like synchronization for Linux

One of the requirements for load-balanced servers is that the servers' files need to be synchronized. Otherwise, part of your visitors may see your new WordPress post but not the attached photos. rsync alone can't do the job properly, because any synchronization tool needs to look at the previous state of the files in order to determine whether files have been added, changed, or deleted. Fortunately, there are several tools:

BitTorrent Sync: a fully automatic solution, and as close to Dropbox as it gets. You just download <code>btsync</code>, generate the directory's secret key, and enter it into the instance on the other server. Done.

Unison: this program uses the rsync algorithm by default (but you can change that), and you need a little trick to run it properly. My favorite command is:

unison -batch -prefer newer -silent -owner -group -times -perms 777 //dir1 //dir2

The flags mean:
-batch: run in batch mode, without asking for confirmation
-prefer newer: prefer the newer file on conflict
-owner: preserve owner information
-group: preserve group information
-times: preserve modification times
-perms 777: preserve permissions

That's it. If you know any other tools, let me know in the comment section. Happy sync'ing!
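To see why "previous state" matters (the reason plain rsync can't classify deletions), here is a toy sketch: snapshot a directory's file list, change the directory, and diff the two snapshots. The paths are temporary; this is an illustration, not a sync tool.

```shell
# snapshot = sorted list of files; comparing two snapshots tells you
# which files were added and which were deleted in between
snapshot() { (cd "$1" && find . -type f | sort); }

dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
snapshot "$dir" > "$dir.old"    # state before the change
rm "$dir/a"; touch "$dir/c"
snapshot "$dir" > "$dir.new"    # state after the change
echo "added:   $(comm -13 "$dir.old" "$dir.new")"   # ./c
echo "deleted: $(comm -23 "$dir.old" "$dir.new")"   # ./a
```

Without the old snapshot, a missing file and a file that never existed look identical, which is exactly the information BitTorrent Sync and Unison track for you.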

My perfect setup (hint: CloudFlare, DigitalOcean, StartSSL, nginx, apache and private servers)

My situation is a little bit complicated:

I have a powerful server completely behind a firewall (no inbound connections from outside).
I want to run several websites (mostly blogs).
I want to support SSL.

At the beginning, DigitalOcean was the best choice: I would have my own server, host unlimited websites, have full control, and DigitalOcean is blazingly fast. I selected the smallest plan, with a 20GB SSD and 512MB RAM; it should be more than enough for my blogs. I installed my own LAMP stack and got my own SSL certificate from StartSSL (you should get your own too; it's free!).

Everything was fine until, after several weeks, my server started crashing every few hours. There were lots of requests hitting wp-comments-post.php, xmlrpc.php and wp-login.php, and unfortunately I can't disable them. Apache's mod_security and mod_qos couldn't help much. I had to write a temporary cron script to restart the apache2 daemon whenever the server load exceeded 20. That didn't improve things much; my server still crashed. Then came nginx: didn't work. Then CloudFlare: the same. Until I decided to use my dedicated server to handle the requests. Then it worked! Not perfectly, but we will get there later. In short, my configuration looks like this:

INTERNET <-> CLOUDFLARE <-> NGINX (DIGITALOCEAN) <-> APACHE (MY DEDICATED SERVER)

There are several technical challenges to solve:

1. How can I forward requests to my dedicated server (completely behind a firewall)?
2. How can my endpoint (apache on my dedicated server) recognize visitors' IPs correctly, since there are several layers in between?

The solution to the first challenge is actually very simple: an SSH tunnel. There is one catch: each website on my dedicated server has to use its own port. Here is why: assume I have 2 websites, and I assigned port
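To make the forwarding concrete, here is a sketch of what the nginx side of that chain can look like, assuming a reverse SSH tunnel already exposes the dedicated server's apache on the droplet's local port 8081 (the port, the server name, and the tunnel endpoint are all placeholders):

```nginx
# on the dedicated server, a reverse tunnel publishes local apache:80
# on the droplet's port 8081 (hypothetical):
#   ssh -f -N -R 8081:localhost:80 user@droplet

server {
    listen 80;
    server_name example.com;              # placeholder domain
    location / {
        proxy_pass http://127.0.0.1:8081; # the tunnel endpoint
        # pass the real visitor IP down the chain so apache can log it,
        # which is the second challenge above
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With one tunnel port per site, each nginx server block points proxy_pass at a different 127.0.0.1 port.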