Have you ever gotten an idea stuck in your head? One that you start your day thinking about in the shower, and then try as you might, you just can't get rid of it?
That's what happened to me a few months ago. Specifically, I started thinking about Twitter, and the problems that it was experiencing. At that time, the "Fail Whale" was making very frequent appearances, indicating that Twitter was having problems keeping up with the demands being made on it. My Tweeps (social networking friends on Twitter), all with short attention spans like me, began chattering about moving to a more reliable platform.
But, in my estimation, Twitter was still the best platform to remain on. Among other things, it was actually the most "mature" in its class - if you can call something a mere two years old mature.
So, Jason, what is Tourniquet?
After quite a few weeks of thinking about the various problems that I was aware of, I came up with a pretty simple solution. I needed to bounce some ideas off of a sounding board, so I fired off an email to some of my friends: Alan Stevens, Keith Elder, and Micheal Eaton.
The contents of that email still serve as a pretty good overview for my vision, which has been realized as "Tourniquet":
Think of a multi-faceted approach to fixing Twitter's issues that ultimately concentrates on reducing the number of calls that you make into Twitter itself, as well as provides transparent access to Twitter data during Twitter downtimes:
1. A personal Twitter proxy.
This is simply an API passthrough service that you host yourself. The thought process here is that Witty and other clients could be configured with the server to use when executing an API call (i.e., by default, it's http://www.twitter.com/, but someone hosting a Twitter proxy would be able to specify something like http://thisismydomainyo.com/tourniquet/ ). Aside from the server, there's nothing different about the request or response. That is, the client might hit http://thisismydomainyo.com/tourniquet/statuses/friends_timeline.xml instead of
http://www.twitter.com/statuses/friends_timeline.xml.
This is not a public/shared service - it would be intended for just the user.
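The passthrough idea boils down to rewriting the request URL and forwarding everything else untouched. Tourniquet itself targets ASP.NET, but the mapping is easy to sketch in Python; `UPSTREAM` and `rewrite_url` are my own illustrative names, not anything from the actual codebase.

```python
# Sketch of the URL mapping a passthrough proxy performs. The mount point
# ("/tourniquet") is stripped, and the rest of the path goes to Twitter as-is.
UPSTREAM = "http://www.twitter.com"

def rewrite_url(proxy_path, mount_point="/tourniquet"):
    """Strip the proxy's mount point so the remaining path (and any query
    string) passes through to the Twitter API unchanged."""
    if proxy_path.startswith(mount_point):
        proxy_path = proxy_path[len(mount_point):]
    return UPSTREAM + proxy_path

print(rewrite_url("/tourniquet/statuses/friends_timeline.xml"))
# -> http://www.twitter.com/statuses/friends_timeline.xml
```

The proxy would then issue the rewritten request on the client's behalf and relay the response body and status code verbatim.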
2. Obfuscation/encryption
Network Nazis suck, and so do people who brag about having smart phones and data plans. :-) There may be other reasons why someone would want the URL obfuscated and the response from Twitter encrypted when transferred between the Twitter Proxy and the client. But, that's what I'm talking about. Clients would need to be modified to support encryption/obfuscation before the user can utilize it.
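To make the idea concrete, here's a deliberately toy symmetric scrambler in Python: client and proxy share a secret, and the same operation both obfuscates and recovers the payload. This is illustration only, not real cryptography; a real deployment would use SSL or a proper cipher.

```python
import hashlib
from itertools import cycle

def toy_scramble(data: bytes, secret: str) -> bytes:
    """Toy XOR stream keyed off a shared secret. NOT secure crypto --
    just a sketch of symmetric client/proxy obfuscation."""
    key = hashlib.sha256(secret.encode()).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

# Symmetric: applying it twice with the same secret restores the payload.
payload = b"<statuses>...</statuses>"
wire = toy_scramble(payload, "shared-secret")
assert toy_scramble(wire, "shared-secret") == payload
```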
3. Caching
While the Tourniquet proxy is fetching the information from Twitter, it might as well cache it to some form of persistent storage. This can be used to save some calls into the Twitter API, especially for historical data (which is also useful for when they disable access to historical data during times of heavy demand).
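The caching layer is a simple fetched-at/time-to-live lookup. Tourniquet persists to SQL Server, but the shape of the idea fits in a few lines of Python (class and method names here are mine, for illustration):

```python
import time

class TimelineCache:
    """Tiny in-memory cache keyed by request URL. The point is the same
    as Tourniquet's persistent store: serve a recent copy of a response
    instead of spending another Twitter API call."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.entries = {}   # url -> (fetched_at, response_body)

    def get(self, url):
        hit = self.entries.get(url)
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]   # fresh enough; no API call needed
        return None         # miss (or stale): caller fetches from Twitter

    def put(self, url, body):
        self.entries[url] = (time.time(), body)
```

Historical data is the sweet spot: once a tweet is cached, it never changes, so a persistent cache can serve it forever, even while Twitter has history disabled.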
4. Store and Forward
Is Twitter down? Damn! But, no worries with Tourniquet! Your status update is saved until Twitter comes back up. This is really no different than other store-and-forward services, except you're not giving your Twitter credentials to some unknown third party website.
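Store-and-forward is just a queue that drains when the upstream call starts succeeding again. A minimal Python sketch (with `post` standing in for the real Twitter API call, which returns success or failure):

```python
from collections import deque

class StoreAndForward:
    """Hold status updates while Twitter is down, then replay them in
    order once it comes back up."""

    def __init__(self, post):
        self.post = post        # callable: status -> bool (True on success)
        self.pending = deque()

    def update(self, status):
        self.pending.append(status)
        self.flush()            # opportunistically try right away

    def flush(self):
        # Drain in order; stop at the first failure so ordering is preserved.
        while self.pending:
            if not self.post(self.pending[0]):
                break           # still down; keep the rest queued
            self.pending.popleft()
```

Because the proxy runs on your own server, the queued credentials and unpublished statuses never leave your control, unlike a third-party store-and-forward site.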
5. Automated fetching
Perhaps Tourniquet can periodically fetch your timeline for you and cache it, either by means of some external triggering or by a timer. Then, when you hit the proxy to check for updates, it's already there (and your current request would likely trigger another fetch just to make sure that it has the latest data).
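The timer variant could look something like this Python sketch, where `fetch` is whatever grabs `/statuses/friends_timeline.xml` from Twitter. (`Prefetcher` and its methods are hypothetical names of mine, not Tourniquet's.)

```python
import threading

class Prefetcher:
    """Keep a recent copy of the timeline on hand so a client request can
    be answered immediately from self.latest."""

    def __init__(self, fetch, interval=120):
        self.fetch = fetch          # callable that hits the Twitter API
        self.interval = interval    # seconds between background fetches
        self.latest = None

    def refresh(self):
        """One fetch cycle; also what a client request would trigger."""
        self.latest = self.fetch()
        return self.latest

    def start(self):
        """Fetch now, then re-arm a background timer for the next fetch."""
        self.refresh()
        timer = threading.Timer(self.interval, self.start)
        timer.daemon = True
        timer.start()
```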
6. Tribe-Net Sharing/Synchronization
Here's where the service gets interesting: @keithelder and @jfollas follow each other, and both run Tourniquet. So, both proxies can be configured to be able to sync statuses between each other, hopefully saving some Twitter API hits that count towards your hourly usage. (I'm thinking that Direct Messages can somehow be used to announce Tourniquet endpoints). If Twitter is down and there are some status updates that are available (but not yet on Twitter), those can be propagated across the cloud via proxy-to-proxy synchronization. Eventually, they'll show up as actual Twitter statuses.
I picture the communication resembling something like this:
@jfollas's proxy calls @keithelder's proxy, and announces the highest message ID for each person that @jfollas follows. @keithelder's proxy returns a list of statuses for each of those people that it has cached where the message ID is higher (plus any new/unpublished statuses that are in store-and-forward). @keithelder's proxy will need to call @jfollas's proxy to reciprocate the process.
Another possibility is to also synchronize a single person, perhaps with the goal of maintaining a cache of the last 100 tweets per person on your follow list. In this case, @jfollas's proxy will call @keithelder's proxy and list all of the message id's that it has cached for the person. If @keithelder is also following that person (or otherwise happens to have some tweets cached for the person), then any new messages (or any in-between messages that @jfollas's proxy might have missed) will be supplied in the response.
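The announce/reply exchange described above can be sketched in a few lines of Python. Here a cache is just `screen_name -> {message_id: status_text}`, and the function names are mine, purely for illustration:

```python
def sync_request(cache):
    """Caller side: announce the highest cached message ID per followed
    user (0 if nothing is cached yet)."""
    return {user: (max(msgs) if msgs else 0) for user, msgs in cache.items()}

def sync_response(peer_cache, announcement):
    """Peer side: for each announced user, return any cached statuses
    with a message ID higher than what the caller already has."""
    out = {}
    for user, highest in announcement.items():
        newer = {mid: text
                 for mid, text in peer_cache.get(user, {}).items()
                 if mid > highest}
        if newer:
            out[user] = newer
    return out
```

Note that the peer only answers for users the caller announced, so each proxy still sees only the people its owner follows; a reciprocal call in the other direction completes the exchange.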
That was the original concept. The name "Tourniquet" came from the same place where all good project names come from: the Thesaurus. I simply looked for synonyms for the word "bandage", stumbled upon this word, and discovered that it was not already well-known as a software product.
Alright, sounds good. Where is Tourniquet?
Tourniquet is not a product, per se. It's a project, and an open source project at that (MIT license). You can download the source and do just about anything with it from the project site on Codeplex:
http://codeplex.com/tourniquet
To run Tourniquet, you will need to grab the release from Codeplex, set up a database (i.e., run the create scripts), copy the files from the release to your webserver, and then set up a new web application on your webserver. If this sounds too complicated, then perhaps you should wait until it's a little more refined before checking it out. I'm just sayin'... :-P
As I wrote the code, I tried to keep in mind that not everyone has a really sweet hosting deal. Therefore, I targeted what I thought to be the lowest common denominators: ASP.NET 2.0 and SQL Server 2005.
Classifying SQL Server as a LCD, though, bothered me, for a lot of "common man" hosting plans do not include access to a database at all. But, being a SQL Server MVP, I found this to be the quickest way to build the prototype and release the project to Codeplex. The persistence layer actually uses the Provider model, so my goal is to make alternative providers that do not require SQL Server (i.e., maybe XML on the filesystem, or Amazon SimpleDB, etc).
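The provider shape translates to any language: the proxy codes against an abstract storage interface, and concrete providers plug in behind it. A Python sketch (Tourniquet's real providers are .NET classes; these names and methods are illustrative only):

```python
from abc import ABC, abstractmethod

class StatusStoreProvider(ABC):
    """Abstract storage contract the proxy would code against. Concrete
    providers (SQL Server today; XML files or SimpleDB later) implement it."""

    @abstractmethod
    def save_status(self, user, message_id, text): ...

    @abstractmethod
    def load_statuses(self, user): ...

class InMemoryProvider(StatusStoreProvider):
    """Trivial provider -- handy for exercising the proxy with no database."""

    def __init__(self):
        self.data = {}   # user -> {message_id: text}

    def save_status(self, user, message_id, text):
        self.data.setdefault(user, {})[message_id] = text

    def load_statuses(self, user):
        return self.data.get(user, {})
```

Swapping SQL Server for the filesystem or SimpleDB then becomes a configuration change rather than a code change.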
The point that I'd like to drive home is that the current codebase is very much a proof-of-concept or prototype, albeit a fully functional one (I've been using it for a few weeks). People may point at my code and say "Why did you do this like that? Where are your tests? This code sucks!" and that's okay. In its current form, it does most of what I outlined in the email above, so I'm content (working software is the #1 measure of success).
I encourage anyone who wants to participate in taking this prototype to the next level to join the development team. Contact me through the site, Codeplex, or Twitter (@jfollas). I'd be very happy if some enthusiastic people could take on the development of parts of the system and run with it.