In case you don’t know, the RailsRumble is an annual event where contestants have 48 hours to build the most awesome webapp they can. We were one of the sponsors of the event, and offered to help out with building a realtime dashboard for it. The dashboard has gone rather quiet now that the event has finished, but you can see a video of it in action here:
Rails Rumble real-time dashboard from Ismael Celis on Vimeo.
Tweets, IRC messages and commits show up in real time, and are matched to the teams in the grid.
We wanted to give people an idea of what Pusher can actually do, as well as getting our logo on the site. It was also a great opportunity to ‘eat our own dogfood’, since we always need to keep checking that Pusher is completely painless to get set up with.
The dashboard received a lot of praise via Twitter, so I thought it would be a good idea to look into how we went about building it. I’m going to talk about a few areas of the actual app, and give a couple of tips that you can use when building your own Pusher interfaces (such as fake event emitters).
The app was separated into two distinct components: the backend services that gathered the data and pushed it out, and the client-side dashboard that rendered everything in the browser.
Our data sources were numerous: team and collaborator data would come from a JSONP API, while IRC messages, tweets and commits would come in via WebSockets. Later we would request historical feeds of these latter events to prepopulate the various sections of the page.
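To give a flavour of the wiring, here is a minimal sketch of how those WebSocket feeds can arrive through the Pusher JavaScript client. The app key, channel name, event names and the matchToTeam() helper are placeholders for illustration, not the dashboard’s actual code.

var pusher = new Pusher('YOUR_APP_KEY'); // placeholder key
var channel = pusher.subscribe('rumble-dashboard'); // channel name is assumed

function matchToTeam (data) {
  // look up the relevant team in the grid and append the event to its cell
}

channel.bind('tweet', matchToTeam);
channel.bind('irc', matchToTeam);
channel.bind('commit', matchToTeam);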
This app really demonstrates a new way of designing interfaces, where the view layer is highly de-coupled from the backend. In our part of the project, there was almost no server side code (except for some spoofed data sources). It is really great to be able to build a clever client that maintains its own state in the browser. In this design, you just point your HTML and JS at some data sources, and away you go.
The two parts of this app were built on opposite sides of the world, with a huge time zone difference. We agreed early on what we expected of each other in terms of data, and this was critically important.
This is a new way that mashups can work, which is pretty exciting. The traditional method has always been to pull in the data sources on the server, munge them there, and then spit the data out to the client. In this model, the client brings together the data sources itself.
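As a tiny illustration, pulling in one of those data sources client-side is just a JSONP request straight from the browser; the URL and the shape of the response below are assumptions made for the sake of the example.

// fetch the team data over JSONP (URL and fields are made up for illustration)
$.getJSON('http://example.com/api/teams.json?callback=?', function (teams) {
  $.each(teams, function (i, team) {
    console.log(team.name); // each entry gets handed to the client-side models described below
  });
});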
We are big fans of client-side MVC; it is a great pattern that is well worth employing. We didn’t have the most highly structured MVC setup, but we put a bit of effort into separating the tiers. As described in a bit more detail below, we used JS-Model for the models, a custom templating solution for the views, and the majority of the controller code was handled as initialisers and domready functions.
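In practice that meant the ‘controller’ layer was little more than a handful of initialisers kicked off on domready, along these lines (the function names and bodies here are illustrative, not the real code):

function initTeams () {
  // fetch team and collaborator data over JSONP and build the models
}

function initFeeds () {
  // subscribe to the Pusher channel and bind the tweet/irc/commit handlers
}

$(function () {
  initTeams();
  initFeeds();
});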
To keep the state of our models, and give us the means to iterate through them etc., we used our old friend JS-Model. There have been a few similar projects appearing recently, but this library is our favourite, and has some great functionality.
The first models we created, and the most important, were Teams and their Collaborators. These were the focus of the application, and the idea was to show how active each team was being.
We tend to keep separate directories for our JavaScript files, so we built a little file loader to take care of including them:
function require (file) { var script = $("
The contents of team.js were initially fairly dull, but essentially created a factory/collection handler for our data:
var Team = Model('team');
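To give an idea of how this gets used, here is a rough sketch of populating the collection and looking a team up again later using js-model’s collection methods; the sample data and attribute names are made up for illustration.

// build Team instances from the fetched data (sample attributes are made up)
var teamData = [ { id: 1, name: 'Team One' }, { id: 2, name: 'Team Two' } ];
$.each(teamData, function (i, attributes) {
  Team.add(new Team(attributes));
});

// later, when a tweet, IRC message or commit arrives, match it to its team
var team = Team.find(1);
if (team) console.log(team.attr('name'));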