Panda vs Zombies: my new Android video game


Over most of the last year I’ve been working with a couple of college friends on an action game for Android, and it’s finally out. It’s built with cocos2d-x in C++, which is by no means a language I like, so hopefully I’ll get to write here about the experience of building it.

In the meantime, here’s the link to install it from the Google Play store, and the trailer of the game, for those interested in checking it out.

Better authentication for socket.io (no query strings!)


This post describes an authentication method for socket.io that sends the credentials in a message after the connection is established, rather than including them in the query string as is usually done. Note that the implementation is already packaged in the socketio-auth module, so you should use that instead of the code below.
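
For reference, using the module looks roughly like this; the option names and the findUser helper are from memory and for illustration, so check the module’s README for the actual API:

var io = require('socket.io').listen(app);

require('socketio-auth')(io, {
  authenticate: function (socket, data, callback) {
    //data is whatever the client sent in its 'authenticate' event;
    //findUser is a placeholder for your own credentials lookup
    findUser(data.username, function (err, user) {
      if (err || !user) return callback(new Error("user not found"));
      return callback(null, user.password === data.password);
    });
  }
});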

The reason to use this approach is that putting credentials in a query string is generally a bad security practice (see this, this and this), and though some of the usual risks may not apply to the connection request, it should be avoided, as there’s no general convention of treating URLs as sensitive information. Ideally such data should travel in a header, but that doesn’t seem to be an option for socket.io, as not all of the transports it supports (WebSocket being one) allow sending headers.

Needless to say, all of this should be done over HTTPS; otherwise no real security can be expected.


In order to authenticate connections, most tutorials suggest doing something like:

io.set('authorization', function (handshakeData, callback) {
  var token = handshakeData.query.token;
  //will call callback(null, true) if authorized
  checkAuthToken(token, callback);
});

Or, with the middleware syntax introduced in socket.io 1.0:

io.use(function(socket, next) {
  var token = socket.request.query.token;
  checkAuthToken(token, function(err, authorized){
    if (err || !authorized) {
      next(new Error("not authorized"));
    } else {
      next();
    }
  });
});

Then the client would connect to the server passing its credentials, which can be an authorization token, a user and password, or whatever value can be used for authentication:

socket = io.connect('http://localhost', {
  query: "token=" + myAuthToken
});

The problem with this approach is that it sends the credentials in a query string, that is, as part of a URL. As mentioned, this is not a good idea, since URLs can be logged and cached and are not generally treated as sensitive information.

My workaround for this was to allow the clients to establish a connection, but force them to send an authentication message before they can actually start emitting and receiving data. Upon connection, the server marks the socket as not authenticated and adds a listener to an ‘authenticate’ event:

var io = require('socket.io').listen(app);

io.on('connection', function(socket){
  socket.auth = false;
  socket.on('authenticate', function(data){
    //check the auth data sent by the client
    checkAuthToken(data.token, function(err, success){
      if (!err && success){
        console.log("Authenticated socket ", socket.id);
        socket.auth = true;
      }
    });
  });

  setTimeout(function(){
    //If the socket didn't authenticate, disconnect it
    if (!socket.auth) {
      console.log("Disconnecting socket ", socket.id);
      socket.disconnect('unauthorized');
    }
  }, 1000);
});

A timeout is added to disconnect the client if it doesn’t authenticate within a second. The client emits its auth data to the ‘authenticate’ event right after connecting:

var socket = io.connect('http://localhost');
socket.on('connect', function(){
  socket.emit('authenticate', {token: myAuthToken});
});

An extra step is required to prevent the client from receiving broadcast messages during the window in which it’s connected but not yet authenticated. Doing that required fiddling a bit with the namespaces code: upon connection, the socket is removed from the object that tracks the connections to each namespace:

var _ = require('underscore');
var io = require('socket.io').listen(app);

_.each(io.nsps, function(nsp){
  nsp.on('connect', function(socket){
    if (!socket.auth) {
      console.log("removing socket from", nsp.name);
      delete nsp.connected[socket.id];
    }
  });
});

Then, when the client does authenticate, we restore it as connected to those namespaces it belonged to:

socket.on('authenticate', function(data){
  //check the auth data sent by the client
  checkAuthToken(data.token, function(err, success){
    if (!err && success){
      console.log("Authenticated socket ", socket.id);
      socket.auth = true;

      _.each(io.nsps, function(nsp) {
        if(_.findWhere(nsp.sockets, {id: socket.id})) {
          console.log("restoring socket to", nsp.name);
          nsp.connected[socket.id] = socket;
        }
      });
    }
  });
});


The road to Invisible.js

This post will describe the development process of Invisible.js, the isomorphic JavaScript framework that Martín Paulucci and I have been working on for around a year, as our Software Engineering final project at the University of Buenos Aires.

Motivation and Philosophy

We came from different backgrounds: I had been programming Django for years, working on applications with increasingly complex UIs, moving from spaghetti jQuery to client MVCs such as backbone; Martin was already getting into Node.js development, also using AngularJS after trying other client frameworks. We both regarded the current state of web development, centered on REST servers and MV* clients, as one of unstable equilibrium. Some problems were evident to us: inherent duplication (same models, same validations) and continuous context switches between front end and back end code. The latter was partially solved by Node.js, which lets programmers use the same language on both sides. But we felt there wasn’t enough effort put into taking advantage of the potential of the platform to erase, or at least reduce, the gap between client and server in web applications. That was the direction we wanted to take with Invisible.js, acknowledging the limitations of being a couple of developers working in our free time.

With that goal in mind, we started out with some months of research on frameworks and technologies, most of which we weren’t familiar with or hadn’t used yet; then we built a couple of prototypes to test them out. After that, we had a better picture of how to lay out the development of Invisible.js. We weren’t out to build a full-stack framework like Derby or Meteor, trying to cover every aspect of web development; rather, we wanted to pull together the awesome modules available in Node.js (express, browserify, socket.io) in order to achieve client/server model reuse as gracefully as possible. In that sense, the nodejitsu blog was a great source of inspiration.

As a side note, the framework is named after Invisible, a progressive rock group led by Luis Alberto Spinetta in the ’70s.


Invisible.js stands on a more or less MEAN stack. It’s actually not tied at all to AngularJS, but we usually choose it as our front end framework, as we think it’s the best one and it places no constraints on the models it observes, which makes it a good fit for Invisible (as opposed to backbone, for example). As for the database, we appreciate the short distance between a JSON object and a Mongo document, plus MongoDB has a nice, flexible Node.js driver; but certainly an interesting branch of development for Invisible.js would be to add support for other data stores.
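
To illustrate that short distance, here’s a minimal sketch with the official mongodb driver (the connection URL, database and collection names are made up for the example):

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/example', function (err, db) {
  if (err) throw err;
  //a plain JavaScript object becomes a Mongo document almost verbatim
  var person = {name: "Luis", band: "Invisible"};
  db.collection('people').insert(person, function (err, result) {
    if (err) throw err;
    db.close();
  });
});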

The main components of Invisible.js are the models. The developer defines them with their methods and attributes and registers them in Invisible; this way they are exposed both in the client and the server, and augmented with methods to handle database access, real-time events and validations. The implementation of those methods changes depending on whether the call is made in the client or in the server.
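
The following sketch shows the idea; take the names (createModel, save) as illustrative of the concept rather than as the definitive Invisible.js API:

var Invisible = require('invisible');

function Person(name) {
  this.name = name;
}

Person.prototype.greet = function () {
  return "Hello, " + this.name;
};

//registering the model exposes it on both client and server,
//augmented with persistence, validation and real-time event methods
module.exports = Invisible.createModel('Person', Person);

//later, on either side:
var person = new Person('Luis');
person.save(function (err, saved) {
  //on the server this talks to the database directly;
  //on the client it goes through the generated REST client
});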


Under the hood, the Invisible.js server, which replaces the express one, exposes a dynamically generated browserify bundle that contains all the registered model definitions for the client. It also exposes the REST controllers that handle the CRUD methods those models call.
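
A rough sketch of that mechanism (simplified, and not the actual Invisible.js source; routes and file names are made up):

var express = require('express');
var browserify = require('browserify');

var app = express();

//serve a bundle with the registered model definitions to the client
app.get('/invisible.js', function (req, res) {
  var bundle = browserify();
  bundle.require('./models/person.js', {expose: 'invisible/person'});
  res.setHeader('Content-Type', 'application/javascript');
  bundle.bundle().pipe(res);
});

//REST controllers that the client-side model methods call
app.get('/invisible/person', function (req, res) { /* list models */ });
app.post('/invisible/person', function (req, res) { /* save a model */ });

app.listen(3000);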

Further development

We’re very pleased with the result of our work; most of the things we tried worked out, and we went further than we expected. Indeed, we feel that Invisible.js not only meets its initial goal of exposing reusable models, but also that it’s simple to use and gives a lot of non-trivial stuff out of the box, with a few lines of code.

Nevertheless, we are very aware that it’s still a toy; a fun one, which we’d like to keep working on, but a toy so far. As the Russians fairly point out, Invisible.js exposes the database to any JavaScript client (namely, the browser console) without any restriction whatsoever. Thus, the main thing we’ll be working on in the short term is providing means for authentication and authorization: establishing the identity of the clients and restricting the segments of data they can access, both in the REST API and in the events. We’ve already started studying the topic and made some implementation attempts.

Apart from security, we still have to see how well the framework scales, both in resources and in code base, as it’s used in medium and big applications. We hope other developers will find it interesting enough to give it a try and start collaborating, so that it turns into something more than a toy.

My take on RESTful authentication

The topic of authentication in REST architectures is a debatable one: there are several ways to do it, not all of them practical, not all of them RESTful, no standard, and a lot of room for confusion. Ever since I got into REST, this was the one thing that wasn’t evident to me, even after a decent amount of research. Recently I got the time to dive deeper into the problem, evaluate the alternatives thoroughly and draw my conclusions. While they may be inaccurate to some degree, I gather them here since I found no single place that presents the topic in a friendly fashion.

First let’s establish some ground rules for the analysis, to avoid a lot of the usual confusion.

  1. I want to authenticate my own clients: a Single-Page Web App or a Mobile App is the front end, and a REST API is the back end of my application. I don’t want to authenticate third-party consumers of my API, which is the focus of most traditional REST literature.
  2. I want to do pragmatic REST: I’m not interested in being too subtle about how much state is RESTful, and I won’t start by quoting Fielding’s dissertation on why REST should be stateless. I know statelessness induces some desirable properties in the architecture, so it’s good to reduce the application state to a minimum and try to keep it on the client side. But some compromises can be made to get other desirable properties, such as security, simplicity of implementation and better user experience (e.g. no ugly Basic Auth browser dialog). For example, having a resource that matches an access token to a user and states the expiry time (see the sketch after this list) sounds like a fair trade-off between convenience, security and RESTfulness.
  3. You can’t get away without SSL/TLS. If you want to provide a decent level of security (and why else would you worry about authentication?), you need to have a secure channel of communication, so you have to use HTTPS.
  4. Using a home-brew authentication method is probably a bad idea.
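
To make item 2 concrete, such a token resource could look something like this (URL and field names made up for illustration):

GET /api/tokens/d9f2a8c4

{
  "token": "d9f2a8c4",
  "user": "/api/users/42",
  "expires": "2014-02-01T00:00:00Z"
}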

That being said, let’s look at the authentication methods available.

HTTP Basic

Why would you use HTTP Basic authentication? Well, it’s simple, and that always counts. It does send the password on each request, which in most cases isn’t that big a deal since, as we established, we’re over HTTPS. Most methods will make you send the credentials anyway, and although some of them do it in just one request, I don’t see this as a deal-breaker for the most common authentication method out there. The biggest issue with Basic (also applicable to Digest) is that it displays an ugly browser login dialog, and you can’t avoid that just by including the Authorization header manually via JavaScript, because the dialog would still appear in the case of invalid credentials. To get around this you have to resort to inconvenient hacks, moving away from the standard. Thus we lose the simplicity we started with and get too close to the ill-fated place of rolling our own security method (without adding any desirable extra features), so we should probably look into one of the other options.
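
For reference, this is what setting the header by hand looks like in the browser (username and password are assumed to be in scope); remember the caveat above: on invalid credentials the native login dialog still pops up:

//build the standard Basic Auth header manually
var credentials = btoa(username + ':' + password);
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/orders');
xhr.setRequestHeader('Authorization', 'Basic ' + credentials);
xhr.send();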

HTTP Digest

Digest is intended as a more secure alternative to HTTP Basic, and could be considered if we were not using HTTPS, which we are. Without a secure connection, the method is vulnerable to man-in-the-middle attacks, you’d be sending credentials hashed with a weak algorithm, and you wouldn’t be able to use a strong hashing scheme to store the passwords. Moreover, it’s less simple than Basic and you still have to deal with the browser login box. So we rule out Digest.


Cookies

A classic resource on RESTful authentication is the homonymous Stack Overflow question. The most voted answer there mentions the problems of using Basic Auth and proposes a custom method based on storing a session id in a cookie. I don’t mind having a narrowly scoped session (for example, one with an expiration date), but if you’re rolling a custom method, I don’t see any advantage in using cookies over an Authorization header, either mimicking Basic Auth or with a different logic.
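
A custom scheme over the Authorization header can be as simple as the following sketch (the "MyApp" scheme name and the token value are, of course, made up):

//send the session/token id in a header instead of a cookie
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api/orders');
xhr.setRequestHeader('Authorization', 'MyApp token="d9f2a8c4"');
xhr.send();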


OpenID

OpenID provides federated authentication by letting the user log in to an application using their account from another provider, such as Google or Yahoo. It is, in theory, a more adequate approach than OAuth for delegating credentials management to a third-party provider, but it’s harder to implement, and I haven’t found a single source discussing how it may be used as a method for REST authentication.


OAuth

OAuth is probably the biggest source of confusion: there are two widely deployed versions, with a lot of debate behind them, and several workflows to handle different scenarios. What’s more, OAuth is an authorization standard that in some cases may be bent into doing authentication.


Pseudo-authentication

The most common use case of OAuth is a user authorizing a consumer application to access his data on a third-party application (e.g. Facebook) without giving away his credentials. This authorization scheme can be used as a form of delegated authentication: if the consumer is granted access to the user’s data, then the identity of the user is proven. While this works, it has some pitfalls: first, it assumes that having access to the user’s data equals being the user, which isn’t necessarily true (it’s not enforced by the protocol); but more importantly, it gives the consumer application access to data that shouldn’t be required for authentication (e.g. photos, contacts). That’s why this is referred to as pseudo-authentication. It’s worth noting that OpenID Connect is being developed as a complement to OAuth to solve this problem.

2-legged and Client Credentials

There are cases where you want to handle the credentials yourself, so you don’t need a third-party provider in the workflow. Some articles suggest using OAuth1 2-legged auth or the OAuth2 Client Credentials grant, but I’ve found that both of them solve the authorization part, providing an access token to include in the requests, while leaving authentication (how you establish the identity when requesting that token) to be handled by some other method. Thus, they’re not of much use for the problem at hand.

Resource Owner Password

The OAuth2 Resource Owner Password Credentials flow does solve authentication when you are in control of the credentials. It exchanges an initial request containing user and password for a token that can be used to authenticate (and authorize) subsequent requests. This is an alternative to Basic Auth, slightly better in the sense that you include the credentials only on the first call (so you don’t need to store them in the client). It’s also a standard, it has a simple implementation, and it avoids the browser interaction problem of the standard HTTP methods, making it the better choice in this scenario.
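
A minimal sketch of that initial exchange, here using the request module (endpoint and credentials are made up; see RFC 6749, section 4.3, for the details of the grant):

var request = require('request');

request.post('https://api.example.com/oauth/token', {
  form: {
    grant_type: 'password',
    username: 'john',
    password: 'secret'
  },
  json: true
}, function (err, res, body) {
  if (err) throw err;
  //the response carries the token to use in subsequent requests, e.g.
  //{"access_token": "d9f2a8c4", "token_type": "bearer", "expires_in": 3600}
  console.log(body.access_token);
});

Subsequent requests then send that token in an Authorization header (e.g. Authorization: Bearer d9f2a8c4) instead of the credentials.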

Secure Remote Password

Meteor.js recently introduced the Secure Remote Password protocol as a way to handle authentication in web applications. It’s hailed as the one method that guarantees security without HTTPS, but SRP itself only provides a way to log a user in without sending their credentials to the application server. Upon user registration, a verifier is stored instead of the password; for authentication, the user sends some parameters derived from the password that can be checked against that verifier. The credentials indeed are never sent and can’t be guessed from the parameters, but you still need a secure channel when registering the verifier, and an attacker that gets hold of the verifier can obtain the passwords with a dictionary attack. An interesting case of this is the attack on Blizzard’s servers in 2012.


Conclusions

Avoiding password management is generally a good idea for application developers. With that in mind, I’d start by looking at delegated and federated authentication for securing my RESTful APIs. OAuth is formally less appropriate but simpler and more widely used than OpenID, which some declare to be dead, so it looks like the safer bet. If you want to handle the credentials yourself, OAuth2’s Resource Owner Password flow is probably the best choice.

A Node.js primer

I finally took the time to start fiddling with Node.js and, as I expected from such a young and dynamic technology, I ran into some gotchas and configuration headaches. I’ll put down some notes here that might be helpful to other people getting started with Node.

First off, a good resource to get familiar with Node and its philosophy: The Node Beginner Book. It’s a long tutorial that guides you through a very simple web application, explaining some of Node’s basic concepts and JavaScript programming techniques along the way. This book was completely available online up until a couple of months ago, with a little disclaimer in the middle for those interested in buying it. Now only the first half is online, but the author points out in the comments that the full book is still available on GitHub.

As I moved along the tutorial I ran into the first problem: it uses the formidable module to handle file uploads, which is not compatible with the most recent versions of Node (the current one being 0.10.5). Looking into this, I found out a couple of interesting facts:

  • The versioning scheme of Node states that odd versions are unstable and even versions are stable.
  • Since 0.10 is a very recent version, it’s recommended for those starting out to stick with 0.8 (the previous stable version).

So I needed to install version 0.8.something.

At this point I started to feel uncomfortable messing around with different versions in a global Node installation, nor did I like having to sudo every time I needed to install a new module. There’s some misleading advice around the web suggesting a chown on your /usr/local folder as a way to avoid this, which didn’t look all that good. Coming from Python and virtualenv, I like to handle my installations locally. What follows is the simplest way I’ve found to do it.

There are several modules for handling multiple Node versions, the most popular being nvm and n. I found n difficult to configure for a local installation, so I switched to nvm instead. The code needed to install it and switch to 0.8 was something like:

# the install script URL is assumed from the nvm README of that time
wget -qO- https://raw.github.com/creationix/nvm/master/install.sh | sh
echo '[[ -s ~/.nvm/nvm.sh ]] && . ~/.nvm/nvm.sh' >> ~/.bashrc
nvm install 0.8
nvm alias default 0.8

A farewell to Django (and Python?)

For about three years, I’ve been programming (professionally) almost exclusively with Django. It let me work fast as a solo programmer, faster than most other programmers in my country (who are still doing mostly Java and PHP), and gave me the freedom to pick only the jobs I was interested in.

Things are starting to change in the web industry, though. And I’m not talking about some hyped technology that’s supposed to be the future of web programming, but about what the standard user expects of today’s web applications. Most programmers will know what I’m talking about: as client programming gets more and more complex, it’s getting harder (not to say impossible) to stay on the DRY side of things. This situation is very well explained in this article. It’s time to start looking for alternatives to LAMP and its variants.

Recently there was a series of posts on Hacker News discussing how cloudy Python’s future is (here, here and here). I don’t think Python is going away in the near future. I personally consider it the best general-purpose programming language and my weapon of choice in most cases; it’s probably what I’ll be comparing every other language against for the next couple of years. That being said, it’s clear that Python (using Django or some other framework) is not the best tool for some of the hottest jobs in the market, complex web applications being one of them. As a side note, it’s interesting that the claim of Python being “too slow” or “too CPU intensive”, which I always disregarded (and still do, for the most part) since it’s rarely the bottleneck for applications, has finally found a raison d’être in the battery consumption of mobile devices.

I’m not sure what the future of web programming will look like, but I sure as hell know how the present does: JavaScript.

I never liked JavaScript; I was one of those people who learned it because they had to: the browser speaks JavaScript and there’s no way around it. But saying that I learned it is an overstatement; I just started using it as needed, without much idea of what was going on, but with the certainty that whatever it was, it was weird. Indeed, as Douglas Crockford puts it in JavaScript: The Good Parts:

The amazing thing about JavaScript is that it is possible to get work done with it without knowing much about the language, or even knowing much about programming. It is a language with enormous expressive power.

Over the years it got less painful as I understood a bit more of the language (and gained experience with other languages), although I never really took the time to study its foundations and cleaner idioms. That’s what I’ve started to do now, and I must say that I see potential in JavaScript; if you stick to the good parts, that is. For one, it was JavaScript, not on its own merit as a programming language but for historical reasons, that managed to take features such as dynamic typing and first-class functions (which users of “better” languages had been advocating for years) into the mainstream.

Finally the day has come when Python is not the best tool for the job, at least not for my job (I plan to stay a web programmer for the time being), and it’s time to move on to a better one. The first step is to learn JavaScript. I mean, to learn proper JavaScript.