The road to Invisible.js

This post will describe the development process of Invisible.js, the isomorphic JavaScript framework that Martín Paulucci and I have been working on for around a year, as our Software Engineering final project at the University of Buenos Aires.

Motivation and Philosophy

We came from different backgrounds; I had been programming Django for years, working on applications with increasingly complex UIs, moving from spaghetti jQuery to client-side MVC frameworks such as Backbone; Martín was already getting into Node.js development, using AngularJS after trying other client frameworks. We both regarded the current state of web development, centered on REST servers and MV* clients, as one of unstable equilibrium. Some problems were evident to us: inherent duplication (same models, same validations) and constant context switches between front end and back end code. The latter was partially solved by Node.js, which lets programmers use the same language on both sides. But we felt not enough effort had been put into exploiting the potential of the platform to erase, or at least reduce, the gap between client and server in web applications. That was the direction we wanted to take with Invisible.js, acknowledging the limitations of being a couple of developers working in our free time.

With that goal in mind, we started out with some months of research on frameworks and technologies, most of which we weren't familiar with or hadn't used yet; then we built a couple of prototypes to test them out. After that, we had a better picture of how to lay out the development of Invisible.js. We weren't out to build a full-stack framework like Derby or Meteor, trying to cover every aspect of web development; rather, we wanted to pull together the awesome modules available in Node.js (express, browserify, socket.io) in order to achieve client/server model reuse as gracefully as possible. In that sense, the nodejitsu blog was a great source of inspiration.

As a side note, the framework is named after Invisible, a progressive rock group led by Luis Alberto Spinetta in the ’70s.

Architecture

Invisible.js stands on a more or less MEAN stack; it's actually not tied to AngularJS at all, but we usually choose it as our front end framework, since we think it's the best option and it places no constraints on the models it observes, which makes it a good fit for Invisible.js (as opposed to Backbone, for example). As for the database, we appreciate the short distance between a JSON object and a Mongo document, and MongoDB has a nice, flexible Node.js driver; that said, an interesting branch of development for Invisible.js would be adding support for other data stores.

The main components of Invisible.js are the models. The developer defines them with their methods and attributes and registers them with Invisible.js; this way they are exposed on both client and server, augmented with methods to handle database access, real-time events and validations. The implementation of those methods changes depending on where the call is made, as the figure shows.

[Figure: the same Invisible.js model methods dispatch to different implementations on the client and on the server]

What goes on under the hood is that the Invisible.js server, which replaces the express one, exposes a dynamically generated browserify bundle containing all the registered model definitions for the client. It also exposes the REST controllers that handle the CRUD methods those models call.
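As a rough sketch of that pattern (the registration call and helper names here are illustrative, not necessarily the actual Invisible.js API), a model is defined once as a plain constructor, and registering it lets the same definition be served to the browser through the generated bundle:

```javascript
// A plain model definition; the same code would be shipped to the
// browser via the dynamically generated browserify bundle.
function Person(name) {
  this.name = name;
}

Person.prototype.greet = function () {
  return 'Hi, I am ' + this.name;
};

// Hypothetical registration call: it would augment Person with CRUD,
// validation and real-time helpers whose implementations differ per
// side (REST/socket.io calls in the browser, MongoDB access on the
// server), as described above.
// Invisible.createModel('Person', Person);
```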

Further development

We’re very pleased with the result of our work; most of the things we tried worked out, and we went further than we expected. Indeed, we feel that Invisible.js not only achieves its initial goal of exposing reusable models, but is also simple to use and gives a lot of non-trivial functionality out of the box, with few lines of code.

Nevertheless, we are very aware that it’s still a toy; a fun one, which we’d like to keep working on, but a toy so far. As the Russians fairly point out, Invisible.js exposes the database to any JavaScript client (namely, the browser console), without any restriction whatsoever. Thus, the main thing we’ll be working on in the short term is providing means for authentication and authorization: establishing the identity of the clients and restricting the segments of data they can access, both in the REST API and the socket.io events. We’ve already started studying the topic and made some implementation attempts.

Apart from security, we still have to see how well the framework scales, both in resources and code base, as it’s used in medium and large applications. We hope other developers will find it interesting enough to give it a try and start collaborating, so it turns into something more than a toy.

My take on RESTful authentication

The topic of authentication in REST architectures is a debatable one; there are several ways to do it, not all of them practical, not all of them RESTful; no standard and a lot of room for confusion. Ever since I got into REST, this was the one thing that wasn’t evident to me, even after a decent amount of research. Recently I found the time to dive deeper into the problem, evaluated the alternatives thoroughly and drew my conclusions. While they may be inaccurate to some degree, I gather them here because I found no single place that presents the topic in a friendly fashion.

First let’s establish some ground rules for the analysis, to avoid a lot of the usual confusion.

  1. I want to authenticate my own clients: a Single-Page Web App or a Mobile App is the front end, and a REST API is the back end of my application. I don’t want to authenticate third-party consumers of my API, which is the focus of most traditional REST bibliography.
  2. I want to do pragmatic REST: I’m not interested in being too subtle about how much state is RESTful; I won’t start by quoting Fielding’s dissertation on why REST should be stateless. I know statelessness induces desirable properties in the architecture, so it’s good to reduce application state to a minimum and try to keep it on the client side. But some compromises can be made to gain other desirable properties, such as security, simplicity of implementation and better user experience (e.g. no ugly Basic Auth browser dialog). For example, having a resource that maps an access token to a user and states its expiry time sounds like a fair trade-off between convenience, security and RESTfulness.
  3. You can’t get away without SSL/TLS. If you want to provide a decent level of security (and why else would you worry about authentication?), you need to have a secure channel of communication, so you have to use HTTPS.
  4. Using a home-brew authentication method is probably a bad idea.
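To make point 2 concrete, the token-to-user resource could look something like this (field names and values are illustrative):

```javascript
// Illustrative access-token resource: it maps an opaque token to a
// user and states when it expires. This is the small piece of server
// state traded off for convenience and security.
const session = {
  token: 'a1b2c3d4e5f6',            // opaque, randomly generated
  userId: 42,
  expires: '2014-01-01T00:00:00Z'   // tokens should be short-lived
};

// Each request then carries the token, e.g.:
// Authorization: Bearer a1b2c3d4e5f6
function isExpired(session, now) {
  return new Date(session.expires) <= now;
}
```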

That being said, let’s look at the authentication methods available.

HTTP Basic

Why would you use HTTP Basic authentication? Well, it’s simple, and that always counts. It does send the password on each request, which in most cases isn’t that big of a deal since, as we established, we are over HTTPS. Most methods will make you send the credentials anyway, and although some of them do it in just one request, I don’t see this as a deal-breaker for the most common authentication method out there. The biggest issue with Basic (also applicable to Digest) is that it triggers an ugly browser login dialog, and you can’t avoid that just by including the Authorization header manually via JavaScript, because the dialog still appears when the credentials are invalid (the server responds with a 401 and a WWW-Authenticate header). To get around this you have to resort to inconvenient hacks, moving away from the standard. Thus we lose the simplicity we started with and get too close to the ill-fated territory of rolling our own security method (without gaining any desirable extra features), so we should probably look into one of the other options.
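For reference, including the credentials manually amounts to building the Authorization header yourself (sketched here for Node.js; in the browser you’d use `btoa` instead of `Buffer`):

```javascript
// Build the value of the Authorization header for HTTP Basic auth:
// base64("user:password") prefixed with the scheme name.
function basicAuthHeader(user, password) {
  const encoded = Buffer.from(user + ':' + password).toString('base64');
  return 'Basic ' + encoded;
}

// The canonical example from RFC 2617:
// basicAuthHeader('Aladdin', 'open sesame')
//   -> 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=='
```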

HTTP Digest

Digest is intended as a more secure alternative to HTTP Basic, and could be considered if we weren’t using HTTPS, which we are. Without a secure connection the method is still vulnerable to man-in-the-middle attacks, you’d be sending credentials hashed with a weak algorithm (MD5), and you couldn’t use a strong hashing scheme to store the passwords server-side. Moreover, it’s more complex than Basic and you still have to deal with the browser login box. So we rule out Digest.

Cookies

A classic resource on RESTful authentication is the homonymous stackoverflow question. The most voted answer there mentions the problems of Basic Auth and proposes a custom method based on storing a session id in a cookie. I don’t mind having a narrowly scoped session (for example with an expiration date), but if you’re rolling a custom method, I don’t see any advantage in using cookies over an Authorization header, either mimicking Basic Auth or with different logic.

OpenID

OpenID provides federated authentication by letting users log in to an application with their account from another provider, such as Google or Yahoo. It is in theory a more adequate approach than OAuth for delegating credentials management to a third-party provider, but it’s harder to implement and I haven’t found a single source discussing how it could be used as a method for REST authentication.

OAuth

OAuth is probably the biggest source of confusion: there are two widely deployed versions, with a lot of debate behind them, and several workflows to handle different scenarios. What’s more, OAuth is an authorization standard that in some cases may be bent into doing authentication.

Pseudo-authentication

The most common use case of OAuth is a user authorizing a consumer application to access his data on a third-party application (e.g. Facebook) without giving away his credentials. This authorization scheme can be used as a form of delegated authentication: if the consumer is granted access to the user’s data, then the identity of the user is proven. While this works, it has some pitfalls: first, it assumes that having access to the user’s data is equivalent to being the user, which isn’t necessarily true (the protocol doesn’t enforce it); more importantly, it gives the consumer application access to data that shouldn’t be required for authentication (e.g. photos, contacts). That’s why this is referred to as pseudo-authentication. It’s worth noting that OpenID Connect is being developed as a complement to OAuth to solve this problem.

2-legged and Client Credentials

There are cases where you want to handle the credentials yourself, so you don’t need a third-party provider in the workflow. Some articles suggest using OAuth1 2-legged auth or the OAuth2 Client Credentials grant, but I’ve found that both of them solve only the authorization part, providing an access token to include in requests, while leaving authentication (establishing the identity when requesting that token) to be handled by some other method. Thus, they’re not of much use for the problem at hand.

Resource Owner Password Credentials

The OAuth2 Resource Owner Password Credentials flow does solve authentication when you are in control of the credentials. It exchanges an initial request carrying user and password for a token that can be used to authenticate (and authorize) subsequent requests. This is an alternative to Basic Auth, slightly better in that you only include the credentials on the first call (so you don’t need to store them in the client). It’s also a standard, has a simple implementation and avoids the browser interaction problem of the standard HTTP methods, making it the best choice in this scenario.
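The flow boils down to a single form-encoded token request, per RFC 6749, section 4.3 (the token endpoint path is up to the server):

```javascript
// Body of the OAuth2 Resource Owner Password Credentials token
// request, POSTed once to the token endpoint (e.g. /oauth/token);
// the response carries the access token used on subsequent requests.
function tokenRequestBody(username, password) {
  const params = new URLSearchParams({
    grant_type: 'password',
    username: username,
    password: password
  });
  return params.toString();
}
```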

Secure Remote Password

Meteor.js recently introduced the Secure Remote Password (SRP) protocol as a way to handle authentication in web applications. It’s hailed as the one method that guarantees security without HTTPS, but SRP itself only provides a way to log users in without their password ever reaching the application server. Upon registration, a verifier is stored instead of the password; to authenticate, the user sends parameters derived from the password that can be checked against that verifier. The credentials indeed are never sent, and can’t be deduced from the parameters, but you still need a secure channel when registering the verifier. Moreover, an attacker who gets hold of the verifier can recover passwords with a dictionary attack; an interesting case of this is the attack on Blizzard’s servers in 2012.
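The verifier idea can be illustrated with toy numbers (real SRP uses a large safe prime and a salted hash of the password; nothing here is secure):

```javascript
// Square-and-multiply modular exponentiation with BigInt.
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Toy SRP-style registration: the server stores v = g^x mod N, where
// x stands in for H(salt, password). The password itself is never
// stored, but v is still subject to offline dictionary attacks if
// leaked, which is the weakness discussed above.
const N = 23n;   // toy modulus; real SRP uses a large safe prime
const g = 5n;    // generator
const x = 6n;    // would be derived from salt + password
const verifier = modPow(g, x, N);
```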

Conclusions

Avoiding password management is generally a good idea for application developers. With that in mind, I’d start by looking at delegated and federated authentication for securing my RESTful APIs. OAuth is formally less appropriate, but simpler and more widely used than OpenID, which some declare to be dead, so it looks like the safer bet. If you want to handle credentials yourself, OAuth2’s Resource Owner Password Credentials flow is probably the best choice.