Node and Express as a web server
Over at Oddshot we’ve had some time to look at the best ways of providing an awesome experience to our viewers. We’ve looked at things like performance, maintainability, accessibility and open-source solutions, and we’ve ended up using NodeJS with Express for our front-end (and we’re very happy we did!).
It’s been several weeks with Node in production and we’re already ecstatic with its performance (and I’m pleasantly surprised). I’ve done a bit of Node development before, but I’ve never released a server-based project at this scale. At some points I doubted that Node+Express would handle the full brunt of our expected traffic (scaling was the temporary solution there), but those doubts were quickly put to rest once I’d nutted out a few important points about using Node in a production environment.
The first and most important point is the NODE_ENV environment variable, which you should absolutely set when deploying your production service. NODE_ENV should be set to “production” when running your services in front of clients, as it provides several perks (check out the performance hit of not setting NODE_ENV).
One of these perks is view template caching. When using a view templating engine (like Jade) in development, Express will not cache the rendered views and will instead render them upon each request. This makes development a ton easier, but is obviously needlessly expensive in terms of CPU usage when in production.
NODE_ENV=production will (as part of its effects) enable view caching, saving a lot of cycles.
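As a quick illustration, this dependency-free sketch mirrors the check Express performs internally (Express defaults to ‘development’ when the variable is unset):

```javascript
// A minimal sketch of how NODE_ENV drives behaviour: Express treats an
// unset NODE_ENV as 'development', and enables view caching (among other
// things) when it reads 'production'.
var env = process.env.NODE_ENV || 'development';
var viewCacheEnabled = (env === 'production');

console.log('env:', env, '| view cache:', viewCacheEnabled);
```

Start your app with NODE_ENV=production node source/app.js to flip it on.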
Serving static content from Express
Short answer: don’t.
A lot of guides to getting started with Express do the following in the routes specification:
// static files:
app.use(express.static(__dirname + '/public'));
This will (being placed after all of the other dynamic routes) serve up static files from within Node and send them to the user. There are many changes being made to Node and Express to increase performance, including sendFile functionality that mimics Nginx’s by loading static files into memory. But in my opinion, why dabble in uncertainty when we could just use Nginx to serve all of our static files?
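In that spirit, here’s a hedged sketch of an Nginx server block that serves the static assets itself and proxies everything else through to Node (the paths and port numbers are assumptions, not a recommendation of specific values):

```nginx
server {
    listen 80;

    # Serve anything under /static/ straight from disk
    location /static/ {
        alias /var/www/myapp/public/;   # assumed asset directory
        expires 30d;                    # let clients cache static assets
    }

    # Everything else goes to the Node app
    location / {
        proxy_pass http://127.0.0.1:8080;   # Node's internal port (assumed)
        proxy_set_header Host $host;
    }
}
```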
Handling SSL
If you’re considering supporting SSL connections on your website (and you seriously should), there’s little reason to have it handled by your NodeJS application.
We’ve already spoken about using Nginx to manage your static assets by sitting in front of Node, and it’s little extra work to have Nginx also handle your SSL connections. This way, you get SSL to your doorstep (handled by an experienced player, Nginx), and you reduce complexity in your Node app.
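A hedged sketch of the Nginx side of this, terminating SSL at the front and speaking plain HTTP to Node internally (the domain and certificate paths are assumptions):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                              # assumed domain

    ssl_certificate     /etc/nginx/ssl/example.com.crt;   # assumed cert paths
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;       # plain HTTP to Node internally
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;   # let the app know it was SSL
    }
}
```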
Deployment
You’re using NodeJS, and you have the ability to be extremely expressive in your setup while keeping complexity to a minimum. You can continue this trend with your deployment too, by using some existing tools to move your project into staging and production.
Flightplan is an awesome NodeJS tool for managing your deployments. It supports different configurations and environments:
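A flightplan.js along these lines defines your targets and the local/remote steps of a deployment (hostnames, usernames and the exact commands here are placeholders, not a prescribed setup):

```javascript
// flightplan.js - a hedged sketch; hosts, users and commands are assumptions.
var plan = require('flightplan');

// Targets: the environments you can deploy to.
plan.target('staging', {
  host: 'staging.example.com',
  username: 'deploy',
  agent: process.env.SSH_AUTH_SOCK
});

plan.target('production', {
  host: 'prod.example.com',
  username: 'deploy',
  agent: process.env.SSH_AUTH_SOCK
});

// Steps run on your machine...
plan.local(function (local) {
  local.log('Running build');
  local.exec('npm run build');
});

// ...then steps run on the target hosts.
plan.remote(function (remote) {
  remote.log('Restarting app');
  remote.exec('pm2 restart all');
});
```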
You can run deployments in a very simple manner:
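With targets defined in your flightplan.js, a deploy is a single command (the target names are assumptions, matching whatever you named in your own plan):

```shell
$ fly staging       # run the default plan against the staging target
$ fly production    # ...or against production
```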
There are certainly several other options for deployment (also written for Node), but Flightplan is by far my favourite.
Auto-starting and monitoring your service is also important in production environments. PM2 is a very popular Node app management tool with an extensive array of features. You can write a short JSON configuration for your application (separate files for different environments) and feed it into PM2, which will then keep your application running forever if you choose.
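A minimal sketch of such a process file (the app name is an assumption; the script path matches the app entry point used elsewhere in this post):

```json
{
  "apps": [{
    "name": "oddshot-frontend",
    "script": "source/app.js",
    "env": {
      "NODE_ENV": "production"
    }
  }]
}
```

Feed it to PM2 with pm2 start production.json (filename assumed), and pm2 startup can generate the init script that resurrects everything on boot.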
PM2 also has integration with a beautiful dashboard and monitoring platform called Keymetrics. Keymetrics has a price tag beyond a single instance and app, which is a bit hefty for my taste, but it’s an amazing solution for companies running performance-critical Node applications in the wild. It’s well worth a look, and PM2 is definitely a good bet even if you choose not to use Keymetrics.
Clustering and load balancing
Node has some great tools for clustering, and if you’re planning to scale you can quite easily drop a load balancer in front of your servers. Keeping your web service somewhat ‘dumb’ will allow you to easily scale horizontally with all of your servers behind a load balancer. You could even have the load balancer listen publicly on port 80 while your Node servers listen on whatever port you choose internally (eg. ext. 80 -> int. 8080).
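Keeping the service ‘dumb’ can be as simple as reading the internal port from the environment rather than hard-coding it (INTERNAL_PORT is an assumed variable name):

```javascript
// The load balancer exposes port 80 publicly; each Node instance just reads
// its internal port from the environment. Pass this value to app.listen(port)
// in the real app. (INTERNAL_PORT is an assumed variable name.)
var port = parseInt(process.env.INTERNAL_PORT || '8080', 10);
console.log('internal port:', port);
```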
Some may choose to go as far as building a cluster of processes, and even using that to their advantage with Express. Depending on your project, clustering may certainly help you manage some background work. PM2 loves working with your clusters, too.
Logging
Where would we be without a reliable logging system to capture the behaviour of our systems? Node has some powerful logging systems ready to go that you should definitely consider using. Rolling your own is always an easy choice, but it doesn’t scale well and is really just reinventing the wheel.
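Bunyan is a good example: every log record it emits is a single line of JSON. Here’s a dependency-free sketch of that format (the field names mirror Bunyan’s; the logger name is an assumption). With Bunyan itself you’d write var log = bunyan.createLogger({ name: 'myapp' }); log.info('server started');

```javascript
// A sketch of structured JSON logging in the style of Bunyan, using only
// core Node so it runs anywhere. Level 30 is Bunyan's numeric 'info' level.
function logInfo(msg, fields) {
  var record = Object.assign({
    name: 'myapp',                    // logger name (assumed)
    level: 30,                        // info
    msg: msg,
    time: new Date().toISOString(),
    pid: process.pid
  }, fields);
  var line = JSON.stringify(record);
  console.log(line);                  // one JSON record per line
  return line;
}

logInfo('server started', { port: 8080 });
```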
We can watch these logs during development and store and process them in production. Obviously raw JSON isn’t so easy on the eyes for larger logs, and Bunyan supports piping your output through its CLI for a more readable view:
$ sudo node source/app.js | bunyan
You can dump the logs and rotate them over a time period, pushing them to a storage medium for processing later (like Amazon’s S3, for instance).
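Bunyan can handle the rotation itself via its rotating-file stream; a hedged sketch (the log path, period and file count are assumptions):

```javascript
var bunyan = require('bunyan');

var log = bunyan.createLogger({
  name: 'myapp',
  streams: [{
    type: 'rotating-file',
    path: '/var/log/myapp.log',   // assumed log path
    period: '1d',                 // rotate daily
    count: 7                      // keep a week of files
  }]
});
```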
As a whole
Building your initial stack and getting all of the pieces working will definitely be time-consuming at times, but it’s well worth it when you get your project live for the first time. The easier the process is for you to develop, deploy and test, the better the experience will be for your users.