Multithreaded Socket.IO Server

Dec 16
Turbocharge your Node.js applications with multiple cores

Recently I’ve been developing a new chat/messaging platform for my site that will be used throughout the website as well as in upcoming mobile applications. I first ran into issues with the Linux kernel killing my chat processes because of the number of open connections, but after increasing the ulimit that problem subsided. Those crashes got me looking for a way to run the chat on all 8 cores of the system, both to improve performance and to add redundancy in case one of the worker processes fails.

As you may know, Node.js runs your application on a single thread, on one core of the server, by default. However, it ships with a simple cluster module that lets you fork extra worker processes which can all share the same port. This is an incredibly useful feature, and I recommend that everyone migrate their applications to multicore: on an 8-core box you get eight workers serving requests instead of one, though the real-world speedup depends on your workload. To illustrate this, I ran some ab benchmarks against a local installation that simply served up the ID of the worker that handled the request. The test performed 10,000 requests, and the results were really encouraging.

On a single core, the test took 4.503 seconds. The multicore server, however, completed the test in 3.376 seconds, more than a full second quicker, and that’s just running on my MacBook Pro. Throw in some server-grade hardware and it really flies. I’ve posted the Apache Bench results below for those who want to see the full output.

Now multiply that 1.1-second saving over 1M requests and you’ll quickly see how the gains add up. Here’s how to do it:

What You Need

To follow along you’ll need:

Node.js (the cluster module ships with it)
Socket.IO (which comes packaged with its Redis store)
A running Redis server

Scaling to Multicore

Let’s start with the basic file:

Configure the Master Process

Now that we have the basic structure that will control both the master process and the worker processes, let’s set up our master process. To make sure Socket.IO can talk to the other Socket.IO instances on different workers, we have to configure it to use a Redis store. Thankfully, Socket.IO comes packaged with Redis support already, so we don’t have to install it separately; we simply tell Socket.IO that we want to use a separate Redis server. Let’s add this inside the if( cluster.isMaster )  if block:
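A sketch of that configuration, assuming Socket.IO 0.9.x (the require paths below come from that version, which bundles both the Redis store and a Redis client):

```javascript
// inside the if (cluster.isMaster) block: attach Socket.IO to an HTTP
// server and point it at a Redis-backed store, so that events reach
// sockets connected to any worker
var http       = require('http'),
    sio        = require('socket.io'),
    RedisStore = require('socket.io/lib/stores/redis'),
    redis      = require('socket.io/node_modules/redis');

var server = http.createServer();
var io = sio.listen(server);

io.set('store', new RedisStore({
  redisPub:    redis.createClient(),
  redisSub:    redis.createClient(),
  redisClient: redis.createClient()
}));
```

Note that we haven’t called server.listen() here; that line comes later, in the worker code.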

Create Workers

Time to create the additional processes that will run the server simultaneously. This can be done manually by calling  cluster.fork() as needed, but I prefer to spawn one worker per CPU core:
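A sketch of that loop, placed inside the cluster.isMaster block (os.cpus() reports the number of cores):

```javascript
// still inside the if (cluster.isMaster) block:
// spawn one worker per CPU core
var numCPUs = require('os').cpus().length;

for (var i = 0; i < numCPUs; i++) {
  cluster.fork();
}
```

On my 8-core machine this spawns eight workers, all sharing the same listening socket.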

Deal with Worker Crashes

Sometimes an uncaught exception will crash your server. Normally that would take down the whole application, but since we are running multiple worker processes, we can detect that a worker has died and restart one in its place:

Adding Worker Code

Now that we have our master process all set up to create workers, let’s add the code that will run in the worker processes. Working now from inside the if( cluster.isWorker )  if block, let’s configure the server and the Redis store like we did before; this time, we’ll add an extra line to start the server listening on its port:
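A sketch of the worker block, with the same assumed Socket.IO 0.9.x paths as before and 8080 as a stand-in port:

```javascript
// inside the if (cluster.isWorker) block: the same setup as the
// master, plus one extra line to actually start listening
var http       = require('http'),
    sio        = require('socket.io'),
    RedisStore = require('socket.io/lib/stores/redis'),
    redis      = require('socket.io/node_modules/redis');

var server = http.createServer();
var io = sio.listen(server);

io.set('store', new RedisStore({
  redisPub:    redis.createClient(),
  redisSub:    redis.createClient(),
  redisClient: redis.createClient()
}));

// the extra line: every worker can listen on the same port because
// the cluster module shares the underlying socket between them
server.listen(8080);
```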

Add Socket Bindings

Realistically we’re all good to go now, but our server doesn’t actually do anything yet. Let’s add some simple bindings:
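A sketch of some simple bindings; the event names here (connected, chat) are just examples, not from the original:

```javascript
// still inside the worker block: a couple of simple bindings
io.sockets.on('connection', function (socket) {
  // tell the client which worker picked up the connection
  socket.emit('connected', { worker: cluster.worker.id });

  // echo chat messages to everyone; the Redis store makes sure this
  // reaches sockets held by the other workers too
  socket.on('chat', function (data) {
    io.sockets.emit('chat', data);
  });
});
```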

That’s It!

If you start the server up, you’ll now see something like this:

Multithreaded Server

There you go! Your server is now running in full multicore mode! If you wanted, you could even get fancier and spawn more workers as processor load increases and then kill workers off as the load goes down 😉 But I’ll leave that to you 😛

As always guys if you have any questions or comments, post ’em below and I’ll do my best to answer them 🙂

Apache Benchmark Results

Filed Under: Dev