Multithreaded Socket.IO Server

Dec 16
Turbocharge your NodeJS applications with multicore

Recently, I’ve been developing a new chat/messaging platform for my site that will be used throughout the website as well as in upcoming mobile applications. I first ran into issues with the Linux kernel killing my chat processes because of the number of open connections, but after increasing the ulimit that issue subsided. Because of those crashes, I began looking for a way to run the chat on all 8 cores of the system, both to improve performance and to add redundancy in case one of the worker processes fails.

As you may know, by default NodeJS runs on a single thread on one core of the server it’s running on. However, it ships with a simple built-in module, cluster, that lets you fork extra worker processes that can all share the same port/server running on Node. This is an incredibly useful feature and I recommend that everyone migrate their applications to multicore, since for CPU-bound work on an 8-core box you can approach eight times the throughput. To illustrate this, I ran some ab benchmarks against my local installation, which simply served up the ID of the worker that handled each request. The test performed 10,000 requests and the results were really encouraging.

On the single core machine, the test took 4.503s. The multicore server however completed the test in 3.376 seconds, more than an entire second quicker; and that’s just running on my MacBook Pro. Throw in some server-grade hardware and wow does it fly. I’ve posted the Apache Bench results below for those who want to see the full output.

Now multiply that 1.1 second savings over 1M requests and you’ll quickly see how the gains add up! Here’s how to do it:

What you Need

For this setup you’ll need NodeJS, Socket.IO, and a Redis server (used by the Socket.IO store).

Scaling to Multicore

Let’s start with the basic file:

Configure the Master Process

Now that we have the basic structure that will control both the master process and the worker processes, let’s set up our master process. To make sure Socket.IO can talk to other socket instances on different workers, we have to configure them to use a Redis store. Thankfully, Socket.IO comes packaged with Redis already, so we don’t have to install it separately; we simply tell Socket.IO that we want to use a separate Redis server. Let’s add this inside the if( cluster.isMaster )  if block:
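The snippet itself was lost, but in the Socket.IO 0.9 era this post dates from, wiring up the bundled RedisStore looked roughly like this (wrapped in a helper so the sketch stands alone; a local Redis server is assumed):

```javascript
// Attach Socket.IO 0.9's bundled RedisStore to an io instance so that
// events published on one worker reach sockets connected to the others.
function useRedisStore(io) {
  var RedisStore = require('socket.io/lib/stores/redis');
  var redis      = RedisStore.redis; // 0.9 bundles the redis client too

  io.set('store', new RedisStore({
    redisPub:    redis.createClient(), // publishes events to other workers
    redisSub:    redis.createClient(), // subscribes to other workers' events
    redisClient: redis.createClient()  // general-purpose Redis commands
  }));
}
```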

Create Workers

Time to create the additional processes that will run the server simultaneously. This can be done manually by calling  cluster.fork() as needed, but I prefer to spawn one process per CPU core:

Deal with Worker Crashes

Sometimes an exception will occur that is uncaught by your application, causing your server to crash. Normally the application simply terminates when this happens. However, since we are running multiple worker processes, we can detect that a worker has died and start a new one in its place:

Adding Worker Code

Now that we have our master process all set up to create workers, let’s go ahead and add some code to run on the worker processes. Working now inside the if( cluster.isWorker )  if block, let’s configure the Socket.IO server and the Redis store like we did before; this time, we’re going to add an extra line to start the server listening on its port:
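A sketch of the worker side (wrapped in a helper so it stands alone; the port number is an assumption, and the Socket.IO calls are 0.9-style):

```javascript
// Inside if (cluster.isWorker): each worker runs its own HTTP + Socket.IO
// server; the cluster module transparently shares the listening port.
function startWorker() {
  var http = require('http');
  var sio  = require('socket.io');

  var server = http.createServer();
  var io     = sio.listen(server);

  // ...configure the RedisStore here, exactly as in the master section...

  server.listen(8080); // every worker listens on the same shared port
  return io;
}
```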

Add Socket Bindings

Realistically we’re all good to go now, but our server doesn’t actually do anything yet. Let’s add some simple bindings:
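For example (the event names and payloads here are my own, not from the original post):

```javascript
// Simple bindings: tell each client which worker it landed on, and
// rebroadcast incoming chat messages to every connected socket.
function bindSockets(io) {
  var cluster = require('cluster');

  io.sockets.on('connection', function (socket) {
    socket.emit('connected', { worker: cluster.worker.id });

    socket.on('message', function (data) {
      // With the RedisStore in place, this reaches sockets on all workers.
      io.sockets.emit('message', data);
    });
  });
}
```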

That’s It!

If you start the server up, you’ll now see something like this:

Multithreaded Server

There you go! Your server is now running in full multicore mode! If you wanted, you could even get fancier and spawn more workers as processor load increases and then kill workers off as the load goes down 😉 But I’ll leave that to you 😛

As always guys if you have any questions or comments, post ’em below and I’ll do my best to answer them 🙂

Apache Benchmark Results

Filed Under: Dev

Neat NodeJS Plugin

May 4

I found a neat plugin today for Node.js development called Nodemon. It monitors the files you’re working with and restarts the server whenever a change is made. This will be particularly useful for those who are sick of hitting CTRL + C, UP ARROW, ENTER every time they save a file. On top of that, if a change causes the script to error out and crash, Nodemon will display the error and wait for you to fix it before restarting. This prevents a crash/restart loop when errors occur!

To install Nodemon, just use the built-in Node Package Manager, npm: npm install -g nodemon

Once it’s installed, you can run your app as you normally would but substitute the node command with nodemon, e.g. nodemon some_application.js. As an added bonus, cd to your work folder first so it can monitor all folders and files of your application.

I’ve found this to be exceptionally useful during development; it saves a lot of time!

Filed Under: Dev

NodeJS + Random Freezing

Mar 30

So I recently found this out the hard way… if you are running a service on a non-standard port and have a firewall (which you most likely do, lol), then you may experience random freezing. To my knowledge, this is caused by the firewall blocking the UDP port the service uses.

I was experiencing this issue and I unblocked that port in the firewall yesterday and I have not seen my app freeze yet! 😀


Points for super short blog entry lol but just in case anyone can benefit from this 😛

Filed Under: Dev

NodeJS – Execute a Shell Script / Command

Mar 10

So, on my continued quest to build an awesome shoutbox in NodeJS + Socket.IO, I have discovered how to run shell scripts from NodeJS. This was a request from one of the staff members on DarkUmbra.Net, who wanted to be able to restart the server if something messes up. What resulted was a shell script that restarts NodeJS and whatever server you are running.

This is the NodeJS code to do this:

You can also choose to output something back to the client if need be.

Filed Under: Dev

Awesome Linux Process Management

Mar 9

I recently discovered a neat trick with linux processes that allows you to a) run them in the background and/or b) keep them running after you disconnect from SSH or otherwise lose your connection.

Running processes in the background is quite common, so if you’re familiar with linux you’ll know that appending an & to the end of a command will throw it into the background, out of view, and let you keep working while it runs.
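For example (sleep stands in here for a long-running command like node server.js, which is hypothetical):

```shell
# in practice: node server.js &
sleep 2 &                       # & pushes the command into the background
echo "running in background as PID $!"
```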

The issue with this is that when you disconnect from SSH, that script will cease to run. The way to prevent that is to prefix the command with nohup, which makes the process immune to the hangup signal so it continues to run after you close your connection.
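For example (again with sleep standing in for the real server command):

```shell
# in practice: nohup node server.js
nohup sleep 1     # ignores SIGHUP; output goes to nohup.out when run from a terminal
```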

You can also combine the two to run it in the background AND keep running if you drop connection.
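Combined, that looks like this (sleep again stands in for the server command):

```shell
# in practice: nohup node server.js &
nohup sleep 2 &                 # survives disconnects AND runs in the background
echo "started as PID $!"
```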

Now the neat thing about nohup is that it stores any output from the command in nohup.out in the current directory (falling back to ~/nohup.out), which can be viewed using any method (tail, nano, etc). But what if you want to view the output in realtime, as if you were watching the command run? You can use the tail -f command to do so. This loads the last few lines of the file and then prints any new lines appended to it while the command is running.
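For example (a demo log file stands in for nohup.out; tail -f itself is shown commented out because it keeps following until you hit Ctrl-C):

```shell
printf 'starting up\nlistening for connections\n' > demo.log
tail -n 5 demo.log              # print the last few lines once
# tail -f demo.log              # keep following as new lines are appended
```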

So when I run my NodeJS server, for example, I run it as follows so it’s in the background, keeps running if I drop connection, AND the log file is loaded into tail -f so I can view the output, including any errors, as normal. This prevents the server from stopping if I lose connection (like when I’m working from the bus :P) 😀
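The exact command was lost from the post; putting the pieces above together, it would look something like this (server.js and the log path are assumptions, and sleep stands in so the demo lines actually run):

```shell
# in practice:
#   nohup node server.js > server.log 2>&1 &
#   tail -f server.log            # Ctrl-C stops tail, not the server
# demo with a stand-in command:
nohup sleep 2 > server.log 2>&1 &
echo "server stand-in running as PID $!"
```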

Yay for awesome process management in unix/linux!

Filed Under: Dev