Cluster computing with Node.js

A single instance of Node runs in a single thread. To take advantage of multi-core systems, we may want to launch a cluster of Node processes to handle the load!

That is to say, if a system has 8 cores, a single instance of Node would use only one of them. To make the most of the machine, the cluster module lets us fork one worker per core, and, more interestingly, all of the workers can share the same port!

This module is still at Stability: 1 - Experimental. Check out the code below to enjoy the awesomeness of the cluster!

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork one worker per CPU core.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  // In case a worker dies!
  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
  // As workers come up.
  cluster.on('listening', function(worker, address) {
    console.log("A worker with #" + worker.id + " is now connected to " +
      address.address + ":" + address.port);
  });
  // When the master gets a msg from a worker, increment the request count.
  var reqCount = 0;
  Object.keys(cluster.workers).forEach(function(id) {
    cluster.workers[id].on('message', function(msg) {
      if (msg.info && msg.info === 'ReqServMaster') {
        reqCount += 1;
      }
    });
  });
  // Report the number of requests served every second.
  setInterval(function() {
    console.log("Number of requests served = ", reqCount);
  }, 1000);

} else {
  // Workers can share the same port!
  http.createServer(function(req, res) {
    res.writeHead(200);
    res.end("Hello from Cluster!");
    // Notify the master about the request.
    process.send({ info: 'ReqServMaster' });
  }).listen(8000);
}
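
The 'exit' handler above only logs that a worker died. In production you would normally fork a replacement so the pool stays at one worker per core. A minimal sketch of that idea, reusing the reqCount variable from the master branch above, could look like this:

cluster.on('exit', function(worker, code, signal) {
  console.log('worker ' + worker.process.pid + ' died, forking a replacement');
  var replacement = cluster.fork();
  // The replacement also needs a 'message' listener, otherwise its
  // requests would not be counted by the master.
  replacement.on('message', function(msg) {
    if (msg.info && msg.info === 'ReqServMaster') {
      reqCount += 1;
    }
  });
});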

On a quad-core machine, the output looks like the one below. Note that the counter climbs by two for each hit from a browser, most likely because the browser requests both the page and /favicon.ico, and each of those requests is picked up by a worker.

Number of requests served =  0
A worker with #2 is now connected to 0.0.0.0:8000
A worker with #4 is now connected to 0.0.0.0:8000
A worker with #1 is now connected to 0.0.0.0:8000
A worker with #3 is now connected to 0.0.0.0:8000
Number of requests served =  0
...
Number of requests served =  2
..
Number of requests served =  4
...
Number of requests served =  6
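
If you want to see which worker picks up each hit, a small variation of the worker branch tags the response with the worker's process ID (the response text here is just for illustration):

http.createServer(function(req, res) {
  res.writeHead(200);
  // Each response now reveals the PID of the worker that served it.
  res.end('Hello from worker ' + process.pid + '!');
  process.send({ info: 'ReqServMaster' });
}).listen(8000);

Hitting the port a few times (with a browser or curl http://127.0.0.1:8000/) shows different PIDs answering, confirming that the connections really are spread across the workers sharing port 8000.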

One can also try ab -n 1000 -c 5 http://127.0.0.1:8000/ (ApacheBench) to benchmark this!

Benchmarking 127.0.0.1 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
 
 
Server Software:        
Server Hostname:        127.0.0.1
Server Port:            8000
 
Document Path:          /
Document Length:        19 bytes
 
Concurrency Level:      5
Time taken for tests:   0.171 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      94000 bytes
HTML transferred:       19000 bytes
Requests per second:    5841.57 [#/sec] (mean)
Time per request:       0.856 [ms] (mean)
Time per request:       0.171 [ms] (mean, across all concurrent requests)
Transfer rate:          536.24 [Kbytes/sec] received
 
 
 
Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      1
  80%      1
  90%      1
  95%      2
  98%      4
  99%      5
 100%      8 (longest request)
