Hemanth.HM

A Computer Polyglot, CLI + WEB ♥'r.

Getting Started With Koajs


Koa is a next-generation web framework for Node.js, brought to us by the wonderful team behind Express (a special bow to TJ).

Let's begin with a simple hello-world application:

First up Express:

var express = require('express');

var app = express();

app.use(function(req, res, next){
  res.send('Hello World');
});

app.listen(3000);

Here comes koa:

var koa = require('koa');
var app = koa();

app.use(function *(){
  this.body = 'Hello World';
});

app.listen(3000);

Both of the above snippets result in an HTTP server listening on port 3000 that renders 'Hello World' for the index route.

If you are wondering what on earth function *(){} is, those are harmony generators, and as of now we can use this bit of ES6 on Node with the node --harmony flag.

Let's zoom in a bit:

// Express
app.use(function(req, res, next){
  res.send('Hello World');
});

//Koa
app.use(function *(){
  this.body = 'Hello World';
});

Seeing this for the first time, you might wonder where the hell the req and res params are!

The answer is the context. A Koa Context encapsulates node's request and response objects into a single object.

A Koa application is an object containing an array of middleware generator functions, which are composed and executed in a stack-like manner upon request.

So, for every request a new context is created:

app.use(function *(){
  this; // is the Context
  this.request; // is a koa Request.
  this.response; // is a koa Response.
  this.req; // nodejs request.
  this.res; // nodejs response.
});
// keys of this:
[ 'request',
  'response',
  'app',
  'req',
  'res',
  'onerror',
  'originalUrl',
  'cookies',
  'accept' ]

If we were to rewrite the Koa hello-world example with a route, it would look like:

var koa = require('koa');
var app = koa();
var route = require('koa-route');

app.use(route.get('/', function *() {
  this.body = 'Hello World';
}));

So, there is a set of route middleware with their respective handler definitions, and each has its own context.

In express it would be like:

var express = require('express');
var app = express();

app.get('/', function(req, res){
  res.send('hello world');
});

Enough of hello-world, let's dig into the real power of generators in Koa!

With the help of ES6 generators it's uber easy and intuitive; have a look at the below example, straight from the Koa docs!

[Image: cascading middleware example from the Koa docs]
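
In text form, that example is roughly along these lines (a sketch of an x-response-time middleware and a logger cascading around the response middleware):

// x-response-time
app.use(function *(next){
  var start = new Date;
  yield next;                        // pass control downstream
  var ms = new Date - start;         // resumes here once downstream is done
  this.set('X-Response-Time', ms + 'ms');
});

// logger
app.use(function *(next){
  var start = new Date;
  yield next;
  var ms = new Date - start;
  console.log('%s %s - %s', this.method, this.url, ms);
});

// response
app.use(function *(){
  this.body = 'Hello World';
});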

In the code above, each yield next passes control "downstream", and once there are no middleware left to run the stack unwinds and control flows back "upstream".

It is as if each generator suspends, passes the flow downstream, and then resumes where it left off.

The flow is like mw1 -> mw2 -> mw3 and then backwards mw3 -> mw2 -> mw1.

If you cascade multiple middleware in Express, which uses Connect's implementation that simply passes control through a series of functions until one returns, things can get really messy with callback hell!

But with Koa, it's as easy as:

function *all(next) {
  yield mw1.call(this, mw2.call(this, mw3.call(this, next)));
}

app.use(all);

Note that the pattern is just chaining the required .call(this, next).

This can be simplified further with koa-compose:

var co = require('co');
var compose = require('koa-compose');

// stack is an array of middleware generator functions.
co(function *(){
  yield compose(stack);
})(function(err){
  if (err) throw err;
  done();
});

There are many more things you can do, like streaming a file, an object, or a view to this.body (see the sketch right after the list below). Summarizing:

  • No more callback hell, thanks to generator-based control flow.
  • Koa is a minimalistic framework which does not include routing, middleware or extra utils.
  • Relies less on middleware, thanks to the wonderful context!
  • Better stream handling, and it abstracts node's req and res.
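
For instance, streaming a file is as simple as assigning a readable stream to this.body (a minimal sketch; error handling omitted):

var fs = require('fs');

app.use(function *(){
  this.type = 'text/plain';
  this.body = fs.createReadStream('README.md'); // Koa pipes the stream to the response for you
});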

Hope this helped you get started with Koa! Don't forget to read their wonderful wiki.

Launch Browser From Node Server


About 400 days ago I blogged about live reload with grunt, which explained how to auto-refresh the browser on any change to the files being watched. Here I would like to talk about a simple way to launch the browser from a Node web server.

Why would you need that?

  • Because we are lazy!
  • For quick demos.
  • Quick Testing.
  • Less of manual work.

Let's see the code!:

var http = require('http'),
    open = require('open'),
    server;

server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
});

server.listen(1337, '127.0.0.1', function(){
  console.log('Launching the browser!');
  open('http://127.0.0.1:1337');
});

The only dependency is open, which opens a file or URI with the user's preferred application (browser, editor, etc.), cross-platform [npm install open]. In the Node API server.listen(port, [host], [backlog], [callback]), the last parameter, callback, is added as a listener for the 'listening' event, which is useful for knowing that the server is up and that it's the right time to launch the browser!

Python Datasets


Python dataset: databases for lazy people! I bumped into this project, which provides a simple abstraction layer that removes most direct SQL, plus an export mechanism to export data in JSON or CSV formats.

Installing dataset via pip is straightforward: pip install dataset

Why dataset? Say you have opted to use SQLite; the below is how normal operations would look:

import sqlite3

# open connection and get a cursor
conn = sqlite3.connect(':memory:')
c = conn.cursor()

# create schema for a new table
c.execute('CREATE TABLE IF NOT EXISTS sometable (name, age INTEGER)')
conn.commit()

# insert a new row
c.execute('INSERT INTO sometable values (?, ?) ', ('John Doe', 37))
conn.commit()

# extend schema during runtime
c.execute('ALTER TABLE sometable ADD COLUMN gender TEXT')
conn.commit()

# add another row
c.execute('INSERT INTO sometable values (?, ?, ?) ', ('Jane Doe', 34, 'female'))
conn.commit()

# get a single row
c.execute('SELECT name, age FROM sometable WHERE name = ?', ('John Doe', ))
row = list(c)[0]
john = dict(name=row[0], age=row[1])

Now the same with dataset can be reduced to:

import dataset
db = dataset.connect('sqlite:///:memory:')
table = db['sometable']
table.insert(dict(name='John Doe', age=37))
table.insert(dict(name='Jane Doe', age=34, gender='female'))
john = table.find_one(name='John Doe')

Isn't that sweet?

P.S: The ruby world also has something similar, do paw at Sequel::Dataset

Programmatically Accessing Network Interfaces


This is more of a note to self about programmatically accessing network interfaces.

Well, nothing can really beat ip addr show, but here we go with a few programming languages I like to paw at.

In Ruby 2.1:

require 'socket'

Socket.getifaddrs.each do |i|
  puts "#{i.name}: #{i.addr.ip_address}" if i.addr.ip?
end

In python:

Here I first played with sockets, but many Python veterans suggest using something like netifaces, so I did a pip install netifaces and then...

import netifaces

for face in netifaces.interfaces():
    print netifaces.ifaddresses(face)

In perl:

Similarly, I had to use the Net::Interface package.

use Net::Interface;

my %addresses = map {
      ($_ => [
          map { Net::Interface::inet_ntoa($_) }
              $_->address,
      ]);
} Net::Interface->interfaces;

print %addresses;

In nodejs:

Last but not least, my current fav!

var os = require('os');
console.log(os.networkInterfaces());

Well, there might always be better ways to do this, do share your way.

Publish Packages to NPM With Yeoman


Last year I wrote about the Hitchhiker's guide to npm and node-grunt, but a long-awaited post was to write something about Yeoman, a project I have been closely associated with since its inception.

This post demonstrates the sheer power of Yeoman in creating and maintaining node packages.

The most interesting part of Yeoman is its community generators; out of the many, I have selected generator-node, authored by Addy Osmani.

This generator helps you create a node.js module, including nodeunit unit tests, and is based on grunt-init-node, authored by the magnificent GruntJS team.

Let's get started: First up, if you still haven't installed Yeoman, please go through the Getting Started wiki!

Assuming you have got Yeoman up and running, let's paw at node generator!

npm install -g generator-node # Install the generator.

I shall call the sample node package exmp.

mkdir exmp && cd $_
yo node

Now the magic happens ;) You just need to fill in the required details and the project's scaffold is generated.

The name of your project shouldn't contain "node" or "js" and
should be a unique ID not already in use at search.npmjs.org.
[?] Module Name: exmp
[?] Description: The best module ever.
[?] Homepage:
[?] License: MIT
[?] GitHub username:
[?] Author's Name:
[?] Author's Email:
[?] Author's Homepage:
   create lib/exmp.js
   create test/exmp_test.js
   create .jshintrc
   create .gitignore
   create .travis.yml
   create README.md
   create Gruntfile.js
   create package.json

Digging deeper, we can see the below directory structure, along with node_modules:

.
├── Gruntfile.js
├── README.md
├── lib
│   └── exmp.js
├── package.json
└── test
    └── exmp_test.js

The package.json would be pre-populated with all the information that was provided in step 1.

The Gruntfile.js default task is ['jshint', 'nodeunit'].
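
For reference, the task wiring in the generated Gruntfile.js is roughly along these lines (a trimmed sketch; the real generated file also carries the full jshint and nodeunit configuration):

module.exports = function (grunt) {
  grunt.initConfig({
    nodeunit: {
      files: ['test/**/*_test.js']
    },
    jshint: {
      options: { jshintrc: '.jshintrc' },
      all: ['Gruntfile.js', 'lib/**/*.js', 'test/**/*.js']
    }
  });

  // These tasks come from grunt-contrib-jshint and grunt-contrib-nodeunit.
  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-nodeunit');

  grunt.registerTask('default', ['jshint', 'nodeunit']);
};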

nodeunit is easy unit testing for node.js and the browser, based on the assert module.

The exmp_test.js would look like :

'use strict';

var exmp = require('../lib/exmp.js');

/*
  ======== A Handy Little Nodeunit Reference ========
  https://github.com/caolan/nodeunit

  Test methods:
    test.expect(numAssertions)
    test.done()
  Test assertions:
    test.ok(value, [message])
    test.equal(actual, expected, [message])
    test.notEqual(actual, expected, [message])
    test.deepEqual(actual, expected, [message])
    test.notDeepEqual(actual, expected, [message])
    test.strictEqual(actual, expected, [message])
    test.notStrictEqual(actual, expected, [message])
    test.throws(block, [error], [message])
    test.doesNotThrow(block, [error], [message])
    test.ifError(value)
*/

exports['exmp'] = {
  setUp: function(done) {
    // setup here
    done();
  },
  'no args': function(test) {
    test.expect(1);
    // tests here
    test.equal(exmp.awesome(), 'awesome', 'should be awesome.');
    test.done();
  }
};

And the lib/exmp.js would be like :

'use strict';

exports.awesome = function() {
  return 'awesome';
};

That's it! Once you cook up your logic you can just go ahead and publish the module to the npm registry with npm publish, given that you have registered and done an npm adduser.

So, what are you waiting for?

BTW, this post is a product of the talk I delivered at Google Bangalore as a part of BangaloreJS; you can view bjs, the demo app that was created during the talk.

DOM Mutation Observers


The DOM4 MutationObserver API can be used to observe mutations to a tree of nodes.

It is designed as a replacement for Mutation Events defined in the DOM3 Events specification.

All you get with it is:

var observer = new MutationObserver(callback); // Get the changes.
observer.observe(target, options); // Observe the desired node with the given options.
observer.disconnect(); // Stop observing.
observer.takeRecords(); // Empty the record queue and return what was in there.

List of options:

[Image: table of MutationObserver options]
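
For reference, a typical options object might look like the sketch below; at least one of childList, attributes or characterData has to be true for observe() to accept it:

var options = {
  childList: true,                   // watch additions/removals of child nodes
  attributes: true,                  // watch attribute changes
  characterData: true,               // watch text content changes
  subtree: true,                     // extend observation to the whole subtree
  attributeOldValue: true,           // record the previous attribute value
  characterDataOldValue: true,       // record the previous text value
  attributeFilter: ['src', 'class']  // only observe these attributes
};

observer.observe(target, options);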

Check out a sample DEMO I made for fun, which loads random images from different sources ;)

The code looks like:

var matrix = document.querySelector("#imgs");
var imgServ = ["http://lorempizza.com/i/380/240",
               "http://lorempixel.com/380/240/"];

var observer = new MutationObserver(function(mutations){
  mutations.forEach(function(mutation){
    var size = matrix.childElementCount;
    matrix.children[size - 1].src = imgServ[Math.floor(imgServ.length * Math.random())] + "#" + Math.random().toString(36).substring(7);
  });
});

observer.observe(matrix, {childList: true});

setInterval(function(){
  matrix.appendChild(document.createElement('img'));
}, 1000);

Don't forget to read the spec.

Happy hacking, till next time ;)

Negative Array Index in Javascript


If you are from the Python or Ruby school, you will be familiar with negative array indices: a[-1] returns the last element, a[-2] returns the second-to-last element, and so on.

In javascript we can make use of negative indices like:

>> negArray = []; negArray[-100] = -100; negArray.length
(number) 0
>> negArray = []; negArray[-100] = -100; negArray.length; negArray[-100]
(number) -100

Let's add some ES6 proxy masala:

function negArray(arr) {
  var dup = arr.slice(); // work on a copy of the array

  return Proxy.create({
    set: function (proxy, index, value) {
      dup[index] = value;
    },
    get: function (proxy, index) {
        index = parseInt(index);
        return index < 0 ? dup[dup.length + index] : dup[index];
    }
  });
}

Now:

console.log(negArray(['eat', 'code', 'sleep'])[-1]);

// Would log sleep

Hope this is useful; there might be many other ways to implement the same, do let me know your way of doing this! :)

Update 0: as suggested by Gundersen, we can avoid the duplication, as per the code below.

function negArray(arr) {
  return Proxy.create({
    set: function (proxy, index, value) {
        index = parseInt(index);
        return index < 0 ? (arr[arr.length + index] = value) : (arr[index] = value);
    },
    get: function (proxy, index) {
        index = parseInt(index);
        return index < 0 ? arr[arr.length + index] : arr[index];
    }
  });
}

Redefining DOM Object's Behaviour


Object.defineProperty has been very useful for defining new properties or modifying the existing ones.
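
As a quick refresher, here is a minimal sketch of defining a property with a custom getter and setter (the names are purely illustrative):

var person = {};

Object.defineProperty(person, 'fullName', {
  get: function () { return this.first + ' ' + this.last; },
  set: function (value) {
    var parts = value.split(' ');
    this.first = parts[0];
    this.last = parts[1];
  }
});

person.fullName = 'John Doe';
console.log(person.first);    // 'John'
console.log(person.fullName); // 'John Doe'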

But I faced an interesting scenario of redefining a video element's src attribute.

// Let's create a video element.
var video = document.createElement('video');
var videos = {
    comedy: '',
    drama: ''
}; //Some collection.

// Let's try to redefine the src attr.
Object.defineProperty(video, 'src', {
    set: function (type) {
        this.src = videos[type];
    }
});

video.src = 'comedy'; // This must set src to the comedy video.

But due to the obvious recursion we get InternalError: too much recursion on Firefox and RangeError: Maximum call stack size exceeded on Chrome.

As I was looking for a backdoor, one of my colleagues, @sashvini, suggested a trick using setAttribute, since the video element is a DOM node whose IDL attributes and methods are exposed to scripts.

Voila! It worked.

Object.defineProperty(video, 'src', {
    set: function (type) {
        this.setAttribute('src', videos[type]);
    }
});

video.src = 'comedy'; // Now works as expected :)

P.S.: The real use case for this was more complicated; I have taken a silly example just to prove the point!

Hope this is useful; there may be many other ways to do this, do let me know if you are aware of one! (:

ES6 on node.js


In my old post I wrote about the same topic using shims. But here I would like to demonstrate many ES6 (harmony) features using raw node.js.

Let the code do the talking!

Node version : v0.11.6

Let's grep some harm ;) :

$ node --v8-options | grep harm
  --harmony_typeof (enable harmony semantics for typeof)
  --harmony_scoping (enable harmony block scoping)
  --harmony_modules (enable harmony modules (implies block scoping))
  --harmony_symbols (enable harmony symbols (a.k.a. private names))
  --harmony_proxies (enable harmony proxies)
  --harmony_collections (enable harmony collections (sets, maps, and weak maps))
  --harmony_observation (enable harmony object observation (implies harmony collections)
  --harmony_typed_arrays (enable harmony typed arrays)
  --harmony_array_buffer (enable harmony array buffer)
  --harmony_generators (enable harmony generators)
  --harmony_iteration (enable harmony iteration (for-of))
  --harmony_numeric_literals (enable harmony numeric literals (0o77, 0b11))
  --harmony_strings (enable harmony string)
  --harmony_arrays (enable harmony arrays)
  --harmony (enable all harmony features (except typeof))

Kool, now let's enable all of them with some awk magic, along with strict mode!

$ node --use-strict $(node --v8-options | grep harm | awk '{print $1}' | xargs) #ES6

Let's get started!

BLOCK SCOPING :

The keyword let helps in defining variables scoped to a single block.

function aboutme(){
  {
    let gfs = 10;
    var wife = 1;
  }
  console.log(wife);
  console.log(gfs);
}

// Let's invoke aboutme
aboutme();

// Would result in :
ReferenceError: gfs is not defined.

gfs got a ReferenceError since it was declared with let inside a block; let is syntactically similar to var, but defines a variable scoped to the current block. The above is a simple example, but let really shines when creating closures in loops, where it works better than var, as the sketch below shows.
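
A minimal sketch of that loop case (assuming full ES6 per-iteration binding semantics for let):

// With var, every callback closes over the same variable:
for (var i = 0; i < 3; i++) {
  setTimeout(function () { console.log('var:', i); }, 0); // logs 3, 3, 3
}

// With let, each iteration gets a fresh binding:
for (let j = 0; j < 3; j++) {
  setTimeout(function () { console.log('let:', j); }, 0); // logs 0, 1, 2
}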

GENERATORS :

Generators help to build iterators; yield is now a keyword in ES6, and the syntax to declare a generator function is function* (){}. Let's see an example:

function *Counter(){
  var n = 0;
  while(1 < 2) {
    yield n;
    ++n;
  }
}

var CountIter = Counter();

CountIter.next();
// Would result in { value: 0, done: false }

// Again
CountIter.next();
// Would result in { value: 1, done: false }

The done attribute will be true once the generator has nothing more to yield. Interestingly, a generator can also yield another generator! :)
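
For example, with yield* a generator can delegate to another generator (a sketch, assuming the harmony generators/iteration flags shown earlier are enabled):

function *inner() {
  yield 'a';
  yield 'b';
}

function *outer() {
  yield 1;
  yield *inner(); // delegate: inner()'s values are yielded from here
  yield 2;
}

var it = outer();
it.next(); // { value: 1, done: false }
it.next(); // { value: 'a', done: false }
it.next(); // { value: 'b', done: false }
it.next(); // { value: 2, done: false }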

PROXIES :

Proxies provide a meta-programming API that helps the programmer define fundamental object behaviour using traps.

var life = Proxy.create({
    get: function(obj, name){
        return name === "ans" ? 42 : "Meh! Nothing like : " + name;
    }
});

life.ans // Would return 42

life.lol // Would return "Meh! Nothing like : lol"

The above can be extended to simulating __noSuchMethod__

Update 0 : (Better way from the specs draft)

Object.createHandled = function(proto, objDesc, noSuchMethod) {
  var handler = {
    get: function(rcvr, p) {
      return function() {
        var args = [].slice.call(arguments, 0);
        return noSuchMethod.call(this, p, args);
      };
    }
  };
  var p = Proxy.create(handler, proto);
  return Object.create(p, objDesc);
};

P.S.: Note that this API is superseded by the newer direct proxies API, but in node it is not there yet; as a result:

> Proxy()
TypeError: Property 'Proxy' of object #<Object> is not a function

We must wait till v8 implements it.

MODULES :

Modules help in separating code and increasing modularity.

module Counter{
    var n = 0;
    export function inc() { return ++n; }
    export function dec() { return --n;}
    export function cur() { return n;}
}

Counter.n // undefined.
Counter.inc() // 1
Counter.dec() // 0
Counter.cur() // 0

Object.observe :

Object.observe provides a runtime capability to observe changes to an object.

> var todos = ["eat","code","code","sleep"];

// Using Array.observe
> Array.observe(todos,function(changes) { console.log(changes); })
> todos.pop()
'sleep'
> [ { type: 'splice',
    object: [ 'eat', 'code', 'code' ],
    index: 3,
    removed: [ 'sleep' ],
    addedCount: 0 } ]
> todos.push("sleep")
4
> [ { type: 'splice',
    object: [ 'eat', 'code', 'code', 'sleep' ],
    index: 3,
    removed: [],
    addedCount: 1 } ]

// Similarly with Object.observe
> var obj = {}
> Object.observe(obj,function(changes) {console.log(changes); })
> obj.name = "hemanth";
'hemanth'
> [ { type: 'new', object: { name: 'hemanth' }, name: 'name' } ]

COLLECTIONS (Maps and Sets) :

Map objects are simple key/value maps, but they differ from an Object:

  • Objects have default key/value pairs from their prototype.

  • Keys of an Object are Strings, whereas they can be any value for a Map.

  • Keeping track of size for an Object is manual, but Maps have a size attribute.

Set objects let you store unique values of any type, whether primitive values or object references, but we still can't iterate them in node :(

> Object.getOwnPropertyNames(Set.prototype)
[ 'constructor', 'size', 'add', 'has', 'delete', 'clear' ]

> Object.getOwnPropertyNames(Map.prototype)
[ 'constructor', 'size', 'get', 'set', 'has', 'delete', 'clear' ]

var myMap = new Map();
myMap.set(NaN, "not a number");
myMap.get(NaN); // "not a number"


var mySet = new Set();
var todos = ["eat","code","sleep","code","drink","code"]
todos.forEach(function(t) { mySet.add(t); } )
todos.length // 6
mySet.size  // 4

Some fun with Strings :

> (0/0 + "").repeat(10)+ " batman!"
'NaNNaNNaNNaNNaNNaNNaNNaNNaNNaN batman!'

All this without polyfills or shims is not bad at all!

Until next time, happy hacking!

Update 0: Don't use the typeof flag!

Jekyll Blog From a Subdirectory


Say you have a site, example.com, running on Jekyll and you want example.com/blog to also be served from the same Jekyll setup. Here is what you need to do:

  • cd ~/example.com -> Root dir.

  • cd _includes -> Add a blogskin.html that would contain all the required css and js.

  • cd _layouts and create a blog layout that each of your blog posts will use; don't forget to include blogskin.html in it (otherwise Jekyll will complain with "Included file 'blogskin.html' not found in _includes directory").

  • mkdir blog -> This would be the index for example.com/blog and hence must contain the index.html static file listing all the blog posts, similar to the index file that jekyll new would generate; give it layout: blog so that the index and the blog posts' look and feel stay in sync.

  • echo "permalink: /blog/:title.html" >> _config.yml -> This would be premalink format for your blog posts.

Now, a jekyll build && jekyll serve should have example.com/blog serving your blog :-)

Hope this helps, not sure if a bash script to do this will help more, will decide after some feedback ;)

This was a part of my learning gained while preparing yeoman's team blog.