Handling Currency in JavaScript

Oh yeah! Handling currency got better with the Intl object, which provides a namespace for the standard internationalization constructors.

The Intl object has many interesting constructors in it (listed below), but in this write-up I'm looking at one specific piece of functionality.

  • Collator : Constructor for collators, objects that enable language sensitive string comparison.

  • DateTimeFormat : Constructor for objects that enable language sensitive date and time formatting.

  • NumberFormat : Constructor for objects that enable language sensitive number formatting.
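
For completeness, here is a quick, illustrative sketch of the other two constructors; the locales and options chosen here are just examples:

new Intl.Collator("de", { sensitivity: "base" }).compare("ä", "a"); // 0, i.e. "ä" and "a" compare as equal
new Intl.DateTimeFormat("en-IN", { year: "numeric", month: "long", day: "numeric" }).format(new Date()); // e.g. "5 July 2013"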

Simple code to format currency (tested in Chrome Canary and Firefox Nightly):

var currency = function (type) {
  return new Intl.NumberFormat(["en-IN"], {
    style: "currency",
    currency: type,
    currencyDisplay: "symbol",
    maximumFractionDigits: 2
  });
};

Now let's play around with currency ;)

currency("INR").format(10023.56)
"Rs10,023.56"

currency("USD").format(10023.56)
"$10,023.56"

currency("GBP").format(10023.56)
"£10,023.56"

currency("YEN").format(10023.56)
"YEN10,023.56"

currency("CNY").format(10023.56)
"CN¥10,023.56"

P.S.: I'm sticking with "en-IN" as the locale for the NumberFormat; it can be changed as per the need.
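
For instance, a minimal sketch of how the locale argument alone changes digit grouping (output shown is from a recent engine; older builds may render the INR symbol as "Rs" instead of "₹"):

var enIN = new Intl.NumberFormat(["en-IN"], { style: "currency", currency: "INR" });
var enUS = new Intl.NumberFormat(["en-US"], { style: "currency", currency: "INR" });

enIN.format(1002356.56); // "₹10,02,356.56", Indian lakh/crore grouping
enUS.format(1002356.56); // "₹1,002,356.56", Western thousands grouping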

Hope there will be a standard API for currency conversion someday; that would be very useful! :)

Selfless JavaScript

Don't get misled by the title; all I'm trying to do is avoid explicitly saving the context with something like self = this.

What would 'this' be bound to?

  • In case of .call()/.apply(), this will be set to the first argument passed.

  • In case of .bind(), this will be the first argument that was passed to .bind() when the bound function was created.

  • On object.method(), this will refer to that object.

  • For the rest, this will refer to the global context (or be undefined in strict mode).
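
A quick sketch of those rules in action (the object and function names here are just for illustration):

function whoAmI() { return this && this.name; }

var obj = { name: "obj", whoAmI: whoAmI };

obj.whoAmI();                     // "obj", method call, so this is obj
whoAmI.call({ name: "called" });  // "called", first argument to .call()
whoAmI.bind({ name: "bound" })(); // "bound", fixed when the bound function was created
whoAmI();                         // plain call: this is the global object, or undefined in strict mode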

Let's see a simple use case:

var names = ["hemanth","gnumanth","yomanth"];

greet = {
    msg: "Yo! ",
    greetThem: function (names) {
        names.forEach(function (name) {
            console.log(this.msg + " " + name);
        })
    }
}

greet.greetThem(names);

// Would log :
/*
undefined hemanth
undefined gnumanth
undefined yomanth
*/

The workaround is pretty simple and famous: save the context!

var names = ["hemanth","gnumanth","yomanth"];

greet = {
    msg: "Yo! ",
    greetThem: function (names) {
        var self = this;
        names.forEach(function (name) {
            console.log(self.msg + " " + name);
        })
    }
}

greet.greetThem(names);

But an even better approach would be to avoid self = this altogether and take advantage of bind():

var names = ["hemanth","gnumanth","yomanth"];

greet = {
    msg: "Yo! ",
    greetThem: function (names) {
        names.forEach(function (name) {
            console.log(this.msg + " " + name);
        }.bind(this));
    }
}

greet.greetThem(names);

// Both of them would log :
Yo!  hemanth
Yo!  gnumanth
Yo!  yomanth

This was just a simple use case to make things clear, but this approach is really useful when there are n levels of nesting, where one would otherwise have to save the context as that = this, self = this and so on. Instead, bind() them as per the need!
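
A hypothetical sketch of what that looks like with two levels of nesting, binding at each level instead of capturing self/that (the notifier object here is made up for illustration):

var notifier = {
    msg: "Yo! ",
    notifyAll: function (groups) {
        groups.forEach(function (group) {
            group.forEach(function (name) {
                // Two callbacks deep, still the same this, no self/that needed.
                console.log(this.msg + name);
            }.bind(this));
        }.bind(this));
    }
};

notifier.notifyAll([["hemanth"], ["gnumanth", "yomanth"]]);
// Yo! hemanth
// Yo! gnumanth
// Yo! yomanth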

Auto Notify Git Pull

So, here is the code I came up with (first cut):

function cd() {
  # Run $2 with the output of $1, as a background job.
  async() {
    {
      $2 $($1)
    } &
  }

  notify_callback() {
    (( ${1:-0} > 0 )) && echo "You have new stuff to pull!"
  }

  # Change directory first, then check the repo we landed in.
  builtin cd "$@"

  # If it's a git repo, check if we need to pull.
  if git rev-parse --is-inside-work-tree &>/dev/null; then
    async "git rev-list HEAD...origin/master --count" notify_callback
  fi
}

Place this in your .bashrc to override the cd command; it checks whether the directory you cd into is a git repo and notifies you if you need to pull stuff.

Happy Hacking! :-)

Update 0:

A better approach would be to use (( $(git rev-list HEAD..@{u} --count) > 0 )) && echo "There are new things to merge"
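
A minimal sketch of the cd override using that one-liner, assuming the current branch has an upstream configured and that the remote has been fetched recently:

function cd() {
  builtin cd "$@" || return

  # Compare against the configured upstream (@{u}) instead of hard-coding origin/master.
  if git rev-parse --is-inside-work-tree &>/dev/null; then
    (( $(git rev-list HEAD..@{u} --count 2>/dev/null || echo 0) > 0 )) && echo "There are new things to merge"
  fi
}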

Getting the Expiry Date of Pfx (PKCS12)

A PKCS12 file, which has an extension of .pfx, contains a certificate (CA-issued certificate or self-signed certificate) and a corresponding private key.

Getting the certificate expiration date is a two-step process:

  • Convert the .pfx file to .pem

  • Get the expiration/enddate of the pem file.

There might be better ways to do this, but below is what I came up with while working with a friend today.

# Using -passin/-passout to avoid the PEM passphrase prompts.
$ openssl pkcs12 -in testuser1.pfx -out temp.pem -passout pass:"${pass}" -passin pass:"${pass}"

# This will spit out the expiration date.
$ openssl x509 -in temp.pem -noout -enddate
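
A possible one-liner variant (untested here, assuming the same ${pass} variable) that skips the temporary file by piping the certificate-only PEM output straight into openssl x509:

$ openssl pkcs12 -in testuser1.pfx -nokeys -passin pass:"${pass}" | openssl x509 -noout -enddate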

Power of Vanilla JS

Last year, I wrote about Native JS Like jQuery and Class handling with classlist, which did not catch the eyes of many, so I had to blog about JavaScript vs jQuery+CoffeeScript, which is getting a decent number of hits, but it still feels like the message is not conveyed clearly!

I have great respect for jQuery, but at the same time really like the sheer power of vanilla JS.

The power of vanilla JavaScript (a few more examples):

The code below for event listeners is one such solid example; credits to @alunny.

var $ = document.querySelectorAll.bind(document);
Element.prototype.on = Element.prototype.addEventListener;
$("somelink")[0].on('touchstart', handleTouch);

We can append, prepend or remove a child with ease :

parent.appendChild(child) // append child.

parent.insertBefore(child, parent.childNodes[0]) // prepend child.

child.parentNode.removeChild(child) // remove child.

Looping through a NodeList :

[].forEach.call(document.querySelectorAll('a'), function (elem) {
    console.log(elem.href);
});

Convert a NodeList to an array:

myList = Array.prototype.slice.call(myNodeList);

Select elements by a data attribute:

var matches = el.querySelectorAll('iframe[data-src]');

DYK!? getElementsByTagName() returns a live NodeList, which is faster than querySelectorAll(), which returns a static NodeList.

(Anyway, that DYK fact aside.)
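
A quick sketch of what "live" means in practice:

var live     = document.getElementsByTagName('p'); // live: reflects later DOM changes
var snapshot = document.querySelectorAll('p');     // static: a one-time snapshot

document.body.appendChild(document.createElement('p'));

console.log(live.length);     // includes the newly added <p>
console.log(snapshot.length); // still the old count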

Nesting many levels :

var cells = document.querySelectorAll("#score>tbody>tr>td:nth-of-type(2)");

Selecting multiple IDs :

var x = document.querySelector("#bar, #foo"); // returns whichever of #bar/#foo comes first in document order

Change CSS inline :

document.querySelector("#mainLink").style.cssText = 'color:red'

Test if an element contains another:

var child = document.querySelector('#child');
console.log(document.querySelector('#parent').contains(child));

That's all I can think of for now; I will update as and when my brain.signals me with new ideas.

Do feel free to share your ideas in the comment section!

Happy Hacking till then.

Update 0 :

For all those questioning browser compatibility, please check out this graph.

A few reactions from reddit:

Substack says :

Module systems and package managers give you these benefits and even more without the bloat of lumping everything into a single kitchen sink like jquery. For ajax, when you use something like browserify require('http') works for requests like it does in node with xhr wrappers that work all the way down to IE6. See the bottom of my recent post about this. If you think this example is uglier than the jquery version it's trivial to wrap it in a different api but you get streaming updates as the default.
The biggest problem with jquery is that it doesn't compose well since it tries to do so much. It's really hard to publish reusable components with a jquery dependency as a result and it doesn't scale up well when you want to be using dozens of modular components.

zzzev says :

I agree if your goal is implementing stuff in a way that's cross browser compatible, but there is definitely a class of projects that doesn't need JQuery, like mobile sites where bandwidth is crucial and you're specifically targeting one device. If all you need are the DOM API functions mentioned in this link, well, it's kind of nice to not have JQuery if you don't need it.

TheGameHippo [JS Purist] says :

I kindly point you to this jsperf test that I just made.
Both Chrome and FireFox show element.value is faster than $elements.val();
As to the reason, I believe jQuery uses a map function or similar to apply the value to each element. This has the overhead of an additional function call per element.

Web Crawler With Python Twisted

Here is a simple HTTP crawler I wrote with Python Twisted:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from twisted.web.client import getPage
from twisted.python.util import println
from BeautifulSoup import BeautifulSoup
from twisted.python import log
from twisted.internet import defer, task
import re
# Needs : PyOpenSSL and Twisted 12.3+

def parallel(iterable, count, callable, *args, **named):
    coop = task.Cooperator()
    work = (callable(elem, *args, **named) for elem in iterable)
    return defer.DeferredList([coop.coiterate(work) for i in xrange(count)])


def union(p, q):
    for e in p:
      if e not in q:
        print e
        q.append(e)


def extractLinks(html):
    soup = BeautifulSoup(html)
    soup.prettify()
    return [str(anchor['href']) for anchor in soup.findAll('a',attrs={'href': re.compile("^http://")}) if anchor['href']]

def crawlPage(url, urlList):
    d = getPage(url)
    d.addCallback(extractLinks)
    d.addCallback(union, urlList)
    d.addErrback(log.err)
    return d


def crawler(urls):
    urls = list(urls)


def main(reactor, *args):
    urls = list(args)
    return parallel(urls,len(urls), crawlPage, urls)


if __name__ == '__main__':
    import sys
    task.react(main,["http://h3manth.com","http://www.test.com"]) # Can pass a list of urls

Here is the non-twisted version :

#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""This is a liner implementation of a simple HTTP crawler.

This is crawler crawlers a given URL till a specified limit,
or till limit tends to infinity.

TODO :
1. import robotparser and parse robots.txt
2. Write the URL to DB using sqllite.
3. Content type validation using response.info().headers
"""
import urllib2
import socket
from lxml.html import parse
import argparse
import sys
import re
socket.setdefaulttimeout(10)


class Spidy:
  """Main spider class, public method crawl"""

  def __init__(self, url):
       self.seed = url
       self.failed = []
       self.crawled = []

  def __union(self, p, q):
      """list(set(a) | set(b))"""
      for e in q:
          if e not in p:
              p.append(e)

  def __extractLinks(self, page):
        """ Extract hrefs """
        dom = parse(page).getroot()
        dom.make_links_absolute()
        links = dom.cssselect('a')
        return [link.get('href') for link in links if link.get('href')]

  def crawl(self, limit=float('inf')):
      """ Crawls the webpage,
          optional param limit.
      """
      tocrawl = [self.seed]
      while tocrawl and len(self.crawled) < limit:
          page = tocrawl.pop()
          print page   # Printing as of now for redirection.
          if page not in self.crawled:
              try:
                self.__union(tocrawl, self.__extractLinks(page))
                self.crawled.append(page)
              except Exception as e:
                print e
                self.failed.append([page, e])   # Failed! write to DB.
                pass
      return self.crawled

if __name__ == "__main__":
  parser = argparse.ArgumentParser(description='Spidy a simple web crawler')
  parser.add_argument('-u', '--url',  help='URL to crawl',required=True)
  parser.add_argument('-l', '--limit', help='Crawling limit', type=int, required=False)
  args  = parser.parse_args()
  url   = args.url
  limit = args.limit
  if re.match("^https?://", url):
    try:
      urllib2.urlopen(url)
    except IOError:
      print "Not a real URL"
      sys.exit(0)
  else:
    print "Sorry only http or https urls are accepted as of now"
    sys.exit(0)
  if not url.endswith("/"):
    url+="/"  # Needs a trailing slash.
  spider = Spidy(url)
  spider.crawl() if limit is None else spider.crawl(limit)

Regular Expression Negation in Ruby

We are all aware of the pattern matching operator =~

"hemanth" =~ /heman/ # Does match.

But what about !=~? We don't get any errors, just a boolean true value:

"hemanth" !=~ /foo/ # => true
"hemanth" !=~ /bar/ # => true

# That is always true, because it's parsed like :

"hemanth".!=(~/foo/) # => != comes from Object, and unary ~ is Regexp's match against $_

So the right way :

# We can as well use :
"hemanth" !~ /heman/ # => flase
"hemanth" !~ /foo/ # => true

Another simple way is to negate the match method: !"hemanth".match("foo") #=> true

Octopress at Last!

At last, I made the move to Octopress. I have been a big fan of it and have been following it from its inception, but due to the limitations of my old blog I had to hold on, and today I decided to ditch the old blog!

I did not even bother migrating from Drupal to Octopress; I just started a fresh Octopress blog here and let the old blog stay as it is.

One of the other major reasons that held me back from migrating to Octopress was that the server I'm hosting this site on has no proper support for hosting Octopress, but then I noticed an easy way of deploying the site with rsync.

Setting up an Octopress blog with rsync was easy:

  • Clone the source and bundle install.

  • Add your server configurations to the Rakefile

## -- Rsync Deploy config -- ##
# Be sure your public key is listed in your server's ~/.ssh/authorized_keys file
ssh_user       = "[email protected]"
ssh_port       = "22"
document_root  = "~/html/new"
rsync_delete   = true
rsync_args     = "--rsync-path=/usr/local/bin/rsync"  # Don't forget this!
deploy_default = "rsync"

  • Then just do a rake generate && rake deploy, that's it!

So, my journey with Octopress begins!