But an even better approach would be to avoid self = this altogether and take advantage of bind():
var names = ["hemanth", "gnumanth", "yomanth"];

greet = {
  msg: "Yo!",
  greetThem: function (names) {
    names.forEach(function (name) {
      console.log(this.msg + " " + name);
    }.bind(this));
  }
};

greet.greetThem(names);

// Both of them would log:
// Yo! hemanth
// Yo! gnumanth
// Yo! yomanth
This was just a simple use case to make things clear, but this paradigm becomes really useful when there are n levels of nesting, where one would otherwise have to keep saving the context as that = this, self = this, and so on. Instead, bind() them as per the need!
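For instance, here is a minimal sketch of the same idea two levels deep (the setTimeout wrapper and the app object are just for illustration): instead of saving the context at each level, each nested callback is bound as it is defined.

// A hypothetical two-level nesting: bind() at each level, no self = this anywhere.
var app = {
  msg: "Yo!",
  start: function (names) {
    names.forEach(function (name) {
      setTimeout(function () {
        // Still the app object's this, two callbacks deep.
        console.log(this.msg + " " + name);
      }.bind(this), 100);
    }.bind(this));
  }
};

app.start(["hemanth", "gnumanth"]);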
function cd() {
    async() {
        { $2 $($1); } &
    }

    notify_callback() {
        # -gt does a numeric comparison; [[ $1 > 0 ]] would compare strings.
        [[ $1 -gt 0 ]] && echo "You have new stuff to pull!"
    }

    # Change directory first, then check the repo we landed in.
    builtin cd "$@"

    # If it's a git repo, check if we need to pull.
    if git rev-parse --is-inside-work-tree &>/dev/null; then
        async "git rev-list HEAD...origin/master --count" notify_callback
    fi
}
Place this in your .bashrc to override the cd command; it checks whether you have cd'd into a git repo and notifies you if you need to pull.
Happy Hacking! :-)
Update 0:
A better approach would be to use (( $(git rev-list HEAD..@{u} --count) > 0 )) && echo "There are new things to merge", as in the sketch below.
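Here is a minimal sketch of the cd override reworked around that one-liner (assuming the branch has an upstream @{u} configured; the behind variable name is my own):

function cd() {
    builtin cd "$@" || return

    # If it's a git repo with an upstream, check if we need to pull.
    if git rev-parse --is-inside-work-tree &>/dev/null; then
        local behind
        behind=$(git rev-list HEAD..@{u} --count 2>/dev/null)
        (( behind > 0 )) && echo "There are new things to merge"
    fi
}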
A PKCS12 file, which has an extension of .pfx, contains a certificate (CA-issued certificate or self-signed certificate) and a corresponding private key.
Getting the certificate expiration date is a two-step process:
Convert the .pfx file to .pem.
Get the expiration (end) date of the .pem file.
There might be better ways to do this, but below is what I came up with while working with a friend today.
# Using -passin to avoid the pem pass phrase prompt.
$ openssl pkcs12 -in testuser1.pfx -out temp.pem -passout pass:"${pass}" -passin pass:"${pass}"

# This will spit out the expiration date.
$ openssl x509 -in temp.pem -noout -enddate
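If you'd rather not leave a temp.pem lying around, the two steps can be piped together; a sketch of the same idea (-nokeys keeps the private key out of the pipe, so no output pass phrase is needed):

$ openssl pkcs12 -in testuser1.pfx -nokeys -passin pass:"${pass}" | openssl x509 -noout -enddate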
That's all I can think of for now; will update as and when my brain signals me with new ideas.
Do feel free to share your ideas in the comment section!
Happy Hacking till then.
Update 0:
For all those questioning browser compatibility, please check out this graph.
A few reactions from reddit:
Substack says:
Module systems and package managers give you these benefits and even more without the bloat of lumping everything into a single kitchen sink like jquery. For ajax, when you use something like browserify require('http') works for requests like it does in node with xhr wrappers that work all the way down to IE6. See the bottom of my recent post about this. If you think this example is uglier than the jquery version it's trivial to wrap it in a different api but you get streaming updates as the default. The biggest problem with jquery is that it doesn't compose well since it tries to do so much. It's really hard to publish reusable components with a jquery dependency as a result and it doesn't scale up well when you want to be using dozens of modular components.
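(For reference, the browserify pattern being described looks roughly like this; a sketch based on his comment, not code from his post, and the /api/data path is made up:)

// With browserify, require('http') resolves to an xhr-backed wrapper in the browser.
var http = require('http');

http.get({ path: '/api/data' }, function (res) {
    res.on('data', function (chunk) {
        console.log(String(chunk)); // streaming updates as the default
    });
});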
zzzev says:
I agree if your goal is implementing stuff in a way that's cross browser compatible, but there is definitely a class of projects that doesn't need JQuery, like mobile sites where bandwidth is crucial and you're specifically targeting one device. If all you need are the DOM API functions mentioned in this link, well, it's kind of nice to not have JQuery if you don't need it.
TheGameHippo [JS Purist] says:
I kindly point you to this jsperf test that I just made. Both Chrome and FireFox show element.value is faster than $elements.val(); As to the reason, I believe jQuery uses a map function or similar to apply the value to each element. This has the overhead of an additional function call per element.
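For context, the two forms being benchmarked boil down to this (a sketch; the #name selector is made up):

// Plain DOM API: a direct property read.
var input = document.querySelector("#name");
console.log(input.value);

// jQuery: the same read through the val() wrapper, an extra function call.
console.log($("#name").val());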
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from twisted.web.client import getPage
from twisted.python.util import println
from BeautifulSoup import BeautifulSoup
from twisted.python import log
from twisted.internet import defer, task
import re

# Needs: PyOpenSSL and Twisted 12.3+

def parallel(iterable, count, callable, *args, **named):
    coop = task.Cooperator()
    work = (callable(elem, *args, **named) for elem in iterable)
    return defer.DeferredList([coop.coiterate(work) for i in xrange(count)])

def union(p, q):
    for e in p:
        if e not in q:
            print e
            q.append(e)

def extractLinks(html):
    soup = BeautifulSoup(html)
    soup.prettify()
    return [str(anchor['href']) for anchor in
            soup.findAll('a', attrs={'href': re.compile("^http://")})
            if anchor['href']]

def crawlPage(url, urlList):
    d = getPage(url)
    d.addCallback(extractLinks)
    d.addCallback(union, urlList)
    d.addErrback(log.err)
    return d

def crawler(urls):
    urls = list(urls)

def main(reactor, *args):
    urls = list(args)
    return parallel(urls, len(urls), crawlPage, urls)

if __name__ == '__main__':
    import sys
    task.react(main, ["http://h3manth.com", "http://www.test.com"])  # Can pass a list of urls
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
This is a linear implementation of a simple HTTP crawler.
This crawler crawls a given URL till a specified limit,
or till the limit tends to infinity.

TODO:
1. import robotparser and parse robots.txt
2. Write the URL to DB using sqlite.
3. Content type validation using response.info().headers
"""
import urllib2
import socket
from lxml.html import parse
import argparse
import sys
import re

socket.setdefaulttimeout(10)

class Spidy:
    """Main spider class, public method crawl"""

    def __init__(self, url):
        self.seed = url
        self.failed = []
        self.crawled = []

    def __union(self, p, q):
        """list(set(a) | set(b))"""
        for e in q:
            if e not in p:
                p.append(e)

    def __extractLinks(self, page):
        """Extract hrefs"""
        dom = parse(page).getroot()
        dom.make_links_absolute()
        links = dom.cssselect('a')
        return [link.get('href') for link in links if link.get('href')]

    def crawl(self, limit=float('inf')):
        """Crawls the webpage, optional param limit."""
        tocrawl = [self.seed]
        while tocrawl and len(self.crawled) < limit:
            page = tocrawl.pop()
            print page  # Printing as of now for redirection.
            if page not in self.crawled:
                try:
                    self.__union(tocrawl, self.__extractLinks(page))
                    self.crawled.append(page)
                except Exception as e:
                    print e
                    self.failed.append([page, e])  # Failed! write to DB.

        return self.crawled

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Spidy a simple web crawler')
    parser.add_argument('-u', '--url', help='URL to crawl', required=True)
    parser.add_argument('-l', '--limit', help='Crawling limit', required=False)
    args = parser.parse_args()
    url = args.url
    limit = args.limit
    if re.match("^https?://", url):
        try:
            urllib2.urlopen(url)
        except IOError:
            print "Not a real URL"
            sys.exit(0)
    else:
        print "Sorry only http or https urls are accepted as of now"
        sys.exit(0)
    if not url.endswith("/"):
        url += "/"  # Needs a trailing slash.
    spider = Spidy(url)
    # argparse hands us a string, so convert before comparing against len().
    spider.crawl() if limit is None else spider.crawl(int(limit))
We are all aware of the pattern matching operator =~:
"hemanth"=~/heman/# Does match.
But what about !=~? We don't get any errors, just a boolean true value:
"hemanth"!=~/foo/# => true"hemanth"!=~/bar/# => true#That is true always because it's like :"hemanth".!=(~/heman/)# => != is Object class and ~ is from R.E.
So the right way:
# We can as well use:
"hemanth" !~ /heman/ # => false
"hemanth" !~ /foo/ # => true
Another simple way is to use the match method: !"hemanth".match("foo") # => true
At last I made the move to Octopress. I have been a big fan of it and have been following it since its inception, but due to the limitations of my old blog I had to hold on; today I decided to ditch the old blog!
I did not even bother migrating from Drupal to Octopress; I just started a fresh Octopress blog here and let the old blog stay as it is.
One of the other major reasons that held me back from migrating to Octopress was that the server hosting this site has no proper support for Octopress, but then I noticed an easy way of deploying the site with rsync.
Setting up an Octopress blog with rsync was easy:
Clone the source and bundle install.
Add your server configuration to the Rakefile:
## -- Rsync Deploy config -- ##
# Be sure your public key is listed in your server's ~/.ssh/authorized_keys file
ssh_user = "[email protected]"
ssh_port = "22"
document_root = "~/html/new"
rsync_delete = true
rsync_args = "--rsync-path=/usr/local/bin/rsync" # Don't forget this!
deploy_default = "rsync"
Then just do a rake generate && rake deploy and that's it!
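For reference, the whole flow from a clean machine looks something like this (a sketch assuming the stock imathis/octopress repository and its default rake tasks):

# Clone the source and install dependencies.
git clone git://github.com/imathis/octopress.git octopress
cd octopress
bundle install

# After adding the rsync config above to the Rakefile:
rake generate   # builds the site into public/
rake deploy     # rsyncs public/ to document_root on the server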