26 April 2017

Aren't you using retina images already?

With higher-resolution devices proliferating every day and end users getting used to seeing high-res imagery, anything at default or standard definition looks inferior, if not ugly. As a developer, it is a must to consider this design impact and figure out how to give a seamless experience to end users across various types of devices. I am going to quickly touch on what high-DPI devices are and how to exploit all the pixels available in a device's real estate.

DPI stands for Dots Per Inch. I am a 90s kid (I don't know if that is a proud statement or something to be trolled about), and one of the highest resolutions I used to see was 1024 x 768, on almost square-sized monitors. Resolution simply means how many pixels are available, and hence how many distinct points of color one could see on screen, irrespective of the physical size of the screen. Take a look at the simple example below, where we try to draw a circle on devices with different pixel densities. On the far left, assuming 1 pixel = 1 unit, we won't get much of a curve, as each rectangular slot can show just one color at a time. If we increase the density by 4x (twice along each side), the resolution becomes better, and at 9x (three times along each side) it shows an even smoother curve.

The current HD televisions have 1920x1080 pixels, and if you observe one from very close you can literally decipher the individual pixels. Viewed from a typical distance, though, the human eye cannot separate them out. On small devices like phones, however, which are held much closer to the eye, it is vital that every developer understands this basic concept in order to serve high-res content.
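To put numbers on this: pixel density (PPI, a close cousin of DPI) is just the diagonal pixel count divided by the physical diagonal in inches. A quick Ruby sketch, with illustrative screen sizes of my own choosing:

```ruby
# Pixels per inch: diagonal resolution divided by physical diagonal size.
def ppi(width_px, height_px, diagonal_inches)
  (Math.sqrt(width_px**2 + height_px**2) / diagonal_inches).round(1)
end

puts ppi(1920, 1080, 40)  # a 40-inch full-HD TV: ~55 PPI, pixels visible up close
puts ppi(1920, 1080, 4.7) # the same pixel count packed into a phone: ~469 PPI
```

The same 1920x1080 grid goes from coarse to razor-sharp purely because the pixels get packed closer together, which is exactly why proximity matters so much on phones.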

For the web

Before we dive into how to optimize rasterized images for the web: if there is an option to go with vector graphics (for example, .svg), always take it; it should be a no-brainer. They are relatively smaller in size and come ready-made to render crisply on any kind of device. Practically, however, this is not always an option.

Rendering high-DPI images on the web is easy. For demonstration purposes, let me take the Pramati logo as an example. Following are two versions of the same image rendered differently. If you see this image on a standard HD monitor, you will say they both look alike. However, I would urge you to look at them on a high-resolution device like an Apple iPhone, a Retina-enabled MacBook, or any other high-DPI device; you will clearly see a stark difference between the two.

To put it in "lay-dev" terms, the image on the bottom is not retina ready whereas the top one is. For the stubborn folks like me who won't take the time to see it on a retina device, I am attaching a screenshot (deliberately enlarged) captured from an Apple iPhone below.

As you can see, it is pretty clear there is a significant difference in the edges and the symbol.

The trick

The technique is very simple: use an image four times the area of its base size (twice along each side) and use CSS to explicitly set the width back to the base size. For example, if you want to render an image in an area of 400x150, you need your image at four times that area, so use an image of dimension 800x300 and set the resulting width to 400px. This ensures that it renders with the maximum pixels on a Retina device.
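In markup, the trick might look like the sketch below (the file name and the .logo class are placeholders of mine; the @2x asset is 800x300):

```html
<!-- The asset is 800x300; the explicit width/height squeeze it into a
     400x150 box, leaving 4 physical pixels behind every CSS pixel on a
     2x display. -->
<img src="logo@2x.png" width="400" height="150" alt="Pramati Logo">

<!-- The same idea with a CSS background image -->
<style>
  .logo {
    width: 400px;
    height: 150px;
    background-image: url("logo@2x.png");
    background-size: 400px 150px; /* base size, not the asset's 800x300 */
  }
</style>
```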

Whenever you develop this for the web, you are going to test it in most of the common browsers, where CSS3 standards have become the norm, so a consistent experience is practically guaranteed. However, when you are developing for email clients (especially the bloody Outlooks), I would urge you to consider and craft every minute detail. With lots and lots of email consumed on mobile devices these days, heading towards retina images is a no-brainer. Yet almost all the emails I see and consume ignore this fact and are not retina friendly. Even a lot of famous companies ignore this trivial design difference and send images as-is.

I've already written an article on the tools and techniques required to develop rich and responsive emails for different email clients, which is a short read. And don't forget to use Litmus - they have clearly monopolised the email testing market!

Maximum compression
I am a huge fan of PNG images because of the size, transparency, and supreme compression they offer. If you are not going to link the pictures directly in the email, compressing a PNG can save a lot of file size, especially when the images are made attachments and referenced in the final email.

I use a tool called ImageAlpha, which is phenomenal when it comes to compressing images with transparency. For a PNG, the number of predefined colors can be reduced to 32 or 64 depending on your picture's color usage and, voila, you can achieve compression as high as 80%! It is 100% free as well!

I often say that design nuances and intricacies are not just for "designers". Any "engineer" should be able to understand why we do what we do, and then dig into the how to become proficient, increasing your productivity and your edge multifold.

Hope you were able to learn something today.


24 February 2017

Is your experience an Asset or a Liability?

We are already a few weeks into 2017. As with any new year, a lot of resolutions would have been made. I wanted this year to be the best one, better than any year in the past, and I am sure most of you would want the same. As any year progresses, two things happen whether we wish them or not: for professionals (at whom I direct this article), you age a year and your experience increases by a year. I am going to analyse whether this natural progression is good or bad for our careers.

Information technology is a niche industry that started booming, or rather exploding, in the early 2000s. Due to this sudden inflation of opportunity, openings appeared in every direction and everyone wanted to cash in on the boom, as the demand for resources was on the rise. The supply, meanwhile, was relatively young. In fact, the potential was so high that there was a surge in engineering colleges worldwide over the past 20 years; in some countries the growth can fairly be called exponential. As with any emerging market, this enormous demand was met with lots and lots of information technology professionals from the start of the millennium. The profession became the young generation's doorway into industry. The salaries this generation commanded raised eyebrows and at times irked folks a generation above. It was a flourishing period for many.

Any sudden growth like this is bound to bring with it a perception that experience alone signals value. This inflation, occurring right at the start, was destined to reach equilibrium at some point, and that is what has started to happen. Around 2006, any software engineer with 6 years of experience and a working knowledge of Java was considered highly valuable, irrespective of their objective competence. The folks entering the market then observed this pattern, and a natural association formed: the more the experience, the more the intellect and knowledge, thus more money. People started to value themselves highly with time, irrespective of the career choices they were making. This attitude became deeply rooted in the industry, and it became a pressing problem for companies as experienced non-performers started to become a nightmare.

In the current phase, organisations fear taking a stance or a bold decision, because organisations are not idealistic machines; they are people in flesh and blood like you and me, and some of the problems to be solved are the problem solvers themselves. It is all intertwined right now.

Normalisation phase
However, as with anything else, time will play catch-up. With a new set of engineers learning novel ways of doing things, startups exploding with their own ideas, and timelines compressing, these outliers have made themselves sitting ducks. The ones in the most pathetic state are those who think "I deserve this and that" simply because they have more experience on paper. It will be a harsh fact to accept when reality reveals itself, which it often does.

An analogy I can relate to is our planet. My brother hates it when people say "Save the planet". His argument is that the planet does not need any saving. Planting trees and making it greener does not matter an ounce at the scale of something as big as Earth; the beneficiaries are none but humans and other species, with humans always being the primary. Whatever we do to Mother Earth, she has her own defence mechanisms to bring things back to equilibrium. On a human scale that may look like a very long time, but the laws of the universe do not change a bit. Only our perceptions do.

Same is the case with the perceived experience factor. I am seeing first hand that experience has become a problem, especially during recruitment. When we want to fill a particular position, we obviously don't want more engineers with 10-15 years of experience. Here are some arguments towards the same.

  1. Recruiting an experienced player is a difficult job: For one to hire an experienced player, there needs to be good capability on the assessment front, as there is going to be a considerable investment. The one who is assessing should, in theory, be at least as good as the position being sought. Unfortunately, when people are trying to add resources at scale, this becomes a problem, and the chances of making a mistake are high.
  2. Experienced resources are expensive: This is a very "current" trend. The ones who are experienced are usually expensive because both the market and the candidates perceive them as valuable, hence the cost factor.
  3. Experienced resources are older: With the heavy availability of young blood, people tend to prefer younger engineers, while big organizations are finding ways to shed their stagnant folks. This may be hard to digest, but this is how it is.
  4. The distribution of experienced resources is very thin: Although there may be 10 million resources each with 10+ years of experience, the industry does not need all 10 million, because of the hierarchy structure. If you consider the pyramid, the volume is concentrated in the worker bees, not the queen bees. This does not mean that queen bees are not needed, but their spread, or distribution, is very thin. So the more experienced you are, the more likely you are to face a very harsh time in the coming decade, as the number of positions to fill reduces drastically toward the top.

So how to avoid this deadly trap?
Fortunately, the situation is not all that bad! At least, not yet. Yes, this is a problem that many organizations inevitably have to address, and many have already gone through multiple cycles of addressing it irrespective of the backlash they face. Still, from the individual's perspective, there is a chance to convert any baggage one has into an X factor that is bound to increase one's value.
  1. Switch from a knowledge worker to a deep worker: Ask yourself whether you are a knowledge worker or a deep worker. It is not rocket science to know where you stand. Try to answer these simple questions honestly and you can assess which side you fall on. Do I proactively look for sensational/political/social feeds? Do I use facebook/twitter/social media sites multiple times a day? Have I solved any real problem which only a few in my industry could solve? Do I have a learning or positive habit-building routine? Have I recently worked on an issue in total isolation for days together?
    Based on your answers, you can roughly assess whether you are a knowledge worker, gathering a bunch of haphazard information that is easily repeatable by others and does not add any special value. If that is the case, you should switch to being a deep worker. The best resource I can point you to is the book Deep Work by Cal Newport, where he dives deep and emphasizes why it is very important to go deep and why depth is a valuable asset amidst a highly distracting environment.
  2. Attitude check: This is very subjective and a tough thing for anyone to assess about themselves. Our ego may interfere and give sweet answers that are acceptable to hear. Your number of years on paper IS NOT the same as your experience. Do a fair judgment; this should be easy. Assuming you have 10 years of paper experience in the industry, ask yourself: how long would it take a truly "focussed" engineer to be able to meet the job profile I am performing today? If the answer is 10 years, either you have not understood my question or you have skipped most of what I've said earlier.
  3. Technology immunity: One other pattern I am seeing very closely is people associating themselves with a particular technology. "I am a Java person", "I am a manual tester", "I am a Scala guy" - and they believe that working in that field magically makes them a subject matter expert. Again, I am talking about the players with the "I-deserve" attitude. These players are very reluctant to change. When the technology or the process becomes obsolete, they become obsolete along with it. Cross-training them into anything else is a painful and costly exercise, which many organisations weigh before deciding instead to shed them. That is why the change should not come from outside; these things should be realized by the professionals themselves.

Being lethargic and thinking things will take care of themselves is not going to work, not at the pace the industry is moving. Being proactive is no longer merely an option for the passionate but a necessity for everyone. The good news is that before things are dictated to us, we can take the baton into our own hands, realise where the industry is heading, and make the right moves. Wishing you all a fantastic year!


05 February 2017

15-Game in Android

Having developed this game for fun in Java Swing and ActionScript years ago, I thought I would port it to Android as well. It took a good 6 hours from New Project to live on the Play Store! Click here to download it.


23 January 2017

A simple formula to rapidly succeed

This is a well-known formula. Wanted to see how it looks in code. Only 5 lines, not bad!
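In case the embedded snippet does not render for you, here is a minimal Ruby sketch of the formula (try, learn, repeat until you succeed); pursue_success and its escape hatch are my own naming:

```ruby
# The well-known formula: keep trying until you succeed. By design this
# is an infinite loop; max_tries is the "force kill" escape hatch.
def pursue_success(max_tries: Float::INFINITY)
  tries = 0
  until yield(tries) || tries >= max_tries
    tries += 1 # fail, learn something, go again
  end
  tries
end

pursue_success { |tries| tries >= 3 }  # succeeds on the third attempt
pursue_success(max_tries: 5) { false } # force-killed after 5 tries
```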

Of course, you can force kill this infinite loop :) How far you go, though, is up to you!


21 January 2017

Gotchas while Appifying your website in Android with Webview

Android's WebView class is an excellent way to convert your website into an app instantly, provided the site is responsive. There are plenty of tutorials out there on doing exactly this, and I too wanted to convert one of my websites into an app.
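For context, the conversion itself can be as small as one Activity hosting a WebView; a hedged sketch (the URL is a placeholder, and you would normally add your own error handling):

```java
import android.app.Activity;
import android.os.Bundle;
import android.webkit.WebView;
import android.webkit.WebViewClient;

// Minimal "appified" website: a single Activity whose entire UI is a WebView.
public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        webView.getSettings().setJavaScriptEnabled(true); // most sites need JS
        // Keep link taps inside the app instead of bouncing to the browser.
        webView.setWebViewClient(new WebViewClient());
        setContentView(webView);
        webView.loadUrl("https://example.com"); // your responsive site here
    }
}
```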

However, there are certain things you need to keep in mind before you go ahead and submit in the app store for publication.

Since this was a relatively new experience for me, I did not anticipate certain obvious gotchas. My application went through two rounds of rejection before I could make it live; perhaps these lessons could be useful for someone in similar waters.

Ensure that you are aware of all the policies set forth by Google Play, be it copyrighted content, privacy policy, impersonation policy, ratings, etc. I thought I had it all covered until I got this email from Google claiming that my application was an impersonation of an existing website.


Notification from Google Play

Google Play Support <googleplay-developer-support+no-reply@google.com>

Hi Developers at ***,

After review, <my app>, has been suspended and removed from Google Play as a policy strike because it violates the impersonation policy.

Next Steps

  1. Read through the Impersonation article for more details and examples of policy violations.
  2. Make sure your app is compliant with the Impersonation and Intellectual Property policy and all other policies listed in the Developer Program Policies. Remember additional enforcement could occur if there are further policy issues with your apps.
  3. Sign in to your Developer Console and submit the policy compliant app using a new package name and a new app name.

What if I have permission to use the content?

Contact our support team to provide a justification for its use. Justification may include providing proof that you are authorized to use the content in your app or some other legal justification.

Additional suspensions of any nature may result in the termination of your developer account, and investigation and possible termination of related Google accounts. If your account is terminated, payments will cease and Google may recover the proceeds of any past sales and/or the cost of any associated fees (such as chargebacks and transaction fees) from you.

If you’ve reviewed the policy and feel this suspension may have been in error, please reach out to our policy support team. One of my colleagues will get back to you within 2 business days.


The Google Play Review Team

My Appeal

My initial reaction was wtf! I have created an application for my own website, why the hell would Google want to reject it? But the rationale behind the rejection made sense. What if a random person took my website and tried to monetize an app of it? Or I, for that matter, took a bunch of existing websites and started appifying them for my personal benefit?
I appealed. During the appeal, I uploaded a bunch of documents supporting that I am the owner of the website and that I have the rights to appify it. The documents included the site's Google Analytics statistics page and screenshots of the DigitalOcean-hosted app, plus a small write-up claiming ownership.

Google understood and accepted my appeal. This is the email I got from them afterward.


Re: [<my case#>] Your appeal for reinstatement

Hi Bragadeesh,

Thanks for contacting the Google Play Team.

We’ve accepted your appeal and your app <appname> has been reinstated. For the app to appear on the Play Store, you’ll need to sign into your Developer Console and click "Submit update" to submit your app again.

If the option to resubmit is not available, please make a small change (such as adding and deleting a space in your Description in the Store Listings) to reactivate the button.

In the future, if you have proof of permission you can submit it to our team proactively using this form:

The link can also be found on your Store Listing page underneath the box for Full Description.

If you're an AdMob publisher, you'll need to contact the AdMob team to re-enable ad serving:

The AdMob policy team will review your app(s) and decide whether to re-enable ad serving.

Please let me know if you have any other questions or concerns.

Thanks for supporting Google Play!

<Google Engineer>
The Google Play Team

At this point, I was elated to have successfully appealed my rejection, and went ahead and resubmitted the application. However, this feeling was short-lived, as I soon got a second rejection from the Google Play Store.


Notification from Google Play about <app name>

Hi Developers at ***,

Thanks for submitting your app to Google Play.

I reviewed <appname>, and had to reject it because of an unauthorized use of copyrighted content. If you submitted an update, the previous version of your app is still live on Google Play.

Here’s how you can submit your app for another review:
  1. Remove any content owned by a third party from your app. For example, your app Store Listing contains: images of “<a celebrity>” in the Tablet 7" Screenshots. Affected Translations: en_US, en_IN
  2. Read through the Unauthorized Use of Copyrighted Content article for more details and examples.
  3. Make sure your app is compliant with the Impersonation and Intellectual Property policy and all other policies listed in the Developer Program Policies. Remember that additional enforcement could occur if there are further policy issues with your apps.
  4. Sign in to your Developer Console and submit your app.

What if I have permission to use the content?

Contact our support team to provide a justification for its use. Justification may include providing proof that you are authorized to use the content in your app or some other legal justification.

If you’ve reviewed the policy and feel this rejection may have been in error, please reach out to our policy support team. One of my colleagues will get back to you within 2 business days.

I appreciate your support of Google Play!


<Google Engineer>

Google Play Review Team

This rejection, though, was totally valid. I felt so dumb to have used a celebrity picture for demonstration purposes in one of my tablet screenshots without permission. Google was rather kind and patient with me. In order not to test their patience too much, I removed the image in question and three other images as well, which I thought might fall under the same violative category.

I then resubmitted my application and voila! My app is now live on the Play Store. It was an adventurous lesson, because they could have blocked my entire developer account for life had I collected one or two more policy strikes, as has happened to many other developers.


17 November 2016

Crawl all the linkedin skills

One of the problems I solved recently: crawling all the LinkedIn skills. Without further ado, here is the source code.

Well, most of it is self-explanatory. I have also attached the complete skill list used by LinkedIn for anyone to download. Note: this is all freely available in the public domain. Also, the program is good as of the current date; I could write a dynamic one that updates automatically, but I was too lazy to do that :)

Code Explanation
Line 1: Require the anemone library, a web spider framework written in Ruby.
Line 3-6: Initialise the characters-to-pages map obtained from the URL. For example, take a look at this link: https://www.linkedin.com/directory/topics-o/ Here the character o has 99 sub-pages. Similarly, the character x has 73 sub-pages. I manually assigned these counts so the crawler knows how many pages to visit.
Line 8: The variable all_urls holds all the possible combinations from a to z, each character having at most 99 sub-pages. The variable skipped_urls catches the URLs that could not be crawled because LinkedIn detected scraping was going on; they are collected and printed for recrawling later.
Line 9: Map all the possible URLs mentioned above into the variable all_urls.
Line 11: Open a file called skills.txt in write mode and make it ready.
Line 12: Iterate over each of the URLs present in the all_urls variable.
Line 15-20: This is where the real crawling occurs. The XPath selector searches for class=column, collects all the skills on the given page, and writes them directly into the file.
Line 22: Capture the skipped_urls in case LinkedIn blocks the scraper (which is this program).
Line 27: Print the skipped_urls, with which we will have to rerun the program - I leave it to the reader to figure out how.
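For readers for whom the embedded gist does not load, here is a hedged reconstruction of the script the walkthrough above describes, using anemone's crawl API. The per-character page counts and the sub-page URL pattern are illustrative assumptions of mine, and line numbers will not match the original exactly:

```ruby
# Per-character sub-page counts (illustrative; the real script listed all 26).
PAGES_PER_CHAR = { 'o' => 99, 'x' => 73 }

# Every directory URL from a to z, one entry per sub-page.
def all_topic_urls
  ('a'..'z').flat_map do |char|
    (1..PAGES_PER_CHAR.fetch(char, 1)).map do |page|
      "https://www.linkedin.com/directory/topics-#{char}-#{page}/"
    end
  end
end

# Crawl each URL and write every skill found under a class=column node.
def crawl_skills(urls, outfile = 'skills.txt')
  require 'anemone' # web spider framework; required lazily so the URL
                    # helper above works even without the gem installed
  skipped_urls = []
  File.open(outfile, 'w') do |file|
    urls.each do |url|
      Anemone.crawl(url, depth_limit: 0) do |anemone|
        anemone.on_every_page do |page|
          skills = page.doc.xpath("//ul[@class='column']//a").map(&:text)
          skipped_urls << url if skills.empty? # likely blocked: retry later
          skills.each { |skill| file.puts(skill) }
        end
      end
    end
  end
  puts skipped_urls # rerun these once LinkedIn stops throttling
end
```

crawl_skills(all_topic_urls) kicks off the scrape; the printed skipped URLs are the ones to feed back in on a rerun.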


22 September 2016

State of Rails Releases

I am dealing with multiple Rails applications at once, some of which are funded right and some of which have run into maintenance mode. I wanted a pictorial representation of the state of each Rails version; unfortunately, I could not find one. So I spent half an hour decoding the official Rails releases page and came up with this.

If you observe closely, the Rails releases 3.0.x and 3.1.x are history (I don't even want to pull up data for releases before that). It is high time you plan to upgrade your stack to a minimum of Rails 4.2 before you lose all the goodness the Rails community has to offer. Yes, I know it is a herculean task for folks on 3.x, in which case I recommend a complete rewrite of your application, piece by piece!

Data Source: http://weblog.rubyonrails.org/releases/


26 July 2016

20 Tips for an Effective Code Review

It is a well-established fact that most bugs in the software development life cycle could be prevented literally right at the source (code). Since code review is almost an inevitable process in the Agile paradigm, keep in mind these 20 tips/guidelines (in no particular order) to become an effective reviewer of code. They are not restricted to any one language but apply to all. I've been reviewing code for many years, and one of my core successes lies in stressing these points across the team. This is also the only way to effectively nurture and scale teams across the organisation.

  1. Identify the right tool: Identifying the right tool is very important, because one should not be put off from reviewing just because the tool is not efficient enough. There are many open source tools out there. In most cases you may have to host them yourself, or you can opt for services that do the hosting for you. If you are an open source contributor, you would know how effective GitHub can be, which also happens to be my personal favorite.
  2. Pre-conditions/Checklist: Any patch or pull request that is submitted should meet a minimum set of pre-conditions, such as having a green build. A lot of review tools have hooks that can be configured to poll the SCM automatically and run the build, and CI tools like Jenkins and Travis support this with minimal to no configuration. Ensure that you use them! It will definitely save you the time and heartache of seeing broken stuff pushed to your trunk/master/production branch.
  3. Avoid Repeat Mistakes: As Gustavo Fring from Breaking Bad rightly says, "Never repeat the same mistake twice", it is crucial that repetitive patterns are broken. In the context of code review, this means developers should not receive the same review comment they received earlier. This ensures that with each iteration the quality of the patches improves: any new review comments are only ever new, and anything pointed out in the past is assumed to have been implemented in later patches. If this is not happening, it is up to the reviewer to identify where the leak is.
  4. Self Review: The person submitting the diff should first review it themselves. Many obvious things like debugger statements, extra/missing files, and ignorable files can be caught here. I would recommend even doing a full-fledged review of your own code as if it were someone else's. This culture also reduces the burden on the reviewer, letting them concentrate on the meat of the patch and not the obvious issues.
  5. Design Review: The reviewer should also be able to decode the design introductions/changes that the pull request makes and should be in a position to judge them and give appropriate feedback. This is very important.
  6. UI Review: Although software developers look at just the code and give feedback (because that is all they can see in a pull request or a diff), they often neglect how the end product will look in the browser or on the device the code was intended for. It is extremely difficult to guess how it will look, so I recommend everyone go the extra mile of checking how it actually renders and whether it matches the original functionality. This is going to take some extra time, but in my experience it has insane returns in terms of identifying and squashing obvious UI-related issues.
  7. Non-Logical Checklist: Code review does not only involve vetting the logical integrity but also some non-logical things like naming conventions, spacing/indentation, and object-oriented compliance checks. Ensure that such a checklist exists in the first place.
  8. Keeping a pulse on the industry: This is true not only in this context but for the overall wholeness of a programmer. You should be up to date on what is going on in the programming world, at least in the particular language you work with. Knowledge of things like critical security patches, feature additions, language enhancements, and performance improvements proves really powerful in assisting an effective review process.
  9. Encourage feedback: One does not always have to agree with what is said in the review. If there are contradictions, it is best they are addressed between the reviewer and the reviewee. I also encourage that all review comments are responded to. This process gives the entire team confidence that no review comment will go unanswered.
  10. Avoid Oral Reviews: When a patch or pull request gets created, especially in teams where developers are co-located or sitting next to each other, it is tempting to just walk through it and give all the feedback orally. This may be fine if the team is small (only 2) and they fully own the codebase. However, it has negative effects in terms of follow-up and broadcasting. By broadcasting I mean that a review comment could be applicable to the entire team, and an oral comment never reaches them.
  11. Learn from other reviews: Encourage team members not just to read and apply the reviews on their own code but also to read the other reviews within the team. I've heard this famous quote: 'An intelligent person learns from their own mistakes, but a genius learns from the mistakes of others'. Let's make everyone in the team a genius!
  12. Dual Reviews: Similar to doubly refined sugar or oil, the throughput and quality of the code review can improve with a second reviewer, if that is possible.
  13. Review the Reviewer: It is a bit over-zealous to expect anyone new to the team, relatively new to software development, or not previously involved in a review process to quickly catch up on all the nuances of code review. It helps if these guidelines are implemented gradually and mentoring/onboarding is part of the organisation's culture. In simple terms, there can be a reviewer who reviews whether the reviewer complies with all the best practices out there.
  14. Over Engineering: At times, a patch may contain work that looks like over-engineering. In such cases it is okay for the reviewer to voice that opinion.
  15. Enterprise Adherence: An enterprise will have standards across different horizontals in terms of what tools to use, what style guide to follow, and what frameworks are used across projects. It is up to the enterprise architect or a senior member of the team to proactively absorb all these facts and ensure that the entire review process adheres to them. This is crucial because each atomic commit may slowly introduce things that stray from what the enterprise wants. It may not look like a problem at all in the initial phase. However, should a consolidation happen across various projects, having multiple divergent stacks across the enterprise results in painful refactors and often leaves a huge amount of technical debt behind.
  16. Dependency Introduction: Be wary of the addition or removal of a library in the code. This again falls under adherence to standards across projects. Make sure that any introduction of a new library is well evaluated across the team and that it has enough support in both the near and the long run. I have seen a lot of libraries started by individual contributors go unmaintained for years. Ensure that there is a strong, active community behind it.
  17. Against the Right Branch: This may seem like something that does not belong here, but in my personal experience I have faced this issue multiple times, where a developer creates a pull request against a different (or the default) branch instead of the one it actually has to go into.
  18. Tech Debt Identification: During the course of the review, the reviewer may stumble upon an issue which involves a good amount of effort. In such cases, it is not advised to block it and hamper the delivery commitments. Instead, the right thing to do here is to add these things to a technical debt backlog where it could be groomed and picked up in future.
  19. Copy Paste excuses: "I did not do this - it was already there - I just copied/moved it" - yes, this is a very common statement developers make when their code is challenged. However, ensure that any code that is touched complies with the coding standards set by the team.
  20. Make Guidelines explicit: It is a very good practice for all developers onboarding to a new team to have an explicit set of guidelines (you could use this one) and to review it from time to time. This could be done across the organisation.

The above list may look overwhelming. However, if you have the knack and the right drive to implement some or all of these, the productivity of your engineering team will increase many-fold.


08 June 2016

Jenkins bump from 1.x to 2.x

Jenkins is one of those amazing pieces of open source software, especially after it forked from its predecessor Hudson. Amazing for multiple reasons - but the one that has really "amazed" me is the painless upgrades it provides. Trust me, I am a Rails developer and I know inside out what a nightmare an upgrade can be!

Recently I had to perform an upgrade for Jenkins and it was as simple as replacing a .war file and I was err.... DONE! All I did was stop the process, replace the war file, and start it again. Once I rebooted, it all just worked. They seriously think about making their installations run the latest software, which I personally consider a true value.

Anyway, the reason I am posting this is to help fellow developers who are running into an issue where their slave would not start up after the upgrade process.

You would see an error like the one below when you inspect the slave agent.

What this error means is that Jenkins could not invoke the slave because of an outdated Java running on the slave. All you have got to do is upgrade it and you should be good to go.
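On a Debian/Ubuntu slave, for instance, the check-and-upgrade amounts to something like the following sketch (the package name is illustrative - install whichever JRE version your Jenkins release requires):

```shell
# Check which Java the agent will launch with
java -version

# Upgrade to a newer JRE (Debian/Ubuntu; adjust the package for your distro)
sudo apt-get update
sudo apt-get install -y openjdk-8-jre-headless

java -version   # confirm the new version is picked up
```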

This is how the overall upgrade felt :)


31 May 2016

Responsive Email design with Rails

In recent times it is almost imperative that the emails we send out be responsive, with a large number of users preferring to read - or more likely skim through - emails on their smartphones. Finding the sweet spot that lets you develop fully responsive emails, and do it quickly and easily, is vital. There are a lot of factors that should be taken into account, both from a business perspective and from a developer standpoint. I am listing them here (in no particular order):

  1. Responsive design - works consistently across all devices, from mobile layouts to the most stringent Outlook email client.
  2. The UI should be consistent, with ways to freeze the headers and footers, and should follow a proper template system similar to Rails Action View layouts.
  3. Should be easily testable in development mode, with support for a plain-text view besides the HTML view.
  4. Avoid hardcoding styles in each and every HTML tag. Hardcoding styles in email has been the norm in the Rails community - and in other web frameworks - for a very long time.
  5. Should be easily testable in all types of email clients. Even a minor modification or tweak should be verifiable quickly instead of painfully sending emails again and again.
The above may seem like a short list, but believe me - to satisfy all these criteria I had to go through a lot of different phases with varied learning curves. To attack all the above problems, I would suggest the following tools and libraries to make our lives super simple.

  1. Zurb's "Foundation for Emails" (previously called Ink), which provides ready-made templates to kick-start from and later customise to our heart's content.
  2. Premailer-Rails - a wonderful Rails pre-processor that makes email design entirely stylesheet driven, as opposed to hardcoding styles directly in the tags. Not only does it remove the pain of hardcoded styles, it also generates the plain-text part automagically - with zero code required from the developer.
  3. Letter Opener - a classic tool by Ryan Bates to quickly preview emails in development mode.
  4. Litmus - if you are into responsive email design, you have no reason not to subscribe to Litmus, as it provides a comprehensive way to template, design and test your email in innumerable email clients.
That is it! Combine these tools and, with a slight learning curve, you can call yourself a fully responsive email designer.
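To give a sense of how little wiring this takes, here is a minimal, hypothetical setup (the gem names are real; the configuration shown is illustrative):

```ruby
# Gemfile
gem 'premailer-rails'  # inlines your stylesheet rules into the email HTML
gem 'nokogiri'         # HTML parser used by premailer-rails

group :development do
  gem 'letter_opener'  # preview emails in the browser instead of sending them
end

# config/environments/development.rb
Rails.application.configure do
  config.action_mailer.delivery_method = :letter_opener
  config.action_mailer.perform_deliveries = true
end
```

With this in place, premailer-rails hooks into Action Mailer delivery, inlines the CSS, and adds a plain-text part without any further code from you.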


22 January 2016

Streaming vs Synchronous Replication in Postgres

I recently faced a strange issue in Rails that seemingly called into question some basic relational database principles. It gave me an almost sleepless night until I was able to get to the root cause of the issue.

The problem

The problem was pretty straightforward. A Rake task generates an email, and the email mentions a count of documents in two places. Ideally the two counts are supposed to be the same - but for some reason they were different.

The pain point

The reason this particular problem was painful is that it had not occurred for a few years, and when it did, it occurred only intermittently. The problem with intermittent issues is that there is always some theory behind them. Here too there was one. These are the steps I had to perform to find the root cause.

The approach

I first looked into the Rake task's log file, which is written when my specific email job runs. Things looked fine there - meaning it completed in under 90 seconds as expected.
The next step was to look at the production logs. As expected, the logs had the 30 insert statements - check. They also had a read for the inserted rows - a typical count(*) query. This is where the problem surfaced. The count(*) should have returned 30, but instead it returned 4. Another count(*) somewhere further down in the code returned 30 as expected!

The above step revealed that the problem was not in the Rails layer but something to do with our production database setup, so I routed my energy there.
The production database environment is a master-slave configuration, with the master taking writes and reads and the slave purely configured to take reads. Both nodes are load balanced via a PGPool server. My initial gut feeling was to spend some time investigating PGPool, but that was not much use, since all PGPool does is route traffic.
So I went and read about the master-slave replication configuration. I read about two types of replication: synchronous replication and streaming replication. Digging into that, I found my root cause!

Synchronous vs Streaming Replication

Assume you have two databases, A and B, with A being a read/write master and B a read-only slave. If an insert or update command is issued, the entry is written to A, as it is configured for writes. If A returns only after ensuring that all the slaves have received this write, it is called synchronous (or 2-safe) replication. If A does not wait for this step but acknowledges as soon as the local write succeeds, streaming the value to B later, it is called streaming replication.

Both have their own obvious pros and cons. Streaming replication is for raw speed and is also a very good configuration where there are many writes. Synchronous replication, although not as fast as streaming, provides 100% read consistency. We, unfortunately, were in streaming replication mode. The 30 inserts happened so fast on A that, before it could stream them to B, the count query intervened and read the half-baked data from B. We are talking millisecond timescales here.
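For reference, this behaviour is governed by a couple of settings in postgresql.conf on the master - a hedged sketch (the standby name below is made up):

```conf
# Streaming (asynchronous) replication - commit returns once the WAL is
# written locally; standbys receive the changes later:
synchronous_standby_names = ''

# Synchronous replication - commit waits until the named standby confirms:
# synchronous_standby_names = 'standby_b'
# synchronous_commit = on
```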

How did we fix it?

We isolated all our cron jobs to run on a dedicated node and pointed its database connection directly at the master server, skipping PGPool in the process. In a single-database configuration the concept of streaming or synchronous replication does not apply. Hope this was helpful!
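In Rails terms, the fix boiled down to a database.yml on the cron node that talks to the master directly - a hypothetical sketch (the hostnames and database name are made up):

```yaml
# config/database.yml on the dedicated cron node
production:
  adapter: postgresql
  database: myapp_production
  host: master-db.internal   # the master itself, bypassing PGPool
  # host: pgpool.internal    # what the regular app servers keep using
```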


07 January 2016

Quick way to import a jar in Eclipse

Following is a quick demonstration to import .jar files via Eclipse IDE

See Also: TRIE ADT Tutorial

You will need to understand why you are doing this. Any Java program, for it to execute, first needs to compile without errors. The compilation process (using "javac") converts your source code into a bytecode (.class) file, which you can then run using the "java" executable. An IDE like Eclipse does the compilation automatically in the background. That is why, when you have any sort of error in the program (syntax, logical, etc.), you see it highlighted immediately.

So, when you are referencing an external library, the javac executable needs to know which library you are referencing. On the command line you can do this by setting the CLASSPATH variable before compiling. Eclipse makes it easier: instead of specifying the library file's location through the command line, you can do it through the editor by right-clicking the project and selecting Build Path --> Configure Build Path. With the Libraries tab selected, click "Add External Archives" and select your .jar files. Your program should now compile smoothly without errors.

Using an IDE brings insane efficiency to design and development. However, always have an understanding of what is going on behind the scenes. Hope you learned something useful today!


31 December 2015

Hotel Automation Controller - Interview coding problem

This is one of the problems a friend of mine got when he recently interviewed with a company called Sahaj Software, a company modelled on ThoughtWorks.

Problem Statement:

Hotel Automation Controller Problem Statement

A very prestigious chain of Hotels is facing a problem managing their electronic equipments. Their equipments, like lights, ACs, etc are currently controlled manually, by the hotel staff, using switches. They want to optimise the usage of Power and also ensure that there is no inconvenience caused to the guests and staff.

So the Hotel Management has installed sensors, like Motion Sensors, etc at appropriate places and have approached you to program a Controller which takes inputs from these sensors and controls the various equipments.

The way the hotel equipments are organised and the requirements for the Controller is below:
  • A Hotel can have multiple floors
  • Each floor can have multiple main corridors and sub corridors
  • Both main corridor and sub corridor have one light each
  • Both main and sub corridor lights consume 5 units of power when ON
  • Both main and sub corridor have independently controllable ACs
  • Both main and sub corridor ACs consume 10 units of power when ON
  • All the lights in all the main corridors need to be switched ON between 6PM to 6AM, which is the Night time slot
  • When a motion is detected in one of the sub corridors the corresponding lights need to be switched ON between 6PM to 6AM (Night time slot)
  • When there is no motion for more than a minute the sub corridor lights should be switched OFF
  • The total power consumption of all the ACs and lights combined should not exceed (Number of main corridors * 15) + (Number of sub corridors * 10) units per floor. Sub corridor ACs could be switched OFF to ensure that the power consumption is not more than the specified maximum value
  • When the power consumption goes below the specified maximum value the ACs that were switched OFF previously must be switched ON

Motion in sub corridors is input to the controller. The controller needs to keep track of and optimise the power consumption.

Write a program that takes input values for Floors, Main corridors, Sub corridors and takes different external inputs for motion in sub corridors and for each input prints out the state of all the lights and ACs in the hotel. For simplicity, assume that the controller is operating at the night time. Sample input and output below.

Initial input to the controller:
Number of floors: 2
Main corridors per floor: 1
Sub corridors per floor: 2

Since the hotel management is trying this for the first time, they would be changing the requirements around which electronic equipments are controlled and the criteria based on which they are controlled, so the solution design should be flexible enough to absorb these requirement changes without significant change to the system.

The solution to this problem involves approaching it in an object-oriented manner. We should also use a Command/Strategy pattern here, given that the behaviour could change based on external factors. I have not included the time-slot handling from the problem, but from here it should be easily extensible.

Code below:


21 November 2015

Cloning remote PG database and loading in Local environment

For projects involving small to medium sized databases, one may need to copy the remote (or production) database onto the local environment. I was earlier doing this for my production application with a custom pg_dump and then restoring with pg_restore. It was relatively straightforward but still consumed a good amount of time. I wanted to automate this using Capistrano, and this is how I did it.
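The task itself is not reproduced here, but a minimal Capistrano 3 sketch of the idea might look like this (the database names, role, and paths are my assumptions):

```ruby
# lib/capistrano/tasks/db.rake
namespace :db do
  desc 'Dump the remote database on the server and restore it locally'
  task :pull do
    on roles(:db) do
      # Dump in the compressed custom format on the VPS itself
      execute :pg_dump, '-Fc myapp_production -f /tmp/myapp.dump'
      # Fetch the single dump file over scp
      download! '/tmp/myapp.dump', 'tmp/myapp.dump'
      execute :rm, '/tmp/myapp.dump'
    end
    run_locally do
      execute :pg_restore, '--clean --no-owner -d myapp_development tmp/myapp.dump'
    end
  end
end
```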

You should note that this is extremely fast because the dump runs on the VPS - usually EC2, which has amazing network speeds - and is then copied over scp as a single file. You can also get compression by using the --format option of pg_dump.

Hope this was helpful!


Dealing with Intermittent Build failures due to Memory - Jenkins + EC2

It is not uncommon to see a Jenkins build fail due to a memory choke now and then while running thousands and thousands of RSpec examples. The examples may be too much for the memory allocated to the EC2 instance. One simple solution is to enable swap. Going the typical EC2 route, you would need a dedicated swap partition. However, if you don't want to go down that route, you can simply do it via a swap file.

Make sure you have root access and follow the instructions below to enable swap usage. Typically you could go with one to two times the allocated RAM, but that is not a hard rule. Here is what I did in this scenario.
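The commands themselves are not shown above, so here is a hedged sketch of the usual swap-file steps (the 2 GB size and the /swapfile path are arbitrary choices):

```shell
sudo fallocate -l 2G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile         # swap files must not be world-readable
sudo mkswap /swapfile
sudo swapon /swapfile
```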

Done! You can check whether the swap is enabled by typing the free command.

Note: this will only persist while the machine is running. If you reboot, it will go away. If you want the swap to persist after a restart, add an entry for it to /etc/fstab.
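Something along these lines (the /swapfile path is an assumption matching the typical swap-file setup):

```shell
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```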

Hope you find this article helpful. This is a trimmed down version of a wonderful article from DigitalOcean.