ThinkingCog

Articles written by Parakh Singhal

Polly - An Introduction

Everything is getting smart. Not everyone, but everything.

My lamp, dishwasher, washing machine, car, and everything in between now connect to the internet and send notifications about important events, from the completion of their duties to the depletion of the various consumables required for their functioning.

With all these devices connecting to a distributed and redundant network like the internet, which in turn hosts distributed systems that span heterogeneous hardware, multiple software stacks and geographically separated data centers, something, somewhere, is bound to fail, even if only for a fraction of a second. The device may lose its Wi-Fi signal momentarily, the router may experience a hiccup due to a power brownout, the optic network cable may experience the wrath of an excavator bucket, or the server may die in the line of duty before the redundant one kicks in. All of these are very real possibilities, and we experience them all the time.

Such ephemeral issues are known as transient errors in the programming domain. They are there momentarily and then they are not, and because of their non-persistent nature they are not easy to debug, as reproducing them may not always be feasible.
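To make the idea concrete, here is a minimal, hypothetical C# sketch of the kind of transient fault described above: an HTTP call that fails momentarily and succeeds when simply tried again. The URL parameter, retry limit and delay are illustrative choices, not part of any particular application.

```csharp
// Hypothetical illustration of handling a transient fault by hand:
// an HTTP call that may fail momentarily and succeed on a later attempt.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TransientFaultDemo
{
    static readonly HttpClient client = new HttpClient();

    static async Task<string> GetWithNaiveRetryAsync(string url)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await client.GetStringAsync(url);
            }
            catch (HttpRequestException) when (attempt < 3)
            {
                // The call failed, perhaps due to a dropped Wi-Fi signal or a
                // brief server hiccup; wait a second and try again.
                await Task.Delay(TimeSpan.FromSeconds(1));
            }
        }
    }
}
```

Hand-rolled loops like this quickly become repetitive and error-prone once you also need back-off, circuit breaking or timeouts, which is where a dedicated framework helps.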

One of the proven ways to increase dependability is to increase the availability of the desired product or service. Web services and applications run on a 24-hour schedule on servers that continuously consume power. One way to increase their availability is to increase redundancy, i.e., have the same service or application be available on multiple servers. The servers may all be in the same data center or geographically spread. The geographical spread helps avert a situation where a single data center experiences a power outage, network outage or natural calamity and takes down the entire service or application.

But increased availability always comes with costs. Every server, virtual machine or container costs money to run, and the more of them exist, the more man-hours go into their upkeep and maintenance.

In order to reduce costs without introducing more redundancy than is warranted, we incorporate smarts into the software so that the application keeps performing the desired actions without throwing errors, creating an illusion of high availability. Such resilience can be introduced into an application with the help of programming frameworks. One such framework, and at the moment the only one available to .NET developers, happens to be Polly.

With Polly, we can easily incorporate resiliency patterns such as retry, circuit breaker, timeout and bulkhead isolation into our applications and services. Each pattern deserves a post unto itself, and in the future I will bring more information on the patterns along with code examples. A small taste of the retry pattern follows below.
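Here is a minimal sketch of the retry pattern expressed with Polly. It assumes the Polly NuGet package is referenced; the retry count, back-off durations and the idea of wrapping an HTTP call are illustrative choices, not prescriptions.

```csharp
// A minimal sketch of a Polly retry policy with exponential back-off,
// assuming the Polly NuGet package is installed.
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

class PollyRetryDemo
{
    static readonly HttpClient client = new HttpClient();

    static async Task<string> GetWithRetryAsync(string url)
    {
        // Retry up to three times on a failed HTTP call, waiting 1, 2 and 4
        // seconds between attempts.
        var retryPolicy = Policy
            .Handle<HttpRequestException>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));

        return await retryPolicy.ExecuteAsync(() => client.GetStringAsync(url));
    }
}
```

Circuit breaker, timeout and bulkhead policies are declared in a similar declarative style and, if memory serves, can be combined into a single pipeline with Policy.WrapAsync, but each of those deserves its own post.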

Till then, please enjoy a video in which the late Mr. Scott Allen discusses building resilient applications in the cloud, and look for opportunities to apply a resilience framework like Polly.

Cloud computing and what it means for internet pipes

Key takeaway:

Technology companies, in order to promote their cloud platforms and integrate into our daily lives, might have to take ownership of the infrastructure supporting the internet, and in the process might also leverage this endeavor to become service providers themselves.

Read on:

In the not so distant future, we will be travelling in self-driving cars that adjust the cabin climate according to the ambient climate and the habits of the user, play music from your home server or read news and business reports using a digital subscription, and, after dropping you off at the office, go and pick up your groceries based on the list sent to your car by your refrigerator. Once you reach home, you will be greeted with an ambience matching how your schedule went during the day. Your home will automatically remind you about your favorite TV serials scheduled for the day and record them for later playback, in case you are unable to watch them when scheduled.

 

Microsoft’s vision of productivity in future

A Day Made of Glass... Made possible by Corning.

You see, a lot of what I mentioned is already happening around us in one form or the other; it is just not happening in one cohesive form, one mass that can act seamlessly. But that will change in the future. A lot of what I mentioned in the opening paragraph requires the following things:

1. Data about you – lifestyle, requirements, frequency and kind of purchases, etc.

2. Appliances that operate on standard protocols

3. Bandwidth

The big data and analytics movement aims to solve the first requirement, and if you look at the concept of shopping recommendations, it does solve it to a certain extent; with time and data it will get better. The second requirement, appliances that operate on standard protocols, is also being worked upon. The third requirement, bandwidth, is the focus of this post and is the most interesting to me at this moment. Let me present my perspective on where things are and where they are heading.

It has been a long time since the term “Cloud Computing” was first coined, and it materialized in the form of Amazon Web Services in 2002. Cloud computing has since evolved into various kinds of services, the gist of all being that the consumer of the service is not required to engage in managing IT infrastructure. The service can be in the form of an operating system offered as a service (Windows Azure), which can then become a foundation to run programs written in the various programming languages supported, or it can be a simple Customer Relationship Management (CRM) system (Salesforce). All of these are offered as services managed by their parent companies, freeing the service consumer to focus on what they do best and not invest capital and human resources into managing infrastructure.

Now if we take a step back and look at how the principles of economics governed the spread of general computing, we will find that general computing first targeted the enterprise space to make businesses efficient and save costs, and then came down to the general masses. Similarly, cloud computing is currently targeting the enterprise space heavily, enticing it with the convenience and upfront capital cost savings the concept brings with it, but it has also started to crawl into the consumer space. Look at Dropbox, Google Drive and Microsoft SkyDrive; they are not enterprise storage solutions, but rather consumer-facing cloud storage solutions. In the entertainment space, look at iTunes, Amazon AutoRip, Google Play and Netflix; they are not enterprise solutions at all, but rather a flavor of cloud services geared towards consumers.

An interesting side effect of cloud computing is that it can be extended to individual consumers and be used to gather data about their purchasing habits. Applications and services like iTunes, Google Play, Amazon and Netflix take into account your choices and, based on your past preferences, suggest potential songs, movies, books, goods, services, etc.

Since these services rely on the internet as the medium of conveyance, the bandwidth available between the offering and consuming entities can become an issue. This becomes a bigger concern if the service provider hosts a public-facing service that is heavy on bandwidth, such as video, for example Netflix.

In such situations, the growth of the service provider depends upon meeting the demands of the service consumer and ensuring that reliable and redundant pipes are available. Given this, it would bode well if providers had a say in the upkeep of these internet conduits. A more desirable situation from the service provider’s perspective, if they want to grow their cloud platform, would be to own this essential piece of infrastructure.

As far as I can see and evaluate, this is already happening, albeit at a slow pace. Companies like Google, I believe, already have a strategy and have started acting on it in their primary markets. We can see that in the form of high-speed fiber connectivity in Kansas City, Missouri. Recently, Google started offering free Wi-Fi in limited areas in New York and in the City of Mountain View. Fast connectivity means more viewing of high-definition video, more usage of cloud storage and more video conferencing, leading to an alternate source of income for cloud computing provider(s) and ultimately an end-to-end solution, i.e., collection of data about habits, leading to predictive analytics, leading to an automated lifestyle. So it starts with the ownership of internet connectivity in city areas, might eventually end up with transcontinental internet pipes, and in the process turns the technology company into a service provider.

This gels well with the philosophy and selling points of cloud computing: redundant data backups at geographically dispersed locations, and content distribution networks that serve content from the data center located nearest to the consumer. One more pointer in this direction is Google’s purchase of Motorola Mobility, which gives it access to a cache of intellectual property in the form of telecom patents.

One of the primary reasons for this trend is that, if you look at history, it becomes apparent that the telecom companies have not done much to push the limits of bandwidth on a proactive basis. They just offered consumers whatever lowest common denominator they could come up with that proved profitable to them. Before the advent of cloud computing, the tech companies had no stake in the matter either. But all that is changing. More mobile devices, the proliferation of video content, video communication, in-app purchases, cloud storage, etc. require more and more bandwidth. That is too much responsibility to be left with Ma Bell, especially when they are not getting a cut of the pie. Self-driving cars, home automation, intelligent thermostats, refrigerators and the like, all eventually leading to an internet of things, will require a whole lot of bandwidth and redundancy. Retina screens, HD TVs and 4K TVs need loads of bandwidth in order to render the depth and richness they have been designed for. Thus, the future depends on how efficiently we carry data to and fro between the serving and consuming points, and that requires discarding copper conduits and embracing optical and high-speed wireless technologies with new standard protocols that work efficiently, which is precisely what pioneering companies like Google are quietly working on.

See more:

1. The future according to Google's Larry Page

2. The Internet of Things

3. Eight business technology trends to watch