Kubernetes affords developers the ability to think in container-centric terms, untethered from the responsibilities of virtual machines.
As a full-stack app developer, you have an endless number of things to know. If you work on the front end, you have to know how to make an app functional, presentable, and most importantly, usable for the end user. If you work on the back end, you have to know how to prepare, structure, and connect the data models to the front end.
But in the development of one app from blinking cursor to MVP, an often overlooked challenge is the inevitable problem of scalability. How will my app handle potentially hundreds of thousands of client connections? How do I spin up my microservice architecture in a cost-effective way? And what the heck is a load balancer? Enter the weird and wonderful world of Development Operations & Cloud Infrastructure.
At Leverege, we grapple with the scourge of scalability every day due to the nature (and also promise) of the IoT industry. And in this article, I'm going to supply a little context around the current standards in cloud deployment models (e.g., virtualization and containerization), and explain why we've settled on Kubernetes (specifically, Google Kubernetes Engine) as our particular weapon of choice against the challenges of scalability.
So, consider this. You’ve built your awesome application. Your UI brings tears (of joy) to your end-users, and everything is fast and coherent on the backend. The one last thing you need to do is publish your app to the world somehow. What are your options?
Well, before considering any kind of cloud options, you could buy yourself a few Raspberry Pis, connect them to the internet, install and configure your app on each Pi, and expose a port on each Pi through which anyone in the world can ping your machine. Easy….
Ok, so maybe it's not so easy. It's also something you're not that into. You just signed up for building your app, not all this setup, maintenance, and so on. Not to mention the fact that it's horribly inefficient. You did some analytics on your machines' performance, and you discovered that each Pi uses only 10% of its total compute resources serving your app to the world. That's a lot of wasted potential.
But you’re clever. What if you abstracted away from the physical hardware, and created “virtual machines” on top of each of your host pi’s? This abstraction allows you to install your application (or any other applications you want) on n number virtual machines on just one raspberry pi! Amazing!
…But there are still problems. You realize that while the Virtual Machines allow you to use each physical server to its full potential, when you make a change to one crucial package or library that two applications, processes, or services on the same Virtual Machine depend on, both applications crash. These kinds of application entanglement errors happen because the two applications have access to the same Operating System on the same VM.
“Fine,” you say, “I’ll just abstract away from the Operating System on my VM. I’ll put my application in a container with its own filesystem, and I’ll prevent it from being able to see any other processes happening on my VM. There. I’m never getting bit by the application entanglement bug again!” Thus, the Linux container was born.
So, let's take stock real quick. At this point, we have our application running in the relative safety of a container. The container is exposed on a particular port of our Virtual Machine, which lives on a physical computer (in this example, our Raspberry Pi server). In other words, our structure looks like this: App > Container > VM > Hardware.
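To make that stack concrete, here's a minimal sketch of what containerizing and exposing the app might look like with Docker. The image name, tag, and ports are hypothetical placeholders, not details from a real deployment.

```
# Package the app into a container image (assumes a Dockerfile in the current directory)
docker build -t my-app:1.0 .

# Run the container with its own filesystem and process space,
# publishing container port 8080 on port 80 of the host VM
docker run -d --name my-app -p 80:8080 my-app:1.0
```

Inside the container, the app sees only its own filesystem and processes; the VM simply forwards traffic arriving on the published port.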
And for the most part, this structure works for many kinds of application architectures.
Most of us have neither the time nor the resources to build out our own server farm. Physical servers require a lot of time to configure properly, and once configured, they require frequent maintenance. That's where cloud platforms like GCP, Azure, AWS, and OpenStack come in. At the most basic level, they offer Virtual Machines on which customers can deploy their applications.
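As a rough illustration, provisioning a VM on GCP is a single command; the instance name, machine type, and zone below are arbitrary example values.

```
# Create a small Compute Engine VM (names and sizes are placeholders)
gcloud compute instances create my-app-vm \
  --machine-type=e2-small \
  --zone=us-central1-a

# SSH in to install and run the containerized app
gcloud compute ssh my-app-vm --zone=us-central1-a
```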
But even containers running on VMs in the cloud are still hard to manage. Increasingly, full applications are separated into logical pieces that function together as one service. As a result, system administrators are more concerned with the deployment and management of groups of containers across a number of VMs, as opposed to just a single container on a VM.
It is at this level that Kubernetes and related container orchestration platforms do their work. By shifting our thinking away from managing host machines (physical or virtual), Kubernetes affords developers the ability to think in container-centric terms, untethered from the requirements and responsibilities of virtual machines.
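To give a flavor of that container-centric workflow, here is a hedged sketch using kubectl. The deployment name, image path, and replica count are illustrative assumptions, not anything prescribed by Kubernetes itself.

```
# Declare the desired state: a deployment running the app's container image
kubectl create deployment my-app --image=gcr.io/my-project/my-app:1.0

# Run three copies of the container; Kubernetes picks the VMs (nodes) for you
kubectl scale deployment my-app --replicas=3

# Put the group of containers behind a single load-balanced endpoint
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
```

Notice that nothing here names a specific machine: you describe the containers you want, and the scheduler decides where they run and restarts them if they fail.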
Additionally, Google Kubernetes Engine (Google's managed Kubernetes offering on GCP) gives developers all the benefits of a container-centric cloud environment, as well as native integration with other, massively scalable GCP products, like BigQuery and Cloud SQL.
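Standing up a GKE cluster is similarly terse; the cluster name, node count, and zone here are placeholder values.

```
# Provision a managed Kubernetes cluster on GCP
gcloud container clusters create my-cluster \
  --num-nodes=3 \
  --zone=us-central1-a

# Fetch credentials so kubectl talks to the new cluster
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
```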
Picture thousands of devices talking to servers, servers to databases (as well as other servers), and databases to mobile and web applications. With all the different channels of communication in just one IoT app, scaling the cloud services appropriately (based on number of requests, time of day, available compute resources, etc.) becomes a significant challenge. With Kubernetes, building durable IoT systems on any cloud platform is a much more feasible endeavor.
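As one concrete example of that kind of scaling, Kubernetes can grow or shrink a deployment based on observed CPU load. The thresholds below are hypothetical values for the same placeholder deployment sketched above.

```
# Autoscale between 3 and 10 replicas, targeting roughly 70% CPU utilization per pod
kubectl autoscale deployment my-app --min=3 --max=10 --cpu-percent=70
```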