Kubernetes is an open source system for managing applications in a containerized environment. Kubernetes automates the manual processes involved in deploying and scaling containerized applications. It can also manage clusters of containerized applications, which can span public, private, and hybrid clouds.
The name Kubernetes comes from the Greek word for “helmsman,” the one who pilots a ship, which extends the container ship analogy. Kubernetes is also called “kube” or “k8s,” a numeronym formed from the first letter, the last letter, and the number of letters in between them.
History of Kubernetes
Word etymologies aside, Kubernetes was originally created by Google, growing out of an internal project called Borg, with containers powering Google Cloud technology. In fact, Google claims fifteen years of experience with containers and to “run billions of containers a week,” experience that went into this software. Kubernetes was then donated to the Linux Foundation as a seed technology to form the Cloud Native Computing Foundation (CNCF) in 2015.
While Kubernetes is an open source project, it is officially supported by both Microsoft Azure and Google Cloud. Kubernetes gained initial acceptance among early adopters, which translated into steady growth, and it now occupies a prominent position in the container management software space.
These days, it is commonplace for a real production app to run in multiple containers spread across multiple servers. Kubernetes deploys these containers, scales them across servers to match the workload, and schedules them across a cluster. It can also help manage the health of these containers.
Kubernetes gets deployed across a group of machines, which is termed a cluster. One machine in the cluster is designated the cluster master, which runs the Kubernetes control plane processes and functions as a unified endpoint for the cluster. The other machines are assigned as nodes: the workers that fall under the control of the cluster master.
The cluster master has total control of its nodes and manages their lifecycle, including assessing their health and controlling upgrades and repairs to each node. The cluster can also run special containers designated as per-node agents with specific functions, for example log collection or intra-cluster network connectivity.
By default, a node has one virtual CPU and 3.75GB of RAM, the standard Compute Engine machine type. For more compute-intensive tasks, a higher minimum CPU platform can be chosen. Realize that not all of a node’s resources can be brought to bear on the application it is designed to run; some are needed to run Kubernetes Engine itself. The node’s allocatable resources, which can be used to run the application, are the difference between its total resources and the amount reserved for Kubernetes Engine.
By way of example, Kubernetes Engine reserves 25% of the first 4GB of a node’s memory and 20% of the next 4GB. A node with 4GB of RAM thus gives up 1GB and leaves 3GB for the application, while a node with 8GB of RAM gives up a further 0.8GB from its second 4GB. Kubernetes Engine is less hungry for CPU resources, reserving only 6% of the processing power of the node’s first core and only 1% of a second core designated to the node.
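The reservation arithmetic above can be sketched in a few lines of code. This is a minimal sketch assuming only the tiers stated here (25% of the first 4GB of memory, 20% of the next 4GB, 6% of the first core, 1% of the second); larger nodes use additional reservation tiers not covered in this example, and the helper names are ours, not a Kubernetes API.

```python
# Sketch of the node reservation arithmetic described above.
# Only the tiers stated in the text are modeled; bigger nodes
# have further tiers that this simplified sketch omits.

def allocatable_memory_gb(total_gb: float) -> float:
    """Memory left for applications after the Kubernetes Engine reservation."""
    reserved = 0.25 * min(total_gb, 4)           # 25% of the first 4GB
    if total_gb > 4:
        reserved += 0.20 * min(total_gb - 4, 4)  # 20% of the next 4GB
    return total_gb - reserved

def allocatable_cpu_cores(total_cores: int) -> float:
    """Cores left after reserving 6% of the first core and 1% of the second."""
    reserved = 0.06 * min(total_cores, 1)
    if total_cores > 1:
        reserved += 0.01
    return total_cores - reserved

print(allocatable_memory_gb(4))   # 1GB reserved on a 4GB node, ~3GB usable
print(allocatable_memory_gb(8))   # 1.8GB reserved on an 8GB node, ~6.2GB usable
print(allocatable_cpu_cores(2))   # ~1.93 cores usable on a 2-core node
```

As the numbers show, the reservation overhead shrinks proportionally as nodes get larger, which is one reason fewer, bigger nodes can leave more room for the application.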
The cluster master runs the Kubernetes API server, which handles requests originating from Kubernetes API calls. The API server functions as the communication hub for the entire container cluster.
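Every interaction with the cluster, whether from the kubectl command-line tool or from custom tooling, reaches the API server as an authenticated REST call. As a minimal sketch, the following builds (but deliberately does not send) a request to list the cluster’s nodes; the endpoint address and token are hypothetical placeholders, since real values come from your cluster credentials.

```python
import urllib.request

# Hypothetical cluster endpoint and bearer token (placeholders only).
API_SERVER = "https://203.0.113.10:6443"
TOKEN = "<service-account-token>"

# Construct a GET request for the node list via the core v1 API group.
# In a real cluster this would be sent with urllib.request.urlopen(req),
# typically with TLS verification against the cluster's CA certificate.
req = urllib.request.Request(
    url=f"{API_SERVER}/api/v1/nodes",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/json",
    },
    method="GET",
)

print(req.full_url)      # https://203.0.113.10:6443/api/v1/nodes
print(req.get_method())  # GET
```

Higher-level tools such as kubectl or the official client libraries wrap exactly this kind of call, which is what makes the API server the single front door to the cluster.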
Contributing to Kubernetes popularity is its robust feature set. These include:
- Automatic bin packing: Kubernetes automatically places containers based on the most efficient use of resources.
- Horizontal scaling: Applications can be scaled up or down via a simple command, or automated to match CPU usage.
- Automated rollouts and rollbacks: Kubernetes rolls out updates to applications in stages rather than all at once, monitors for health issues, and, if any are found, automatically rolls back to a more stable version to preserve uptime.
- Storage orchestration: It works with a variety of storage solutions for additional flexibility, from local to public cloud.
- Self-healing: The ability to kill and replace containers that freeze or fail their health checks, and to restart containers that fail.
- Service discovery and load balancing: Kubernetes can give containers their own IP addresses and a single DNS name for a set of containers, and can distribute the load between them.
- Secret and configuration management: Applications can be updated without an image rebuild.
- Batch execution: Management for batch and continuous integration workloads.
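The horizontal-scaling feature above has a simple core: the Kubernetes Horizontal Pod Autoscaler’s documented decision rule is desiredReplicas = ceil(currentReplicas × currentMetricValue ÷ targetMetricValue). A minimal sketch of that calculation follows; the helper function name and the sample CPU figures are ours, for illustration only.

```python
from math import ceil

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Core Horizontal Pod Autoscaler rule:
    ceil(currentReplicas * currentMetricValue / targetMetricValue)."""
    return ceil(current_replicas * current_metric / target_metric)

# 4 replicas averaging 90% CPU against a 60% target -> scale out to 6.
print(desired_replicas(4, 90, 60))   # 6
# Later, 6 replicas averaging 20% against the same target -> scale in to 2.
print(desired_replicas(6, 20, 60))   # 2
```

The real autoscaler layers tolerances and cooldown windows on top of this rule to avoid flapping, but the proportional calculation is the heart of “automated to match CPU usage.”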
Real world applications of Kubernetes
Kubernetes gets used by top corporations, including Comcast, eBay, Goldman Sachs, The New York Times, and Pokemon Go, among many others. One example is video provider Sling TV, which after its launch in 2015 experienced issues as new subscribers outstripped its existing resources while it attempted to distribute live TV through the internet. To improve the customer experience, and with a desire for increased flexibility now and going forward, Sling TV decided to shift to a hybrid cloud strategy: an on-premises VMware multi-data-center environment, integrated with multiple public clouds and controlled through Kubernetes Engine.
According to Brad Linder, Sling TV’s Cloud Native & Big Data Evangelist, “We are getting to the place of where we can one-click deploy an entire data center – the compute network, logging and monitoring all the apps.” He goes on to point out that deploying a new app previously took days, which can now be accomplished in about an hour via Kubernetes Engine.
While Kubernetes has humble origins as an internal project at Google, it has evolved into a dominant player in container management software, no doubt fostered by its open source approach. The power and flexibility of Kubernetes Engine explain why it gets used across so many diverse industries.