How to avoid vendor lock-in with Kubernetes

It's the architecture that matters, not the infrastructure.

Kubernetes for absolute beginners

A while ago we asked one of our juniors to write his opinion on a technology called Kubernetes: “Kubernetes by n00bs”. He can rightly be called an absolute n00b, because he came from an embedded software background and got thrown into the whole Kubernetes stack out of the blue.

Our junior called out two major advantages:

  1. Scalability: “When there are too many requests, Kubernetes will assign more containers doing the same work to handle the increased load”.
  2. Platform independence: “The same configuration can work on any cloud platform. We just tell Kubernetes what we want happening in the clouds. How it talks to different environments is none of our business.”

But this sparked some critique in our own ToThePoint headquarters.

Platform independence causes Kubernetes vendor lock-in

Some pointed out: “Yeah right, you are platform independent by using Kubernetes. But aren’t you vendor-locking yourself into Kubernetes itself, in some evil ironic way?” You could indeed be tightly coupled to Kubernetes: a Kubernetes lock-in, so to speak.
Our Kubernetes expert Johan Siebens replied: a Kubernetes vendor lock-in can easily be avoided by keeping a few simple guidelines in mind. In the end, it’s the architecture that matters, not the infrastructure. In what follows, we’ll explain what that means.

Kubernetes guarantees platform independence

Kubernetes provides platform independence thanks to its open source roots

Kubernetes is an open source project. It was initially backed by Google, but in the meantime it has become a universal standard that is supported by all the major cloud providers and offered as a managed service (Google Kubernetes Engine, Amazon Elastic Container Service for Kubernetes, DigitalOcean Kubernetes, Azure Kubernetes Service, …).
Just check it on GitHub: it is one of the largest projects there, with an enormous number of contributors and backing companies.

Kubernetes is an open source system for managing containerized applications across multiple hosts; providing basic mechanisms for deployment, maintenance, and scaling of applications. Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community. Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who’s involved and how Kubernetes plays a role, read the CNCF announcement.

https://github.com/kubernetes/kubernetes

The open source roots of Kubernetes make it inherently platform independent: because the software is open source, a multitude of cloud providers (or anyone else) can provide a Kubernetes implementation or build a managed service of their own on top of it.

In practice, platform independence means that when you build your application and provide it with the necessary manifests (the well-known YAML files), and those manifests are conformant, you can in theory apply them to a Kubernetes cluster from any vendor of your choosing (Google GKE, Azure AKS, DigitalOcean, AWS, …).

There are many concepts within Kubernetes for which each provider uses its own implementation, but the result is always the same: you’ll have beautiful YAML files that you can apply anywhere you choose, with the same outcome. Through their own implementations, providers hide the complexity of their own infrastructure.

Case in point: you can expose an application deployed on a Kubernetes cluster by creating a Kubernetes Service of type “LoadBalancer”. When you apply this on GKE, Google Cloud will internally set up a network load balancer with the appropriate forwarding rules and target pools, and in the end you’ll get a public address to access your application. The same goes for AWS, where an ELB (Elastic Load Balancer) will be created. The result is the same: you’ll be assigned a public address and your service will be externally available.
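As a sketch of what such a manifest looks like (the service name, labels and ports are hypothetical), note that the only platform-facing hint is a single field; the provider-specific load balancer is hidden behind it:

```yaml
# Exposes an application publicly; the cloud provider decides
# how (network load balancer on GKE, ELB on AWS, ...).
apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical name
spec:
  type: LoadBalancer    # the only platform-facing hint needed
  selector:
    app: my-app         # matches the pods of your deployment
  ports:
    - port: 80          # public port
      targetPort: 8080  # port your container listens on
```

The same file applies unchanged on GKE, AKS or EKS; only the machinery behind the assigned public address differs.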

Another example is persistent volumes. You apply a persistent volume claim to your Kubernetes cluster, and it’s your provider that decides which volume it can offer you. In our current case: Google uses a persistent disk and Amazon uses an Elastic Block Store (EBS) volume. Either way, your application is assigned a volume and off you go. In this way you remain platform independent, which could be interesting in your particular case.
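A hedged sketch of such a claim (the name and size are hypothetical); notice that nothing in it mentions persistent disks or EBS:

```yaml
# Requests storage without naming a vendor; the cluster's
# default StorageClass maps it to a concrete volume type.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data     # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce     # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
```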

This means our junior was right: the open source roots of Kubernetes make it so that you’ll be platform and vendor independent.

Conformance testing your Kubernetes implementation to remain platform independent

The CNCF (Cloud Native Computing Foundation) offers a conformance test suite for Kubernetes implementations; passing it proves the installation is Certified Kubernetes. “Certified Kubernetes” means that every installation that conforms to these tests can handle the same workloads in the same way. The certified label is what enables you to switch between vendors or platforms of your choosing.

All of the big players that offer a managed Kubernetes service (Kubernetes as a Service) are compliant with this test suite.

Case study: How to deploy an application on Kubernetes

Remain platform independent and avoid vendor lock-in

We asked two of our juniors to deploy an application we were building on Kubernetes. We were short on time, so we gave them gradual hints. The first hint: you’ll have to create a Docker image of the application. That was the first thing they had to figure out, and in no time they had it done. What’s next? Install Minikube on your local machine, so you have a local Kubernetes cluster in which to try out your manifest files. After not even two hours they were ready to apply their manifests to Minikube. Et voilà, they had a running application. From there it was only a question of starting a Kubernetes cluster with any of the big names (such as Google Cloud) and applying the same YAML files they had prepared for Minikube.
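As an illustration, a minimal Deployment manifest along the lines of what they applied could look like this (the image name, labels and port are hypothetical):

```yaml
# Runs the freshly built Docker image; the same file works
# on Minikube and on a managed cluster such as GKE.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                 # Kubernetes keeps two pods running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0   # the image they built
          ports:
            - containerPort: 8080
```

Applying it is the same everywhere: `kubectl apply -f deployment.yaml`, whether kubectl points at Minikube or at Google Cloud.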

What our juniors learned was an essential part of becoming platform independent: the procedure to deploy their application was the same on their minikube as on Google Cloud.

What are the benefits of avoiding vendor lock-in

Just what is a vendor lock-in?

Vendor lock-in is commonly defined as “proprietary lock-in or customer lock-in, [which] makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs.”

5 ways to avoid vendor lock-in, TechRepublic, 8/10/2018

In our case, you’ll want to avoid locking yourself into a certain cloud provider, because you’ll want to be able to shift the deployment of an application when necessary.

Shift cloud vendor whenever necessary

One of the most cited reasons to remain cloud agnostic is the ability to shift whenever necessary. Whenever a certain provider suddenly offers advantages that others can’t, or a client asks to be subscribed to a different service, you can just pick it off the shelf and apply it with minimal effort.

How to avoid Kubernetes lock-in?

Kubernetes enables vendor independence

As we’ve discussed, Kubernetes provides the flexibility to run applications in a wide array of environments without limiting their capabilities.

Frameworks like Kubernetes (which automates the deployment, scaling and management of applications on a group or cluster of servers) level the playing field for companies of all sizes by creating a universal common infrastructure. They hide the details of a given cloud provider’s infrastructure from applications. When properly applied, this infrastructure abstraction offers a degree of flexibility to run applications in a wide array of environments without limiting their capabilities. Technology like Linux containers and OSS projects like Kubernetes provide a degree of abstraction that’s not too high level and not too low level, but “just right.”

Four ways to avoid vendor lock-in when moving to the public cloud, Forbes, 14 Dec. 2017

In doing so, Kubernetes enables vendor and provider independence.

Am I at risk of hooking into Kubernetes?

No, not necessarily. Because it’s the architecture that matters. Not the infrastructure.

What this means is: if you build your application in such a way that it can work independently of Kubernetes, you’ll avoid a Kubernetes vendor lock-in.

You’ll want to build your architecture so that it follows a certain philosophy. If you do, you’ll be able to deploy your applications in any environment of your choosing. All you need is a good orchestrator, and Kubernetes is one of the more popular and mature ones on the current market.

How to create architecture that matters

You can follow the typical Twelve-Factor App rules (https://12factor.net/), published by Heroku (a deployment platform). If you follow these as you develop, you will have built an application that is independent of the environment in which it is deployed.

If you follow these twelve factors, you’ll be able to blindly deploy your application anywhere. In our humble DevOps evangelising opinion: the twelve factors are holy scripture.

  1. Codebase: One codebase tracked in revision control, many deploys
  2. Dependencies: Explicitly declare and isolate dependencies
  3. Config: Store config in the environment
  4. Backing services: Treat backing services as attached resources
  5. Build, release, run: Strictly separate build and run stages
  6. Processes: Execute the app as one or more stateless processes
  7. Port binding: Export services via port binding
  8. Concurrency: Scale out via the process model
  9. Disposability: Maximise robustness with fast startup and graceful shutdown
  10. Dev/prod parity: Keep development, staging, and production as similar as possible
  11. Logs: Treat logs as event streams
  12. Admin processes: Run admin/management tasks as one-off processes
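
To make one of these concrete, here is a hedged sketch of factor 3 (config in the environment) in Kubernetes terms; the ConfigMap name, keys and image are hypothetical. The application just reads ordinary environment variables and stays unaware of Kubernetes:

```yaml
# Environment-specific settings live outside the image...
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  DATABASE_URL: "postgres://db.internal:5432/app"
  LOG_LEVEL: "info"
---
# ...and are injected as environment variables at run time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0
          envFrom:
            - configMapRef:
                name: my-app-config
```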

Some items are still missing from the twelve factors and are gradually being added by the DevOps community, such as tracing and metrics.

Adding observability to your application means that it should provide all the information it can, so that your environment and operators have insight into how it is behaving and can take action when required, such as scaling up or down.
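In Kubernetes terms, part of this is declaring health endpoints as probes so the platform can act on what it observes. A hedged container-spec fragment (the paths, port and image are hypothetical):

```yaml
# Fragment of a pod's container spec: the cluster restarts a
# container that fails liveness and stops routing traffic to
# one that fails readiness.
containers:
  - name: my-app
    image: my-registry/my-app:1.0
    livenessProbe:
      httpGet:
        path: /health/live    # hypothetical endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:
      httpGet:
        path: /health/ready   # hypothetical endpoint
        port: 8080
      periodSeconds: 5
```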

In other words: it’s not the infrastructure that matters but the architecture that matters.

What else is there that comes close to Kubernetes?

If you have a microservice architecture that nicely follows the twelve factors and is built using your favourite CI/CD (continuous integration / continuous deployment) tools such as Jenkins, Concourse or GitLab CI, the result will always be the same: your application will be available as a container (in the shape of a Docker image). From there on, you only need to pick an orchestrator you fancy. Kubernetes is merely one of the most popular ones; you can also use Docker Swarm or ECS on AWS.

Why do people choose Kubernetes?

It trumps the competition when it comes to orchestrating:

  • “Where exactly do I need to run this Docker container?”
  • Service discovery using DNS
  • Persistent volumes
  • Rolling updates
  • ….

In our opinion, only Nomad can trump Kubernetes.

Nomad is easy to set up with less operational overhead

Nomad from HashiCorp is one to keep an eye on. Why? Because it is so easy to set up. So much easier than Kubernetes, in fact.

Kubernetes is handy for a developer, once everything is set up and deployed. But when someone from operations wants to maintain a Kubernetes cluster of their own, it becomes quite a challenge. This is why so many companies choose Kubernetes as a managed service from the big cloud providers.

When you check out Nomad from HashiCorp, you’ll immediately get the philosophy behind HashiCorp’s tooling: build a simple tool that does what it is supposed to do. That is exactly what Nomad is: a lightweight orchestrator. One simple binary that you download and start up, and it works.

This means that the operational overhead is way smaller with Nomad than it is with Kubernetes.

Nomad offers more runtime providers

Where Kubernetes falls short, Nomad delivers. Kubernetes inherently ties you closely to the container world, because it can only schedule containers, nothing more.

Nomad makes the difference by offering other runtime drivers. For example, you can start up static binaries or executable JARs and manage them accordingly, abandoning the container world entirely, as long as your artefact complies with the twelve-factor app rules.

We hereby accept the challenge to deploy a Kubernetes deployed microservice architecture (that is compliant to the twelve factor app) on a Nomad cluster.

Conclusion

Platform independence is possible using Kubernetes. Some critics pointed out that this would cause a vendor lock-in with Kubernetes itself. As we’ve discussed, this is not necessarily so: as long as your architecture is built in such a way that other platforms (such as Nomad, Docker Swarm, and others) can also deploy it, you’ll be completely independent.

Johan Siebens
