The need for faster, more efficient software delivery in today's digital economy has never been stronger. There's no doubt that the ability to release, manage, and scale applications in response to changing business conditions is critical to the success of digital transformation.
Small, autonomous components that can be released independently and frequently, and that together form digital products, are the primary architectural choice today.
Containers are the infrastructure realization of the microservices paradigm.
But when you run hundreds or thousands of containers, you need a tool to manage them all: starting and stopping them, shutting them down, creating new instances, and managing the network communication between them.
This is the responsibility of the container fleet managers, and to the surprise of many ‘experts’ today, fleet managers existed before Kubernetes and were working just fine.
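To make this concrete, here is a minimal sketch of what a fleet manager automates, using kubectl as the example tool; the deployment name, image, and port are illustrative, not taken from any real system.

```shell
# Declare the desired number of instances; the fleet manager starts,
# restarts, and replaces containers to keep that count, instead of you
# managing each instance by hand.
kubectl create deployment myapp --image=registry.example.com/myapp:1.0
kubectl scale deployment myapp --replicas=100

# Expose the instances behind a single stable name and port, so they can
# reach each other over the network without manual wiring.
kubectl expose deployment myapp --port=8080
```

The point is the declarative model: you state the desired fleet size and connectivity once, and the manager continuously reconciles reality to match it.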
All of the cloud providers attempted to convince enterprises to choose their platform-as-a-service (PaaS) solution for building enterprise applications. Everything was ready for companies to build, test, and deploy their solutions using vendor-specific platforms from Amazon, Microsoft, or Google.
But the adoption rate was slow and disappointing.
IT decision makers were too afraid of vendor lock-in: of being bound forever to the particular set of cloud services provided by a single vendor.
There was never a guarantee for how long the PaaS solution would work or what its ultimate fate would be (fear of deprecation, aggressive forced upgrades).
So at the time, the popularity of the cloud was concentrated in the most basic IaaS services (virtual machines), and the entire AWS success story was built on the way it addressed that basic need.
These risks were addressed very well by Google. Its GCP public cloud was trailing behind AWS and Azure, and Google wanted something that would help it get back into the cloud game.
So, in 2015, they announced Kubernetes and heavily promoted containerization.
Their promise was that you could run it anywhere: on premises or in the other clouds. The implicit message was 'in our GCP cloud it runs the best, so that should be the reason you choose us and not them' (Amazon, Microsoft).
And many technology decision leaders agreed, at least with the first part, especially in hybrid cloud environments, where there is a need to run the same components locally and in multiple public clouds with a unified set of tools for packaging and management.
Microsoft and Amazon (and Docker, with its own container fleet manager) tried to wait it out, but only briefly. Recognizing that Kubernetes was inevitable, they included it as a primary service in their clouds.
And now AWS is the primary choice for running Kubernetes workloads in the public cloud.
The enthusiastic acceptance in the development and DevOps communities was a great success for Google and Kubernetes.
Both Amazon AWS and Microsoft Azure tried to convince the community to use their own solutions but finally gave up and became major supporters of Kubernetes.
The competition now is about which cloud will run your Kubernetes-managed containers the best, not which container technology to choose (Docker) or which fleet manager to choose (Kubernetes).
It's very common, during discussions with developers and DevOps engineers, for them to express the belief that Google gave them the very tool it uses internally to run its enormous clouds.
Kubernetes was indeed built by Google's people, based on their experience and knowledge, but it is a totally different solution from what Google uses internally.
The reality, based on various surveys, is that only 15% to 40% of enterprise applications are running in containerized environments.
The real difference is in future plans: almost unanimously, they call for containers managed by Kubernetes. Our own survey confirms this as well.
There are many competing Kubernetes platforms and managed services, and many more keep appearing.
All of them promise to deal with the complexity of Kubernetes, provide nice Web GUIs for monitoring and managing the clusters, and have good integration with their own cloud services (monitoring, security, capacity management, and storage).
Once you choose one, your workloads start to live in its environment: its logging solutions and its security solutions; moreover, the runtime management solutions differ from platform to platform.
Very soon the promise of independence turns into a world of pain when you have to migrate from one Kubernetes platform to another.
This is especially true in business reality, when you start to use a platform's dedicated storage solutions to address the inevitable need for non-ephemeral storage.
Kubernetes itself is software which needs to be patched and upgraded. If you do it yourself, these operations are neither easy nor problem-free. And please note, this is rarely the first thing that comes to mind when you start considering Kubernetes.
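As a rough illustration of what a self-managed upgrade involves, here is a sketch of the flow for a kubeadm-managed cluster; the node name and target version are placeholders, and the actual procedure varies by distribution.

```shell
# See which Kubernetes versions this cluster can be upgraded to.
kubeadm upgrade plan

# Evict workloads from the node before touching its components.
kubectl drain <node-name> --ignore-daemonsets

# Apply the control-plane upgrade; the version here is illustrative.
kubeadm upgrade apply v1.28.3

# After upgrading the kubelet packages on the node, let workloads
# schedule onto it again.
kubectl uncordon <node-name>
```

Each node is drained, upgraded, and uncordoned in turn, which is exactly the kind of repetitive, failure-prone work that managed Kubernetes offerings take off your hands.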
Containers are built on base images of operating systems, databases, and servers, all of which can contain security vulnerabilities. Proper scanning for vulnerabilities, in both images and configurations, should be a key element of effective pipelines.
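A minimal sketch of what such scanning looks like in a CI pipeline, using Trivy as one example scanner; the image name and manifest path are illustrative assumptions, not part of any real setup.

```shell
# Fail the pipeline if the image contains HIGH or CRITICAL
# vulnerabilities inherited from its base layers.
trivy image --exit-code 1 --severity HIGH,CRITICAL registry.example.com/myapp:latest

# Scan Kubernetes manifests and other infrastructure-as-code files
# for insecure configurations.
trivy config --exit-code 1 ./deploy/
```

Making the scanner's exit code gate the pipeline is what turns scanning from a report nobody reads into an enforced control.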
Many enterprises complain about the lack of expert-level knowledge. There are many introductory tutorials and "Hello world" books, but when you want to go one step further, it gets much more difficult, and fewer people have the required knowledge base or practical experience.
The question now is not whether to use Kubernetes; it appears to be at the peak of its hype and adoption.
There are many voices emphasizing the complexity and the learning curve, and warning that attention is being diverted from business problems to infrastructure problems. But these voices are not being heard with the proper attention, and won't be in the near future.
There are also strong fans of the serverless paradigm, who want to get away from the complexity of containers and Kubernetes altogether and focus on functions delivering real business functionality. But the adoption of serverless is still relatively slow, despite its benefits.
The questions that have to be answered are:
We at Avenga have a great deal of experience with different Kubernetes flavors, from local on-premises solutions and hybrid clouds to the public cloud, across different environments and scenarios. You don't have to learn all of it by yourself; that takes time, effort, and many mishaps along the way. And in case you are thinking about serverless, we have production experience there as well.