Director of Avenga Labs
Welcome to the tenth edition of the Avenga Labs update. This time I’ll focus on breaking enterprise tech news about Docker, Java, PyTorch, Kotlin, AWS, and Kubernetes. None of these technologies requires any introduction, as they are all extremely popular, so let’s get to it.
Big news because… it’s about Docker, one of the most popular technologies in use nowadays in the modern software development world.
Docker changed its licensing policy with another update, and Docker Desktop is no longer free for large enterprises with more than 250 employees or more than 10 million USD in annual revenue. There was no prior announcement of the changes and no public discussion; they just did it.
There’s a subscription fee of five dollars per month per developer. It’s a drop in the ocean compared to the cost of human labor, even in the cheapest countries. But it’s not free anymore, and payments will be required after January 2022.
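To put the fee in perspective, here is a back-of-the-envelope comparison; the team size and salary figures below are illustrative assumptions, not data from Docker or any real company:

```python
# Back-of-the-envelope comparison of the Docker Desktop fee vs labor cost.
developers = 300                  # illustrative team size (over the 250 threshold)
fee_per_dev_per_month = 5         # USD, per the new subscription

annual_license_cost = developers * fee_per_dev_per_month * 12

avg_annual_dev_cost = 30_000      # illustrative low-end fully-loaded cost, USD
annual_labor_cost = developers * avg_annual_dev_cost

print(annual_license_cost)                      # 18000 USD per year
print(annual_license_cost / annual_labor_cost)  # 0.002, i.e. 0.2% of labor cost
```

Even with a deliberately low labor cost assumption, the license fee stays a fraction of a percent of what the developers themselves cost.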
This move by Docker received mixed responses. Some developers are disappointed because they will have to pay now, while others are defending Docker as it’s a very useful tool that deserves commercial success and support as well.
Docker is trying to find a sustainable business model, and the popularity of the technology is, unfortunately, not a guarantee of profitability. Docker is also an entire ecosystem of container repositories used by millions of developers every day. Despite the development and infrastructure costs, the entire development world has gotten used to the idea of a free Docker.
Docker can be replaced by other solutions, such as Minikube on the desktop, but there’s a lot of convenience in the Docker Desktop client.
The IT world could abandon Docker and move on to a post-Docker era. Only time will tell if Docker will be able to implement a sustainable business model or go out of business and join the history books of IT.
It depends upon the enterprises and whether they choose to pay the licence fee or move to other free alternatives.
Java 17, a new long-term support (LTS) version, has been released. It’s the result of a joint effort of the entire Java community. Despite being more than twenty years old, Java is the number one language for enterprise-level applications, and there’s no replacement for it in sight.
I remember the times when Java was in trouble and its future was in doubt, but those times are over. And that’s the most important high-level message.
But let’s touch a bit more on what is new:
- sealed classes, which let a class or interface restrict which types may extend or implement it
- pattern matching for switch (as a preview feature)
- enhanced pseudo-random number generators
- strong encapsulation of JDK internals by default
- a new macOS rendering pipeline
- and a lot more
As with all mature technologies, we cannot expect surprises or breaking changes; the ease of migration from one LTS version to the next is more important than the sheer speed of innovation. But the new additions are welcome and are practical proof of the vigor of the Java stack.
There are two main camps in the machine learning world: TensorFlow and PyTorch. They used to be libraries, then frameworks, and now they are considered entire ecosystems. It is worth following both sides, because they are known to “borrow ideas” from each other on a regular basis.
So what is new in PyTorch 1.9?
- the torch.linalg module, with NumPy-style linear algebra operations, moved to stable
- a new torch.special module with SciPy-like special functions
- a Mobile Interpreter (in beta) for running PyTorch programs on mobile devices
- TorchElastic, which handles worker failures in distributed training, moved into the core
- and more
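As a quick taste of the now-stable linear algebra module, here is a minimal sketch using `torch.linalg`, assuming PyTorch 1.9 or later is installed:

```python
import torch

# A small invertible matrix to exercise the stable torch.linalg API.
A = torch.tensor([[2.0, 0.0],
                  [0.0, 4.0]])

A_inv = torch.linalg.inv(A)   # NumPy-style matrix inverse
fro = torch.linalg.norm(A)    # Frobenius norm by default for matrices

# Multiplying a matrix by its inverse recovers the identity.
print(torch.allclose(A @ A_inv, torch.eye(2)))  # True
print(round(float(fro), 3))                     # 4.472, i.e. sqrt(2**2 + 4**2)
```

The point of the module is exactly this NumPy familiarity: `torch.linalg.inv` and `torch.linalg.norm` mirror `numpy.linalg`, while still running on GPU tensors and supporting autograd.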
Kotlin is a language created by JetBrains and supported by Google with a focus on Android native application development.
And it has achieved great success, with more than 80% of the top 1000 apps in the Google Play Store written in Kotlin.
The language is generally liked by the Java crowd, as it is easier to write and read, and its null safety reduces the infamous problem of null reference exceptions, among many other improvements.
Still, despite years on the market, it is not gaining the popularity that was predicted for it.
What about the server-side? You know, the place where Java reigns.
Despite five years on the market, Kotlin is still trailing behind Go, or maybe it is almost as popular as Go, with around 8% of developers using it. I’ll leave it for you to decide if this means success or failure.
The hybrid cloud is everywhere and will stay with us for a long time. Cloud vendors are trying to make it easier by providing solutions for local on-premise infrastructures that work in almost the same way as their public cloud counterparts. The same management tools, components, and APIs blur the boundaries between cloud and on-premise infrastructures, providing unified management of all resources and APIs in one place.
Amazon EKS is the most popular Kubernetes hosting platform in the cloud, so this is a very welcome addition and a great help for existing on-premise Kubernetes clusters.
Amazon’s primary focus is on customers who are already users of Amazon EKS but would like to extend the cloud experience to their local Kubernetes clusters.
The Cloud Native Computing Foundation (CNCF) is an organization dedicated to the development of a cloud-native stack of technologies that form the base for cloud-agnostic solutions in the cloud-native and multi-cloud environments of today’s enterprises.
Autoscaling is a very important feature that enables solutions to consume more resources when needed (for example, when an increased number of customers is accessing the system) and to scale down to save resources and costs.
KEDA (Kubernetes Event-Driven Autoscaling) is a modern approach to autoscaling based on events from external sources, which are observed by components called Scalers. Scalers come in various types, and their number has increased from 15 to 37.
An autoscaler which can react to multiple types of events from different sources is expected to be more efficient.
Moving KEDA to incubation increases the probability of it becoming an important part of the Kubernetes ecosystem.
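For context, this is a minimal sketch of what a KEDA configuration looks like: a ScaledObject that scales a hypothetical `worker` Deployment based on the length of a RabbitMQ queue. The deployment name, queue name, and connection environment variable are illustrative assumptions, not from any real setup:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker              # hypothetical Deployment to scale
  minReplicaCount: 0          # KEDA can scale all the way down to zero
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq          # one of the available Scalers
      metadata:
        queueName: jobs       # illustrative queue name
        mode: QueueLength
        value: "10"           # target messages per replica
        hostFromEnv: RABBITMQ_URL
```

Each entry under `triggers` names a Scaler type, so one workload can react to several event sources at once.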
Avenga Labs has addressed language models before, in the GPT interview and the GPT-J article. Let’s just remind ourselves that the NLP race is accelerating; this time, the scale is jumping roughly five hundred times in comparison to the previous version.
There are an estimated 100 trillion synapses in our brains, and GPT-4 is rumored to have the same number of parameters.
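As a sanity check on the “five hundred times” figure, assuming GPT-3’s published 175 billion parameters and the rumored 100 trillion for GPT-4:

```python
# Compare GPT-3's published parameter count to the rumored GPT-4 figure.
gpt3_params = 175e9           # 175 billion (published)
gpt4_params_rumored = 100e12  # 100 trillion (rumor, not confirmed)

ratio = gpt4_params_rumored / gpt3_params
print(f"roughly {ratio:.0f}x larger")  # roughly 571x larger
```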
It is expected that this enormous number of parameters will bring GPT-4 even closer to human language processing capabilities and this will be an important step towards general AI.
What is also very important is that GPT-4 will depart from text as the only medium, as it will also embrace images. This is an important part of AI awareness that is expected to bring an entirely new level of context generalization and understanding.
The many additional parameters are not just for accuracy or performance improvements; the expectation is that quantity will turn into new capabilities that have never been seen before.
I will be thrilled to check up on the results and talk to GPT-4, as GPT-3 is already very impressive.