
Avenga Labs Update 014: January 2022 news is here


Happy New Year 2022, both digitally and personally, to all of you!

Bad news or good news first? Let’s start with the bad news about open source security and cloud outages, and then switch to the good news about cloud-native adoption, a new version of Kubernetes, new promises in quantum computing, and another event that would have been deemed impossible just a few years ago.

Log4j disaster: an open source security reality check

The Log4j library is used to log events in the majority of Java-based applications, which in turn make up a vast majority of enterprise software solutions worldwide. When a series of critical vulnerabilities was discovered, one after another, the whole IT world, including the Internet giants with their sophisticated automated pipelines and top-notch security teams, held its breath. It seems nobody was really prepared for something like this to happen, despite the tools, people, and processes in place.

Millions of hours have been spent everywhere to fix the problem, upgrade components, and then update them again and again as even more vulnerabilities were discovered.

There was even a White House meeting where Google and IBM called for a list of critical open source projects that should be given more attention and review scrutiny. In the times of Cold War 2.0, fought in an era vastly more digital than the 1980s, open source security may also be a decisive factor in digital warfare. So it’s not “just” a threat to individual businesses but to entire nations and military alliances.

The state of business application security is worse than ever, something I addressed in previous Avenga Labs updates long before the Log4Shell disaster. So it was a strong “I told you so” moment for me; it was bound to happen. The focus on sheer speed and functionality has been stronger than ever, at the expense of security, data protection, and other “non-features” that don’t bring any direct business value. The big problem is that neglecting them tends to bring disasters.

Open source has always promised a fairer game: the ability to find and fix vulnerabilities faster than with closed source components. Is it better? That clearly depends on the ratio of vulnerabilities detected and fixed to those left open for hackers to exploit.

The Log4j situation is yet more proof that modern pipelines which automatically scan for component vulnerabilities, regularly updated versions of all components, and frequent builds of the system allow for the fastest reaction time. Legacy systems, by contrast, are too often left in a dire state because of the “do not touch it or it may break” mentality. It’s better late than never to update those policies and implement them.
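
As an illustration of such a pipeline step, here is a minimal sketch of a CI job that fails the build when a dependency carries a known critical vulnerability. It assumes a Maven project and GitHub Actions, and uses the OWASP Dependency-Check plugin as just one of several scanners that could fill this role; the job name and CVSS threshold are illustrative.

```yaml
# A hypothetical CI job: scan all dependencies on every push and fail
# the build when a known vulnerability scores CVSS 7 or higher.
name: dependency-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-java@v2
        with:
          distribution: 'temurin'
          java-version: '11'
      # OWASP Dependency-Check matches the project's dependencies against
      # the public CVE database (it would have flagged Log4Shell).
      - name: Scan dependencies
        run: mvn org.owasp:dependency-check-maven:check -DfailBuildOnCVSS=7
```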

There will be more, many more, vulnerabilities and exploits in open source components. If the only lesson learned was to “fix this Log4j thing ASAP”, then trouble for your company or department is just a matter of time. At Avenga, we’ve already paid attention to this issue by protecting our clients’ web services and their users from this critical vulnerability with a solution of our own.

Massive cloud misfires

The end of 2021 was also a time when multiple cloud outages made many businesses suffer. AWS US availability zones failed multiple times in December, raising questions and concerns, especially because the teams did not give the impression of being in control of what was going on. Even finding the root cause of the problem took a long time, despite all the advanced monitoring, logging, and redundancy mechanisms. The communication from the AWS team was also a source of frustration and anxiety.

The public cloud has become the default choice for the majority of decision-makers when building new digital solutions and transforming legacy ones. As of now, the dependency is only getting stronger.

In the European Union, there’s the famous “cloud exit” strategy, which is becoming more of a paperwork exercise than a viable alternative of going back to pre-cloud times. This is not even much of a secret any longer, and there’s no way back.

So are the cloud providers becoming more relaxed as more and more organizations are fully dependent on them? And what can we do about these outages?

Dependency on a single cloud provider may seem too risky, so what about multi-cloud as a solution? Analysts claim multi-cloud is simply too complex and too expensive to maintain to be an economically viable option, even though it is technically possible and there are tools and techniques to achieve it.

A more approachable option is to distribute the workload among multiple processing zones within a single cloud, with enough redundancy to keep the business processes going in case a single zone fails. Sure enough, it dramatically increases the costs and the complexity of deployment and operations. But it is quickly becoming a de facto necessity, given the recent series of “unfortunate” events.
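
As a minimal sketch of what this looks like in practice on Kubernetes (the service name, image, and replica count are illustrative), a topology spread constraint asks the scheduler to distribute replicas evenly across availability zones, so a single-zone outage leaves most of the service running:

```yaml
# A Deployment spread across availability zones; if one zone fails,
# the replicas in the remaining zones keep serving traffic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service
spec:
  replicas: 6
  selector:
    matchLabels:
      app: checkout-service
  template:
    metadata:
      labels:
        app: checkout-service
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                                # zones may differ by at most one replica
          topologyKey: topology.kubernetes.io/zone  # spread over the provider's zones
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: checkout-service
      containers:
        - name: checkout-service
          image: registry.example.com/checkout-service:1.0.0
```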

From the governance point of view, the service level agreements must be fully reviewed and understood, and we need to put more pressure on the cloud providers to step up their resilience game.

In the case of hybrid cloud and cloud-native architectures, these outages are also a motivation to keep both sides of the infrastructure (local and remote) fully operational, and to make sure the on-prem infrastructure will continue to work if the public cloud services become unavailable.

Speaking of cloud-native, let’s take a look at the adoption of cloud-native technologies report.

State of cloud-native development

According to a report by SlashData, Kubernetes adoption rose to 67%, meaning two-thirds of enterprise workloads run on Kubernetes clusters. This tendency falls in line with Avenga Labs’ earlier prediction of Kubernetes quickly becoming the de facto operating system/runtime for all modern enterprise workloads. It also spans all the public cloud vendors and all the hybrid cloud infrastructures.

Kubernetes skills are already a key requirement for all roles in software projects. And let’s not forget that Docker Desktop is now a commercial product for medium and large enterprises; minikube is often the way to go for those who do not want to pay licensing fees.

Modern DataOps, MLOps, and Kubeflow-based projects for data processing and machine learning also mean more and more Kubernetes use.

Kubernetes 1.23

The Kubernetes project is alive and well (and complicated and complex, some may add, and I wouldn’t disagree). The new 1.23 version brings new features such as dual-stack IPv4/IPv6 networking, CronJobs, ephemeral volumes, improved events, and gRPC probes. There’s also the obligatory set of deprecated and removed features, including the FlexVolume drivers. The only way to keep up with the changes is to constantly update clusters and learn about the new features.
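
To give one concrete taste of the release, here is a minimal sketch of the new gRPC probe, which lets the kubelet health-check a gRPC server natively, without a sidecar or an exec wrapper. In 1.23 it is an alpha feature behind the GRPCContainerProbe feature gate; the pod name, image, and port below are illustrative.

```yaml
# A Pod whose liveness is checked natively over gRPC (alpha in 1.23,
# requires the GRPCContainerProbe feature gate to be enabled).
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo
spec:
  containers:
    - name: server
      image: registry.example.com/grpc-server:1.0.0
      ports:
        - containerPort: 9090
      livenessProbe:
        grpc:
          port: 9090   # the server must implement the gRPC Health Checking Protocol
        initialDelaySeconds: 5
```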

The best part of the news is, again, that this critical component of modern deployments keeps growing and responding to community feedback by adding new features and updating existing ones. It’s definitely a safe bet, both at the moment and in the near future. Do you always need it? Fewer and fewer even dare to ask.

The promise of 1000 qubits by the end of 2023

Quantum computers do exist and are believed to be the next big revolution. Quantum computing will enable applications of computing power where even our most powerful binary computers fail to deliver.

They do work, but they have their own set of problems which still limits their applications to a very narrow set of business and academic use cases. So the pivotal question is not whether quantum computers work, but when they will be a truly viable option for more business applications.

One of the main issues is the limited number of qubits. Modern algorithms for the quantum advantage era require thousands, even millions, of qubits. And yet we live in the era of tens of qubits, moving towards hundreds. There’s a long, long way ahead of us.

Now, two quantum startups, Pasqal and Qu&Co, have merged and promised 1000-qubit machines by the end of 2023. Let’s decode the news a little bit.

Of course, qubit stability, error rates, and, in general, the so-called quantum volume are much better measures of the capabilities of quantum machines. Nonetheless, it is the number of qubits that makes the headlines.

So every time I see news like this, I am happy about the (potential) progress, but I also have to cool down the enthusiasm a bit: first of all, this is a promise, and there’s no data about the other critical parameters. Plus, there’s an obvious temptation for quantum startups to (over)promise for financing and PR reasons.

More metaverses

And yes, there are even more new metaverse attempts than those described in Avenga Labs Update 013. It is definitely one of the biggest bets in the technology space, and I wonder how it’s going to work out for customers and businesses.

Microsoft joins the Java Community Process (JCP)

That could have been a great April Fools’ Day joke even a few years ago. Back then, the Java ecosystem was an arch-enemy of Microsoft, with its competing .NET technology and Windows server ecosystem. Now everything seems to run on Linux anyway, and in the cloud more than ever. So Microsoft has joined the organization that shapes the future of the most popular enterprise technology, and it will have an impact on the priorities, new features, and the direction this technology ecosystem follows.

Until next time

Thank you for reading, and let’s see what interesting IT news will get our attention next time.
