Kafka for a More Connected Digital World
Digitalization means connectivity
Digitalization means faster business, and faster business means removing humans from processes and enabling as much automation as possible.
For digital ecosystems of applications and APIs, it means fast, reliable, and resilient communication between microservices and legacy applications.
And one technology seems to have won all the awards and popularity contests, and its name is Kafka, the most popular stream-processing software in the world.
Market analyses show an adoption rate of 80% to 90% among enterprises that use Apache Kafka in their production environments.
Queues, message brokers, and data streamers have been around for a long time, from simple messaging queues to complex ESB orchestrators, in both open-source and commercial variants.
The Java Message Service (JMS) API for Java applications, the Advanced Message Queuing Protocol (AMQP) approved by OASIS and ISO, MQ Telemetry Transport (MQTT), the eXtensible Messaging and Presence Protocol (XMPP), and other standards are still very popular and used in production by millions of applications.
So why Kafka? Why has it become so popular? What is Apache Kafka used for?
Benefits of Kafka
Data needs speed
The explosion of data generated in the digital world is enormous. That data needs to be captured and transmitted from the right sources to the right consumers.
The data revolution came at the same time as the need for immediacy in API-based business transactions, which shifted the digital landscape of data solutions from mostly static ones to real-time data processing and analytics.
Kafka has gotten attention because of its speed, which I want to address first.
Kafka was focused on speed from the beginning. The volumes of data it can transmit have been impressive from the start. Its entire architecture and implementation are focused on performance.
Network communication is based on a binary protocol over TCP, with an added abstraction that groups messages into message-sets to reduce per-message network overhead. Fewer, larger packets mean less protocol overhead for each message transmitted.
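The effect of grouping messages into message-sets can be sketched with some simple arithmetic. The per-packet overhead and message size below are illustrative assumptions, not Kafka's actual framing costs:

```python
# Illustration of why batching messages into message-sets reduces
# per-message network overhead. The byte figures are assumptions
# chosen for the example, not real Kafka protocol numbers.
OVERHEAD_PER_PACKET = 60   # assumed TCP/IP + framing overhead in bytes
PAYLOAD = 100              # assumed size of one message in bytes

def bytes_on_wire(messages: int, batch_size: int) -> int:
    """Total bytes sent when messages are grouped into batches."""
    packets = -(-messages // batch_size)  # ceiling division
    return messages * PAYLOAD + packets * OVERHEAD_PER_PACKET

unbatched = bytes_on_wire(1000, 1)    # one packet per message
batched = bytes_on_wire(1000, 100)    # 100 messages per message-set
print(unbatched, batched)  # 160000 vs 100600 under these assumptions
```

Under these assumptions, batching cuts the overhead from 60 extra bytes per message to less than one, which is the intuition behind Kafka's message-sets.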
For memory management, this means much lower overhead from the heap: fewer allocations and deallocations, and less heap fragmentation.
For disk write and read performance, it is likewise better to write larger pieces less often, avoiding unnecessary I/O overhead in the storage subsystem.
And once again, it uses its own binary wire protocol, commonly paired with a compact serialization format such as Avro, to keep the serialization footprint and message size low.
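For a sense of what that compact serialization looks like, here is a minimal Avro schema; the record and field names are hypothetical, chosen only for illustration. Because the schema is stored separately (often in a schema registry), the messages themselves carry only the encoded values, not the field names:

```json
{
  "type": "record",
  "name": "PageView",
  "fields": [
    {"name": "userId", "type": "string"},
    {"name": "url",    "type": "string"},
    {"name": "ts",     "type": "long"}
  ]
}
```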
So, performance was and is at the center of attention for the Kafka product team.
Kafka can serve multiple purposes and work in multiple modes. This is one of its key strengths compared to other platforms, which usually implement a single messaging paradigm, such as a queue.
First, it is a publish-subscribe messaging platform that uses topics and consumer groups to connect data producers with consumers.
The producers are not aware of the consumers, which creates the much sought-after decoupling between data producers and consumers. New message consumers can be added later, or removed, without affecting the producer at all.
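The decoupling described above can be sketched with a minimal in-memory publish-subscribe broker. This is an illustration of the pattern only, not the Kafka client API:

```python
from collections import defaultdict
from typing import Callable

# Minimal in-memory sketch of publish-subscribe decoupling.
# Illustrative only; real Kafka adds topics with partitions,
# offsets, persistence, and consumer groups.
class Broker:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        # The producer only names a topic; it never sees the consumers.
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
received: list[str] = []
broker.subscribe("orders", received.append)  # consumer added later
broker.publish("orders", "order-created")    # producer code is unchanged
print(received)  # ['order-created']
```

Adding a second consumer to the `orders` topic requires no change to the producer, which is exactly the flexibility the paragraph above describes.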
Second, it is an optimal solution both for log aggregation from multiple sources and for log shipping between producers and consumers. Managing log flows requires a lot of speed and reliability. Logs are very important sources of information for detecting problems and determining the root cause of errors. They are also crucial for compliance.
And last but not least, it is used as a tool for communication between microservices. Event-driven architectures use Kafka because it handles complex messaging patterns well, including event sourcing and the CQRS pattern.
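Event sourcing, mentioned above, means the current state is rebuilt by replaying an append-only event log, which is exactly what a Kafka topic provides. A minimal sketch, using a hypothetical account-balance example:

```python
# Event-sourcing sketch: state is derived by replaying an append-only
# event log. The event names and account example are illustrative
# assumptions, not part of any Kafka API.
events = [
    {"type": "deposited", "amount": 100},
    {"type": "withdrawn", "amount": 30},
    {"type": "deposited", "amount": 50},
]

def replay(log: list[dict]) -> int:
    """Fold the event log into the current balance."""
    balance = 0
    for event in log:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

print(replay(events))  # 120
```

Because the log is the source of truth, new read models (the query side of CQRS) can be built later simply by replaying the same events with a different fold.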
Of course, with such versatility comes… the inevitable overuse. For instance, there is a lot of criticism about using Apache Kafka for everything without even considering alternatives. Another criticism is that Kafka is becoming an old, ugly ESB monster in disguise.
Need for unification
As I mentioned before, there are many messaging solutions. IT decision makers are tempted by the idea of unification, but are afraid of vendor lock-in. Kafka was, and is, a great choice for enabling internal standardization and interoperability without vendor lock-in.
There is still a de facto lock-in to this popular open-source project, but given its popularity and the constant improvements delivered on a regular basis, it is the safest choice at the moment.
Due to the popularity, but also the limitations, of Apache Kafka, a whole ecosystem of Kafka solutions has emerged. Public cloud vendors offer Kafka integrated with their proprietary monitoring, logging, and management tools. They and others add features and capabilities beyond what is possible with Kafka alone, saving time and effort.
For example, a whole set of tools addresses the need for web-based UIs for management and visualisation.
The decision is either to accept a dependency on a Kafka ecosystem vendor's extensions or to spend internal money and resources adding the missing, but much-needed, features.
Created at LinkedIn for LinkedIn
LinkedIn started with its business requirements, and Kafka was a response to those direct business needs. This makes it very convincing for large and medium enterprises to consider using it.
It’s Apache and JVM
Kafka comes from the Apache Software Foundation, one of the most respected open-source software groups in the world, with a proven track record: the famous HTTP web server, ActiveMQ, Cordova (mobile apps), CouchDB, Groovy (a JVM language), Hadoop (a big data engine), and Spark (a big data processing engine).
Now, Kafka has become so popular that when one types into the Google search box “Apache”, the word “Kafka” will be one of the first results at the top.
Kafka is written in Java and Scala, and it runs on the most popular runtime in the enterprise world: the Java Virtual Machine (JVM).
Popularity means more popularity
One of the reasons for Kafka's growing popularity is… its popularity. People attending conferences and talking to their colleagues quickly figure out that others are already using Kafka, and they don't want to be left behind.
In the beginning, there were warnings that Kafka was not a full messaging suite and that when message volumes are not that high (fewer than millions of messages per day), it is just overhead.
But, not for the first time in the history of technology, only a few listened, and Kafka quickly became the most popular solution for data streaming. Some even consider it a de facto industry standard.
Apache Kafka is over 8 years old, which means it is mature enough to be trusted. It has received a lot of feedback and improvements over the years, becoming a reliable solution.
Drawbacks of Kafka
Latency vs. throughput
Throughput means how much data can be transferred in a given amount of time, and Apache Kafka excels at that.
But there is another parameter which is sometimes even more important, and Kafka is not optimized as well for it. I am talking about latency: the time it takes for a message to enter the Kafka server and then arrive at its consumer. For many applications, it should be guaranteed to be less than a few milliseconds.
Latency is not guaranteed with Kafka, and for guaranteed low messaging latency it is recommended to look elsewhere.
The default configuration of Kafka favors streaming throughput over the reliability seen by individual clients.
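The throughput-versus-latency trade-off is visible directly in the producer configuration. The settings below are real Kafka producer properties, but the values are examples only, not recommendations:

```properties
# Illustrative producer settings trading latency for throughput.
linger.ms=20          # wait up to 20 ms to fill a batch (default 0 favors latency)
batch.size=65536      # larger batches amortize per-request network overhead
acks=all              # safer delivery at the cost of extra round trips
compression.type=lz4  # smaller payloads for a little more CPU
```

Raising `linger.ms` and `batch.size` improves throughput by sending fewer, larger requests, at the direct cost of added delivery latency.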
Another important aspect, and a key to Kafka's effectiveness, is that it does not sync its writes to disk storage. This means there is a risk that data won't reach the physical storage medium, all in order to allow faster writes.
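The flushing behavior is tunable on the broker. These are real broker settings, shown with example values; by default Kafka leaves syncing to the operating system's page cache and relies on replication, rather than fsync, for durability:

```properties
# Broker-side durability knobs (example values, not recommendations).
# Forcing flushes protects against a single-node crash but costs throughput.
log.flush.interval.messages=10000   # fsync after this many messages
log.flush.interval.ms=1000          # ...or after this many milliseconds
```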
Someone comparing Kafka to big ESB solutions, or even to other advanced messaging products, will notice many missing functionalities and options.
In my opinion, Kafka represents the Linux philosophy: a service that does one thing and does that one thing very well.
On the other hand, Kafka is receiving new features and updates.
Pressure on the consumer-end
Because of the heavy optimization for speed and the multiple smart tricks on the server side, a lot of work is left to the consumer side (hashing, computing checksums, staging). So, consumers must be well aware of this and prepared to work with Kafka accordingly.
Flexibility means complexity
Kafka can be configured in many ways, and there are many options. This gives a lot of flexibility, but it also means complexity: it is easy to break a proper configuration and replace it with a wrong one. There is a ton of documentation from the Apache Kafka team, and from many other sources, on how to configure and use Kafka efficiently, but a lot still comes only with experience.
The default settings are insecure and not the most reliable; they often need to be changed for production. Settings that are acceptable for proof-of-concept projects or test environments, with their more relaxed security and different reliability requirements, may break in production.
The most famous setting is probably the client-side auto-commit every five seconds, regardless of whether the client has processed the message or has only just started receiving it. This can lead to inconsistencies, which are often undesirable.
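The auto-commit behavior comes from two real consumer properties, shown here with their defaults; disabling auto-commit and committing manually after processing is the usual way to avoid losing messages that were committed but never handled:

```properties
# Consumer offset-commit settings. The defaults below are what cause
# the every-five-seconds auto-commit described above.
enable.auto.commit=false        # default: true
auto.commit.interval.ms=5000    # default auto-commit period
# With auto-commit disabled, the application commits offsets explicitly
# (commitSync/commitAsync) only after a message has been processed.
```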
The Future of Kafka
We at Avenga are well aware, especially our solution architecture teams designing modern API ecosystems, that there is no single solution that addresses all needs, not even Kafka.
Kafka will continue to be successful and continue to be the backbone of successful digital enterprises.
When used properly, it is an important asset for every digital organization that needs to exchange data fast, both internally and externally.