Software is ruling the world, so let’s take a look at what’s going on with the fundamental concepts behind all the software running today and in the future.
Those who hoped that object-oriented programming (OOP) would last forever will share the fate of those before them: a new kid on the block steals all the thunder, and their beloved baby becomes obsolete.
New versions of technologies are released more frequently than ever, but something that changes far more slowly, and takes much more time to happen, is a shift of programming paradigm.
Computer programming is older than electronic computers, so we can talk about almost a century of programming as a human activity. Let’s take a moment to look back at the history of programming and then analyze the current state of programming and its near future.
The first machines were programmed using the lowest-level binary code, which was sent directly to the machine for processing. The source code wasn’t text, it was binary, and it was often created manually by punching holes in long tapes or another physical medium.
It was very error-prone, extremely hard to debug, and very slow.
The code executed at 100% native speed, but nothing less was expected, because those machines were millions of times slower than your smartphone today.
Assembly programming was a major shift from binary code towards text-based symbolic programming. For the first time, there was translation from something humanly readable and understandable to machine code.
The code is written using text with symbolic names for the instructions, which is thousands of times more readable than binary code. There are labels for jump instructions and loops, as well as comments after semicolons.
By the way, did you notice above I wasn’t using the past tense?
That’s because assembly languages are being used today. Whenever the maximum speed and tight integration with the hardware is required, there’s a place for the assembly language. In this area, it often competes with C language or (recently) Rust language, but the speed of execution cannot be beaten. Programmers can choose the registers that they want to use, tell the CPU directly what to do, and all without relying on the compiler to perform optimizations.
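To make the description above concrete, here is a tiny illustrative sketch in x86-64 assembly (NASM syntax, Linux): symbolic instruction names, a label used as a loop target, and comments after semicolons. The label and register choices are, of course, just for illustration.

```
; Illustrative x86-64 sketch (NASM syntax, Linux)
        global _start
        section .text
_start:
        mov rcx, 5          ; the programmer picks the register for the counter
loop_top:                   ; a label, the target of the jump below
        dec rcx             ; decrement the counter
        jnz loop_top        ; jump back while the counter is not zero
        mov rax, 60         ; syscall number for exit
        xor rdi, rdi        ; exit code 0
        syscall
```

Everything the CPU does is spelled out directly, with no compiler deciding the optimizations for you.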
Structured programming introduced ‘if-then-else’ conditionals, loops, and subroutines.
All of these constructs were created as a cure for the GOTO/JMP disaster, in which it was hard to control the flow of a program.
It was created as early as the 1950s but became popular later on.
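A minimal Algol 60 sample, sketched here for illustration; note that output routines such as outinteger come from the later IFIP I/O proposal and varied between implementations:

```
begin
   integer i, sum;
   sum := 0;
   for i := 1 step 1 until 10 do
      sum := sum + i;
   comment structured flow, no GOTO needed;
   if sum > 50 then
      outinteger (1, sum)
end
```

The loop, the condition, and the block structure replace a tangle of jumps.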
Is it obsolete? No, this paradigm is still used today, every day.
This paradigm introduced modules as a way of organizing large applications. The language best known for introducing this concept is Modula-2. And yet again, this concept is alive and well today. Grouping code into modules and packages is a very natural thing to do. Of course, there are now package managers for every popular programming language and runtime, so the idea has been perfected, but again, it’s more than half a century old.
This paradigm was born a long time ago, in the early 1960s, but became the most popular programming paradigm from the 1990s to the 2010s.
The ideas are so well known and popular that I will just mention the key concepts: objects that contain both fields (data) and methods (actions performed on objects), classes that define objects, inheritance, and encapsulation (separation of object state from the external world). Sometimes there were prototypes instead of classes (JavaScript, for example). C++ provides multiple inheritance, while Java allows only single inheritance.
In practice, inheritance caused more trouble than benefit, and programming guidelines today tell us to avoid it and to prefer composition over inheritance.
Binding data and methods together was great for a single process and single machine applications, but it did not survive the modern world of distributed computing where messages are exchanged between microservices.
The very concept of ‘class’ therefore has become obsolete to some extent.
OOP is also criticized for execution inefficiency: memory-management problems and slow compilation and execution times, especially when runtime polymorphism (virtual methods, etc.) is used.
Another criticism is the focus on types (classes) but not on algorithms.
But the main argument is that data and functions are totally different things, and the very fundamental idea of mixing them together in objects was wrong from the get-go.
Multiple programming experts have said this in their own words, and as you can see, after almost 30 years of domination, the industry senses the dusk of object-oriented programming.
Functional programming is a totally different concept than object oriented (OO) programming.
First of all, it’s declarative, not imperative, programming.
Applications are built from functions that return values, rather than from sequences of instructions that implement an algorithm step by step.
Functions can be assigned to variables, passed as arguments, and then returned from other functions. Functions and data structures can be grouped into modules.
Pure functions have no side effects and are considered to be a great help in reducing the number of bugs, especially within complex large applications.
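As an illustration, here is a short example of Haskell code: a pure function, and a higher-order function built from it by composition (an illustrative sketch, not tied to any particular codebase):

```haskell
-- A pure function: the same input always yields the same output, no side effects.
square :: Int -> Int
square x = x * x

-- Higher-order style: compose sum with map to build a new function.
sumOfSquares :: [Int] -> Int
sumOfSquares = sum . map square

main :: IO ()
main = print (sumOfSquares [1, 2, 3])  -- prints 14
```

Because square has no side effects, sumOfSquares is trivially easy to reason about and test.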
But pure functional languages did not really take off in a world dominated by object-oriented (OOP) languages. They have their niche applications, but none of them has become a top-5 mainstream language.
So why did I even write about this concept?
Traditionally object-oriented (OOP) languages such as Java and C# were augmented with functional programming constructs a few years ago.
Newer languages go even further; we analyzed Golang, Rust, and Julia in our previous articles. The prevailing themes are functions as first-class citizens, getting rid of inheritance, a lack of traditional polymorphism, and a return to data structs or records separated from actions. This caused a lot of tears among OO fanatics, but the times… they are changing.
On the other hand, functional programming fans are also not satisfied with the current state of affairs. Functional constructs affect almost all the languages popular today, but pure functional programming has not become a default paradigm.
Is it good or bad? What is next?
Obi-Wan once said, “Only a Sith deals in absolutes” (Star Wars: Revenge of the Sith).
The future belongs to fusion! Let me explain.
Modern languages always support more than one programming paradigm. I’d call it taking the best of the older (proven in practice) and newer ideas and combining them.
It also enables writing code that is closer in style to object orientation or to functional orientation, depending on … taste and the existing code base. Both ways are fine.
In these times of requiring ever greater flexibility, having multiple options is a great thing.
Maps, slices, dynamic arrays, records, structs, filters, map-reduce and other elements have become standard in the latest versions of all the popular programming languages and in the new languages too.
The focus has shifted back towards data structures and the functions that operate on them, which no longer have to pretend to be objects of classes.
Another trend is a big focus on resource efficiency. Resource-hogging platforms, such as Java, pale in comparison to lightweight newcomers such as Go, Rust, and Julia.
I really like this trend. This time it’s not because of the limitations of the hardware, but because of the economics of the cloud and functions as a service (FaaS). This very modern architectural shift has made the technology industry focus again on the efficiency of code and runtimes. In times when the technology sector is attempting to be as energy efficient as possible (e.g., carbon-neutrality goals), using efficient languages and runtimes is very welcome in support of that goal.
Of course, the old players (Java, DotNet) are evolving and catching up, and nobody is seriously writing them off. And not only because of billions of lines of existing code, but also because they remain totally relevant, at least for the near future.
Avenga uses multiple languages, from ones as old as COBOL to ones as modern as Go, architectures from monoliths to serverless, and all the major public clouds. The choice depends on the IT environment, and we always help find the best technology fit for the system being built or modernized.