Neuromorphic computers — moving towards supercomputers in our heads

We need a revolution!

What? Our computers keep getting faster, with more processing power, memory, and storage. They work well and improve all the time, so why talk about a revolution?

Because we have grown so dependent on technology, both in business and in our private lives, that we cannot imagine the technological revolution suddenly going stale.

For many years we have been enjoying Moore’s law, the observation that computing power doubles roughly every eighteen months. This exponential growth of computing power has enabled mainframe computers, minicomputers, desktop computers, laptops, tablets, and smartphones. The miniaturization of components has packed more and more processing power into our pockets and into our wearable devices, such as watches and fitness trackers.
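To get a feel for how that exponential compounds, here is a tiny back-of-the-envelope sketch in Python. It assumes the eighteen-month doubling period quoted above; the only thing the snippet adds is the arithmetic.

```python
# Back-of-the-envelope Moore's law growth, assuming one doubling every 18 months.
DOUBLING_PERIOD_YEARS = 1.5

def growth_factor(years: float) -> float:
    """How many times more computing power after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 30, 40):
    print(f"after {years} years: ~{growth_factor(years):,.0f}x")
# Roughly 100x after 10 years, about 10,000x after 20, around a million x after 30,
# and on the order of 100 million x after 40 years of uninterrupted doubling.
```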

On the other hand, we have millions of servers running the global cloud computing infrastructure, serving billions of users and connecting the entire globe.

The digital transformation would not have been possible without this rapid growth of computing power. Our world would look totally different if we were still using the CPUs of the 1980s.

We have gotten used to this rapid progress, and many of us have never questioned the assumption that next year there will be new, much more capable hardware, as usual, as always.

Well, here comes the sad news.

The end of an era

Moore’s law is no longer valid, as the growth of single-core computing power has slowed down. Despite enormous investments and the efforts of computer scientists all over the world, we are getting closer to the wall of what is possible within the current paradigm.

Why? Because of the laws of physics themselves. We are already very close to atomic-scale components, which are fragile and packed together so tightly that they produce a lot of heat, so the quality of the chips is hard to maintain. Bigger and bigger chips are being created, but then we collide with another physical limitation – the speed of light; electric signals travel extremely fast, but not at unlimited speed.

With the rapid rise of machine learning (ML), we also face another challenge: power consumption. Our journey towards true AI may be slowed down, or even become impossible, because of these power limitations.

→ Have a look at Avenga’s experience of chatting about the future of AI with . . . the GPT-3 driven AI

Some estimates show that by the year 2040 the power consumption of machine learning (ML) applications will exceed the power production of the entire planet.

How long do we have?

Estimates vary from ten to twenty years until we reach the limit of what is possible within the current paradigm.

If they are accurate, we have to act fast to avoid hitting a wall that could slow down progress.

Of course, this does not mean that computers will stop working; in reality they will continue to get better, faster, and more power efficient, but at a much slower rate of improvement than we see today. Further innovations will become more and more expensive, and many of them will simply be unacceptable from an economic point of view.

The von Neumann paradigm turns into a bottleneck

The great scientist John von Neumann created the “von Neumann architecture” for digital computing, which has worked from the 1950s until this very day and will for years to come. Basically, he divided a computer into:

  • Processing unit – which processes data and instructions (today usually a CPU, GPU, TPU, or ALU)
  • Memory unit – usually some kind of RAM
  • Input/output – to communicate with devices (ports, buses, etc.)

And this worked very well for many years, but recently it has become much harder to optimize things further.
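As a rough illustration of why that separation hurts, here is a hypothetical toy model in Python (not any real instruction set): every single operation has to shuttle its operands and its result across the memory-processor boundary, and that traffic, rather than the arithmetic itself, becomes the bottleneck.

```python
# Toy model of the von Neumann cycle: memory and processing are separate,
# so every operation pays for round trips over the memory bus.
memory = {"a": 2, "b": 3, "result": None}   # the memory unit
bus_transfers = 0                           # traffic over the CPU <-> memory bus

def load(address):
    """Fetch an operand from memory into the processing unit."""
    global bus_transfers
    bus_transfers += 1
    return memory[address]

def store(address, value):
    """Write a result from the processing unit back to memory."""
    global bus_transfers
    bus_transfers += 1
    memory[address] = value

# The "processing unit": a single addition already needs two loads and one store.
store("result", load("a") + load("b"))
print(memory["result"], "computed with", bus_transfers, "bus transfers")  # 5 ... 3 transfers
```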

The most efficient supercomputer

Imagine a supercomputer which has virtually unlimited memory, performs 1000 trillion operations per second, has 86 billion processing units with each one connected to 10,000 other units directly, and lasts for more than 70 years in the production environment. And, let’s not forget, it consumes only 20 watts of power.

It has software dating back millions of years with constant reprogramming and optimization, and it is able to adapt very quickly to changing environments. It can learn new types of objects with a single example and does not require petabytes of data to recognize cats.

This supercomputer is called the human brain.
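Taking the figures above at face value, a quick calculation shows just how extreme that energy efficiency is. The numbers are the ones quoted in this article; the snippet only adds the arithmetic.

```python
# Back-of-the-envelope brain statistics, using the figures quoted above.
neurons = 86e9                  # ~86 billion processing units
synapses_per_neuron = 10_000    # each connected to ~10,000 others
ops_per_second = 1e15           # "1000 trillion operations per second"
power_watts = 20                # ~20 W of power consumption

total_connections = neurons * synapses_per_neuron
ops_per_watt = ops_per_second / power_watts

print(f"total connections: ~{total_connections:.1e}")          # ~8.6e+14 synapses
print(f"efficiency: ~{ops_per_watt:.1e} operations per watt")  # ~5.0e+13 ops/s per watt
```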

Neuromorphic paradigm

The new paradigm, neuromorphic computing, which is expected to solve the von Neumann bottleneck, is to create computers that mimic how the human brain works.

Brain-inspired computers are supposed to have equivalents of neurons and of the synapses connecting them, and to be highly parallel and asynchronous.
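To make “equivalents of neurons and synapses” a little more concrete, here is a minimal leaky integrate-and-fire neuron in Python. It is a generic textbook model commonly used in spiking, brain-inspired systems, not the model of any particular chip: the neuron accumulates incoming charge, leaks a little each time step, and fires only when a threshold is crossed.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: integrate input, leak over time,
# and emit a spike whenever the membrane potential crosses a threshold.
LEAK = 0.9         # fraction of the membrane potential kept each time step
THRESHOLD = 1.0    # potential at which the neuron fires

def run_lif(input_currents):
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * LEAK + current   # integrate with leak
        if potential >= THRESHOLD:
            spikes.append(1)                     # fire a spike...
            potential = 0.0                      # ...and reset
        else:
            spikes.append(0)
    return spikes

print(run_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.8, 0.8]))  # -> [0, 0, 0, 1, 0, 0, 1]
```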

Digital neural networks

It’s natural to think about the current state of machine learning, which relies heavily on digital neural networks.

But… these neural networks are emulated on totally different digital architectures, which are not well suited to massively parallel processing and which keep memory and processing separated. Yes, there are important optimizations, but the progress they bring is limited at best. And the power consumption issues show clearly that there will be even more challenges to the traditional approach.

→ Avenga’s take: is Deep Learning hitting the wall?

Neuromorphic computers

These computers usually consist of two connected architectures. In part, they use traditional digital computer components for network operations, the user interface, etc. However, inside a neuromorphic computer there is a totally different core component: an electronic brain. It is artificial, yet a physical equivalent of neurons and synapses communicating with electric signals, and as a hardware emulation of the biological brain it is much closer to the original, faster, and far more energy efficient.

Memristors, which are in effect resistors with memory, are the key element of many such architectures, but we won’t dive into too many details here.
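For intuition only, here is a highly simplified toy model in Python, not the device equation of any real part: a memristor can be pictured as a resistor whose conductance drifts with the charge that has passed through it and is remembered afterwards, which is exactly the behaviour an artificial synapse needs if its “weight” is to be adjusted by activity.

```python
# Toy memristive synapse: the conductance (the "weight") shifts with the charge
# that has flowed through the device, and the device remembers it afterwards.
class ToyMemristor:
    def __init__(self, conductance=0.5, learning_rate=0.01):
        self.g = conductance        # conductance, used as a synaptic weight
        self.lr = learning_rate

    def apply_voltage(self, v, dt=1.0):
        current = self.g * v        # Ohm's law: I = G * V
        # The charge that flowed nudges the conductance up or down (clamped to [0, 1]).
        self.g = min(1.0, max(0.0, self.g + self.lr * current * dt))
        return current

syn = ToyMemristor()
for _ in range(5):
    syn.apply_voltage(+1.0)         # repeated positive pulses strengthen the synapse
print(f"conductance after potentiation: {syn.g:.3f}")
syn.apply_voltage(-1.0)             # a reverse pulse weakens it again
print(f"conductance after depression:  {syn.g:.3f}")
```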

Is it better for machine learning with neural networks?

Tests show performance gains of up to 10 million times for machine learning, because the physical architecture of the neuromorphic paradigm is much closer to the abstractions it runs: the neural networks are real neural networks, implemented in electronic hardware.

When to expect neuromorphic computing?

Tests? Yes, the first generation of neuromorphic computers is already here.

No matter how far they are from real human brain capabilities, the progress is visible and all major chip manufacturers are investing in the area.

How far are we? Quite far. For instance, today’s neuromorphic computers have around 2 million neurons, compared to almost 90 billion neurons in the brain. But that doesn’t mean they are useless, as matching the capacity of the human brain is not the only goal. Even much simpler organisms, such as insects, are better at recognizing images than the most advanced digital AI-driven cars, despite using far fewer neurons, so raw neuron count is not what matters most. And neuromorphic computing will be economically viable long before it reaches the 90 billion neuron threshold.

Future

The digital computers we use every day will stay around for a very long time. They are excellent at crunching numbers, business transactions, global internet communication, and model simulations; our brains already cannot match their capabilities in these areas, and it’s unlikely they ever will.

One of the ideas is to digitally augment our brains to combine the best of both worlds/paradigms. What? Wait a second. Really? In fact, we already do this with always-on, always-available smartphones and cloud ecosystems. In the future, they will become invisible and more directly connected to our brains.

A second generation of neuromorphic computers is already in the prototype phase, and remember, it’s just the beginning.

Are there any benefits of neuromorphic computing now?

Yes, there are benefits right now, for all of humankind. Neuromorphic computers are mainly adopted in neuroscience, helping patients with diseases related to brain function, including common depression. They help to build devices that fix damaged brain functions and assist with the testing of new types of neurological drugs. And neuromorphic computers help us understand our brains much better. The brain is still probably the biggest mystery of humankind, but progress is much faster because of neuromorphic computing.

What can I do now?

You? Please, do appreciate the supercomputer in your own head, keep it healthy, sleep well, provide it with oxygen, and stop being afraid of computer AI taking over humans. We will work together and our brains will get even more connected than our hands and eyes are to our smartphones.

On the other hand, in this fascinating journey towards the next big paradigm shift, we can embrace the progress in the digital computing area. We as people, and we as businesses, are still far away from reaching the full potential of digitalization. Being aware of the next big thing will not prevent us from benefiting from the digital technologies of the current era.
