It will come as no surprise that computing power is in demand, and increasingly so. From the 1960s until around 2012, this demand tracked the chip advances described by Moore's law, doubling roughly every 24 months. Yet with the recent advances in AI and data-centric computing, demand for compute now doubles every two months. At the same time, the historical trend of making transistors smaller and more energy-efficient is slowing down.
This has left us at an impasse. We're spending more power and more money to keep up. We're building huge farms of data centres that pump dangerous levels of carbon into the atmosphere. And we're approaching a point where the gains we're making with algorithms are set to plateau, which directly threatens innovation and our future way of life.
Something's got to give.
Instead of improving on what's gone before, we need to take a new approach. We need to go back to the drawing board - and it's up to us as innovators and investors to help make this happen.
The impending impasse
At the heart of the computing power challenge is how today's chips are designed, and how they handle the transfer of data. Transistors have been the basic building blocks of modern electronics for decades, and during that time they've become smaller, more efficient, and more affordable. This has proved useful for countless applications and devices.
However, when it comes to the development of AI, transistor scaling alone cannot support further rapid progress, nor can it deliver all the innovations artificial intelligence promises.
To handle the many parallel operations needed to run today's advanced AI algorithms, computers must frequently move data between memory and compute units, a process that, on current computer architectures, is highly inefficient and consumes vast amounts of energy.
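To make the bottleneck concrete, here is an illustrative sketch (not from the article, and deliberately simplified) of the multiply-accumulate loop at the heart of neural-network inference, counting how many operand fetches a conventional architecture must make before any arithmetic can happen:

```python
# Illustrative sketch: in a conventional (von Neumann) machine, each weight
# and activation must be fetched from memory before the compute unit can
# use it. For large models this data movement, not the arithmetic itself,
# dominates energy consumption.

def mac_with_transfer_count(weights, activations):
    """Return the dot product and the number of operand fetches it required."""
    total = 0.0
    transfers = 0
    for w, a in zip(weights, activations):
        transfers += 2          # one fetch for the weight, one for the activation
        total += w * a          # the actual useful work
    return total, transfers

result, transfers = mac_with_transfer_count([0.5, -1.0, 2.0], [1.0, 2.0, 3.0])
# An n-element dot product forces 2n operand fetches across the memory bus.
```

The function names and numbers here are hypothetical; the point is only that the fetch count grows with the model while the arithmetic per fetch stays constant.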
To navigate this challenge, leading firms are developing a range of solutions across the technology stack, from dedicated GPUs and ASICs built for AI, to optimised algorithms, to an ever-increasing reliance on cloud computing. While each step provides marginal improvements and short-term relief, these approaches aren't sustainable over time. They cannot keep pace with the demand for computing power, nor do they suit many of the futuristic uses of AI, such as self-driving cars and robotic surgery, where there is no room for the latency associated with cloud computing.
The long-term solution is to delve deeper. To break the cycle, we, as investors and as a society, need to invest in innovations that effectively rewrite the rule book on how to build electronics. Namely, in a new building block called the memristor.
Building from atoms up
First theorised 50 years ago, memristors (a portmanteau of memory and resistor) can both store and process data with superior energy efficiency. Various memristor technologies have been developed, but they've failed to reach the mainstream because of how difficult they are to integrate with existing systems. That changed when scientists at University College London built memristors out of silicon oxide, one of the cheapest, most widely used and best understood materials in the microelectronics industry. These memristors are much faster and more energy-efficient than Flash, can achieve energy efficiencies orders of magnitude higher than current GPUs, and can operate as analogue devices or as multiply-accumulate engines ideally suited to deep learning accelerators.
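The multiply-accumulate idea is worth unpacking. In a memristor crossbar, stored conductances act as weights: applying input voltages to the rows multiplies each voltage by each conductance via Ohm's law, and the resulting currents sum naturally on the column wires, so one physical read-out yields a whole vector-matrix product with no shuttling of weights to a separate compute unit. A minimal sketch of that computation (with illustrative numbers, not measured device values):

```python
# Hedged sketch of the analogue multiply-accumulate a memristor crossbar
# performs: I = G * V per device (Ohm's law), currents summed per column
# wire (Kirchhoff's current law). Compute happens where the data is stored.

def crossbar_vmm(voltages, conductances):
    """Vector-matrix multiply as a crossbar would compute it.
    voltages: inputs applied to the row wires.
    conductances: conductances[row][col] is the device at (row, col).
    Returns the total current collected on each column wire."""
    n_cols = len(conductances[0])
    currents = [0.0] * n_cols
    for v, row in zip(voltages, conductances):
        for col, g in enumerate(row):
            currents[col] += g * v   # per-device current, summed per column
    return currents

# One read-out computes the whole product in place:
out = crossbar_vmm([1.0, 0.5], [[0.2, 0.4],
                                [0.6, 0.8]])
```

This is exactly the operation that dominates deep learning workloads, which is why an in-memory analogue version of it is so attractive as an accelerator building block.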
They are a great example of a fundamental breakthrough that could make a world-changing difference. There is, however, a problem.
Software ate the world. Hardware is biting back
Taking a fundamental science breakthrough like this from a university laboratory to commercial reality takes time and guts. It needs a deeptech investment community prepared to take a long-term view and provide patient capital in a high-risk environment. That used to be what venture capital did, but over the last decade investment capital has migrated towards lower-risk, more scalable software opportunities, leaving an investment environment unsuited to powering the next generation of computing hardware.
The good news is that the tide is turning. The impending impasse, coupled with the inflation seen in the software investment market, is seeing investors return to deeptech in search of value. Across Europe, more capital is being raised by deeptech-focused funds willing to invest in high-risk, high-reward hardware.
In the UK, and increasingly in Europe, we've seen our universities develop their own ecosystem of funds such as the UCL Technology Fund (UCLTF) that specialise in funding those first, most difficult and risky steps.
But we still have a long way to go. For many 'deeptech' funds, semiconductors are still too deep. Very few well-funded startups are developing the new building blocks of compute (Graphcore being a notable exception), and with such a strong set of universities and the talent pool left by historical successes like ARM, we as European investors can do better.
If we don't, the rest of the world will make these breakthroughs. Promising technologies will wither on the vine and further reduce Europe's influence in the global semiconductor market. The time is now to take European deeptech really deep.
David Grimm is an Investment Director at the UCL Technology Fund, which invests in the commercialisation of life and physical science innovations from UCL.
Adnan Mehonic is a Co-Founder of Intrinsic, Assistant Professor in Nanoelectronics at UCL and a Royal Academy of Engineering Research Fellow. Intrinsic is the startup behind the silicon oxide memristor breakthrough and is proving its tech in commercial fabrication facilities, backed by investment from the UCL Technology Fund.