Deep tech may stumble on insufficient computing power

It appears that many of the "deep tech" algorithms the world is excited about will run into physical barriers before they reach their true promise. Take Bitcoin. A cryptocurrency based on blockchain technology, it has a sophisticated algorithm that grows in complexity as new Bitcoin are minted through a digital process called "mining". For a simple description of Bitcoin and blockchain, you could refer to an earlier Mint column of mine.

Bitcoin's assurance of validity is achieved by its "proving" algorithm, which is designed to continually increase in mathematical complexity, and hence the computing power needed to process it, every time a Bitcoin is mined. Individual miners continually do work to assess the validity of each Bitcoin transaction and confirm whether it adheres to the cryptocurrency's rules. They earn small amounts of new Bitcoin for their efforts. The difficulty of getting several miners to agree on the same history of transactions (and thereby validate them) is managed by having those same miners race one another to create a valid "block".
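To make the mining race concrete, here is a minimal sketch of a proof-of-work loop in Python. It is a toy illustration, not Bitcoin's actual protocol: the block data, the difficulty setting and the hashing of a plain string are all assumptions for the example.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int) -> tuple[int, str]:
    """Toy proof-of-work: find a nonce whose SHA-256 digest falls below a target."""
    target = 2 ** (256 - difficulty_bits)   # more difficulty bits = smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:        # a valid "block" has been found
            return nonce, digest
        nonce += 1                          # otherwise, keep guessing

# Each extra difficulty bit roughly doubles the expected number of guesses,
# which is why real mining hardware consumes so much energy.
nonce, digest = mine("alice pays bob 1 BTC", difficulty_bits=20)
print(nonce, digest)
```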

The machines that perform this work consume huge amounts of energy. According to Digiconomist.net, each transaction uses almost 544 kWh of electrical energy, enough to provide for the average US household for almost three weeks. The total energy consumption of the Bitcoin network alone is about 64 TWh, enough to provide for all the energy needs of Switzerland. The website also tracks the carbon footprint and electronic waste left behind by Bitcoin, which are both startlingly high. This exploitation of resources is unsustainable in the long run, and directly impacts global warming. At a more mundane level, the costs of mining Bitcoin can outstrip the rewards.
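A quick back-of-the-envelope check bears this out; the figure of roughly 29 kWh a day for an average US household is my own assumption, not from the article:

```python
# Rough sanity check of the per-transaction figure cited above.
energy_per_txn_kwh = 544            # Digiconomist estimate per Bitcoin transaction
household_kwh_per_day = 29          # assumed average US household consumption
days = energy_per_txn_kwh / household_kwh_per_day
print(f"{days:.0f} days, about {days / 7:.1f} weeks")   # roughly 19 days, close to three weeks
```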

But cryptocurrencies are not the world's only hogs of computing power. Many Artificial Intelligence (AI) deep learning "neural" algorithms also place crushing demands on the planet's digital processing capacity.

A neural network" attempts to mimic the functioning of the human brain and nervous system in AI learning models. There are many of these. The two most widely used are recursive neural networks, which develop a memory pattern, and convolutional neural networks, which develop spatial reasoning. The first is used for tasks such as language translation, and the second for image processing. These use enormous computing power, as do other AI neural network models that help with deep learning".

Frenetic research has been going into new chip architectures that can handle the ever-increasing complexity of AI models more efficiently. Today's computers are "binary", meaning they depend on the two simple states of a transistor bit, which can be either on or off, and thus either a 0 or a 1 in binary notation. Newer chips try to achieve efficiency through other architectures, which will ostensibly help binary computers execute algorithms more efficiently. Many of these chips are designed as graphics processing units (GPUs), since they are more capable of dealing with AI's demands than the central processing units that are the mainstay of most devices.
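The reason GPUs fit the job is that most of a neural network's work reduces to large matrix multiplications, which a GPU spreads across thousands of cores at once. A minimal sketch, with arbitrary matrix sizes assumed for illustration:

```python
import torch

# The core workload of deep learning: multiplying large matrices.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# Run the same operation on a GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
product = a.to(device) @ b.to(device)    # identical maths, massively parallelised on a GPU
print(device, product.shape)
```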

In a parallel attempt to get beyond binary computing, firms such as D-Wave, Google and IBM are working on a different class of machines called quantum computers, which make use of the so-called "qubit", with each qubit able to hold 0 and 1 values simultaneously. This enhances computing power. The problem with these, though, is that they are far from seeing widespread adoption. First off, they are not yet sophisticated enough to manage today's AI models efficiently, and second, they need to be maintained at temperatures close to absolute zero (-273° Celsius). This refrigeration, in turn, uses up enormous amounts of electrical energy.
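In the standard textbook notation (an addition of mine, not from the column), a single qubit's state is a weighted combination of both classical values, and a register of n qubits is described by exponentially many such weights at once:

```latex
% One qubit: a superposition of 0 and 1, with probability weights summing to one.
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
% An n-qubit register carries 2^n amplitudes simultaneously, the source of the extra power.
|\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle, \qquad \sum_{x} |c_x|^2 = 1
```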

Clearly, advances in both binary chip design and quantum computing are not keeping pace with the increasing sophistication of deep tech algorithms.

In a research paper, Neil Thompson of the Massachusetts Institute of Technology and his co-authors analyse five widely used AI application areas and show that advances in each of these fields come at a huge cost, since they rely on massive increases in computing capability. The authors argue that extrapolating this reliance forward reveals that current progress is rapidly becoming economically, technically and environmentally unsustainable.

Sustained progress in these applications will require changes to their deep learning algorithms and/or moving away from deep learning to other machine learning models that allow greater efficiency in their use of computing capability. The authors further argue that we are currently in an era where improvements in hardware performance are slowing, which means that this shift away from deep neural networks is now all the more urgent.

Thompson et al. argue that the economic, environmental and purely technical costs of providing all this additional computing power will soon constrain deep learning and a range of applications, making key milestones impossible to achieve if current trajectories hold.

We are designing increasingly sophisticated algorithms, but we don't yet have computers that are sophisticated enough to match their demands efficiently. Without significant changes in how AI models are built, the usefulness of AI and other forms of deep tech is likely to hit a wall soon.

Siddharth Pai is founder of Siana Capital, a venture fund management company focused on deep science and tech in India
