MIT researchers warn that deep learning is approaching computational limits – VentureBeat

We're approaching the computational limits of deep learning. That's according to researchers at the Massachusetts Institute of Technology, MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia, who found in a recent study that progress in deep learning has been strongly reliant on increases in compute. It's their assertion that continued progress will require dramatically more computationally efficient deep learning methods, either through changes to existing techniques or via new, as-yet-undiscovered methods.

"We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive," the coauthors wrote. "Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible."

Deep learning is the subfield of machine learning concerned with algorithms inspired by the structure and function of the brain. These algorithms, called artificial neural networks, consist of functions (neurons) arranged in layers that transmit signals to other neurons. The signals, which are the product of input data fed into the network, travel from layer to layer and slowly tune the network, in effect adjusting the synaptic strength (weights) of each connection. The network eventually learns to make predictions by extracting features from the data set and identifying cross-sample trends.
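For readers who want a concrete picture, here is a minimal sketch of that layer-to-layer signal flow and weight adjustment in plain NumPy. The toy data, network size, and learning rate are illustrative assumptions, not anything from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 input features, 1 target each (assumed for illustration).
X = rng.normal(size=(4, 3))
y = rng.normal(size=(4, 1))

# Two layers of weights: the "synaptic strengths" the network tunes.
W1 = rng.normal(size=(3, 5))
W2 = rng.normal(size=(5, 1))

lr = 0.01  # learning rate (assumed)
for step in range(1000):
    # Forward pass: signals travel from layer to layer.
    h = np.tanh(X @ W1)   # hidden-layer activations
    pred = h @ W2         # output layer
    err = pred - y

    # Backward pass: adjust each connection's weight to reduce the error.
    grad_W2 = h.T @ err
    grad_h = (err @ W2.T) * (1 - h**2)  # tanh derivative
    grad_W1 = X.T @ grad_h
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```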

The researchers analyzed 1,058 papers from the preprint server arXiv.org, as well as other benchmark sources, to understand the connection between deep learning performance and computation, paying particular mind to domains including image classification, object detection, question answering, named entity recognition, and machine translation. They performed two separate analyses of computational requirements reflecting the two types of information available: (1) computation per network pass, or the number of floating-point operations required for a single pass through a given model, and (2) hardware burden, or the computational capability of the hardware used to train the model, calculated as the number of processors multiplied by the computation rate and training time.
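The second measure, hardware burden, is a straightforward product; here is a sketch of the calculation, with illustrative figures rather than numbers from any paper in the sample:

```python
# Hardware burden = number of processors x computation rate x training time.
# All figures below are made up for illustration.
num_processors = 8                 # e.g., 8 accelerator chips
flops_per_second = 120e12          # per-processor peak throughput (FLOP/s)
training_seconds = 14 * 24 * 3600  # two weeks of training

hardware_burden = num_processors * flops_per_second * training_seconds
print(f"Hardware burden: {hardware_burden:.3e} FLOPs")
```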

The coauthors report "highly statistically significant" slopes and "strong explanatory power" for all benchmarks except machine translation from English to German, where there was little variation in the computing power used. Object detection, named entity recognition, and machine translation in particular showed large increases in hardware burden with relatively small improvements in outcomes, with computational power explaining 43% of the variance in image classification accuracy on the popular open source ImageNet benchmark.
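Slopes and explained variance of this kind come from fitting a line in log-log space. A sketch of that style of fit on synthetic stand-in points (the actual benchmark numbers are the paper's and are not reproduced here):

```python
import numpy as np

# Synthetic stand-ins: compute used (FLOPs) and resulting error rates.
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
error = np.array([0.30, 0.22, 0.17, 0.13, 0.10])

# Fit log(error) = slope * log(compute) + intercept.
x, y = np.log10(compute), np.log10(error)
slope, intercept = np.polyfit(x, y, 1)

# R^2: the share of variance in error explained by compute.
residuals = y - (slope * x + intercept)
r_squared = 1 - residuals.var() / y.var()
print(f"slope={slope:.3f}, R^2={r_squared:.3f}")
```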

The researchers estimate that three years of algorithmic improvement is equivalent to a 10 times increase in computing power. "Collectively, our results make it clear that, across many areas of deep learning, progress in training models has depended on large increases in the amount of computing power being used," they wrote. "Another possibility is that getting algorithmic improvement may itself require complementary increases in computing power."
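As a quick arithmetic check: if three years of algorithmic progress substitutes for a tenfold compute increase, each year of algorithmic progress is worth roughly a 2.15x compute multiplier:

```python
# If 3 years of algorithmic improvement ~ 10x compute, the implied
# per-year multiplier is the cube root of 10.
years = 3
total_multiplier = 10
per_year = total_multiplier ** (1 / years)
print(f"~{per_year:.2f}x effective compute per year of algorithmic progress")
# ~2.15x effective compute per year of algorithmic progress
```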

In the course of their research, the researchers also extrapolated the projections to understand the computational power needed to hit various theoretical benchmarks, along with the associated economic and environmental costs. According to even the most optimistic of calculations, reducing the image classification error rate on ImageNet would require 10⁵ times more computing.
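One way such extrapolations are typically done is to fit a power law relating error to compute and solve for the compute a target error implies. The sketch below illustrates that style of calculation with an assumed exponent, not the study's fitted value:

```python
# If error scales as compute^(-k), moving from the current error to a
# target error requires (current/target)^(1/k) times more compute.
# All values below are illustrative assumptions.
k = 0.5
current_error = 0.115   # hypothetical current benchmark error
target_error = 0.05     # hypothetical target

extra_compute = (current_error / target_error) ** (1 / k)
print(f"Requires ~{extra_compute:.1f}x more compute")
```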

To their point, a Synced report estimated that the University of Washington's Grover fake news detection model cost $25,000 to train in about two weeks. OpenAI reportedly racked up a whopping $12 million to train its GPT-3 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.

In a separate report last June, researchers at the University of Massachusetts at Amherst concluded that the amount of power required for training and searching a certain model entails the emission of roughly 626,000 pounds of carbon dioxide. That's equivalent to nearly five times the lifetime emissions of the average U.S. car.

"We do not anticipate that the computational requirements implied by the targets … the hardware, environmental, and monetary costs would be prohibitive," the researchers wrote. "Hitting this in an economical way will require more efficient hardware, more efficient algorithms, or other improvements such that the net impact is this large a gain."

The researchers note there's historical precedent for deep learning improvements at the algorithmic level. They point to the emergence of hardware accelerators like Google's tensor processing units, field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs), as well as attempts to reduce computational complexity through network compression and acceleration techniques. They also cite neural architecture search and meta-learning, which use optimization to find architectures that retain good performance on a class of problems, as avenues toward computationally efficient methods of improvement.
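As one concrete example of network compression, the snippet below sketches magnitude-based weight pruning, a common compression technique, though not necessarily one of the specific methods the paper surveys:

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so `sparsity` fraction become zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

W = np.random.default_rng(0).normal(size=(256, 256))
W_pruned = prune_by_magnitude(W, sparsity=0.9)  # keep only the largest 10%
print(f"Nonzero weights remaining: {np.count_nonzero(W_pruned) / W.size:.0%}")
```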

Indeed, an OpenAI study suggests that the amount of compute needed to train an AI model to the same performance on classifying images in ImageNet has been decreasing by a factor of 2 every 16 months since 2012. Google's Transformer architecture surpassed seq2seq, a previous state-of-the-art model also developed by Google, with 61 times less compute three years after seq2seq's introduction. And DeepMind's AlphaZero, a system that taught itself from scratch how to master the games of chess, shogi, and Go, took eight times less compute to match an improved version of the system's predecessor, AlphaGo Zero, one year later.
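That halving rate compounds quickly; a quick check of what a factor of 2 every 16 months implies over, say, seven years:

```python
# Compute needed halves every 16 months (the rate from the OpenAI study above).
months = 7 * 12
halvings = months / 16
reduction = 2 ** halvings
print(f"~{reduction:.0f}x less compute needed after {months} months")
# ~38x less compute needed after 84 months
```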

"The explosion in computing power used for deep learning models has ended the AI winter and set new benchmarks for computer performance on a wide range of tasks. However, deep learning's prodigious appetite for computing power imposes a limit on how far it can improve performance in its current form, particularly in an era when improvements in hardware performance are slowing," the researchers wrote. "The likely impact of these computational limits is forcing machine learning towards techniques that are more computationally efficient than deep learning."
