Artificial Intelligence and Machine Learning Are Headed for a Major Bottleneck: Here's How We Solve It – Datanami


Artificial intelligence (AI) and machine learning (ML) are already changing the world, but the innovations we're seeing so far are just a taste of what's around the corner. We are on the precipice of a revolution that will affect every industry, from business and education to healthcare and entertainment. These new technologies will help solve some of the most challenging problems of our age and bring changes comparable in scale to the Renaissance, the Industrial Revolution, and the electronic age.

While the printing press, fossil fuels, and silicon drove these past epochal shifts, a new generation of algorithms that automate tasks previously thought impossible will drive the next revolution. These new technologies will allow self-driving cars to identify traffic patterns, automate energy balancing in smart power grids, enable real-time language translation, and pioneer complex analytical tools that detect cancer before any human could ever perceive it.

Well, that's the promise of the AI and ML revolution, anyway. And to be clear, these things are all within our theoretical reach. But what the tech optimists tend to leave out is that our path to the bright, shiny AI future has some major potholes in it. One problem is looming especially large. We call it the dirty secret of AI and ML: right now, AI and ML don't scale well.

Scale, the ability to expand a single machine's capability to broader, more widespread applications, is the holy grail of every digital business. And right now, AI and ML don't have it. While algorithms may hold the keys to our future, when it comes to creating them, we're currently stuck in a painstaking, brute-force methodology.


Creating AI and ML algorithms isn't the hard part anymore. You tell them what to learn, feed them the right data, and they learn how to parse novel data without your help. The labor-intensive piece comes when you want the algorithms to operate in the real world. Left to its own devices, an AI model will consume as much time, compute, and bandwidth as you give it. To be truly effective, these algorithms need to run lean, especially now that businesses and consumers are showing an increasing appetite for low-latency operations at the edge. Getting your AI to run in an environment where speed, compute, and bandwidth are all constrained is the real magic trick here.

Thus, optimizing AI and ML algorithms has become the signature skill of today's AI researchers and engineers. It's expensive in terms of time, resources, money, and talent, but essential if you want performant AI. However, today, the primary way we're addressing the problem is via brute force: throwing bodies at the problem. Unfortunately, the demand for these algorithms is exploding while the pool of qualified AI engineers remains relatively static. Even if it were economically feasible to hire them, there are not enough trained AI engineers to work on all the projects that will take the world to the resplendent AI/sci-fi future we've been promised.

But all is not lost. There is a way for us to get across the threshold and achieve the exponential AI advances we require. The answer to scaling AI and ML algorithms is actually a simple idea: train ML algorithms to tune ML algorithms, an approach the industry calls Automated Machine Learning, or AutoML. Tuning AI and ML algorithms may be more of an art than a science, but then again, so are driving, photo retouching, and instant language translation, all of which are addressable via AI and ML.
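To make the idea concrete, here is a minimal sketch of one learning loop tuning another, using scikit-learn's RandomizedSearchCV. The dataset, model, and search space are arbitrary stand-ins chosen for illustration, not anything drawn from the authors' work, but the shape of the loop is the essence of AutoML: an outer algorithm searches over configurations of an inner one instead of an engineer doing so by hand.

```python
# Minimal AutoML sketch: an outer search loop tunes an inner learner.
# The dataset, model, and hyperparameter ranges below are illustrative only.
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

search = RandomizedSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(3, 20),
        "min_samples_leaf": randint(1, 10),
    },
    n_iter=20,   # search budget: number of candidate configurations to evaluate
    cv=3,        # cross-validation keeps the tuner from overfitting one split
    random_state=0,
)
search.fit(X, y)  # the "ML tuning ML" step: each candidate is trained and scored automatically
print(search.best_params_, round(search.best_score_, 3))
```

Production AutoML systems extend this same pattern with smarter search strategies, such as Bayesian optimization or evolutionary methods, and much richer search spaces, but the division of labor is the same.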


AutoML will allow us to scale AI optimization so it can achieve full adoption throughout computing, including at the edge where latency and compute are constrained. By using hardware awareness in AutoML, we can push performance even further. We believe this approach will also lead to a world where the barrier to entry for AI programmers is lower, allowing more people to enter the field, and making better use of high-level programmers. It's our hope that the resulting shift will alleviate the current talent bottleneck the industry is facing.

Over the next few years, we expect to automate various AI optimization techniques such as pruning, distillation, neural architecture search, and others, to achieve 15-30x performance improvements. Google's EfficientNet research has also yielded very promising results in the field of auto-scaling convolutional neural networks. Another example is DataRobot's AutoML tools, which can be applied to automating the tedious and time-consuming manual work required for data preparation and model selection.
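As a concrete, if heavily simplified, example of one technique on that list, the sketch below performs magnitude pruning on a single weight matrix: the smallest-magnitude weights are zeroed so the layer can run leaner on constrained edge hardware. The matrix and sparsity target are made up for illustration; real pipelines prune whole networks layer by layer and fine-tune afterwards, and choosing those per-layer sparsity levels automatically is exactly the kind of decision AutoML is meant to take over.

```python
# Toy magnitude pruning: zero out the smallest-magnitude weights in a layer.
# The weight matrix and the 90% sparsity target are illustrative stand-ins.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of `weights` with roughly `sparsity` fraction of the
    smallest-magnitude entries set to zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 256))          # stand-in for one dense layer's weights
sparse_layer = magnitude_prune(layer, 0.9)   # keep only ~10% of the weights
print(f"fraction zeroed: {np.mean(sparse_layer == 0):.2%}")
```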

There is one last hurdle to cross, though. AI automates tasks we always assumed we needed humans to do, offloading these difficult feats to a computer programmed by a clever AI engineer. The dream of AutoML is to offload the work another level, using AI algorithms to tune and create new AI algorithms. But there's no such thing as a free lunch. We will now need even more highly skilled programmers to develop the AutoML routines at the meta-level. The good news is, we think we've got enough of them to do this.

But it's not all about growing the field from the top. This innovation not only expands the pool of potential programmers, allowing lower-level programmers to create highly effective AI; it also provides a de facto training path to move them into higher- and higher-skilled positions. This in turn will create a robust talent pipeline that can supply the industry for years to come and ensure we have a good supply of hardcore AI developers for when we hit the next bottleneck. Because yes, there may come a day when we need Auto-AutoML, but for now, we want to take things one paradigm-shifting innovation at a time. It may sound glib, but we believe it wholeheartedly: the answer to the problems of AI is more AI.

About the authors: Nilesh Jain is a Principal Engineer at Intel Labs, where he leads the Emerging Visual/AI Systems Research Lab. He focuses on developing innovative technologies for edge/cloud systems for emerging workloads. His current research interests include visual computing and hardware-aware AutoML systems. He received an M.Sc. degree from the Oregon Graduate Institute/OHSU. He is also an IEEE Senior Member, has published over 15 papers, and holds over 20 patents.

Ravi Iyer is an Intel Fellow at Intel Labs, where he leads the Emerging Systems Lab. His research interests include developing innovative technologies, architectures, and edge/cloud systems for emerging workloads. He has published over 150 papers and has over 40 patents granted. He received his Ph.D. in Computer Science from Texas A&M. He is also an IEEE Fellow.

Related Items:

Why Data Scientists and ML Engineers Shouldn't Worry About the Rise of AutoML

AutoML Tools Emerge as Data Science Difference Makers

What is Feature Engineering and Why Does It Need To Be Automated?

