3 important trends in AI/ML you might be missing – VentureBeat

According to a Gartner survey, 48% of global CIOs will deploy AI by the end of 2020. However, despite all the optimism around AI and ML, I continue to be a little skeptical. In the near future, I don't foresee any real inventions that will lead to seismic shifts in productivity and the standard of living. Businesses waiting for major disruption in the AI/ML landscape will miss the smaller developments.

Here are some trends that may be going unnoticed at the moment but will have big long-term impacts:

Gone are the days when on-premises versus cloud was a hot topic of debate for enterprises. Today, even conservative organizations are talking cloud and open source. No wonder cloud platforms are revamping their offerings to include AI/ML services.

With ML solutions becoming more demanding, the number of CPUs and the amount of RAM are no longer the only levers for speeding up or scaling. More algorithms than ever are being optimized for specific hardware, be it GPUs, TPUs, or Wafer Scale Engines. This shift towards specialized hardware to solve AI/ML problems will accelerate, and organizations will limit their use of CPUs to only the most basic problems. The risk of obsolescence will render generic compute infrastructure for ML/AI unviable. That's reason enough for organizations to switch to cloud platforms.

The increase in specialized chips and hardware will also lead to incremental algorithm improvements that leverage the hardware. While new hardware/chips may allow the use of AI/ML solutions that were earlier considered slow or impossible, a lot of the open-source tooling that currently powers generic hardware needs to be rewritten to benefit from the newer chips. Recent examples of algorithm improvements include Sideways, which speeds up deep learning training by parallelizing the training steps, and Reformer, which optimizes the use of memory and compute power.

I also foresee the focus on data privacy gradually extending to the privacy implications of ML models themselves. A lot of emphasis has been placed on what data we gather and how we use it. But ML models are not true black boxes: it is possible to infer the model's inputs from its outputs over time, which leads to privacy leakage. Challenges in data and model privacy will force organizations to embrace federated learning solutions. Last year, Google released TensorFlow Privacy, a framework that works on the principle of differential privacy and the addition of noise to obscure inputs. With federated learning, a user's data never leaves their device/machine. These machine learning models are smart enough, and have a small enough memory footprint, to run on smartphones and learn from the data locally.
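The noise-addition idea behind differentially private training is simple to illustrate: clip each example's gradient and add calibrated noise before updating the model, so no single record can be reverse-engineered from the update. Below is a minimal, library-free sketch of one such training step in NumPy; the linear model, function name, and noise settings are illustrative assumptions, not TensorFlow Privacy's actual API.

```python
import numpy as np

def dp_sgd_step(weights, X_batch, y_batch, lr=0.1,
                clip_norm=1.0, noise_multiplier=1.1):
    """One differentially private SGD step for a toy linear regression model.

    Each example's gradient is clipped to `clip_norm`, then Gaussian noise
    scaled by `noise_multiplier` is added to the summed gradient so that
    no individual example dominates (or can be inferred from) the update.
    """
    grads = []
    for x, y in zip(X_batch, y_batch):
        # Per-example gradient of squared error for a linear model.
        pred = x @ weights
        g = 2 * (pred - y) * x
        # Clip the gradient's L2 norm to bound any one example's influence.
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip_norm / (norm + 1e-12))
        grads.append(g)
    summed = np.sum(grads, axis=0)
    # Add noise calibrated to the clipping bound (the differential-privacy step).
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=weights.shape)
    noisy_mean = (summed + noise) / len(X_batch)
    return weights - lr * noisy_mean

# Toy usage: privately fit y ~ 3*x0 - 2*x1 on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 2))
y = X @ np.array([3.0, -2.0]) + rng.normal(scale=0.1, size=256)
w = np.zeros(2)
for _ in range(200):
    idx = rng.choice(len(X), size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])
print(w)  # roughly [3, -2], with some jitter from the added noise
```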

Usually, the basis for asking for a user's data has been to personalize that individual's experience. For example, Google Mail uses an individual user's typing behavior to provide autosuggestions. But what about data or models that improve the experience not just for that individual but for a wider group of people? Would people be willing to share their trained model (not their data) to benefit others? There is an interesting business opportunity here: paying users for model parameters that come from training on the data on their local device, and for the local computing power used to train those models (for example, on their phone when it is relatively idle).
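Sharing trained parameters rather than raw data is essentially federated averaging: each device trains on its own data and ships back only weights, which a server combines. A minimal sketch under simplifying assumptions, using a plain linear model and NumPy with illustrative function names rather than any particular framework's API:

```python
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=5):
    """Train on a single user's data; only the updated weights leave the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w

def federated_average(global_weights, clients):
    """One communication round: average client updates, weighted by dataset size."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    sizes = np.array([len(X) for X, _ in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Toy usage: three "devices", each holding private data from the same task.
rng = np.random.default_rng(1)
true_w = np.array([1.5, -0.5])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.05, size=n)))

w = np.zeros(2)
for _ in range(20):  # 20 rounds of federated averaging
    w = federated_average(w, clients)
print(w)  # approaches [1.5, -0.5] without any raw data leaving a client
```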

Currently, organizations are struggling to productionize models for scalability and reliability. The people who write the models are not necessarily experts in deploying them with model safety, security, and performance in mind. Once machine learning models become an integral part of mainstream and critical applications, this will inevitably lead to attacks on models similar to the denial-of-service attacks mainstream apps currently face. We've already seen some low-tech examples of what this could look like: making a Tesla speed up instead of slowing down, switch lanes, stop abruptly, or turn on its wipers without proper triggers. Imagine the impact such attacks could have on financial systems, healthcare equipment, and other systems that rely heavily on AI/ML.

Currently, adversarial attacks are largely confined to academia, where they are used to better understand the implications of models. But in the not-too-distant future, attacks on models will be for profit, driven by competitors who want to show they are somehow better, or by malicious hackers who may hold you to ransom. For example, new cybersecurity tools today rely on AI/ML to identify threats like network intrusions and viruses. What if an attacker can trigger fake threats? What would be the cost of separating real alerts from fake ones?
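One common way such an attack works is the fast gradient sign method: nudge each input feature by a small amount in the direction that most increases the model's error, flipping the prediction with a change too small to stand out. Below is a minimal sketch against a toy logistic-regression "threat detector" in NumPy; the weights, features, and epsilon are made-up values for illustration, not a real detector.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon=0.3):
    """Fast-gradient-sign perturbation for a logistic-regression classifier.

    Moves every feature by +/- epsilon in the direction that increases the
    loss for the true label, pushing the prediction across the boundary.
    """
    p = sigmoid(w @ x + b)
    # Gradient of binary cross-entropy w.r.t. the input x is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

# Toy "intrusion detector": weights learned elsewhere, hard-coded here.
w = np.array([2.0, -1.0, 0.5])
b = -0.2

benign = np.array([0.1, 0.8, 0.0])
print(sigmoid(w @ benign + b))      # ~0.31 -> classified "benign"

adversarial = fgsm_perturb(benign, w, b, y_true=0.0, epsilon=0.3)
print(sigmoid(w @ adversarial + b)) # ~0.56 -> now classified "threat"
```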

To counter such threats, organizations need to put more emphasis on model verification to ensure robustness. Some organizations are already using adversarial networks to test deep neural networks. Today, we hire external experts to audit network security, physical security, and so on. Similarly, we will see the emergence of a new market of model testing and model security experts, who will test, certify, and perhaps take on some liability for model failures.
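A first step toward that kind of verification can be automated: measure how far accuracy drops when every test input is perturbed within a small budget, and gate release on a threshold. Below is a minimal sketch of such a harness, using a toy linear classifier and a worst-case perturbation as stand-ins for a real model and attack; the threshold, names, and data are illustrative assumptions.

```python
import numpy as np

def robust_accuracy(predict, perturb, X_test, y_test, epsilon):
    """Accuracy when every test input is adversarially perturbed within epsilon."""
    correct = sum(int(predict(perturb(x, y, epsilon)) == y)
                  for x, y in zip(X_test, y_test))
    return correct / len(X_test)

def certify(predict, perturb, X_test, y_test, epsilon=0.2, min_robust_acc=0.9):
    """Pass/fail gate a model-testing service might run before sign-off."""
    clean = np.mean([predict(x) == y for x, y in zip(X_test, y_test)])
    robust = robust_accuracy(predict, perturb, X_test, y_test, epsilon)
    print(f"clean accuracy:  {clean:.2%}")
    print(f"robust accuracy: {robust:.2%} (epsilon={epsilon})")
    return robust >= min_robust_acc

# Toy model under test: a fixed linear classifier.
w, b = np.array([1.0, -1.0]), 0.0
predict = lambda x: int(w @ x + b > 0)

# Worst-case perturbation for a linear model: push the score toward the wrong side.
def perturb(x, y, eps):
    return x - eps * np.sign(w) if y == 1 else x + eps * np.sign(w)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = (X @ w + b > 0).astype(int)   # labels defined by the model itself
print("certified:", certify(predict, perturb, X, y))
```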

Organizations aspiring to drive value through their AI investments need to revisit the implications for their data pipelines. The trends I've outlined above underscore the need for organizations to implement strong governance around their AI/ML solutions in production. It's too risky to assume your AI/ML models are robust, especially when they're left to the mercy of platform providers. The need of the hour is in-house experts who understand why models work or don't work. And that's one trend that's here to stay.

Sudharsan Rangarajan is Vice President of Engineering at Publicis Sapient.

