IBM And MLCommons Show How Pervasive Machine Learning Has Become


This week IBM announced its latest Z-series mainframe and MLCommons released its latest round of benchmark results. The two announcements had something in common: Machine Learning (ML) acceleration, which is becoming pervasive everywhere from financial fraud detection in mainframes to wake-word detection in home appliances.

While the two announcements were not directly related, they are part of the same trend, showing how pervasive ML has become.

MLCommons Brings Standards to ML Benchmarking

ML benchmarking is important because we often hear about ML performance in terms of TOPS (trillions of operations per second). Like MIPS (millions of instructions per second, or "meaningless indication of processor speed," depending on your perspective), TOPS is a theoretical number calculated from the architecture, not a measured rating based on running workloads. As such, TOPS can be a deceiving number because it does not include the impact of the software stack. Software is the most critical aspect of implementing ML, and its efficiency varies widely, as Nvidia clearly demonstrated by improving the performance of its A100 platform by 50% in MLCommons benchmarks over the years.
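
To make the distinction concrete, here is a minimal sketch in Python, using entirely hypothetical MAC counts, clock speed, and utilization figures, of how a peak TOPS rating is derived on paper and how much of it survives once software-stack efficiency is taken into account:

```python
# Illustrative sketch with hypothetical numbers: a peak TOPS rating is derived
# from the datapath on paper, while delivered throughput depends on how well
# the software stack keeps those units busy.

def peak_tops(mac_units: int, clock_ghz: float) -> float:
    """Theoretical peak: each MAC counts as two operations (multiply + add)."""
    ops_per_second = mac_units * 2 * clock_ghz * 1e9
    return ops_per_second / 1e12

def delivered_tops(peak: float, utilization: float) -> float:
    """What a measured workload might see after compiler, runtime, and memory losses."""
    return peak * utilization

peak = peak_tops(mac_units=65_536, clock_ghz=1.0)   # ~131 TOPS on paper
print(f"Peak (datasheet-style) rating: {peak:.0f} TOPS")
print(f"Delivered at 35% utilization: {delivered_tops(peak, 0.35):.0f} TOPS")
print(f"Delivered at 55% utilization (better software): {delivered_tops(peak, 0.55):.0f} TOPS")
```

The peak number never changes, but the delivered number moves with every compiler and runtime improvement, which is exactly what measured benchmarks like MLPerf are designed to capture.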

The industry organization MLCommons was created by a consortium of companies to build a standardized set of benchmarks, along with a standardized test methodology, that allows different machine learning systems to be compared. The MLPerf benchmark suites from MLCommons include different benchmarks that cover many popular ML workloads and scenarios. The MLPerf benchmarks address everything from the tiny microcontrollers used in consumer and IoT devices, to mobile devices like smartphones and PCs, to edge servers, to data center-class server configurations. Supporters of MLCommons include Amazon, Arm, Baidu, Dell Technologies, Facebook, Google, Harvard, Intel, Lenovo, Microsoft, Nvidia, Stanford and the University of Toronto.

MLCommons releases benchmark results in batches and has different publishing schedules for inference and for training. The latest announcement was for version 2.0 of the MLPerf Inference suite for data center and edge servers, version 2.0 for MLPerf Mobile, and version 0.7 for MLPerf Tiny for IoT devices.

To date, the company with the most consistent set of submissions, producing results in every iteration, for every benchmark test, and through multiple partners, has been Nvidia. Nvidia and its partners appear to have invested enormous resources in running and publishing every relevant MLCommons benchmark; no other vendor can match that claim. The recent batch of inference benchmark submissions includes Nvidia Jetson Orin SoCs for edge servers and the Ampere-based A100 GPUs for data centers. Nvidia's Hopper H100 data center GPU, which was announced at the Spring 2022 GTC, arrived too late to be included in the latest MLCommons announcement, but we fully expect to see H100 results in the next round.

Recently, Qualcomm and its partners have been posting more data center MLPerf benchmarks for the company's Cloud AI 100 platform and more mobile MLPerf benchmarks for its Snapdragon processors. Qualcomm's latest silicon has proved to be very power efficient in data center ML tests, which may give it an edge in power-constrained edge server applications.

Many of the submitters are system vendors using processors and accelerators from silicon vendors like AMD, Andes, Ampere, Intel, Nvidia, Qualcomm, and Samsung. But many of the AI startups have been absent. As one consulting company, Krai, put it: "Potential submitters, especially ML hardware startups, are understandably wary of committing precious engineering resources to optimizing industry benchmarks instead of actual customer workloads." But then Krai countered its own objection by calling MLPerf "the Olympics of ML optimization and benchmarking." Still, many startups have not invested in producing MLCommons results for various reasons, and that is disappointing. There are also not enough FPGA vendors participating in this round.

The MLPerf Tiny benchmark is designed for very low power applications such as keyword spotting, visual wake words, image classification, and anomaly detection. In this case we see results from a mix of small companies like Andes, Plumeria, and Syntiant, as well as established companies like Alibaba, Renesas, Silicon Labs, and STMicroelectronics.

IBM z16 Mainframe

IBM Adds AI Acceleration Into Every Transaction

While IBM didn't participate in the MLCommons benchmarks, the company takes ML seriously. With its latest Z-series mainframe computer, the z16, IBM has added accelerators for ML inference along with quantum-safe secure boot and cryptography. But mainframe systems have different customer requirements. With roughly 70% of banking transactions (on a value basis) running on IBM mainframes, the company is anticipating the need of financial institutions for extremely reliable and secure transaction processing. In addition, by adding ML acceleration into its CPU, IBM can offer per-transaction ML intelligence to help detect fraudulent transactions.
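
As a rough illustration of that pattern, and not IBM's actual software interface, the sketch below scores a hypothetical fraud model synchronously inside the transaction path, so every transaction is evaluated before it commits; the score_transaction helper, its toy heuristic, and the threshold are all assumptions for illustration only:

```python
# Minimal illustrative sketch, not IBM's API: "in-line" fraud scoring means the
# model is evaluated synchronously inside the transaction path, so a decision
# is made before the transaction commits rather than in a later batch review.
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float
    merchant_category: int

def score_transaction(txn: Transaction) -> float:
    """Stand-in for an accelerator-backed fraud model; returns a risk score in [0, 1].
    A real deployment would call an optimized inference runtime here."""
    return min(txn.amount / 10_000.0, 1.0)   # toy heuristic, illustration only

FRAUD_THRESHOLD = 0.9   # hypothetical policy threshold

def process(txn: Transaction) -> str:
    risk = score_transaction(txn)     # inference runs in the hot path, per transaction
    if risk >= FRAUD_THRESHOLD:
        return "held for review"      # blocked before the transaction commits
    return "committed"

print(process(Transaction(account="acct-42", amount=129.99, merchant_category=5411)))
```

The point of putting the accelerator in the CPU is that a check like this becomes fast enough to run on every transaction, rather than on a sampled or after-the-fact basis.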

In an article I wrote in 2018, I said: "In fact, the future hybrid cloud compute model will likely include classic computing, AI processing, and quantum computing. When it comes to understanding all three of those technologies, few companies can match IBM's level of commitment and expertise." The latest developments in IBM's quantum computing roadmap and the ML acceleration in the z16 show that IBM remains a leader in both areas.

Summary

Machine Learning is important everywhere, from tiny devices up to mainframe computers. Accelerating this workload can be done on CPUs, GPUs, FPGAs, ASICs, and even MCUs, and that acceleration is now part of all computing going forward. These two announcements are examples of how ML keeps changing and improving over time.

Tirias Research tracks and consults for companies throughout the electronics ecosystem, from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for IBM, Nvidia, Qualcomm, and other companies throughout the AI ecosystem.
