Run:ai Seeks to Grow AI Virtualization with $75M Round – Datanami

Posted: March 17, 2022 at 3:08 am

Run:ai, a provider of an AI virtualization layer that helps optimize the use of GPUs, yesterday announced a Series C round worth $75 million. The funding figures to help the fast-growing company expand its sales reach and further develop its platform.

GPUs are the beating heart of deep learning today, but the limited nature of the computing resource means AI teams are constantly battling to squeeze the most work out of them. That's where Run:ai steps in with its flagship product, dubbed Atlas, which provides a way for AI teams to get more bang for their GPU buck.

"We do for AI hardware what VMware and virtualization did for traditional computing: more efficiency, simpler management, greater user productivity," Ronen Dar, Run:ai's CTO and co-founder, says in a press release. "Traditional CPU computing has a rich software stack with many development tools for running applications at scale. AI, however, runs on dedicated hardware accelerators such as GPUs, which have few tools to help with their implementation and scaling."

Atlas abstracts AI workloads away from GPUs by creating virtual pools where GPU resources can be automatically and dynamically allocated, thereby gaining more efficiency from GPU investments, the company says.

The platform also brings queuing and prioritization methods to deep learning workloads running on GPUs, and applies fairness algorithms to ensure users have an equal chance at getting access to the hardware. The company's software also enables clusters of GPUs to be managed as a single unit, and allows a single GPU to be broken up into fractional GPUs for finer-grained allocation.
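Run:ai has not published the internals of its scheduler, but the fair-share idea it describes can be sketched in a few lines: jobs queue up per user, and whenever GPU capacity frees up, the next job is taken from whichever user has consumed the least GPU time so far. The class and names below are purely illustrative, not Run:ai APIs.

```python
from collections import defaultdict, deque

class FairShareQueue:
    """Toy fair-share scheduler: dequeues the next job from the user who
    has consumed the least GPU time so far, so every user gets an equal
    chance at the hardware as it frees up."""

    def __init__(self):
        self.pending = defaultdict(deque)   # user -> FIFO of (job_name, est_gpu_hours)
        self.used = defaultdict(float)      # user -> GPU hours consumed so far

    def submit(self, user, job_name, est_gpu_hours):
        self.pending[user].append((job_name, est_gpu_hours))

    def next_job(self):
        # Consider only users with queued work; pick the one with the
        # smallest accumulated usage (ties broken by user name).
        candidates = [u for u in self.pending if self.pending[u]]
        if not candidates:
            return None
        user = min(candidates, key=lambda u: (self.used[u], u))
        job_name, hours = self.pending[user].popleft()
        self.used[user] += hours            # charge the user for the allocation
        return user, job_name

if __name__ == "__main__":
    q = FairShareQueue()
    q.submit("alice", "train-resnet", 4.0)
    q.submit("alice", "train-bert", 8.0)
    q.submit("bob", "finetune-gpt", 2.0)
    print(q.next_job())   # ('alice', 'train-resnet')
    print(q.next_job())   # ('bob', 'finetune-gpt') -- bob jumps ahead of alice's second job
    print(q.next_job())   # ('alice', 'train-bert')
```

A production scheduler would also weight shares by quota, preempt running jobs, and account for actual rather than estimated usage, but the allocation principle is the same.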

Atlas functions as a plug-in to Kubernetes, the open source container orchestration system. Data scientists can get access to Atlas via integrations with IDE tools like Jupyter Notebook and PyCharm, the company says.
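For a rough sense of what a Kubernetes-level integration of this kind looks like, the sketch below uses the official Kubernetes Python client to submit a training pod that is handed off to a dedicated scheduler and asks for half a GPU through a pod annotation. The scheduler name (runai-scheduler), the annotation key (gpu-fraction), the project label, and the container image are assumptions for illustration and may not match the product's actual interface.

```python
# Illustrative sketch only: the scheduler name, annotation key, and labels
# are assumptions, not confirmed Run:ai interfaces.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(
        name="train-job-demo",
        annotations={"gpu-fraction": "0.5"},   # assumed fractional-GPU request
        labels={"project": "team-a"},          # assumed project/queue label
    ),
    spec=client.V1PodSpec(
        scheduler_name="runai-scheduler",      # assumed name of the scheduler plug-in
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:22.02-py3",  # placeholder training image
                command=["python", "train.py"],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="team-a", body=pod)
print("Submitted train-job-demo requesting 0.5 of a GPU")
```

The same request could just as easily originate from a Jupyter or PyCharm session, which is the workflow the IDE integrations are meant to smooth over.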

The abstraction brings greater efficiency to data science teams that are experimenting with different techniques and trying to find what works. According to a December 2020 Run:ai whitepaper, one customer reduced AI training time from 46 days to about 36 hours: 46 days is roughly 1,100 hours, so the cut amounts to about a 30x speedup, or the roughly 3,000% improvement the company cites.

"With Run:ai Atlas, we've built a cloud-native software layer that abstracts AI hardware away from data scientists and ML engineers, letting Ops and IT simplify the delivery of compute resources for any AI workload and any AI project," Dar continues.

The Tel Aviv company, which was founded in 2018, has experienced a 9x increase in annual recurring revenue (ARR) over the past 12 months, during which time its employee count has tripled. The company has also quadrupled its customer base over the past two years. The Series C round, which brings the company's total funding to $118 million, will be used to grow sales as well as to enhance the core platform.

"When we founded Run:ai, our vision was to build the de facto foundational layer for running any AI workload," says Omri Geller, Run:ai CEO and co-founder, in the press release. "Our growth has been phenomenal, and this investment is a vote of confidence in our path. Run:ai is enabling organizations to orchestrate all stages of their AI work at scale, so companies can begin their AI journey and innovate faster."

Run:ai's platform and growth caught the eyes of Tiger Global Management, which co-led the Series C round with Insight Partners, the firm that led the Series B round. Other firms participating in the current round included existing investors TLV Partners and S Capital VC.

"Run:ai is well positioned to help companies reimagine themselves using AI," says Insight Partners Managing Director Lonne Jaffe, who, you might remember, was the CEO of Syncsort (now Precisely) nearly a decade ago.

"As the Forrester Wave AI Infrastructure report recently highlighted, Run:ai creates extraordinary value by bringing advanced virtualization and orchestration capabilities to AI chipsets, making training and inference systems run both much faster and more cost-effectively," Jaffe says in the press release.

In addition to AI workloads, Run:ai can also be used to optimize HPC workloads.

Related Items:

Optimized Machine Learning Libraries For CPUs Exceed GPU Performance

Optimizing AI and Deep Learning Performance

AI Hypervisor Gets a GPU Boost
