Category Archives: Ai

Argo AI, Ford and Lyft to launch self-driving ride-hail service in Miami and Austin – Reuters

Posted: July 23, 2021 at 4:13 am

AUSTIN, July 21 (Reuters) - Self-driving startup Argo AI, carmaker Ford Motor Co (F.N) and ride-hail company Lyft Inc (LYFT.O) on Wednesday said they partnered to offer robotaxi trips to Lyft customers in Miami and Austin.

The service is expected to launch in Miami later this year and in Austin next year with a safety driver inside the Ford Escape hybrid vehicles. Over the next five years, the companies want to deploy at least 1,000 robotaxis in multiple cities.

The first truly driverless cars are expected to launch in 2023, said Jody Kelman, head of Lyft's autonomous team.

The partnership marks the first large-scale U.S. collaboration between a carmaker, a self-driving developer and a ride-hailing company. The companies hope to gain valuable insights on how to turn robotaxis into a commercially viable business - a challenge no company has yet answered.

As part of the agreement, Argo AI, which is backed by Ford and Volkswagen AG (VOWG_p.DE), will receive anonymized data on passenger trips and safety incidents. That will allow Argo to optimize its technology and routing to avoid unsafe streets, Argo CEO Bryan Salesky said in a blog post.

In exchange, Lyft will receive a 2.5% stake in the company. At Argo's most recent valuation of $7.5 billion, that equity slice would be worth $187.5 million. Argo, which is currently testing autonomous vehicles in several U.S. cities, in June said it plans to list publicly within the next year.

Ford will fuel, service and clean the robotaxi fleets under the partnership.

In traditional ride-hailing services, human drivers make up an estimated 80% of the total per mile cost, according to research firm Frost & Sullivan, underscoring the companies' interest in a driverless future. But self-driving vehicles need to recoup their expensive development costs and still need to be managed and maintained.

"Our job is to generate the maximum revenue out of each of these vehicles by getting the highest utilization," said Lyft's Kelman.

Lyft in April sold its own self-driving technology unit to Toyota Motor Corp (7203.T) for $550 million to focus instead on providing services such as routing, consumer interface and fleet management.

Lyft already allows consumers to book rides in self-driving vehicles in select cities in partnership with Alphabet Inc's (GOOGL.O) Waymo and Motional, the joint venture between Hyundai Motor Co (005380.KS) and Aptiv (APTV.N).

Reporting by Tina Bellon in Austin, additional reporting by Paul Lienert in Detroit, editing by Chris Reese

Our Standards: The Thomson Reuters Trust Principles.

The Bourdain AI furore shouldn't overshadow an effective, complicated film – The Guardian

Posted: at 4:13 am

Anthony Bourdain was a singularly beloved cultural figure. His death by suicide in 2018, at 61, while filming an episode of his CNN travel series Parts Unknown remains, for many, one of the most tragic and baffling public losses of the past few years. Given the intense connection fans felt to the chef turned TV personality, and the shock of his death, Roadrunner: A Film About Anthony Bourdain, a bracing, graceful new documentary on Bourdain that features his inner circle, was always going to be received with heightened sensitivity.

On Friday, news broke that the film-makers used artificial intelligence to simulate the television host's voice for three lines of synthetic audio. In interviews with the New Yorker and GQ, the film's director, Morgan Neville, revealed that he fed 10 hours of Bourdain's voiceovers into an AI model for narration of emails Bourdain wrote, totaling about 45 seconds. Reaction to the news was startlingly, if perhaps predictably, angry. Some outright dismissed the film, which grapples with Bourdain's less palatable qualities: his obsessiveness, his flakiness, his gnawing impostor syndrome.

"When I wrote my review I was not aware that the film-makers had used an AI to deepfake Bourdain's voice," tweeted Sean Burns, a film critic from Boston's WBUR who reviewed the film negatively. "I feel like this tells you all you need to know about the ethics of the people behind this project."

Others on Twitter, where it was a trending topic, called it "ghoulish", "a freakishly bad idea", "awful".

Neville did not exactly help matters with his comment to the New Yorker that "We can have a documentary ethics panel about it later," which felt more flippant than considered. Said panel seems pertinent, but not as a verdict of the film. While much of the discussion has focused on the ethics of reanimating a dead person's voice, the question of AI, in this case, seems like a misdirection. Bourdain wrote the words; we don't know if he read them aloud, but it's not synthetic material, nor is it akin to hawking a deceased pop star's hologram performance for money.

The bigger issue is one of disclosure, both to the audience and to Bourdain's loved ones. (In response to Neville's claim in GQ that he checked with Bourdain's widow and literary executor "just to make sure people were cool with that. And they were like, Tony would have been cool with that," Bourdain's ex-wife Ottavia Busia tweeted: "I certainly was NOT the one who said Tony would have been cool with that.") If you know which lines to look for, you can hear how the AI voice is a touch stiffer, and a twitch higher, than the real one. But to an average viewer, the difference is fudged. You can't tell – a point that may ultimately come to not matter, as audience comfort levels with synthetic audio shift. As Sam Gregory, a film-maker turned director of a non-profit focused on ethical applications of video and technology, pointed out in an interview with Helen Rosner in the New Yorker on the ethics of the AI voice, no one blinks an eye when a narrator in a documentary reads a letter written in the civil war.

The queasy part here was the blending of truth with interpretation of truth, fact with simulation, archive with embellishment. Neville's obfuscation of the AI voice feels deceptive. But then again, all documentaries bend the lines of reality; audiences just often conveniently forget or ignore the artifice in favor of cohesion and momentum. This is, ironically, so much the subject of Roadrunner: the blurring of person and persona, the bounded portrait on camera and the ambiguous messiness off it, the persistent burden of fame. The AI model of Bourdain's voice, for three lines, is a questionable artistic choice, for sure. But it's not an outright transgression that should overshadow a challenging, deeply emotional film.

Roadrunner starts not with Bourdain's childhood, which is almost entirely glossed over, but with his rebirth, of sorts: fame at middle age, thanks to his bestselling memoir Kitchen Confidential, published in 2000 when he was 43. Bourdain was, we see, at first an awkward, ungainly but eager student of the camera and the world around him. The bulk of the film traces the swift change of his life post-fame. He left cooking, his partner of nearly 30 years, his anonymity. He started hosting for CNN, married again, had a daughter, shifted his addictive personality (an intimidating, exhaustive relentlessness that was once hooked on heroin) to jiujitsu, among other things. He chafed against 250 days a year on the road, against the gulf between the ruggedness of his storytelling and the red carpets of promoting the story, against the role of a traditional TV dad he played in spurts at home.

The last third is dominated by his personal unraveling – pushing away friends, almost leaving the show – and his death. Arguably the more pressing ethical concern is Neville's decision to not reach out to Asia Argento, the Italian actor and director and Bourdain's final romantic partner, who is portrayed as the agent of undoing for both the show and Bourdain. Friends and crew recall an emotionally tense shoot in Hong Kong after Bourdain installed Argento as director at the last minute and fired a longtime cinematographer, how his infatuation with Argento seemed manic and adolescent, how angered he was by paparazzi photos of her and another man shortly before his death.

Neville has said that speaking to Argento "would have been painful for a lot of people." "Digging into the final days of Bourdain's life instantly just made people want to ask ten more questions," he told Vulture. "It became this kind of narrative quicksand of, Oh, but then what about this? And how did this happen? It just became this thing that made me feel like I was sinking into this rabbit hole of she said, they said, and it just was not the film I wanted to make."

Still, it's a lot of focus, words and images on a person who isn't given the opportunity to speak for herself. It's also clear that including Argento's interviews, if she agreed, would have dragged the film into a litigation of Bourdain's death rather than an exploration of his life. There are no easy answers here, the type Bourdain eschewed while he was alive.

Roadrunner is ultimately an inviting, haunting, unsettling film, which doesn't hesitate to name the star's frustrating multitudes. There's Bourdain the preternaturally magnetic host, Bourdain the giddily infatuated boyfriend, Bourdain the exacting boss and unreliable partner, the unbridled friend who once told confidant David Chang that he wouldn't be a good father. There's his unbent curiosity, his zest for a hit of any experience, and a foreboding emptiness. The scene I can't stop thinking about is a scrap of footage from 2006, while filming a pivotal episode of No Reservations in Lebanon as war erupted between Israel and Hezbollah. Savoring pleasure amid devastation, lost for words, Bourdain simply shakes his head: there's no neat way to sum it all up. There's no fully cohesive portrait of a person in Roadrunner, maybe never any sense to be made. But there's plenty to sit with, to consider, and that feels like a tribute well done.

Climavision Is Taking On Big Weather With AI – Forbes

Posted: July 21, 2021 at 12:25 am

A highway is closed due to snow and ice in Houston, Texas on Feb. 15, 2021. Up to 2.5 million customers were without power as the state's power generation capacity was impacted by an ongoing winter storm brought by an Arctic blast. (Photo by Chengyue Lao/Xinhua via Getty)

A new weather tech startup says it has created a new artificial intelligence (AI)-powered weather radar and satellite network to take on big weather.

Climavision, which has $100 million in private equity funding, has created a high-resolution weather radar and satellite network that combines lower altitude, proprietary data with machine learning and AI technology.

Chris Goode, CEO of Climavision, says the new sensing network will fill the coverage gaps in the existing NOAA and NWS systems across the US. Goode adds that the current weather surveillance model provides a picture of weather at a given moment, but the picture is not complete.

"The US, like many other governments across the world, relies on a network of weather radars known as NEXRAD, and these are strategically placed across the country to collect weather data in real time," said Goode. "In addition to NEXRAD, they also use other forms of weather collection, such as weather balloons and aircraft sensors, which are deployed to collect additional data points at different parts of the atmosphere."

"There are gaps in coverage between radars in the NEXRAD network and gaps in the mixing levels of the atmosphere, where volatile weather forms, and at the lowest levels, where this weather occurs," added Goode. "The Climavision system will fill in those gaps with critical weather radar infrastructure and bring real-time data with space-based observations to get a complete view of what's happening from the ground up."

"Once we collect data direct from the source, we use AI, machine learning and IoT to process, interpret and distribute the data," said Goode. "We're applying the most sophisticated technology to public datasets so that, beyond our data collection, we can learn even more from what is already available."

Goode says that traditional weather observations consist of multiple variables, including temperature, wind, pressure, and humidity, which are interconnected and complex.

"We employ AI and machine learning to perform quality control of these observations, which come from multiple platforms and various channels, based on their respective sensitivity to different components of the atmosphere at different atmospheric levels," said Goode. "Machine learning and AI methods like artificial neural networks (ANN) and regression are used to extract the most impactful meteorological information and to integrate the information from different platforms."
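
To make that idea concrete, here is a minimal, illustrative sketch of ML-based quality control over pooled observations. It is not Climavision's pipeline: it uses scikit-learn's IsolationForest as a stand-in for the ANN and regression methods Goode mentions, and every feature name and value below is invented.

```python
# Illustrative only: toy quality control over multi-platform weather observations.
# IsolationForest stands in for the ANN/regression QC described in the article;
# all data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic observations: temperature (C), wind speed (m/s), pressure (hPa),
# humidity (%). One row per observation, regardless of source platform
# (radar-derived, weather balloon, aircraft sensor).
obs = np.column_stack([
    rng.normal(20, 5, 500),
    rng.normal(8, 3, 500),
    rng.normal(1013, 7, 500),
    rng.uniform(20, 95, 500),
])
obs[:5] = [85, 120, 700, 5]  # inject a few implausible readings (sensor faults)

# Flag suspect rows; a downstream assimilation step would drop or down-weight them.
qc = IsolationForest(contamination=0.02, random_state=0).fit(obs)
flags = qc.predict(obs)  # -1 = suspect, 1 = accepted
print(f"{(flags == -1).sum()} of {len(obs)} observations flagged for review")
```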

Goode says that the company is bringing a fundamental shift in weather forecasting.

"We can all agree that weather is one of the few phenomena with almost universal application. Understanding weather more intimately, in more detail, and with greater lead time impacts every person, every business, every industry," said Goode. "So having a better understanding of what's happening in the atmosphere – one or two hours earlier, sometimes even just minutes earlier – has the potential to save money and, more importantly, save lives."

"If you don't know what's headed your way – flash flooding, hail, snow, tornadoes – you can't possibly prepare," adds Goode.

The company plans to roll out the technology in Q4 2021.

Further Funding Flows to Canadian AI Inference Hardware – The Next Platform

Posted: at 12:25 am

AI inference hardware startup Untether AI has secured a fresh $125 million in funding to push its novel architecture into its first commercial customers in edge and datacenter environments.

Intel Capital has been a primary investor in Untether AI since its founding in 2018. When we did a deep dive on their architecture with their CEO in October 2020, the Toronto-based startup had already raised $27 million and was sampling its runAI200 devices. The team, comprised of several ex-FPGA hardware engineers, was bullish on the potential for custom ASICs for ultra-low-power inference, and apparently, its investors are too.

This latest funding round, led by Tracker Capital and Intel Capital, also roped in new investor Canada Pension Plan Investment Board (CPP Investments), which manages money for the country's 20 million-strong pension program with a fund total of over $492 billion.

These are still early days for the inference startup, but they have managed to secure systems integrator Colfax to carry their tsunAImi accelerator cards for edge servers, along with their imAIgine SDK. Each of the cards has four of the runAI200 devices we described here, which Untether says can deliver 2 petaops of peak compute performance. In its own benchmarks, the company says this translates to 80k frames per second on ResNet-50 (batch size 1) and 12k queries per second on BERT.

The startup is focused on INT8, low-latency, server-based inference only, with small batch sizes in mind (batch 1 was at the heart of their design process). The company's CEO, Arun Iyengar (you might recognize his name from leadership roles at Xilinx, AMD, and Altera), says they are going after NLP, recommendation engines, and vision systems for the applications push, with fintech at the top of their list of markets, although he was quick to point out that this was less about high-frequency trading and more about broader portfolio balancing (asset management, risk allocation, etc.), as AI has real traction there.

At the heart of the unique at-memory compute architecture is a memory bank: 385KB of SRAM with a 2D array of 512 processing elements. With 511 banks per chip, each device offers 200MB of memory, enough to run many networks in a single chip. And with the multi-chip partitioning capability of the imAIgine Software Development Kit, larger networks can be split apart to run on multiple devices, or even across multiple tsunAImi accelerator cards.
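
A quick back-of-the-envelope check of those figures (purely illustrative; the per-image ResNet-50 operation count used below is a common published estimate, not an Untether number):

```python
# Rough sanity check of the quoted specs; assumptions are noted inline.
banks_per_chip = 511
kb_per_bank = 385
print(f"On-chip SRAM: ~{banks_per_chip * kb_per_bank / 1024:.0f} MB")  # ~192 MB, i.e. roughly the 200MB quoted

chips_per_card = 4
peak_pops_per_card = 2.0  # 2 petaops INT8 per card, as quoted
print(f"Per-chip peak: {1000 * peak_pops_per_card / chips_per_card:.0f} TOPS")

# Implied utilization on ResNet-50 at batch 1, assuming ~8 GOPs per 224x224
# inference (an outside estimate, not from Untether).
ops_per_image = 8e9
theoretical_fps = peak_pops_per_card * 1e15 / ops_per_image  # ~250,000 fps
print(f"80k fps is ~{80_000 / theoretical_fps:.0%} of that theoretical peak")
```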

He also says their low-power approach would be a good fit for on-prem centers doing large-scale video aggregation (smart cities and retail operations, for example). He admits willingly that they're starting with these use cases instead of coming out bold with ambitions to find a place among the hallowed hyperscalers, but says there's enough market out there for low-power, high-performance devices that they'll find their niches.

In the absence of any public customers for its early silicon, the company is attractive beyond just the funding and the uniqueness of the architecture. It has some pedigreed folks backing the engineering, including Alex Grbic, who heads software engineering and is well known for a long career at Altera. On the hardware engineering side, Untether's Alex Michael, also of Altera, brings decades of IC design, product, and manufacturing experience to bear.

While the vendor word is that there is explosive opportunity for custom inference devices in the datacenter and edge, it remains to be seen who the winners and losers will be in the inference startup game. From our view, the edge opportunity has more wiggle room than the large datacenters we tend to focus on here at TNP, and it will be a long, tough battle to unseat those high-value (high-margin) customers from their CPU/GPU positions.

Employees want more AI to boost productivity, study finds – VentureBeat

Posted: at 12:25 am

Eighty-one percent of employees believe AI improves their overall performance at work. As a result, more than two-thirds (68%) are calling on their employers to deploy more AI-based technologies to help them execute tasks. That's the top-level finding from a study published today by 3GEM on behalf of SnapLogic, which surveyed 400 office workers across the U.S. and U.K. about their opinions on AI in the workplace.

"In recent years, there was concern among office workers that AI would drive job losses, but employee opinions seem to have changed. The more they've been exposed to AI and see it in action, the more they've realized how much it can assist them with their daily work," SnapLogic CTO Craig Stewart said in a statement.

More than half (56%) of employees responding to the SnapLogic survey said they're using AI (which the survey doesn't define) as a part of their daily job responsibilities. Meanwhile, 89% believe AI could support them in up to half of their activities, particularly in (1) explaining data, (2) revealing trends and patterns, (3) moving data from one place to another, and (4) accessing data residing in different places across the business.

When asked about the benefits of AI, 61% of respondents said it helped them have a more efficient and productive workday. Almost half (49%) felt it improved their decision-making and accelerated time to insights, while just over half (51%) said they believe AI enables them to achieve a better work/life balance.

SnapLogic's research also took a look at the growing market for personal AI apps versus those used at work. The workplace appears to be the proving ground for the use of AI, according to Stewart, with 45% of respondents reporting having downloaded AI-powered work apps compared with 26% reporting having downloaded personal apps.

"As AI is increasingly used to make better decisions and rack up productivity gains, they've gone from tentatively accepting to fully embracing AI. The fact that they are now calling on their leaders to accelerate AI technology adoption in the enterprise is a real sea change," Stewart added.

SnapLogic has a vested interest in demonstrating demand for AI in the enterprise, of course. The company's platform taps AI and machine learning to automate app, data, and cloud integration. But business interests aside, AI technologies are verifiably becoming prevalent in workplaces around the world.

While the adoption rate varies between organizations, a majority of them (95% in a recent S&P Global report) consider AI to be important in their digital transformation efforts. Corporations were expected to invest more than $50 billion in AI systems globally in 2020, according to IDC, up from $37.5 billion in 2019. And by 2024, investment is expected to reach $110 billion.

"The C-suite, or the top level of companies that are getting interested in this technology, are seeing how they can actually use AI for business," Accenture global lead for applied intelligence Sanjeev Vohra explained during a recent panel at VentureBeat's Transform conference. "It's moved out of the experimentation zone to something scaled. Businesses are using AI to scale business value and enterprise value."

AI Is Not Going To Replace Writers Anytime Soon But The Future Might Be Closer Than You Think – Forbes

Posted: at 12:25 am

Is AI for content creation over-hyped, or will all writers eventually be replaced by bots? Businesses don't necessarily need more content; they need better content that actually performs.

AI is doing a lot to help streamline content marketing and management for companies across the board. You can get things researched, prepped, edited, and published in minutes (as opposed to days or weeks).

The problem is that while AI can automate time-consuming publishing tasks and help predict what people want to read, it can't really write that well yet.

Today, AI still relies heavily on stringing together concepts or facts into some semi-coherent ramble, but it can't massage the phrasing or other intangibles that get customers to stand up and take notice.

AI's underlying technologies in the area of content creation currently include Microsoft's Turing Natural Language Generation (T-NLG), boasting 17 billion parameters, and OpenAI's Generative Pre-trained Transformer 3 technology (GPT-3), which has 175 billion machine learning parameters.

In September 2020, Microsoft announced it had licensed access to GPT-3's technology for its exclusive use, which offers a clue as to where this fast-growing industry is heading.

In this interview, Brad Smith, CEO of Wordable and founder and CEO of Codeless, outlines AI's current limitations for content creation, tells us how best to leverage its capabilities, and looks at where we are headed in the not-too-distant future.

Key problems with content AI today

Garbage in. Garbage out.

Smith says the biggest problem with AI right now is its overreliance on patterns and the probability of certain words or phrases showing up next to each other when you reference certain topics.

"This means that currently, it can only take a mediocre pass at factual, information-based content. But even then it struggles to actually understand anything it's saying. It is merely taking what's already out there on certain topics and then playing a Robocop version of the word game Mad Libs," he explains.

AI can't string together long, persuasive text.

"Most good content builds on itself," says Smith. "So you might lay the groundwork for an argument in the first section, and then come back later to build on top of that reader's new understanding in the fourth section or paragraph."

The trouble with AI is that it can't reference itself in that way, says Smith. Its knowledge of individual topics is completely isolated, so it can't connect the dots that your reader would, importantly, be expecting to see.

Smith says that these two issues alone would completely rule out AI-based uses for a significant amount of online content.

AI can't do audio or video

He also goes on to point out that AI can't do audio or video, nor write scripts for this type of media, which is how most people will consume digital media in the next few decades, "but that's another topic for another day," says Smith.

AI can't do subjective

Most content online isn't objective, but subjective: comparing alternatives, or providing a few recommendations, each with its own pros and cons.

Unless AI is basically robotically plagiarizing other content already on this subject, it can't compare alternatives like this or provide additional context as to why one argument might or might not be legitimate.

Brad Smith, CEO of Wordable and founder and CEO of Codeless

AI can't do emotion

And AI can't do emotion, such as style, jargon, inside jokes, meta-references, anecdotes, and storytelling – all the things that get someone to stop dead in their tracks, take notice of what they're reading, and actually want to continue reading the full thing. At the end of the day, people are still emotional human beings, hardwired via their centuries-old lizard brains to use feelings to convince themselves of logical decisions, and not vice versa.

Where can AI help with content creation?

Research and prep

Given that most long-form content (1,000-2,000 words) takes 4-5 hours on average to write, with maybe half of that for research and prep, Smith says that AI can be a huge help.

AI and its underlying content technologies can help shortcut this dramatically, providing ideas for how an article should look, or what subtopics to mention, within seconds versus hours.

Pattern-matching works well for SEO

While pattern matching can create content like a Robocop version of Mad Libs, Smith says that AI-based research that leans heavily on pattern matching can help structure something for strong SEO.

Search engines like Google only exist to help searchers find answers to their queries. To do that, a lot of content they show tends to be fairly formulaic, where the top 10 results might all mention certain subtopics, semantic ideas, and questions they're answering.
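
As a toy illustration of that formulaic structure (not a tool Smith or Codeless describes), one could tally which subtopics recur across the headings of top-ranking pages and promote the most common ones into an outline; the headings below are invented.

```python
# Illustrative only: count recurring subtopics across top-ranking pages' headings
# and keep the ones that appear on at least two pages as a draft outline.
from collections import Counter

top_result_headings = [
    ["what is content ai", "benefits of ai writing", "pricing", "faq"],
    ["benefits of ai writing", "limitations", "pricing", "examples"],
    ["what is content ai", "limitations", "examples", "faq"],
]

counts = Counter(h for page in top_result_headings for h in page)
outline = [heading for heading, n in counts.most_common() if n >= 2]
print("Suggested subtopics:", outline)
```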

First drafts in specific cases

Smith explains that in some very specific cases, AI might be able to provide short-form, basic fact-based content that's passable for a first draft. Again, you'll still want writers and editors to actually review it, polish it, edit, or add on. And again, this might save you significant time and money, especially if you can work with AI to vet or manually approve an outline before the AI attempts to write it out.

So while bots might not be replacing writers anytime soon, AI for content creation is developing rapidly. Given that GPT-3 launched just a couple of months later than T-NLG with 10 times the capacity of its rival, it will be interesting to see what happens next. Preliminary tests, on only 80 subjects, showed just a 48% success rate in distinguishing short stories written by people from those created by AI.

Smith's key advice is: when using underlying technologies for content, ensure you keep humans involved at various stages of the process.

Of course, you still need experts and humans to vet, filter, tweak, and throw out things to make sure it's legit. Nevertheless, used correctly, AI can potentially be a huge timesaver, and will no doubt feature heavily in the future of content creation.

Global Operating Room Artificial Intelligence (AI) Market Report 2021-2030 with Case Studies to Assess the Key Strategies Adopted by Some of the…

Posted: at 12:25 am

DUBLIN, July 20, 2021 /PRNewswire/ -- The "Global Artificial Intelligence (AI) in Operating Room Market: Focus on Offering, Technology, Indication, Application, End User, Unmet Demand, Cost-Benefit Analysis, and Over 16 Countries' Data - Analysis and Forecast, 2021-2030" report has been added to ResearchAndMarkets.com's offering.

Global AI in Operating Room Market to Reach $2,951.5 Million by 2030

The purpose of the study is to enable the reader to gain a holistic view of the global AI in the operating room market by each of the aforementioned segments.

The report constitutes an in-depth analysis of the global AI in the operating room market, including a thorough analysis of the applications. The study also provides market and business-related information on various products, applications, technologies, and end users. The report considers software solutions and hardware solutions integrated with AI.

Expert Quote

"I think these are exciting times. Not considering the buzz around AI, ultimately, it is an enabler to do things at scale and quickly. It needs to serve a higher purpose that provides surgeons or other stakeholders in the healthcare ecosystem with value. The real value that a company provides with AI is the key component. This technology can be leveraged to tackle the disparity in the world of surgery".

Key Topics Covered:

1 Product Definition

2 Scope of Research

3 Research Methodology

4 Impact of COVID-19 on the Global AI in Operating Room Market
4.1 Impact on Facilities
4.2 Impact on AI Adoption in Operating Rooms
4.3 Impact on Market Size
4.4 COVID-19 Recovery Timeline
4.5 Entry Barriers and Opportunities

5 Industry Analysis
5.1 Technology Landscape
5.1.1 Key Trends
5.2 Value Chain Analysis
5.3 Cost-Benefit Analysis
5.4 End-User Perceptions
5.5 Funding Scenario
5.6 Regulatory Framework and Government Initiatives
5.6.1 Regulations in North America
5.6.1.1 U.S.
5.6.1.1.1 Connected Devices
5.6.1.1.2 Software-as-a-Medical Device (SaMD)
5.6.1.1.2.1 General Considerations for SaMDs
5.6.2 Regulations in Europe
5.6.3 Regulations in Japan
5.6.4 Regulations in China
5.7 Patent Analysis
5.7.1 Awaited Technological Developments
5.7.2 Patent Filing Trend
5.8 Product Benchmarking

6 Competitive Landscape
6.1 Market Share Analysis
6.2 Key Strategies and Developments
6.3 Business Model Analysis
6.4 Pricing Analysis
6.5 Competitive Benchmarking

7 Global AI in Operating Room Market Scenario
7.1 Assumptions and Limitations
7.2 Global AI in Operating Room Market Assessment
7.3 Key Findings and Opportunity Assessment
7.4 Global AI in Operating Room Market Size and Forecast
7.5 Market Dynamics
7.5.1 Impact Analysis
7.5.2 Market Growth Promoting Factors
7.5.2.1 Growth in Funding for AI
7.5.2.2 Growing Adoption of AI-Enabled Technologies in Healthcare Settings
7.5.2.3 Advancement in Robotics and Medical Visualization Technologies
7.5.2.4 Benefits of Artificial Intelligence-Enabled Surgeries Over Conventional Surgeries
7.5.3 Market Growth Restraining Factors
7.5.3.1 Lack of a Well-Defined Regulatory Framework in Regions
7.5.3.2 Limited Studies and Data on the Efficiency of AI in Operating Rooms
7.5.4 Market Growth Opportunities
7.5.4.1 Leverage AI to Enhance Remote Surgical Capabilities
7.5.4.2 Leveraging Business Synergies for Capability and Portfolio Enhancement
7.5.5 Current Surgical Challenges
7.5.6 Capitalizing on Unmet Demand

8 Global AI in Operating Room Market (by Offering)
8.1 Key Findings and Opportunity Assessment
8.2 Hardware
8.3 Software-as-a-Service (SaaS)

9 Global AI in Operating Room Market (by Technology)
9.1 Key Findings and Opportunity Assessment
9.2 Machine Learning (ML) and Deep Learning
9.3 Natural Language Processing (NLP)

10 Global AI in Operating Room Market (by Indication)
10.1 Key Findings and Opportunity Assessment
10.2 Cardiology
10.3 Orthopedics
10.4 Urology
10.5 Gastroenterology
10.6 Neurology

11 Global AI in Operating Room Market (by Application)
11.1 Key Findings and Opportunity Assessment
11.2 Training
11.3 Diagnosis
11.4 Surgical Planning and Rehabilitation
11.4.1 Pre-Operative
11.4.2 Intra-Operative
11.4.3 Post-Operative
11.5 Outcomes and Risk Analysis
11.6 Integration and Connectivity
11.7 Others (Instrument Tracking and Traceability, Scheduling, Anesthesia Management)

12 Global AI in Operating Room Market (by End User)
12.1 Opportunity Assessment
12.2 Hospitals
12.3 Others (Ambulatory Surgical Centers, Private, Standalone, and Specialized Facilities)

13 Global AI in Operating Room Market (by Region)

14 Case Studies
14.1 Enabling the Future Operating Room with AI
14.2 Role of M&As in the Future of AI in Operating Room
14.3 Role of AI in Minimally Invasive Surgeries

15 Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/555a1

Media Contact: Research and Markets Laura Wood, Senior Manager [emailprotected]

For E.S.T Office Hours Call +1-917-300-0470
For U.S./CAN Toll Free Call +1-800-526-8630
For GMT Office Hours Call +353-1-416-8900

U.S. Fax: 646-607-1904 Fax (outside U.S.): +353-1-481-1716

SOURCE Research and Markets

http://www.researchandmarkets.com

Qualcomm’s Vision: The Future Of … AI – Forbes

Posted: at 12:25 am

Mobile Chip Leader's AI Starts In Mobile, And Grows To The Clouds

Company acquires assets from Twenty Billion Neurons GmbH to bolster its AI Team.

Qualcomm Technologies (QTI) is running a series of webinars titled "The Future of...", and the most recent edition is on AI. In this lively session, I hosted a conversation with Ziad Asghar, QTI VP of Product Management, Alex Katouzian, QTI SVP and GM Mobile Compute and Infrastructure, and Clément Delangue, co-founder and CEO of the open source AI model company Hugging Face, Inc. I've also penned a short Research Note on the company's AI strategy, which can be found here on Cambrian-AI, where we outline some impressive AI use cases.

Qualcomm believes AI is evolving exponentially thanks to billions of smart mobile devices, connected by 5G to the cloud, fueled by a vibrant ecosystem of application developers armed with open-source AI models. Other semiconductor companies might say something similar, but in Qualcomm's case it uniquely starts with mobile. The latest Snapdragon 888 has a sixth-generation AI engine powerful enough to process significant AI models on the phone, enabling applications such as on-device speech processing and even real-time spoken language translation. Qualcomm complements the edge devices with cloud processing using the Cloud AI100, which demonstrated leading performance efficiency recently on the MLPerf V1.0 benchmarks. Qualcomm calls this approach "Distributed Intelligence".

Qualcomm envisions a tightly coupled network of AI processors across the cloud, edge cloud, and on-device endpoints.

To add more talent and IP to Qualcomm's AI research lab, the company announced today that it has acquired a team from Twenty Billion Neurons GmbH (TwentyBN), including their high-quality video dataset that is widely used by the AI research community. The founding CEO is Roland Memisevic, who co-led the world-renowned MILA AI institute with Yoshua Bengio of the Université de Montréal.

While few smart phone users realize they are using AI every time they take a picture, even fewer understand that AI helps keep them connected. QTI embeds AI both in applications, such as computational photography and accurate voice interaction, and in the operation of the mobile handset itself, optimizing 5G to extend the network's reach and managing power to prolong battery life. Here are a few snippets from The Future Of... session.

Alex Katouzian notes that "Strategically, what we do is we use our largest channel, which is mobile, to create inventions that will spiral and get reused in adjacent markets that have the same mobile traits. For example, PCs and XR or auto. And then, getting into some of the infrastructure-based designs as well, because AI can get used in edge cloud, in some private networks environments."

Ziad Asghar notes that "We took our pedigree, which is amazing power efficiency, and applied it to AI processing. We've taken that from mobile, taken our learnings and applied it to the cloud side. If you look at what some of the major platforms are seeing today, there's a huge problem with the power consumption, where the power is basically doubling every year on the cloud side. So what we did was we took our AI expertise, we developed a new architecture, and came up with a product that's specifically designed for inferencing. That's what's given us an ability to be able to show performance at a power level that nobody else can show."

Hugging Face's Clément Delangue believes that transfer learning will be the next big thing in AI. "I think, in five years, most of the machine learning models out there will be transfer learning models. And that's super exciting because they have new capabilities but also new opportunities for them to be smaller, more compressed, to be trained on smaller datasets, thanks to the unique characteristics and capabilities of transfer learning."
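
For readers unfamiliar with the idea, a minimal transfer-learning sketch along the lines Delangue describes might look like the following: start from a pretrained Hugging Face checkpoint, freeze the encoder, and fine-tune only a small classification head on a handful of labeled examples. The checkpoint name, data, and training loop are placeholders for illustration, not anything Qualcomm or Hugging Face ships for this purpose.

```python
# Minimal transfer-learning sketch (illustrative; tiny toy data, no real schedule).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Freeze the pretrained encoder; only the new classification head gets trained,
# which is what lets transfer learning work with small datasets.
for param in model.distilbert.parameters():
    param.requires_grad = False

texts = ["battery life is great", "the camera app keeps crashing"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=5e-4
)
model.train()
for _ in range(3):  # a few toy gradient steps
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
print("final toy loss:", outputs.loss.item())
```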

Qualcomm is one of the few, if not the only, semiconductor companies to offer AI engines both in SoCs for mobile edge processing and in cloud servers. With over a decade of experience with mobile and now data center AI, the company is in a unique position to build the future:

We believe that this comprehensive strategy coupled with leadership performance and power efficiency will position Qualcomm well for significant growth in AI.

Scientists Are Giving AI The Ability to Imagine Things It’s Never Seen Before – ScienceAlert

Posted: at 12:25 am

Artificial intelligence (AI) is proving very adept at certain tasks, like inventing human faces that don't actually exist or winning games of poker, but these networks still struggle when it comes to something humans do naturally: imagine.

Once human beings know what a cat is, we can easily imagine a cat of a different color, or a cat in a different pose, or a cat in different surroundings. For AI networks, that's much harder, even though they can recognize a cat when they see it (with enough training).

To try and unlock AI's capacity for imagination, researchers have come up with a new method for enabling artificial intelligence systems to work out what an object should look like, even if they've never actually seen one exactly like it before.

"We were inspired by human visual generalization capabilities to try to simulate human imagination in machines," says computer scientist Yunhao Gefrom the University of Southern California (USC).

"Humans can separate their learned knowledge by attributes for instance, shape, pose, position, color and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks."

The key is extrapolation: being able to use a big bank of training data (like pictures of a car) to then go beyond what's seen into what's unseen. This is difficult for AI because of the way it's typically trained to spot specific patterns rather than broader attributes.

What the team has come up with here is called controllable disentangled representation learning, and it uses an approach similar to those used to create deepfakes: disentangling different parts of a sample (so separating face movement and face identity, in the case of a deepfake video).

It means that if an AI sees a red car and a blue bike, it will then be able to 'imagine' a red bike for itself even if it has never seen one before. The researchers have put this together in a framework they're calling Group Supervised Learning.

Extrapolating new data from training data. (Itti et al., 2021)

One of the main innovations in this technique is processing samples in groups rather than individually, and building up semantic links between them along the way. The AI is then able to recognize similarities and differences in the samples it sees, using this knowledge to produce something completely new.
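
A toy sketch of that recombination idea (not the authors' Group Supervised Learning code): separate encoders produce a "shape" code and a "color" code, and swapping one code between samples lets a decoder render a combination never seen in training, such as a red bike built from a blue bike and a red car. All dimensions and inputs below are arbitrary placeholders.

```python
# Illustrative only: attribute codes are swapped between samples to "imagine"
# a new combination. Untrained toy model; a real system learns these encoders.
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    def __init__(self, in_dim=64, code_dim=8):
        super().__init__()
        self.shape_enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.color_enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(2 * code_dim, in_dim)

    def forward(self, x):
        code = torch.cat([self.shape_enc(x), self.color_enc(x)], dim=-1)
        return self.decoder(code)

    def recombine(self, shape_src, color_src):
        # Shape from one sample, color from another: the "red bike" case.
        code = torch.cat([self.shape_enc(shape_src), self.color_enc(color_src)], dim=-1)
        return self.decoder(code)

model = Disentangler()
blue_bike = torch.randn(1, 64)   # stand-ins for learned image features
red_car = torch.randn(1, 64)
imagined_red_bike = model.recombine(shape_src=blue_bike, color_src=red_car)
print(imagined_red_bike.shape)   # torch.Size([1, 64])
```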

"This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in AI systems, bringing them closer to humans' understanding of the world," says USC computer scientist Laurent Itti.

These ideas aren't completely new, but here the researchers have taken the concepts further, making the approach more flexible and compatible with additional types of data. They've also made the framework open source, so other scientists can make use of it more easily.

In the future, the system developed here could guard against AI bias by removing more sensitive attributes from the equation, helping to make neural networks that aren't racist or sexist, for example.

The same approach could also be applied in the fields of medicine and self-driving cars, the researchers say, with AI able to 'imagine' new drugs, or visualize new road scenarios that it hasn't been specifically trained for in the past.

"Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique," says Itti.

The research has been presented at the 2021 International Conference on Learning Representations and can be read here.

Freshworks: 93% of IT managers have deployed AI, or plan to soon – VentureBeat

Posted: at 12:25 am

Nearly all IT managers (93%) are currently exploring or deploying some level of AI to streamline help desk systems, according to a new report from Freshworks. Half of IT managers said they have already implemented AI tools.

Nearly 70% of IT managers said AI is either critical or very important for upgrading and modernizing their service desk capabilities. Even so, respondents said there are certain prerequisites for AI-enabled solutions. While the most desired characteristic of AI tools is their ease of integration with existing IT infrastructure, a majority of respondents indicated that any AI solutions for IT service management (ITSM)/IT operations management (ITOM) should be intuitive, scalable, collaborative, and fast and easy to deploy.

The survey explored a key metric associated with today's demanding IT environment: the number of IT service inquiries received by the IT support desk each day. That number ranged from an average of 44 inquiries per day for small companies to 725 per day for large organizations.

ITSM chatbots were the clear leader in planned or actual AI deployments. The survey found that 25% of respondents expected AI-powered technologies to reduce IT staff workloads, and that 39% have already experienced this benefit.

Survey respondents also identified their key considerations when implementing AI: speed of implementation (40%), integration with legacy systems (40%), overall cost of implementation (38%), and training the AI bot solution to return the most accurate responses (39%).

Conducted across 14 countries and surveying more than 850 senior IT executives, the survey reveals that AI has hit the mainstream.

Read the full "Right sizing AI" report from Freshworks.
