Artificial Intelligence and Machine Learning Are Headed for A Major Bottleneck – Here's How We Solve It – Datanami


Artificial intelligence (AI) and machine learning (ML) are already changing the world, but the innovations we're seeing so far are just a taste of what's around the corner. We are on the precipice of a revolution that will affect every industry, from business and education to healthcare and entertainment. These new technologies will help solve some of the most challenging problems of our age and bring changes comparable in scale to the Renaissance, the Industrial Revolution, and the electronic age.

While the printing press, fossil fuels, and silicon drove these past epochal shifts, a new generation of algorithms that automate tasks previously thought impossible will drive the next revolution. These new technologies will allow self-driving cars to identify traffic patterns, automate energy balancing in smart power grids, enable real-time language translation, and pioneer complex analytical tools that detect cancer before any human could ever perceive it.

Well, that's the promise of the AI and ML revolution, anyway. And to be clear, these things are all within our theoretical reach. But what the tech optimists tend to leave out is that our path to the bright, shiny AI future has some major potholes in it. One problem is looming especially large. We call it the dirty secret of AI and ML: right now, AI and ML don't scale well.

Scale, the ability to expand a single machine's capability to broader, more widespread applications, is the holy grail of every digital business. And right now, AI and ML don't have it. While algorithms may hold the keys to our future, when it comes to creating them, we're currently stuck in a painstaking, brute-force methodology.


Creating AI and ML algorithms isn't the hard part anymore. You tell them what to learn, feed them the right data, and they learn how to parse novel data without your help. The labor-intensive piece comes when you want the algorithms to operate in the real world. Left to their own devices, AI will suck up as much time, compute, and data/bandwidth as you give it. To be truly effective, these algorithms need to run lean, especially now that businesses and consumers are showing an increasing appetite for low-latency operations at the edge. Getting your AI to run in an environment where speed, compute, and bandwidth are all constrained is the real magic trick here.

Thus, optimizing AI and ML algorithms has become the signature skill of today's AI researchers and engineers. It's expensive in terms of time, resources, money, and talent, but essential if you want performant AI. However, today, the primary way we're addressing the problem is via brute force: throwing bodies at the problem. Unfortunately, the demand for these algorithms is exploding while the pool of qualified AI engineers remains relatively static. Even if it were economically feasible to hire them, there are not enough trained AI engineers to work on all the projects that will take the world to the resplendent AI/sci-fi future we've been promised.

But all is not lost. There is a way for us to get across the threshold to achieve the exponential AI advances we require. The answer to scaling AI and ML algorithms is actually a simple idea: train ML algorithms to tune ML algorithms, an approach the industry calls Automated Machine Learning, or AutoML. Tuning AI and ML algorithms may be more of an art than a science, but then again, so are driving, photo retouching, and instant language translation, all of which are addressable via AI and ML.
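
To make the idea concrete, here is a minimal sketch of the outer loop at the heart of AutoML: one algorithm searches over the configuration of another learning algorithm and keeps whatever scores best. It uses scikit-learn's RandomizedSearchCV purely as a stand-in; real AutoML systems are far more sophisticated, but the shape of the loop is the same.

```python
# Minimal illustration of the AutoML idea: one algorithm tunes another.
# A toy sketch using scikit-learn's randomized search, not a production system.
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# The "inner" learner whose hyperparameters we want tuned automatically.
model = RandomForestClassifier(random_state=0)

# The "outer" search: it proposes configurations, trains, measures,
# and keeps the best one -- no human in the tuning loop.
search = RandomizedSearchCV(
    model,
    param_distributions={
        "n_estimators": randint(50, 300),
        "max_depth": randint(3, 20),
        "min_samples_leaf": randint(1, 10),
    },
    n_iter=20,  # the search budget: how many configurations to try
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```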


AutoML will allow us to scale AI optimization so it can achieve full adoption throughout computing, including at the edge where latency and compute are constrained. By using hardware awareness in AutoML, we can push performance even further. We believe this approach will also lead to a world where the barrier to entry for AI programmers is lower, allowing more people to enter the field, and making better use of high-level programmers. It's our hope that the resulting shift will alleviate the current talent bottleneck the industry is facing.

Over the next few years, we expect to automate various AI optimization techniques such as pruning, distillation, neural architecture search, and others, to achieve 15-30x performance improvements. Google's EfficientNet research has also yielded very promising results in the field of auto-scaling convolutional neural networks. Another example is DataRobot's AutoML tools, which can be applied to automating the tedious and time-consuming manual work required for data preparation and model selection.
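
Of the techniques named above, magnitude pruning is the simplest to picture: drop the weights that contribute least and keep the rest. The sketch below is a toy illustration in plain NumPy, not any production implementation; real pruning pipelines prune gradually, fine-tune between rounds, and need sparse kernels or hardware support to turn the removed weights into actual speed-ups.

```python
# Toy illustration of magnitude pruning on a single weight matrix.
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out roughly the smallest-magnitude `sparsity` fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only the larger weights
    return weights * mask

rng = np.random.default_rng(0)
layer = rng.normal(size=(256, 256))               # stand-in for a trained layer
pruned = prune_by_magnitude(layer, sparsity=0.9)
print(f"nonzero weights kept: {np.count_nonzero(pruned)} of {layer.size}")
```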

There is one last hurdle to cross, though. AI automates tasks we always assumed we needed humans to do, offloading these difficult feats to a computer programmed by a clever AI engineer. The dream of AutoML is to offload the work another level, using AI algorithms to tune and create new AI algorithms. But there's no such thing as a free lunch. We will now need even more highly skilled programmers to develop the AutoML routines at the meta-level. The good news is, we think we've got enough of them to do this.

But it's not all about growing the field from the top. This innovation not only expands the pool of potential programmers, allowing lower-level programmers to create highly effective AI; it also provides a de facto training path to move them into higher and higher-skilled positions. This in turn will create a robust talent pipeline that can supply the industry for years to come and ensure we have a good supply of hardcore AI developers for when we hit the next bottleneck. Because yes, there may come a day when we need Auto-AutoML, but for now, we want to take things one paradigm-shifting innovation at a time. It may sound glib, but we believe it wholeheartedly: the answer to the problems of AI is more AI.

About the authors: Nilesh Jain is a Principal Engineer at Intel Labs, where he leads the Emerging Visual/AI Systems Research Lab. He focuses on developing innovative technologies for edge/cloud systems for emerging workloads. His current research interests include visual computing and hardware-aware AutoML systems. He received an M.Sc. degree from the Oregon Graduate Institute/OHSU. He is a Senior IEEE Member and has published over 15 papers and holds over 20 patents.

Ravi Iyer is an Intel Fellow in Intel Labs where he leads the Emerging Systems Lab. His research interests include developing innovative technologies, architectures and edge/cloud systems for emerging workloads. He has published over 150 papers and has over 40 patents granted. He received his Ph.D. in Computer Science from Texas A&M. He is also an IEEE Fellow.


Leading the Artificial Intelligence Revolution – Psychiatric Times

Experts discuss psychiatry's role in the advancement of AI technologies at the 2022 APA Annual Meeting.


Artificial intelligence (AI) is here to stay, and it's really very important for the psychiatric field to take a leading role in developing it.

P. Murali Doraiswamy, MBBS, FRCP, of the Duke University School of Medicine discussed some of the latest developments in AI and its current and potential applications for psychiatry at the 2022 American Psychiatric Association (APA) Annual Meeting. He was joined in a panel discussion by Robert M. Califf, MD, MACC, commissioner of food and drugs for the US Food and Drug Administration (FDA), and moderator Samantha Boardman, MD, psychiatrist and author of Everyday Vitality: Turning Stress Into Strength.

"If I was a computer programmer describing the unmet needs in the mental health field, I would use a formula that would go something like this: 40, 40, 30, 0. This means only 40% of people who need care get access to the care they want; of those, only 40% get optimal care; of those, only 30% achieve remission; and 0% of people get preventive care in terms of resilience," Doraiswamy said. "That's the problem in this field, and the hope and promise is that AI and technology can help with some of this."
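
Multiplying those percentages through, a quick back-of-the-envelope reading of the quote rather than a figure Doraiswamy presented, shows just how steep the funnel is:

```python
# Rough funnel implied by the "40, 40, 30" figures -- my own arithmetic,
# not numbers presented by the speaker.
need_care = 1.00                          # everyone who needs care
get_access = need_care * 0.40             # 40% get access to care
get_optimal = get_access * 0.40           # of those, 40% get optimal care
achieve_remission = get_optimal * 0.30    # of those, 30% achieve remission
print(f"{achieve_remission:.1%} of those who need care reach remission")  # 4.8%
```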

Doraiswamy noted that $4 billion has been invested in AI, which has been doubling every 3 months for the past few years and transforming multiple fields, including psychiatry. In psychiatry, digital health coaching, stigma reduction, triage and suicide prediction, clinical decision support, and therapeutic apps have already transformed the field, with therapeutic wearables, robots and virtual reality avatars, protocol standardization, QC and practice management, and population forecasting on the horizon.

Mental health and wellness apps are particularly lucrative, with more than 5000 such apps currently on the market and around 250 million individuals having accessed them in the past 2-and-a-half years. Although some are FDA-approved, such as Endeavor Rx for the treatment of pediatric attention-deficit/hyperactivity disorder (ADHD) and Somryst for the treatment of chronic insomnia, it is important to note that many are neither approved nor overseen by the FDA unless they make a disease claim. Many also provide inaccurate information, send data to third parties, and are based on black box algorithms rather than randomized controlled trials. However, recent data shows that 82% of individuals believe robots can support their mental health better than humans can, as robots are unbiased and judgment-free, and able to provide them with quick answers.

In addition to this, clinicians have cited a number of potential harms of AI, including diagnostic and treatment errors; a loss in creative clinical thinking; greater risks of dehumanization and the jeopardization of the therapeutic process due to a lack of empathy; and less privacy and more fatalism in general. Clinicians also worry that the process for the machine to reach a diagnosis could turn into a black box process, and that, if time saved is used by administrators to increase patient loads, it might lead to greater clinician burnout.

Another potential concern for clinicians is job security in light of the "fully automated psychiatrist," which is another form of AI in development. Doraiswamy emphasized that a development like this should be regarded as an enhancement to psychiatric practice rather than as a replacement. "This is not going to replace you," Doraiswamy said. "This is a personal assistant that's going to be sitting in the waiting room doing the initial interview intake, get all the intake ready, and then summarize this information and have it ready for you so that you can make a diagnostic assessment."

Clinicians have also cited a number of benefits of AI, including better outcomes through more standardized protocols and the elimination of human error; less bias due to race or gender; and scalability of treatment. They say AI can also encourage patients to answer truthfully and accept support more objectively; use big data more efficiently than humans; provide practical guidance for trainees; and help elucidate etiologies of diseases that are currently not well understood.

The evidence for the predictive benefits of AI for psychiatry is also growing. Recent research has found support that AI may be able to detect Alzheimer disease 5 years before diagnosis and predict future depressive episodes and risk of suicide, and that machine learning may be able to predict a mental health crisis using only data from electronic health records. "By and large, this shows you the promise," Doraiswamy said. And, as DeepMind, which is owned by Google, recently released a general-purpose AI agent, it's probable that you could have 1 AI program that could help with all 159 diseases in the DSM-5, Doraiswamy said.

In order to maximize the potential benefits of AI and ensure that psychiatry leads this revolution, Doraiswamy recommended that the field develop clinical practice guidelines; provide proper education and training; ensure that cases are relevant and human centered; advocate for equitable and accountable AI; implement QI methods into workflow; create benchmarks for trusted and safe apps; and work with payers to develop appropriate reimbursement.

"We need to step back and acknowledge that digitization in our society and broad access to the internet are having profound effects that we don't yet understand, and that as we develop technologies with the plasticity that these technologies have, as opposed to traditional medical devices, you can change the software very quickly," Califf said. "It's a tremendous potential benefit, but it also carries very specific risks that we need to be aware of."

"Some very reasonable people might argue that AI and psychiatry don't belong even in the same sentence: that AI should play no role whatsoever in mental health care, and that the psychotherapeutic relationship is sacrosanct," Boardman concluded. "But in a world where there are so many unmet mental health needs, I think there's a very good argument that AI can not only improve care and diagnostics and increase access, but also reduce stigma and even flag potential issues and symptoms before they appear and reduce burnout among professionals."


Global Artificial Intelligence (AI) Partnering Deal Terms and Agreements Report 2022: Latest AI, Oligonucleotides Including Aptamers Agreements…

Dublin, June 08, 2022 (GLOBE NEWSWIRE) -- The "Global Artificial Intelligence (AI) Partnering Terms and Agreements 2010 to 2022" report has been added to ResearchAndMarkets.com's offering.

This report contains a comprehensive listing of all artificial intelligence partnering deals announced since 2010, including financial terms where available, along with over 750 links to online deal records of actual artificial intelligence partnering deals as disclosed by the deal parties.

The report provides a detailed understanding and analysis of how and why companies enter artificial intelligence partnering deals. The majority of deals are early development stage, whereby the licensee obtains a right or an option right to license the licensor's artificial intelligence technology or product candidates. These deals tend to be multicomponent, starting with collaborative R&D and progressing to commercialization of outcomes.

This report provides details of the latest artificial intelligence, oligonucleotides including aptamers agreements announced in the healthcare sectors.

Understanding the flexibility of a prospective partner's negotiated deal terms provides critical insight into the negotiation process in terms of what you can expect to achieve during the negotiation of terms. Whilst many smaller companies will be seeking details of the payments clauses, the devil is in the detail in terms of how payments are triggered - contract documents provide this insight where press releases and databases do not.

In addition, where available, records include contract documents as submitted to the Securities and Exchange Commission by companies and their partners.

Contract documents provide the answers to numerous questions about a prospective partner's flexibility on a wide range of important issues, many of which will have a significant impact on each party's ability to derive value from the deal.

In addition, a comprehensive appendix is provided, organized by artificial intelligence partnering company A-Z, with deal type definitions and an example artificial intelligence partnering agreement. Each deal title links via Weblink to an online version of the deal record and, where available, the contract document, providing easy access to each contract document on demand.

The report also includes numerous tables and figures that illustrate the trends and activities in artificial intelligence partnering and dealmaking since 2010.

In conclusion, this report provides everything a prospective dealmaker needs to know about partnering in the research, development and commercialization of artificial intelligence technologies and products.

Report scope

Global Artificial Intelligence Partnering Terms and Agreements includes:

In Global Artificial Intelligence Partnering Terms and Agreements, the available contracts are listed by:

Key Topics Covered:

Executive Summary

Chapter 1 - Introduction

Chapter 2 - Trends in artificial intelligence dealmaking
2.1. Introduction
2.2. Artificial intelligence partnering over the years
2.3. Most active artificial intelligence dealmakers
2.4. Artificial intelligence partnering by deal type
2.5. Artificial intelligence partnering by therapy area
2.6. Deal terms for artificial intelligence partnering
2.6.1 Artificial intelligence partnering headline values
2.6.2 Artificial intelligence deal upfront payments
2.6.3 Artificial intelligence deal milestone payments
2.6.4 Artificial intelligence royalty rates

Chapter 3 - Leading artificial intelligence deals
3.1. Introduction
3.2. Top artificial intelligence deals by value

Chapter 4 - Most active artificial intelligence dealmakers
4.1. Introduction
4.2. Most active artificial intelligence dealmakers
4.3. Most active artificial intelligence partnering company profiles

Chapter 5 - Artificial intelligence contracts dealmaking directory
5.1. Introduction
5.2. Artificial intelligence contracts dealmaking directory

Chapter 6 - Artificial intelligence dealmaking by technology type

Appendices
Appendix 1 - Artificial intelligence deals by company A-Z
Appendix 2 - Artificial intelligence deals by stage of development
Appendix 3 - Artificial intelligence deals by deal type
Appendix 4 - Artificial intelligence deals by therapy area
Appendix 5 - Deal type definitions
Appendix 6 - Further reading on dealmaking

Table of figures
Figure 1: Artificial intelligence partnering since 2010
Figure 2: Active artificial intelligence dealmaking activity since 2010
Figure 3: Artificial intelligence partnering by deal type since 2010
Figure 4: Artificial intelligence partnering by disease type since 2010
Figure 5: Artificial intelligence deals with a headline value
Figure 6: Artificial intelligence deals with an upfront value
Figure 7: Artificial intelligence deals with a milestone value
Figure 8: Artificial intelligence deals with a royalty rate value
Figure 9: Top artificial intelligence deals by value since 2010
Figure 10: Most active artificial intelligence dealmakers since 2010

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/irt217

About ResearchAndMarkets.com
ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.


Four skills that won’t be replaced by Artificial Intelligence in the future – Economic Times

You've probably heard for years that the workforce would be supplanted by robots. AI has already changed several roles, through self-checkouts, ATMs, and customer support chatbots. The goal is not to scare people, but to highlight the fact that AI is constantly altering lives and executing activities to replace the human workforce. At the same time, technological advancements are producing new career prospects. AI is predicted to increase the demand for professionals, particularly in robotics and software engineering. As a result, AI has the potential to eliminate millions of current occupations while also creating millions of new ones.

Among the many concerns that AI raises is the possibility of wiping out a large portion of the human workforce by eliminating the need for manual labour. But it will simultaneously liberate humans from having to perform tedious, repetitive tasks, allowing them to focus on more complex and rewarding projects, or simply take some much-needed time off.

According to a McKinsey report, depending on the adoption scenario, automation will displace between 400 and 800 million jobs by 2030, requiring up to 375 million people to change job categories entirely.

Though the potential of AI is enormous, it is also limited. It is apparent that AI will dominate the professional world on many levels, yet there is no denying that, as advanced as AI may be, it never will be able to replicate the human consciousness that keeps human beings at the top of the food chain.

So far, we have been talking about the jobs that may be taken as technology advances, but the human aspects of work cannot be replaced. Let's focus on what machines cannot do. There are some jobs that only humans are capable of performing.

There are jobs that require creation, conceptualization, complex strategic planning, and dealing with unknown spaces and feelings or emotional interactions that are well beyond the expertise of an AI as of now. Let's now talk about certain skills that are irreplaceable for as long as the human race exists.

1. Empathy is unique to humans: Some may argue that animals show empathy as well, but they are not the ones taking over the jobs. Humans, unlike programmed software designed to produce a specific result, are capable of feeling emotions. It may seem contradictory, but the personal affinity between a person and an organisation is the foundation of a professional relationship. Humans need a personal connection that extends beyond the professional realm to develop trust and human connection, something that bot technology completely lacks.

2. Emotional intelligence: Though accurate, AI is not intuitive or culturally sensitive, because those are human traits. No matter how accurately it is programmed to carry out a task, it cannot match the human ability to read a situation or the face of another person. It lacks the emotional intellect that makes humans capable of understanding and handling an interaction that needs emotional communication. During a customer care call, for example, one would always prefer a human who can read and understand the situation over an automated machine that cannot work or help beyond its programming.

3. Creativity, the perk of being human: AI can improve productivity and efficiency by reducing errors and repetition and replacing manual jobs with intelligent automated solutions, but it cannot comprehend human psychology. Furthermore, as the world becomes more AI-enabled, humans will be able to take on increasingly innovative tasks.

4. Problem-solving outside a code: Humans can deal with unexpected uncertainty by analysing the situation, thinking critically through complex scenarios, and adapting.

There is no doubt that AI on its own will not drive the future. To make AI work, humans need to be creative, insightful, and contextually aware. The reason for this is straightforward: humans will continue to provide value that machines cannot duplicate.

(Amarvijayy Taandur, CBO - BYLD group for Crucial Learning)


Netradyne Named to Forbes AI 50 List of Top Artificial Intelligence Companies of 2022 – PR Newswire

Netradyne Uses AI To Help Fleets Reduce Driver Incidents, Protect Against False Claims, and Create Safer Roads

SAN DIEGO, June 8, 2022 /PRNewswire/ -- Netradyne, an industry leader in artificial intelligence (AI) and edge computing focusing on driver and fleet safety, has been named to this year's Forbes AI 50 list for North America. Produced in partnership with Sequoia Capital, this list recognizes the standout privately held companies in North America that are making the most interesting and impactful uses of AI.

Forbes' editorial team acknowledged that AI technology is driving advancements in every industry, but that it can be difficult to identify which companies are utilizing such technology in transformative and measurable ways. The Forbes AI 50 list, now in its fourth edition, identifies North America's privately held companies at the forefront of the field, for whom AI is at the heart of their products and services.

In selecting honorees for this year's list, Forbes' 12-judge panel of experts in artificial intelligence from the fields of academia, technology, and venture capital evaluated hundreds of submissions, handpicking the top 50 most compelling companies.

"We are honored to be named to the Forbes AI 50 list," said Avneesh Agrawal, co-founder, and CEO of Netradyne. "At Netradyne, our mission is to create safer and smarter roadways for all. Using AI and edge computing technologies, we are revolutionizing the fleet transportation ecosystem by helping reinforce good driving behavior and similarly empowering drivers to improve their performance."

Agrawal continued, "Driveri's unique ability to analyze every mile of a journey allows insights into good driving behaviors, which can be recognized and rewarded to reinforce drivers' safe behavior, and drivers also have full transparency and coaching access to their personalized driving GreenZone score via the driver mobile app."

Netradyne provides fleets of all sizes and vehicle types with an advanced video safety camera, fleet performance analytics tracking, and driver awareness tools to help reduce risky driving behavior and reward safe driving decision-making. Driveri is the only solution that can positively recognize, empower and improve driver performance. The cascading effects of using Driveri's revolutionary AI to reinforce good behavior and improve driver performance in real time are powerful. Fleets see reduced accidents, higher safety scores, lower insurance costs, improved driver retention, and better fleet performance that shows up in increased profits.

Netradyne was selected from hundreds of applicants for inclusion in this prestigious list; a panel of 12 expert AI judges identified the 50 most compelling companies.

About Netradyne, Inc.

Netradyne harnesses the power of Computer Vision and Edge Computing to revolutionize the modern-day transportation ecosystem. Netradyne is an industry leader in fleet safety solutions, immediately improving driver behavior and fleet performance and setting commercial vehicle driving standards. Netradyne collects and analyzes more data points and meaningful information than any other fleet safety organization so customers can improve retention, increase profitability, enhance safety, and enable end-to-end transparency. Organizations trust Netradyne to build a positive, safe, and driver-focused culture to take their business to the next level.


SOURCE Netradyne


Artificial intelligence spotted inventing its own creepy language – New York Post

An artificial intelligence program has developed its own language and no one can understand it.

OpenAI is an artificial intelligence systems developer. Their programs are fantastic examples of super-computing, but there are quirks.

DALL-E 2 is OpenAI's latest AI system. It can generate realistic or artistic images from user-entered text descriptions.

DALL-E 2 represents a milestone in machine learning: OpenAI's site says the program learned the relationship between images and the text used to describe them.

A DALL-E 2 demonstration includes interactive keywords for visiting users to play with and generate images; toggling different keywords will result in different images, styles, and subjects.

But the system has one strange behavior: it's writing its own language of random arrangements of letters, and researchers don't know why.

Giannis Daras, a computer science Ph.D. student at the University of Texas, published a Twitter thread detailing DALL-E 2's unexplained new language.

Daras told DALL-E 2 to create an image of farmers talking about vegetables, and the program did so, but the farmers' speech read "vicootes", some unknown AI word.

Daras fed "vicootes" back into the DALL-E 2 system and got back pictures of vegetables.

"We then feed the words: 'Apoploe vesrreaitars' and we get birds," Daras wrote on Twitter.

"It seems that the farmers are talking about birds, messing with their vegetables!"


Daras and a co-author have written a paper on DALL-E 2's hidden vocabulary.

They acknowledge that telling DALL-E 2 to generate images of words - the command "an image of the word airplane" is Daras' example - normally results in DALL-E 2 spitting out gibberish text.

When plugged back into DALL-E 2, that gibberish text will result in images of airplanes, which says something about the way DALL-E 2 talks to and thinks of itself.

Some AI researchers argued that DALL-E 2's gibberish text is random noise.

Hopefully, we don't come to find that DALL-E 2's second language was a security flaw that needed patching after it's too late.

This article originally appeared on The Sun and was reproduced here with permission.


This Artificial Intelligence Stock Has a $596 Billion Opportunity – The Motley Fool

No technology has ever had the potential to transform the way the world does business quite like artificial intelligence (AI). Even in its early stages, it's already proving its ability to complete complex tasks in a fraction of the time that humans can, with adoption in both large organizations and small-scale start-ups accelerating.

C3.ai (AI -6.27%) is the world's first enterprise AI provider. It sells ready-made and customized applications to companies that want to leverage the power of this advanced technology without having to build it from scratch, and its customer base continues to grow in both number and pedigree.

C3.ai just reported its full-year results for fiscal 2022 (ended April 30), and beyond its strong financial growth, the company also revealed the magnitude of its future opportunity.


Sentiment among both investors and the general public continues to trend against fossil fuel companies as people become more conscious about humanity's impact on the environment. Oil and gas companies are constantly trying to improve their processes to produce cleaner energy, and artificial intelligence is now helping them do that.

C3.ai serves companies in 11 industries, but 54% of its revenue comes from the fossil fuel sector. The company has a long-standing partnership with oil and gas services giant Baker Hughes (BKR -3.76%). Together, they've developed a full suite of applications designed to enable the industry to predict catastrophic equipment failures and to help reduce carbon emissions. Shell (SHEL -2.87%), for example, uses C3.ai's software to monitor 10,692 pieces of equipment every single day, ingesting data from over 1.1 million sensors to make 515 million predictions each month.

C3.ai continues to report major customer wins. It just received its first two orders as part of a five-year, $500 million deal with the U.S. Department of Defense, which was signed last quarter. And its collaborations with the world's largest cloud services providers, like Alphabet's Google Cloud, have delivered further blockbuster signings like Tyson Foods and United Parcel Service. C3.ai and Google Cloud are leaning on each other's expertise to make advanced AI tools more accessible for a growing list of industries.

Overall, by the numbers, C3.ai's customer count is proof of steady demand.

C3.ai reported revenue of $72.3 million in the fourth quarter, which was a 38% year-over-year jump. For the full year, it met its previous guidance and delivered $252.8 million, which was also 38% higher compared to 2021.

But the company's remaining performance obligations (RPO) will likely capture the attention of investors because they increased by a whopping 62% to $477 million. It's an important number to track because it's effectively a looking glass into the future, as C3.ai expects RPO will eventually convert into revenue.

C3.ai isn't a profitable company just yet. It made a net loss of $192 million for its 2022 full year, a sizable jump from the $55 million it lost in 2021, mainly because it more than doubled its investment in research and development and increased its sales and marketing expenditure by nearly $80 million.

But since the company maintains a gross profit margin of around 75%, it has the flexibility to rein in its operating costs in the future to improve its bottom-line results. C3.ai is deliberately choosing to sacrifice profitability to invest heavily in growth because it's chasing an addressable market it believes will be worth $596 billion by 2025.

C3.ai maintains an extremely strong financial position, with over $950 million in cash and equivalents on its balance sheet. That means the company could operate at its 2022 loss rate of $192 million for the next five years before it runs out of cash, leaving plenty of time to add growth before working toward profitability.
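
That five-year figure is straightforward to check from the two numbers in the article (a back-of-the-envelope estimate that ignores any change in spending or revenue):

```python
# Back-of-the-envelope cash runway implied by the article's figures.
cash_on_hand = 950       # $ millions in cash and equivalents
annual_net_loss = 192    # $ millions lost in fiscal 2022
print(f"runway: {cash_on_hand / annual_net_loss:.1f} years")  # ~4.9 years
```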

Unfortunately, the current market environment has been unfavorable to loss-making technology companies. The Nasdaq 100 tech index currently trades in a bear market, having declined by 25% from its all-time high. It's representative of dampened sentiment thanks to rising interest rates and geopolitical tensions, which have forced investors to reprice their growth expectations.

C3.ai stock was having difficulties prior to this period, as growth hasn't been quite as strong as some early backers expected. Overall, the stock price has fallen by 87% since logging its all-time high of $161 per share shortly after its initial public offering in December 2020.

But that might be a great opportunity for investors with a long-term time horizon. C3.ai has some of the world's largest companies on its customer list, it's running a healthy gross profit margin, and it's staring at a $596 billion opportunity in one of the most exciting areas of the technology sector right now.


Timnit Gebru and the fight to make artificial intelligence work for Africa – Mail and Guardian

The way Timnit Gebru sees it, the foundations of the future are being built now. In Silicon Valley, home to the world's biggest tech companies, the artificial intelligence (AI) revolution is already well under way. Software is being written and algorithms are being trained that will determine the shape of our lives for decades or even centuries to come. If the tech billionaires get their way, the world will run on artificial intelligence.

Cars will drive themselves and computers will diagnose and cure diseases. Art, music and movies will be automatically generated. Judges will be replaced by software that supposedly applies the law without bias and industrial production lines will be fully automated and exponentially more efficient.

Decisions on who gets a home loan, or how much your insurance premiums will be, will be made by an algorithm that assesses your creditworthiness, while a similar algorithm will sift through job applications before any CVs get to a human recruiter (this is already happening in many industries). Even news stories, like this one, will be written by a program that can do it faster and more accurately than human journalists.

But what if those algorithms are racist, exclusionary or have dangerous implications that were not anticipated by the mostly rich, white men who created them? What if, instead of making the world better, they just reinforce the inequalities and injustices of the present? Thats what Gebru is worried about.

"We're really seeing it happening. It's scary. It's reinforcing so many things that are harming Africa," says Gebru.

She would know. Gebru was, until late 2020, the co-director of Google's Ethical AI program. Like all the big tech companies, Google is putting enormous resources into developing its artificial intelligence capabilities and figuring out how to apply them in the real world. This encompasses everything from self-driving cars to automatic translation and facial recognition programs.

The ultimate prize is a concept known as Artificial General Intelligence: a computer that is capable of understanding the world as well as any human and making decisions accordingly.

"It sounds like a god," says Gebru.

She was not at Google for long. Gebru joined in 2018, and it was her job to examine how all this new technology could go wrong. But input from the ethics department was rarely welcomed.

"It was just screaming about issues and getting retaliated against," she says. The final straw was when she co-authored a paper on the ethical dangers of large language models, used for machine translation and autocomplete, which her bosses told her to retract.

In December 2020, Gebru left the company. She says she was fired; Google says she resigned. Either way, her abrupt departure and the circumstances behind it thrust her into the limelight, making her the most prominent voice in the small but growing movement that is trying to force a reckoning with Big Tech before it is too late to prevent the injustices of the present being replicated in the future.

"Gebru is one of the world's leading researchers helping us understand the limits of artificial intelligence in products like facial-recognition software, which fails to recognise women of colour, especially black women," wrote Time magazine when it nominated Gebru as one of the 100 most influential people in the world in 2022.

"She offers us hope for justice-oriented technology design, which we need now more than ever."

Artificial intelligence is not yet as intelligent as it sounds. We are not at the stage where a computer can think for itself or match a human brain in cognitive ability. But what computers can do is process incomprehensibly vast amounts of data and then use that data to respond to a query. Take Dall-E 2, the image-generation software that created The Continent's cover illustration this week, developed by San Francisco-based OpenAI.

It can take a prompt such as "a brain riding a rocket ship heading towards the moon" and turn it into an image with uncannily accurate (sometimes eerie) results. But the software is not thinking for itself. It has been trained on data, in this case 650 million existing images, each of which has a text caption telling the computer what is going on in the picture. This means it can recognise objects and artistic styles and regurgitate them on command. Without this data, there is no artificial intelligence.

Like coal shovelled into a steamship's furnace, data is the raw material that fuels the AI machine. Gebru argues that all too often the fuel is dirty. Perhaps the data is scraped from the internet, which means it is flawed in all the ways the internet itself is flawed: Anglo- and Western-centric, prone to extremes of opinion and political polarisation, and all too often reinforcing stereotypes and prejudices. Dall-E 2, for instance, thinks that a CEO must be a white man, while nurses and flight attendants are all women.
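
A crude way to see how such skew propagates: if the captions a model learns from pair a role with one gender far more often than another, the model's associations simply mirror those counts. The tally below is an invented toy example, not real training data and not how Dall-E 2 is actually audited.

```python
# Toy illustration: skewed caption data produces skewed associations.
# The captions are invented for illustration; they are not real training data.
from collections import Counter

captions = [
    "a male CEO in a boardroom",
    "a male CEO shaking hands",
    "a male CEO giving a speech",
    "a female nurse at a hospital",
    "a female nurse checking a chart",
    "a female CEO in a boardroom",
]

role_gender = Counter()
for caption in captions:
    words = caption.split()
    gender, role = words[1], words[2]   # relies on the toy caption format above
    role_gender[(role, gender)] += 1

# A model trained on pairs like these "learns" that CEOs are mostly male
# simply because that is what the data says.
print(role_gender.most_common())
```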

More ominous still was an algorithm developed for the United States prison system, which predicted that black prisoners were more likely than white people to commit another crime, which led to black people spending longer in jail.

Or perhaps, in one of the great paradoxes of the field, the data is mined through old-fashioned manual labour: thousands of people hunched over computer screens, painstakingly sorting and labelling images and videos. Most of this work has been outsourced to the developing world, and the people doing the work certainly aren't receiving Silicon Valley salaries.

"Where do you think this huge workforce is? There are people in refugee camps in Kenya, in Venezuela, in Colombia, that don't have any sort of agency," says Gebru.

These workers are generating the raw material, but the final product, and the enormous profits that are likely to come with it, will be made for and in the West. "What does this sound like to you?" Gebru asks.

Timnit Gebru grew up in Addis Ababa (Timnit means "wish" in Tigrinya). She was 15 when Ethiopia went to war with Eritrea, forcing her into exile, first in Ireland and then in the US, where she first experienced casual racism. A temp agency boss told her mother to get a job as a security guard, because "who knows whatever degree you got from Africa". A teacher refused to place Gebru in an advanced class because "people like you always fail".

But Gebru didn't fail. Her academic record got her into Stanford, one of the world's most prestigious universities, where she hung out with her friends in the African Students Association and studied electrical engineering. It was here that both her technical ability and her political consciousness grew.

She worked at Apple for a stint, and then returned to the university, where she developed a growing fascination with artificial intelligence. "So then I started going to these conferences in AI or machine learning, and I noticed that there were almost no black people. These conferences would have 5 000 or 6 000 people from all over the world but one or two black people."

Gebru co-founded Black in AI for black professionals in the industry to come together and figure out ways to increase representation. By that stage, her research had already proved how this racial inequality was being replicated in the digital world. A landmark paper she co-authored with the Ghanaian-American-Canadian computer scientist Joy Buolamwini found that facial recognition software is less accurate at identifying women and people of colour, a big problem if law enforcement is using this software to identify suspects.

Gebru got her job at Google a couple of years later. It was a chance to fix what was broken from inside one of the biggest tech companies in the world. But, according to Gebru, the company did not want to hear about the environmental costs of processing vast data sets, or the baked-in biases that come with them, or the exploitation of workers in the Global South. It was too busy focusing on all the good it was going to do in the distant future to worry about the harm it might cause in the present.

This, she says, is part of a pernicious philosophy known as long-termism, which holds that lives in the future are worth just as much as lives in the present. "It's taken a really big hold in Silicon Valley," Gebru says. This philosophy is used by tech companies and engineers to justify decisions in product design and software development that do not prioritise immediate crises such as poverty, racism and climate change, or take other parts of the world into consideration.

Abeba Birhane, a senior fellow in Trustworthy AI at the Mozilla Foundation, says: "The way things are happening right now is predicated on the exploitation of people on the African continent. That model has to change. Not only is long-termism taking up so much of the AI narrative, it is something that is preoccupied with first-world problems.

"It's taking up a lot of air, attention, funding, from the kind of work Timnit is doing, the groundwork that specialist scholars of colour are doing on auditing data sets, auditing algorithms, exposing biases and toxic data sets."

In the wake of Gebru's departure from Google, some 2 000 employees signed a petition protesting against her dismissal. Although not acknowledging any culpability, Sundar Pichai, the chief executive of Alphabet, Google's parent company, said: "We need to assess the circumstances that led to Dr Gebru's departure, examining where we could have improved and led a more respectful process. We will begin a review of what happened to identify all the points where we can learn."

In November 2020, a civil war broke out in Ethiopia and once again Gebru's personal and professional worlds collided. As an Ethiopian, she has been vocal in raising the alarm about atrocities being committed, including running a fundraiser for victims of the conflict. As a computer scientist, she has watched in despair as artificial intelligence has enabled and exacerbated these atrocities.

On Facebook, hate speech and incitements to violence related to the Ethiopian conflict have spread with deadly consequences, with the company's algorithms and content moderators entirely unable or unwilling to stop it. For example, an investigation by The Continent last year, based on a trove of leaked Facebook documents, showed how the social media giant's integrity team flagged a network of problematic accounts calling for a massacre in a specific village. But no action was taken against the accounts. Shortly afterwards, a massacre took place.

The tide of the war was turned when the Ethiopian government procured combat drones powered by artificial intelligence. The drones targeted the rebel Tigray forces with devastating efficacy and have been implicated in targeting civilians too, including in the small town of Dedebit, where 59 people were killed when a drone attacked a camp for internally displaced people.

"That's why all of us need to be concerned about AI," says Gebru. "It is used to consolidate power for the powerful. A lot of people talk about AI for the social good. But to me, when you think of the current way it is developed, it is always used for warfare. It's being used in a lot of different ways by law enforcement, by governments to spy on their citizens, by governments to be at war with their citizens, and by corporations to maximise profit."

Once again, Gebru is doing something about it. Earlier this year, she launched the Distributed Artificial Intelligence Research Institute (Dair). The clue that Dair operates a little differently is in the word "distributed".

Instead of setting up in Silicon Valley, Dair's staff and fellows will be distributed all around the world, rooted in the places they are researching.

"How do we ring the alarm about the bad things that we see, and how can we develop this research in a way that benefits our community?" Raesetje Sefala, Dair's Johannesburg-based research fellow, puts it like this: "At the moment, it is people in the Global North making decisions that will affect the Global South."

As she explains it, Dair's mission is to convince Silicon Valley to take its ethical responsibilities more seriously, but also to persuade leaders in the Global South to make better decisions and to implement proper regulatory frameworks. For instance, Gmail passively scans all emails in Africa for the purposes of targeted advertising, but the European Union has outlawed this to protect its citizens.

"Our governments need to ask better questions," says Sefala. "If it is about AI for Johannesburg, they should be talking to the researchers here."

So far, Dair's team is small: just seven people in four countries. So, too, is the budget.

"What we're up against is so huge, the resources, the money that is being spent, the unity with which they just charge ahead. It's daunting sometimes if you think about it too much, so I try not to," says Gebru.

And yet, as Gebru's Time magazine nod underscored, sometimes it is less about the money and more about the strength of the argument. On that score, Gebru and Dair are well ahead of Big Tech and their not-quite-all-powerful algorithms.

This article first appeared in The Continent, the pan-African weekly newspaper produced in partnership with the Mail & Guardian. It's designed to be read and shared on WhatsApp. Download your free copy here.


Expert.ai and the University of Siena Launch the First Multilingual Crossword Solver Based on Artificial Intelligence – PR Newswire

Expert.ai to Livestream "WebCrow" on June 16th; Stage Set for Multilingual Showdown Against Human Experts

BOSTON, June 9, 2022 /PRNewswire/ -- Starting today, even machines can solve crossword puzzles thanks to WebCrow 2.0, software developed by the University of Siena in collaboration with expert.ai (EXAI:IM), a leading company in artificial intelligence (AI) for natural language processing (NLP) and understanding (NLU).

For over a century, crossword puzzles have been an intriguing challenge for humans because of the complexity and nuance of the human language. This also happens to be one of the most complex and challenging areas for AI. In fact, the most advanced linguistic technologies must possess a significant breadth and depth of knowledge to identify the correct meaning of words based on context (e.g., "trim a tree" vs. "trim on a house"). They must also be able to interpret slang, catch phrases, wordplay and other forms of ambiguity (e.g., the crossword clue "liquid that does not stick", answer: "scotch"). WebCrow 2.0 does this and more.
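
WebCrow's internals are not public, but the constraint-satisfaction half of crossword solving is easy to sketch. The toy backtracking filler below assumes it is handed candidate answers per clue (the part where the NLP actually happens) and picks a combination consistent with the crossing letters; the grid, slots and candidates are invented for illustration and have nothing to do with WebCrow's real architecture.

```python
# Toy constraint-satisfaction step of crossword solving: given candidate
# answers per slot (produced elsewhere by clue-answering modules), choose
# answers consistent with the crossing letters. All data here is invented.

# Each slot lists the (row, col) cells it occupies.
slots = {
    "1-Across": [(0, 0), (0, 1), (0, 2)],
    "1-Down":   [(0, 0), (1, 0), (2, 0)],
}

# Candidate answers per slot, as a clue-answering module might propose them.
candidates = {
    "1-Across": ["CAT", "CAR", "COG"],
    "1-Down":   ["RAT", "COW", "CAP"],
}

def solve(assignment, remaining, grid):
    """Backtracking search over candidate words, respecting crossing letters."""
    if not remaining:
        return assignment
    slot = remaining[0]
    for word in candidates[slot]:
        cells = slots[slot]
        if len(word) != len(cells):
            continue
        # Keep the word only if it matches every letter already on the grid.
        if all(grid.get(cell, letter) == letter
               for cell, letter in zip(cells, word)):
            new_grid = dict(grid)
            new_grid.update(zip(cells, word))
            result = solve({**assignment, slot: word}, remaining[1:], new_grid)
            if result is not None:
                return result
    return None

print(solve({}, list(slots), {}))
# -> {'1-Across': 'CAT', '1-Down': 'COW'}: both must share the 'C' at (0, 0).
```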

"We're excited to introduce our intelligent machine, WebCrow, and discuss its evolution and ability to create and solve a daily standard of life, the crossword puzzle," said Marco Gori, Professor, Department of Information Engineering and Mathematical Sciences, University of Siena. "Can machines solve these as well as humans? How do they compare definitions and answer clues with niche or abstract references? Can they pick up on plays on words, linguistic nuances and even humor? We're ready to demonstrate how leveraging context can enable humans and software to work together and take AI-based cognitive abilities to new levels."

Understanding, Knowledge graph, Reasoning

WebCrow 2.0 has been empowered with typical human skills to simulate human-like processes for reading, understanding, and reasoning. This allows the software to identify the meaning of words based on definitions and other clues in crossword puzzles. It accomplishes this by:

"It's our business to help organizations improve any activity or process based on understanding and managing the immense wealth of information at their disposal," said Marco Varone, CTO of expert.ai. "It was very gratifying to work with researchers from the University of Siena and support their efforts with our tools for disambiguation, knowledge graph and expertise in applying AI to language. Anyone who has been challenged by a crossword is familiar with nuanced clues, so automated puzzle solving is a great way to illustrate just how far we've come in advancing natural language technologies."

Livestream: Solving Crosswords with WebCrow AI

A movie about WebCrow and its crossword-solving abilities can be viewed on the website. A special LinkedIn NLP stream session, "Solving Crossword Puzzles with WebCrow AI," is scheduled for June 16 at 11:00 am EDT. Those interested in attending can register here.

The Next Challenge for WebCrow

Next up for WebCrow is to compete against human experts in a multilingual competition. The "WebCrow 2.0 - Human vs. Machine" challenge is organized by expert.ai and the University of Siena, in collaboration with SudokuEditori (unpublished crosswords for the Italian language) and AVCX "Crosswords for the (not) faint of heart" (unpublished crosswords in English).

For more information, visit the full announcement on the expert.ai website.

About expert.ai

Expert.ai (EXAI:IM) is a leading company in AI-based natural language software. Organizations in insurance, banking and finance, publishing, media and defense all rely on expert.ai to turn language into data, analyze and understand complex documents, accelerate intelligent process automation and improve decision making. Expert.ai's purpose-built natural language platform pairs simple and powerful tools with a proven hybrid AI approach that combines symbolic and machine learning to solve real-world problems and enhance business operations at speed and scale. With offices in Europe and North America, expert.ai serves global businesses such as AXA XL, Zurich Insurance Group, Generali, The Associated Press, Bloomberg INDG, BNP Paribas, Rabobank, Gannett and EBSCO. For more information, visit https://www.expert.ai

SOURCE expert.ai


Artificial Intelligence and Sexual Wellness: The Future is Looking (And Feeling) Good – Gizmodo Australia

What does artificial intelligence have to do with sex? No, it's not a set-up for a dirty joke. It's actually a question we recently asked the man in charge of tech at the world's largest sexual wellness company.

When you think of technology and innovation while talking about sexual wellness devices (the term we prefer to use for sex toys), it's likely you think of the speeds of a vibrator, or an app that controls something you use in the bedroom. But it goes much deeper than that. And the possibilities of where it can go in the future, thanks to tech such as artificial intelligence (AI), are as mind-blowing as an orgasm (at least for tech nerds like us).

The Lovehoney Group is on a mission to promote sexual happiness and empowerment through design, innovation and research and development. And after chatting with The Lovehoney Group's chief engineering and production officer Tobias Zegenhagen, it's easy to see just how much tech is actually involved in the sexual wellness industry.

But what if it could go one step further? What if a device just knew what felt good? Enter AI.

Currently, the user or their partner is the one controlling certain buttons, either on the device or a remote control. But, what if the device could be the one controlling the device?

"Algorithms, AI sensing your responses, then using that data in order to intelligently drive the toy the way you want it," is how Zegenhagen described a future that isn't all that far away: an AI controlling a toy based on your movements and reactions, learning from the data it has already pulled from you.

"You are getting information and you use that information intelligently in order to fulfil a user need."

It's pretty straightforward when it's broken down like that.

Lovehoney Group has a product in the market already, the We-Vibe Chorus, which allows you, via an app, to share vibrations during sex. Chorus matches its vibration intensity to the strength of your grip, with the idea being that it's completely in tune with you. The Chorus has a capacitive sensor in it that senses the act of sexual intercourse. During PIV sex, it senses the touching of the two bodies and, according to these touches, it controls the toy.
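
Lovehoney has not published how the Chorus maps its sensor to the motor, but the broad idea, read a signal, turn it into an intensity, and adapt to the individual over time, can be sketched in a few lines. Everything below (names, thresholds, the smoothing constant) is an assumption for illustration, not the company's code.

```python
# Hypothetical sketch of a sensor-driven control loop: map a capacitive
# reading to a vibration intensity, with slow smoothing so the mapping adapts
# to the individual user. All names and constants are invented.
import random
import time

def read_capacitive_sensor() -> float:
    """Stand-in for the hardware: returns a normalized reading in [0, 1]."""
    return random.random()

def to_intensity(reading: float, baseline: float) -> float:
    """Scale the reading relative to this user's observed baseline."""
    return max(0.0, min(1.0, (reading - baseline) * 2.0))

baseline = 0.5   # running estimate of this user's "resting" signal
alpha = 0.05     # smoothing factor: how quickly we adapt to the individual

for _ in range(10):                  # a real device would loop continuously
    reading = read_capacitive_sensor()
    baseline = (1 - alpha) * baseline + alpha * reading   # adapt slowly
    intensity = to_intensity(reading, baseline)
    print(f"reading={reading:.2f}  intensity={intensity:.2f}")
    time.sleep(0.05)
```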

"It is a straightforward algorithm," Zegenhagen said.

It actually makes a lot of sense. If you think about each of the sexual partners you've had throughout your life, no one's body is the same.

"How you move is individual and changes all the time from person to person, from day to day," Zegenhagen said, adding that what you want during sex is also individual.

"Controlling the toy in general, and then individualising it to the person. That is where I see AI coming in."

There's an immense amount of promise. But it's important that Lovehoney Group (and its peers, of course) use technology for the right purpose. That is, not using tech like AI just for the sake of it, but because it offers something of benefit to the sexual experience. And that data privacy is front and centre.

"It is definitely in our core to try to innovate, and we need to research in order to better understand user needs, and to use technology in order to advance and to innovate," Zegenhagen explained. But it isn't that straightforward. There's an insane number of people at Lovehoney Group in the R&D (research and development) space.

"If you compare it with other technological fields or areas, what is really particular in this case is that the requirements that you formulate are very blurry and very individual," he said. "If you ask somebody, 'What does sexual fulfillment mean for you?', 'What is a perfect orgasm?', you could ask a hundred people and you get 500 answers."

Unlike with, say, a phone, when it comes to sexual wellness, it's very difficult for a user to state the actual need. But as Zegenhagen explained, it is also very difficult to then verify that the need is actually being fulfilled by the technology. That's without even taking into consideration any biological and neurological factors.

"We have a rough understanding of how touch works and how we perceive stimulation," Zegenhagen said. "But do we know all the mechanisms behind it? Absolutely not. What happens when I touch a rough surface with my hand? How do my mechanical receptors perceive that? How is that being transferred to the brain? All this is pretty much unclear."

While a sexual wellness device isn't the same as medication, the closest comparison is probably with developing a new drug. You answer a need, test it, tweak it, test on a broader audience, but everyone's response to that medication will be different.

"The human being is too complex to fully understand," he added.

"I think that the easiest technical solution to meet a user need is the best technical solution, not the most complex one."

"You don't have to be technically complex to be innovative. You don't have to be technically complex to meet a user need; it has to be as simple as possible."

Well, yes, that's true. It would definitely kill the mood if you had to read a 30-page user manual, or learn something needed to be charged, paired, updated, etc., the moment you're about to use it.

"There is a huge playground for technology in our field," Zegenhagen said.

With AI offering all sorts of benefits to our sexual wellness, the future sure is looking (and feeling) good.
