7 AI Stocks to Buy for the Increasing Digitization of Healthcare – InvestorPlace

The increased move to digitization is only one of several trends the healthcare industry has embraced in the past few years. Transferring paper-based information to digital formats gives health professionals faster access to data, but the benefits don't stop there. To turn the stored information into something useful, the industry needs systems that find patterns, recognize what is important and perform predictive analysis. On that basis, investors should consider AI stocks.

The digitization of healthcare-related data will involve companies that lead in artificial intelligence. The rise of AI will not lead to job losses for healthcare professionals; instead, it will enable companies to automate repetitive tasks and free their staff to do other, more valuable things.

Here are seven AI stocks to buy for the increasing digitization of healthcare:

How might AI-powered systems contribute to a better healthcare system? Electronic health records (EHRs) provide a rich dataset that backs up the benefits of AI. As medical costs for patients increase at an uncontrollable rate, the industry will want to invest in AI solutions to lessen the load.


International Business Machines reported lower year-over-year revenue for the second quarter. Revenue fell 5.42% Y/Y to $18.12 billion, though the company earned $2.18 a share. Watson is a central brand for the AI solution IBM offers, as well as a part of its hybrid cloud strategy, which IBM says can help its clients work through both complex and regulated workloads.

According to IBM, Watson helps you predict and shape future outcomes, automate complex processes, and optimize your employees' time. In healthcare, for example, AI can help professionals by surfacing treatment options, supporting user needs, and identifying similarities and patterns.


As a tech stock, IBM trades at a price-to-earnings multiple well below both industry and S&P 500 averages. Markets are punishing IBM stock for the slow growth in its legacy businesses.

IBM still has plenty of work ahead in building Watson's AI doctor. Until it gets beyond the hype and delivers on tasks such as helping make diagnoses, IBM will rely on business growth from its other units, including Red Hat and Cloud Paks.


China-based Baidu opened a health internet hospital on March 18 and also recently established Baidu Health Technology. The company is committing to the online healthcare industry, backed by its strong experience in big data and AI technologies.

Baidu's value score is on par with the index, according to Stock Rover data. As its role in healthcare increases, its price-to-sales ratio should expand to match the industry average, and Baidu stock should rise as a result.


Baidu said last year that it would donate AI-integrated fundus screening machines to 500 medical centers. Already, the donation is paying off. The AI-powered camera scans the eye's fundus and produces a screening report in mere seconds. Because China has a shortage of ophthalmologists, Baidu is helping to increase the availability of patient care.

In the near term, the company will continue building its Baidu Health unit, which has already held more than 100 live broadcasts on Covid-19. Baidu Health also helps users register for doctor appointments, get information on hospitals and doctors, and connect with doctors for online consultations.

On Wall Street, the average price target for Baidu stock is $146.67 (per TipRanks).


In 2019, Medtronic launched its first AI system for colonoscopy. The company said, "The GI Genius module uses advanced artificial intelligence to highlight the presence of pre-cancerous lesions with a visual marker in real-time, serving as an ever-vigilant second observer." This new era of diagnostic endoscopy should improve the detection of lesions a doctor might otherwise miss, ultimately saving more lives.


Medtronic stock scores a 92/100 on quality, per Stock Rover. The market is ignoring its strong gross margins relative to the S&P 500.

Chairman and CEO Omar Ishrak recently explained how the model for personalized medicine is becoming a reality. That shift will depend on developing AI solutions for the healthcare market, and in doing so, the company will empower physicians.

By giving doctors access to clinical and behavioral data, providers will have more information available, and better-informed decisions will increase the effectiveness of patient treatments.


More of Stryker's customers are ordering robots. As robotic surgery procedures increase, the role of artificial intelligence in healthcare will rise in importance, too. Stryker is a leader in orthopedic robotics. In the second quarter, the company posted strong orders, thanks to its continued push for innovation. Joint replacement surgeries, for example, are growing above the market rate.


Stryker's price-to-free-cash-flow ratio is below that of its industry. Given its strong role in AI in healthcare, Stryker stock is trading at a discount.

On its conference call, Stryker's vice president of investor relations, Preston Wells, said, "Whether they're competitive accounts that are in or out, we're really just going to all of those different accounts and trying to find areas to place Mako."

Wells further implied the addressable market will get larger as customers ask for more solutions from Mako. The robotic arm uses 3D CT-based planning software, so surgeons will know more about the patient's anatomy, enabling them to offer a personalized joint replacement.


Nuance shares have risen steadily from sub-$15 lows to around $27. In its second quarter, the company posted organic revenue growth of 11% Y/Y. Enterprise revenue grew 19%, the highest in 10 years. Dragon Medical One is the flagship growth driver for Nuance; demand for that service grew 46% Y/Y.

Most analysts rate Nuance stock with a strong-buy recommendation.


Nuance accelerated its AI innovation and continued developing machine learning-based tools, which will improve workflow and productivity in healthcare. Dragon Medical One contributed to the strong annual recurring revenue growth in the first half.

Nuance scaled internationally by launching Dragon Medical One in five new European countries. The product is a cloud-based speech recognition solution that will improve the productivity of healthcare workers. It securely captures the patient's narrative and reduces the workload of clinicians.

The rise in telemedicine during the global pandemic will drive Nuance's AI business higher.


Google's mandate for DeepMind is to build products that support care teams and improve patient outcomes. Google has expertise in cloud storage, data security and app development, and it will work to develop mobile medical assistants for clinicians.


Alphabet's growth should outpace the S&P 500 index over the next year. Its 95/100 growth score suggests the stock will outperform the market, too.

In diagnostics, DeepMind will help healthcare workers detect eye disease from scans or assist in cancer radiotherapy treatment. More recently, Google's pending acquisition of Fitbit should accelerate the search giant's development of wearables in healthcare. And since these devices track the wearer's health metrics, the company will have plenty of user data to work with.

That volume of data will require machine learning and AI to decipher any meaningful patterns. Without AI, Google cannot perform the kind of initial diagnoses that may potentially save a wearer's life.

Google hasn't gotten the European Union's blessing on the deal, and a full-scale investigation will delay the Fitbit acquisition. But should it clear, the company's positioning in AI in healthcare will strengthen.


Alibaba has all the requisite back-end systems in place for AI in healthcare. Alibaba Cloud has AI-powered solutions that are solving real-world problems, and the company is tackling healthcare problems by analyzing clinical and hospital operations.

The company said the system uses 700 core indicators that come from medical institutions and regional medical operations. By feeding real-world data to the AI, the system gains accuracy and reliability. The platform can perform image and voice recognition, and medical institutions get diagnosis support from Alibaba's AI.

Alibaba's new AI system matters because it will save lives. The system detects coronavirus with 96% accuracy in mere seconds; by contrast, it takes humans around 15 minutes to make a diagnosis.

The fair value of Alibaba stock is $325.72, per Stock Rover. Its value score is low, but its growth score is 100/100.


Alibaba trained the system to detect coronavirus by introducing images and data from 5,000 confirmed coronavirus cases.

Disclosure: As of this writing, the author did not hold a position in any of the aforementioned securities.


Doctors are using AI to triage covid-19 patients. The tools may be here to stay – MIT Technology Review

The pandemic, in other words, has turned into a gateway for AI adoption in health care, bringing both opportunity and risk. On the one hand, it is pushing doctors and hospitals to fast-track promising new technologies. On the other, this accelerated process could allow unvetted tools to bypass regulatory processes, putting patients in harm's way.

"At a high level, artificial intelligence in health care is very exciting," says Chris Longhurst, the chief information officer at UC San Diego Health. "But health care is one of those industries where there are a lot of factors that come into play. A change in the system can have potentially fatal unintended consequences."

Before the pandemic, health-care AI was already a booming area of research. Deep learning, in particular, has demonstrated impressive results for analyzing medical images to identify diseases like breast and lung cancer or glaucoma at least as accurately as human specialists. Studies have also shown the potential of using computer vision to monitor elderly people in their homes and patients in intensive care units.

But there have been significant obstacles to translating that research into real-world applications. Privacy concerns make it challenging to collect enough data for training algorithms; issues related to bias and generalizability make regulators cautious to grant approvals. Even for applications that do get certified, hospitals rightly have their own intensive vetting procedures and established protocols. "Physicians, like everybody else, we're all creatures of habit," says Albert Hsiao, a radiologist at UCSD Health who is now trialing his own covid detection algorithm based on chest x-rays. "We don't change unless we're forced to change."

As a result, AI has been slow to gain a foothold. "It feels like there's something there; there are a lot of papers that show a lot of promise," said Andrew Ng, a leading AI practitioner, in a recent webinar on its applications in medicine. "But it's not yet as widely deployed as we wish."


Pierre Durand, a physician and radiologist based in France, experienced the same difficulty when he cofounded the teleradiology firm Vizyon in 2018. The company operates as a middleman: it licenses software from firms like Qure.ai and a Seoul-based startup called Lunit and offers the package of options to hospitals. Before the pandemic, however, it struggled to gain traction. "Customers were interested in the artificial-intelligence application for imaging," Durand says, "but they could not find the right place for it in their clinical setup."

The onset of covid-19 changed that. In France, as caseloads began to overwhelm the health-care system and the government failed to ramp up testing capacity, triaging patients via chest x-ray, though less accurate than a PCR diagnostic, became a fallback solution. Even for patients who could get genetic tests, results could take at least 12 hours and sometimes days to return, too long for a doctor to wait before deciding whether to isolate someone. By comparison, Vizyon's system using Lunit's software, for example, takes only 10 minutes to scan a patient and calculate a probability of infection. (Lunit says its own preliminary study found that the tool was comparable to a human radiologist in its risk analysis, but this research has not been published.) "When there are a lot of patients coming," Durand says, "it's really an attractive solution."

Vizyon has since signed partnerships with two of the largest hospitals in the country and says it is in talks with hospitals in the Middle East and Africa. Qure.ai, meanwhile, has now expanded to Italy, the US, and Mexico on top of existing clients. Lunit is also now working with four new hospitals each in France, Italy, Mexico, and Portugal.

In addition to the speed of evaluation, Durand identifies something else that may have encouraged hospitals to adopt AI during the pandemic: they are thinking about how to prepare for the inevitable staff shortages that will arise after the crisis. Traumatic events like a pandemic are often followed by an exodus of doctors and nurses. "Some doctors may want to change their way of life," he says. "What's coming, we don't know."

Hospitals' new openness to AI tools hasn't gone unnoticed. Many companies have begun offering their products for a free trial period, hoping it will lead to a longer contract.

"It's a good way for us to demonstrate the utility of AI," says Brandon Suh, the CEO of Lunit. Prashant Warier, the CEO and cofounder of Qure.ai, echoes that sentiment. "In my experience outside of covid, once people start using our algorithms, they never stop," he says.

Both Qure.ai's and Lunit's lung screening products were certified by the European Union's health and safety agency before the crisis. In adapting the tools to covid, the companies repurposed the same functionalities that had already been approved.


Qure.ais qXR, for example, uses a combination of deep-learning models to detect common types of lung abnormalities. To retool it, the firm worked with a panel of experts to review the latest medical literature and determine the typical features of covid-induced pneumonia, such as opaque patches in the image that have a ground glass pattern and dense regions on the sides of the lungs. It then encoded that knowledge into qXR, allowing the tool to calculate the risk of infection from the number of telltale characteristics present in a scan. A preliminary validation study the firm ran on over 11,000 patient images found that the tool was able to distinguish between covid and non-covid patients with 95% accuracy.

But not all firms have been as rigorous. In the early days of the crisis, Malik exchanged emails with 36 companies and spoke with 24, all pitching him AI-based covid screening tools. "Most of them were utter junk," he says. "They were trying to capitalize on the panic and anxiety." The trend makes him worry: hospitals in the thick of the crisis may not have time to perform due diligence. "When you're drowning so much," he says, "a thirsty man will reach out for any source of water."

Kay Firth-Butterfield, the head of AI and machine learning at the World Economic Forum, urges hospitals not to weaken their regulatory protocols or formalize long-term contracts without proper validation. "Using AI to help with this pandemic is obviously a great thing to be doing," she says. "But the problems that come with AI don't go away just because there is a pandemic."

UCSD's Longhurst also encourages hospitals to use this opportunity to partner with firms on clinical trials. "We need to have clear, hard evidence before we declare this as the standard of care," he says. Anything less would be a disservice to patients.



Paris-based Monk raises €2.1 million to expand its AI-based car damage inspection system – EU-Startups

French AI startup Monk, a unique system for car damage detection, has closed a €2.1 million seed round led by Iris Capital, alongside Plug and Play and key business angels including Patrick Sayer (former CEO of Eurazeo), Yannis Yahiaoui (founder of Adot), and Arthur Waller (founder of PriceMatch and Pennylane).

Monk was founded in 2019 when Aboubakr Laraki (CEO) and Fayçal Slaoui (CTO), both specialized in AI and image recognition, met and shared the conviction that the market for AI-based damage detection was still at its earliest stage, requiring an expert approach. From the very beginning of the company, they partnered with Getaround, a leader in the peer-to-peer car rental market, which provided car damage claims material that proved game-changing compared to the solutions then available in the industry.

Monk's solution is based on a ground-breaking artificial intelligence technology that detects damage on any car from pictures taken by users, renters and/or drivers, at a fraction of the price of traditional solutions. Monk has already convinced several professionals in the car logistics and rental industry, as well as a Tier 1 European car manufacturer (partnership to be announced later this year).

"Among all the solutions we've tested to automatically detect damage on vehicles from photos provided by our users, not only did Monk eclipse the competition but their results also exceeded by far our expectations," said P. Beret, VP of Risk at Getaround.

While the company is only starting its sales outreach, this new funding round will support Monk's R&D programme, the recruitment of new team members, especially data scientists, and its business expansion across Europe.

"Monk's mission is to transform the mobility and insurance market by bringing trust and efficiency whenever a car changes hands. We've built an AI-based, hardware-free, inspection system that assesses instantly any vehicle's condition from photos or videos. From day 1 the challenge proposed by Getaround was equivalent to climbing up the Everest. Internally it paved the way for a strong culture of breaking walls, and externally the product we ended up with has echoed a lot in the automotive and insurance industries. We've been lucky to quickly deploy our product in other contexts and build high quality customer relationships that we aim at consolidating and developing in the coming months. We are proud to work with our new partners, who understand very well our challenges. This funding will help us boost our R&D and scale our product market-fit internationally," commented Aboubakr Laraki, Monk's CEO and co-founder.

"Monk has the potential to address many issues related to car damages. They're starting with car rental claims processes in an industry on the verge of being drastically transformed by the recent crisis. But the insurance industry is also looking for tools to simplify and optimize its underwriting and claim appraisal processes, a $200 billion market today, where it would allow for more efficient, optimized and faster settlement. This would represent tremendous savings and a better customer satisfaction for insurers. We believe Monk has the potential to solve these issues with its cutting edge technology," declared Julien-David Nitlech, Managing Partner at Iris Capital.


A tug-of-war over biased AI – Axios

Why it matters: This debate will define the future of the controversial AI systems that help determine people's fates through hiring, underwriting, policing and bail-setting.

What's happening: Despite the rise of the bias-blockers in 2019, the bias-fixers remain the orthodoxy.

The other side: At the top academic conference for AI this week, Abeba Birhane of University College Dublin presented the opposing view.

The big picture: In a recent essay, Frank Pasquale, a UMD law professor who studies AI, calls this a new wave of algorithmic accountability that looks beyond technical fixes toward fundamental questions about economic and social inequality.

The bottom line: Technology can help root out some biases in AI systems. But this rising movement is pushing experts to look past the math to consider how their inventions will be used beyond the lab.

The impact: Despite a flood of money and politics propelling AI forward, some researchers, companies and voters hit pause this year.

But the question at the core of the debate is whether a fairness fix even exists.

The swelling backlash says it doesn't, especially when companies and researchers ask machines to do the impossible, like guess someone's emotions by analyzing facial expressions, or predict future crime based on skewed data.

This blowback's spark was a 2017 research project from MIT's Joy Buolamwini. She found that major facial recognition systems struggled to identify female and darker-toned faces.

What's next: Companies are tightening access to their AI algorithms, invoking intellectual property protections to avoid sharing details about how their systems arrive at critical decisions.


Elon Musk and Mark Zuckerberg are both wrong about AI and the robot apocalypse – Quartz

What if at the dawn of the industrial revolution in 1817 we had known the dangers of global warming? We would have created institutions to study man's impact on the environment. We would have enshrined national laws and international treaties, agreeing to constrain harmful activities and to promote sound ones, for the good of humanity. If we had been able to predict our future, the world as it exists 200 years later would have been very different.

In 2017, we are at the same critical juncture in the development of artificial intelligence, except, this time, we have the foresight to see the dangers on the horizon.

"AI is the rare case where I think we need to be proactive in regulation instead of reactive," Elon Musk recently cautioned at the US National Governors Association annual meeting. "AI is a fundamental existential risk for human civilization, but until people see robots going down the street killing people, they don't know how to react."

However, not all think the future is that dire, or that close. Mark Zuckerberg responded to Musk's dystopian statement in a Facebook Live post. "I think people who are naysayers and try to drum up these doomsday scenarios, I just, I don't understand it," he said while casually smoking brisket in his backyard. "It's really negative and in some ways I actually think it is pretty irresponsible." (Musk snapped back on Twitter the next day: "I've talked to Mark about this. His understanding of the subject is limited.")

So, which of the two tech billionaires is right? Actually, both are.

Musk is correct that there are real dangers to AI's advances, but his apocalyptic predictions distract from the more mundane but immediate issues that the technology presents. Zuckerberg is correct to emphasize the enormous benefits of AI, but he goes too far in terms of complacency, focusing on the technology that exists now rather than what might exist in 10 or 20 years.

We need to regulate AI before it becomes a problem, not afterward. This isn't just about stopping shady corporations or governments building autonomous killer robots in secret underground laboratories: We also need a global governing body to answer all sorts of questions, such as who is responsible when AI causes harm, and whether AIs should be given certain rights, just as their human counterparts have.

We've made it work before: in space. The 1967 Outer Space Treaty is a piece of international law that restricts the ability of countries to colonize or weaponize celestial bodies. At the height of the Cold War, and shortly after the first space flight, the US and USSR realized an agreement was desirable given the shared existential risks of space exploration. Following negotiations over several years, the treaty was adopted by the UN before being ratified by governments worldwide.

This treaty was put in place many years before we developed the technology to undertake the actions concerned, as a precautionary measure rather than as a reaction to a problem that already existed. AI governance needs to be the same.

In the middle of the 20th century, science-fiction writer Isaac Asimov wrote four Laws of Robotics.

Asimov's fictional laws would arguably be a good basis for an AI-ethics treaty, but he started in the wrong place. We need to begin by asking not what the laws should be, but who should write them.

Some federal and private organizations are making early attempts to regulate AI more systematically. Google, Facebook, Amazon, IBM, and Microsoft recently announced they have formed the Orwellian-sounding Partnership on Artificial Intelligence to Benefit People and Society, whose goals include supporting best practices and creating an open platform for discussion. Its partners now include various NGOs and charities such as UNICEF, Human Rights Watch, and the ACLU. In September 2016, the US government released its first-ever guidance on self-driving cars. A few months later, the UK's Royal Society and British Academy, two of the world's oldest and most respected scientific organizations, published a report that called for the creation of a new national body in the UK to steward the evolution of AI governance.

These kinds of reports show there is a growing consensus in favor of oversight of AI, but there's still little agreement on how this should actually be implemented beyond academic whitepapers circulating in governmental inboxes.

In order to be successful, AI regulation needs to be international. If it's not, we will be left with a messy patchwork of different rules in different countries that will be complicated (and expensive) for AI designers to navigate. If there isn't a legally binding global approach, some tech companies will also try to operate their businesses from wherever the law is the least restrictive, just as they do already with tax havens.

The solution also needs to involve players from both the public and private sector. Although the tech world's Partnership on Artificial Intelligence plans to invite academics, non-profits, and specialists in policy and ethics to the table, it would benefit from the involvement of elected governments, too. While the tech companies are answerable to their shareholders, governments are answerable to their citizens. For example, the UK's Human Fertilisation and Embryology Authority is a great example of an organization that brings together lawyers, philosophers, scientists, government, and industry players in order to set rules and guidelines for the fast-developing fields of fertility treatment, gene editing, and biological cloning.

Creating institutions and forming laws are only part of the answer: The other big issue is deciding who can and should enforce them.

For example, even if organizations and governments can agree which party should be liable if AI causes harm (the company, the coder, or the AI itself), what institution should hold the perpetrator to account, police the policy, deliver a verdict, and cast a sentence? Rather than create a new international police force for AI, a better solution is for countries to agree to regulate themselves under the same ethical banner.

The EU manages the tension between the need to set international standards and the desire of individual countries to set their own laws by issuing directives that are binding as to the result to be achieved but leave room for national governments to choose how to get there. This can mean setting regulatory floors or ceilings (a maximum speed limit, for instance), under which member states can then set any limit below that level.

Another solution is to write model laws for AI, where experts from around the world pool their talents in order to come up with a set of regulations that countries can then take from and apply as much or as little as they want. This is helpful to less-wealthy nations as it saves them the cost of developing fresh legislation, but at the same time respects their autonomy by not forcing them to adopt all parts.

* * *

The world needs a global treaty on AI, as well as other mechanisms for setting common laws and standards. We should be thinking less about how to survive a robot apocalypse and more about how to live alongside them, and that's going to require some rules that everyone plays by.



Artificial Intelligence is Key: Why the Transition to Our Future Energy System Needs AI – POWER magazine

On any given day, the electric power industry's operations are complex and its responsibilities vast. As the industry continues to play a critical role in meeting global climate goals, it must simultaneously support demand increases, surges in smart appliance adoption, and the expansion of decentralized operating systems. And that just scratches the surface.

Behind the scenes, there's the power grid operator, whose role is to monitor the electricity network 24 hours per day, 365 days per year. As a larger number of lower capacity systems (such as renewables) come online and advanced network components are integrated into the grid, generation becomes exponentially more complex, decentralized and variable, stretching control room operators to their limits.

More locally, building owners and controllers (Figure 1) are being challenged to deploy grid-interactive intelligent elements that can flexibly participate in grid level operations to economically enhance grid resiliency (while also saving money for the building owner).

Outside those buildings, electric utilities collect millions of images of their transmission and distribution (T&D) infrastructure to assess equipment health and support reliability investments. But the ability to collect imagery has outpaced utility staffs' ability to analyze and evaluate the images.

On the generation side, operators are being increasingly pressured by market changes to decrease operations and maintenance (O&M) costs while maintaining, and if possible, increasing production revenue.

So how best to manage these current and future challenges? The solution may lie within another industry: artificial intelligence.

"If you step back for a moment you realize there are two (separate) trillion-dollar industries, the energy industry and the data and information industry, which are now intersecting in a way they never have before," said Arun Majumdar, Stanford University's Jay Precourt Provostial Chair Professor of Mechanical Engineering, the founding director of ARPA-E, and a member of the EPRI Board of Directors. Majumdar spoke at an Electric Power Research Institute (EPRI) AI and Electric Power Roundtable discussion earlier this year. "The people who focus on data do not generally have expertise regarding the electricity industry and vice versa. We have entities like EPRI trying to connect the two and this is of enormous value."

Take the power grid operator challenge, for example. EPRI is exploring an AI reinforcement learning (RL) agent that can act as a continuously learning, algorithm-based autopilot for operators to optimize performance. The goal is not to replace operators, who are essential for transmission operations, but rather to develop tools to augment their decision-making ability using RL.
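To make the reinforcement-learning idea concrete, here is a minimal, generic Q-learning loop on an invented toy dispatch problem. It is only a sketch of the mechanics; it is not EPRI's agent, and every state, action, and reward in it is hypothetical.

```python
# Toy Q-learning loop: a hypothetical "grid" with 3 load states and 2 actions
# (hold vs. redispatch). The dynamics and rewards are invented purely to
# illustrate how an RL agent learns from trial and error.
import random

N_STATES, N_ACTIONS = 3, 2          # 0=low, 1=normal, 2=high load; 0=hold, 1=redispatch
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(state, action):
    """Invented dynamics: redispatching under high load is rewarded."""
    if state == 2 and action == 1:
        return 1, +1.0               # load returns to normal, positive reward
    if state == 2 and action == 0:
        return 2, -1.0               # overload persists, penalty
    return random.choice([0, 1, 2]), 0.0

state = 1
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, reward = step(state, action)
    # standard Q-learning update
    Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
    state = nxt

print("Learned values in the high-load state:", Q[2])  # redispatch should score higher
```

The same learn-by-feedback loop, scaled up with far richer state representations and safety constraints, is the mechanism behind the control-room "autopilot" concept described above.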

Turning to building operators, recent advances in building controls technology, enabled by the model predictive control (MPC) framework, have focused on minimizing operating costs or energy use, or maximizing occupant comfort. But most commercial building MPC case studies have been abandoned because they can be labor-intensive and costly to customize and maintain.
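As a rough sketch of what model predictive control means in this context, a building controller can pick a cooling schedule by solving a small optimization over the next few hours. The horizon, prices, and comfort band below are invented for illustration and are not EPRI's models.

```python
# Toy MPC step: choose cooling energy for the next few hours to minimize
# electricity cost while keeping indoor temperature inside a comfort band.
# All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

prices = np.array([0.10, 0.12, 0.30, 0.28])   # $/kWh over a 4-hour horizon
t0 = 24.0                                      # current indoor temperature, deg C
drift = 0.8                                    # passive warming per hour, deg C
cool_per_kwh = 1.5                             # cooling effect per kWh, deg C

# Decision variables: cooling energy used in each hour (kWh), bounded 0..3.
# Temperature after hour k: t0 + drift*(k+1) - cool_per_kwh * sum(u[0..k])
# Constraint: temperature must stay at or below 26 deg C after every hour.
A_ub, b_ub = [], []
for k in range(len(prices)):
    A_ub.append([-cool_per_kwh if i <= k else 0.0 for i in range(len(prices))])
    b_ub.append(26.0 - (t0 + drift * (k + 1)))

res = linprog(c=prices, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 3)] * len(prices), method="highs")
print("cooling schedule (kWh per hour):", np.round(res.x, 2))
```

Because the cheap hours come first in this toy example, the solver front-loads the cooling, which is exactly the kind of cost-aware, grid-responsive behavior the prose describes.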

EPRI is developing models and tools which will enable operators to enhance their responsiveness and flexibility to utility grid signals in the most cost-effective way. Coupled with the digitization of building control systems, AI predictive models will provide utilities and customers greater affordability, resiliency, environmental performance, and reliability.

In late May, EPRI brought more than 100 organizations together across the two industries in a Reverse Pitch event where electric power utilities presented their biggest challenges, and AI companies responded with potential solutions.

"We want to help increase adoption of proven AI technologies, and that means we need to match solutions with the needs and issues utilities have," said Heather Feldman, EPRI Innovation Director for the nuclear energy sector. "Utilities sharing operating experiences, use cases, and, just as importantly, their data across the community we're building with our AI and EPRI initiatives will enable the acceleration of AI technology deployment."

Feldman hosted the last panel discussion at the Reverse Pitch event, where speakers from Stanford University, Massachusetts Institute of Technology (MIT), Idaho National Lab (INL), SFL Scientific and EPRI discussed the future of AI (Figure 2) for electric power.

"The utility sector by nature is a risk-averse industry, but it's time to think about how to adapt their business models to embrace new AI technologies," said Liang Min, Managing Director of the Bits & Watts Initiative at Stanford University. "If utilities dedicate resources to identifying right use cases and conducting pilot programs, I think they will see benefits, and it will eventually lead to enterprise-wide adoption."

"Validating different AI applications will help end-users and regulators determine their effectiveness, without eroding safety and reliability," said Idaho National Lab Nuclear National Technical Director Craig Primer. "We need to overcome those barriers to drive adoption and reduce the manual approaches used today."

In 2020, a large California investor-owned utility and EPRI member inspected 105,000 distribution and 20,500 transmission structures. Conservative estimates gave the utility 750,000 images for staff to review and evaluate. That's about 3,500 person-hours and costs more than $350,000 at a standard utility staff rate for inspection review work.

With the wider adoption of drone technology in the very near future, significantly more images will be available than ever before. However, without the augmented evaluation capabilities offered by AI, evaluation costs will correspondingly and exponentially increase. Inspections are complex tasks that become even more complicated when drones are utilized.

EPRI is working with utilities and the AI community to build a foundation for machine learning to facilitate models that can detect damaged T&D assets (Figure 3) and assist staff in more efficiently managing the volume of images. But just as critically, it's also taking on the tasks of collecting, anonymizing, labeling, and sharing imagery for model development. These data sets, along with a utility consensus taxonomy and data labeling process, are needed to achieve desired improvements in efficiency, predictive modeling, damage identification, and repair/replacement of equipment.

During the Reverse Pitch event, Boston-based SFL Scientific, an AI consulting company, highlighted the significant technical and operational challenges associated with development of end-to-end AI applications, including validating machine and deep learning models, optimizing their performance long-term, and integrating the output into workflows and production pipelines.

"AI is hard, it's not easy," said Michael Segala, CEO of SFL Scientific. "Introducing AI is essentially breaking people's workflow, injecting risk into their process, which can break down adoption. This is maybe significantly more difficult for utilities based on the regulations that are set and consequences of getting things wrong. But there's a great ecosystem, like the folks here (at the Reverse Pitch) that will help with the journey and be a part of that adoption, so utilities don't fail and risks are reduced."

Now there's a new layer to consider: the increasing urgency to protect against threats to our energy infrastructure, recently heightened following the May cyberattack on one of the U.S.'s largest fuel pipelines.

"As physical threats to energy grids increase, connecting measures to ensure grid readiness, energy security, and resilience becomes critical," said Myrna Bittner, founder and CEO of RUNWITHIT (RWI) Synthetics, an AI-based modelling company. "Add on the pressures of electrification, decentralization, climate change, and cyberattacks, and the demand grows for even more adaptive scenario planning, mitigating technology and education."

Bittner presented RWI's Single Synthetic Environment modeling approach at the EPRI Reverse Pitch event. These geospatial environments include hyper-localized models of the people and businesses, the infrastructure, technology and policies, and then enable future scenarios to play forward.

On the energy generation side, EPRI continues to explore machine learning models to reduce O&M costs. One project that has advanced rapidly is wind turbine component maintenance. EPRI research shows the current gearbox cumulative failure rate during 20 years of operation is in the range of 30% (best case scenario) to 70% (worst case scenario). When a component like a gearbox prematurely fails, operation and maintenance (O&M) costs increase, and production revenue is lost. A full gearbox replacement may cost more than $350,000.

EPRI is researching and testing a physics-based machine-learning hybrid model that can identify gearbox damage in its early stages and extend its life. If a damaged bearing within a gearbox is identified early, the repair may only cost around $45,000, a savings of nearly 90%.

These projects all demonstrate real solutions that are deployed and are showing real results and increases in efficiencies. Many are set to be further deployed to enable the global energy system's transition. "AI is at a point where I believe the technology has advanced to support scaling up adoption. Meanwhile we know that society depends on electric power 24/7 to run everything from health care and emergency resources, to communications infrastructure and, in today's current situation, working from our homes," said Neil Wilmshurst, Senior Vice President of EPRI's Energy System Resources. "Reliability and resilience have never been more essential in a time when we're also making a critical energy systems transition to meet global climate goals and demand needs. AI must be a tool in the toolbox, and the time is now, not tomorrow, to accelerate those applications."

Jeremy Renshaw is Senior Program Manager, Artificial Intelligence, at the Electric Power Research Institute (EPRI).


Microsoft Names AI Top Priority In Annual Report – Investopedia


"Our strategic vision is to compete and grow by building best-in-class platforms and productivity services for an intelligent cloud and an intelligent edge infused with AI," the company said in the annual report, which came out Wednesday. We believe a ...


Apple is expanding its Seattle offices to focus on AI and machine learning – The Verge

In many ways, the tech world's AI arms race is really a fight for talent. Skilled engineers are in short supply, and Silicon Valley's biggest companies are competing to nab the best minds from academia and rival firms. Which is why it makes sense that Apple has announced it's expanding its offices in Seattle, where much of its AI and machine learning work is done.

Seattle is home not only to the University of Washington and its renowned computer science department, but also the Allen Institute for Artificial Intelligence. Microsoft and Amazon are headquartered nearby, and AI startups are finding a home in the region, too. Last August, Apple even bought a Seattle-based machine learning and artificial intelligence startup named Turi for an estimated $200 million, and the team is said to be moving into Apple's offices at Two Union Square as part of the expansion.

Carlos Guestrin, a University of Washington professor, former Turi CEO, and now director of machine learning at Apple, told GeekWire: "There's a great opportunity for AI in Seattle."

Guestrin said Apple's Seattle engineers would be looking at both long-term and near-term AI research, developing new features for the company's products "across the whole spectrum." He added: "We're trying to find the best people who are excited about AI and machine learning, excited about research and thinking long term, but also bringing those ideas into products that impact and delight our customers."

As part of the news, the University of Washington also announced a $1 million endowed professorship in AI and machine learning named after Guestrin. That's one way to give back to the AI community.


Facebook’s translations are now powered completely by AI – The Verge

Every day, Facebook performs some 4.5 billion automatic translations, and as of yesterday, they're all processed using neural networks. Previously, the social networking site used simpler phrase-based machine translation models, but it's now switched to the more advanced method. "Creating seamless, highly accurate translation experiences for the 2 billion people who use Facebook is difficult," explained the company in a blog post. "We need to account for context, slang, typos, abbreviations, and intent simultaneously."

The big difference between the old system and the new one is the attention span. While the phrase-based system translated sentences word by word, or by looking at short phrases, the neural networks consider whole sentences at a time. They do this using a particular sort of machine learning component known as an LSTM or long short-term memory network.
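As a rough, self-contained sketch of that idea (this is generic PyTorch illustration code, not Facebook's system, and the vocabulary size and dimensions are invented), an LSTM encoder reads the entire source sentence before any translation is produced, which is how the model keeps whole-sentence context:

```python
# Minimal LSTM encoder sketch: the whole source sentence is compressed into a
# hidden state that a decoder would later use to generate the translation.
import torch
import torch.nn as nn

class LSTMEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, sentence_length)
        embedded = self.embed(token_ids)
        _, (hidden, cell) = self.lstm(embedded)
        # hidden and cell summarize the entire sentence for the decoder
        return hidden, cell

encoder = LSTMEncoder(vocab_size=10_000)
sample = torch.randint(0, 10_000, (1, 7))   # one toy sentence of 7 token IDs
h, c = encoder(sample)
print(h.shape)  # torch.Size([1, 1, 512])
```

The contrast with the old approach is that a phrase-based system never builds such a sentence-wide summary; it stitches together translations of short chunks.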

The benefits are pretty clear. Facebook shared two examples of a Turkish-to-English translation, one produced by the old phrase-based system and one by the new neural system; taking into account the full context of the sentence produces a noticeably more accurate result.

"With the new system, we saw an average relative increase of 11 percent in BLEU, a widely used metric for judging the accuracy of machine translation, across all languages compared with the phrase-based systems," the company said.
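For readers unfamiliar with the metric, BLEU can be computed with off-the-shelf tools. The snippet below is a small illustration using NLTK with made-up sentences; it is not Facebook's evaluation pipeline.

```python
# BLEU compares overlapping n-grams between a machine translation and one or
# more human reference translations. The sentences here are invented.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["i", "will", "see", "you", "tomorrow"]]   # human reference(s)
candidate = ["i", "see", "you", "tomorrow"]             # machine output

# Smoothing avoids zero scores on very short sentences with missing n-grams.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```

An "11 percent relative increase" means the new system's BLEU scores were, on average, 11 percent higher than the phrase-based system's scores, not 11 points higher.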

When a word in a sentence doesn't have a direct corresponding translation in a target language, the neural system will generate a placeholder for the unknown word. A translation of that word is searched for in a sort of in-house dictionary built from Facebook's training data, and the unknown word is replaced. That allows abbreviations like "tmrw" to be translated into their intended meaning: "tomorrow."
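A toy illustration of that placeholder-and-dictionary step might look like the following; the lexicon entries and token format are invented for the example, since Facebook has not published its implementation:

```python
# Hypothetical post-editing step: unknown source words pass through the model
# as placeholder tokens and are then looked up in a small bilingual table.
lexicon = {"tmrw": "tomorrow", "pls": "please"}   # invented entries

def replace_placeholders(translated_tokens):
    out = []
    for tok in translated_tokens:
        if tok.startswith("<unk:") and tok.endswith(">"):
            source_word = tok[5:-1]                     # recover the source word
            out.append(lexicon.get(source_word, source_word))
        else:
            out.append(tok)
    return out

print(replace_placeholders(["see", "you", "<unk:tmrw>"]))  # ['see', 'you', 'tomorrow']
```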

"Neural networks open up many future development paths related to adding further context, such as a photo accompanying the text of a post, to create better translations," the company said. "We are also starting to explore multilingual models that can translate many different language directions."


AI Scientists Gather to Plot Doomsday Scenarios (and Solutions) – Bloomberg

Artificial intelligence boosters predict a brave new world of flying cars and cancer cures. Detractors worry about a future where humans are enslaved to an evil race of robot overlords. Veteran AI scientist Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground, gathered a group of experts in the Arizona desert to discuss the worst that could possibly happen -- and how to stop it.

Their workshop took place last weekend at Arizona State University with funding from Tesla Inc. co-founder Elon Musk and Skype co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing Adverse AI Outcomes," it was a kind of AI doomsday games that organized some 40 scientists, cyber-security experts and policy wonks into groups of attackers -- the red team -- and defenders -- blue team -- playing out AI-gone-very-wrong scenarios, ranging from stock-market manipulation to global warfare.

Horvitz is optimistic -- a good thing because machine intelligence is his life's work -- but some other, more dystopian-minded backers of the project seemed to find his outlook too positive when plans for this event started about two years ago, said Krauss, a theoretical physicist who directs ASU's Origins Project, the program running the workshop. Yet Horvitz said that for these technologies to move forward successfully and to earn broad public confidence, all concerns must be fully aired and addressed.

"There is huge potential for AI to transform so many aspects of our society in so many ways. At the same time, there are rough edges and potential downsides, like any technology," said Horvitz, managing director of Microsoft's Research Lab in Redmond, Washington. ``To maximally gain from the upside we also have to think through possible outcomes in more detail than we have before and think about how wed deal with them."

Participants were given "homework" to submit entries for worst-case scenarios. They had to be realistic -- based on current technologies or those that appear possible -- and five to 25 years in the future. The entrants with the "winning" nightmares were chosen to lead the panels, which featured about four experts on each of the two teams to discuss the attack and how to prevent it.

Photo: the blue team, including Launchbury, Fisher and Krauss, in the War and Peace scenario (Tessa Eztioni, Origins Project at ASU).

Turns out many of these researchers can match science-fiction writers Arthur C. Clarke and Philip K. Dick for dystopian visions. In many cases, little imagination was required -- scenarios like technology being used to sway elections or new cyber attacks using AI are being seen in the real world, or are at least technically possible. Horvitz cited research that shows how to alter the way a self-driving car sees traffic signs so that the vehicle misreads a "stop" sign as "yield."

The possibility of intelligent, automated cyber attacks is the one that most worries John Launchbury, who directs one of the offices at the U.S.'s Defense Advanced Research Projects Agency, and Kathleen Fisher, chairwoman of the computer science department at Tufts University, who led that session. What happens if someone constructs a cyber weapon designed to hide itself and evade all attempts to dismantle it? Now imagine it spreads beyond its intended target to the broader internet. Think Stuxnet, the computer virus created to attack the Iranian nuclear program that got out in the wild, but stealthier and more autonomous.

"We're talking about malware on steroids that is AI-enabled," said Fisher, who is an expert in programming languages.Fisher presented her scenario under a slide bearing the words "What could possibly go wrong?" which could have also served as a tagline for the whole event.

How did the defending blue team fare on that one? Not well, said Launchbury. They argued that advanced AI needed for an attack would require a lot of computing power and communication, so it would be easier to detect. But the red team felt that it would be easy to hide behind innocuous activities, Fisher said. For example, attackers could get innocent users to play an addictive video game to cover up their work.


To prevent a stock-market manipulation scenario dreamed up by University of Michigan computer science professor Michael Wellman, blue team members suggested treating attackers like malware by trying to recognize them via a database on known types of hacks. Wellman, who has been in AI for more than 30 years and calls himself an old-timer on the subject, said that approach could be useful in finance.

Beyond actual solutions, organizers hope the doomsday workshop started conversations on what needs to happen, raised awareness and combined ideas from different disciplines. The Origins Project plans to make public materials from the closed-door sessions and may design further workshops around a specific scenario or two, Krauss said.

DARPA's Launchbury hopes the presence of policy figures among the participants will foster concrete steps, like agreements on rules of engagement for cyber war, automated weapons and robot troops.

Krauss, chairman of the board of sponsors of the group behind the Doomsday Clock, a symbolic measure of how close we are to global catastrophe, said some of what he saw at the workshop "informed" his thinking on whether the clock ought to shift even closer to midnight. But don't go stocking up on canned food and moving into a bunker in the wilderness just yet.

"Some things we think of as cataclysmicmay turn out to be just fine," he said.


3 Important Ways Artificial Intelligence Will Transform Your Business And Turbocharge Success – Forbes

From the smallest local business to the largest global players, I believe every organization must embrace the AI revolution and identify how AI (artificial intelligence) will make the biggest difference to its business.


But before you can develop a robust AI strategy, in which you work out how best to use AI to drive business success, you first need to understand what's possible with AI. To put it another way, how are other companies using AI to drive success?

Broadly speaking, organizations are using AI in three main ways:

Creating more intelligent products

Offering a more intelligent service

Improving internal business processes

Let's briefly look at each area in turn.

Creating more intelligent products

Thanks to the Internet of Things, a whole host of everyday products are getting smarter. What started with smartphones has now grown to include smart TVs, smartwatches, smart speakers, and smart home thermostats, plus a range of more eyebrow-raising "smart" products such as smart nappies, smart yoga mats, smart office chairs, and smart toilets.

Generally, these smart products are designed to make customers' lives easier and remove those annoying bugbears from everyday life. For example, you can now get digital insoles that slip into your running shoes and gather data (using pressure sensors) about your running style. An accompanying app will give you real-time analysis of your running performance and technique, thereby helping you avoid injuries and become a better runner.

Offering a more intelligent service

Instead of the traditional approach of selling a product or service as a one-off transaction, more and more businesses are transitioning to a servitization model, in which the product or service is delivered as an ongoing subscription. Netflix is a prime example of this model in action. For a less obvious example, how about the Dollar Shave Club, which will deliver razor blades and grooming products to your door on a regular basis? Or Stitch Fix, a personalized styling service that delivers clothes to your door based on your personal style, size, and budget.

Intelligent services like this are reliant on data and AI. Businesses like Netflix have access to a wealth of valuable customer data, data that helps the company provide a more thoughtful service, based on what it knows the customer really wants (whether it's movies, clothes, grooming products or whatever).

Improving internal business processes

In theory, AI could be worked into pretty much any aspect of a business: manufacturing, HR, marketing, sales, supply chain and logistics, customer services, quality control, IT, finance and more.

From automated machinery and vehicles to customer service chatbots and algorithms that detect customer fraud, AI solutions and technologies are being incorporated into all sorts of business functions in order to maximize efficiency, save money and improve business performance.

So, which area should you focus on: products, services, or business processes?

Every business is different, and how you decide to use AI may differ wildly from even your closest competitor. For AI to truly add value in your business, it must be aligned with your company's key strategic goals, which means you need to be clear on what it is you're trying to achieve before you can identify how AI can help you get there.

That said, it's well worth considering all three areas: products, services and business processes. Sure, one of the areas is likely to be more of a priority than the others, and that priority will depend on your company's strategic goals. But you shouldn't ignore the potential of the other AI uses.

For example, a product-based business might be tempted to skip over the potential for intelligent services, while a service-based company could easily think smart products aren't relevant to its business model. Both might think AI-driven business processes are beyond their capabilities at this point in time.

But the most successful, most talked-about companies on the planet are those that deploy AI across all three areas. Take Apple as an example. Apple built its reputation on making and selling iconic products like the iPad. Yet, nowadays, Apple services (including Apple Music and Apple TV) generate more revenue than iPad sales. The company has transitioned from purely a product company to a service provider, with its iconic products supporting intelligent services. And you can be certain that Apple uses AI and data to enhance its internal processes.

In this way, AI can throw up surprising additions and improvements to your business model, or even lead you to an entirely new business model that you never previously considered. It can lead you from products to services, or vice versa. And it can throw up exciting opportunities to enhance the way you operate.

That's why I recommend looking at products, services, and business processes when working out your AI priorities. You may ultimately decide that optimizing your internal processes (for example, automating your manufacturing) is several years away, and that's fine. The important thing is to consider all the AI opportunities, so that you can properly prioritize what you want to achieve and develop an AI strategy that works for your business.

AI is going to impact businesses of all shapes and sizes, across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.


The grim fate that could be ‘worse than extinction’ – BBC News

Toby Ord, a senior research fellow at the Future of Humanity Institute (FHI) at Oxford University, believes that the odds of an existential catastrophe happening this century from natural causes are less than one in 2,000, because humans have survived for 2,000 centuries without one. However, when he adds the probability of human-made disasters, Ord believes the chances increase to a startling one in six. He refers to this century as "the precipice" because the risk of losing our future has never been so high.

Researchers at the Center on Long-Term Risk, a non-profit research institute in London, have expanded upon x-risks (existential risks) with the even-more-chilling prospect of suffering risks. These "s-risks" are defined as suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far. In these scenarios, life continues for billions of people, but the quality is so low and the outlook so bleak that dying out would be preferable. In short: a future with negative value is worse than one with no value at all.

This is where the world in chains scenario comes in. If a malevolent group or government suddenly gained world-dominating power through technology, and there was nothing to stand in its way, it could lead to an extended period of abject suffering and subjugation. A 2017 report on existential risks from the Global Priorities Project, in conjunction with FHI and the Ministry for Foreign Affairs of Finland, warned that a long future under a particularly brutal global totalitarian state could arguably be worse than complete extinction.

Singleton hypothesis

Though global totalitarianism is still a niche topic of study, researchers in the field of existential risk are increasingly turning their attention to its most likely cause: artificial intelligence.

In his singleton hypothesis, Nick Bostrom, director of Oxford's FHI, has explained how a global government could form with AI or other powerful technologies, and why it might be impossible to overthrow. He writes that a world with a single decision-making agency at the highest level could occur if that agency obtains a decisive lead through a technological breakthrough in artificial intelligence or molecular nanotechnology. Once in charge, it would control the advances in technology that prevent internal challenges, like surveillance or autonomous weapons, and, with this monopoly, remain perpetually stable.

More:

The grim fate that could be 'worse than extinction' - BBC News

HealthTensor raises $5M for its AI-based medical diagnosis tools – Healthcare IT News

HealthTensor, an artificial intelligence company creating software to help augment medical decision-making, has raised $5 million in a seed round of financing led by Calibrate Ventures, TenOneTen Ventures and Susa Ventures.

WHY IT MATTERS

The round also includes hospitals and physicians, including a medical officer at Amazon Health. Funds will be used to scale the company's software engineering and implementation team to keep up with demand from major health systems, the vendor said.

HealthTensor's software functions between physicians and the troves of raw medical data from any given patient, which often is more than any individual doctor can handle. The company uses advanced algorithms to do AI-enabled diagnosis, with the aim of ensuring no medical condition is overlooked. The software was designed with the physician workflow in mind, enabling frictionless adoption of the product by users, the company contended.

"HealthTensor makes me a better doctor because it allows me to spend less time in front of the computer and more time in front of the patient," said Dr. Tasneem Bholat, an early user of HealthTensor's software. "HealthTensor synthesizes all the data from the patient's chart, saving me from doing chart biopsy and surfacing diagnoses I might have otherwise missed."

The company's software currently is integrated within several hospitals and will expand to more in the coming months, the vendor reported.

THE LARGER TREND

The use of AI in healthcare has been on the rise throughout 2020. According to some experts, 2021 could be a big year for AI and machine learning.

"AI had become mythical, but 2021 looks set to be the year where it may come into its own in the health sector, along with the use of automation," said Dr. Sam Shah, chief medical strategy officer at Numan and former director of digital development at NHSX. "During the next year, we are likely to see more solutions that support, not only imaging, but also the quality of reporting,as well as the greater use of natural language processing.

"The combination of these technologies will help improve efficiency in health systems as they begin to recover from the pandemic," he said.

ON THE RECORD

"We think of HealthTensor as an AI-powered medical resident that is focused specifically on the tedious, data-driven aspects of medicine, which is what computers do best," said Eli Ben-Joseph, cofounder and CEO of HealthTensor.

"Many doctors are forced to spend a majority of their day focused on data aggregation from medical records, which leads to missed diagnoses, patient dissatisfaction and physician burnout. HealthTensor frees up the physician to focus on the conceptual and emotional aspects of medicine, which is what humans do best."

"HealthTensor makes doctors' lives easier and helps provide better patient care, ultimately generating revenue for hospitals, making it one of the rare startups that has massive global potential for both patients and healthcare providers," said Jason Schoettler, general partner at Calibrate Ventures.

Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.

Read more:

HealthTensor raises $5M for its AI-based medical diagnosis tools - Healthcare IT News

Artificial Intelligence Applications within Retail in 2020 – ReadWrite

Artificial intelligence and its applications have revolutionized the sectors that adopt them, pushing them forward in a new direction. Its application isn't limited to the start of product development; it continues through post-launch operations and customer interaction.

One of the sectors reaping the benefits of AI integration is the retail industry. However, many questions are still being asked, from which AI technology or application has proven most beneficial in retail to which innovations have the potential to change the retail game.

We need to keep in mind that artificial intelligence has not been perfected and is still in the experimental stage. Some results have proven positive and progressive, while others have been complete failures.

Having said this, AI startups raised around $1.8 billion from 2013 to 2018, according to CB Insights. These are impressive numbers, and much of the credit goes to Amazon, which changed the perspective on AI integration within retail.

In a nutshell: AI in retail can be described as a self-learning technology that, given adequate data, keeps improving processes through smart prediction and much more.

AI solutions are still growing and progressing. However, certain applications within retail have already proven fruitful, not just in terms of the value they provide as a service but also the benefits businesses reap afterward.

What are the top-of-the-line applications of AI in retail? Let's find out.

With digitization, much of the workload has been automated and streamlined. Now, with the COVID-19 wave making human contact a risk, cashier-less stores are an idea that is very much on the table. Lowering the number of human employees working in a store and replacing them with AI-powered robots is no longer just a concept from the movies.

Amazon is already on the case, introducing stores that are checkout-free. You may have heard about Amazon Go and its Just Walk Out technology, where the items placed in your trolley are examined and tracked, so when you simply walk out of the shop, your Amazon account is charged. Pretty interesting, right?

AI and IoT play a great role in creating this cashier-less store experience, relieving stores of expensive operating costs. With technology like Amazon Go, human staff can be reduced to merely six or so, depending on the size of the store.

The rise of chatbots was made possible by AI integration, which lets them converse in a human-like manner. Moreover, because they understand the query posed by a visitor, they can analyze it and provide adequate assistance accordingly.

Safe to say, AI chatbots have elevated customer service, handling searches, sending notifications, and suggesting relevant products all by themselves. These chatbots work wonders in retail, where most of the queries lined up are product-related questions. In addition, they learn a customer's buying behavior and suggest products that match their search and buying intent.

Chatbots are the present and future of retail, helping customers navigate online stores and increasing businesses' revenue in return.

Voice search is catching up, with 31% of smartphone users globally using it at least once a week, a figure projected to grow to 50% in 2020. With Alexa and similar assistants, customers can simply ask for the desired product without having to type or visually invest in the process.

Voice search is definitely one of the most-demanded features in any software solution, and software development companies are incorporating voice and text search to maximize convenience.

Visual search is a technology that is not yet widely familiar. This AI-powered system enables customers to upload images and find products similar to certain aspects of those images, such as color, shape, and even pattern.

AI coupled with image recognition technology can help significantly in the realm of retail. Imagine wanting a similar dress: you upload its picture and get suggestions of places selling the same item or something similar. You can then compare prices and go for the one that suits you best.
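
To make the idea concrete, here is a minimal sketch of how a visual-search backend might work, assuming a catalog of product photos and a pretrained ResNet-18 from torchvision as the feature extractor; both choices, and the in-memory catalog, are illustrative rather than a description of any retailer's actual system. Each catalog image is embedded once, and a customer's upload is ranked against those embeddings by cosine similarity.

# Minimal visual-search sketch: embed catalog images with a pretrained CNN,
# then rank them by cosine similarity to the query image.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep the 512-d embedding
backbone.eval()

@torch.no_grad()
def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return torch.nn.functional.normalize(backbone(img), dim=1).squeeze(0)

def most_similar(query_path, catalog, k=5):
    # catalog: dict mapping SKU -> embedding tensor (built offline with embed()).
    q = embed(query_path)
    scores = {sku: float(q @ vec) for sku, vec in catalog.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Example: catalog = {"sku-123": embed("dress_123.jpg")}; most_similar("upload.jpg", catalog)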

AI can also detect the mood of your customers and provide valuable feedback that allows your representatives to give assistance just in time. Take Walmart as an example. The retail giant has cameras installed at each checkout lane that detect customers' moods.

If a customer seems annoyed, staff can immediately approach and try to help. With AI and facial recognition technology, stores can build strong relationships with their customers and ensure their satisfaction.

AI in the retail supply chain can help retailers avoid the poor execution and management that lead to major losses. By analyzing data that includes sales history, promotions, location, trends, and various other metrics, AI allows retail stores to calculate the demand for a particular product and make better decisions about the future.

AI can predict the demand for a given product and let you order just the right amount, avoiding both leftover stock and shortages.
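
As a rough illustration of how such a forecast could be built, the sketch below trains a regression model on lagged weekly sales; the column names, lag choices, and model are assumptions made for the example rather than a recipe any particular retailer uses.

# Demand-forecasting sketch: predict next period's unit sales from recent history.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def train_demand_model(sales: pd.DataFrame) -> GradientBoostingRegressor:
    # Assumed columns: sku, week, units_sold, on_promotion.
    sales = sales.sort_values(["sku", "week"])
    sales["units_lag_1"] = sales.groupby("sku")["units_sold"].shift(1)   # last week
    sales["units_lag_4"] = sales.groupby("sku")["units_sold"].shift(4)   # a month ago
    sales = sales.dropna()

    features = ["units_lag_1", "units_lag_4", "on_promotion"]
    model = GradientBoostingRegressor()
    model.fit(sales[features], sales["units_sold"])
    return model

# model.predict(next_week_features) then becomes the basis for the order quantity.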

Since COVID-19 has made an online-first world a necessity, AI can also draw its predictions from data received through websites or mobile apps. Either way, the supply chain is managed effectively and processed systematically.

With machine learning, the retail industry can easily classify millions of items from various sellers into the right categories. For instance, a seller can upload a picture of their product, and machine learning will identify it and classify it accordingly.

Classification automates a mundane, time-consuming task, and with the help of AI it can be done in a few minutes.

What's more, with such smart classification, customers are able to find the right products under the categories of their choosing.
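
A bare-bones version of this kind of image-based categorization could look like the sketch below, which maps the labels of a pretrained ImageNet classifier onto a store's own categories; the mapping table and the choice of model are hypothetical, and a production system would be trained on the retailer's own taxonomy.

# Product-categorization sketch: classify a seller's photo, then map the
# predicted label to a (hypothetical) store category.
import torch
import torchvision.models as models
from PIL import Image

LABEL_TO_CATEGORY = {            # illustrative mapping, not a real taxonomy
    "running shoe": "Footwear",
    "backpack": "Bags & Luggage",
    "coffee mug": "Kitchen & Dining",
}

weights = models.ResNet50_Weights.DEFAULT
classifier = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

@torch.no_grad()
def categorize(image_path):
    logits = classifier(preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0))
    label = weights.meta["categories"][int(logits.argmax())]
    return LABEL_TO_CATEGORY.get(label, "Needs manual review")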

A survey of retail executives conducted by Capgemini at the AI in Retail Conference suggests that applying AI in retail could save the industry up to $340 billion each year through 2020, with nearly 80% of those savings coming from supply chain management and returns, processes AI is expected to improve by a large margin.

The global market for AI in retail is projected to grow to over $5 million by the year 2022.

Artificial intelligence and machine learning-powered software solutions can really change the game for retail, especially amid the pandemic. AI not only facilitates automation but also provides better insight into businesses through predictive analysis and reporting.

On the customer front, AI-powered chatbots and cashier-less stores provide convenience and a futuristic shopping experience with improved customer service.

Although the pandemic has slowed much of this progress, we can still see considerable growth in AI-powered solutions geared toward improving the retail industry and preparing it for the times ahead.

Zubair is a digital enthusiast who loves to write on various trends, including Tech, Software Development, AI, and Personal Development. He is a passionate blogger and loves to read and write. He currently works at Unique Software Development, a custom software development company in Dallas that offers top-notch software development services to clients across the globe.

Read more:

Artificial Intelligence Applications within Retail in 2020 - ReadWrite

Artificial Intelligence (AI) in Automotive – Market Share Analysis and Research Report by 2025 – CueReport

The latest update on the Artificial Intelligence (AI) in Automotive market is a comprehensive study enumerating the latest price trends and the pivotal drivers having a positive impact on the industry landscape. Further, the report covers the competitive terrain of this vertical, in addition to market share analysis and the contribution of prominent contenders to the overall industry.

- With the dynamically changing technology landscape in the automotive sector, an increasing number of automobile manufacturers are focusing on integrating semi-autonomous and fully-autonomous technologies into their vehicles

Artificial Intelligence (AI) in Automotive market is projected to surpass USD 12 billion by 2026. The market growth is attributed to the steadily growing uptake of driver assistance technologies for increasing driving comfort and ensuring a safe driving experience. Consumers are increasingly exhibiting a positive attitude toward AI-powered vehicle driving systems, creating new avenues for market growth. Automotive manufacturers are capitalizing on the steadily growing industry by introducing new features in their vehicles, including automated parking, lane assistance, driver behavior monitoring, and adaptive cruise control. For instance, in October 2019, Toyota announced the launch of level-4 driver assistance systems for enabling automated valet parking in its upcoming cars. The technology was developed in conjunction with Panasonic and is built with inexpensive sensors, offering affordable parking assistance solutions to Toyota's customers.

Request Sample Copy of this Report @ https://www.cuereport.com/request-sample/24675

- Machine learning solutions are witnessing a sustained rise in adoption, enabling AI systems to predict and decide driving patterns in dense traffic. With vastly improved neural network technologies, machine learning can achieve near human driving behavior without external assistance.

- Technology providers including NVIDIA, Intel, and AMD are continuously upgrading their solutions and offering energy-efficient hardware, enabling AI technologies with low power consumption

- Sophisticated onboard AI systems are providing real-time connectivity between vehicle & driver, enabling safe driving and reducing driver fatigue by suggesting resting periods & controlling car navigation during driver distraction

- The growing interest of government agencies in adopting autonomous mobility for reducing traffic accidents and improving traffic management is creating a positive outlook for the industry

- Some of the leading market players are Alphabet Inc., Audi AG, BMW AG, Daimler AG, Didi Chuxing, Ford Motor Company, General Motors Company, Harman International Industries, Inc., Honda Motor Co., Ltd., IBM Corporation, Intel Corporation, Microsoft Corporation, NVIDIA Corporation, Qualcomm Inc., Tesla, Inc., Toyota Motor Corporation, Uber Technologies, Inc., Volvo Car Corporation, and Xilinx Inc.

- AI platform providers are focusing on strategic collaboration and long-term contracts with automotive manufacturers to gain market share

The hardware segment held the majority of the market with over 60% share in 2019 and is expected to continue its dominance over the forecast timespan. This is attributed to the increasing adoption of automotive AI components for the implementation of AI solutions. Energy-efficient System-on-Chips (SoCs) and dedicated AI GPUs are helping enterprises deploy highly sophisticated onboard computers with robust computing power. In July 2019, Intel launched Pohoiki Beach, a new AI-enabled chip, which features 8 million neurons and can reach computing speeds up to 10,000 times faster than traditional CPUs. Furthermore, the growing uptake of sensors including high-resolution cameras, LiDARs, and ultrasonic sensors for vehicle situational awareness is fueling the growth of AI hardware.

The context awareness segment is anticipated to register impressive growth at a CAGR of over 35% from 2019 to 2026 due to the rapid proliferation of driver assistance solutions and semi-automated cruise control. Context awareness systems provide situational intelligence through multi-sensory input and enable onboard computers to detect and classify on-road entities including pedestrians, traffic, and road infrastructure. Customers are reaping the benefits of context-awareness systems through effective navigation assistance, which enables safe driving even during driver distraction. Major technology companies are investing in innovative automotive technologies including context awareness. For instance, in November 2016, Intel announced an investment of USD 250 million in autonomous driving technology. This investment was focused on key technologies such as context awareness, deep learning, security, and connectivity.

The image/signal recognition segment held the majority of the market with over 65% share in 2019 due to the growing importance of vehicle speed control for reducing on-road accidents. Image/signal recognition technologies can detect traffic signs and speed-limit indicators and reduce the vehicle speed accordingly without human intervention. The technology is also expected to grow significantly as several government initiatives promote traffic sign recognition to ensure adherence to speed limits. In March 2019, the European Commission made it mandatory for all vehicles manufactured from 2022 onward to have built-in image/signal recognition capabilities. This is expected to reduce rash driving and over-speeding and promote on-road safety.

The semi-autonomous vehicles segment will grow at an impressive CAGR of over 38% through 2026 due to the extensive demand for Advanced Driver Assistance Systems (ADAS) and their ability to facilitate driving in heavy-traffic scenarios. Semi-autonomous technologies have already been commercialized and are expected to gain significant market proliferation over the forecast timespan. Major automotive manufacturers, such as Chrysler, Audi, and Ford, have started integrating semi-autopilot and cruise control technologies into their latest models. Driver behavior monitoring, road condition awareness, and lane tracking are a few of the innovative solutions that have been introduced through the implementation of AI technologies in semi-autonomous vehicles. Furthermore, supporting initiatives from various governments to incorporate semi-autonomous vehicle technologies by 2022 will positively impact industry growth.

Europe held the majority of the market with over 35% share in 2019 due to the growing demand for autonomous technologies in the region. The presence of several industry leaders, including BMW, Audi, Mercedes, Daimler, and Bentley, has accelerated advancements in autonomous mobility, including several successful trial runs of level-5 autonomous vehicles. The increasing focus of automotive manufacturers on AI technologies, especially in Germany and the UK, is driving the adoption of AI across the European automotive sector. Supportive government initiatives to adopt AI for smart traffic control have propelled the development of automotive AI solutions. In 2017, the UK government invested more than USD 75 million in the development of AI solutions and improved mobility.

Companies operating in AI in automotive market are focusing on various business growth strategies including investments in autonomous mobility solutions, strengthening partner network, and expanding R&D activities. Through such strategic moves, companies are trying to gain a broader market share and maintain their leadership in the market. For instance, in September 2019, Daimler partnered with Torc Robotics, an automated mobility firm, to design and develop level-4 autonomous trucks. Under the partnership, the companies are jointly testing autonomous trucks in the U.S. and focusing on evolving automated driving for heavy-duty vehicles.

Major highlights from the table of contents are listed below for a quick look into the Artificial Intelligence (AI) in Automotive Market report:

Chapter 1. Methodology and Scope

Chapter 2. Executive Summary

Chapter 3. Artificial Intelligence (AI) in Automotive Industry Insights

Chapter 4. Company Profiles

Request Customization on This Report @ https://www.cuereport.com/request-for-customization/24675

See the rest here:

Artificial Intelligence (AI) in Automotive - Market Share Analysis and Research Report by 2025 - CueReport

Facebook researchers shut down AI bots that started speaking in a language unintelligible to humans – Firstpost

Days after Tesla CEO Elon Musk said that artificial intelligence (AI) was the biggest risk, Facebook has shut down one of its AI systems after chatbots started speaking in their own language, which used English words but could not be understood by humans. According to a report in Tech Times on Sunday, the social media giant had to pull the plug on the AI system that its researchers were working on "because things got out of hand". The trouble was, while the bots were rewarded for negotiating with each other, they were not rewarded for negotiating in English, which led the bots to develop a language of their own.

Facebook founder Mark Zuckerberg.

"The AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created," the report noted. Initially the AI agents used English to converse with each other but they later created a new language that only AI systems could understand, thus, defying their purpose. This led Facebook researchers to shut down the AI systems and then force them to speak to each other only in English.

In June, researchers from the Facebook AI Research Lab (FAIR) found that while they were busy trying to improve chatbots, the "dialogue agents" were creating their own language. Soon, the bots began to deviate from the scripted norms and started communicating in an entirely new language which they created without human input, media reports said. Using machine learning algorithms, the "dialogue agents" were left to converse freely in an attempt to strengthen their conversational skills.

The researchers also found these bots to be "incredibly crafty negotiators". "After learning to negotiate, the bots relied on machine learning and advanced strategies in an attempt to improve the outcome of these negotiations," the report said. "Over time, the bots became quite skilled at it and even began feigning interest in one item in order to 'sacrifice' it at a later stage in the negotiation as a faux compromise," it added.

Although this appears to be a huge leap for AI, several experts including Professor Stephen Hawking have raised fears that humans, who are limited by slow biological evolution, could be superseded by AI. Others like Tesla's Elon Musk, philanthropist Bill Gates and Apple co-founder Steve Wozniak have also expressed their concerns about where AI technology is heading. Interestingly, this incident took place just days after a verbal spat between the Facebook CEO and Musk, who exchanged harsh words in a debate on the future of AI.

"I've talked to Mark about this (AI). His understanding of the subject is limited," Musk tweeted last week.The tweet came after Zuckerberg, during a Facebook livestream earlier this month, castigated Musk for arguing that care and regulation was needed to safeguard the future if AI becomes mainstream. "I think people who are naysayers and try to drum up these doomsday scenarios -- I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible," Zuckerberg said.

Musk has been speaking frequently on AI and has called its progress the "biggest risk we face as a civilisation". "AI is a rare case where we need to be proactive in regulation instead of reactive because if we're reactive in AI regulation it's too late," he said.

With inputs from IANS

Visit link:

Facebook researchers shut down AI bots that started speaking in a language unintelligible to humans - Firstpost

Peering inside an AI’s brain will help us trust its decisions – New Scientist

Is it a horse?

Weegee (Arthur Fellig)/International Center of Photography/Getty

By Matt Reynolds

Oi, AI, what do you think you're looking at? Understanding why machine learning algorithms can be tricked into seeing things that aren't there is becoming more important with the advent of things like driverless cars. Now we can glimpse inside the mind of a machine, thanks to a test that reveals which parts of an image an AI is looking at.

Artificial intelligences don't make decisions in the same way that humans do. Even the best image recognition algorithms can be tricked into seeing a robin or cheetah in images that are just white noise, for example.

It's a big problem, says Chris Grimm at Brown University in Providence, Rhode Island. If we don't understand why these systems make silly mistakes, we should think twice about trusting them with our lives in things like driverless cars, he says.

So Grimm and his colleagues created a system that analyses an AI to show which part of an image it is focusing on when it decides what the image is depicting. Similarly, for a document-sorting algorithm, the system highlights which words the algorithm used to decide which category a particular document should belong to.

It's really useful to be able to look at an AI and find out how it's learning, says Dumitru Erhan, a researcher at Google. Grimm's tool provides a handy way for a human to double-check that an algorithm is coming up with the right answer for the right reasons, he says.

To create his attention-mapping tool, Grimm wrapped a second AI around the one he wanted to test. This wrapper AI replaced part of an image with white noise to see if that made a difference to the original software's decision.

If replacing part of an image changed the decision, then that area of the image was likely to be an important area for decision-making. The same applied to words: if changing a word in a document makes the AI classify the document differently, it suggests that word was key to the AI's decision.
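
The occlusion idea is simple enough to sketch in a few lines. The code below slides a white-noise patch across an image and records how much the classifier's confidence in its original answer drops at each position; large drops mark the regions the model relies on. The predict interface is an assumption, and this is a generic illustration of the technique rather than Grimm's actual implementation.

# Occlusion-based attention map: replace patches with noise and measure the
# drop in the classifier's confidence for its original prediction.
import numpy as np

def occlusion_map(image, predict, target_class, patch=16):
    # image: HxWxC float array in [0, 1]; predict: fn(batch) -> class probabilities.
    h, w, c = image.shape
    base_prob = predict(image[None])[0, target_class]
    heatmap = np.zeros((h // patch, w // patch))

    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = np.random.rand(patch, patch, c)  # white noise
            prob = predict(occluded[None])[0, target_class]
            heatmap[i // patch, j // patch] = base_prob - prob  # big drop = important region
    return heatmap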

Grimm tested his technique on an AI trained to sort images into one of 10 categories, including planes, birds, deer and horses. His system mapped where the AI was looking when it made its categorisation. The results suggested that the AI had taught itself to break down objects into different elements and then search for each of those elements in an image to confirm its decision.

For example, when looking at images of horses, Grimm's analysis showed that the AI first paid close attention to the legs and then searched the image for where it thought a head might be, anticipating that the horse may be facing in different directions. The AI took a similar approach with images containing deer, but in those cases it specifically searched for antlers. The AI almost completely ignored parts of an image that it decided didn't contain information that would help with categorisation.

Grimm and his colleagues also analysed an AI trained to play the video game Pong. They found that it ignored almost all of the screen and instead paid close attention to the two narrow columns along which the paddles moved. The AI paid so little attention to some areas that moving the paddle away from its expected location fooled it into thinking it was looking at the ball and not the paddle.

Grimm thinks that his tool could help people work out how AIs make their decisions. For example, it could be used to look at algorithms that detect cancer cells in lung scans, making sure that they don't accidentally come up with the right answers by looking at the wrong bit of the image. You could see if it's not paying attention to the right things, he says.

But first Grimm wants to use his tool to help AIs learn. By telling when an AI is not paying attention, it would let AI trainers direct their software towards relevant bits of information.

Reference: arXiv, arxiv.org/abs/1706.00536

The rest is here:

Peering inside an AI's brain will help us trust its decisions - New Scientist

Why So Many Companies Are Using AI To Search Google – Tech.Co

Artificial intelligence (A.I.) is here to stay. The genie is out of the bottle, so to speak, and that is mostly a good thing. Bill Gates has even called it the holy grail of technological advancement.

But while headlines focus on the science fiction aspects of what A.I. could do if it ever went rogue and rave about its high-profile applications, the technology is quietly changing much of the world's economic landscape without any notice. And I'm not referring to sleek consumer-facing apps that do cool tricks like write your emails or remind you about birthdays.

A.I. has become an incredibly viable technology in a range of industries performing functions formerly done by highly specialized and well-educated people. The biggest competitive advantage of A.I.? Well, it could be that it will read beyond the first page of Google search results.

The problem with the internet today is that it is too big, which became a very real state of affairs last year when ICANN announced it had run out of unique IP addresses under its existing protocol. Businesses that use Google to find vital information about markets and business dealings face a near impossible task of weeding through billions of websites and web pages that contain similar but ultimately useless information.

But a properly configured A.I. program can use Google to do that research and provide only the most valuable information to decision makers. Companies spend huge sums of money on research, says Jeff Curie, President of artificial intelligence company Bitvore. But despite hiring the very best and brightest, those experts are limited to using Google and setting up news alerts to stay informed. The internet is just too big for a person with a search engine to find the most important information.

Human nature being what it is, most of us do not have the discipline to search for the proverbial needle in the haystack. Research has suggested that 95 percent of Google users never look beyond the first page of results, and even on subsequent pages the top link is the most clicked on by a wide margin, meaning that attention span wanes even as we scroll down the page.

The fundamental advantages of A.I. are its ability to assess huge volumes of information almost instantly and its inability to get lazy or tired. Those are also the largest challenges that human researchers face. As a result, A.I. is increasingly being leveraged to perform tasks like research and it is getting more sophisticated all the time.

A.I. doing research may sound ridiculous, but the process is quite logical. All that it needs to do is search for keywords and phrases, flag them based on relevance, and deliver a curated set of data to a human expert for a final review. Many companies employ hundreds of people to compile that information on a daily basis. A.I. may lack the human judgment ability required to make decisions about that data, but it can most certainly corral it.
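
One simple way to implement that "flag by relevance, deliver a curated set" step is plain TF-IDF ranking, sketched below; the scoring method, keyword list, and cutoff are illustrative assumptions rather than a description of how any particular vendor's system works.

# Curation sketch: score fetched pages against a keyword set and keep the
# top few for a human expert's final review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def curate(pages, keywords, top_k=10):
    # pages: list of page texts; returns (page index, relevance score) pairs.
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_matrix = vectorizer.fit_transform(pages)
    query_vec = vectorizer.transform([" ".join(keywords)])
    scores = cosine_similarity(query_vec, doc_matrix).ravel()
    ranked = sorted(enumerate(scores), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

# curate(fetched_pages, ["acquisition", "merger", "supply shortage"])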

This seemingly simple application of A.I. may actually have enormous effects on the global economy, far larger than the newest virtual office assistant.

Companies that rely on having the most relevant and up-to-date information as their strategic advantage benefit greatly from having that information before their competitors. If a researcher takes two hours to find a news alert, that is two hours that competitors may have had to leverage that information to their advantage. A.I. can work constantly, 24 hours every day. That means it is capable of alerting decision makers about events taking place the moment they happen, not two hours later.

In industries where knowledge is power, the new standard is A.I., says Curie. An A.I. program can outperform the best researchers in the world, and it is already doing that today for many of the worlds largest companies.

Research may not be the most visible application of A.I., but the most disruptive applications of this technology will likely be behind the scenes, not unveiled at major trade shows. The economic effects will be enormous and largely invisible.

The rest is here:

Why So Many Companies Are Using AI To Search Google - Tech.Co

2020-2025 Worldwide 5G, Artificial Intelligence, Data Analytics, and IoT Convergence: Embedded AI Software and Systems in Support of IoT Will Surpass…

The "5G, Artificial Intelligence, Data Analytics, and IoT Convergence: The 5G and AIoT Market for Solutions, Applications and Services 2020 - 2025" report has been added to ResearchAndMarkets.com's offering.

This research evaluates applications and services associated with the convergence of AI and IoT (AIoT) with data analytics and emerging 5G networks. The AIoT market constitutes solutions, applications, and services involving AI in IoT systems and IoT support of various AI facilitated use cases.

This research assesses the major players, strategies, solutions, and services. It also provides forecasts for 5G and AIoT solutions, applications and services from 2020 through 2025.

Report Findings:

The combination of Artificial Intelligence (AI) and the Internet of Things (IoT) has the potential to dramatically accelerate the benefits of digital transformation for consumer, enterprise, industrial, and government market segments. The author sees the Artificial Intelligence of Things (AIoT) as transformational for both technologies as AI adds value to IoT through machine learning and decision making and IoT adds value to AI through connectivity and data exchange.

With AIoT, AI is embedded into infrastructure components, such as programs, chipsets, and edge computing, all interconnected with IoT networks. APIs are then used to extend interoperability between components at the device level, software level, and platform level. These units will focus primarily on optimizing system and network operations as well as extracting value from data.

It is important to recognize that intelligence within the IoT technology market is not inherent but rather must be carefully planned. AIoT market elements will be found embedded within software programs, chipsets, and platforms as well as human-facing devices such as appliances, which may rely upon a combination of local and cloud-based intelligence.

Just like the human nervous system, IoT networks will have both autonomic and cognitive functional components that provide intelligent control as well as nerve end-points that act like nerve endings for neural transport (detection and triggering of communications) and nerve channels that connect the overall system. The big difference is that the IoT technology market will benefit from engineering design in terms of AI and cognitive computing placement in both centralized and edge computing locations.

Taking the convergence of AI and IoT one step further, the publisher coined the term AIoT5G to refer to the convergence of AI, IoT, 5G. The convergence of these technologies will attract innovation that will create further advancements in various industry verticals and other technologies such as robotics and virtual reality.

As IoT networks proliferate throughout every major industry vertical, there will be an increasingly large amount of unstructured machine data. The growing amount of human-oriented and machine-generated data will drive substantial opportunities for AI support of unstructured data analytics solutions. Data generated from IoT supported systems will become extremely valuable, both for internal corporate needs as well as for many customer-facing functions such as product life cycle management.

There will be a positive feedback loop created and sustained by leveraging the interdependent capabilities of AIoT5G. AI will work in conjunction with IoT to substantially improve smart city supply chains. Metropolitan area supply chains represent complex systems of organizations, people, activities, information, and resources involved in moving a product or service from supplier to customer.

Research Benefits

Key Topics Covered

1. Executive Summary

2. Introduction

3. AIoT Technology and Market

4. AIoT Applications Analysis

5. Analysis of Important AIoT Companies

6. AIoT Market Analysis and Forecasts 2020-2025

7. Conclusions and Recommendations

Artificial Intelligence in Big Data Analytics and IoT: Market for Data Capture, Information and Decision Support Services 2020-2025

1. Executive Summary

2. Introduction

3. Overview

4. AI Technology in Big Data and IoT

5. AI Technology Application and Use Case

6. AI Technology Impact on Vertical Market

7. AI Predictive Analytics in Vertical Industry

8. Company Analysis

9. AI in Big Data and IoT Market Analysis and Forecasts 2020-2025

10. Conclusions and Recommendations

11. Appendix

5G Applications and Services Market by Service Provider Type, Connection Type, Deployment Type, Use Cases, 5G Service Category, Computing as a Service, and Industry Verticals 2020-2025

1. Executive Summary

2. Introduction

3. LTE and 5G Technology and Capabilities Overview

4. LTE and 5G Technology and Business Dynamics

5. Company Analysis

6. LTE and 5G Application Market Analysis and Forecasts

7. Conclusions and Recommendations

Companies Mentioned

For more information about this report visit https://www.researchandmarkets.com/r/rigm8o

View source version on businesswire.com: https://www.businesswire.com/news/home/20200207005390/en/

Contacts

ResearchAndMarkets.com, Laura Wood, Senior Press Manager, press@researchandmarkets.com. For E.S.T. office hours, call 1-917-300-0470. For U.S./CAN toll free, call 1-800-526-8630. For GMT office hours, call +353-1-416-8900.

More here:

2020-2025 Worldwide 5G, Artificial Intelligence, Data Analytics, and IoT Convergence: Embedded AI Software and Systems in Support of IoT Will Surpass...

No bots need apply: Microtargeting employment ads in the age of AI – HR Dive

Keith E. Sonderling is a commissioner for the U.S. Equal Employment Opportunity Commission. Views are the author's own.

It's no secret that online advertising is big business. In 2019, digital ad spending in the United States surpassed traditional ad spending for the first time, and by 2023, digital ad spending will all but eclipse it.

It's easy to understand why. Seventy-two percent of Americans use social media, and nearly half of millennials and Gen Z report being online "almost constantly." An overwhelming majority of Americans under 40 dislike and distrust traditional advertising. Digital marketing is now the most effective way for advertisers to reach an enormous segment of the population, and social media platforms have capitalized on this to the tune of billions of dollars. In 2020, digital advertising accounted for 98% of Facebook's $86 billion revenue, more than 80% of Twitter's $3.7 billion revenue, and nearly 100% of Snapchat's $2.5 billion revenue.

But clickbait alone will not guarantee that advertisers and social media platforms continue cashing in on digital marketing. For these cutting-edge marketing technologies to be sustainable in job-related advertising, they must be designed and utilized in strict compliance with longstanding civil rights laws that prohibit discriminatory marketing practices. When these laws were passed in 1964, advertising more closely resembled the TV world of Darrin Stephens and Don Draper than the current world of social media influencers and "internet famous" celebrities. Yet federal antidiscrimination laws are just as relevant to digital marketing as they were to traditional forms of advertising.

One of the reasons advertisers are willing to spend big on digital marketing is the ability to "microtarget" consumers. Online platforms are not simply selling ad space; they are selling access to consumer information culled and correlated through the use of proprietary artificial intelligence algorithms. These algorithms can connect countless data points about individual consumers, from demographic details to browsing history, to make predictions. These predictions can include what each individual is most likely to buy, when they are most likely to buy it, how much they are willing to pay, and even what type of ads they are most likely to click.

So, suppose I have a history of ordering pizza online every Thursday at about 7 pm. In that case, digital advertisers might start bombarding me with local pizzeria ads every Thursday as I approach dinnertime. Savvy advertisers might even rely on a platform's AI-enabled advertising tools to offer customized coupons to entice me to choose them over competitors.

But microtargeting ads to an audience is one thing when you are trying to sell local takeout food. It is quite another when you are advertising employment opportunities. Facebook found this out the hard way when, in March 2019, it settled several lawsuits brought by civil rights groups and private litigants arising from allegations that the social media giant's advertising platform enabled companies to exclude people from the audience for employment ads based on protected characteristics.

According to one complaint filed in the Northern District of California, advertisers could customize their audiences simply by ticking off boxes next to a list of characteristics. Employers could check an "include" box next to preferred characteristics or an "exclude" box next to disfavored characteristics, including race, sex, religion, age, and national origin. Shortly after the complaint was filed, Facebook announced that it would be disabling a number of its advertising features until the company could conduct a full review of how exclusion targeting was being used. As part of its settlement of the case, Facebook pledged to establish a separate advertising portal with limited targeting options for employment ads.

To be clear, demographics matter in advertising and relying on demographic information is not necessarily problematic from a legal perspective. Think for a moment about Super Bowl ads. Advertisers have historically paid enormous sums for air time during the game not only because of the size of the audience but because of the money that members of that particular audience are willing to spend on things like lite beer, fast food, and SUVs. Super Bowl advertisers make projections about who will be tuning in to the game and what sorts of products they are more or less likely to buy. They target a general audience in the knowledge that ads for McDonald's Value Meals and Domino's Pizza will reach viewers who are munching on Cheetos and nibbling on kale chips alike.

But AI-enabled advertising is different. Instead of creating ads for general audiences, online advertisers can create specific audiences for their ads. This type of "microtargeting" has significant implications under federal civil rights law, which prohibits employment discrimination based on race, color, religion, sex, national origin, age, disability, pregnancy, or genetic information. These protections extend to the hiring process. So, a law firm that is looking to hire attorneys can build a target audience consisting exclusively of people with Juris Doctorate degrees because education level is not, in itself, a protected class under federal civil rights law. However, that same employer cannot create a target audience for its employment ads that consists only of JDs of one race because race is a protected class under federal civil rights law.

From a practical standpoint, exclusions of the sort that Facebook's advertising program allegedly enabled are the high-tech equivalent of the notorious pre-Civil-Rights-Era "No Irish Need Apply" signs. From a legal standpoint, they are even worse. These sorts of microtargeted exclusions would withhold the very existence of job opportunities from members of protected classes for the sole reason of their membership in a protected class, leaving them unable to exercise their rights under federal antidiscrimination law. After all, you cannot sue over exclusion from a job opportunity if you do not know that the possibility existed in the first place. Thus, online platforms and advertisers alike may find themselves on the hook for discriminatory advertising practices.

At the same time, one of the most promising aspects of AI is its capacity to minimize the role of human bias in decision-making. Numerous studies show that the application screening process is particularly vulnerable to bias on the part of hiring professionals. For example, African Americans and Asian Americans who "whitened" their resumes by deleting references to their race received more callbacks than identical applications that included racial references. And hiring managers have proven more likely to favor resumes featuring male names over female names even though the resumes are otherwise identical.

Often, HR executives do not become aware that screeners and recruiters engage in discriminatory conduct until it is too late. But AI can help eliminate bias from the earliest stages of the hiring process. An AI-enabled resume-screening program can be programmed to disregard variables that have no bearing on job performance, such as applicants' names. An applicant's name can signal, correctly or incorrectly, variables that usually have nothing to do with the applicant's job qualifications, such as the applicant's sex, national origin, or race. Similarly, an AI-enabled bot that conducts preliminary screening interviews can be engineered to disregard factors such as age, sex, race, disability and pregnancy. It can even disregard variables that might merely suggest a candidate's membership in a protected class, including foreign or regional accents, speech impairments and vocal timbre.
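
As a toy illustration of the "disregard irrelevant variables" approach, the sketch below strips protected attributes and obvious proxies such as names before a screening model ever sees the data. The column names and model are hypothetical, and dropping columns alone does not remove bias carried by correlated features, so in practice this would be one safeguard among several rather than a complete solution.

# Resume-screening sketch: remove protected attributes and name proxies
# before training; remaining features are assumed to be numeric, job-relevant
# signals (years of experience, skill assessments, and so on).
import pandas as pd
from sklearn.linear_model import LogisticRegression

PROTECTED_OR_PROXY = ["name", "age", "sex", "race", "national_origin",
                      "disability", "pregnancy"]

def train_screener(applications: pd.DataFrame, outcome: pd.Series) -> LogisticRegression:
    features = applications.drop(columns=PROTECTED_OR_PROXY, errors="ignore")
    model = LogisticRegression(max_iter=1000)
    model.fit(features, outcome)  # outcome: past advance-to-interview decisions
    return model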

I believe that we can and we must realize the full potential of AI to enhance human decision-making in full compliance with the law. But that does not mean that AI will supplant human beings any time soon. AI has the potential to make the workplace more fair and inclusive by eliminating any actual bias on the part of resume screeners or interviewers. However, this can only happen if the people who design the advertising platforms and the marketers who pay to use them are vigilant about the limitations of AI algorithms and mindful of the legal and ethical obligations that bind us all.

Original post:

No bots need apply: Microtargeting employment ads in the age of AI - HR Dive