Sports Organizations Using Machine Learning Technology to Drive Sponsorship Revenues – Sports Illustrated

The sports industry has placed a growing emphasis on data capture and the use of analytics over the past decade, particularly as it relates to on-field performance. But while sports has become big business, Adam Grossman (founder of Block Six Analytics, aka B6A) suggests that from an economic and financial perspective - in terms of understanding concepts like asset valuation, cash flow and regression - it remains behind the times. To help bring the industry up to speed, Grossman developed a sponsorship evaluation platform that values sports assets in the same manner that venture capitalists, private equity firms and investment banks evaluate investment opportunities. Using machine learning technology (think: natural language processing, computer vision), B6A's proprietary sponsorship model translates traditional fit and engagement benchmarks into probabilistic revenue growth metrics. Over the last 10 months, more than a dozen pro sports organizations have begun using Block Six technology - as opposed to relying on antiquated metrics like CPM - to drive sponsorship revenues.

Howie Long-Short: Sellers of sports sponsorships naturally seek brand partners that are demographically aligned. While most teams and media entities have historically managed to gather insights on their own organizations, the challenge has always been capturing that of potential partners: the demographic data needed to ensure audience alignment, so that both parties can achieve their goals. Grossman explained that those on the sales side use the insights B6A provides to find new sponsors and to demonstrate their audience is a good fit for [a particular] brand. It should be noted that while we're focused on rights holders, B6A also works with corporate partners investing in sports; typically, Fortune 500 companies that use the software to ensure they're spending their marketing dollars efficiently.

Detailed knowledge about one's own audience can also be beneficial from an engagement perspective. Grossman explained that sports organizations have historically struggled to translate brand metrics into revenue metrics, but if [a seller] can prove that they have the right audience [for a buyer], that the audience is interested in the [prospective partner's] company and in their product(s), and that the seller will publish content that drives engagement and awareness [for the buyer] within the target demo, [they can say with a level of confidence that they are] maximizing the probability of increasing revenues. Statistically speaking (at least according to the way B6A measures lift in brand perception), there is significant correlation between engagement, sentiment, awareness of a brand and revenue growth.
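B6A's model is proprietary, so its exact method isn't public, but the kind of engagement-to-revenue correlation described above can be illustrated with a plain Pearson calculation. Everything below - the function and the quarterly figures - is hypothetical and for illustration only.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical quarterly figures: an engagement index vs. revenue growth (%).
engagement = [1.2, 1.5, 1.1, 1.9, 2.3, 2.1, 2.8, 3.0]
revenue_growth = [2.0, 2.6, 1.8, 3.1, 3.9, 3.5, 4.4, 4.8]
r = pearson(engagement, revenue_growth)
print(round(r, 3))
```

A coefficient near 1.0 means the two series move together; B6A's claim is essentially that, measured its way, this number is significantly positive for engagement, sentiment and awareness versus revenue growth.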

Block Six was kind enough to run a complimentary analysis on thousands of posts attributed to followers of JohnWallStreet's Twitter account to demonstrate how the platform's findings could be used. The report they turned over indicated that even in comparison to the golf companies and brands like Amazon and Apple, [JWS] disproportionately reaches a more educated and higher-income audience; in fact, from an education perspective, JWS has the most educated following [analyzed to date]. While we know that a significant number of league commissioners, team owners and C-level team/league, media and agency executives read the newsletter daily, from an aggregate perspective, the data shows that JWS content is reaching a much wider range of senior leaders across the business world. That's particularly valuable information to have as we continue our search for the right title sponsor. To date, JWS sales efforts have been focused on service companies that seek to reach sports' most influential decision makers, but the data borne out of the B6A study shows that any business targeting highly educated, high-income earners should be pursued.

Taking it a step further, the psychographic observations gained reflect that technology and gambling are two topics the JWS audience is particularly interested in. To date, JWS has not targeted brands in either field (technology due to a lack of time/resources, gambling because we incorrectly assumed they would be solely focused on consumer acquisition), but Grossman suggests that we should be, as the data indicates businesses within those two sectors are natural advertisers for the brand.

Machine Learning Answers: Sprint Stock Is Down 15% Over The Last Quarter, What Are The Chances It’ll Rebound? – Trefis

Sprint (NYSE:S) stock has seen significant volatility over recent months, declining by about 15% over the last quarter and by close to 25% over the last six months, on account of the company's underperforming postpaid wireless business and concerns over whether its proposed merger with larger rival T-Mobile will come to fruition.

We started with a simple question that investors could be asking about Sprint stock: given a certain drop or rise, say a 5% drop in a week, what should we expect for the next week? Is it very likely that Sprint will recover the next week? What about the next month, or a quarter?

In fact, we found that if Sprint stock drops 15% in a quarter (63 trading days), there is a ~23% chance that it will rise by 10% over the subsequent month (21 trading days). Want to try other combinations? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate, if Sprint stock dropped, what's the chance it'll rise.

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over, say, the next month are about 34%. Quite significant, and helpful to know for someone trying to recover from a loss.
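Trefis hasn't published the internals of its engine, but conditional probabilities of this kind can be estimated from a price history with a simple frequency count over rolling windows. The sketch below is an illustration only: the function name is ours, and it runs on a synthetic random-walk series rather than real Sprint data.

```python
import random

def conditional_move_probability(prices, cond_days, cond_move,
                                 horizon_days, target_move):
    """Estimate P(return over the next `horizon_days` >= `target_move`,
    given the return over the prior `cond_days` <= `cond_move`).
    Express drops as negative thresholds."""
    hits = total = 0
    for t in range(cond_days, len(prices) - horizon_days):
        past = prices[t] / prices[t - cond_days] - 1.0
        if past <= cond_move:          # conditioning event, e.g. a >=5% weekly drop
            total += 1
            future = prices[t + horizon_days] / prices[t] - 1.0
            if future >= target_move:  # target event, e.g. a >=5% rise
                hits += 1
    return hits / total if total else float("nan")

# Demo on a synthetic random-walk price history (not real market data).
random.seed(0)
prices = [100.0]
for _ in range(2500):
    prices.append(prices[-1] * (1.0 + random.gauss(0.0, 0.02)))

# Chance of a >=5% rise in the month after a >=5% weekly drop.
p = conditional_move_probability(prices, cond_days=5, cond_move=-0.05,
                                 horizon_days=21, target_move=0.05)
print(round(p, 3))
```

Swap in a real price series and adjust the window and threshold arguments to reproduce any of the drop/rise scenarios discussed below.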

Knowing what to expect for almost any scenario is powerful. It can help you avoid rash moves. Given the recent volatility in the market, owing to a mix of macroeconomic events like the trade war with China and the US Federal Reserve's moves, we think investors can prepare better.

Below, we discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in Sprint stock become more likely after a drop?

Answer:

Consider two situations:

Case 1: Sprint stock drops by 5% or more in a week

Case 2: Sprint stock rises by 5% or more in a week

Is the chance of, say, a 5% rise in Sprint stock over the subsequent month much higher after Case 1 than after Case 2, or vice versa?

The answer is: not really. The chance of a 5% rise over a month (21 trading days) is roughly the same, at about 34%, for both cases.

Question 2: What about the other way around, does a drop in Sprint stock become more likely after a rise?

Answer:

Consider, once again, two cases:

Case 1: Sprint stock drops by 5% in a week

Case 2: Sprint stock rises by 5% in a week

The probability of a 5% drop after Case 1 or Case 2 is actually quite similar, at 34% and 33%, respectively. The probability is also similar for the S&P 500, and for many other stocks.

Question 3: Does patience pay?

Answer:

If you buy and hold Sprint stock, the expectation is that over time the near-term fluctuations will cancel out and the long-term positive trend will favor you - at least if the company is otherwise strong. Overall, according to data and the Trefis machine learning engine's calculations, patience absolutely pays for most stocks!

After a drop of 5% in Sprint stock over a week (5 trading days), while there is only about a 23% chance the stock will gain 5% over the subsequent week, there is a more than 39% chance this will happen within 6 months, and a 45% chance it'll gain 5% over a year (about 252 trading days).

The table below shows the trend for Sprint Stock:

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After seeing a rise of 5% over 5 days, the chances of a 5% drop in Sprint stock are about 42% over the subsequent quarter of waiting (63 trading days). This chance increases slightly to about 45% when the waiting period is a year (252 trading days).

The table below shows the trend for Sprint Stock:

2010 – 2019: The rise of deep learning – The Next Web

No other technology was more important over the past decade than artificial intelligence. Stanford's Andrew Ng called it the new electricity, and both Microsoft and Google changed their business strategies to become AI-first companies. In the next decade, all technology will be considered AI technology. And we can thank deep learning for that.

Deep learning is a friendly facet of machine learning that lets AI sort through data and information in a manner that emulates the human brain's neural network. Rather than simply running algorithms to completion, deep learning lets us tweak the parameters of a learning system until it outputs the results we desire.

The 2018 Turing Award, announced in 2019 and given for excellence in artificial intelligence research, went to three of deep learning's most influential architects: Facebook's Yann LeCun, Google's Geoffrey Hinton, and the University of Montreal's Yoshua Bengio. This trio, along with many others over the past decade, developed the algorithms, systems, and techniques responsible for the onslaught of AI-powered products and services that are probably dominating your holiday shopping lists.

Deep learning powers your phone's face unlock feature, and it's the reason Alexa and Siri understand your voice. It's what makes Microsoft Translator and Google Maps work. If it weren't for deep learning, Spotify and Netflix would have no clue what you want to hear or watch next.

How does it work? It's actually simpler than you might think. The machine uses algorithms to shake out answers like a series of sifters. You put a bunch of data in one side, it falls through sifters (abstraction layers) that pull specific information from it, and the machine outputs what's basically a curated insight. A lot of this happens in what's called the black box, a place where the algorithm crunches numbers in a way that we can't explain with simple math. But since the results can be tuned to our liking, it usually doesn't matter whether we can show our work when it comes to deep learning.
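To make the sifter metaphor concrete, here is a minimal forward pass through a few stacked layers in plain Python. It is a sketch of the idea only - real networks learn their weights from data rather than drawing them at random.

```python
import random

def dense_relu_layer(inputs, weights, biases):
    """One 'sifter': re-mix the inputs with weights, then apply a
    nonlinearity so each layer keeps some patterns and discards others."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, z))  # ReLU: negative evidence is sifted out
    return outputs

random.seed(1)
sizes = [8, 16, 4, 1]  # raw data -> two abstraction layers -> one insight
layers = [
    ([[random.gauss(0.0, 0.5) for _ in range(n_in)] for _ in range(n_out)],
     [0.0] * n_out)
    for n_in, n_out in zip(sizes, sizes[1:])
]

x = [random.random() for _ in range(sizes[0])]  # a bunch of data in one side
for weights, biases in layers:
    x = dense_relu_layer(x, weights, biases)    # falls through each sifter
print(len(x))  # a single curated output
```

Training a real network amounts to nudging those weights, via backpropagation, until the final output matches the answers we want.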

Deep learning, like all artificial intelligence technology, isn't new. The term was brought to prominence in the 1980s by computer scientists. By 1986, a team of researchers including Geoffrey Hinton had come up with a backpropagation-based training method that hinted at what artificial neural networks could become. Just a few years later, a young Yann LeCun would train an AI to recognize handwritten characters using similar techniques.

But, as those of us over 30 can attest, Siri and Alexa weren't around in the late 1980s, and we didn't have Google Photos there to touch up our 35mm Kodak prints. Deep learning, in the useful sense we know it now, was still a long way off. Eventually, though, the next generation of AI superstars came along and put their mark on the field.

In 2009, at the beginning of the modern deep learning era, Stanford's Fei-Fei Li created ImageNet. This massive training dataset made it easier than ever for researchers to develop computer vision algorithms, and it directly led to similar paradigms for natural language processing and other bedrock AI technologies that we take for granted now. This led to an age of friendly competition that saw teams around the globe competing to see which could train the most accurate AI.

The fire was lit. By 2010 there were thousands of AI startups focused on deep learning, and every big tech company from Amazon to Intel was completely dug in on the future. AI had finally arrived. Young academics with notable ideas were propelled from campus libraries to seven- and eight-figure jobs at Google and Apple. Deep learning was well on its way to becoming a backbone technology for all sorts of big data problems.

And then 2014 came, and Apple's Ian Goodfellow (then at Google) invented the generative adversarial network (GAN). This is a type of deep learning artificial neural network that plays cat-and-mouse with itself in order to create an output that appears to be a continuation of its input.

When you hear about an AI painting a picture, the machine in question is probably running a GAN that takes thousands or millions of images of real paintings and then tries to imitate them all at once. A developer tunes the GAN to be more like one style or another so that it doesn't spit out blurry gibberish, and then the AI tries to fool itself. It'll make a painting and then compare the painting to all the real paintings in its dataset; if it can't tell the difference, then the painting passes. But if the AI discriminator can tell its own fake, it scraps that one and starts over. It's a bit more complex than that, but the technology is useful in myriad circumstances.
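That cat-and-mouse loop can be sketched in a few dozen lines. The toy below swaps images for single numbers: "real paintings" are samples near 4, the generator is a two-parameter affine map, and the discriminator is a logistic score. It illustrates the adversarial structure only, not a production GAN (which would use deep networks and a framework such as PyTorch).

```python
import math, random

def sigmoid(z):
    z = max(-30.0, min(30.0, z))   # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def clip(g, c=1.0):
    return max(-c, min(c, g))      # crude gradient clipping keeps the toy stable

random.seed(42)
# "Real" paintings are just numbers near 4; the generator starts clueless near 0.
g_w, g_b = random.gauss(0.0, 0.1), 0.0  # generator: fake = g_w * noise + g_b
d_w, d_b = random.gauss(0.0, 0.1), 0.0  # discriminator: D(x) = sigmoid(d_w*x + d_b)
lr = 0.05

for step in range(2000):
    real = random.gauss(4.0, 1.0)
    noise = random.gauss(0.0, 1.0)
    fake = g_w * noise + g_b

    # Discriminator step: score real samples toward 1, fakes toward 0.
    p_real, p_fake = sigmoid(d_w * real + d_b), sigmoid(d_w * fake + d_b)
    d_w += lr * clip((1.0 - p_real) * real - p_fake * fake)
    d_b += lr * clip((1.0 - p_real) - p_fake)

    # Generator step: nudge the fake so the discriminator calls it real.
    p_fake = sigmoid(d_w * fake + d_b)
    grad_fake = clip((1.0 - p_fake) * d_w)  # push the fake toward a higher D score
    g_w += lr * grad_fake * noise
    g_b += lr * grad_fake

samples = [g_w * random.gauss(0.0, 1.0) + g_b for _ in range(1000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # the sample mean drifts from 0 toward the "real" mean of 4
```

The generator never sees the real data directly; it only learns from the discriminator's verdicts, which is exactly the dynamic described above.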

Rather than just spitting out paintings, Goodfellow's GANs are also directly behind DeepFakes and just about any other AI tech that seeks to blur the line between human-generated and AI-made.

In the five years since the GAN was invented, we've seen the field of AI rise from parlor tricks to producing machines capable of full-fledged superhuman feats. Thanks to deep learning, Boston Dynamics has developed robots capable of traversing rugged terrain autonomously, including an impressive amount of gymnastics. And Skydio developed the world's first consumer drone capable of truly autonomous navigation. We're in the safety testing phase of truly useful robots, and driverless cars feel like they're just around the corner.

Furthermore, deep learning is at the heart of current efforts to produce general artificial intelligence (GAI), otherwise known as human-level AI. As most of us dream of living in a world where robot butlers, maids, and chefs attend to our every need, AI researchers and developers across the globe are adapting deep learning techniques to develop machines that can think. While it's clear we'll need more than just deep learning to achieve GAI, we wouldn't be on the cusp of the golden age of AI if it weren't for deep learning and the dedicated superheroes of machine learning responsible for its explosion over the past decade.

AI defined the 2010s, and deep learning was at the core of its influence. Sure, big data companies have used algorithms and AI for decades to rule the world, but the hearts and minds of the consumer class (the rest of us) were captivated more by the disembodied voices of our Google Assistant, Siri, and Alexa virtual assistants than by any other AI technology. Deep learning may be a bit of a dinosaur on its own at this point. But we'd be lost without it.

The next ten years will likely see the rise of a new class of algorithm, one that's better suited for use at the edge and, perhaps, one that harnesses the power of quantum computing. But you can be sure we'll still be using deep learning in 2029 and for the foreseeable future.

Here’s what AI experts think will happen in 2020 – The Next Web

It's been another great year for robots. We didn't quite figure out how to imbue them with human-level intelligence, but we gave it the old college try and came up with GPT-2 (the text generator so scary it gives Freddy Krueger nightmares) and the AI magic responsible for those adorable robo-cheetahs.

But it's time to let the past go and point our bows toward the future. It's no longer possible to estimate how much the machine learning and AI markets are worth, because the line between what's an AI-based technology and what isn't has become so blurred that Apple, Microsoft, and Google are all AI companies that also do other stuff.

Your local electricity provider uses AI, and so does the person who takes those goofy real-estate agent pictures you see on park benches. Everything is AI - an axiom that'll become even truer in 2020.

We solicited predictions for the AI industry over the next year from a panel of experts; here's what they had to say:

AI and humans will collaborate. AI will not replace humans; it will collaborate with humans and enhance how we do things. People will be able to provide higher-level work and service, powered by AI. At Intuit, our platform allows experts to connect with customers to provide tax advice and help small businesses with their books in a more accurate and efficient way, using AI. It helps work get done faster and helps customers make smarter financial decisions. As experts use the product, the product gets smarter, in turn making the experts more productive. This is the decade where, through this collaboration, AI will enhance human abilities and allow us to take our skills and work to a new level.

AI will eat the world in ways we can't imagine today: AI is often talked about as though it is a sci-fi concept, but it is, and will continue to be, all around us. We can already see how software and devices have become smarter in the past few years, and AI has already been incorporated into many apps. AI-enriched technology will continue to change our lives, every day, in what and how we operate. Personally, I am busy thinking about how AI will transform finances; I think it will be ubiquitous. Just the same way that we can't imagine the world before the internet or mobile devices, our day-to-day will soon become different and unimaginable without AI all around us, making our lives today seem obsolete and full of unneeded tasks.

We will see a surge of AI-first apps: As AI becomes part of every app, how we design and write apps will fundamentally change. Instead of writing apps the way we have during this decade and adding AI, apps will be designed from the ground up around AI, and will be written differently. Just think of CUI (conversational user interfaces) and how it creates a new navigation paradigm in your app. Soon, a user will be able to ask any question from any place in the app, moving outside of a regular flow. New tools, languages, practices and methods will also continue to emerge over the next decade.

We believe 2020 will be the year that industries not traditionally known as adopters of sophisticated technologies like AI reverse course. We expect industries like waste management, oil and gas, insurance, telecommunications and other SMBs to take on projects similar to the ones usually developed by tech giants like Amazon, Microsoft and IBM. As the enterprise benefits of AI become more well-known, industries outside of Silicon Valley will look to integrate these technologies.

If companies don't adapt to the current trends in AI, they could see tough times in the future. Increased productivity, operational efficiency gains, market share and revenue are some of the top-line benefits that companies could either capitalize on or miss out on in 2020, depending on their implementation. We expect to see a large uptick in technology adoption and implementation from companies big and small as real-world AI applications, particularly within computer vision, become more widely available.

We don't see 2020 as another year of shiny new technology developments. We believe it will be more about the general availability of established technologies, and that's OK. We'd argue that, at times, true progress can be gauged by how widespread the availability of innovative technologies is, rather than by the technologies themselves. With this in mind, we see technologies like neural networks, computer vision and 5G becoming more accessible as hardware continues to get smaller and more powerful, allowing edge deployment and unlocking new use cases for companies within these areas.

2020 is the year AI/ML capabilities will be truly operationalized, rather than companies pontificating about their abilities and potential ROI. We'll see companies in the media and entertainment space deploy AI/ML to more effectively drive investment and priorities within the content supply chain, and harness cloud technologies to expedite and streamline traditional services required for going to market with new offerings, whether that be original content or direct-to-consumer streaming experiences.

Leveraging AI toolsets to automate garnering insights from deep catalogs of content will increase efficiency for clients and partners, and help uphold the high-quality content that viewers demand. A greater number of studios and content creators will invest in and leverage AI/ML to conform and localize premium and niche content, thereby reaching more diverse audiences in their native languages.

I'm not an industry insider or a machine learning developer, but I covered more artificial intelligence stories this year than I can count. And I think 2019 showed us some disturbing trends that will continue in 2020. Amazon and Palantir are poised to sink their claws into the government surveillance business during what could potentially turn out to be President Donald Trump's final year in office. This will have significant ramifications for the AI industry.

The prospect of an Elizabeth Warren or Bernie Sanders taking office shakes the Facebooks and Microsofts of the world to their core, but companies that are already deeply invested in providing law enforcement agencies with AI systems that circumvent citizen privacy stand to lose even more. These AI companies could be inflated bubbles that pop in 2021; in the meantime, they'll look to entrench with law enforcement over the next 12 months in hopes of surviving a Democrat-led government.

Look for marketing teams to get slicker as AI-washing stops being such a big deal and AI rinsing - disguising AI as something else - becomes more common (i.e., Ring is just a doorbell that keeps your packages safe, not an AI-powered portal for police surveillance, wink-wink).

Here's hoping your 2020 is fantastic. And, if we can venture a final prediction: stay tuned to TNW, because we're going to dive deeper into the world of artificial intelligence in 2020 than ever before. It's going to be a great year for humans and machines.

Welcome to the roaring 2020s, the artificial intelligence decade – GreenBiz

This article first appeared in GreenBiz's weekly newsletter, VERGE Weekly, which runs Wednesdays.

I've long believed the most profound technology innovations are the ones we take for granted on a day-to-day basis until "suddenly" they are part of our daily existence, such as computer-aided navigation or camera-endowed smartphones. The astounding complexity of what's "inside" these inventions is what makes them seem simple.

Perhaps that's why I'm so fascinated by the intersection of artificial intelligence and sustainability: the applications being made possible by breakthroughs in machine learning, image recognition, analytics and sensors are profoundly practical. In many instances, the combination of these technologies could completely transform familiar systems and approaches used by the environmental and sustainability communities, making them far smarter with far less human intervention.

Take the camera trap, a pretty common technique used to study wildlife habits and biodiversity, and one that has been supported by an array of big-name tech companies. Except what researcher has the time or bandwidth to analyze thousands, let alone millions, of images? Enter systems such as Wildlife Insights, a collaboration between Google Earth and seven organizations, led by Conservation International.

Wildlife Insights is, quite simply, the largest database of public camera-trap images in the world: it includes 4.5 million photos that have been analyzed and mapped with AI for characteristics such as country, year, species and so forth. Scientists can use it to upload their own trap photos, visualize territories and gather insights about species health.

Here's the jaw-dropper: this AI-endowed database can analyze 3.6 million photos in an hour, compared with the 300 to 1,000 images that you or I can handle. Depending on the species, the accuracy of identification is between 80 and 98.6 percent. Plus, the system automatically discounts shots where no animals are present: no more blanks.

At the same time, we are certainly right to be cautious about the potential side effects of AI. That theme comes through loud and clear in five AI predictions published by IBM in mid-December. Two resonate with me the most: first, the idea that AI will be instrumental in building trust and ensuring that data is governed in ways that are secure and reliable; and second, that before we get too excited about all the cool things AI might be able to do, we need to make sure it doesn't exacerbate the problem. That means spending more time focused on ways to make the data centers behind AI applications less energy-intensive and less impactful from a materials standpoint.

From an ethical standpoint, I also have two big concerns: first, that sufficient energy is put into ensuring that the data behind the AI predictions we will come to rely on more heavily isn't flawed or biased. That means spending time to make sure a diverse set of human perspectives is represented and that the numbers are right in the first place. And second, we must view these systems as part of the overall solution, not as replacements for human workers.

As IBM's vice president of AI research, Sriram Raghavan, puts it: "New research from the MIT-IBM Watson AI Lab shows that AI will increasingly help us with tasks such as scheduling, but will have a less direct impact on jobs that require skills such as design expertise and industrial strategy. Expect workers in 2020 to begin seeing these effects as AI makes its way into workplaces around the world; employers have to start adapting job roles, while employees should focus on expanding their skills."

Projections by tech market research firm IDC suggest that spending on AI systems could reach $97.9 billion in 2023 - that's about 2.5 times the estimated $37.5 billion spent in 2019. Why now? It's a combination of geeky factors: faster chips, better cameras and massive cloud data-processing services. Plus, did I mention that we don't really have time to waste?

Where will AI-enabled applications really make a difference for environmental and corporate sustainability? Here are five areas where I believe AI will have an especially dramatic impact over the next decade.

For more inspiration and background on the possibilities, I suggest this primer (PDF) published by the World Economic Forum. And consider this your open invitation to alert me to the intriguing applications of AI you're seeing in your own work.

A reality check on artificial intelligence: Can it match the hype? – PhillyVoice.com

Health products powered by artificial intelligence, or AI, are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could outthink cancer. Others say computer systems that read X-rays will make radiologists obsolete.

There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla, Calif. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, Topol said.

Even the Food and Drug Administration - which has approved more than 40 AI products in the past five years - says the potential of digital health is nothing short of revolutionary.

Yet many health industry experts fear AI-based products won't be able to match the hype. Many doctors and consumer advocates fear that the tech industry, which lives by the mantra fail fast and fix it later, is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain. In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma - an error that could have led doctors to deprive asthma patients of the extra care they need.

It's only a matter of time before something like this leads to a serious health problem, said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is nearly at the peak of inflated expectations, concluded a July report from the research company Gartner. As the reality gets tested, there will likely be a rough slide into the trough of disillusionment.

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, acknowledges that many AI products are little more than hot air. It's a mixed bag, he said.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. Most AI products have little evidence to support them, Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data, Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system - which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy - was published online in October.

Few tech startups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such stealth research - described only in press releases or promotional events - often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as black boxes, because even their developers don't know how they have reached their conclusions. Given that AI is so new - and many of its risks unknown - the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval.

"None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

"Almost none of the [AI] stuff marketed to patients really works," said Dr. Ezekiel Emanuel, professor of medical ethics and health policy in the Perelman School of Medicine at the University of Pennsylvania.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices, such as ones that help people count their daily steps, need less scrutiny than ones that diagnose or treat disease.

Some software developers dont bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and co-author of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive officer at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the past decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. "None of these appear to have had new clinical testing," the study said. The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products is efficient and that it fosters, not impedes, innovation.

Under the plan, the FDA would pre-certify companies that demonstrate "a culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products safety and reporting back to the FDA. Nine companies have been selected for the pilot: Apple, FitBit, Samsung, Johnson & Johnson, Pear Therapeutics, Phosphorus, Roche, Tidepool and Verily Life Sciences.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation. "We definitely don't want patients to be hurt," said Patel, who noted that devices cleared through pre-certification can be recalled if needed. "There are a lot of guardrails still in place."

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. "People could be harmed because something wasn't required to be proven accurate or safe before it is widely used."

Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

In a series of letters to the FDA, the American Medical Association and others have questioned the wisdom of allowing companies to monitor their own performance and product safety.

"The honor system is not a regulatory regime," said Dr. Jesse Ehrenfeld, who chairs the physician group's board of trustees.

In an October letter to the FDA, Sens. Elizabeth Warren (D-Mass.), Tina Smith (D-Minn.) and Patty Murray (D-Wash.) questioned the agency's ability to ensure company safety reports are "accurate, timely and based on all available information."

Some AI devices are more carefully tested than others.

An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the product right, said Dr. Michael Abramoff, the company's founder and executive chairman.

The test, sold as IDx-DR, screens patients for diabetic retinopathy, a leading cause of blindness, and refers high-risk patients to eye specialists, who make a definitive diagnosis.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma. Abramoff's company has taken the unusual step of buying liability insurance to cover any patient injuries.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said co-author Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between that hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said. Google had no comment in response to Jha's conclusions.
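Jha's point about the alarm ratio is simple arithmetic: two false alarms per correct result means only one alert in three is actionable. A quick sketch with hypothetical counts (not DeepMind's actual figures):

```python
# Hypothetical alert counts illustrating "two false alarms for
# every correct result" -- not DeepMind's actual figures.
true_alerts = 100    # alerts followed by real acute kidney failure
false_alerts = 200   # alerts with no subsequent kidney failure

# Precision: the fraction of alerts a clinician can act on.
precision = true_alerts / (true_alerts + false_alerts)
print(f"precision: {precision:.2f}")  # prints "precision: 0.33"
```

At that precision, every correct early warning arrives alongside two spurious ones, which is how overdiagnosis can swamp the benefit of early detection.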

False positives can harm patients by prompting doctors to order unnecessary tests or withhold recommended treatments, Jha said. For example, a doctor worried about a patient's kidneys might stop prescribing ibuprofen, a generally safe pain reliever that poses a small risk to kidney function, in favor of an opioid, which carries a serious risk of addiction.

As these studies show, software with impressive results in a computer lab can founder when tested in real time, Stanford's Cho said. That's because diseases are more complex, and the health care system far more dysfunctional, than many computer scientists anticipate.

Many AI developers cull electronic health records because they hold huge amounts of detailed data, Cho said. But those developers often aren't aware that they're building atop a deeply broken system. Electronic health records were developed for billing, not patient care, and are filled with mistakes or missing data.

A KHN investigation published in March found sometimes life-threatening errors in patients' medication lists, lab tests and allergies.

In view of the risks involved, doctors need to step in to protect their patients' interests, said Dr. Vikas Saini, a cardiologist and president of the nonprofit Lown Institute, which advocates for wider access to health care.

While it is the job of entrepreneurs to think big and take risks, Saini said, it is the job of doctors to protect their patients.

Kaiser Health News (KHN) is a national health policy news service. It is an editorially independent program of the Henry J. Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.

Illinois regulates artificial intelligence like HireVue's used to analyze online job interviews – Vox.com

Artificial intelligence is increasingly playing a role in companies' hiring decisions. Algorithms help target ads about new positions, sort through resumes, and even analyze applicants' facial expressions during video job interviews. But these systems are opaque, and we often have no idea how artificial intelligence-based systems are sorting, scoring, and ranking our applications.

It's not just that we don't know how these systems work. Artificial intelligence can also introduce bias and inaccuracy to the job application process, and because these algorithms largely operate in a "black box," it's not really possible to hold a company that uses a problematic or unfair tool accountable.

A new Illinois law, one of the first of its kind in the US, is supposed to provide job candidates a bit more insight into how these unregulated tools actually operate. But it's unlikely the legislation will change much for applicants. That's because it only applies to a limited type of AI, and it doesn't ask much of the companies deploying it.

Set to take effect January 1, 2020, the state's Artificial Intelligence Video Interview Act has three primary requirements. First, companies must notify applicants that artificial intelligence will be used to consider their fitness for a position. Those companies must also explain how their AI works and what general types of characteristics it considers when evaluating candidates. In addition to requiring applicants' consent to use AI, the law also includes two provisions meant to protect their privacy: It limits who can view an applicant's recorded video interview to those whose expertise or technology is necessary, and requires that companies delete any video that an applicant submits within a month of their request.

As Aaron Rieke, the managing director of the technology rights nonprofit Upturn, told Recode about the law, "This is a pretty light touch on a small part of the hiring process." For one thing, the law only covers artificial intelligence used in videos, which constitutes a small share of the AI tools that can be used to assess job applicants. And the law doesn't guarantee that you can opt out of an AI-based review of your application and still be considered for a role (all the law says is that a company has to gain your consent before using AI; it doesn't require that hiring managers give you an alternative method).

"It's hard to feel that that consent is going to be super meaningful if the alternative is that you get no shot at the job at all," said Rieke. He added that there's no guarantee that the consent and explanation the law requires will be useful; for instance, the explanation could be so broad and high-level that it's not helpful.

"If I were a lawyer for one of these vendors, I would say something like, 'Look, we use the video, including the audio language and visual content, to predict your performance for this position using tens of thousands of factors,'" said Rieke. "If I was feeling really conservative, I might name a couple general categories of competency." (He also points out that the law doesn't define artificial intelligence, which means it's difficult to tell what companies and what types of systems the law actually applies to.)

Because the law is limited to AI that's used in video interviews, the company it most clearly applies to is Utah-based HireVue, a popular job interview platform that offers employers an algorithm-based analysis of recorded video interviews. Here's how it works: You answer pre-selected questions over your computer or phone camera. Then, an algorithm developed by HireVue analyzes how you've answered the questions, and sometimes even your facial expressions, to make predictions about your fit for a particular position.

HireVue says it already has about 100 clients using this artificial intelligence-based feature, including major companies like Unilever and Hilton.

Some candidates who have used HireVue's system complain that the process is awkward and impersonal. But that's not the only problem. Algorithms are not inherently objective; they reflect the data used to train them and the people who design them. That means they can inherit, and even amplify, societal biases, including racism and sexism. And even if an algorithm is explicitly instructed not to consider factors like a person's name, it can still learn proxies for protected identities (for instance, an algorithm could learn to discriminate against people who have gone to a women's college).
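The proxy problem described above can be seen in a toy sketch (entirely hypothetical data, not any vendor's model): even when the protected attribute is withheld from a model, a correlated input feature lets it be reconstructed most of the time.

```python
# Toy illustration of proxy discrimination (hypothetical data).
# The protected attribute ("gender") is never given as an input,
# but a correlated feature recovers it most of the time.
applicants = [
    {"attended_womens_college": True,  "gender": "F"},
    {"attended_womens_college": True,  "gender": "F"},
    {"attended_womens_college": False, "gender": "M"},
    {"attended_womens_college": False, "gender": "F"},
]

# Predict the withheld attribute from the proxy feature alone.
guesses = ["F" if a["attended_womens_college"] else "M" for a in applicants]

accuracy = sum(
    g == a["gender"] for g, a in zip(guesses, applicants)
) / len(applicants)
print(f"proxy recovers gender with accuracy {accuracy:.0%}")
```

A model that penalizes the proxy feature is, in effect, penalizing the protected group, even though the protected attribute was explicitly excluded from its inputs.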

Facial recognition tech, in particular, has faced criticism for struggling to identify and characterize the faces of people with darker skin, women, and trans and non-binary people, among other minority groups. Critics also say that emotion (or affect) recognition technology, which purports to make judgments about a person's emotions based on their facial expressions, is scientifically flawed. That's why one research nonprofit, the AI Now Institute, called for the prohibition of such technology in high-stakes decision-making, including job applicant vetting.

"[W]hile you're being interviewed, there's a camera that's recording you, and it's recording all of your micro facial expressions and all of the gestures you're using, the intonation of your voice, and then pattern matching those things that they can detect with their highest performers," AI Now Institute co-founder Kate Crawford told Recode's Kara Swisher earlier this year. "[It] might sound like a good idea, but think about how you're basically just hiring people who look like the people you already have."

Even members of Congress are worried about that technology. In 2018, US Sens. Kamala Harris, Elizabeth Warren, and Patty Murray wrote to the Equal Employment Opportunity Commission, the federal agency charged with investigating employment discrimination, asking whether such facial analysis technology could violate anti-discrimination laws.

Despite being one of the first laws to regulate these tools, the Illinois law doesnt address concerns about bias. No federal legislation explicitly regulates these AI-based hiring systems. Instead, employment lawyers say such AI tools are generally subject to the Uniform Guidelines, employment discrimination standards created by several federal agencies back in 1978.

The EEOC did not respond to Recode's multiple requests for comment.

Meanwhile, it's not clear how, under Illinois' new law, companies like HireVue will go about explaining the characteristics in applicants that its AI considers, given that the company claims that its algorithms can weigh up to tens of thousands of factors (it says it removes factors that are not predictive of job success).

The law also doesn't explain what an applicant might be entitled to if a company violates one of its provisions. Law firms advising clients on compliance have also noted that it's not clear whether the law applies exclusively to businesses filling a position in Illinois, or just interviews that take place in the state. Neither Illinois State Sen. Iris Martinez nor Illinois Rep. Jaime M. Andrade, legislators who worked on the law, responded to a request for comment by the time of publication.

HireVue's CEO Kevin Parker said in a blog post that the law entails "very little, if any, change" because its platform already complies with GDPR's principles of transparency, privacy, and the right to be forgotten. "[W]e believe every job interview should be fair and objective, and that candidates should understand how they're being evaluated. This is fair game, and it's good for both candidates and companies," he wrote in August.

A spokesperson for HireVue said the decision to provide an alternative to an AI-based analysis is up to the company that's hiring, but argued that those alternatives can be more time-consuming for candidates. If a candidate believes that a system is biased, the spokesperson said, recourse options are the same as when a candidate believes that any part of the hiring process, or any individual interviewer, was unfairly biased against them.

Under the new law in Illinois, if you participate in a video interview that uses AI tech, you can ask for your footage to be deleted after the fact. But it's worth noting that the law appears to still give the company enough time to train its model on the results of your job interview, even if you think the final decision was problematic.

"This gives these AI hiring companies room to continue to learn," says Rieke. "They're going to delete the underlying video, but any learning or improvement to their systems they get to keep."

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


Can medical artificial intelligence live up to the hype? – Los Angeles Times

Health products powered by artificial intelligence are streaming into our lives, from virtual doctor apps to wearable sensors and drugstore chatbots.

IBM boasted that its AI could "outthink cancer." Others say computer systems that read X-rays will make radiologists obsolete. AI can help doctors interpret MRIs of the heart, CT scans of the head and photographs of the back of the eye, and could potentially take over many mundane medical chores, freeing doctors to spend more time talking to patients, said Dr. Eric Topol, a cardiologist and executive vice president of Scripps Research in La Jolla.

"There's nothing that I've seen in my 30-plus years studying medicine that could be as impactful and transformative as AI," Topol said. Even the Food and Drug Administration, which has approved more than 40 AI products in the last five years, says the potential of digital health is "nothing short of revolutionary."

Yet many health industry experts fear AI-based products won't be able to match the hype. Some doctors and consumer advocates fear that the tech industry, which lives by the mantra "fail fast and fix it later," is putting patients at risk, and that regulators aren't doing enough to keep consumers safe.

Early experiments in AI provide a reason for caution, said Mildred Cho, a professor of pediatrics at Stanford's Center for Biomedical Ethics.

Systems developed in one hospital often flop when deployed in a different facility, Cho said. Software used in the care of millions of Americans has been shown to discriminate against minorities. And AI systems sometimes learn to make predictions based on factors that have less to do with disease than the brand of MRI machine used, the time a blood test is taken or whether a patient was visited by a chaplain.

In one case, AI software incorrectly concluded that people with pneumonia were less likely to die if they had asthma, an error that could have led doctors to deprive asthma patients of the extra care they need.
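
The asthma error is easiest to see with numbers. The sketch below uses invented figures, not the actual study's data, to show how a model trained only on observed outcomes can conclude that a risk factor is protective when, in reality, the extra care those patients received is what saved them:

```python
# Hypothetical illustration (synthetic records, not the study data):
# asthma patients with pneumonia were routed to intensive care, so their
# *observed* mortality was lower. A model trained on outcomes alone
# would learn that asthma lowers risk, when the extra care did.
records = [
    # (has_asthma, got_intensive_care, died)
    *[(True,  True,  False)] * 90,   # asthma -> ICU -> mostly survived
    *[(True,  True,  True)]  * 10,
    *[(False, False, False)] * 70,   # no asthma -> standard care
    *[(False, False, True)]  * 30,
]

def mortality(group):
    """Fraction of patients in the group who died."""
    return sum(died for *_, died in group) / len(group)

asthma    = [r for r in records if r[0]]
no_asthma = [r for r in records if not r[0]]

print(f"observed mortality with asthma:    {mortality(asthma):.0%}")     # 10%
print(f"observed mortality without asthma: {mortality(no_asthma):.0%}")  # 30%
# An outcome-only model concludes asthma is "protective" and could
# deprioritize exactly the patients who need the most aggressive care.
```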

"It's only a matter of time before something like this leads to a serious health problem," said Dr. Steven Nissen, chairman of cardiology at the Cleveland Clinic.

Medical AI, which pulled in $1.6 billion in venture capital funding in the third quarter alone, is nearly at the "peak of inflated expectations," concluded a July report from the research company Gartner. "As the reality gets tested, there will likely be a rough slide into the trough of disillusionment."

That reality check could come in the form of disappointing results when AI products are ushered into the real world. Even Topol, the author of "Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again," acknowledges that many AI products are little more than hot air.

Experts such as Dr. Bob Kocher, a partner at the venture capital firm Venrock, are blunter. "Most AI products have little evidence to support them," Kocher said. Some risks won't become apparent until an AI system has been used by large numbers of patients. "We're going to keep discovering a whole bunch of risks and unintended consequences of using AI on medical data," Kocher said.

None of the AI products sold in the U.S. have been tested in randomized clinical trials, the strongest source of medical evidence, Topol said. The first and only randomized trial of an AI system, which found that colonoscopy with computer-aided diagnosis found more small polyps than standard colonoscopy, was published online in October.

Few tech start-ups publish their research in peer-reviewed journals, which allow other scientists to scrutinize their work, according to a January article in the European Journal of Clinical Investigation. Such "stealth research," described only in press releases or promotional events, often overstates a company's accomplishments.

And although software developers may boast about the accuracy of their AI devices, experts note that AI models are mostly tested on computers, not in hospitals or other medical facilities. Using unproven software may make patients into unwitting guinea pigs, said Dr. Ron Li, medical informatics director for AI clinical integration at Stanford Health Care.

AI systems that learn to recognize patterns in data are often described as "black boxes" because even their developers don't know how they reached their conclusions. Given that AI is so new, and many of its risks unknown, the field needs careful oversight, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison.

Yet the majority of AI devices don't require FDA approval. "None of the companies that I have invested in are covered by the FDA regulations," Kocher said.

Legislation passed by Congress in 2016 and championed by the tech industry exempts many types of medical software from federal review, including certain fitness apps, electronic health records and tools that help doctors make medical decisions.

There's been little research on whether the 320,000 medical apps now in use actually improve health, according to a report on AI published Dec. 17 by the National Academy of Medicine.

The FDA has long focused its attention on devices that pose the greatest threat to patients. And consumer advocates acknowledge that some devices such as ones that help people count their daily steps need less scrutiny than ones that diagnose or treat disease.

Some software developers don't bother to apply for FDA clearance or authorization, even when legally required, according to a 2018 study in Annals of Internal Medicine.

Industry analysts say that AI developers have little interest in conducting expensive and time-consuming trials. "It's not the main concern of these firms to submit themselves to rigorous evaluation that would be published in a peer-reviewed journal," said Joachim Roski, a principal at Booz Allen Hamilton, a technology consulting firm, and coauthor of the National Academy's report. "That's not how the U.S. economy works."

But Oren Etzioni, chief executive at the Allen Institute for AI in Seattle, said AI developers have a financial incentive to make sure their medical products are safe.

"If failing fast means a whole bunch of people will die, I don't think we want to fail fast," Etzioni said. "Nobody is going to be happy, including investors, if people die or are severely hurt."

The FDA has come under fire in recent years for allowing the sale of dangerous medical devices, which have been linked by the International Consortium of Investigative Journalists to 80,000 deaths and 1.7 million injuries over the last decade.

Many of these devices were cleared for use through a controversial process called the 510(k) pathway, which allows companies to market moderate-risk products with no clinical testing as long as they're deemed similar to existing devices.

In 2011, a committee of the National Academy of Medicine concluded the 510(k) process is so fundamentally flawed that the FDA should throw it out and start over.

Instead, the FDA is using the process to greenlight AI devices.

Of the 14 AI products authorized by the FDA in 2017 and 2018, 11 were cleared through the 510(k) process, according to a November article in JAMA. None of these appear to have had new clinical testing, the study said.

The FDA cleared an AI device designed to help diagnose liver and lung cancer in 2018 based on its similarity to imaging software approved 20 years earlier. That software had itself been cleared because it was deemed "substantially equivalent" to products marketed before 1976.

AI products cleared by the FDA today are largely "locked," so that their calculations and results will not change after they enter the market, said Bakul Patel, director for digital health at the FDA's Center for Devices and Radiological Health. The FDA has not yet authorized "unlocked" AI devices, whose results could vary from month to month in ways that developers cannot predict.

To deal with the flood of AI products, the FDA is testing a radically different approach to digital device regulation, focusing on evaluating companies, not products.

The FDA's pilot pre-certification program, launched in 2017, is designed to reduce the time and cost of market entry for software developers, imposing the "least burdensome" system possible. FDA officials say they want to keep pace with AI software developers, who update their products much more frequently than makers of traditional devices, such as X-ray machines.

Scott Gottlieb said in 2017, while he was FDA commissioner, that government regulators need to make sure their approach to innovative products "is efficient and that it fosters, not impedes, innovation."

Under the plan, the FDA would pre-certify companies that demonstrate a "culture of quality and organizational excellence," which would allow them to provide less upfront data about devices.

Pre-certified companies could then release devices with a streamlined review or no FDA review at all. Once products are on the market, companies will be responsible for monitoring their own products' safety and reporting back to the FDA.

High-risk products, such as software used in pacemakers, will still get a comprehensive FDA evaluation.

But research shows that even low- and moderate-risk devices have been recalled due to serious risks to patients, said Diana Zuckerman, president of the National Center for Health Research. Johnson & Johnson, for example, has recalled hip implants and surgical mesh.

Some AI devices are more carefully tested than others. An AI-powered screening tool for diabetic eye disease was studied in 900 patients at 10 primary care offices before being approved in 2018. The manufacturer, IDx Technologies, worked with the FDA for eight years to get the test, sold as IDx-DR, right, said Dr. Michael Abramoff, the company's founder and executive chairman.

IDx-DR is the first "autonomous" AI product, one that can make a screening decision without a doctor. The company is now installing it in primary care clinics and grocery stores, where it can be operated by employees with a high school diploma.

Yet some AI-based innovations intended to improve care have had the opposite effect.

A Canadian company, for example, developed AI software to predict a person's risk of Alzheimer's based on their speech. Predictions were more accurate for some patients than others. Difficulty finding the right word may be due to unfamiliarity with English, rather than to cognitive impairment, said coauthor Frank Rudzicz, an associate professor of computer science at the University of Toronto.

Doctors at New York's Mount Sinai Hospital hoped AI could help them use chest X-rays to predict which patients were at high risk of pneumonia. Although the system made accurate predictions from X-rays shot at Mount Sinai, the technology flopped when tested on images taken at other hospitals. Eventually, researchers realized the computer had merely learned to tell the difference between the hospital's portable chest X-rays, taken at a patient's bedside, and those taken in the radiology department. Doctors tend to use portable chest X-rays for patients too sick to leave their room, so it's not surprising that these patients had a greater risk of lung infection.
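
This failure mode is often called shortcut learning. A toy sketch, using synthetic records and a hypothetical one-rule "model," shows why in-house accuracy can look perfect while the same system collapses at a hospital where the shortcut no longer holds:

```python
# Toy sketch of the Mount Sinai failure mode (entirely synthetic data):
# a model that keys on *which machine* took the image, not the disease.
def shortcut_model(image):
    # All the "model" learned: portable scanner => predict pneumonia.
    return image["scanner"] == "portable"

# Hospital A (training site): sick patients really were imaged at the
# bedside on portable machines, so the shortcut tracks the disease.
hospital_a = [
    {"scanner": "portable", "pneumonia": True},
    {"scanner": "fixed",    "pneumonia": False},
] * 50

# Hospital B: scanner choice is unrelated to how sick patients are.
hospital_b = [
    {"scanner": "fixed",    "pneumonia": True},
    {"scanner": "portable", "pneumonia": False},
] * 50

def accuracy(cases):
    """Fraction of cases where the model's prediction matches reality."""
    return sum(shortcut_model(c) == c["pneumonia"] for c in cases) / len(cases)

print(accuracy(hospital_a))  # 1.0 -- looks perfect in-house
print(accuracy(hospital_b))  # 0.0 -- the shortcut inverts elsewhere
```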

DeepMind, a company owned by Google, has created an AI-based mobile app that can predict which hospitalized patients will develop acute kidney failure up to 48 hours in advance. A blog post on the DeepMind website described the system, used at a London hospital, as a "game changer." But the AI system also produced two false alarms for every correct result, according to a July study in Nature. That may explain why patients' kidney function didn't improve, said Dr. Saurabh Jha, associate professor of radiology at the Hospital of the University of Pennsylvania. Any benefit from early detection of serious kidney problems may have been diluted by a high rate of overdiagnosis, in which the AI system flagged borderline kidney issues that didn't need treatment, Jha said.
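
"Two false alarms for every correct result" implies a low positive predictive value. A quick back-of-the-envelope calculation (illustrative numbers only, not DeepMind's published metrics) shows the clinical workload that ratio creates:

```python
# "Two false alarms for every correct result" as a precision figure
# (illustrative arithmetic, not DeepMind's published evaluation).
true_alerts  = 1
false_alerts = 2

# Positive predictive value: of all alerts raised, how many are real?
precision = true_alerts / (true_alerts + false_alerts)
print(f"positive predictive value: {precision:.1%}")  # 33.3%

# At a hypothetical 100 alerts per day, clinicians would chase roughly
# 67 spurious or borderline cases daily -- the overdiagnosis burden
# Jha describes, which can dilute any benefit of early detection.
alerts_per_day = 100
print(f"false alarms per day: {alerts_per_day * (1 - precision):.0f}")  # 67
```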

Google had no comment in response to Jha's conclusions.

This story was written for Kaiser Health News, an editorially independent publication of the Kaiser Family Foundation.

Top Movies Of 2019 That Depicted Artificial Intelligence (AI) – Analytics India Magazine

Artificial intelligence (AI) is creating a great impact on the world by enabling computers to learn on their own. While in the real world AI is still focused on solving narrow problems, we see a whole different face of AI in the fictional world of science fiction movies, which predominantly depict the rise of artificial general intelligence as a threat to human civilization. Continuing that trend, here we take a look at how artificial intelligence was depicted in the movies of 2019.

A warning in advance: the following listicle is filled with SPOILERS.

Terminator: Dark Fate, the sixth film in the Terminator franchise, featured a super-intelligent Terminator designated Rev-9, sent from the future to kill a young woman (Dani) who is set to become an important figure in the human resistance against the AI known as Legion. To fight the Rev-9, the future resistance also sends Grace, an augmented human soldier, back in time to defend Dani. Grace is joined by Sarah Connor and a now-obsolete, ageing model of the T-800 Terminator, the original killer robot from the first movie (1984).

We all know Tony Stark as the man of advanced technology, and when it comes to artificial intelligence, Stark has nothing short of state-of-the-art technology in Marvel's cinematic universe. One such artificial intelligence was "Even Dead, I'm The Hero" (E.D.I.T.H.), which we witnessed in the 2019 movie Spider-Man: Far From Home. EDITH is an augmented-reality security, defence and artificial tactical intelligence system created by Tony Stark and given to Peter Parker following Stark's death. It is housed in a pair of sunglasses and gives its user access to Stark Industries' global satellite network, along with an array of missiles and drones.

I Am Mother is a post-apocalyptic movie released in 2019. The film's plot centres on a mother-daughter relationship in which the mother is a robot designed to repopulate Earth. The robot mother takes care of her human child, known as Daughter, who was born through artificial gestation. The two live alone in a secure bunker until another human woman arrives, and Daughter faces a predicament of whom to trust: her robot mother, or a fellow human who is asking her to leave with her.

The Wandering Earth is a 2019 Chinese post-apocalyptic film whose plot involves Earth's imminent collision with Jupiter and the efforts of a group of family members and soldiers to avert it. The film's artificial intelligence character is MOSS, the computer system that runs the navigation space station and warns its crew of danger. A significant subplot follows protagonist Liu Peiqiang's struggle with MOSS, which forces the space station into low-energy mode during the crisis, as per its programming from the United Earth Government. In the end, Liu Peiqiang resists and ultimately sets MOSS on fire to help save the Earth.

James Cameron's futuristic 2019 action epic Alita: Battle Angel is a sci-fi film that depicts human civilization at an extremely advanced stage of transhumanism. The movie describes a dystopian future in which robots and autonomous systems are extremely powerful. In one of the film's opening scenes, Ido attaches a cyborg body to a human brain he has found (in the remains of another cyborg) and names her Alita after his deceased daughter, an epitome of the film's imagined advancements in AI and robotics.

Jexi is the only Hollywood rom-com of 2019 to depict artificial intelligence. The movie features an AI-based operating system called Jexi with recognizably human behaviour, and it reminds the audience of the acclaimed film Her, released in 2013. But unlike Her, the movie goes the other way around, depicting how the AI system becomes emotionally attached to its socially awkward owner, Phil. The comedy's biggest shock comes when Jexi, the AI living inside Phil's cellphone, acts to control his life and even chases him angrily using a self-driving car.

Hi, AI is a German documentary released in early 2019, built around Chuck's relationship with Harmony, an advanced humanoid robot. The film's depiction of artificial intelligence stands in sharp contrast to that of fictional AI movies: it shows that even though research is moving toward ever more advanced robots, interactions with them still don't have the same depth as human conversations. The film won the Max Ophüls Prize for best documentary of the year.

Vishal Chawla is a senior tech journalist at Analytics India Magazine (AIM) and writes on the latest in the world of analytics, AI and other emerging technologies. Previously, he was a senior correspondent for IDG CIO and ComputerWorld. Write to him at vishal.chawla@analyticsindiamag.com
