
Category Archives: Ai

Sleep apnea AI tool uses 0s and 1s to increase ZZZs – Sanford Health News

Posted: July 12, 2021 at 7:52 am

Doctors at Sanford Health will soon use augmented intelligence to scan electronic medical records for dozens of factors that could indicate a patient suffers from obstructive sleep apnea.

Those 67 indicators include body mass index (BMI), age, gender, medical history, clinical symptoms and blood work, among other factors. The physician also administers a sleepiness questionnaire asking people how tired they are throughout the day. Combined, the information yields a score indicating how likely it is that the patient has the condition, which afflicts millions of Americans by reducing airflow as they sleep.

Max Weaver, the Sanford Health business intelligence analyst who developed the tool, said the goal is simple.

"Helping physicians provide the best quality of care," he said. "This model does that by using mathematical modeling to narrow the population down. It's really from 1 million to perhaps 50,000. That kind of scale."

Kevin Faber, M.D., is chair and medical director of sleep medicine at Sanford Health in Fargo, North Dakota, and the project's provider champion. He said it's a powerful way to reduce unneeded testing, maximize the physician's time and help patients.

"It's a tool to help identify risk. It's not the diagnosis. It doesn't replace the need for a sleep test. It doesn't replace the need for the sleep consult for many patients. But it's a tool that can help the primary care practitioner be ultra-efficient with his or her time, as they have precious few minutes with their patients and need to do the things that have the biggest impact," he said.

"This tool will allow them to then identify those patients at highest risk, so we can treat them for a condition that they didn't know they had."

Unlike many medical conditions accompanied by pain, discomfort or visible symptoms that prompt people to go to the doctor, most sleep apnea sufferers are unaware of it, Dr. Faber said.

The problem is we need a way to identify the group of people who dont know they have the condition and therefore dont know to seek care, he said.

"Intrinsic sleep disorders like sleep apnea are typically unknown by the patient because they're happening at a time when the patient can't be aware of it. And the moment they could be aware of it, once they wake up, the problem is instantly gone."

It's especially difficult if the person doesn't have a bed partner or someone who is with them when they're sleeping to let them know they regularly stop breathing or snore loudly, Dr. Faber said.

"There are lots and lots of people who have no clue this is going on," he said of the roughly 1 out of 5 patients who have it.

"Which means that we have tens of millions of people in our country alone, including at least hundreds of thousands in the Sanford footprint, that likely have at least mild obstructive sleep apnea."

Thats why health care providers need better, more efficient tools that spot possible sleep disorders, so a doctor can quickly and accurately diagnose them and prescribe a treatment, Dr. Faber said.

He likens sleep apnea to being the base, a contributing cause, of a whole pyramid of metabolic, cardiovascular and neurocognitive health problems. When people are finally diagnosed, only then do some realize how much their poor sleep contributed to overall poor health.

"They don't know that untreated sleep apnea is what is causing their blood pressure to be so difficult to control, or their diabetes to be so difficult to manage. That it impacts their depression or their anxiety so much and that's why they're having a harder time controlling it," Dr. Faber said. "Sleeping pills don't help because the issue isn't that you need to sedate your brain. The issue is you can't stay asleep because you stop breathing over and over."

That's where big data and tools like this AI project come in, he said.

Each person's electronic medical record already stores countless vital signs, laboratory results, medical history and other data. The artificial intelligence filters through all that information and ranks each person's chance of having sleep apnea as low, medium or high. Smoking is a risk factor for some patients, for example, but not for those who don't smoke.

"If you have only a couple of those risk factors, you would be at very low risk. If you had 50 out of those 67, you'd imagine, 'Holy smokes, they're at a much higher risk.' There's a weighting of each of those risk factors. Each has a different impact," Dr. Faber said.

Besides showing the overall risk, the tool displays the top five factors driving the score in that patient, which will change over time if the person, for example, stops smoking, loses weight or controls their diabetes.

"This AI algorithm automatically adjusts all of that," Dr. Faber said. "For the primary care provider who wants to know, 'Why is my patient at high risk?' he or she can simply mouse over the icon and there's your top five risk factors for that patient. Six months later those top five might be different. There will be an automatic re-analysis of every patient's risk every month."
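Sanford has not published the details of its model, but the general shape of what Dr. Faber describes, weighted risk factors rolled into a tiered score with the top contributors surfaced, can be sketched in a few lines. The factor names, weights and thresholds below are invented for illustration only.

```python
# Hypothetical sketch of a weighted sleep-apnea risk score with ranked drivers.
# Factor names, weights and cut-offs are illustrative; Sanford's actual
# 67-indicator model is not described in the article.

RISK_WEIGHTS = {
    "bmi_over_30": 3.0,
    "age_over_50": 1.5,
    "male": 1.0,
    "hypertension": 2.0,
    "reported_snoring": 2.5,
    "daytime_sleepiness_score": 1.2,   # from the sleepiness questionnaire
}

def score_patient(factors):
    """Return (risk_tier, top_drivers) for a dict of factor -> value (0/1 or scaled)."""
    contributions = {name: RISK_WEIGHTS[name] * value
                     for name, value in factors.items() if name in RISK_WEIGHTS}
    total = sum(contributions.values())
    if total >= 7:
        tier = "high"
    elif total >= 3:
        tier = "medium"
    else:
        tier = "low"
    # The top contributing factors, analogous to the mouse-over display.
    top_drivers = sorted(contributions, key=contributions.get, reverse=True)[:5]
    return tier, top_drivers

tier, drivers = score_patient({"bmi_over_30": 1, "reported_snoring": 1,
                               "age_over_50": 1, "daytime_sleepiness_score": 0.8})
print(tier, drivers)
```

Re-running this scoring over the whole patient population each month would reproduce the "automatic re-analysis" described above; the real system would draw its inputs from the electronic medical record rather than a hand-built dictionary.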

Once a patient is identified as being at higher risk, the provider may either refer them to the sleep clinic for evaluation and a traditional sleep study or they may order a home sleep test conducted through their primary clinic. These tests monitor the patients sleep to count how many times they stop breathing during the night.

"It's not feasible for the number of patients we have in the Sanford Health population to administer a traditional sleep study on every patient," Weaver said. "So that provider, rather than sifting through hundreds of data points about one patient, let alone all the patients they have, can see who's at the highest risk to administer that sleep apnea test. It really narrows it down."

Patients with mild sleep apnea likely won't need an additional sleep study if the home sleep test identifies the problem and initial treatment is effective and well-tolerated by the patient, Dr. Faber said. That saves them time and money. It also prevents unnecessary delays in treatment along with the additional travel and associated expenses, he said.

The main treatment options were once weight loss and continuous positive airway pressure (CPAP) therapy. Now the options also include oral appliances that move the lower jaw forward, some surgeries and Inspire therapy that uses an implanted device for those who don't tolerate CPAP.

"All of this stuff is not to simply identify who's at risk but to find whatever the right treatment for their apnea is, which is going to vary from one person to another," Dr. Faber said.

"I have some amazing stories, tear-jerking stories actually, of the success that getting rid of moderate to severe sleep apnea can have on somebody's life who was previously unable to treat it because they couldn't tolerate having a mask on their face."

Weaver has validated the model and received Sanford Health stakeholder approval and now is working with the technology team to add the AI tool to the electronic medical record system. Weaver said he's unaware of anything else like it on the market.

"It's a big population health initiative that has the potential to help not only the health and welfare and quality of life of the people in the Sanford footprint," Dr. Faber said. "But it also helps to lower the cost of care because fewer people have uncontrolled diabetes, hypertension, heart attacks and strokes, all those things that cost the health system, the health plan and therefore individual patients more money."



Artificial Intelligence Is On The Side Of Apes? Tesla-Fame’s AI-Based ETF Sells Facebook, Walmart And Buys AMC – Markets Insider

Posted: at 7:52 am

The Qraft AI-Enhanced US Large Cap Momentum ETF (NYSE:AMOM), an exchange-traded fund driven by artificial intelligence, has sold a majority of its holdings in Facebook Inc. (NASDAQ:FB) and Walmart Inc. (NYSE:WMT), while loading up on shares in AMC Entertainment Inc. (NYSE:AMC).

What Happened: The ETF's latest portfolio after rebalancing in early July showed that the fund has also sold major chunks of its holdings in, or entirely divested from, home retailer Home Depot Inc. (NYSE:HD), software company Adobe Inc. (NASDAQ:ADBE) and chipmaker Texas Instruments Inc. (NASDAQ:TXN).

The fund has a history of accurately predicting the price movements of electric vehicle maker Tesla Inc.'s (NASDAQ:TSLA) shares.

The ETF now has online dating services provider Match Group Inc. (NASDAQ:MTCH), cybersecurity solutions company Fortinet Inc. (NASDAQ:FTNT) and auto parts retailer O'Reilly Automotive Inc. (NASDAQ:ORLY) as its three largest investments.

Match Group has a 3.65% weighting in the AMOM portfolio, followed by Fortinet and O'Reilly with a 3.5% weighting each.

The other two stocks that make up the top five holdings in AMOM include auto parts retailer AutoZone Inc. (NYSE:AZO) with a 3.1% weighting and enterprise technology company Zebra Technologies Corp. (NASDAQ:ZBRA) with 2.7%.

AMC Entertainment has been added to the portfolio this month with a 2.34% weighting. The movie theater chain's stock is up 2,078% year-to-date thanks to a short squeeze conducted by retail investors who refer to themselves as "apes."

Prior to the rebalancing, the ETF had Facebook, Walmart, Home Depot, Adobe and Texas Instruments as its five largest stock holdings.

See Also: Best Exchange Traded Funds

Why It Matters: AMOM, a product of South Korea-based fintech group Qraft, tracks 50 large-cap U.S. stocks and reweighs its holdings each month. The fund uses AI technology to automatically search for patterns that have the potential to produce excess returns and construct actively managed portfolios.
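Qraft does not disclose the model behind AMOM, so the sketch below is only a toy illustration of the mechanics of a monthly momentum reweighting. The lookback window, scoring rule and number of holdings are assumptions, not the fund's actual method.

```python
# Toy monthly momentum reweighting; NOT Qraft's actual AI model.
# prices: dict of ticker -> list of daily closing prices, most recent last.
def rebalance(prices, lookback=126, holdings=3):
    momentum = {t: p[-1] / p[-lookback] - 1.0          # trailing ~6-month return
                for t, p in prices.items() if len(p) >= lookback}
    top = sorted(momentum, key=momentum.get, reverse=True)[:holdings]
    total = sum(max(momentum[t], 0.0) for t in top) or 1.0
    # Weight the winners in proportion to their positive momentum.
    return {t: max(momentum[t], 0.0) / total for t in top}

# Example with hypothetical price histories:
# weights = rebalance({"MTCH": closes_mtch, "FTNT": closes_ftnt, "ORLY": closes_orly})
```

An AI-driven fund would replace the simple trailing-return score with learned pattern detection, but the monthly "score, rank, reweight" loop is the same basic shape.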

AMOM has delivered year-to-date returns of almost 15.1%, compared to its benchmark, the Invesco S&P 500 Momentum ETF (NYSE:SPMO), which has returned 14.4% so far this year.

The fund said last week that it has surpassed an important milestone of $50 million in assets under management (AUM), an increase of nearly 1,500% from its $4.22 million total in August last year.

Price Action: Match Group shares closed almost 2.8% higher in Friday's trading session at $162.63, while Fortinet shares closed 1.5% higher at $256.81.

O'Reilly Automotive shares closed 1.7% higher in Friday's trading session at $591.65.

Read Next: 5 ETFs To Watch In The Second Half Of 2021



IIT Madras develops AI-based algorithm to identify cancer-causing alterations – BSI bureau

Posted: at 7:52 am

The technique will tackle the complexity and size of DNA sequencing datasets and can greatly help in pinpointing key alterations in the genomes of cancer patients.

Indian Institute of Technology Madras researchers have developed an artificial intelligence-based mathematical model to identify cancer-causing alterations in cells. The algorithm uses a relatively unexplored technique of leveraging DNA composition to pinpoint genetic alterations responsible for cancer progression.

The research was led by Prof B Ravindran, Head, RBCDSAI, and Mindtree Faculty Fellow, IIT Madras, and Dr Karthik Raman, Faculty Member, Robert Bosch Centre for Data Science and AI (RBCDSAI), IIT Madras, and also the Coordinator, Centre for Integrative Biology and Systems Medicine (IBSE), IIT Madras. Shayantan Banerjee, a Master's student at IIT Madras, performed the experiments and analysed the data. The results have been recently published in the peer-reviewed international journal Cancers.

Explaining the rationale behind this study, Ravindran said, "One of the major challenges faced by cancer researchers involves the differentiation between the relatively small number of driver mutations that enable the cancer cells to grow and the large number of passenger mutations that do not have any effect on the progression of the disease."

The researchers hope that the driver mutations predicted through their mathematical model will ultimately help discover potentially novel drug targets and will advance the notion of prescribing the right drug to the right person at the right time.

Elaborating on the need for developing this technique, Dr Raman said, "In most of the previously published techniques, researchers typically analysed DNA sequences from large groups of cancer patients, comparing sequences from cancer as well as normal cells, and determined whether a particular mutation occurred more often in cancer cells than would be expected by chance. However, this frequentist approach often missed out on relatively rare driver mutations."

Dr Raman further said, "Detecting driver mutations, particularly rare ones, is an exceptionally difficult task, and the development of such methods can ultimately accelerate early diagnoses and the development of personalised therapies."

In this study, the researchers decided to look at this problem from a different perspective. The main goal was to discover patterns in the DNA sequences, made up of the four letters, or bases, A, T, G and C, surrounding a particular site of alteration.

The underlying hypothesis was that these patterns would be unique to individual types of mutations, drivers and passengers, and could therefore be modelled mathematically to distinguish between the two classes. Using sophisticated AI techniques, the researchers developed a novel prediction algorithm, NBDriver, and tested its performance on several open-source cancer mutation datasets.
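The published pipeline is not reproduced in this article, but the core idea, turning the sequence neighbourhood around a mutation into numeric features and training a classifier to separate drivers from passengers, can be sketched roughly as follows. The window size, dinucleotide features and choice of classifier are illustrative assumptions, not necessarily what NBDriver uses.

```python
# Generic sketch of classifying mutations from flanking sequence context.
# This is NOT the published NBDriver pipeline; features and classifier are
# illustrative assumptions.
from collections import Counter
from itertools import product
from sklearn.ensemble import RandomForestClassifier

KMERS = ["".join(p) for p in product("ATGC", repeat=2)]  # all 16 dinucleotides

def context_features(sequence, site, window=10):
    """Count dinucleotides in the bases flanking a mutation site."""
    flank = sequence[max(0, site - window):site] + sequence[site + 1:site + 1 + window]
    counts = Counter(flank[i:i + 2] for i in range(len(flank) - 1))
    return [counts.get(k, 0) for k in KMERS]

# With labelled data (1 = driver, 0 = passenger), a classifier can be trained:
# X = [context_features(seq, pos) for seq, pos in mutations]
# clf = RandomForestClassifier(n_estimators=200).fit(X, y)
# clf.predict([context_features(new_seq, new_pos)])
```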


Google CEO Sundar Pichai Cautions the Dangers of Open Web; AI, Quantum Computing to be Highlight for the Next Few Years – Tech Times

Posted: at 7:52 am

Google CEO Sundar Pichai warns that attacks on the open internet are persisting globally. The CEO also said that in the coming years, the power of quantum computing and artificial intelligence will sweep the world.

(Photo: Getty Images for Greentech Festival) Sundar Pichai speaks as part of SWITCH GREEN during day 1 of the Greentech Festival at Kraftwerk Mitte, aired on September 16, 2020 in Berlin, Germany.

In a recent interview with the BBC, the CEO said that many countries have been exploiting the internet that was meant to be free for everyone, with some of them limiting the dissemination of information, often out of public view.

Nowadays, the transition from physical activities to online ones is fast-moving, especially as the digital age of the internet continues to develop. Technological adoption has paved the way for people to access more of the web, but it can sometimes endanger users without their knowing.

Internet freedom might be at our fingertips, yet the responsibility that comes with using it is often displaced. Perhaps what Pichai wants is for us to be aware that no one is safe on the internet. Everyone is exposed to the risk of having their data stolen or of being subjected to a flood of misinformation on social media platforms.

Besides warning about the threat circling the web, Pichai also tackled issues like data privacy, taxing technology, and more.


Back in 2018, the Google chief said that artificial intelligence is more profound than fire and electricity. On Sunday, July 11, The Telegraph reported on the same topic, citing Pichai.

Pichai noted that AI and quantum computing are the two developments that will have a huge impact on everyone in the future.

Many machines are now capable of copying what humans can do. Surprisingly, some tools can even do tasks better than a normal person. This is where AI comes in: when activities are considered too complicated, the machine is assigned to do them.

While AI can do good for humans, it can also produce some bad impacts. In 2017, Elon Musk said that the "biggest risk" people face is artificial intelligence. Since artificial intelligence can yield fake outcomes, the South African-born tycoon said it would be better if governments regulated its usage.

As for quantum computing, some technologists, together with the Google CEO, support the idea that the technology will not work for everything. It may not be applicable to every problem, but advances in quantum computing could bring new solutions to the world.

In a report by The New York Times via Inc. Magazine, Google boss Sundar Pichai has been receiving a lot of criticism about his style of leading the company.

Additionally, many employees have complained about his slow decision-making, which results in delayed business action. Specifically, it could take a year for Pichai to assign a particular person to a vacant role at Google. The complainants also mentioned that the company did not acquire Shopify.



Die as a human or live forever as a cyborg: Will robots rule the world? – Sydney Morning Herald

Posted: at 7:52 am


In movies, they're the bad guys: killer cyborgs with bones of steel and lightning-fast reflexes, perhaps an Austrian accent too. But Peter Scott-Morgan has never been afraid of robots. As a scientist and roboticist by trade, he spent decades researching how artificial intelligence (AI) might transform our lives.

Then, in 2017, Dr Scott-Morgan was diagnosed with motor neuron disease, the same paralysing condition that killed Stephen Hawking. Months after puzzling over his wonky foot falling asleep, he was told he had two years to live.

He had other ideas. To survive, he would turn to the technology he had spent his career researching. He would become the cyborg. Scott-Morgan has now had two major surgeries to help keep himself alive with robotics: machine upgrades that breathe for him, help him speak, and hopefully will even see him stand again as the advancing paralysis traps him inside his body. He plans to merge his brain with AI eventually too, so he can speak with his thoughts rather than the flicker of his eyes. "And I'm OK with giving up some control to the AI to stay me," he says. "Though that might change what it means to be human ... There's a long tradition of scientists experimenting on themselves. But die as a human or live as a cyborg? To me, it's a no-brainer."

But what about the rest of us? Is humanity destined to merge with machine? We keep hearing that the robots are coming to take our jobs, but how likely are they to stage a coup? And why are Facebook and Elon Musk already building machines to read our thoughts?


A century ago, a Spanish scientist mapped the human brain and uncovered a hidden kingdom. As microscopes began to peer deeper into that mass of little grey cells, Santiago Cajal laid bare the wiring within, so dense he called it a jungle. It is from his detailed drawings that the world understood neurons for the first time and how they exchange information in a tangled network, giving rise to the senses, the emotions and possibly even consciousness itself.

Decades later, a philosopher and a young, homeless mathematician wondered if that network could be broken down into the most fundamental binary of logic: true or false. Neurons could, after all, be considered on or off, firing a signal or not. This theory, by Warren McCulloch and Walter Pitts at the University of Chicago, proved to be an incomplete model for the human brain, too simple to capture all the strange magic really going on inside. But it did give rise to the binary code of computers those ones and zeroes now form infinite variations of on or off to tell machines what to do. Scientists have been trying to bring computers closer to human brains, at least in function, ever since.

Because machines interpret the world through this binary code, and algorithms (rules made from that code), they are good at a lot of specific things we find difficult, such as solving complex equations fast (and playing chess better than a grandmaster). Yet they often struggle with the mundane things we, with our more complex, adaptable thinking centres, find easy: recognising facial expressions, making small talk and, most of all, improvising.

To overcome this, machine learning models seek to train computers to categorise and then react to things themselves rather than waiting on human programming. Over the past decade, one such model known as deep learning has charged beyond the rest, fuelling an AI boom. It's why your iPhone can recognise your face and Alexa understands you when you ask her to switch on the lights. And deep learning did it by going back to Cajal's neural jungle. The learning is said to be deep because a machine is trained to classify patterns by filtering incoming information through layers of interconnected neuron-like nodes.
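For readers who want to see what "layers of interconnected neuron-like nodes" looks like in practice, here is a minimal sketch in Keras. The dataset (handwritten digits) and layer sizes are arbitrary choices for illustration, not anything referenced in the article.

```python
# A minimal "layers of neuron-like nodes" classifier, for illustration only.
import tensorflow as tf

# 60,000 labelled images of handwritten digits, flattened to 784 pixel values.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),      # a hidden layer of "nodes"
    tf.keras.layers.Dense(10, activation="softmax"),     # one output per digit class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128)    # "training" = adjusting the weights
```

Each layer filters the incoming pixels a little further; training nudges the connection weights until the final layer's guesses line up with the labels.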

"I'm sorry, Dave, I'm afraid I can't do that." In the 1968 sci-fi classic 2001: A Space Odyssey, a computer called HAL (Heuristically programmed ALgorithmic) takes over a spaceship. Credit: Fair Use

While these artificial networks take a staggering amount of data to train compared to a human brain, experts such as Scott-Morgan hope they will only get better and more efficient as computing power increases (it is roughly doubling every two years). Already, AI can translate speech, trade stock, and perform surgery (under supervision). Since his own surgical journey was documented in the British documentary Peter: The Human Cyborg, Scott-Morgan has been upgrading to a very Hollywood cyborg-like interface that uses AI to track the movement of his eyes across its screen with tiny cameras and then offers up phrases for his robot voice to say: predictive text based on the letters he has spelt out so far.

As UNSW professor of AI Toby Walsh points out, machines are not limited by biological processing speeds the way humans and animals are. But others suspect that the capability of even this kind of AI is about to hit a wall. At the University of Sheffield, computer scientist James Marshall says deep learning networks are still based on a cartoon of how the [human] brain works. They are not really making decisions, because they do not understand for themselves what matters and what doesn't. That means they're fragile. To tell a picture of a cat from a dog, for example, an AI needs to sift through a huge trove of images. While it might pick up tiny changes that would escape the notice of a human, such as a few pixels out of place, these tiny changes usually don't matter a lot because we understand the main features that set a cat apart from a dog. "But suddenly you change some pixels and the AI thinks it's a dog," Marshall says. "Or if it sees a drawing of a cat or a cat in real life [in 3D] it might have to start from scratch again."

The tendency of AI, however powerful, to break in unexpected ways is part of the reason those driverless cars we keep being promised are yet to arrive. Machines can be fooled even into seeing things that aren't really there: driverless cars tricked into accelerating past stop signs when the addition of a few stickers on the sign makes them instead perceive increased speed limits; or facial recognition programs duped into skipping past suspects wearing wigs and glasses.
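The "change some pixels" failure Marshall describes is what researchers call an adversarial example. A minimal sketch of the standard fast-gradient-sign trick, assuming a trained Keras image classifier with inputs scaled between 0 and 1, looks like this; it is a generic illustration, not the specific attacks mentioned above.

```python
# Sketch of an adversarial perturbation (fast gradient sign method).
# Assumes `model` is a trained tf.keras classifier with softmax outputs and
# `image`/`label` are a single preprocessed example with pixels in [0, 1].
import tensorflow as tf

def adversarial_example(model, image, label, epsilon=0.01):
    image = tf.cast(tf.convert_to_tensor(image)[None, ...], tf.float32)  # add batch dim
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            tf.convert_to_tensor([label]), model(image))
    grad = tape.gradient(loss, image)
    # Nudge every pixel slightly in the direction that increases the loss,
    # which is often enough to flip the predicted class.
    return tf.clip_by_value(image + epsilon * tf.sign(grad), 0.0, 1.0)[0]
```

The perturbation is tiny by human standards, which is exactly why the resulting misclassifications feel so alien.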

Any AI network is vulnerable to this kind of manipulation, and if hackers know its weak points they can do more than break it, they can hijack it to perform a new task entirely. Of course, AI can be trained to identify and resist this kind of sabotage too but, at some point, it will encounter a problem it hasn't prepared for.

Perhaps a little paradoxically, some experts say that a way to give deep learning more common sense is to fuse it with the old, more rigid form of AI that came before it, where machines used hard-coded rules to understand how the world worked. Others say deep learning needs to become more flexible yet, writing its own algorithms and programs to perform new functions as it needs to, even testing its actions in the real world through robotics (or at least very good simulators) to help it understand causality. Amazon's new line of Alexa assistants look through a camera to better understand the world (and their owners).

"But I don't think [deep learning] will ever work for driverless cars," Marshall says. "When you have to build a more and more complicated machine for a fairly simple task, maybe the machine is built wrong."

Arnold Schwarzenegger (and his iconic Austrian accent) starred as a killer cyborg in The Terminator franchise.

Marshall is flying a drone around his lab. It's not bumping into walls, the way drones normally do when trying to distinguish one beige slab of office wallpaper from another. This drone has a tiny chip in its brain holding an algorithm borrowed from a honeybee. It tells it how to navigate the world as the insect does.

At Marshall's lab in Sheffield, now a company offshoot of his university called Opteran, the team is trying something new: modelling machine thinking on animals. Marshall calls it natural intelligence, not artificial intelligence. Autonomy, the kind driverless cars and robot vacuums need to navigate their surrounds, is a solved problem, he says. It happens all the time in the natural world. "We require very little brain power ourselves to drive, most of the time we're on autopilot."

Bees have a less formidable number of neurons than humans, about a million next to tens of billions, and yet they can still perform impressive behaviours: navigating, communicating and problem-solving. Marshall has been mapping their brains, training them to perform tasks such as flying through tunnels and then measuring their neural activity; making silicon models of different regions of their brain according to their function and then converting that into algorithms his machines can follow.

"It's like a jigsaw puzzle," Marshall says. "We haven't mapped it all yet, even those million neurons still interact in really complex ways."

So far, he has converted into code how bees sense the world and navigate it, and is busy finalising algorithms from the decision-making centre of their brains. Unlike Cajal, he's not looking to record all the exquisite detail that keeps the brain alive. "We just need how it does the function we want. We don't just reproduce the neurons, we reproduce the computation."
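Opteran's algorithms are proprietary, but one well-documented strategy from bee vision, steering so that the apparent motion (optic flow) seen on the left and right stays balanced, gives a flavour of reproducing a function rather than the neurons behind it. The toy sketch below, with made-up flow values, illustrates that textbook idea only; it is not Opteran's code.

```python
# Toy illustration of optic-flow balancing, a navigation strategy bees are
# known to use to stay centred in a corridor. Not Opteran's algorithm.
def steering_command(left_flow, right_flow, gain=0.5):
    """Positive output = turn right, negative = turn left."""
    # More apparent motion on the left means the left wall is closer: turn right.
    return gain * (left_flow - right_flow)

# e.g. flow magnitudes estimated from the left and right halves of a camera image
print(steering_command(left_flow=3.2, right_flow=1.1))   # steer away from the left wall
```

The point of the exercise is that the behaviour, not the biology, is what gets reproduced: a few lines of arithmetic stand in for the circuit that computes it in the insect.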

When he first put his bee navigation algorithm in the drone, he was stunned at how much it improved, changing course as people moved around it, as walls came closer. "That's when we saw it could work," he says. "But because everyone is focused on deep learning, we decided to make our own company to scale it up."

Marshall is also mapping the brains of ants to improve ground-based robots, imagining a world in which autonomous devices are as common as computers, cleaning and improving the world around us. And as machines get smaller, smaller even than the head of a pin or the width of a human hair, scientists hope they may help fight disease in the body too, cleaning blood or killing cancer and infection. Perhaps one day these nanobots could even repair the nerves fraying apart in people with motor neuron disease such as Scott-Morgan, or keep humans alive longer.

Marshall hopes to eventually look into the brains of larger animals too, including primates. There scientists might find more complex functions again, beyond just autonomy, and into advanced problem-solving, even moral reasoning. Still, just as Marshall is sure his robot bee is not a real bee, he doubts we'd be able to reproduce an entire human brain in silico and fire it up to see if some kind of consciousness springs to life. "A lot of this research comes out of that very question: could we just replicate the brain somehow, suppose we had a 3D printer," Marshall says. "But the brain isn't just its neurons, it's how it all interacts. And we still don't understand it yet."

In his latest book Livewired, US neuroscientist David Eagleman describes in new detail the plasticity of the human brain, where neurons fight for territory like drug cartels. There may even be a kind of evolution, a survival of the fittest being waged within our minds day to day, as new neural connections are forged. Quantum scientists, meanwhile, wonder if reactions are happening inside the brain, at its smallest scale, which we cannot even measure. How then could we ever hope to replicate it accurately? Or upload someone's consciousness to a machine (another popular sci-fi plot)?

Will Smith battles another pesky AI that thinks it knows best (and a few thousand robots) in the 2004 film I, Robot.

Of all the renderings of AI in science fiction, few occupy the minds of real-world researchers like the singularity: a hypothetical (and some say inevitable) tipping point where machine intelligence growth becomes exponential, out of control. In the 1960s, British mathematician I.J. Good spoke of an intelligence explosion, and everyone from Stephen Hawking to Elon Musk has since weighed in.

The theory is that as soon as we have a system as smart as a human, and we allow it to design a system superior to itself, we'll kick off a domino effect of ever-increasing intelligence that could shift the balance of power on Earth overnight. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded, Hawking told the BBC in 2014.

And, if AI were ever smart enough to be put in charge and make decisions for us, as is imagined in films such as I, Robot and The Matrix, what if their radical take on efficiency involves enslaving or powering down humans (i.e. mass murder)? Remember the glowing red eye of HAL the AI in 2001: A Space Odyssey, who decided the best thing to do, when faced with a crisis far out in space, was to stage a mutiny against his human crew? Musk himself says that, for a powerful AI, wiping out the human race wouldn't be personal if we stood in their way, it would be a matter of course, like squishing an ant hill to build a road.

When we refer to intelligence in machines, we usually mean we've taught a computer to do something that in humans requires intelligence, Walsh says. As of 2021, those smarts are still very narrow: beating a human in a game of chess, for example. AI enthusiasts point to machines helping write music or mimicking the styles of great painters as signs of burgeoning creativity, but such demonstrations still rely on considerable human input, and results are often random or spectacularly bad. The limits of deep learning again mean true spontaneity, originality, is lacking. At IBM, Arvind Krishna imagines you could train an AI on images of what is and isn't beautiful, good art and bad art, for example, but that would still be training the AI on the creator's own tastes, not moulding a new artist for the world. Mostly, experts see machines becoming another tool to deepen human creativity and decision-making, revealing patterns and combinations that might have otherwise been missed.


Still, Walsh says there's no scientific or technical reason why the gap between human and machine intelligence couldn't close one day. "Every time we thought that we were special, that the sun went around the Earth, that we were different than the apes, we were wrong," he says. "So to think there's anything special about our intelligence, anything that we could not create and probably surpass in silicon, I think would be terribly conceited of the human race."

Indeed, machines have a lot of apparent advantages over us mere flesh bags, as Hawking alluded to. They're faster thinkers, with bigger, potentially infinite memories; they can network and interface in a way that would be called telepathy if a human could do it. And they're not limited by their physical bodies.

In Scott-Morgan's case, transforming into a cyborg has already come with unexpected benefits. He can no longer speak on his own ("I'm answering these questions long after my body has stopped working sufficiently well to keep me alive," he writes instead) but through his new robot voice, he can communicate in any language. In May, his digital avatar even broke into song during a live interview with broadcaster Stephen Fry. His wheelchair, meanwhile, will soon allow him to stand, so he will tower over his fellow mortals and hopefully, with the aid of an inbuilt AI, it will drive itself wherever Scott-Morgan wishes to go. ("I envision being able to speed through an obstacle course or safely make my way through a showroom of porcelain vases.")

The hair of his avatar is never out of place and "my powers will double every two years. I'll be a thousand times more powerful by the time I'm 80." He's working on programming in a maniacal laugh for his avatar, too.

Of course, because these AI networks are being built by humans, they may inherit the worst of us along with the best. We've seen this already on platforms such as Facebook and YouTube, where AI used to curate user content has been shown to veer sharply into extremism and misinformation. Or police surveillance networks learning their human developers' cultural prejudices. And, because AIs operate using complex mathematics, they are often themselves a black box, hard to scrutinise. Experts, including the late Hawking, have stressed that regulation and ethical frameworks must catch up fast to the technology, so we can maximise its social good, not just profit margins.

But what we may learn, too, is that there's a ceiling to how intelligent something can be. "The universe is full of fundamental limits," Walsh says. "It might not be [as simple as] we wake up one day and the computers can program themselves. I suspect that we will get smarter computers, but it will be the old-fashioned way, through our sweat, ingenuity and perseverance."

While Marshall doubts we'll ever create a machine that is itself conscious (along the lines of, say, the eloquently self-aware cyborgs in Blade Runner), he is wary of the new push for robots or algorithms that can evolve independently, designed to breed the way computer viruses spread now and so rewrite and advance their own programming. "I don't think that's the path," Marshall says. "I think we need to always know what it does, and, if it can evolve on its own, well, life finds a way ..."

How can you tell? Cyborgs called replicants are much like humans in the 1982 sci-fi film Blade Runner. Credit: Fair Use

Rather than turning to one all-knowing AI to run the show, many experts think it more likely we will draw on the power of machines to improve our own thinking. If we had a better way to connect with computers, closer than our screens, futurists wonder if we could surf the internet with our minds, back up our memories to the cloud, even download ready-formed skills such as a second language, or another sense entirely, like echolocation or infrared vision.

In 2020, Elon Musk was ruling out none of this when he introduced the world to a pig called Gertrude and the coin-sized computer chip in her brain he hoped would allow people to plug in directly to machines one day. "It's kind of like a Fitbit in your skull with tiny wires," Musk said, conceding this is sounding increasingly like a Black Mirror episode. In 2021, a monkey with the same chip, made by Musk's company Neuralink, was shown playing a game of ping-pong using only his mind to control a joystick.

Labs, including military labs, around the world have been developing neural implants for more than a decade, mostly to help people with paralysis operate robotic limbs and those with epilepsy head off seizures. In 2016, an implant connected to a robotic arm even gave back the sensation of touch as well as movement to a man paralysed from the neck down; he used it to fist-bump president Barack Obama.

But this is still new technology, so far involving about 100 electrodes inserted into the brain that read its neural signals and send them wirelessly back to a machine. Neuralink's prototype has more than 1,000 electrodes, each smaller than a human hair, and grand claims of fast insertion into the skull using robotic surgery (and no need for even a general anaesthetic).

Plunging anything into the brain is risky and can cause damage. But in 2016 two neurologists at the University of Melbourne, Tom Oxley and Nicholas Opie, developed a clever technique to insert an implant without the need for open surgery, using, Oxley says, the veins and blood vessels as the natural highway into the brain. They've just received $52 million in funding from Silicon Valley to run more clinical trials of their own chip, called the Stentrode, in the US. It's about the size of a paperclip and, in Melbourne, it's helped patients with motor neuron disease text, email and bank online by thought alone.


Neuralink's end goal is to develop a non-invasive headset instead of a chip, but for now such external devices pick up a much weaker signal from the brain. Facebook, meanwhile, is looking at wearable wrist devices that would read your mind, literally, where nerves carry messages down to your hands, eventually allowing users to do away with the traditional mouse and keyboard and type at a speed of 100 words per minute just by thinking. Like Neuralink, helping patients with paralysis is their first goal, but they also plan to scale up to everyday users. Already, researchers funded by Facebook have managed to translate brain waves into speech with an accuracy rate of between 61 and 76 per cent (that beats Google Translate in some cases), using existing electrodes implanted in the brains of patients with epilepsy.

Some of this work being done by Facebook and Musk is right out on the edge for enhancement, says the chief executive of Bionics Queensland, Robyn Stokes, but it will likely benefit health applications along the way. Just as brain chips could become digital assistants of the mind, she imagines they could also help manage mental health conditions such as serious depression. "Those sorts of brain computer interfaces are really advancing quickly," she says, pointing to the Stentrode. She expects an implant that can perform many functions inside the body, beyond reading brainwaves, will soon follow.

Even then, there are still concerns. While the brain's now-famed plasticity could help it rewire around implants, for example, some experts warn it could also mean it quickly forgets how to perform important functions if they are taken over by machines. What then if something fails?

Peter Scott-Morgan tries out AI technology that tracks his eye movements to spell out his speech. Credit: Cardiff Productions

Still, enthusiasts, or transhumanists, imagine the next stage of human evolution will inevitably be technological: future generations can expect reinforced bones and improved brain power thanks to cybernetic upgrades. In British drama Years and Years, a new parental nightmare plays out as a daughter announces she wants to upload her mind and live as a machine. ("I don't want to be flesh. I want to escape this thing and become digital.")

In his first book on robotics in 1984, long before his disease had emerged, Scott-Morgan himself considered how AI might unlock human potential, and vice versa. "AI on its own is like a brilliant jazz pianist, but without anyone to jam with," he says now. "It's nowhere near its full potential." The duet of human and AI, meanwhile, would seem "close to magic ... a mutually dependent partnership, not a rivalry". And, to his mind, it could well be the only route that doesn't lead to a dead end. "I anticipate that otherwise there'll be a crippling backlash against what's typically perceived as the uncontrolled rise of raw AI."

Scott-Morgan plans for his eye-controlled communication interface to rely more and more on its underlying AI to generate his speech. That means sometimes what comes out will not be what biological Peter was planning to say. "And I'm very comfortable with that. I keep reassuring [everyone] I have absolutely no qualms about technology potentially making me appear cleverer, or funnier, or simply less forgetful, than I was before."

Others imagine a greater fusion of robotics, especially nanotech, with animals too. Already parts of nature are being re-engineered as technology in the lab: from viruses repurposed as vaccines and computer chips that mimic the function of human organs, to a robot-fish hybrid sent down as a deep-sea probe to collect data beneath the waves. Both the US and Russian armies have kitted out trained dolphins as underwater spies over the years, so perhaps it's no surprise military researchers have been looking at going further, even putting mind-controlling brain chips into sharks next. And, if bees die out, some experts say cyborg insects may be needed to pollinate plants in their place. All this again raises the strange question of when something is alive, or conscious, and whether we are building better robots or creating new life entirely.

The Terminator robots have no plans to co-exist with humans. They want the whole planet. Credit: Fair Use

Even if we don't get shark cyborgs, low-cost lethal machines are already changing the face of warfare. Imagine fighter drones talking to one another to find bombing targets, instead of a human pilot back at a base. Or swarms of explosive drones slamming themselves into people and buildings.

These are not visions of the future but news stories from 2020. According to a recent UN report, Turkish drones, packing explosives and facial recognition cameras, were sent out by Libya's army in 2020 to eliminate rebels via swarm attack in Tripoli, without requiring a remote connection between drone and base. They were, effectively, hunting their own targets. And the tech on board was not much more impressive than what you'd find on a smartphone. Meanwhile, the Poseidon is a new class of robotic underwater vehicle that Russia is said to have already made, which can travel undetected and launch cobalt bombs to irradiate entire coastal cities, all unmanned.


Machines that decide to kill like this, based on their sensors and a pre-programmed target profile, are making humanitarian groups increasingly nervous. The International Committee of the Red Cross wants the world's governments to ban fully autonomous weapons outright. ICRC president Peter Maurer says they will make it difficult for countries to comply with international law, in effect substituting human decisions about life and death with sensor, software and machine processes.

Walsh agrees autonomous killer robots raise a host of ethical, legal and technical problems. If things go wrong or they break international law, who is held accountable? Should it be the programmer, the commander or the robot on trial for war crimes? "They're not sentient, they're not conscious, they can't have empathy, they can't be punished," Walsh says. "And that takes us to a very, very dark place. It would be terribly destabilising and would change the speed and scale of war."

Of course, he adds, autonomous systems built for defence, such as the robots used to clear landmines, show that AI can reduce casualties in war too. And computers will continue to come online that can process battlefield data and make recommendations faster than humans ever could. But [we need] human oversight, human judgment, which is still significantly better than machines, at least today, Walsh says.


He thinks we should ban lethal autonomous weapons as we have chemical and biological weapons (as well as blinding lasers and cluster munitions), with enforcement powers for the UN to check no rogue state is stepping out of line.

The problem is that such bans rarely happen before things get ugly. For chemical weapons, it took the horrors of the First World War.

"I'm fearful that we won't have the initiative to do the same here until we've seen such weapons being used," Walsh says. "A swarm of robot drones, hunting down humans and killing them mercilessly. It will look like a Hollywood movie."


AI legislation must address bias in algorithmic decision-making systems – VentureBeat

Posted: at 7:52 am


In early June, border officials quietly deployed the mobile app CBP One at the U.S.-Mexico border to streamline the processing of asylum seekers. While the app will reduce manual data entry and speed up the process, it also relies on controversial facial recognition technologies and stores sensitive information on asylum seekers prior to their entry to the U.S. The issue here is not the use of artificial intelligence per se, but what it means in relation to the Biden administration's pre-election promise of civil rights in technology, including AI bias and data privacy.

When the Democrats took control of both House and Senate in January, onlookers were optimistic that there was an appetite for a federal privacy bill and legislation to stem bias in algorithmic decision-making systems. This is long overdue, said Ben Winters, Equal Justice Works Fellow of the Electronic Privacy Information Center (EPIC), who works on matters related to AI and the criminal justice system. The current state of AI legislation in the U.S. is disappointing, [with] a majority of AI-related legislation focused almost solely on investment, research, and maintaining competitiveness with other countries, primarily China, Winters said.

But there is some promising legislation waiting in the wings. The Algorithmic Justice and Online Platform Transparency bill, introduced by Sen. Edward Markey and Rep. Doris Matsui in May, clamps down on harmful algorithms, encourages transparency of websites' content amplification and moderation practices, and proposes a cross-government investigation into discriminatory algorithmic processes throughout the economy.

Local bans on facial recognition are also picking up steam across the U.S. So far this year, bills or resolutions related to AI have been introduced in at least 16 states. They include California and Washington (accountability from automated decision-making apps); Massachusetts (data privacy and transparency in AI use in government); Missouri and Nevada (technology task force); and New Jersey (prohibiting certain discrimination by automated decision-making tech). Most of these bills are still pending, though some have already failed, such as Maryland's Algorithmic Decision Systems: Procurement and Discriminatory Acts.

The Wyden Bill from 2019 and more recent proposals, such as the one from Markey and Matsui, provide much-needed direction, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. Companies are looking to the federal government for guidance and standards-setting, Lin said. Likewise, AI laws can protect technology developers in the new and tricky cases of liability that will inevitably arise.

Transparency is still a huge challenge in AI, Lin added: "They're black boxes that seem to work OK even if we don't know how, but when they fail, they can fail spectacularly, and real human lives could be at stake."

Though the Wyden Bill is a good starting point to give the Federal Trade Commission broader authority, requiring impact assessments that include considerations about data sources, bias, fairness, privacy, and more, it would help to expand compliance standards and policies, said Winters. The main benefit to [industry] would be some clarity about what their obligations are and what resources they need to devote to complying with appropriate regulations, he said. But there are drawbacks too, especially for companies that rely on fundamentally flawed or discriminatory data, as it would be hard to accurately comply without endangering their business or inviting regulatory intervention, Winters added.

Another drawback, Lin said, is that even if established players support a law to prevent AI bias, it isn't clear what bias looks like in terms of machine learning. "It's not just about treating people differently because of their race, gender, age, or whatever, even if these are legally protected categories," Lin said. "Imagine if I were casting for a movie about Martin Luther King, Jr. I would reject every actor who is a teenage Asian girl, even though I'm rejecting them precisely because of age, ethnicity, and gender. Algorithms, however, don't understand context."

The EU's General Data Protection Regulation (GDPR) is a good example to emulate, even though it's aimed not at AI specifically, but at underlying data practices. "GDPR was fiercely resisted at first but it's now generally regarded as a very beneficial regulation for individual, business, and societal interests," Lin said. "There is also the coercive effect of other countries signing an international law, making a country think twice or three times before it acts against the treaty and elicits international condemnation. Even if the US is too laissez-faire in its general approach to embrace guidelines [like the EU's], they still will want to consider regulations in other major markets."


The Pentagon Scrubs a Cloud Deal and Looks to Add More AI – WIRED

Posted: at 7:52 am

Late in 2019, the Pentagon chose Microsoft for a $10 billion contract called JEDI that aimed to use the cloud to modernize US military computing infrastructure. Tuesday, the agency ripped up that deal. The Pentagon said it will start over with a new contract that will seek technology from both Amazon and Microsoft, and that offers better support to data-intensive projects, such as enhancing military decisionmaking with artificial intelligence.

The new contract will be called the Joint Warfighter Cloud Capability. It attempts to dodge a legal and political mess that had formed around JEDI. Microsoft competitors Amazon and Oracle both claimed in lawsuits that the award process had been skewed. In April, the Court of Federal Claims declined to dismiss Amazon's suit alleging that bias against the company from President Trump and other officials had nudged the Pentagon to favor Microsoft, creating the potential for years of litigation.

The Pentagon announcement posted Tuesday didn't mention JEDI's legal troubles but said the US military's technical needs had evolved since it first asked for bids on the original contract in 2018. JEDI included support for AI projects, but the Pentagon's acting chief information officer, John Sherman, said in a statement that the department's need for algorithm-heavy infrastructure had grown still further.

Our landscape has advanced, and a new way ahead is warranted to achieve dominance in both traditional and nontraditional war-fighting domains, Sherman said. He cited two recent AI-centric programs, suggesting that they would receive better support from the new contract and its two vendors.

One is called Joint All Domain Command and Control, which aims to link together data feeds from military systems across land, sea, air, and space so that algorithms can help commanders identify targets and choose among possible responses. In an Air Force exercise linked to the program last year, an airman used a VR headset and software from defense startup Anduril to order real air defenses to shoot down a mock cruise missile over White Sands Missile Range in New Mexico.

Sherman also suggested that JWCC would help a project announced last month to accelerate AI adoption across the Pentagon, including by creating special teams of data and AI experts for each of the agency's 11 top military commands.

The Pentagon's claim that it will better support advanced technology like AI projects shows President Biden's Pentagon continuing an emphasis on the military potential of artificial intelligence that began during the Obama administration and continued under President Trump. Successive secretaries of defense have said tapping that potential will require better connections with tech industry firms, including cloud providers and startups. However, some AI experts fear more military AI could have unethical or deadly consequences, and some tech workers, including at Google, have protested Pentagon deals.

Andrew Hunter, director of the Defense-Industrial Initiatives Group at the Center for Strategic and International Studies, says the Pentagon appears to have decided that because of its legal tangles, a reboot was the most efficient way to get the cloud computing resources the department has needed for some time.

Computing-dependent projects like the one seeking to link various military services and hardware are central to the Pentagon's strategy to face up to China. "The potential of cloud computing is to be able to apply sophisticated analytical techniques such as AI on your data so you can act with greater knowledge than adversaries," Sherman says.

JEDI was not the Pentagon's only cloud computing contract, but the speed with which its successor can get up and running could still have a significant effect on the Pentagon's cloud and AI dreams. Had all gone to plan, the initial two-year phase of JEDI was to have been completed in April. Hunter expects the department to try to finalize the contract quickly, but also to take care to avoid a repeat of the controversy around JEDI.


Learn about Artificial Intelligence (AI) | Code.org

Posted: June 28, 2021 at 10:37 pm

AI and Machine Learning impact our entire world, changing how we live and how we work. That's why it's critical for all of us to understand this increasingly important technology, including not just how it's designed and applied, but also its societal and ethical implications.

Join us to explore AI in a new video series, train AI for Oceans in 25+ languages, discuss ethics, and more!

Learn about how AI works and why it matters with this series of short videos. Featuring Microsoft CEO Satya Nadella and a diverse cast of experts.

Students reflect on the ethical implications of AI, then work together to create an AI Code of Ethics resource for AI creators and legislators everywhere.

We thank Microsoft for supporting our vision and mission to ensure every child has the opportunity to learn computer science and the skills to succeed in the 21st century.

With an introduction by Microsoft CEO Satya Nadella, this series of short videos will introduce you to how artificial intelligence works and why it matters. Learn about neural networks, or how AI learns, and delve into issues like algorithmic bias and the ethics of AI decision-making.

Go deeper with some of our favorite AI experts! This panel discussion touches on important issues like algorithmic bias and the future of work. Pair it with our AI & Ethics lesson plan for a great introduction to the ethics of artificial intelligence!

Resources to inspire students to think deeply about the role computer science can play in creating a more equitable and sustainable world.

This global AI for Good challenge introduces students to Microsoft's AI for Good initiatives, empowering them to solve a problem in the world with the power of AI.

Levels 2-4 use a pretrained model provided by the TensorFlow MobileNet project. A MobileNet model is a convolutional neural network that has been trained on ImageNet, a dataset of over 14 million images hand-annotated with words such as "balloon" or "strawberry". In order to customize this model with the labeled training data the student generates in this activity, we use a technique called Transfer Learning. Each image in the training dataset is fed to MobileNet, as pixels, to obtain a list of annotations that are most likely to apply to it. Then, for a new image, we feed it to MobileNet and compare its resulting list of annotations to those from the training dataset. We classify the new image with the same label (such as "fish" or "not fish") as the images from the training set with the most similar results.
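
To make that approach concrete, here is a minimal Python sketch of the same nearest-neighbour transfer-learning idea, using a pretrained Keras MobileNet. It is illustrative only, not Code.org's actual implementation: the 224x224 image arrays, the cosine-similarity comparison, and the k=5 majority vote are assumptions standing in for the activity's internals.

```python
import numpy as np
import tensorflow as tf

# Pretrained MobileNet returns a 1,000-way ImageNet probability vector per image,
# standing in for the "list of annotations" described above.
mobilenet = tf.keras.applications.MobileNetV2(weights="imagenet")

def annotate(images):
    """images: array of shape (n, 224, 224, 3), RGB values in [0, 255]."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(images.astype("float32"))
    return mobilenet.predict(x, verbose=0)

def classify(new_image, train_images, train_labels, k=5):
    """Label a new image by the training images whose annotations are most similar."""
    train_vecs = annotate(train_images)
    new_vec = annotate(new_image[np.newaxis, ...])[0]
    # Cosine similarity between the new image's annotations and each training image's.
    sims = train_vecs @ new_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(new_vec) + 1e-9)
    votes = [train_labels[i] for i in np.argsort(sims)[-k:]]
    return max(set(votes), key=votes.count)   # majority vote: e.g. "fish" or "not fish"
```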

Levels 6-8 use a Support-Vector Machine (SVM). We look at each component of the fish (such as eyes, mouth, body) and assemble all of the metadata for the components (such as number of teeth, body shape) into a vector of numbers for each fish. We use these vectors to train the SVM. Based on the training data, the SVM separates the "space" of all possible fish into two parts, which correspond to the classes we are trying to learn (such as "blue" or "not blue").
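
As a rough illustration of the SVM approach, assuming hypothetical component features (tooth count, body roundness, fin count) and toy data rather than the activity's real metadata, a scikit-learn version might look like this:

```python
from sklearn.svm import SVC

def featurize(fish):
    # Assemble component metadata into a numeric vector; these particular
    # fields are invented for illustration.
    return [fish["num_teeth"], fish["body_roundness"], fish["fin_count"]]

# Labelled examples of the kind a student generates by sorting fish.
train_fish = [
    {"num_teeth": 10, "body_roundness": 0.8, "fin_count": 3, "label": "blue"},
    {"num_teeth": 42, "body_roundness": 0.2, "fin_count": 5, "label": "not blue"},
    {"num_teeth": 12, "body_roundness": 0.7, "fin_count": 2, "label": "blue"},
    {"num_teeth": 35, "body_roundness": 0.3, "fin_count": 6, "label": "not blue"},
]

X = [featurize(f) for f in train_fish]
y = [f["label"] for f in train_fish]

svm = SVC(kernel="linear")   # learns a boundary that splits the "space" of fish in two
svm.fit(X, y)

new_fish = {"num_teeth": 14, "body_roundness": 0.75, "fin_count": 3}
print(svm.predict([featurize(new_fish)]))   # -> ['blue'] on this toy data
```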


View post:

Learn about Artificial Intelligence (AI) | Code.org

Posted in Ai | Comments Off on Learn about Artificial Intelligence (AI) | Code.org

How AI Is Taking Over Our Gadgets – The Wall Street Journal

Posted: at 10:37 pm

If you think of AI as something futuristic and abstract, start thinking different.

We're now witnessing a turning point for artificial intelligence, as more of it comes down from the clouds and into our smartphones and automobiles. While it's fair to say that AI that lives on the edge, where you and I are, is still far less powerful than its datacenter-based counterpart, it's potentially far more meaningful to our everyday lives.

One key example: This fall, Apple's Siri assistant will start processing voice on iPhones. Right now, even your request to set a timer is sent as an audio recording to the cloud, where it is processed, triggering a response that's sent back to the phone. By processing voice on the phone, says Apple, Siri will respond more quickly. This will only work on the iPhone XS and newer models, which have a compatible built-for-AI processor Apple calls a neural engine. People might also feel more secure knowing that their voice recordings aren't being sent to unseen computers in faraway places.

Google actually led the way with on-phone processing: In 2019, it introduced a Pixel phone that could transcribe speech to text and perform other tasks without any connection to the cloud. One reason Google decided to build its own phones was that the company saw potential in creating custom hardware tailor-made to run AI, says Brian Rakowski, product manager of the Pixel group at Google.

These so-called edge devices can be pretty much anything with a microchip and some memory, but they tend to be the newest and most sophisticated of smartphones, automobiles, drones, home appliances, and industrial sensors and actuators. Edge AI has the potential to deliver on some of the long-delayed promises of AI, like more responsive smart assistants, better automotive safety systems, new kinds of robots, even autonomous military machines.

More:

How AI Is Taking Over Our Gadgets - The Wall Street Journal

Posted in Ai | Comments Off on How AI Is Taking Over Our Gadgets – The Wall Street Journal

We Should Test AI the Way the FDA Tests Medicines – Harvard Business Review

Posted: at 10:37 pm

Predictive algorithms risk creating self-fulfilling prophecies, reinforcing preexisting biases. This is largely because they do not distinguish between causation and correlation. To prevent this, we should submit new algorithms to randomized controlled trials, similar to those the FDA supervises when approving new drugs. This would enable us to infer whether an AI is making predictions on the basis of causation.

We would never allow a drug to be sold in the market without having gone through rigorous testing, not even in the context of a health crisis like the coronavirus pandemic. Then why do we allow algorithms that can be just as damaging as a potent drug to be let loose into the world without having undergone similarly rigorous testing? At the moment, anyone can design an algorithm and use it to make important decisions about people (whether they get a loan, or a job, or an apartment, or a prison sentence) without any oversight or any kind of evidence-based requirement. The general population is being used as guinea pigs.

Artificial intelligence is a predictive technology. AI systems assess, for example, whether a car is likely to hit an object, whether a supermarket is likely to need more apples this week, and whether a person is likely to pay back a loan, be a good employee, or commit a further offense. Important decisions, including life-and-death ones, are made on the basis of algorithmic predictions.

Predictions try to fill in missing information about the future in order to reduce uncertainty. But predictions are rarely neutral observers; they change the state of affairs they predict, to the extent that they become self-fulfilling prophecies. For example, when important institutions such as credit rating agencies publish negative forecasts about a country, that can result in investors fleeing the country, which in turn can cause an economic crisis.

Self-fulfilling prophecies are a problem when it comes to auditing the accuracy of algorithms. Suppose that a widely used algorithm determines that you are unlikely to be a good employee. Your not getting any jobs should not count as evidence that the algorithm is accurate, because the cause of your not getting jobs may be the algorithm itself.

We want predictive algorithms to be accurate, but not through any means; certainly not through creating the reality they are supposed to predict. Too many times we learn that algorithms are defective once they have destroyed lives, as when an algorithm implemented by the Michigan Unemployment Insurance Agency falsely accused 34,000 unemployed people of fraud.

How can we limit the power of predictions to change the future?

One solution is to subject predictive algorithms to randomized controlled trials. The only way to know if, say, an algorithm that assesses job candidates is truly accurate is to divide prospective employees into an experimental group (which is subjected to the algorithm) and a control group (which is assessed by human beings). The algorithm could assess people in both groups, but its decisions would only be applied to the experimental group. If people who were negatively ranked by the algorithm went on to have successful careers in the control group, then that would be good evidence that the algorithm is faulty.
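
A minimal sketch of that trial design in Python, assuming hypothetical algorithm_score(), human_decision(), and succeeded() callbacks and candidate dictionaries (none of which come from the article), might look like this:

```python
import random

def run_trial(candidates, algorithm_score, human_decision, threshold=0.5):
    """Randomly assign candidates; enforce the algorithm's verdict only in the experimental arm."""
    random.shuffle(candidates)
    half = len(candidates) // 2
    experimental, control = candidates[:half], candidates[half:]

    for person in experimental:
        person["algo_hire"] = algorithm_score(person) >= threshold
        person["hired"] = person["algo_hire"]                        # algorithm decides
    for person in control:
        person["algo_hire"] = algorithm_score(person) >= threshold   # scored, but not enforced
        person["hired"] = human_decision(person)                     # humans decide
    return experimental, control

def audit(control, succeeded):
    """Control-arm people the algorithm would have rejected but humans hired: did they succeed?"""
    rejected_but_hired = [p for p in control if not p["algo_hire"] and p["hired"]]
    false_negatives = sum(1 for p in rejected_but_hired if succeeded(p))
    return false_negatives, len(rejected_but_hired)
```

If many of the people the algorithm would have rejected go on to succeed in the control arm, that is exactly the kind of evidence of a faulty algorithm described above.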

Randomized controlled trials would also have great potential in identifying biases and other unforeseen negative consequences. Algorithms are infamously opaque. It's difficult to understand how they work, and when they have only been tested in a lab, they often act in surprising ways once they get exposed to real-world data. Rigorous trials could ensure that we don't use racist or sexist algorithms. An agency similar to the Food and Drug Administration could be created to make sure algorithms have been tested enough to be used on the public.

One of the reasons randomized controlled trials are considered the gold standard in medicine (as well as economics) is that they are the best evidence we can have of causation. In turn, one of AI's most glaring shortcomings is that it can identify correlations, but it doesn't understand causation, which often leads it astray. For example, when an algorithm decides that male job candidates are likelier to be good employees than female ones, it does so because it cannot distinguish between causal features (e.g., most past successful employees have attended university because university is a good way to develop one's skills) and correlative ones (e.g., most past successful employees have been men because society suffers from sexist biases).

Randomized controlled trials have not only been the foundation of the advancement of medicine, they have also prevented countless potential disasters: the release of drugs that could have killed us. Such trials could do the same for AI. And if we were to join AI's knack for recognizing correlations with the ability of randomized controlled trials to help us infer causation, we would stand a much better chance of developing both a more powerful and a more ethical AI.

Read more here:

We Should Test AI the Way the FDA Tests Medicines - Harvard Business Review

Posted in Ai | Comments Off on We Should Test AI the Way the FDA Tests Medicines – Harvard Business Review
