‘Magic toilet’ could monitor users’ health, say researchers – The Guardian

A smart toilet boasting pressure sensors, artificial intelligence and a camera has been unveiled by researchers who say it could provide a valuable way to keep tabs on our health.

The model is the latest version of an idea that has been around for several years: a system that examines our daily movements in an effort to spot the emergence of diseases. Such an approach, experts say, has an advantage over wearable devices, since individuals do not need to remember to use the system.

"We have developed a passive human health monitoring system that can be easily incorporated into a normal daily routine, requiring minimal or even no human intervention," the team behind the new toilet report.

They hope it will eventually become a daily clinic, helping in the prevention and early detection of problems from diabetes to urinary tract infections and inflammatory bowel diseases.

Writing in the journal Nature Biomedical Engineering, the international team of researchers note that previous attempts at such a toilet have been expensive and have provided limited information. However, their new system can be fitted on to existing toilets and incorporates a suite of sensors and detectors.

These include test strips that detect telltale health markers within urine, such as glucose and red blood cells, as well as video recordings of the flow to spot changes that may be related to disease.

"We believe that inconsistencies can provide valuable information about the prostate and bladder functions," the authors write.

In addition, the system incorporates cameras that take images of users' stool. These images are then classified using a machine-learning system, a type of artificial intelligence, into the different categories on the Bristol stool scale that reflect problems such as constipation or diarrhoea.
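The paper does not include the model's code, and the researchers' pipeline is more elaborate than any toy example. Purely as an illustration of the kind of image classifier described here, below is a minimal transfer-learning sketch in Python that maps a photo to one of the seven Bristol stool scale types; the backbone choice, input size, and fine-tuning setup are assumptions, not details from the study.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 7  # Bristol stool scale types 1-7

# A standard transfer-learning setup: reuse a pretrained ResNet-18 backbone
# and replace its final layer with a 7-way classifier. In practice this head
# would first be fine-tuned on labeled Bristol-scale images.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(image) -> int:
    """Return the predicted Bristol type (1-7) for a PIL image."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return int(logits.argmax(dim=1).item()) + 1
```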

The toilet has further features. It "was also able to collect additional information, such as first stool dropping time and total seating time, which can potentially be acted on by clinicians to help to manage constipation and haemorrhoids," the authors write.

Perhaps most inventively, the team report that the system detects who is using the toilet from a fingerprint scanner on the flush handle, and from "analprints", the distinctive creases in the lining of the anus, captured by video frames.

However, the team say there is more to do, not least in testing the device in large clinical studies; so far, a total of 21 participants have tested the toilet. They also stress the need to develop self-cleaning mechanisms to avoid false positives in the tests, adapt the system to squatting toilets, and redesign the urine analysis system for women, as it is currently designed for users who stand up while having a pee. They also hope to expand the range of tests to screen for illicit drug use, sexually transmitted infections and the makeup of microbes in the gut.

But whether the system will prove popular is another matter. In a survey of 300 individuals near Stanford University who were asked to rate what they thought of the proposed toilet, 30% said they felt uncomfortable with it, primarily citing privacy concerns, with the analprint the most disliked component.

Prof Tim Spector, an expert on the gut microbiome from King's College London, who was not involved in the research, welcomed the work, but said the team's future plans to analyse chemicals and microbes were important.

"We know that your stool sample is probably the best snapshot of your current health that we have," he said.

Spector said the new toilet was a sign of things to come, predicting that regular monitoring would become commonplace.

"The future will be either a magic toilet paper that gives you this result or these magic toilets that will give you a chemical analysis basically of the chemicals your microbes are producing, to give a snapshot of your inner health," he said.

Artificial Intelligence (AI) Market is projected to garner $169411.8 million in 2025 with a booming CAGR 55.6% – WhaTech Technology and Markets News

Increase in investment in AI technologies, rise in demand for analyzing and interpreting large amounts of data, and surge in adoption of AI in emerging markets are expected to propel the global AI market.

Rise in investment in AI technologies, increased demand for analyzing and interpreting large amounts of data, and a surge in customer satisfaction coupled with increased adoption of reliable cloud applications have boosted the growth of the global artificial intelligence (AI) market. However, a dearth of trained and experienced staff hampers market growth.

On the contrary, rise in adoption of AI in emerging markets and rapid development of smarter robots are expected to create lucrative opportunities in the near future.

The global AI market is divided on the basis of technology, industry vertical, and geography. Based on technology, the market is segmented into machine learning, natural language processing, image processing, and speech recognition.

The machine learning segment held the largest share in 2016, contributing more than half of the market, and is expected to maintain its dominance throughout the study period. Moreover, the segment is projected to register the fastest CAGR of 56.4% during the forecast period.

The artificial intelligence market accounted for $4,065.0 million in 2016, and is expected to reach $169,411.8 million by 2025, growing at a CAGR of 55.6% from 2018 to 2025.
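As a quick sanity check on how figures like these relate, a compound annual growth rate (CAGR) links a starting value, an ending value, and a time horizon. The snippet below is illustrative only: the release does not state a 2018 baseline, so the figure used here is back-solved purely to show the arithmetic, not taken from the report.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, end value, and horizon."""
    return (end / start) ** (1 / years) - 1

def project(start: float, rate: float, years: int) -> float:
    """Project a value forward at a constant compound annual growth rate."""
    return start * (1 + rate) ** years

# Illustrative only: the release gives no 2018 baseline, so this one is assumed.
assumed_2018_value = 7_670.0  # $ million (assumption, not from the report)
print(round(project(assumed_2018_value, 0.556, 7), 1))  # ~169,380 ($ million) by 2025
```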

The market report provides an in-depth analysis of the major market players such as Apple Inc., Alphabet (Google Inc.), IBM Corporation, Baidu, Microsoft Corporation, IPsoft, NVIDIA, MicroStrategy, Inc., Verint Systems Inc (Next IT Corp), and Qlik Technologies Inc.

Based on industry vertical, the market is divided into media & advertising, BFSI, IT & telecom, retail, healthcare, automotive & transportation, and others. The IT & telecom segment dominated the market in 2016, contributing more than one-fifth of the market.

Moreover, the segment is projected to register the fastest CAGR of 56.8% during the forecast period.

Download Sample Report: www.alliedmarketresearch.com/request-sample/1773

The global AI market is analyzed across various regions such as North America, Europe, Asia-Pacific, and LAMEA. The market across North America held the largest share in 2018, contributing nearly half of the market.

However, the market across Asia-Pacific is projected to manifest the fastest CAGR of 59.4% during the forecast period.

Top Impacting Factors:

1. Increase in investment in AI technologies

2. Growth in demand for analyzing and interpreting large amounts of data

3. Increased customer satisfaction and increased adoption of reliable cloud applications

For Inquiry: www.alliedmarketresearch.com/-enquiry/1773

Artificial intelligence trialled in search and rescue missions – Defence Connect

An artificial intelligence project run by Defence personnel in search and rescue (SAR) trials has the potential to save lives.

The project, dubbed AI-Search, aims to apply modern AI to help detect small and difficult-to-spot targets, such as life rafts and individual survivors.

Plan Jericho's AI lead, Wing Commander Michael Gan, said his team recognised the potential for the technology to augment and enhance SAR.

"The idea was to train a machine-learning algorithm and AI sensors to complement existing visual search techniques," he said.

"Our vision was to give any aircraft and other Defence platforms, including unmanned aerial systems, a low-cost, improvised SAR capability."

His team approached Lieutenant Harry Hubbert of Warfare Innovation Navy Branch, who was prominent in developing AI-enabled autonomous maritime vehicles for the Five Eyes Exercise Autonomous Warrior in Jervis Bay in late 2018.

LEUT Hubbert was given a month to develop the new algorithms and completed the work in a fortnight.

The AI comprises a series of machine-learning algorithms alongside other deterministic processes to analyse the imagery collected by camera sensors and aid human observers.
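Neither Defence nor the article describes the AI-Search pipeline in technical detail. Purely as a hypothetical sketch of how a learned detector is typically paired with deterministic post-filters before cueing a human observer, the Python fragment below uses invented function names (`detect_objects`, `cue_observer`) and thresholds; it is not the AI-Search implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "life_raft", "person_in_water"
    confidence: float  # detector score in [0, 1]
    area_px: int       # bounding-box area in pixels

def flag_for_observer(detections: list[Detection],
                      min_confidence: float = 0.4,
                      min_area_px: int = 25) -> list[Detection]:
    """Deterministic post-filter: keep only detections plausible enough to be
    worth a human observer's attention (small, low-contrast sea targets are
    noisy, so the thresholds are deliberately permissive)."""
    return [d for d in detections
            if d.confidence >= min_confidence and d.area_px >= min_area_px]

# Usage sketch: `detect_objects(frame)` stands in for whatever learned model
# produces candidate detections for each camera frame.
# for frame in camera_stream:
#     candidates = detect_objects(frame)        # hypothetical ML step
#     for hit in flag_for_observer(candidates):
#         cue_observer(frame, hit)              # hypothetical observer-cueing step
```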

AI-Search was first trialled successfully aboard a RAAF C-27J Spartan last year. The second trial took place in March this year near Stradbroke Island, Queensland. During these trials, AI-Search detected a range of small targets in a wide sea area while training the algorithm.

Using commercial off-the-shelf components with custom software and programming by LEUT Hubbert, the trials highlighted the feasibility of the technology, which can be applied easily to other ADF airborne platforms.

"There is a lot of discussion about AI in Defence but the sheer processing power of machine-learning applied to SAR has the potential to save lives and transform it," LEUT Hubbert said.

The project is a collaboration between Warfare Innovation Navy Branch, Plan Jericho, RAAF Air Mobility Group's No. 35 Squadron and the University of Tasmania's Australian Maritime College.

The project stemmed from a challenge from the Director-General Air Combat Capability, AIRCDRE Darren Goldie, to find a way of enhancing SAR using improved sensors.

Holocaust survivors will be able to share their stories after death thanks to a new project – 60 Minutes – CBS News

Tonight, as the world struggles to contain and recover from the novel coronavirus, we offer a story we completed just before life changed so dramatically. It is a story of history, hope, survival and resilience, which has its roots in another time when the world was convulsed by crisis: World War II.

This year marks the 75th anniversary of the end of that war and of the liberation of concentration camps across Europe. Most of the survivors who remain are now in their 80s and 90s. Soon there will be no one left who experienced the horrors of the Holocaust firsthand, no one to answer questions or bear witness to future generations. But a new and dramatic effort is underway to change that. Harnessing the technologies of the present and the future, it keeps alive the ability to talk to, and get answers from, the past.

Correspondent Lesley Stahl's interview with Holocaust survivor Aaron Elster, who spent two years of his childhood hidden in a neighbor's attic, was unlike any interview she had ever done.

"Aaron, tell us what your parents did before the war," Stahl asked Elster.

"They owned and operated a butcher shop," Elster said.

It wasn't the content of the interview that was so unusual.

"Where did you live?" Stahl asked.

"I was born in a small town in Poland called Sokolw Podlaski," Elster said.

It's the fact that this interview was with a man who was no longer alive. Aaron Elster died two years ago.

"What's the weather like today?" Stahl asked.

"I'm actually a recording," Elster said. "I cannot answer that question."

Heather Maio came up with the idea for this project. She had worked on exhibits featuring Holocaust survivors for years and wanted future generations to have the same opportunity to interact with them as she'd had.

"I wanted to talk to a Holocaust survivor like I would today," Maio said. "With that person sitting right in front of me and we were having a conversation."

She knew that back in the '90s, after making the film "Schindler's List," Steven Spielberg created a foundation, named for the Hebrew word for the Holocaust, Shoah, to film and collect testimonies from as many survivors as possible. They have interviewed nearly 55,000 of them so far and have stored them at the University of Southern California. But Maio dreamed of something more dynamic: being able to actively converse with survivors after they're gone. And she figured, in the age of artificial intelligence tools like Siri and Alexa, the technology had to be creatable.

She brought the idea to Stephen Smith, executive director of the USC Shoah Foundation, and now her husband. He loved it, but some of his colleagues weren't so sure.

"One of them looked at me," Maio said. "She was, like, 'You wanna talk to dead people?'"

"And you said, '"Yes, because that's the point,'" Stahl said.

"That's the point," Maio said.

"Well maybe people thought you're turning the Holocaust into something maybe hokey?" Stahl asked.

"Yeah," Maio said. "They said that, 'You're gonna Disney-fy the Holocaust.'"

"We had a lot of pushback on this project," Smith said. "'Is it the right thing to do? What about the wellbeing of the survivors? Are we trying to keep them alive beyond their deaths?' Everyone had questions except for one group of people, the survivors themselves, who said, 'Where do I sign up? I would like to participate in this project.' No barriers to entry."

The first survivor they signed up to do a trial run was a man named Pinchas Gutter, who was born in Poland and deported to the Majdanek concentration camp with his parents and twin sister Sabina at the age of 11. He is the only one who survived. They flew Gutter from his home in Toronto to Los Angeles, and asked him to sit inside a giant lattice-like dome.

"Yeah, I call it a sphere," Gutter said. "They call it a dome. And then eventually, it was called a bubble."

A bubble surrounding him with lights and more than 20 cameras. The goal was to future-proof the interviews so that as technology advances and 3D, hologram-like projection becomes the norm, they'll have all the necessary angles.

"So the very first day we went to film Pinchas, we had these ultra high speed cameras," Smith said. "They were all linked together and synced together to make this video of him. So we sit down and they press record. Nothing happens. So Pinchas is sitting there with 6,000 LED lights on him and cameras that don't work."

Sunglasses shielded his eyes.

"I was bored sitting in that chair, So I started singing to myself," Gutter said. "So suddenly, Steven had this idea, 'Oh, he's singing. We're gonna record some songs of his.'"

Both Smith and Maio said Gutter was a good sport. Eventually the cameras rolled and Gutter was asked to come back to the bubble for the real thing.

"How long were you in that chair?" Stahl asked him.

"A whole week from 9:00 to 5:00," Gutter said. "We were there with breaks for lunch. And-- but I was there from 9:00 to 5:00 answering questions."

It took so long because they asked him nearly 2000 questions. The idea was to cover every conceivable question anyone might ever want to ask him.

"Did you have to look exactly the same?" Stahl asked.

"I had to wear the same clothes and I had three pairs of the same jackets, the same shirts, the same trousers, the same shoes," Gutter said.

Gutter can now be seen -- in those shirts, trousers, and shoes -- at Holocaust museums in Dallas, Indiana, and at the Illinois Holocaust Museum in Skokie, outside Chicago, where visitors can ask him their own questions.

"What kept you going," one girl asked, "or what gave you hope while you were experiencing hardship in the camps?"

"We did hope that the Nazis would lose the war," Gutter's digital image responded.

Gutter's image is projected onto an 11-foot high screen. Smith explained how the technology works.

"So what's happening is all of the answers to the questions that Pinchas gave go into a database," Smith said. "And when you ask a question, the algorithm is looking through all of the database, 'Do I have an answer to that.' And then it'll bring back what it thinks is the closest answer to your question."

Stahl then asked Gutter's digital image a question.

"Did you have a happy childhood?" Stahl asked.

"I had a very happy childhood," Gutter's digital image said. "My parents were winemakers. My father started teaching me to become a winemaker when I was 3-and-a-half years old. By the age of 5, I could already read and I could already write."

"Wow," Stahl said. "You're very smart."

"Thank you," Gutter's digital image said with a laugh.

"I've noticed there's a little jiggle right before Pinchas starts to talk," Stahl said. "What is that?""What you're seeing here isn't a human being," Smith said. "It's video clips that are-- that are being butted up to each other and played. And as it searches and brings the clip in, you just-- you're seeing a little bit of a jump cut."

The jump cuts stopped being distracting once Stahl asked about the fate of Gutter's family.

"Tell us what happened when you got to the camp," Stahl said.

"As soon as we arrived there, we were being separated into different groups," Gutter's digital image said. "And my sister was somehow pushed towards the children. And I saw her, she must have spotted my mother. So she ran towards my mother. I saw my mother. And she hugged her. And since that time, all I can remember whenever I think of my sister is her long-- big, long, blonde braid."

That was the last time he saw his twin sister, Sabina. He learned later that day that she and both his parents had been killed in the gas chambers. Pinchas Gutter was alone at age 11, put to work as a slave laborer.

"Did you ever see anybody killed?" Stahl asked.

"Unfortunately, I saw many people die in front of my eyes," Gutter's digital image said.

Stahl wasn't sure how a recording would handle what she wanted to ask him next.

"How can you still have faith in God?" Stahl asked.

"How can you possibly not believe in God?" Gutter's digital image said.

"Well," Stahl said, "how did he let this happen?"

"God gave human beings the knowledge of right and wrong and he allowed them to do what they wished on this earth, to find their own way," Gutter's digital image said. "To my mind, when God sees what human beings are up to, especially things like genocide, he weeps."

"Wow. Stephen, I could ask him questions for ten hours," Stahl said.

Since Pinchas Gutter was filmed, the Shoah Foundation has recorded interviews with 21 more Holocaust survivors, each for a full week. And they've shrunk the set-up required, so they can take a mobile rig on the road to record survivors close to where they live. They've deliberately chosen interview subjects with all different wartime experiences: survivors of Auschwitz, hidden children, and, as we saw last fall in New Jersey, 93-year-old Alan Moskin, who isn't a Holocaust survivor. He was a liberator.

"Entering that camp was the most horrific sight I've ever seen or ever hope to see the rest of my life," Moskin said.

Moskin was an 18-year-old private when his Army unit liberated a little-known concentration camp called Gunskirchen.

"There was a pile of skeleton-like bodies on the left," Moskin said. "There was another pile of skeleton-like bodies on the right. 'Those poor souls.' That's the term my lieutenant kept screaming, 'Oh my God, look at these poor souls.'"

"I remember the expression and the attitude of all of us," Moskin continued. "'What in the freak? What is this? God almighty'"

Each of Alan Moskin's answers is then isolated by a team of researchers at the Shoah Foundation Office. They add into the system a variety of questions people might ask to trigger that response.

"For every question that we asked, there are 15 different ways of asking the same question," Maio said. "And that's all manual."

Editors rotate the image, turn the green screen background into black and then a long process of testing begins, some of it in schools.

Students are asked to try it out, ask whatever questions they want, and see if the system calls up the correct answer.

"How did you find out that your city was getting invaded by Germany?" One student asked.

"How did you feel about your family?" Another asked.

Pinchas Gutter's digital image responded to one student by asking, "Can you rephrase that, please?"

Every question and response is then reviewed.

"We log every single question that's asked of the system," Maio said. "And see if there is a better response that addresses that question more directly."

As Stahl's crew discovered, it's still a work in progress.

"Tell us about your family when you were a little boy," Stahl asked Gutter's digital image.

"How about you ask me about life after the war?" The digital image answered back.

"So, couple of things about artificial intelligence," Smith said. "It is mainly artificial and not so intelligent."

"Just yet, for now," Maio said.

"But the beauty of artificial intelligence is it develops over time," Smith said. "So we aren't changing the content. All the answers remain the same. But over time, the range of questions that you can ask will be enhanced considerably."

Questions to draw out what it was like for Aaron Elster hiding in that attic 75 years ago.

"I used to pray to God to let me live 'til I was 25," Elster's digital image said. "I wanted to taste what adulthood would be like. So, am I a lucky guy? Yes I am."

Of more than 20 men and women who have participated so far in the project, three have already passed away. Stahl had conversations with two of them, conversations that at times felt so normal, she said she could almost forget she was talking with the digital image of someone no longer living.

First, a spunky 4'9" woman named Eva Kor, an identical twin who, together with her sister, survived Auschwitz and the notorious experiments of Dr. Josef Mengele. Kor spent her life after the war in Terre Haute, Indiana. She died last summer at the age of 85.

"Hi, Eva. How are you today?" Stahl asked.

"I'm fine, and how are you?" Kor's digital image said back.

"I'm good," Stahl asked.

Stahl said it felt natural to answer Kor's question before posing her own.

"So how old were you when you went to Auschwitz?" Stahl asked.

"When I arrived in Auschwitz, I was ten years old," Kor's digital image said. "And I stayed in Auschwitz until liberation, which was about nine months later when we were liberated."

"So we made a little announcement about the fact we were starting this project," Smith said. "I get a call the next day from a lady called Eva Kor. I didn't know her at that point in time. And she says, 'I want to be one of those 3D interviews.'"

"'I wanna be a hologram,'" Maio recalled Kor saying.

"I said, 'Well, I'm traveling, I'm very sorry,'" Smith said. "'Where're you going?' 'Oh, well, I've got to go to New York. I'm going to D.C.' 'When are you gonna go to D.C.? I'm going to D.C.' Turns out we were going to the same event in D.C. I arrive at my hotel, she's sitting in the lobby, waiting for me."

When Eva, on the right, and her twin sister, Miriam arrived at Auschwitz, they were pulled away from their parents and older sisters and taken to a barrack full of twins. They never saw their family again.

60 Minutes reported on Mengele's twin experiments in a story back in 1992, and Stahl actually interviewed the living Eva Kor at her home in Terre Haute. Eva told Stahl then about becoming extremely sick after an injection.

"Mengele came in every morning and every evening, with four other doctors," Kor said in 1992. "And he declared, very sarcastically, laughing, 'Too bad. She's so young. She has only 2 weeks to live.'"

"When I heard that, I knew he was right and I immediately made a silent pledge that I would prove you, Dr. Mengele, wrong," Kor's digital image said in the present.

Imagine, picking up a conversation almost 30 years later -- and after Eva Kor's death.

"Eva, tell us about Dr. Mengele," Stahl asked. "What was he like?"

"He had a gorgeous face, a movie star face, and very pleasant, actually. Dark hair, dark eyes," Kor's digital image said. "When I looked into his eyes, I could see nothing but evil. People say that the eyes are the center of the soul, and in Mengele's case, that was correct."

Eva and Miriam are visible in footage taken by the Soviet forces that liberated Auschwitz 75 years ago.

They went back to the camp many times, Eva continuing to go even after Miriam's death in 1993. It was on one of those visits that Eva made a stunning announcement that she had decided to forgive her Nazi captors.

"I, Eva Moses Kor, hereby give amnesty to all Nazis who participated," Kor said at the time.

She came under blistering attack from other survivors.

The Growing Role of Artificial Intelligence in the Pharmaceutical Industry – BBN Times

Blockchain technology and the logistics industry can be highly profitable for each other, and the collaboration also brings benefits to carriers and customers.

In this article, we'll discuss what you need to know about blockchain in the logistics sector.

Blockchain makes it possible to optimize logistics processes in real time and tends to improve the relationship between shippers and carriers.

We've singled out six main advantages of using blockchain in the logistics sector. Let's discuss them in more detail.

The transportation process requires a lot of documents, for example a bill of lading (B/L), an agreement that sets out transportation terms, conditions, and other matters.

Blockchain allows every step to be recorded, so any participant can review the delivery chain. If something goes wrong, the recorded information reveals the problem.
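As a conceptual sketch only, and not a production blockchain, the "record every step" idea comes down to an append-only chain of shipment events in which each entry commits to the previous one by hash, so any later tampering is detectable:

```python
import hashlib, json, time

def _hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class ShipmentLedger:
    """Append-only, hash-chained log of delivery events (conceptual sketch)."""

    def __init__(self):
        self.chain = [{"event": "genesis", "prev_hash": "0", "timestamp": 0}]

    def record(self, event: str) -> None:
        self.chain.append({
            "event": event,                      # e.g. "loaded at warehouse A"
            "prev_hash": _hash(self.chain[-1]),  # commits to the previous entry
            "timestamp": time.time(),
        })

    def is_intact(self) -> bool:
        """Any edit to an earlier entry breaks every later prev_hash link."""
        return all(self.chain[i]["prev_hash"] == _hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = ShipmentLedger()
ledger.record("picked up from shipper")
ledger.record("customs cleared")
ledger.chain[1]["event"] = "tampered"   # simulate an unauthorized edit
print(ledger.is_intact())               # False
```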

Blockchain helps optimize routes and speed up delivery, which lets smaller companies compete with bigger ones by offering faster routes.

As a result, it's possible to reduce shipping expenses.

Blockchain also simplifies the certification of goods, and combining IoT with blockchain makes it possible to create smart contracts.

Transactions made on a blockchain are secure: once recorded, they cannot be altered, which reduces fraudulent operations.

Intermediaries are agents that take part in the transportation chain. However, with logistics software development, the industry no longer needs such specialists.

We've already mentioned smart contracts. They reduce time and expenses, and both parties can automate the validation process, manage obligations, and so on.
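Real smart contracts run on a blockchain platform such as Ethereum; purely as a conceptual sketch of the automated validation described above, here is contract-like escrow logic in plain Python, with invented parties, events, and amounts:

```python
from dataclasses import dataclass, field

@dataclass
class DeliveryEscrow:
    """Toy stand-in for a smart contract: payment is released to the carrier
    only once both delivery confirmation and a condition check are recorded."""
    amount: float
    events: set = field(default_factory=set)
    paid: bool = False

    def record(self, event: str) -> None:
        self.events.add(event)
        self._maybe_release()

    def _maybe_release(self) -> None:
        if {"delivered", "condition_ok"} <= self.events and not self.paid:
            self.paid = True
            print(f"Releasing {self.amount} to carrier")

escrow = DeliveryEscrow(amount=1_000.0)
escrow.record("delivered")       # nothing happens yet
escrow.record("condition_ok")    # both conditions met -> payment released
```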

Those are the main advantages of blockchain. As you can see, this technology can be quite profitable for the logistics industry.

Of course, there are also some disadvantages to using blockchain technology. Let's discuss them in more detail.

Blockchain makes it possible to automate supply processes, which reduces the need for various specialists, so the number of unemployed workers may rise.

Businesses that use blockchain technology should have a standard process to follow. Unfortunately, at present, they don't have one.

A standard process needs to be created at the government level so that businesses can follow it and avoid common problems.

Blockchain development is clearly expensive and time-consuming, and it requires powerful hardware.

There are also the costs of specialists experienced in blockchain integration.

So blockchain is quite an expensive technology that requires preparation before use. However, some companies have already integrated blockchain and profited from it.

Blockchain makes it easy to track goods during shipment, and companies can also monitor the condition of the packages.

As a result, it's possible to detect broken goods, spoilt products, and so on.

Such capabilities, of course, help reduce unnecessary expenditure.

Success story: Walmart works with IBM to use blockchain technology in logistics. The system shows which products are sold and the location of each product (the particular warehouse). The company says the supply process has become more transparent.

Blockchain reduces the number of documents required for transportation, and automation also decreases the number of mistakes, so shipping terms are fulfilled more precisely.

Let's face it: payments are important for any business, and blockchain makes the process secure. For example, it's possible to make transactions with cryptocurrencies such as Bitcoin, so the payment process is secure and transparent. Such solutions also improve international payments.

Success story: Tallysticks has built a platform that simplifies payment processes using blockchain. The solution offers smart contracts that can be customized to business needs and requirements.

Blockchain gives end consumers a simple way to verify the authenticity of goods. There are platforms with data about product origin, quality, fineness, and more.

This technology gives clients transparency. As a result, people trust companies more.

Blockchain makes cooperation easier. For example, enterprises can cooperate with small companies to deliver goods faster. Such a solution is profitable for both parties.

These days, companies can cooperate with each other without intermediaries. It leads to cost reduction and improved delivery processes.

Success story: The ShipChain platform uses blockchain to improve cooperation. For instance, the service allows a delivery to be tracked from the warehouse to the buyer's door, which leads to better customer experience and satisfaction.

The delivery process is long and expensive, and delays can occur due to weather conditions and other issues.

Usually, companies hire lawyers to manage such disputes. Blockchain changes the situation: the transportation process is tracked from beginning to end, so both parties can see any changes in the route.

Also, the parties can monitor any issues connected to the delivery and decide whose fault they were.

Blockchain technology is transforming the logistics sector. Companies can simplify the delivery process and make routes shorter.

All these solutions lead to customer satisfaction. As a result, the clients trust companies and order goods or services more often.

The main advantage of blockchain is transparency. Business owners, as well as end consumers, can track the delivery process, and customers can verify that storage conditions were maintained during delivery.

However, developing blockchain-based solutions is an expensive and time-consuming process. Companies have to prepare beforehand and define the requirements of the final product.

Stanford is Using Artificial Intelligence to Help Treat Coronavirus Patients – ETF Trends

Clinicians and researchers at Stanford University are developing ways that artificial intelligence can help identify which patients will require intensive care amid a surge in coronavirus patients. Rather than build an algorithm from scratch, the Stanford experts' goal was to take existing technology and modify it for a seamless transition into clinical operations.

"The hardest part, the most important part of this work is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist.

Per a STAT news report, the machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps such as prompting a nurse to check in more frequently or order tests that would ultimately help physicians make decisions about a COVID-19 patient's care.

As more technology flows into fighting the coronavirus pandemic, this can only open up opportunities for investors in healthcare-focused exchange-traded funds (ETFs).

ETF investors can look for opportunities in the Health Care Select Sector SPDR ETF (NYSEArca: XLV), Vanguard Health Care ETF (NYSEArca: VHT) and the iShares US Medical Devices ETF (IHI).

XLV seeks investment results that correspond generally to the Health Care Select Sector Index. The index includes companies from the following industries: pharmaceuticals; health care equipment & supplies; health care providers & services; biotechnology; life sciences tools & services; and health care technology.

VHT employs an indexing investment approach designed to track the performance of the MSCI US Investable Market Index (IMI)/Health Care 25/50, an index made up of stocks of large, mid-size, and small U.S. companies within the health care sector, as classified under the Global Industry Classification Standard (GICS).

IHI seeks to track the investment results of the Dow Jones U.S. Select Medical Equipment Index composed of U.S. equities in the medical devices sector. The underlying index includes medical equipment companies, including manufacturers and distributors of medical devices such as magnetic resonance imaging (MRI) scanners, prosthetics, pacemakers, X-ray machines, and other non-disposable medical devices.

Another fund to consider is the Robo Global Healthcare Technology and Innovation ETF (HTEC). HTEC seeks to provide investment results that, before fees and expenses, correspond generally to the price and yield performance of the ROBO Global Healthcare Technology and Innovation Index.

The fund will normally invest at least 80 percent of its total assets in securities of the index or in depositary receipts representing securities of the index. The index is designed to measure the performance of companies that have a portion of their business and revenue derived from the field of healthcare technology, and the potential to grow within this space through innovation and market adoption of such companies, products and services.

For more market trends, visit ETF Trends.

A guide to healthy skepticism of artificial intelligence and coronavirus – Brookings Institution

The COVID-19 outbreak has spurred considerable news coverage about the ways artificial intelligence (AI) can combat the pandemic's spread. Unfortunately, much of it has failed to be appropriately skeptical about the claims of AI's value. Like many tools, AI has a role to play, but its effect on the outbreak is probably small. While this may change in the future, technologies like data reporting, telemedicine, and conventional diagnostic tools are currently far more impactful than AI.

Still, various news articles have dramatized the role AI is playing in the pandemic by overstating what tasks it can perform, inflating its effectiveness and scale, neglecting the level of human involvement, and being careless in consideration of related risks. In fact, the COVID-19 AI-hype has been diverse enough to cover the greatest hits of exaggerated claims around AI. And so, framed around examples from the COVID-19 outbreak, here are eight considerations for a skeptic's approach to AI claims.

No matter what the topic, AI is only helpful when applied judiciously by subject-matter experts: people with long-standing experience with the problem that they are trying to solve. Despite all the talk of algorithms and big data, deciding what to predict and how to frame those predictions is frequently the most challenging aspect of applying AI. Effectively predicting a badly defined problem is worse than doing nothing at all. Likewise, it always requires subject matter expertise to know if models will continue to work in the future, be accurate on different populations, and enable meaningful interventions.

In the case of predicting the spread of COVID-19, look to the epidemiologists, who have been using statistical models to examine pandemics for a long time. Simple mathematical models of smallpox mortality date all the way back to 1766, and modern mathematical epidemiology started in the early 1900s. The field has developed extensive knowledge of its particular problems, such as how to consider community factors in the rate of disease transmission, that most computer scientists, statisticians, and machine learning engineers will not have.

It is certainly the case that some of the epidemiological models employ AI. However, this should not be confused for AI predicting the spread of COVID-19 on its own. In contrast to AI models that only learn patterns from historical data, epidemiologists are building statistical models that explicitly incorporate a century of scientific discovery. These approaches are very, very different. Journalists that breathlessly cover the "AI that predicted coronavirus" and the quants on Twitter creating their first-ever models of pandemics should take heed: There is no value in AI without subject-matter expertise.

The set of algorithms that conquered Go, a strategy board game, and Jeopardy! have accomplished impressive feats, but they are still just (very complex) pattern recognition. To learn how to do anything, AI needs tons of prior data with known outcomes. For instance, this might be the database of historical Jeopardy! questions, as well as the correct answers. Alternatively, a comprehensive computational simulation can be used to train the model, as is the case for Go and chess. Without one of these two approaches, AI cannot do much of anything. This explains why AI alone can't predict the spread of new pandemics: There is no database of prior COVID-19 outbreaks (as there is for the flu).

So, in taking a skeptic's approach to AI, it is critical to consider whether a company spent the time and money to build an extensive dataset to effectively learn the task in question. Sadly, not everyone is taking the skeptical path. VentureBeat has regurgitated claims from Baidu that AI can be used with infrared thermal imaging to "see" the fever that is a symptom of COVID-19. Athena Security, which sells video analysis software, has also claimed it adapted its AI system to detect fever from thermal imagery data. Vice, Fast Company, and Forbes rewarded the company's claims, which included a fake software demonstration, with free press.

To even attempt this, companies would need to collect extensive thermal imaging data from people while simultaneously taking their temperature with a conventional thermometer. In addition to attaining a sample diverse in age, gender, size, and other factors, this would also require that many of these people actually have fevers, the outcome they are trying to predict. It stretches credibility that, amid a global pandemic, companies are collecting data from significant populations of fevered persons. While there are other potential ways to attain pre-existing datasets, questioning the data sources is always a meaningful way to assess the viability of an AI system.

The company Alibaba claims it can use AI on CT imagery to diagnose COVID-19, and now Bloomberg is reporting that the company is offering this diagnostic software to European countries for free. There is some appeal to the idea. Currently, COVID-19 diagnosis is done through a process called polymerase chain reaction (PCR), which requires specialized equipment. Including shipping time, it can easily take several days, whereas Alibaba says its model is much faster and is 96% accurate.

However, it is not clear that this accuracy number is trustworthy. A poorly kept secret of AI practitioners is that 96% accuracy is suspiciously high for any machine learning problem. If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development. In fact, an inflated accuracy number can actually be an important sign that an AI model is not going to be effective out in the world. That Alibaba claims its model works that well without caveat or self-criticism is suspicious on its face.

In addition, accuracy alone does not indicate enough to evaluate the quality of predictions. Imagine if 90% of the people in the training data were healthy, and the remaining 10% had COVID-19. If the model was correctly predicting all of the healthy people, a 96% accuracy could still be true, but the model would still be missing 40% of the infected people. This is why it's important to also know the model's sensitivity, which is the percent of correct predictions for individuals who have COVID-19 (rather than for everyone). This is especially important when one type of mistaken prediction is worse than the other, which is the case now. It is far worse to mistakenly suggest that a person with COVID-19 is not sick (which might allow them to continue infecting others) than it is to suggest a healthy person has COVID-19.
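Working through the article's own example makes the distinction concrete: with 1,000 people, 900 of them healthy, a model that gets every healthy person right but misses 40% of the infected still scores 96% accuracy, while its sensitivity is only 60%.

```python
# Worked version of the example above: 1,000 people, 90% healthy, the model
# gets every healthy person right but misses 40% of the infected.
true_negatives = 900    # healthy, correctly predicted healthy
true_positives = 60     # infected, correctly predicted infected
false_negatives = 40    # infected, wrongly predicted healthy
false_positives = 0     # healthy, wrongly predicted infected

accuracy = (true_positives + true_negatives) / 1000                  # 0.96
sensitivity = true_positives / (true_positives + false_negatives)    # 0.60

print(accuracy, sensitivity)  # 0.96 0.6
```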

Broadly, this is a task that seems like it could be done by AI, and it might be. Emerging research suggests that there is promise in this approach, but the debate is unsettled. For now, the American College of Radiology says that "the findings on chest imaging in COVID-19 are not specific, and overlap with other infections," and that it should not be used as a first-line test to diagnose COVID-19. Until stronger evidence is presented and AI models are externally validated, medical providers should not consider changing their diagnostic workflows, especially not during a pandemic.

The circumstances in which an AI system is deployed can also have huge implications for how valuable it really is. When AI models leave development and start making real-world predictions, they nearly always degrade in performance. In evaluating CT scans, a model that can differentiate between healthy people and those with COVID-19 might start to fail when it encounters patients who are sick with the regular flu (and it is still flu season in the United States, after all). A drop of 10% accuracy or more during deployment would not be unusual.

In a recent paper about the diagnosis of malignant moles with AI, researchers noticed that their models had learned that rulers were frequently present in images of moles known to be malignant. So, of course, the model learned that images without rulers were more likely to be benign. This is a learning pattern that leads to the appearance of high accuracy during model development, but it causes a steep drop in performance during the actual application in a health-care setting. This is why independent validation is absolutely essential before using new and high-impact AI systems.

This should engender even more skepticism of claims that AI can be used to measure body temperature. Even if a company did invest in creating this dataset, as previously discussed, reality is far more complicated than a lab. While measuring core temperature from thermal body measurements is imperfect even in lab conditions, environmental factors make the problem much harder. The approach requires an infrared camera to get a clear and precise view of the inner face, and it is affected by humidity and the ambient temperature of the target. While it is becoming more effective, the Centers for Disease Control and Prevention still maintain that thermal imaging cannot be used on its own; a second confirmatory test with an accurate thermometer is required.

In high-stakes applications of AI, it typically requires a prediction that isn't just accurate, but also one that meaningfully enables an intervention by a human. This means sufficient trust in the AI system is necessary to take action, which could mean prioritizing health care based on the CT scans or allocating emergency funding to areas where modeling shows COVID-19 spread.

With thermal imaging for fever-detection, an intervention might imply using these systems to block entry into airports, supermarkets, pharmacies, and public spaces. But evidence shows that as many as 90% of people flagged by thermal imaging can be false positives. In an environment where febrile people know that they are supposed to stay home, this ratio could be much higher. So, while preventing people with fever (and potentially COVID-19) from enabling community transmission is a meaningful goal, there must be a willingness to establish checkpoints and a confirmatory test, or risk constraining significant chunks of the population.

This should be a constant consideration for implementing AI systems, especially those used in governance. For instance, the AI fraud-detection systems used by the IRS and the Centers for Medicare and Medicaid Services do not determine wrongdoing on their own; rather, they prioritize returns and claims for auditing by investigators. Similarly, the celebrated AI model that identifies Chicago homes with lead paint does not itself make the final call, but instead flags the residence for lead paint inspectors.

Wired ran a piece in January titled "An AI Epidemiologist Sent the First Warnings of the Wuhan Virus" about a warning issued on Dec. 31 by the infectious disease surveillance company BlueDot. One blog post even said the company predicted the outbreak before it happened. However, this isn't really true. There is reporting that suggests Chinese officials knew about the coronavirus from lab testing as early as Dec. 26. Further, doctors in Wuhan were spreading concerns online (despite Chinese government censorship) and the Program for Monitoring Emerging Diseases, run by human volunteers, put out a notification on Dec. 30.

That said, the approach taken by BlueDot and similar endeavors like HealthMap at Boston Children's Hospital aren't unreasonable. Both teams are a mix of data scientists and epidemiologists, and they look across health-care analyses and news articles around the world and in many languages in order to find potential new infectious disease outbreaks. This is a plausible use case for machine learning and natural language processing and is a useful tool to assist human observers. So, the hype, in this case, doesn't come from skepticism about the feasibility of the application, but rather the specific type of value it brings.

Even as these systems improve, AI is unlikely to build the contextual understanding to distinguish between a new but manageable outbreak and an emerging pandemic of global proportions. AI can hardly be blamed. Predicting rare events is just very hard, and AI's reliance on historical data does it no favors here. However, AI does offer quite a bit of value at the opposite end of the spectrum: providing minute detail.

For example, just last week, California Gov. Gavin Newsom explicitly praised BlueDot's work to model the spread of the coronavirus to specific zip codes, incorporating flight-pattern data. This enables relatively precise provisioning of funding, supplies, and medical staff based on the level of exposure in each zip code. This reveals one of the great strengths of AI: its ability to quickly make individualized predictions when it would be much harder to do so individually. Of course, individualized predictions require individualized data, which can lead to unintended consequences.

AI implementations tend to have troubling second-order consequences outside of their exact purview. For instance, consolidation of market power, insecure data accumulation, and surveillance concerns are very common byproducts of AI use. In the case of AI for fighting COVID-19, the surveillance issues are pervasive. In South Korea, the neighbors of confirmed COVID-19 patients were given details of that person's travel and commute history. Taiwan, which in many ways had a proactive response to the coronavirus, used cell phone data to monitor individuals who had been assigned to stay in their homes. Israel and Italy are moving in the same direction. Of exceptional concern is the deployed social control technology in China, which nebulously uses AI to individually approve or deny access to public space.

Government action that curtails civil liberties during an emergency (and likely afterwards) is only part of the problem. The incentives that markets create can also lead to long-term undermining of privacy. At this moment, Clearview AI and Palantir are among the companies pitching mass-scale surveillance tools to the federal government. This is the same Clearview AI that scraped the web to make an enormous (and unethical) database of faces, and it was doing so as a reaction to an existing demand in police departments for identifying suspects with AI-driven facial recognition. If governments and companies continue to signal that they would use invasive systems, ambitious and unscrupulous start-ups will find inventive new ways to collect more data than ever before to meet that demand.

In new approaches to using AI in high-stakes circumstances, bias should be a serious concern. Bias in AI models results in skewed estimates across different subgroups, such as women, racial minorities, or people with disabilities. In turn, this frequently leads to discriminatory outcomes, as AI models are often seen as objective and neutral.

While investigative reporting and scientific research has raised awareness about many instances of AI bias, it is important to realize that AI bias is more systemic than anecdotal. An informed AI skeptic should hold the default assumption that AI models are biased, unless proven otherwise.

For example, a preprint paper suggests it is possible to use biomarkers to predict mortality risk of Wuhan COVID-19 patients. This might then be used to prioritize care for those most at risk, a noble goal. However, there are myriad sources of potential bias in this type of prediction. Biological associations between race, gender, age, and these biomarkers could lead to biased estimates that don't represent mortality risk. Unmeasured behavioral characteristics can lead to biases, too. It is reasonable to suspect that smoking history, more common among Chinese men and a risk factor for death by COVID-19, could bias the model into broadly overestimating male risk of death.

Especially for models involving humans, there are so many potential sources of bias that they cannot be dismissed without investigation. If an AI model has no documented and evaluated biases, it should increase a skeptic's certainty that they remain hidden, unresolved, and pernicious.

While this article takes a deliberately skeptical perspective, the future impact of AI on many of these applications is bright. For instance, while diagnosis of COVID-19 with CT scans is of questionable value right now, the impact that AI is having on medical imaging is substantial. Emerging applications can evaluate the malignancy of tissue abnormalities, study skeletal structures, and reduce the need for invasive biopsies.

Other applications show great promise, though it is too soon to tell if they will meaningfully impact this pandemic. For instance, AI-designed drugs are just now starting human trials. The use of AI to summarize thousands of research papers may also quicken medical discoveries relevant to COVID-19.

AI is a widely applicable technology, but its advantages need to be hedged in a realistic understanding of its limitations. To that end, the goal of this paper is not to broadly disparage the contributions that AI can make, but instead to encourage a critical and discerning eye for the specific circumstances in which AI can be meaningful.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Artificial Intelligence turns a person's thoughts into text – Times of India

Scientists have developed an artificial intelligence system that can translate a person's thoughts into text by analysing their brain activity. Researchers at the University of California developed the AI to decipher up to 250 words in real time from a set of between 30 and 50 sentences.

The algorithm was trained using the neural signals of four women with electrodes implanted in their brains, which were already in place to monitor epileptic seizures. The volunteers repeatedly read sentences aloud while the researchers fed the brain data to the AI to unpick patterns that could be associated with individual words. The average word error rate across a repeated set was as low as 3%.

"A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech," states a paper detailing the research, published in the journal Nature Neuroscience. "We trained a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence," the report states.

The system is, however, still a long way off being able to understand regular speech. "People could become telepathic to some degree, able to converse not only without speaking but without words," the report stated.
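The paper's actual architecture is more sophisticated than any short sketch. Purely to illustrate the encoder-decoder idea the authors describe (encode a sequence of neural activity into an abstract representation, then decode it word by word), here is a minimal PyTorch sketch with made-up dimensions and random stand-in data:

```python
import torch
import torch.nn as nn

class NeuralToTextDecoder(nn.Module):
    """Minimal encoder-decoder sketch: a GRU encodes a sequence of neural
    feature vectors into a single representation, and a second GRU decodes
    that representation into a word sequence. Dimensions are illustrative."""

    def __init__(self, n_channels=128, hidden=256, vocab_size=250):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_vocab = nn.Linear(hidden, vocab_size)

    def forward(self, neural_seq, target_words):
        # neural_seq: (batch, time_steps, n_channels) brain-activity features
        # target_words: (batch, n_words) word indices for teacher forcing
        _, sentence_repr = self.encoder(neural_seq)           # abstract representation
        decoded, _ = self.decoder(self.embed(target_words),   # word-by-word decoding
                                  sentence_repr)
        return self.to_vocab(decoded)                         # logits over the vocabulary

# Usage with random stand-in data (real inputs would be intracranial recordings).
model = NeuralToTextDecoder()
fake_neural = torch.randn(2, 300, 128)        # 2 sentences, 300 time steps
fake_words = torch.randint(0, 250, (2, 10))   # 2 sentences, 10 words each
logits = model(fake_neural, fake_words)       # shape: (2, 10, 250)
```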

Stanford launches an accelerated test of AI to help with Covid-19 care – STAT

In the heart of Silicon Valley, Stanford clinicians and researchers are exploring whether artificial intelligence could help manage a potential surge of Covid-19 patients and identify patients who will need intensive care before their condition rapidly deteriorates.

The challenge is not to build the algorithm (the Stanford team simply picked an off-the-shelf tool already on the market) but rather to determine how to carefully integrate it into already-frenzied clinical operations.

"The hardest part, the most important part of this work is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford's Institute for Human-Centered Artificial Intelligence.

The effort is primed to be an accelerated test of whether hospitals can smoothly incorporate AI tools into their workflows. That process, typically slow and halting, is being sped up at hospitals all over the world in the face of the coronavirus pandemic.

The machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps such as prompting a nurse to check in more frequently or order tests that would ultimately help physicians make decisions about a Covid-19 patient's care.


The model, known as the Deterioration Index, was built and is marketed by Epic, the big electronic health records vendor. Li and his team picked that particular algorithm out of convenience, because it's already integrated into their EHR, Li said. Epic trained the model on data from hospitalized patients who did not have Covid-19, a limitation that raises questions about whether it will be generalizable for patients with a novel disease whose data it was never intended to analyze.

Nearly 50 health systems, which cover hundreds of hospitals, have been using the model to identify hospitalized patients with a wide range of medical conditions who are at the highest risk of deterioration, according to a spokesperson for Epic. The company recently built an update to help hospitals measure how well the model works specifically for Covid-19 patients. The spokesperson said that work showed the model performed well and didn't need to be altered. Some hospitals are already using it with confidence, according to the spokesperson. But others, including Stanford, are now evaluating the model in their own Covid-19 patients.

In the months before the coronavirus pandemic, Li and his team had been working to validate the model on data from Stanford's general population of hospitalized patients. Now, they've switched their focus to test it on data from dozens of Covid-19 patients that have been hospitalized at Stanford, a cohort that, at least for now, may be too small to fully validate the model.

"We're essentially waiting as we get more and more Covid patients to see how well this works," Li said. He added that the model does not have to be completely accurate in order to prove useful in the way it's being deployed: to help inform high-stakes care decisions, not to automatically trigger them.

As of Tuesday afternoon, Stanford's main hospital was treating 19 confirmed Covid-19 patients, nine of whom were in the intensive care unit; another 22 people were under investigation for possible Covid-19, according to Stanford spokesperson Julie Greicius. The branch of Stanford's health system serving communities east of the San Francisco Bay had five confirmed Covid-19 patients, plus one person under investigation. And Stanford's hospital for children had one confirmed Covid-19 patient, plus seven people under investigation, Greicius said.

Stanford's hospitalization numbers are very fluid. Many people under investigation may turn out to not be infected, and many confirmed Covid-19 patients who have relatively mild symptoms may be quickly cleared for discharge to go home.

The model is meant to be used in patients who are hospitalized, but not yet in the ICU. It analyzes patients' data, including their vital signs, lab test results, medications, and medical history, and spits out a score on a scale from 0 to 100, with a higher number signaling elevated concern that the patient's condition is deteriorating.

Already, Li and his team have started to realize that a patient's score may be less important than how quickly and dramatically that score changes, he said.

"If a patient's score is 70, which is pretty high, but it's been 70 for the last 24 hours, that's actually a less concerning situation than if a patient scores 20 and then jumps up to 80 within 10 hours," he said.

Li and his colleagues are adamant that they will not set a specific score threshold that would automatically trigger a transfer to the ICU or prompt a patient to be intubated. Rather, they're trying to decide which scores or changes in scores should set off alarm bells that a clinician might need to gather more data or take a closer look at how a patient is doing.
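As an illustration of that kind of trajectory-based flagging, here is a minimal sketch in Python. It is not Epic's Deterioration Index or Stanford's workflow: the 50-point jump and 12-hour window are invented example values, and the function only flags a case for a clinician's closer look rather than triggering any action automatically.

```python
# A sketch of the logic described above: a rapid rise in the 0-100 score is
# treated as more alarming than a high but stable score. The thresholds are
# invented examples, not clinically validated cutoffs.
from datetime import datetime, timedelta

def flag_rapid_rise(scored_readings, jump=50, window_hours=12):
    """scored_readings: list of (timestamp, score) pairs, oldest first.
    Returns True if the score climbed by `jump` or more within `window_hours`,
    i.e. the case deserves a closer human look (not an automatic ICU transfer)."""
    window = timedelta(hours=window_hours)
    for i, (t_new, s_new) in enumerate(scored_readings):
        for t_old, s_old in scored_readings[:i]:
            if t_new - t_old <= window and s_new - s_old >= jump:
                return True
    return False

start = datetime(2020, 4, 1, 8, 0)
stable_high = [(start + timedelta(hours=h), 70) for h in range(0, 24, 4)]
sudden_jump = [(start, 20), (start + timedelta(hours=10), 80)]

print(flag_rapid_rise(stable_high))  # False: high but flat, less concerning
print(flag_rapid_rise(sudden_jump))  # True: 20 -> 80 within 10 hours
```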

"At the end of the day, it will still be the human experts who will make the call regarding whether or not the patient needs to go to the ICU or get intubated, except that this will now be augmented by a system that is smarter, more automated, more efficient," Li said.

Using an algorithm in this way has potential to minimize the time that clinicians spend manually reviewing charts, so they can focus on the work that most urgently demands their direct expertise, Li said. That could be especially important if Stanford's hospital sees a flood of Covid-19 patients in the coming weeks. Santa Clara County, where Stanford is located, had confirmed 890 cases of Covid-19 as of Monday afternoon. It's not clear how many of them have needed hospitalization, though San Francisco Bay Area hospitals have not so far faced the crush of Covid-19 patients that New York City hospitals are experiencing.

That could change. And if it does, Li said, the model will have to be integrated into operations in a way that will work if Stanford has several hundred Covid-19 patients in its hospital.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Read the rest here:
Stanford launches an accelerated test of AI to help with Covid-19 care - STAT

AI vs your career? What artificial intelligence will really do to the future of work – ZDNet

Jill Watson has been a teaching assistant (TA) at the Georgia Institute of Technology for five years now, helping students day and night with all manner of course-related inquiries. But for all the hard work she has done, she still can't qualify for outstanding TA of the year.

That's because Jill Watson, contrary to many students' belief, is not actually human.

Created back in 2015 by Ashok Goel, professor of computer science and cognitive science at the Institute, Jill Watson is an artificial system based on IBM's Watson artificial intelligence software. Her role consists of answering students' questions, a task she remarkably carries out with a 97% accuracy rate, for inquiries ranging from confirming the word count for an assignment to complex technical questions related to the content of the course.
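Jill Watson's internals are not public beyond being built on IBM's Watson, so the following Python sketch is only a toy illustration of the general idea of automatically answering routine course questions by matching them against ones a TA has already answered. The question bank, the bag-of-words similarity measure and the 0.5 confidence threshold are all invented for the example.

```python
# Toy question-answering by retrieval: match a new student question against
# previously answered ones and reuse the stored answer when the match is strong.
# This is an illustration only, not how Jill Watson actually works.
import math
from collections import Counter

def bag_of_words(text):
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    overlap = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return overlap / norm if norm else 0.0

# Hypothetical bank of questions a human TA has already answered.
answered = {
    "What is the word count for assignment 1?": "Assignment 1 should be 1,500 words.",
    "When is the midterm exam?": "The midterm is in week 8.",
}

def answer(question, threshold=0.5):
    query = bag_of_words(question)
    best = max(answered, key=lambda q: cosine(query, bag_of_words(q)))
    if cosine(query, bag_of_words(best)) >= threshold:
        return answered[best]
    return None  # below the confidence bar: hand it to a human TA

print(answer("what is the word count for assignment 1"))
```

A real deployment would need far better language understanding and a much higher confidence bar, which is where a system like IBM's Watson comes in.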

And she has certainly gone down well with students, many of whom, in 2015, were "flabbergasted" upon discovering that their favorite TA was not the serviceable, human lady that they expected, but in fact a cold-hearted machine.

What students found an amusing experiment is the sort of thing that worries many workers. Automation, we have been told time and again, will displace jobs; so are experiments like Jill Watson the first step towards unemployment for professionals?

SEE: How to implement AI and machine learning (ZDNet special report) | Download the report as a PDF (TechRepublic)

In fact, it's quite the contrary, Goel tells ZDNet. "Job losses are an important concern; Jill Watson, in a way, could replace me as a teacher," he said. "But among the professors who use her, that question has never come up, because there is a huge need for teachers globally. Instead of replacing teachers, Jill Watson augments and amplifies their work, and that is something we actually need."

The AI was originally developed for an online master's in computer science, where students interact with teachers via a web discussion forum. In the spring of 2015 alone, Goel noticed, 350 students posted 10,000 messages to the forum; answering all of their questions, he worked out, would have taken a real-life teacher a year of full-time work.

Jill Watson has only grown in popularity since 2015, said Goel, and she has now been deployed to a dozen other courses -- building her up for a new class takes less than ten hours. And while the artificial TA, for now, is only used at Georgia Institute of Technology, Jill Watson could change the education game if she were to be scaled globally. With UNESCO estimating that an additional 69 million teachers are needed to achieve sustainable development goals, the notion of 'augmenting' and 'amplifying' teachers' work could go a long way.

The automation of certain tasks is not such a scary prospect for those working in education. And perhaps neither is it a risk to the medical industry, where AI is already lending a helping hand with tasks ranging from disease diagnosis to prescription monitoring. It's a welcome support, rather than a looming threat, as the overwhelming majority of health services across the world report staff shortages and lack of resources even at the best of times.

But of course, not all professions are in dire need of more staff. For many workers, the advent of AI-powered technologies seems to be synonymous with permanent lay-off. Retailers are already using robotic fulfillment systems to pick orders in their warehouses. Google's project to build autonomous vehicles, Waymo, has launched its first commercial self-driving car service in the US, which in the long term will remove the need for a human taxi driver. Ford is even working on automating delivery services from start to finish, with a two-legged, two-armed robot that can walk around neighborhoods carrying parcels from the delivery vehicle right up to your doorstep.

Advancements in AI technology, therefore, don't bode well for all workers. "Nobody wants to be out of a job," says David McDonald, professor of human-centered design and engineering at the University of Washington. "Technological changes that impact our work, and thus, our ability to support ourselves and our families, are incredibly threatening."

"This suggests that when people hear stories saying that their livelihood is going to disappear," he says, "that they probably will not hear the part of the story that says there will be additional new jobs."

Consultancy McKinsey estimates that automation will cause up to 800 million individuals around the world to be displaced from their jobs by 2030, a statistic that will sound ominous, to say the least, to most of the workforce. But the firm's research also shows that in nearly all scenarios, and provided that there is sufficient investment and growth, most countries can expect to be at very near full employment by the same year.

The potential impact of artificial intelligence needs to be seen as part of the bigger picture. McKinsey highlighted that one of the countries that will face the largest displacement of workers is China, with up to 12% of the workforce needing to switch occupations. But although 12% seems like a lot, the consultancy noted, it's still relatively small compared with the tens of millions of Chinese who have moved out of agriculture in the past 25 years.

In other words, AI is only the latest chapter in the long history of technological progress, and as with all previous advancements, the new opportunities that AI opens up will balance out the skills that the technology makes obsolete. At least, that's the theory, and one that Brett Frischmann explores in Re-engineering Humanity, the book he co-authored. In his telling, it's a project that's been going on forever, and more recent innovations are building on the efficiencies pioneered by the likes of Frederick Winslow Taylor and Henry Ford.

"At one point, human beings used spears to fish. As we developed fishing technology, fewer people needed that skill and did other things," he says. "The idea that there is something dramatically different about AI has to be looked at carefully. Ultimately, data-driven systems, for example as a way to optimize factory outputs, are only a ramped-up version of Ford and Taylor's processes."

Seeing AI as simply the next chapter of tech is a common position among experts. The University of Washington's McDonald is equally convinced that in one form or another, we have been building systems to complement work "for over 50 years".

So where does the big AI scare come from? A large part of the problem, as often, comes down to misunderstanding. There is one point that Frischmann was determined to clarify: people do tend to think, and wrongly so, that the technology is a force that has its own agenda -- one that involves coming against us and stealing our jobs.

"It's really important for people to understand that the AI doesn't want anything," he said. "It's not a bad guy. It doesn't have a role of its own, or an agenda. Human beings are the ones that create, design, damage, deploy, control those systems."

In reality, according to McKinsey, fewer than 5% of occupations can be entirely automated using current technology. But over half of jobs could have 30% of their activities taken on by AI. Rather than robots taking over, therefore, it looks like the future will be about task-sharing.

Gartner previously reported that by 2022, one in five workers engaged in non-routine tasks will rely on AI to get work done. The research firm's analysts forecasted that combining human and artificial intelligence would be the way forward to maximize the value generated by the technology. AI, said Gartner, will assist workers in all types of jobs, from entry-level to highly-skilled.

The technology could become a virtual assistant, an intern, or another kind of robo-employee; in any case, it will lead to the development of an 'augmented' workforce, whose productivity will be enhanced by the tool.

For Gina Neff, associate professor at the Oxford Internet Institute, delegating tasks to AI will only bring about a brighter future for workers. "Humans are very good at lots of tasks, and there are lots of tasks that computers are better at than we are. I don't want to have to add large lists of sums by hand for my job, and thankfully I have a technology to help me do that."

"Increasingly, the conversation will shift towards thinking about what type of work we want to do, and how we can use the tools we have at our disposal to enhance our capacity, and make our work both productive and satisfying."

As machines take on tasks such as collecting and processing data, which they already carry out much better than humans, workers will find that they have more time to apply themselves to projects involving the cognitive skills (logical reasoning, creativity, communication) that robots (at least currently) lack.

Using technology to augment the human value of work is also the prospect that McDonald has in mind. "We should be using AI and complex computational systems to help people achieve their hopes, dreams and goals," he said. "That is, the AI systems we build should augment and extend our social and our cognitive skills and abilities."

There is a caveat. For AI systems to effectively bolster our hopes, dreams and goals, as McDonald said, it is crucial that the technology is designed from the start as a human-centered tool, one that is made specifically to fulfil the interests of the human workforce.

Human-centricity might be the next big challenge for AI. Some believe, however, that so far the technology has not done such a good job at ensuring that it enhances humans. In Re-engineering Humanity, Frischmann, for one, does not do AI any favours.

"Smart systems and automation, in my opinion, cause atrophy, more than enhancement," he argued. "The question of whether robots will take our jobs is the wrong one. What is more relevant is how the deployment of AI affects humans. Are we engineering unintelligent humans, rather than intelligent machines?"

It is certainly a fine line, and going forward, will be a delicate balancing act. For Oxford Internet Institute's Neff, making AI work in humans' best interest will require a whole new category of workers, which she called "translators", to act as intermediaries between the real world and the technology.

For Neff, translators won't be roboticists or "hot-shot data scientists", but workers who understand the situation "on the ground" well enough to see how the technology can be applied efficiently to complement human activity.

In an example of good behaviour, and of a way to bridge the gap between humans and technology, Amazon last year launched an initiative to support up to 1,300 employees who were being made redundant as the company deployed robots to its US fulfilment centres. The e-tailer announced that it would pay workers $10,000 to quit their jobs and set up their own delivery businesses, in order to tackle retail's infamous last-mile logistics challenge. Tens of thousands of workers have now applied to the program.

In a similar vein, Gartner recently suggested that HR departments start including a section dedicated to "robot resources", to better manage employees as they start working alongside robotic colleagues. "Getting an AI to collaborate with humans in the ways that we collaborate with others at work, every day, is incredibly hard," said McDonald. "One of the emerging areas in design is focused on designing AI that more effectively augments human capacity with respect for people."

SEE: 7 business areas ripe for an artificial intelligence boost

From human-centred design, to participatory design or user-experience design: for McDonald, humans have to be the main focus from the first stage of creating an AI.

And then there is the question of communication. At the Georgia Institute of Technology, Goel recognised that AI "has not done a good job" of selling itself to those who are not inside the experts' bubble.

"AI researchers like me cannot stay in our glass tower and develop tools while the rest of the world is anxious about the technology," he said. "We need to look at the social implications of what we do. If we can show that AI can solve previously unsolvable problems, then the value of AI will become clearer to everyone."

His dream for the future? To give every teacher in the world a Jill Watson assistant within five years, and, within the next decade, to give every parent access to one too, to help children with after-school questions. In fact, it's increasingly looking like every industry, not only education, will be getting its own version of a Jill Watson, and that we needn't worry that she will be coming for our jobs anytime soon.

More:
AI vs your career? What artificial intelligence will really do to the future of work - ZDNet