Stanford is Using Artificial Intelligence to Help Treat Coronavirus Patients – ETF Trends

Clinicians and researchers at Stanford University are developing ways that artificial intelligence can help identify which patients will require intensive care amid a surge in coronavirus cases. Rather than building an algorithm from scratch, Stanford's experts aimed to take existing technology and modify it for a seamless transition into clinical operations.

"The hardest part, the most important part of this work, is not the model development, but the workflow design, the change management, figuring out how you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist.

Per a STAT news report, the machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps such as prompting a nurse to check in more frequently or ordering tests that would ultimately help physicians make decisions about a COVID-19 patient's care.
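A threshold-triggered workflow of the kind described could look something like the sketch below. The score cutoffs, action strings, and function names are hypothetical illustrations, not details of Stanford's actual system:

```python
# Hypothetical sketch: map a model's risk score to clinical workflow prompts.
# Thresholds and actions are invented for illustration.

ESCALATION_THRESHOLD = 0.7  # assumed cutoff, not Stanford's real value
WATCH_THRESHOLD = 0.4

def triage_actions(risk_score: float) -> list:
    """Return the workflow steps triggered by a risk score in [0, 1]."""
    if risk_score >= ESCALATION_THRESHOLD:
        return ["notify physician", "order escalated-care assessment"]
    if risk_score >= WATCH_THRESHOLD:
        return ["prompt nurse for more frequent check-ins"]
    return []

print(triage_actions(0.85))  # ['notify physician', 'order escalated-care assessment']
print(triage_actions(0.50))  # ['prompt nurse for more frequent check-ins']
```

The point of such a design is that the model never makes the care decision itself; it only raises flags that route a patient to human attention sooner.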

As more technology flows into fighting the coronavirus pandemic, this can only open up opportunities for investors in healthcare-focused exchange-traded funds (ETFs).

ETF investors can look for opportunities in the Health Care Select Sector SPDR ETF (NYSEArca: XLV), Vanguard Health Care ETF (NYSEArca: VHT), and the iShares US Medical Devices ETF (IHI).

XLV seeks investment results that correspond generally to the Health Care Select Sector Index. The index includes companies from the following industries: pharmaceuticals; health care equipment & supplies; health care providers & services; biotechnology; life sciences tools & services; and health care technology.

VHT employs an indexing investment approach designed to track the performance of the MSCI US Investable Market Index (IMI)/Health Care 25/50, an index made up of stocks of large, mid-size, and small U.S. companies within the health care sector, as classified under the Global Industry Classification Standard (GICS).

IHI seeks to track the investment results of the Dow Jones U.S. Select Medical Equipment Index composed of U.S. equities in the medical devices sector. The underlying index includes medical equipment companies, including manufacturers and distributors of medical devices such as magnetic resonance imaging (MRI) scanners, prosthetics, pacemakers, X-ray machines, and other non-disposable medical devices.

Another fund to consider is the Robo Global Healthcare Technology and Innovation ETF (HTEC). HTEC seeks to provide investment results that, before fees and expenses, correspond generally to the price and yield performance of the ROBO Global Healthcare Technology and Innovation Index.

The fund will normally invest at least 80 percent of its total assets in securities of the index or in depositary receipts representing securities of the index. The index is designed to measure the performance of companies that have a portion of their business and revenue derived from the field of healthcare technology, and the potential to grow within this space through innovation and market adoption of such companies, products and services.

For more market trends, visit ETF Trends.


Artificial Intelligence turns a persons thoughts into text – Times of India

Scientists have developed an artificial intelligence system that can translate a person's thoughts into text by analysing their brain activity. Researchers at the University of California developed the AI to decipher up to 250 words in real time from a set of between 30 and 50 sentences.

The algorithm was trained using the neural signals of four women with electrodes implanted in their brains, which were already in place to monitor epileptic seizures. The volunteers repeatedly read sentences aloud while the researchers fed the brain data to the AI to unpick patterns that could be associated with individual words. The average word error rate across a repeated set was as low as 3%.

"A decade after speech was first decoded from human brain signals, accuracy and speed remain far below that of natural speech," states a paper detailing the research, published in the journal Nature Neuroscience. "We trained a recurrent neural network to encode each sentence-length sequence of neural activity into an abstract representation, and then to decode this representation, word by word, into an English sentence," the report states.

The system is, however, still a long way off being able to understand regular speech. "People could become telepathic to some degree, able to converse not only without speaking but without words," the report stated.
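The 3% figure above refers to word error rate (WER), the standard metric for speech and text decoders. As a quick illustration (not code from the study), WER is the word-level edit distance between the decoded sentence and the reference, divided by the reference length:

```python
# Minimal sketch of word error rate: Levenshtein distance over words,
# normalized by the number of words in the reference sentence.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown box"))  # 0.25
```

A 3% WER means roughly one word in 33 is substituted, dropped, or inserted relative to what the volunteer actually read.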


D-Wave makes its quantum computers free to anyone working on the coronavirus crisis – VentureBeat

D-Wave today made its quantum computers available for free to researchers and developers working on responses to the coronavirus (COVID-19) crisis. D-Wave partners and customers Cineca, Denso, Forschungszentrum Jülich, Kyocera, MDR, Menten AI, NEC, OTI Lumionics, QAR Lab at LMU Munich, Sigma-i, Tohoku University, and Volkswagen are also offering to help. They will provide access to their engineering teams with expertise on how to use quantum computers, formulate problems, and develop solutions.

Quantum computing leverages qubits to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing. D-Wave says the move to make access free is a response to a cross-industry request from the Canadian government for solutions to the COVID-19 pandemic. Free and unlimited commercial contract-level access to D-Wave's quantum computers is available in 35 countries across North America, Europe, and Asia via Leap, the company's quantum cloud service. Just last month, D-Wave debuted Leap 2, which includes a hybrid solver service and solves problems of up to 10,000 variables.
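Problems submitted to annealers like D-Wave's are typically phrased as QUBOs (quadratic unconstrained binary optimization). As a rough illustration of that problem format only, and not of D-Wave's Leap API, here is a brute-force classical solver for a toy QUBO:

```python
# Illustrative sketch: minimize a QUBO by exhaustive search. A real annealer
# tackles the same objective for thousands of variables; this toy version is
# only feasible for a handful.
from itertools import product

def solve_qubo(Q):
    """Exhaustively minimize sum of Q[(i, j)] * x[i] * x[j] over binary x."""
    variables = sorted({v for pair in Q for v in pair})
    best_x, best_energy = None, float("inf")
    for bits in product([0, 1], repeat=len(variables)):
        x = dict(zip(variables, bits))
        energy = sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())
        if energy < best_energy:
            best_x, best_energy = x, energy
    return best_x, best_energy

# Toy penalty encoding of the constraint "pick exactly one of x0, x1".
Q = {("x0", "x0"): -1, ("x1", "x1"): -1, ("x0", "x1"): 2}
print(solve_qubo(Q))  # one of the two symmetric optima, energy -1
```

Scheduling, logistics, and drug-design problems of the kind mentioned below are mapped onto exactly this sort of binary objective before being handed to the hardware or to Leap's hybrid solvers.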

D-Wave and its partners are hoping the free access to quantum processing resources and quantum expertise will help uncover solutions to the COVID-19 crisis. We asked the company if there were any specific use cases it is expecting to bear fruit. D-Wave listed analyzing new methods of diagnosis, modeling the spread of the virus, supply distribution, and pharmaceutical combinations. D-Wave CEO Alan Baratz added a few more to the list.

"The D-Wave system, by design, is particularly well-suited to solve a broad range of optimization problems, some of which could be relevant in the context of the COVID-19 pandemic," Baratz told VentureBeat. "Potential applications that could benefit from hybrid quantum/classical computing include drug discovery and interactions, epidemiological modeling, hospital logistics optimization, medical device and supply manufacturing optimization, and beyond."

Earlier this month, Murray Thom, D-Wave's VP of software and cloud services, told us quantum computing and machine learning are "extremely well matched." In today's press release, Prof. Dr. Kristel Michielsen from the Jülich Supercomputing Centre seemed to suggest a similar notion: "To make efficient use of D-Wave's optimization and AI capabilities, we are integrating the system into our modular HPC environment."


Can Quantum Computing Be the New Buzzword – Analytics Insight

Quantum mechanics wrote its chapter in the history of the early 20th century. With its regular binary computing twin going out of style, quantum mechanics led quantum computing to be the new belle of the ball! While the memory used in a classical computer encodes binary bits, one and zero, quantum computers use qubits (quantum bits). A qubit is not confined to a two-state solution; it can also exist in superposition, i.e., a qubit can be 0, 1, or both 0 and 1 at the same time.

Hence it can perform many calculations in parallel, owing to its ability to pursue simultaneous probabilities through superposition and to manipulate them with magnetic fields. A qubit's coefficients, which indicate how much zero-ness and one-ness it has, are complex numbers, with both real and imaginary parts. This provides a huge technical edge over conventional computing. The beauty of this is that if you have n qubits, you can have a superposition of 2^n states, or bits of information, simultaneously.
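The amplitude picture above can be made concrete in a few lines. This is a generic illustration of qubit arithmetic, not tied to any particular quantum SDK:

```python
# Sketch: a single qubit as two complex amplitudes. Squared magnitudes give
# measurement probabilities, and an n-qubit register needs 2**n amplitudes.
import math

alpha = complex(1 / math.sqrt(2), 0)  # amplitude of |0>
beta = complex(0, 1 / math.sqrt(2))   # amplitude of |1>, purely imaginary

p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
assert math.isclose(p0 + p1, 1.0)     # a valid state is normalized
print(round(p0, 2), round(p1, 2))     # 0.5 0.5

n_qubits = 10
print(2 ** n_qubits)  # 1024 amplitudes describe a 10-qubit superposition
```

Note that the two amplitudes here differ only in phase (real versus imaginary); they yield identical measurement probabilities, which is exactly why the complex coefficients carry more information than the probabilities alone.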

Another magic up its sleeve is that qubits are capable of pairing, which is referred to as entanglement. Here, the state of one qubit cannot be described independently of the state of the others, so measurement outcomes are correlated no matter how far apart the qubits are (although, contrary to a common misconception, this correlation cannot be used for instantaneous communication).
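The correlation can be illustrated by sampling the simplest entangled state, a Bell pair. This is a classical simulation of the measurement statistics only, written for illustration:

```python
# Sketch: sampling the Bell state (|00> + |11>)/sqrt(2). Only the outcomes
# 00 and 11 have nonzero amplitude, so the two qubits always agree.
import random

def measure_bell_pair():
    outcome = random.choice(["00", "11"])  # each outcome has probability 1/2
    return outcome[0], outcome[1]

for _ in range(5):
    a, b = measure_bell_pair()
    assert a == b  # the two measurements agree every single time
```

Each individual result is still a fair coin flip, which is why the correlation, striking as it is, carries no message on its own.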

To quote American theoretical physicist John Wheeler: "If you are not completely confused by quantum mechanics, you do not understand it." So, without a doubt, it is safe to say that even quantum computing has a few pitfalls. First, qubits tend to lose the information they contain, and also lose their entanglement; in other words, they decohere. Second, quantum rotations are imperfect. Both lead to a loss of information within a few microseconds.

Ultimately, quantum computing is the trump card, as it promises to be a disruptive technology with dramatic speed improvements. It will enable systems to solve complex higher-order mathematical problems that previously took months to compute, investigate material properties, design new materials, study superconductivity, aid in drug discovery via simulation, and probe new chemical reactions.

This quantum shift in the history of computer science could also pave the way for encrypted communication (as quantum keys can be neither copied nor hacked) that improves on blockchain technology, provide better designs for solar panels, predict financial markets, accelerate big data mining, take artificial intelligence to new heights, sharpen meteorological forecasts, and usher in a much-anticipated age of the quantum internet. According to scientists, future advances could even help find a cure for Alzheimer's.

The ownership and effective employment of a quantum computer could change the political and technological dynamics of the world. Computing power, in the end, is power, whether personal, national, or globally strategic. In short, a quantum computer could be an existential threat to a nation that hasn't got one. At the moment, Google, IBM, Intel, and D-Wave are pursuing this technology. And while there are scientific minds who don't yet believe in the potential of quantum computing, unless you are a time-traveler like Marty McFly in the Back to the Future series or one of the Doctors from Doctor Who, no one can say what the future holds.


Who Will Mine Cryptocurrency in the Future – Quantum Computers or the Human Body? – Coin Idol

Apr 01, 2020 at 09:31 // News

Companies including Microsoft, IBM and Google are racing to come up with cheap and effective mining solutions that improve cost and energy efficiency. Lots of fuss has been made around quantum computing and its potential for mining. Now the time has come for a new solution: mining with the help of human body activity.

While quantum computers are said to be able to hack bitcoin mining algorithms, using physical activity for the process is quite a new and extraordinary thing. The question is, which technology turns out to be more efficient?

Currently, with traditional cryptocurrency mining methods, the reward for mining a bitcoin block is around 12.5 bitcoins, which, at $4,000 per BTC, should quickly pay off the cost of mining after a few blocks.

Consequently, the best mining method for now is to keep trying random numbers and observe which one hashes to a number that isn't more than the target difficulty. This is one of the reasons mining pools have arisen, where multiple PCs work in parallel to look for the proper solution to the problem; if one of the PCs finds the solution, the pool receives the reward, which is then shared among all the miners.
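The guess-and-check loop described above can be sketched as a toy proof-of-work miner. This is a deliberate simplification: the "block header" string and the easy difficulty are invented, and real Bitcoin mining double-hashes an 80-byte binary header against a vastly harder target:

```python
# Simplified proof-of-work sketch: increment a nonce until the SHA-256 hash
# of (data + nonce) falls below an artificially easy target.
import hashlib

def mine(block_data: str, difficulty_bits: int = 16) -> int:
    target = 2 ** (256 - difficulty_bits)  # smaller target = harder puzzle
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # this nonce "solves" the block
        nonce += 1

print(mine("example block header"))
```

With 16 difficulty bits the loop succeeds after roughly 65,000 hashes; Bitcoin's real network performs on the order of a quintillion times more work per block, which is exactly the search that pooled machines, and potentially quantum computers, try to speed up.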

Quantum computers possess more capacity and might potentially be able to significantly speed up mining while eliminating the need for numerous machines. Thus, it can improve both energy efficiency and the speed of mining.

In late 2019, Google released a quantum processor called Sycamore, many times faster than existing supercomputers. There was even a post on Medium claiming that this new processor could mine all remaining bitcoins in as little as two seconds. Some time later the post was deleted due to an error in its calculations, according to the Bitcoinist news outlet.

Despite quantum computing having the potential to increase the efficiency of mining, its cost is close to stratospheric. It would probably take time before someone is able to afford it.

Meanwhile, another global tech giant, Microsoft, offers a completely new and extraordinary solution: mining cryptos using a person's brain waves or body temperature. As coinidol.com, a world blockchain news outlet, has reported, the company has filed a patent for a groundbreaking system that can mine digital currencies using data collected from human beings as they view ads or do exercises.

The IT giant disclosed that sensors could identify and diagnose any activity connected with the particular piece(s) of work, such as the time taken to read advertisements, and convert it into digital information readable by a computing device to perform computation, in the same manner as a conventional proof-of-work (PoW) system. Some tasks would either decrease or increase computational energy accordingly, based on the amount of information produced by the user's activity.

So far, there is no signal as to when Microsoft will start developing the system, and it is still uncertain whether it will be built on the company's own blockchain network. Quantum computing also needs time to be fully developed and deployed.

However, both solutions bear significant potential for transforming the entire mining industry. While quantum computing could boost the existing mining mechanism and eliminate energy-hungry mining farms, Microsoft's new initiative could disrupt the industry and make it look different altogether.

Which of these two solutions will turn out to be more viable? We will see over time. What do you think about these mining solutions? Let us know in the comments below!


The Schizophrenic World Of Quantum Interpretations – Forbes


To the average person, most quantum theories sound strange, while others seem downright bizarre. There are many diverse theories that try to explain the intricacies of quantum systems and how our interactions affect them. And, not surprisingly, each approach is supported by its group of well-qualified and well-respected scientists. Here, we'll take a look at the two most popular quantum interpretations.

Does it seem reasonable that you can alter a quantum system just by looking at it? What about creating multiple universes by merely making a decision? Or what if your mind split because you measured a quantum system?

You might be surprised that all or some of these things might routinely happen millions of times every day without you even realizing it.

But before your brain gets twisted into a knot, let's cover a little history and a few quantum basics.

The birth of quantum mechanics

Classical physics describes how large objects behave and how they interact with the physical world. On the other hand, quantum theory is all about the extraordinary and inexplicable interaction of small particles on the invisible scale of such things as atoms, electrons, and photons.

Max Planck, a German theoretical physicist, first introduced quantum theory in 1900, an innovation that won him the Nobel Prize in Physics in 1918. Between 1925 and 1930, several scientists worked to clarify and understand quantum theory. Among them were Werner Heisenberg and Erwin Schrödinger, both of whom mathematically expanded quantum mechanics to accommodate experimental findings that couldn't be explained by classical physics.

Heisenberg, along with Max Born and Pascual Jordan, created a formulation of quantum mechanics called matrix mechanics. This concept interpreted the physical properties of particles as matrices that evolve in time. A few months later, Erwin Schrödinger created his famous wave mechanics.

Although Heisenberg and Schrödinger worked independently of each other, and although their theories were very different in presentation, both theories were essentially the same mathematically. Of the two formulations, Schrödinger's was more popular than Heisenberg's because it boiled down to familiar differential equations.

While today's physicists still use these formulations, they continue to debate their actual meaning.

First weirdness

A good place to start is Schrödinger's equation.

Erwin Schrödinger's equation provides a mathematical description of all possible locations and characteristics of a quantum system as it changes over time. This description is called the system's wave function. According to the most common quantum theory, everything has a wave function. The quantum system could be a particle, such as an electron or a photon, or even something larger.

Schrödinger's equation won't tell you the exact location of a particle. It only reveals the probability of finding the particle at a given location. The probability of a particle being in many places or in many states at the same time is called its superposition. Superposition is one of the elements of quantum computing that makes it so powerful.

Almost everyone has heard about Schrödinger's cat in a box. Simplistically, ignoring the radiation gadgets, while the cat is in the closed box, it is in a superposition of being both dead and alive at the same time. Opening the box causes the cat's wave function to collapse into one of two states, and you'll find the cat either alive or dead.

There is little dispute among the quantum community that Schrödinger's equation accurately reflects how a quantum wave function evolves. However, the wave function itself, as well as the cause and consequences of its collapse, are all subjects of debate.

David Deutsch is a brilliant British quantum physicist at the University of Oxford. In his book The Fabric of Reality, he said: "Being able to predict things or to describe them, however accurately, is not at all the same thing as understanding them. Facts cannot be understood just by being summarized in a formula, any more than by being listed on paper or committed to memory."

The Copenhagen interpretation

Quantum theories use the term "interpretation" for two reasons. One, it is not always obvious what a particular theory means without some form of translation. And, two, we are not sure we understand what goes on between a wave function's starting point and where it ends up.

There are many quantum interpretations. The most popular is the Copenhagen interpretation, named for the city where Werner Heisenberg and Niels Bohr developed their quantum theory.

Werner Heisenberg (left) with Niels Bohr at a conference in Copenhagen in 1934.

Bohr believed that the wave function of a quantum system contained all possible quantum states.However, when the system was observed or measured, its wave function collapsed into a single state.

What's unique about the Copenhagen interpretation is that it makes the outside observer responsible for the wave function's ultimate fate. Almost magically, a quantum system, with all its possible states and probabilities, has no connection to the physical world until an observer interacts with or measures the system. The measurement causes the wave function to collapse into one of its many states.

You might wonder what happens to all the other quantum states present in the wave function, as described by the Copenhagen interpretation, before it collapsed. The Copenhagen interpretation offers no explanation for that mystery. However, there is a quantum interpretation that does provide an answer: the Many-Worlds Interpretation, or MWI.

Billions of you?

Because the many-worlds interpretation is one of the strangest quantum theories, it has become central to the plot of many science fiction novels and movies. At one time, MWI was an outlier within the quantum community, but many leading physicists now believe it is the only theory consistent with quantum behavior.

The MWI originated in a Princeton doctoral thesis written by a young physicist named Hugh Everett in the late 1950s. Even though Everett derived his theory using sound quantum fundamentals, it was severely criticized and ridiculed by most of the quantum community. Even Everett's academic adviser at Princeton, John Wheeler, tried to distance himself from his student. Everett became despondent over the harsh criticism and eventually left quantum research to work for the government as a mathematician.

The theory proposes that the universe has a single, large wave function that follows Schrödinger's equation. Unlike the wave function in the Copenhagen interpretation, the MWI universal wave function doesn't collapse.

Everything in the universe is quantum, including ourselves. As we interact with parts of the universe, we become entangled with them. As the universal wave function evolves, some of our superposition states decohere. When that happens, our reality becomes separated from the other possible outcomes associated with that event. Just to be clear, the universe doesn't split and create a new universe: the probabilities of all realities, or universes, already exist in the universal wave function, all occupying the same space-time.

Schrödinger's cat in the many-worlds interpretation, with universe branching: a visualization of the separation of the universe due to two superposed and entangled quantum mechanical states.

In the Copenhagen interpretation, by opening the box containing Schrödinger's cat, you cause the wave function to collapse into one of its possible states, either alive or dead.

In the Many-Worlds interpretation, the wave function doesn't collapse. Instead, all probabilities are realized. In one universe, you see the cat alive, and in another universe the cat is dead.

Right or wrong decisions become right and wrong decisions

Decisions are also events that trigger the separation of multiple universes. We make thousands of big and little choices every day. Have you ever wondered what your life would be like had you made different decisions over the years?

According to the Many-Worlds interpretation, you and all those unrealized decisions exist in different universes, because all possible outcomes exist in the universal wave function. For every decision you make, at least two versions of "you" evolve on the other side of that decision: one universe exists for the choice you made, and one for the choice you didn't make.

If the Many-Worlds Interpretation is correct, then right now, near-infinite versions of you are living different and independent lives in their own universes. Moreover, all of those universes overlay each other and occupy the same space and time.

It is also likely that you are currently living in a branch universe spun off from a decision made by a previous version of yourself, perhaps millions or billions of iterations ago. You have all the old memories of your pre-decision self, but as you move forward in your own universe, you live independently and create your own unique, new memories.

A Reality Check

Which interpretation is correct? Copenhagen or Many-Worlds? Maybe neither. But because quantum mechanics is so strange, perhaps both are correct. It is also possible that a valid interpretation is yet to be expressed. In the end, correct or not, quantum interpretations are just plain fun to think about.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Disclosure: Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including Amazon.com, Advanced Micro Devices, Apstra, ARM Holdings, Aruba Networks, AWS, A-10 Strategies, Bitfusion, Cisco Systems, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Digital Optics, Dreamchain, Echelon, Ericsson, Foxconn, Frame, Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Google, HP Inc., Hewlett Packard Enterprise, Huawei Technologies, IBM, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, MACOM (Applied Micro), MapBox, Mavenir, Mesosphere, Microsoft, National Instruments, NetApp, NOKIA, Nortek, NVIDIA, ON Semiconductor, ONUG, OpenStack Foundation, Panasas, Peraso, Pixelworks, Plume Design, Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Samsung Electronics, Silver Peak, SONY, Springpath, Sprint, Stratus Technologies, Symantec, Synaptics, Syniverse, TensTorrent, Tobii Technology, Twitter, Unity Technologies, Verizon Communications, Vidyo, Wave Computing, Wellsmith, Xilinx, Zebra, which may be cited in this article.


Research Shows Evidence of Broken Time-Reversal Symmetry in Superconducting UPt3 – HPCwire

March 30, 2020 Researchers at the University of Notre Dame, in partnership with those at Northwestern University, are a step closer to understanding how superconductors can be improved for reliability in future quantum computers.

The team, led by Morten Eskildsen, professor in the Department of Physics at Notre Dame, and William Halperin from Northwestern University, achieved a new discovery in the field of topological superconductivity. These materials are at the forefront of research in quantum computing and quantum sensing.

The team demonstrated in their paper, published in March in Nature Physics, that the superconducting compound UPt3 breaks time-reversal symmetry: superconducting electrons spontaneously circulate around a specific axis within the crystalline structure of the material.

The researchers used neutron-scattering experiments completed at Oak Ridge National Laboratory in Tennessee and at the Institut Laue-Langevin in Grenoble, France, to make the discovery, which had been predicted but had not been unambiguously detected before.

"Topological properties of materials are being studied intensely because of their fundamental as well as practical importance," Eskildsen said. "A classic example of topology is a Möbius strip that has only one surface and one edge. Here the twist is a robust feature that can only be undone by cutting the strip."

In solids, this concept is understood abstractly, referring to electronic properties that cannot be undone in a smooth manner. This provides what is known as topological protection, and is an avenue to increase reliability in novel electronic devices for quantum computation. Importantly, it is the understanding gained from the neutron scattering experiments, rather than the particular material, that will benefit the development of quantum devices.

The measurements were carried out at extremely low temperatures and high magnetic fields. The group looked at the properties of electric "tornadoes," or vortices, in the material, and found a difference in their behavior depending on how the superconducting state was prepared. Specifically, their results show that the superconducting state in UPt3 can be assigned a chirality, or handedness, and that this can be controlled by suitable magnetic field protocols.

In addition to Eskildsen, key collaborators included Keenan Avers and James Sauls from Northwestern University, as well as researchers from Oak Ridge National Laboratory, Institut Laue-Langevin, and the Laboratory for Neutron Scattering and Imaging at the Paul Scherrer Institute in Switzerland.

The research at Notre Dame was supported by a U.S. Department of Energy Basic Energy Science grant. Research of the Northwestern team was supported by a U.S. Department of Energy Basic Energy Science grant and the Northwestern-Fermilab Center for Applied Physics and Superconducting Technologies.


About Chicago Quantum Exchange (CQE)

The Chicago Quantum Exchange (CQE) is an intellectual hub and community of researchers with the common goal of advancing academic and industrial efforts in the science and engineering of quantum information across CQE members, partners, and our region. The hub aims to promote the exploration of quantum information technologies and the development of new applications. The CQE facilitates interactions between research groups of its member and partner institutions and provides an avenue for developing and fostering collaborations, joint projects, and information exchange.

Source: Chicago Quantum Exchange (CQE)


AI vs your career? What artificial intelligence will really do to the future of work – ZDNet

Jill Watson has been a teaching assistant (TA) at the Georgia Institute of Technology for five years now, helping students day and night with all manner of course-related inquiries. But for all the hard work she has done, she still can't qualify for outstanding TA of the year.

That's because Jill Watson, contrary to many students' belief, is not actually human.

Created back in 2015 by Ashok Goel, professor of computer science and cognitive science at the Institute, Jill Watson is an artificial system based on IBM's Watson artificial intelligence software. Her role consists of answering students' questions, a task she carries out with a remarkable 97% accuracy rate, for inquiries ranging from confirming the word count for an assignment to complex technical questions related to the content of the course.

And she has certainly gone down well with students, many of whom, in 2015, were "flabbergasted" upon discovering that their favorite TA was not the serviceable, human lady that they expected, but in fact a cold-hearted machine.

What students found an amusing experiment is the sort of thing that worries many workers. Automation, we have been told time and again, will displace jobs; so are experiments like Jill Watson the first step towards unemployment for professionals?


In fact, it's quite the contrary, Goel tells ZDNet. "Job losses are an important concern; Jill Watson, in a way, could replace me as a teacher," he said. "But among the professors who use her, that question has never come up, because there is a huge need for teachers globally. Instead of replacing teachers, Jill Watson augments and amplifies their work, and that is something we actually need."

The AI was originally developed for an online master's in computer science, where students interact with teachers via a web discussion forum. In the spring of 2015 alone, Goel noticed, 350 students posted 10,000 messages to the forum; answering all of their questions, he worked out, would have taken a real-life teacher a year of full-time work.

Jill Watson has only grown in popularity since 2015, said Goel, and she has now been deployed to a dozen other courses -- building her up for a new class takes less than ten hours. And while the artificial TA, for now, is only used at Georgia Institute of Technology, Jill Watson could change the education game if she were to be scaled globally. With UNESCO estimating that an additional 69 million teachers are needed to achieve sustainable development goals, the notion of 'augmenting' and 'amplifying' teachers' work could go a long way.

The automation of certain tasks is not such a scary prospect for those working in education. And perhaps neither is it a risk to the medical industry, where AI is already lending a helping hand with tasks ranging from disease diagnosis to prescription monitoring. It's a welcome support, rather than a looming threat, as the overwhelming majority of health services across the world report staff shortages and lack of resources even at the best of times.

But of course, not all professions are in dire need of more staff. For many workers, the advent of AI-powered technologies seems to be synonymous with permanent lay-off. Retailers are already using robotic fulfillment systems to pick orders in their warehouses. Google's project to build autonomous vehicles, Waymo, has launched its first commercial self-driving car service in the US, which in the long term will remove the need for a human taxi driver. Ford is even working on automating delivery services from start to finish, with a two-legged, two-armed robot that can walk around neighborhoods carrying parcels from the delivery vehicle right up to your doorstep.

Advancements in AI technology, therefore, don't bode well for all workers. "Nobody wants to be out of a job," says David McDonald, professor of human-centered design and engineering at the University of Washington. "Technological changes that impact our work, and thus, our ability to support ourselves and our families, are incredibly threatening."

"This suggests that when people hear stories saying that their livelihood is going to disappear," he says, "that they probably will not hear the part of the story that says there will be additional new jobs."

Consultancy McKinsey estimates that automation will cause up to 800 million individuals around the world to be displaced from their jobs by 2030, a statistic that will sound ominous, to say the least, to most of the workforce. But the firm's research also shows that in nearly all scenarios, and provided that there is sufficient investment and growth, most countries can expect to be at or very near full employment by the same year.

The potential impact of artificial intelligence needs to be seen as part of the bigger picture. McKinsey highlighted that one of the countries that will face the largest displacement of workers is China, with up to 12% of the workforce needing to switch occupations. But although 12% seems like a lot, the consultancy noted, it's still relatively small compared with the tens of millions of Chinese who have moved out of agriculture in the past 25 years.

In other words, AI is only the latest chapter in the long history of technological progress, and as with all previous advancements, the new opportunities that AI opens up will balance out the skills that the technology makes obsolete. At least that's the theory, and one that Brett Frischmann explores in the book he co-authored, Re-engineering Humanity. It's a project that has been going on forever, and more recent innovations are building on the efficiencies pioneered by the likes of Frederick Winslow Taylor and Henry Ford.

"At one point, human beings used spears to fish. As we developed fishing technology, fewer people needed that skill and did other things," he says. "The idea that there is something dramatically different about AI has to be looked at carefully. Ultimately, data-driven systems, for example as a way to optimize factory outputs, are only a ramped-up version of Ford and Taylor's processes."

Seeing AI as simply the next chapter of tech is a common position among experts. The University of Washington's McDonald is equally convinced that in one form or another, we have been building systems to complement work "for over 50 years".

So where does the big AI scare come from? A large part of the problem, as is often the case, comes down to misunderstanding. There is one point that Frischmann was determined to clarify: people tend to think, wrongly, that the technology is a force with its own agenda -- one that involves coming for us and stealing our jobs.

"It's really important for people to understand that the AI doesn't want anything," he said. "It's not a bad guy. It doesn't have a role of its own, or an agenda. Human beings are the ones that create, design, damage, deploy, control those systems."

In reality, according to McKinsey, fewer than 5% of occupations can be entirely automated using current technology. But over half of jobs could have 30% of their activities taken on by AI. Rather than robots taking over, therefore, it looks like the future will be about task-sharing.

Gartner previously reported that by 2022, one in five workers engaged in non-routine tasks will rely on AI to get work done. The research firm's analysts forecasted that combining human and artificial intelligence would be the way forward to maximize the value generated by the technology. AI, said Gartner, will assist workers in all types of jobs, from entry-level to highly-skilled.

The technology could become a virtual assistant, an intern, or another kind of robo-employee; in any case, it will lead to the development of an 'augmented' workforce, whose productivity will be enhanced by the tool.

For Gina Neff, associate professor at the Oxford Internet Institute, delegating tasks to AI will only bring about a brighter future for workers. "Humans are very good at lots of tasks, and there are lots of tasks that computers are better at than we are. I don't want to have to add large lists of sums by hand for my job, and thankfully I have a technology to help me do that."

"Increasingly, the conversation will shift towards thinking about what type of work we want to do, and how we can use the tools we have at our disposal to enhance our capacity, and make our work both productive and satisfying."

As machines take on tasks such as collecting and processing data, which they already carry out much better than humans, workers will find that they have more time to apply themselves to projects involving the cognitive skills that robots (at least currently) lack: logical reasoning, creativity, communication.

Using technology to augment the human value of work is also the prospect that McDonald has in mind. "We should be using AI and complex computational systems to help people achieve their hopes, dreams and goals," he said. "That is, the AI systems we build should augment and extend our social and our cognitive skills and abilities."

There is a caveat. For AI systems to effectively bolster our hopes, dreams and goals, as McDonald said, it is crucial that the technology is designed from the start as a human-centered tool, one made specifically to fulfil the interests of the human workforce.

Human-centricity might be the next big challenge for AI. Some believe, however, that so far the technology has not done such a good job at ensuring that it enhances humans. In Re-engineering Humanity, Frischmann, for one, does not do AI any favours.

"Smart systems and automation, in my opinion, cause atrophy, more than enhancement," he argued. "The question of whether robots will take our jobs is the wrong one. What is more relevant is how the deployment of AI affects humans. Are we engineering unintelligent humans, rather than intelligent machines?"

It is certainly a fine line, and going forward, will be a delicate balancing act. For Oxford Internet Institute's Neff, making AI work in humans' best interest will require a whole new category of workers, which she called "translators", to act as intermediaries between the real world and the technology.

For Neff, translators won't be roboticists or "hot-shot data scientists", but workers who understand the situation "on the ground" well enough to see how the technology can be applied efficiently to complement human activity.

In an example of good practice, and of one way to bridge humans and technology, Amazon last year launched an initiative to help retrain up to 1,300 employees who were being made redundant as the company deployed robots to its US fulfilment centres. The e-tailer announced that it would pay workers $10,000 to quit their jobs and set up their own delivery businesses, in order to tackle retail's infamous last-mile logistics challenge. Tens of thousands of workers have since applied to the program.

In a similar vein, Gartner recently suggested that HR departments start including a section dedicated to "robot resources", to better manage employees as they begin working alongside robotic colleagues. "Getting an AI to collaborate with humans in the ways that we collaborate with others at work, every day, is incredibly hard," said McDonald. "One of the emerging areas in design is focused on designing AI that more effectively augments human capacity with respect for people."


From human-centred design to participatory design and user-experience design: for McDonald, humans have to be the main focus from the first stage of creating an AI.

And then there is the question of communication. At the Georgia Institute of Technology, Goel recognised that AI "has not done a good job" of selling itself to those who are not inside the experts' bubble.

"AI researchers like me cannot stay in our glass tower and develop tools while the rest of the world is anxious about the technology," he said. "We need to look at the social implications of what we do. If we can show that AI can solve previously unsolvable problems, then the value of AI will become clearer to everyone."

His dream for the future? To get every teacher in the world a Jill Watson assistant within five years, and, in the next decade, to give every parent access to one too, to help children with after-school questions. In fact, it increasingly looks like every industry, not only education, will be getting its own version of Jill Watson, and we needn't worry that she will be coming for our jobs anytime soon.

More:
AI vs your career? What artificial intelligence will really do to the future of work - ZDNet

Stanford launches an accelerated test of AI to help with Covid-19 care – STAT

In the heart of Silicon Valley, Stanford clinicians and researchers are exploring whether artificial intelligence could help manage a potential surge of Covid-19 patients and identify patients who will need intensive care before their condition rapidly deteriorates.

The challenge is not to build the algorithm (the Stanford team simply picked an off-the-shelf tool already on the market) but rather to determine how to carefully integrate it into already-frenzied clinical operations.

"The hardest part, the most important part of this work is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford's Institute for Human-Centered Artificial Intelligence.


The effort is primed to be an accelerated test of whether hospitals can smoothly incorporate AI tools into their workflows. That process, typically slow and halting, is being sped up at hospitals all over the world in the face of the coronavirus pandemic.

The machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps, such as prompting a nurse to check in more frequently or ordering tests, that would ultimately help physicians make decisions about a Covid-19 patient's care.


The model, known as the Deterioration Index, was built and is marketed by Epic, the big electronic health records vendor. Li and his team picked that particular algorithm out of convenience, because it's already integrated into their EHR, Li said. Epic trained the model on data from hospitalized patients who did not have Covid-19, a limitation that raises questions about whether it will be generalizable to patients with a novel disease whose data it was never intended to analyze.

Nearly 50 health systems, which cover hundreds of hospitals, have been using the model to identify hospitalized patients with a wide range of medical conditions who are at the highest risk of deterioration, according to a spokesperson for Epic. The company recently built an update to help hospitals measure how well the model works specifically for Covid-19 patients. The spokesperson said that work showed the model performed well and didn't need to be altered. Some hospitals are already using it with confidence, according to the spokesperson. But others, including Stanford, are now evaluating the model on their own Covid-19 patients.

In the months before the coronavirus pandemic, Li and his team had been working to validate the model on data from Stanford's general population of hospitalized patients. Now, they've switched their focus to test it on data from the dozens of Covid-19 patients who have been hospitalized at Stanford, a cohort that, at least for now, may be too small to fully validate the model.

"We're essentially waiting as we get more and more Covid patients to see how well this works," Li said. He added that the model does not have to be completely accurate to prove useful in the way it's being deployed: to help inform high-stakes care decisions, not to automatically trigger them.

As of Tuesday afternoon, Stanford's main hospital was treating 19 confirmed Covid-19 patients, nine of whom were in the intensive care unit; another 22 people were under investigation for possible Covid-19, according to Stanford spokesperson Julie Greicius. The branch of Stanford's health system serving communities east of the San Francisco Bay had five confirmed Covid-19 patients, plus one person under investigation. And Stanford's hospital for children had one confirmed Covid-19 patient, plus seven people under investigation, Greicius said.

Stanford's hospitalization numbers are very fluid. Many people under investigation may turn out not to be infected, and many confirmed Covid-19 patients with relatively mild symptoms may be quickly cleared for discharge to go home.

The model is meant to be used on patients who are hospitalized but not yet in the ICU. It analyzes patients' data, including their vital signs, lab test results, medications, and medical history, and spits out a score on a scale from 0 to 100, with a higher number signaling elevated concern that a patient's condition is deteriorating.

Already, Li and his team have started to realize that a patient's score may be less important than how quickly and dramatically that score changes, he said.

"If a patient's score is 70, which is pretty high, but it's been 70 for the last 24 hours, that's actually a less concerning situation than if a patient scores 20 and then jumps up to 80 within 10 hours," he said.

Li and his colleagues are adamant that they will not set a specific score threshold that would automatically trigger a transfer to the ICU or prompt a patient to be intubated. Rather, they're trying to decide which scores, or changes in scores, should set off alarm bells that a clinician might need to gather more data or take a closer look at how a patient is doing.
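The rate-of-change logic Li describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not Epic's or Stanford's actual implementation: the function name, the 40-point jump threshold, and the 12-hour window are all invented for the example, since Stanford deliberately avoids committing to a fixed cutoff.

```python
from datetime import datetime, timedelta

# Illustrative numbers only -- Stanford has not published thresholds.
JUMP_THRESHOLD = 40          # score increase that warrants a closer look
WINDOW = timedelta(hours=12) # how far back to look for the rise

def should_alert(history):
    """history: chronological list of (timestamp, score) tuples on the
    0-100 Deterioration Index scale. Flags a sharp recent rise, mirroring
    Li's example: a steady 70 is less concerning than 20 -> 80 in 10 hours."""
    latest_time, latest_score = history[-1]
    # Keep only readings inside the recent window.
    recent = [(t, s) for t, s in history if latest_time - t <= WINDOW]
    if len(recent) < 2:
        return False
    lowest = min(s for _, s in recent)
    return latest_score - lowest >= JUMP_THRESHOLD

t0 = datetime(2020, 4, 1, 8, 0)
steady = [(t0 + timedelta(hours=h), 70) for h in range(24)]
rapid = [(t0, 20), (t0 + timedelta(hours=10), 80)]
print(should_alert(steady))  # False: high but stable
print(should_alert(rapid))   # True: 60-point jump within the window
```

Note that, consistent with the team's intent, a flag like this would only prompt a clinician to look closer; it would not itself trigger an ICU transfer or intubation.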

"At the end of the day, it will still be the human experts who will make the call regarding whether or not the patient needs to go to the ICU or get intubated, except that this will now be augmented by a system that is smarter, more automated, more efficient," Li said.

Using an algorithm in this way has the potential to minimize the time clinicians spend manually reviewing charts, so they can focus on the work that most urgently demands their direct expertise, Li said. That could be especially important if Stanford's hospital sees a flood of Covid-19 patients in the coming weeks. Santa Clara County, where Stanford is located, had confirmed 890 cases of Covid-19 as of Monday afternoon. It's not clear how many of them have needed hospitalization, though San Francisco Bay Area hospitals have not so far faced the crush of Covid-19 patients that New York City hospitals are experiencing.

That could change. And if it does, Li said, the model will have to be integrated into operations in a way that will work if Stanford has several hundred Covid-19 patients in its hospital.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

Read the rest here:
Stanford launches an accelerated test of AI to help with Covid-19 care - STAT

Enterprise Artificial Intelligence Along With Telehealth And Teleconferences Can Help In Fighting COVID-19 – Entrepreneur

Artificial intelligence offers productive tools that can be employed in the fight against COVID-19. Here's how

April 1, 2020 | 4 min read

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

In the fight against COVID-19, enterprise artificial intelligence (AI) is not getting the same attention as teleconferencing and telehealth technologies.

A massive shift to working from home to avoid spreading the virus means virtual collaboration companies like Zoom Video Communications are in the headlines. The same dynamic is playing out in healthcare as hospitals attempt to prioritize physical care for the coronavirus patients and are trying telehealth product suites like TelaDoc to manage everyone else and scale up.

AI's role in getting us through this remains less intuitive. That is because not enough AI solutions can be plugged into an organization on the run the way teleconferencing and telehealth can today. In addition, AI served up as point solutions that help with just a single task does not do sufficient good fast enough. What is needed is a suite that quickly makes entire workflows easier, just as teleconferencing and telehealth do.

It's worth listening to Benchmark Capital's Chetan Puttagunta on the first point. As the pandemic accelerated in the U.S., he reminded us that during previous downturns, the companies that could deliver their solutions fastest and easiest rose to the top. If it takes too long to implement a technology, you are left in a holding pattern.

The weakness of point solutions may not be as clear in times like these. Paul Strassmann, the Pentagon's former head of information technology in the 1990s, articulated it best. He called it managerial productivity versus operational productivity. As Steve Jobs showed, Strassmann's managerial productivity helps you do a few things well, and then its value tapers off. Operational productivity lifts everything the enterprise does.

AI has to show up as an operational productivity solution that helps everything work better in the industry it serves. It cannot just be a tool floating out of context of an industry's workflows. It has to feel purpose-built, with the power of the kind of solution suite that demonstrates a grasp of the unique problems of a specific industry.

The current pandemic is likely to winnow the pack of AI companies down to a smaller group of enterprise suites as some run out of cash and others realize they need to get back to the drawing board.

The same was true for teleconferencing solutions during the Great Recession. Industry veterans like myself saw the field of companies reduced to the enterprises that learned how to grow in that environment and refine their suites to be at the forefront today.

Taking a page from that period, tech leaders are looking for early lessons from COVID-19's new world to see what the future will look like. Fundamentally, this crisis means utilizing integrated digital systems to recognize and respond to emerging risks and consumer demands. The key is responding, not just automating today's activities and workflows. Changing events need to be anticipated, recognized and reacted to. That is the test AI faces.

Until now, most predictive technologies have been based on assessing two variables retrospectively. AI can be deployed to explore many variables and how they change through time in relation to each other. Make this easy to plug in and deploy as a suite that delivers operational productivity, and a new game becomes possible.

In the insurance industry, it can mean active claim-intake processes and alerts to emerging threats in a multi-variable world. In the financial services industry, it can mean real-time understanding of liquidity and capital reserves, when best to utilize those reserves and when best to increase them. In healthcare, it can mean empowering front-line clinicians with tools that pull in data for collective use and also help them make more informed treatment decisions.

There is a lot at stake. Strassmann may now be in his 90s, but he still speaks to local groups in his hometown in Connecticut. At such a gathering a few weeks ago, he pointed out the tensions in the global economy that could be pushed too far by today's events. His small book on the subject has already sold out on Amazon, a perfect example of more panic buying in pivotal times.

Go here to read the rest:
Enterprise Artificial Intelligence Along With Telehealth And Teleconferences Can Help In Fighting COVID-19 - Entrepreneur