Can Quantum Computing Be the New Buzzword – Analytics Insight

Quantum mechanics wrote its chapter in the history of the early 20th century. With its classical binary computing twin going out of style, quantum mechanics has made quantum computing the new belle of the ball! While the memory in a classical computer encodes binary bits, ones and zeros, quantum computers use qubits (quantum bits). A qubit is not confined to a two-state solution: it can also exist in superposition, i.e., a qubit can be 0, 1, or both 0 and 1 at the same time.

Hence a quantum computer can perform many calculations in parallel, owing to its ability to pursue simultaneous probabilities through superposition and to manipulate them with magnetic fields. The coefficients of a qubit's state, which tell you how much "zero-ness" and "one-ness" it has, are complex numbers, with both real and imaginary parts. This provides a huge technical edge over conventional computing: the beauty of it is that with n qubits, you can hold a superposition of 2^n states, or bits of information, simultaneously.
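To make the arithmetic concrete, here is a minimal sketch in Python of how superposition amplitudes can be represented as a state vector (an illustrative simulation only; the numbers and library choice are ours, not from the article):

```python
# A minimal sketch of qubit superposition using plain NumPy state vectors.
# Real quantum hardware is not simulated this way at scale; this only
# illustrates why n qubits correspond to 2**n amplitudes.
import numpy as np

# A single qubit in equal superposition: complex amplitudes for |0> and |1>.
# Measurement probabilities are the squared magnitudes of the amplitudes.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(np.abs(plus) ** 2)  # [0.5 0.5] -> 50/50 chance of measuring 0 or 1

# n qubits require 2**n amplitudes: build a 3-qubit register via tensor products.
n = 3
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)
print(state.size)  # 8 == 2**3 amplitudes tracked simultaneously
```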

Another trick up its sleeve is that qubits are capable of pairing up, which is referred to as entanglement. Here, the state of one qubit cannot be described independently of the state of the others, so measurements on entangled qubits are correlated no matter how far apart they are (though, despite a common misconception, this cannot be used for instantaneous communication).
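As a concrete picture of those correlations, here is a small continuation of the sketch above (again illustrative, with invented sampling code rather than anything from the article): a Bell state puts all its probability on the outcomes 00 and 11, so the two qubits always agree when measured.

```python
# A minimal sketch of entanglement: sampling measurement outcomes from a
# Bell state. The two bits are perfectly correlated, yet the correlation
# cannot be used to send a message.
import numpy as np

# Bell state (|00> + |11>)/sqrt(2): amplitudes over basis states 00, 01, 10, 11.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2  # [0.5, 0.0, 0.0, 0.5]

rng = np.random.default_rng(0)
outcomes = rng.choice(["00", "01", "10", "11"], size=5, p=probs)
print(outcomes)  # only '00' and '11' ever appear; never '01' or '10'
```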

To quote the American theoretical physicist John Wheeler, "If you are not completely confused by quantum mechanics, you do not understand it." So it is safe to say that even quantum computing has a few pitfalls. First, qubits tend to lose the information they contain, and also lose their entanglement with one another; in other words, they decohere. Second, quantum rotations are imperfect. These effects can lead to a loss of information within a few microseconds.

Ultimately, quantum computing is the trump card, as it promises to be a disruptive technology with dramatic speed improvements. This will enable systems to solve complex higher-order mathematical problems that once took months to compute, investigate material properties, design new materials, study superconductivity, aid drug discovery via simulation, and model new chemical reactions.

This quantum shift in the history of computer science can also pave the way for encrypted communication (as quantum keys can be neither copied nor hacked) far stronger than blockchain technology, improved designs for solar panels, financial market prediction, big data mining, artificial intelligence taken to new heights, better meteorological forecasting, and a much-anticipated age of the quantum internet. According to scientists, future advances could even help find a cure for Alzheimer's.

The ownership and effective use of a quantum computer could change the political and technological dynamics of the world. Computing power, in the end, is power, whether personal, national, or globally strategic. In short, a quantum computer could be an existential threat to a nation that hasn't got one. At the moment, Google, IBM, Intel, and D-Wave are pursuing this technology. And while there are scientific minds who don't yet believe in the potential of quantum computing, unless you are a time-traveler like Marty McFly in the Back to the Future series or one of the Doctors from Doctor Who, no one can say what the future holds.


Who Will Mine Cryptocurrency in the Future – Quantum Computers or the Human Body? – Coin Idol

Apr 01, 2020 at 09:31 // News

Companies including Microsoft, IBM, and Google are racing to come up with cheap and effective mining solutions that improve cost and energy efficiency. Much fuss has been made about quantum computing and its potential for mining. Now, the time has come for a new solution: mining with the help of human body activity.

While quantum computers are said to be able to hack bitcoin mining algorithms, using physical activity for the process is quite a new and extraordinary idea. The question is, which technology will turn out to be more efficient?

Currently, with traditional cryptocurrency mining methods, the reward for mining a bitcoin block is around 12.5 bitcoins; at roughly $4k per BTC, mining hardware should quickly pay for itself after mining a few blocks.

Consequently, the best mining method for now is to keep trying random numbers and observe which one hashes to a value that isn't more than the target difficulty, as the sketch below shows. This is one of the reasons mining pools have arisen, in which multiple PCs work in parallel on the same problem; if one of the PCs finds the solution, the pool receives the reward, which is then shared among all the miners.
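For readers who want to see the mechanics, here is a toy proof-of-work loop in Python (a deliberately simplified sketch: real Bitcoin mining uses double SHA-256 over a binary block header and a vastly harder target):

```python
# Toy proof-of-work: try successive nonces until the hash of the header plus
# nonce, read as an integer, is no greater than the target.
import hashlib

def mine(header: str, target: int) -> int:
    """Return the first nonce whose SHA-256 hash is <= target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if int(digest, 16) <= target:
            return nonce
        nonce += 1

# A deliberately easy target; lowering it makes the search exponentially harder.
target = 1 << 240
print(mine("block-header-data", target))
```

Each machine in a pool simply scans a different range of nonces, which is why the work parallelizes so naturally across many PCs.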

Quantum computers possess more capacity and might potentially be able to speed up mining significantly while eliminating the need for numerous machines. Thus, they could improve both the energy efficiency and the speed of mining.

In late 2019, Google released a quantum processor called Sycamore, many times faster than existing supercomputers at a specific benchmark task. There was even a post on Medium claiming that this new processor could mine all remaining bitcoins in something like two seconds. Sometime later the post was deleted due to an error in its calculations, according to the Bitcoinist news outlet.

Despite quantum computing's potential to increase the efficiency of mining, its cost is close to stratospheric. It will probably take time before anyone is able to afford it.

Meanwhile, another global tech giant, Microsoft, offers a completely new and extraordinary solution: mining cryptos using a person's brain waves or body temperature. As coinidol.com, a world blockchain news outlet, has reported, the company has filed a patent for a groundbreaking system that can mine digital currencies using data collected from human beings as they view ads or do exercises.

The IT giant disclosed that sensors could detect activity connected with particular tasks, such as the time taken to read advertisements, and convert it into digital information readable by a computing device, in much the same manner as a conventional proof-of-work (PoW) system. Some tasks would raise or lower the computational energy required accordingly, based on the amount of information produced by the user's activity.

So far, there is no sign of when Microsoft will start developing the system, and it is still uncertain whether or not the system will run on its own blockchain network. Quantum computing also needs time to be fully developed and deployed.

However, both solutions bear significant potential for transforming the entire mining industry. While quantum computing could boost the existing mining mechanism and eliminate energy-hungry mining farms, Microsoft's new initiative could disrupt the industry and make it look entirely different.

Which of these two solutions will turn out to be more viable? We will see over time. What do you think about these mining solutions? Let us know in the comments below!


The Schizophrenic World Of Quantum Interpretations – Forbes

Quantum Interpretations

To the average person, most quantum theories sound strange, while others seem downright bizarre. There are many diverse theories that try to explain the intricacies of quantum systems and how our interactions affect them. And, not surprisingly, each approach is supported by its own group of well-qualified and well-respected scientists. Here, we'll take a look at the two most popular quantum interpretations.

Does it seem reasonable that you can alter a quantum system just by looking at it? What about creating multiple universes by merely making a decision? Or what if your mind split because you measured a quantum system?

You might be surprised that all or some of these things might routinely happen millions of times every day without you even realizing it.

But before your brain gets twisted into a knot, let's cover a little history and a few quantum basics.

The birth of quantum mechanics

Classical physics describes how large objects behave and how they interact with the physical world. On the other hand, quantum theory is all about the extraordinary and inexplicable interaction of small particles on the invisible scale of such things as atoms, electrons, and photons.

Max Planck, a German theoretical physicist, first introduced the quantum theory in 1900. It was an innovation that won him the Nobel Prize in physics in 1918. Between 1925 and 1930, several scientists worked to clarify and understand quantum theory. Among them were Werner Heisenberg and Erwin Schrödinger, both of whom mathematically expanded quantum mechanics to accommodate experimental findings that couldn't be explained by standard physics.

Heisenberg, along with Max Born and Pascual Jordan, created a formulation of quantum mechanics called matrix mechanics. This concept interpreted the physical properties of particles as matrices that evolved in time. A few months later, Erwin Schrödinger created his famous wave mechanics.

Although Heisenberg and Schrödinger worked independently from each other, and although their theories were very different in presentation, both theories were essentially mathematically the same. Of the two formulations, Schrödinger's was more popular than Heisenberg's because it boiled down to familiar differential equations.

While today's physicists still use these formulations, they continue to debate their actual meaning.

First weirdness

A good place to start is Schrödinger's equation.

Erwin Schrödinger's equation provides a mathematical description of all possible locations and characteristics of a quantum system as it changes over time. This description is called the system's wave function. According to the most common quantum theory, everything has a wave function. The quantum system could be a particle, such as an electron or a photon, or even something larger.

Schrödinger's equation won't tell you the exact location of a particle. It only reveals the probability of finding the particle at a given location. The probability of a particle being in many places or in many states at the same time is called its superposition. Superposition is one of the elements of quantum computing that makes it so powerful.
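For readers who want the formalism behind those statements, the standard textbook equations (supplied here for reference, not reproduced from the article) are the time-dependent Schrödinger equation and the Born rule:

```latex
% Time-dependent Schrödinger equation: \hat{H} is the Hamiltonian operator
% encoding the system's energy, and \Psi is the wave function.
i\hbar \, \frac{\partial}{\partial t} \Psi(x,t) = \hat{H} \, \Psi(x,t)

% Born rule: the probability density of finding the particle at position x
% at time t is the squared magnitude of the wave function.
P(x,t) = \left| \Psi(x,t) \right|^{2}
```

The first equation evolves the amplitudes deterministically; probability enters only when a measurement is made.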

Almost everyone has heard about Schrödinger's cat in a box. Simplistically, ignoring the radiation gadgets, while the cat is in the closed box, it is in a superposition of being both dead and alive at the same time. Opening the box causes the cat's wave function to collapse into one of two states, and you'll find the cat either alive or dead.

There is little dispute among the quantum community that Schrödinger's equation accurately reflects how a quantum wave function evolves. However, the wave function itself, as well as the cause and consequences of its collapse, are all subjects of debate.

David Deutsch is a brilliant British quantum physicist at the University of Oxford. In his book The Fabric of Reality, he said: "Being able to predict things or to describe them, however accurately, is not at all the same thing as understanding them. Facts cannot be understood just by being summarized in a formula, any more than being listed on paper or committed to memory."

The Copenhagen interpretation

Quantum theories use the term "interpretation" for two reasons. One, it is not always obvious what a particular theory means without some form of translation. And, two, we are not sure we understand what goes on between a wave function's starting point and where it ends up.

There are many quantum interpretations. The most popular is the Copenhagen interpretation, named for the city where Werner Heisenberg and Niels Bohr developed their quantum theory.

Werner Heisenberg (left) with Niels Bohr at a Conference in Copenhagen in 1934.

Bohr believed that the wave function of a quantum system contained all possible quantum states. However, when the system was observed or measured, its wave function collapsed into a single state.

What's unique about the Copenhagen interpretation is that it makes the outside observer responsible for the wave function's ultimate fate. Almost magically, a quantum system, with all its possible states and probabilities, has no connection to the physical world until an observer interacts with or measures the system. The measurement causes the wave function to collapse into one of its many states.

You might wonder what happens to all the other quantum states present in the wave function before it collapses. The Copenhagen interpretation offers no explanation of that mystery. However, there is a quantum interpretation that provides an answer to that question. It's called the Many-Worlds Interpretation, or MWI.

Billions of you?

Because the many-worlds interpretation is one of the strangest quantum theories, it has become central to the plot of many science fiction novels and movies. At one time, MWI was an outlier within the quantum community, but many leading physicists now believe it is the only theory that is consistent with quantum behavior.

The MWI originated in a Princeton doctoral thesis written by a young physicist named Hugh Everett in the late 1950s. Even though Everett derived his theory using sound quantum fundamentals, it was severely criticized and ridiculed by most of the quantum community. Even Everett's academic adviser at Princeton, John Wheeler, tried to distance himself from his student. Everett became despondent over the harsh criticism. He eventually left quantum research to work for the government as a mathematician.

The theory proposes that the universe has a single, large wave function that follows Schrödinger's equation. Unlike the Copenhagen interpretation, the MWI universal wave function doesn't collapse.

Everything in the universe is quantum, including ourselves. As we interact with parts of the universe, we become entangled with it. As the universal wave function evolves, some of our superposition states decohere. When that happens, our reality becomes separated from the other possible outcomes associated with that event. Just to be clear, the universe doesn't split and create a new universe. The probability of all realities, or universes, already exists in the universal wave function, all occupying the same space-time.

Schrödinger's cat, many-worlds interpretation, with universe branching: visualization of the separation of the universe due to two superposed and entangled quantum mechanical states.

In the Copenhagen interpretation, by opening the box containing Schrödinger's cat, you cause the wave function to collapse into one of its possible states, either alive or dead.

In the Many-Worlds interpretation, the wave function doesn't collapse. Instead, all probabilities are realized. In one universe you see the cat alive, and in another universe the cat is dead.

Right or wrong decisions become right and wrong decisions

Decisions are also events that trigger the separation of multiple universes. We make thousands of big and little choices every day. Have you ever wondered what your life would be like had you made different decisions over the years?

According to the Many-Worlds interpretation, you and all those unrealized decisions exist in different universes, because all possible outcomes exist in the universal wave function. For every decision you make, at least two of "you" evolve on the other side of that decision. One universe exists for the choice you made, and one universe for the choice you didn't make.

If the Many-Worlds interpretation is correct, then right now, a near-infinite number of versions of you are living different and independent lives in their own universes. Moreover, all of these universes overlay each other, occupying the same space and time.

It is also likely that you are currently living in a branch universe spun off from a decision made by a previous version of yourself, perhaps millions or billions of iterations ago. You have all the old memories of your pre-decision self, but as you move forward in your own universe, you live independently and create your unique and new memories.

A Reality Check

Which interpretation is correct? Copenhagen or Many-Worlds? Maybe neither. But because quantum mechanics is so strange, perhaps both are correct. It is also possible that a valid interpretation is yet to be expressed. In the end, correct or not, quantum interpretations are just plain fun to think about.



Research Shows Evidence of Broken Time-Reversal Symmetry in Superconducting UPt3 – HPCwire

March 30, 2020 Researchers at the University of Notre Dame, in partnership with those at Northwestern University, are a step closer to understanding how superconductors can be improved for reliability in future quantum computers.

The team, led by Morten Eskildsen, professor in the Department of Physics at Notre Dame, and William Halperin from Northwestern University, achieved a new discovery in the field of topological superconductivity. Such materials are at the forefront of research in quantum computing and quantum sensing.

The team demonstrated in their paper, published in March in Nature Physics, that the superconducting compound UPt3 breaks time-reversal symmetry, where superconducting electrons spontaneously circulate around a specific axis within the crystalline structure of the material.

The researchers used neutron-scattering experiments completed at Oak Ridge National Laboratory in Tennessee and at the Institut Laue-Langevin in Grenoble, France, to make the discovery, which had been predicted but had not been unambiguously detected before.

"Topological properties of materials are being studied intensely because of their fundamental as well as practical importance," Eskildsen said. A classic example of topology is a Möbius strip, which has only one surface and one edge. Here the twist is a robust feature that can only be undone by cutting the strip.

In solids, this concept is understood abstractly, referring to electronic properties that cannot be undone in a smooth manner. This provides what is known as topological protection, and is an avenue to increase reliability in novel electronic devices for quantum computation. Importantly, it is the understanding gained from the neutron scattering experiments, rather than the particular material, that will benefit the development of quantum devices.

The measurements were carried out at extremely low temperatures and high magnetic fields. The group looked at the properties of "electric tornadoes," or vortices, in the material, and found a difference in their behavior depending on how the superconducting state was prepared. Specifically, their results show that the superconducting state in UPt3 can be assigned a chirality, or handedness, and that this can be controlled by suitable magnetic field protocols.

In addition to Eskildsen, key collaborators included Keenan Avers and James Sauls from Northwestern University, as well as researchers from Oak Ridge National Laboratory, Institut Laue-Langevin, and the Laboratory for Neutron Scattering and Imaging at the Paul Scherrer Institute in Switzerland.

The research at Notre Dame was supported by a U.S. Department of Energy Basic Energy Science Grant. Research of the Northwestern team was supported by a U.S. Department of Energy Basic Energy Science Grant and the Northwestern-Fermilab Center for Applied Physics and Superconducting Technologies.


About Chicago Quantum Exchange (CQE)

The Chicago Quantum Exchange (CQE) is an intellectual hub and community of researchers with the common goal of advancing academic and industrial efforts in the science and engineering of quantum information across CQE members, partners, and our region. The hub aims to promote the exploration of quantum information technologies and the development of new applications. The CQE facilitates interactions between research groups of its member and partner institutions and provides an avenue for developing and fostering collaborations, joint projects, and information exchange.

Source: Chicago Quantum Exchange (CQE)


AI vs your career? What artificial intelligence will really do to the future of work – ZDNet

Jill Watson has been a teaching assistant (TA) at the Georgia Institute of Technology for five years now, helping students day and night with all manner of course-related inquiries. But for all the hard work she has done, she still can't qualify for outstanding TA of the year.

That's because Jill Watson, contrary to many students' belief, is not actually human.

Created back in 2015 by Ashok Goel, professor of computer science and cognitive science at the Institute, Jill Watson is an artificial system based on IBM's Watson artificial intelligence software. Her role consists of answering students' questions, a task she carries out with a remarkable 97% accuracy rate, for inquiries ranging from confirming the word count for an assignment to complex technical questions about the content of the course.

And she has certainly gone down well with students, many of whom, in 2015, were "flabbergasted" upon discovering that their favorite TA was not the serviceable, human lady that they expected, but in fact a cold-hearted machine.

What students found an amusing experiment is the sort of thing that worries many workers. Automation, we have been told time and again, will displace jobs; so are experiments like Jill Watson the first step towards unemployment for professionals?


In fact, it's quite the contrary, Goel tells ZDNet. "Job losses are an important concern; Jill Watson, in a way, could replace me as a teacher," he said. "But among the professors who use her, that question has never come up, because there is a huge need for teachers globally. Instead of replacing teachers, Jill Watson augments and amplifies their work, and that is something we actually need."

The AI was originally developed for an online master's in computer science, where students interact with teachers via a web discussion forum. In the spring of 2015 alone, Goel noticed, 350 students posted 10,000 messages to the forum; answering all of their questions, he worked out, would have taken a real-life teacher a year, working full time.

Jill Watson has only grown in popularity since 2015, said Goel, and she has now been deployed to a dozen other courses -- building her up for a new class takes less than ten hours. And while the artificial TA, for now, is only used at Georgia Institute of Technology, Jill Watson could change the education game if she were to be scaled globally. With UNESCO estimating that an additional 69 million teachers are needed to achieve sustainable development goals, the notion of 'augmenting' and 'amplifying' teachers' work could go a long way.

The automation of certain tasks is not such a scary prospect for those working in education. And perhaps neither is it a risk to the medical industry, where AI is already lending a helping hand with tasks ranging from disease diagnosis to prescription monitoring. It's a welcome support, rather than a looming threat, as the overwhelming majority of health services across the world report staff shortages and lack of resources even at the best of times.

But of course, not all professions are in dire need of more staff. For many workers, the advent of AI-powered technologies seems to be synonymous with permanent lay-offs. Retailers are already using robotic fulfillment systems to pick orders in their warehouses. Google's project to build autonomous vehicles, Waymo, has launched its first commercial self-driving car service in the US, which in the long term will remove the need for a human taxi driver. Ford is even working on automating delivery services from start to finish, with a two-legged, two-armed robot that can walk around neighborhoods carrying parcels from the delivery vehicle right up to your doorstep.

Advancements in AI technology, therefore, don't bode well for all workers. "Nobody wants to be out of a job," says David McDonald, professor of human-centered design and engineering at the University of Washington. "Technological changes that impact our work, and thus, our ability to support ourselves and our families, are incredibly threatening."

"This suggests that when people hear stories saying that their livelihood is going to disappear," he says, "that they probably will not hear the part of the story that says there will be additional new jobs."

Consultancy McKinsey estimates that automation will cause up to 800 million individuals around the world to be displaced from their jobs by 2030, a statistic that will sound ominous, to say the least, to most of the workforce. But the firm's research also shows that in nearly all scenarios, and provided that there is sufficient investment and growth, most countries can expect to be at or very near full employment by the same year.

The potential impact of artificial intelligence needs to be seen as part of the bigger picture. McKinsey highlighted that one of the countries that will face the largest displacement of workers is China, with up to 12% of the workforce needing to switch occupations. But although 12% seems like a lot, the consultancy noted, it's still relatively small compared with the tens of millions of Chinese who have moved out of agriculture in the past 25 years.

In other words, AI is only the latest chapter in the long history of technological progress, and as with all previous advancements, the new opportunities that AI opens up will balance out the skills that the technology makes obsolete. At least that's the theory, one that Brett Frischmann explores in the book he co-authored, Re-engineering Humanity. It's a project that's been going on forever, and more recent innovations are building on the efficiencies pioneered by the likes of Frederick Winslow Taylor and Henry Ford.

"At one point, human beings used spears to fish. As we developed fishing technology, fewer people needed that skill and did other things," he says. "The idea that there is something dramatically different about AI has to be looked at carefully. Ultimately, data-driven systems, for example as a way to optimize factory outputs, are only a ramped-up version of Ford and Taylor's processes."

Seeing AI as simply the next chapter of tech is a common position among experts. The University of Washington's McDonald is equally convinced that in one form or another, we have been building systems to complement work "for over 50 years".

So where does the big AI scare come from? A large part of the problem, as often, comes down to misunderstanding. There is one point that Frischmann was determined to clarify: people do tend to think, and wrongly so, that the technology is a force that has its own agenda -- one that involves coming against us and stealing our jobs.

"It's really important for people to understand that the AI doesn't want anything," he said. "It's not a bad guy. It doesn't have a role of its own, or an agenda. Human beings are the ones that create, design, damage, deploy, control those systems."

In reality, according to McKinsey, fewer than 5% of occupations can be entirely automated using current technology. But over half of jobs could have 30% of their activities taken on by AI. Rather than robots taking over, therefore, it looks like the future will be about task-sharing.

Gartner previously reported that by 2022, one in five workers engaged in non-routine tasks will rely on AI to get work done. The research firm's analysts forecasted that combining human and artificial intelligence would be the way forward to maximize the value generated by the technology. AI, said Gartner, will assist workers in all types of jobs, from entry-level to highly skilled.

The technology could become a virtual assistant, an intern, or another kind of robo-employee; in any case, it will lead to the development of an 'augmented' workforce, whose productivity will be enhanced by the tool.

For Gina Neff, associate professor at the Oxford Internet Institute, delegating tasks to AI will only bring about a brighter future for workers. "Humans are very good at lots of tasks, and there are lots of tasks that computers are better at than we are. I don't want to have to add large lists of sums by hand for my job, and thankfully I have a technology to help me do that."

"Increasingly, the conversation will shift towards thinking about what type of work we want to do, and how we can use the tools we have at our disposal to enhance our capacity, and make our work both productive and satisfying."

As machines take on tasks such as collecting and processing data, which they already carry out much better than humans, workers will find that they have more time to apply themselves to projects involving the cognitive skills (logical reasoning, creativity, communication) that robots, at least currently, lack.

Using technology to augment the human value of work is also the prospect that McDonald has in mind. "We should be using AI and complex computational systems to help people achieve their hopes, dreams and goals," he said. "That is, the AI systems we build should augment and extend our social and our cognitive skills and abilities."

There is a caveat. For AI systems to effectively bolster our hopes, dreams and goals, as McDonald said, it is crucial that the technology is designed from the start as a human-centered tool: one that is made specifically to fulfil the interests of the human workforce.

Human-centricity might be the next big challenge for AI. Some believe, however, that so far the technology has not done such a good job at ensuring that it enhances humans. In Re-engineering Humanity, Frischmann, for one, does not do AI any favours.

"Smart systems and automation, in my opinion, cause atrophy, more than enhancement," he argued. "The question of whether robots will take our jobs is the wrong one. What is more relevant is how the deployment of AI affects humans. Are we engineering unintelligent humans, rather than intelligent machines?"

It is certainly a fine line, and going forward, will be a delicate balancing act. For Oxford Internet Institute's Neff, making AI work in humans' best interest will require a whole new category of workers, which she called "translators", to act as intermediaries between the real world and the technology.

For Neff, translators won't be roboticists or "hot-shot data scientists", but workers who understand the situation "on the ground" well enough to see how the technology can be applied efficiently to complement human activity.

In an example of good behaviour, and of a way to bridge between humans and technology, Amazon last year launched an initiative to help reconvert up to 1,300 employees that were being made redundant as the company deployed robots to its US fulfilment centres. The e-tailer announced that it would pay workers $10,000 to quit their jobs and set up their own delivery business, in order to tackle retail's infamous last-mile logistics challenge. Tens of thousands of workers have now applied to the program.

In a similar vein, Gartner recently suggested that HR departments start including a section dedicated to "robot resources", to better manage employees as they start working alongside robotic colleagues. "Getting an AI to collaborate with humans in the ways that we collaborate with others at work, every day, is incredibly hard," said McDonald. "One of the emerging areas in design is focused on designing AI that more effectively augments human capacity with respect for people."


From human-centred design, to participatory design or user-experience design: for McDonald, humans have to be the main focus from the first stage of creating an AI.

And then there is the question of communication. At the Georgia Institute of Technology, Goel recognised that AI "has not done a good job" of selling itself to those who are not inside the experts' bubble.

"AI researchers like me cannot stay in our glass tower and develop tools while the rest of the world is anxious about the technology," he said. "We need to look at the social implications of what we do. If we can show that AI can solve previously unsolvable problems, then the value of AI will become clearer to everyone."

His dream for the future? To get every teacher in the world a Jill Watson assistant within five years, and, within the next decade, to give every parent access to one too, to help children with after-school questions. In fact, it's increasingly looking like every industry, not only education, will get its own version of a Jill Watson, and that we needn't worry about her coming for our jobs anytime soon.


Stanford launches an accelerated test of AI to help with Covid-19 care – STAT

In the heart of Silicon Valley, Stanford clinicians and researchers are exploring whether artificial intelligence could help manage a potential surge of Covid-19 patients and identify patients who will need intensive care before their condition rapidly deteriorates.

The challenge is not to build the algorithm (the Stanford team simply picked an off-the-shelf tool already on the market) but rather to determine how to carefully integrate it into already-frenzied clinical operations.

"The hardest part, the most important part of this work is not the model development. But it's the workflow design, the change management, figuring out how do you develop that system the model enables," said Ron Li, a Stanford physician and clinical informaticist leading the effort. Li will present the work on Wednesday at a virtual conference hosted by Stanford's Institute for Human-Centered Artificial Intelligence.


The effort is primed to be an accelerated test of whether hospitals can smoothly incorporate AI tools into their workflows. That process, typically slow and halting, is being sped up at hospitals all over the world in the face of the coronavirus pandemic.

The machine learning model Li's team is working with analyzes patients' data and assigns them a score based on how sick they are and how likely they are to need escalated care. If the algorithm can be validated, Stanford plans to start using it to trigger clinical steps, such as prompting a nurse to check in more frequently or order tests, that would ultimately help physicians make decisions about a Covid-19 patient's care.


The model, known as the Deterioration Index, was built and is marketed by Epic, the big electronic health records vendor. Li and his team picked that particular algorithm out of convenience, because it's already integrated into their EHR, Li said. Epic trained the model on data from hospitalized patients who did not have Covid-19, a limitation that raises questions about whether it will be generalizable for patients with a novel disease whose data it was never intended to analyze.

Nearly 50 health systems, which cover hundreds of hospitals, have been using the model to identify hospitalized patients with a wide range of medical conditions who are at the highest risk of deterioration, according to a spokesperson for Epic. The company recently built an update to help hospitals measure how well the model works specifically for Covid-19 patients. The spokesperson said that work showed the model performed well and didn't need to be altered. Some hospitals are already using it with confidence, according to the spokesperson. But others, including Stanford, are now evaluating the model in their own Covid-19 patients.

In the months before the coronavirus pandemic, Li and his team had been working to validate the model on data from Stanford's general population of hospitalized patients. Now, they've switched their focus to test it on data from the dozens of Covid-19 patients who have been hospitalized at Stanford, a cohort that, at least for now, may be too small to fully validate the model.

"We're essentially waiting as we get more and more Covid patients to see how well this works," Li said. He added that the model does not have to be completely accurate in order to prove useful in the way it's being deployed: to help inform high-stakes care decisions, not to automatically trigger them.

As of Tuesday afternoon, Stanford's main hospital was treating 19 confirmed Covid-19 patients, nine of whom were in the intensive care unit; another 22 people were under investigation for possible Covid-19, according to Stanford spokesperson Julie Greicius. The branch of Stanford's health system serving communities east of the San Francisco Bay had five confirmed Covid-19 patients, plus one person under investigation. And Stanford's hospital for children had one confirmed Covid-19 patient, plus seven people under investigation, Greicius said.

Stanford's hospitalization numbers are very fluid. Many people under investigation may turn out not to be infected, and many confirmed Covid-19 patients who have relatively mild symptoms may be quickly cleared for discharge to go home.

The model is meant to be used in patients who are hospitalized, but not yet in the ICU. It analyzes patients' data, including their vital signs, lab test results, medications, and medical history, and spits out a score on a scale from 0 to 100, with a higher number signaling elevated concern that the patient's condition is deteriorating.

Already, Li and his team have started to realize that a patient's score may be less important than how quickly and dramatically that score changes, he said.

"If a patient's score is 70, which is pretty high, but it's been 70 for the last 24 hours, that's actually a less concerning situation than if a patient scores 20 and then jumps up to 80 within 10 hours," he said.
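As a toy illustration of that idea, the check below flags a rapid rise rather than a high absolute score. The 30-point jump and 12-hour window are invented for the example; this is not Epic's Deterioration Index logic.

```python
# Hypothetical trend-based alerting on a 0-100 deterioration score.
from typing import List, Tuple

def trend_alert(history: List[Tuple[float, float]],
                jump: float = 30.0, window_hours: float = 12.0) -> bool:
    """history: (hours_since_admission, score) pairs, oldest first.
    Returns True if the score rose by `jump` points within `window_hours`."""
    for i, (t0, s0) in enumerate(history):
        for t1, s1 in history[i + 1:]:
            if t1 - t0 <= window_hours and s1 - s0 >= jump:
                return True
    return False

print(trend_alert([(0, 70), (12, 70), (24, 70)]))  # False: high but stable
print(trend_alert([(0, 20), (5, 45), (10, 80)]))   # True: rapid rise
```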

Li and his colleagues are adamant that they will not set a specific score threshold that would automatically trigger a transfer to the ICU or prompt a patient to be intubated. Rather, they're trying to decide which scores, or changes in scores, should set off alarm bells that a clinician might need to gather more data or take a closer look at how a patient is doing.

"At the end of the day, it will still be the human experts who will make the call regarding whether or not the patient needs to go to the ICU or get intubated, except that this will now be augmented by a system that is smarter, more automated, more efficient," Li said.

Using an algorithm in this way has the potential to minimize the time that clinicians spend manually reviewing charts, so they can focus on the work that most urgently demands their direct expertise, Li said. That could be especially important if Stanford's hospital sees a flood of Covid-19 patients in the coming weeks. Santa Clara County, where Stanford is located, had confirmed 890 cases of Covid-19 as of Monday afternoon. It's not clear how many of them have needed hospitalization, though San Francisco Bay Area hospitals have not so far faced the crush of Covid-19 patients that New York City hospitals are experiencing.

That could change. And if it does, Li said, the model will have to be integrated into operations in a way that will work if Stanford has several hundred Covid-19 patients in its hospital.

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.


Enterprise Artificial Intelligence Along With Telehealth And Teleconferences Can Help In Fighting COVID-19 – Entrepreneur

Artificial intelligence can enable its productive tools to be employed to fight against COVID-19. Here's how

April 1, 2020 | 4 min read

Opinions expressed by Entrepreneur contributors are their own.

You're reading Entrepreneur India, an international franchise of Entrepreneur Media.

In the fight against COVID-19, enterprise artificial intelligence (AI) is not getting the same attention as teleconferencing and telehealth technologies.

A massive shift to working from home to avoid spreading the virus means virtual collaboration companies like Zoom Video Communications are in the headlines. The same dynamic is playing out in healthcare as hospitals attempt to prioritize physical care for the coronavirus patients and are trying telehealth product suites like TelaDoc to manage everyone else and scale up.

AI's role in getting us through this remains less intuitive. That is because not enough AI solutions can be plugged into an organization on the run the way teleconferencing and telehealth can be today. In addition, AI served up as point solutions that help with just a single task does not do sufficient good fast enough. What is needed is an AI suite that quickly makes entire workflows easier, just as teleconferencing and telehealth do.

It's worth listening to Benchmark Capital's Chetan Puttagunta on the first point. As the pandemic accelerated in the U.S., he reminded us that during previous downturns, the companies that could deliver their solutions fastest and easiest rose to the top. If it takes too long to implement a technology, you are now in a holding pattern.

The weakness of point solutions may not be as clear in times like these. The Pentagon's former head of information technology in the 1990s, Paul Strassmann, articulated it best: he called it managerial productivity versus operational productivity. Steve Jobs showed that Strassmann's managerial productivity helps you do a few things well, and then its value tapers off. Operational productivity lifts everything the enterprise does.

AI has to show up in the form of an operational productivity solution that helps everything work better in the industry it serves. It cannot just be a tool floating out of the context of an industry's workflows. It has to feel purpose-built, with the power of the kind of solution suite that demonstrates a grasp of the unique problems of a specific industry.

The current pandemic is likely to winnow the pack of AI companies down to a smaller group of enterprise suites as some run out of cash and others realize they need to get back to the drawing board.

The same was true for teleconferencing solutions during the Great Recession. Industry veterans like myself saw the field of companies reduced down to enterprises that learned how to grow in that environment and refine their suite to be at the forefront today.

Taking a page from that period, tech leaders are looking for early lessons from COVID-19's new world to see what the future will look like. Fundamentally, this crisis means utilizing integrated digital systems to recognize and respond to emerging risks and consumer demands. The key is responding, not just automating today's activities and workflows. Changing events need to be anticipated, recognized and reacted to. That is the test AI faces.

Until now, most predictive technologies have been based on assessing two variables retrospectively. AI can be deployed to explore multiple variables and how they change through time in relation to each other. Make this easy to plug in and deploy as a suite that delivers operational productivity, and a new game becomes possible.

In the insurance industry, it can mean an active claims intake process and alerts to emerging threats in a multi-variable world. In the financial services industry, it can mean real-time understanding of liquidity and capital reserves, when best to utilize those reserves, and when best to increase them. In healthcare, it can mean empowering front-line clinicians with tools that pull in data for collective use and also help them make more informed treatment decisions.

There is a lot at stake. Strassmann may now be in his 90s, but he still speaks to local groups in his hometown in Connecticut. At such a gathering a few weeks ago, he pointed out the tensions in the global economy that could be pushed too far by today's events. His small book on the subject has already sold out on Amazon, a perfect example of more panic buying in pivotal times.


Applying Artificial Intelligence in the Fight Against The Coronavirus – HIT Consultant

Dr. Ulrik Kristensen, Senior Market Analyst at Signify Research

Drug discovery is a notoriously long, complex and expensive process requiring the concerted efforts of the world's brightest minds. The complexity in understanding human physiology and molecular mechanisms increases with every new research paper published and with every new compound tested. As the world faces a new challenge in trying to both adapt to and defend itself against the coronavirus, artificial intelligence is offering new hope that a cure might be developed faster than ever before.

In this article, we will present some of the technologies being developed and applied in today's drug discovery process, working side-by-side with scientists tracking new findings, and assisting in the creation of new compounds and potential vaccines. In addition, we will examine how the industry is applying AI in the fight against the coronavirus.

Start-ups focusing on the use of artificial intelligence in drug development and clinical trials have seen significant investment in recent years, and vendors focusing specifically on drug design and discovery received the majority of the total $5.2B in funding observed between 2012 and 2019.

Information Engines

Information engines are the fundamental machines behind applications in both drug discovery and clinical trials, serving as the basic information aggregation and synthesis layer on which the other applications can draw for their insights, conclusions and prescriptive functions. The information available to scientists today is increasing exponentially, so the purpose of the information engines being developed today is to help scientists update and aggregate all this information and pull out the data most likely to be relevant for a specific study.

The types of information going into these engines vary broadly. An advanced information engine integrates information from multiple sources such as scientific research publications, medical records, doctors journals, biomedical information such as known drug targets, ligand information and disease-specific information, historical clinical trial data, patent information from molecules currently being investigated at global pharma companies, proprietary enterprise data from internal research studies at the individual pharma client, genomic sequencing data, radiology imaging data, cohort data and even other real-world evidence such as society and environmental data.

In a recent analyst insight, we discussed how these information engines are being applied in clinical trials to enhance success rates and reduce associated trial costs. When it comes to the upstream processes relating to drug discovery, their purpose is to synthesize and analyze these vast amounts of information to help the scientist understand disease mechanisms and select the most promising targets, drug candidates or biomarkers; or, as we will see in the next section, to assist the drug design application in creating molecular designs or optimizing a compound with desired properties. Information is typically presented via a knowledge graph that visualizes the relationships between diseases, genes, drugs and other data points, which the researcher then uses for target identification, biomarker discovery or other research areas.
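As a simplified sketch of the knowledge-graph idea (the entities, relations, and use of the networkx library are our own illustrative choices, not any vendor's engine):

```python
# A toy knowledge graph linking drugs, genes, and diseases. A researcher
# queries it for drugs acting on genes associated with a disease of interest.
import networkx as nx

g = nx.DiGraph()
g.add_edge("GeneX", "DiseaseY", relation="associated_with")
g.add_edge("DrugA", "GeneX", relation="inhibits")
g.add_edge("DrugA", "DiseaseZ", relation="approved_for")

# Which drugs inhibit genes that are associated with DiseaseY?
for gene in g.predecessors("DiseaseY"):
    for drug in g.predecessors(gene):
        if g[drug][gene]["relation"] == "inhibits":
            print(f"{drug} inhibits {gene}, which is associated with DiseaseY")
```

A production engine would hold millions of such nodes and edges drawn from publications, trial records and proprietary data, but the query pattern is the same.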

Drug Design

AI-based drug design applications are involved directly with the molecular structure of the drugs. They draw data and insights from information engines to help generate novel drug candidates, to validate or optimize drug candidates, or to repurpose existing drugs for new therapeutic areas.

For target identification, machine learning is used to predict potential disease targets, and an AI triage then typically orders targets based on chemical opportunity, safety and druggability, presenting the most promising targets first, as in the sketch below. This information is then fed into the drug design application, which optimizes the compounds with desired properties before they are selected for synthesis. Experimental data from the selected compounds can then be fed back into the model to generate additional data for optimization.
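A minimal sketch of that triage step, with invented feature names, weights and scores (real platforms learn far richer models than a fixed weighted sum):

```python
# Hypothetical target triage: rank candidate targets by a weighted score of
# chemical opportunity, safety, and druggability. All values are invented.
targets = [
    {"name": "T1", "chem_opportunity": 0.8, "safety": 0.6, "druggability": 0.9},
    {"name": "T2", "chem_opportunity": 0.5, "safety": 0.9, "druggability": 0.4},
    {"name": "T3", "chem_opportunity": 0.9, "safety": 0.3, "druggability": 0.7},
]
weights = {"chem_opportunity": 0.4, "safety": 0.3, "druggability": 0.3}

def score(target: dict) -> float:
    return sum(weights[k] * target[k] for k in weights)

for t in sorted(targets, key=score, reverse=True):
    print(t["name"], round(score(t), 2))  # most promising targets first
```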

For drug repurposing, existing drugs approved for specific therapeutic areas are compared against possible similar pathways and targets in alternative diseases, which creates an opportunity for additional revenue from already-developed pharmaceuticals. It also gives potential relief for rare disease areas where developing a new compound wouldn't be profitable. Additionally, keeping repurposing in mind during the development of a new drug, as opposed to having a disease-specific mindset, may result in more profitable multi-purpose pharmaceuticals entering the market in the coming years.

Recent substantial investment in AI for drug development has meant the start-ups have had the manpower and resources to develop their technologies. Compared to AI in medical imaging, the total investment has been more than four-fold, even though the number of funded start-ups is equivalent between the two industries. This makes the average deal size for AI in drug development 3.5 times bigger than in medical imaging. The funding has been spent on significantly expanding and building capacity, as the total number of employees across these AI start-ups is now close to 10,000 globally.

A strong focus for start-up vendors is to create tight partnerships with the pharma industry. For many still in the early product development stages, this gives them the ability to test and optimize their solutions and to create proof-of-concept as a basis for additional deals.

For the more established start-ups, partnerships with the pharmaceutical industry turn the initial investments into revenue in the form of subscription or consulting charges, and potential milestone payments for new drug candidates, preparing the company for further investments, IPO, acquisition or continued success as a separate company. Pharmaceutical companies with high numbers of publicly announced AI partnerships include AstraZeneca, GSK, Sanofi, Merck, Janssen, and Pfizer, but many more are actively pursuing such opportunities today.

Many AI start-ups are therefore in the phase where they have a solution ready and are either looking for further partnerships or would like to showcase their solution and capabilities. The COVID-19 pandemic has, therefore, come as an important test for many of these vendors, where they can demonstrate the value of their technologies and hopefully help the world get through this crisis faster.

Understanding the protein structures on the coronavirus capsule can form the basis of a drug or vaccine. Google DeepMind has been using its artificial intelligence engine to quickly predict the structure of six proteins linked to the coronavirus, and although the predictions have not been experimentally verified, they may still contribute to the research ultimately leading to therapeutics.

Hong Kong-based Insilico Medicine took the next step in finding possible treatments, using its AI algorithms to design new molecules that could potentially limit the virus's ability to replicate. Using existing data on the similar virus that caused the SARS outbreak in 2003, it published structures of six new molecules that could potentially treat COVID-19. Also, Germany-based Innoplexus has used its drug discovery information engine to design a novel molecule candidate with a high binding affinity to a target protein on the coronavirus while maintaining drug-likeness criteria such as bioavailability, absorption and toxicity. Other AI players following similar strategies to identify new targets and molecules include Pepticom, Micar Innovation, Acellera, MAbSilico, InveniAI and Iktos, and further initiatives are announced daily.

It is important to remember that even if AI helps researchers identify targets and design new molecules faster, clinical testing and regulatory approval will still take about a year. So, while waiting for a vaccine or a new drug to be developed, other teams are looking at existing drugs on the market that could be repurposed to treat COVID-19. BenevolentAI used its machine learning-based information engine to search for already-approved drugs that could block the infection process. After analyzing chemical properties, medical data and scientific literature, it identified Baricitinib, typically used to treat moderate and severe rheumatoid arthritis, as a potential candidate to treat COVID-19. The theory is that the drug would prevent the virus from entering cells by inhibiting endocytosis, and thereby, in combination with antiviral drugs, reduce viral infectivity and replication and prevent the inflammatory response that causes some of the COVID-19 symptoms.

But although a lot is happening in the industry right now and there are many suggestions as to what might work as a therapy for COVID-19, both from existing drugs already on the market and from new molecules being designed by the AI drug developers, the scientific and medical community, as well as regulators, will not neglect the scientific method. Suggestions and new ideas are essential for progress, but so is rigor in testing and validation of hypotheses. A systematic approach, fuelled by accelerated findings using AI and bright minds in collaboration, will lead to a better outcome.

About Dr. Ulrik Kristensen

Dr. Ulrik Kristensen is a Senior Market Analyst at Signify Research, an independent supplier of market intelligence and consultancy to the global healthcare technology industry. Ulrik is part of the Healthcare IT team and leads the research covering Drug Development, Oncology, and Genomics. Ulrik holds an MSc in Molecular Biology from Aarhus University and a Ph.D. from the University of Strasbourg.


Artificial Intelligence in Retail Market Projected to Grow with a CAGR of 35.9% Over the Forecast Period, 2019-2025 – ResearchAndMarkets.com – Yahoo…

The "Artificial Intelligence in Retail Market by Product (Chatbot, Customer Relationship Management), Application (Programmatic Advertising), Technology (Machine Learning, Natural Language Processing), Retail (E-commerce and Direct Retail)- Forecast to 2025" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence in retail market is expected to grow at a CAGR of 35.9% from 2019 to 2025 to reach $15.3 billion by 2025.
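As a quick back-of-the-envelope check on those headline figures (our own arithmetic, not from the report), a 35.9% CAGR over the six years from 2019 to 2025 implies a 2019 base of roughly $2.4 billion:

```python
# value_2025 = value_2019 * (1 + CAGR) ** years, so invert for the base year.
cagr, years, value_2025 = 0.359, 6, 15.3  # growth rate, 2019->2025, $ billions

value_2019 = value_2025 / (1 + cagr) ** years
print(round(value_2019, 2))  # ~2.43, the implied 2019 market size in $ billions
```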

The growth of the artificial intelligence in retail market is driven by several factors, such as the rising number of internet users, increasing adoption of smart devices, rapid adoption of technological advances across the retail chain, and increasing adoption of multi-channel or omnichannel retailing strategies. In addition, factors such as increasing awareness of AI and big data & analytics, the consistent proliferation of the Internet of Things, and enhanced end-user experience are also contributing to market growth. However, the high cost of transformation and a lack of infrastructure are the major factors hindering market growth during the forecast period.

The study offers a comprehensive analysis of the global artificial intelligence in retail market across its various segments.

The global artificial intelligence in retail market is segmented on the basis of product (chatbot, customer relationship management, inventory management), application (programmatic advertising, market forecasting), technology (machine learning, natural language processing, computer vision), retail (e-commerce and direct retail), and geography.

The predictive merchandising segment accounted for the largest share of the overall artificial intelligence in retail market in 2019, mainly due to growing demand among retailers for customer behavior tracking solutions. However, the in-store visual monitoring and surveillance segment is expected to witness rapid growth during the forecast period, as it helps curb shoplifting, one of the major sources of financial loss in retail stores.

An in-depth analysis of the geographical scenario of the market provides detailed qualitative and quantitative insights for five regions: North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa. In 2019, North America commanded the largest share of the global artificial intelligence in retail market, followed by Europe and Asia Pacific. The large share of this region is mainly attributed to its openness towards smart technologies, its high technology adoption rate, the presence of key players and start-ups, and widespread internet access. Meanwhile, factors such as rapid growth in spending power, a young population, and government initiatives supporting digitalization are helping Asia Pacific register the fastest growth in the global artificial intelligence in retail market.

Key Topics Covered:

1. Introduction

1.1. Market Definition

1.2. Market Ecosystem

1.3. Currency and Limitations

1.3.1. Currency

1.3.2. Limitations

1.4. Key Stakeholders

2. Research Methodology

2.1. Research Approach

2.2. Data Collection & Validation

2.2.1. Secondary Research

2.2.2. Primary Research

2.3. Market Assessment

2.3.1. Market Size Estimation

2.3.2. Bottom-Up Approach

2.3.3. Top-Down Approach

2.3.4. Growth Forecast

2.4. Assumptions for the Study

3. Executive Summary

3.1. Overview

3.2. Market Analysis, by Product Offering

3.3. Market Analysis, by Application

3.4. Market Analysis, by Learning Technology

3.5. Market Analysis, by Type

3.6. Market Analysis, by End-User

3.7. Market Analysis, by Deployment Type

3.8. Market Analysis, by Geography

3.9. Competitive Analysis

4. Market insights

4.1. Introduction

4.2. Market Dynamics

4.2.1. Drivers

4.2.2. Restraints

4.2.3. Opportunities

4.2.4. Challenges

4.2.5. Trends

5. Artificial Intelligence in Retail Market, by Product Type

5.1. Introduction

5.2. Solutions

5.2.1. Chatbot

5.2.2. Recommendation Engines

5.2.3. Customer Behaviour Tracking

5.2.4. Visual Search

5.2.5. Customer Relationship Management

5.2.6. Price Optimization

5.2.7. Supply Chain Management

5.2.8. Inventory Management

5.3. Services

5.3.1. Managed Services

5.3.2. Professional Services

6. Artificial Intelligence in Retail Market, by Application

6.1. Introduction

6.2. Predictive Merchandising

6.3. Programmatic Advertising

6.4. In-Store Visual Monitoring & Surveillance

6.5. Market Forecasting

6.6. Location-Based Marketing

7. Artificial Intelligence in Retail Market, by Learning Technology

7.1. Introduction

7.2. Machine Learning

7.3. Natural Language Processing

7.4. Computer Vision

8. Artificial Intelligence in Retail Market, by Type

8.1. Introduction

8.2. Offline Retail

8.2.1. Brick & Mortar Stores

8.2.2. Supermarkets & Hypermarkets

8.2.3. Specialty Stores

8.3. Online Retail

9. Artificial Intelligence in Retail Market, by End-User

9.1. Introduction

9.2. Food & Groceries

9.3. Health & Wellness

9.4. Automotive

9.5. Electronics & White Goods

9.6. Fashion & Clothing

9.7. Other

10. Artificial Intelligence in Retail Market, by Deployment Type

10.1. Introduction

10.2. Cloud

10.3. On-Premise

11. Global Artificial Intelligence in Retail Market, by Geography

11.1. Introduction

11.2. North America

11.3. Europe

11.4. Asia-Pacific

11.5. Latin America

11.6. Middle East & Africa

12. Competitive Landscape

12.1. Competitive Growth Strategies

12.1.1. New Product Launches

Continue reading here:
Artificial Intelligence in Retail Market Projected to Grow with a CAGR of 35.9% Over the Forecast Period, 2019-2025 - ResearchAndMarkets.com - Yahoo...

The race problem with AI: ‘Machines are learning to be racist’ – Metro.co.uk

Artificial intelligence (AI) is already deeply embedded in so many areas of our lives. Society’s reliance on AI is set to increase at a pace that is hard to comprehend.

AI isn’t the kind of technology that is confined to futuristic science fiction movies – the robots you’ve seen on the big screen that learn how to think, feel, fall in love, and subsequently take over humanity. No, AI right now is much less dramatic and often much harder to identify.

In practice, most of today’s artificial intelligence is machine learning. And our devices do this all the time. Every time you input data into your phone, your phone learns more about you and adjusts how it responds to you. Apps and computer programmes work the same way too.

Any digital programme that displays learning, reasoning or problem solving is displaying artificial intelligence. So even something as simple as a game of chess on your desktop counts as artificial intelligence, as the sketch below illustrates.
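
A desktop chess program’s ‘reasoning’ is classically game-tree search. Below is a minimal sketch of minimax; the evaluate and moves functions are hypothetical stand-ins for a real game’s position scoring and move generation.

# Minimax: look ahead through possible moves, assume the opponent
# picks the worst outcome for you, and choose the best of the rest.
def minimax(state, depth, maximizing, evaluate, moves):
    if depth == 0 or not moves(state):
        return evaluate(state)
    scores = [minimax(s, depth - 1, not maximizing, evaluate, moves)
              for s in moves(state)]
    return max(scores) if maximizing else min(scores)

# Toy usage: the "game" state is just a number we can nudge up or down.
print(minimax(0, 2, True,
              evaluate=lambda s: s,
              moves=lambda s: [s - 1, s + 1] if abs(s) < 2 else []))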

The problem is that the starting point for artificial intelligence always has to be human intelligence. Humans programme the machines to learn and develop in a certain way, which means they pass on their unconscious biases.

The tech and computer industry is still overwhelmingly dominated by white men. In 2016, there were ten large tech companies in Silicon Valley – the global epicentre for technological innovation – that did not employ a single black woman. Three companies had no black employees at all.

When there is no diversity in the room, it means the machines are learning the same biases and internal prejudices of the majority white workforces that are developing them.

And, with a starting point that is grounded in inequality, machines are destined to develop in ways that perpetuate the mistreatment of and discrimination against people of colour. In fact, we are already seeing it happen.
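
The mechanism is straightforward to demonstrate with synthetic data. In the minimal sketch below (all names and numbers are illustrative, not drawn from any real system), a model trained on a sample where one group makes up only 5% of the data serves that group markedly worse:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, w):
    # Labels depend on the group's own pattern w, plus noise.
    X = rng.normal(size=(n, 2))
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

X_a, y_a = make_group(950, np.array([1.0, 1.0]))    # majority group
X_b, y_b = make_group(50, np.array([1.0, -1.0]))    # minority group, different pattern

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))

print("error rate, majority group:", round(1 - model.score(X_a, y_a), 2))
print("error rate, minority group:", round(1 - model.score(X_b, y_b), 2))  # far higher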

In 2017, a video went viral on social media of a soap dispenser that would only automatically release soap onto white hands.

The dispenser was created by a company called Technical Concepts, and the flaw occurred because no one on the development team thought to test their product on dark skin.

A study in March last year found that driverless cars are more likely to drive into black pedestrians, again because their technology has been designed to detect white skin, so they are less likely to stop for black people crossing the road.

It would be easy to chalk these high-profile viral incidents up as individual errors, but data and AI specialist Mike Bugembe says it would be a mistake to think of these problems in isolation. He says they are indicative of a much wider issue with racism in technology, one that is likely to spiral in the next few years.

‘I can give you so many examples of where AI has been prejudiced or racist or sexist,’ Mike tells Metro.co.uk.

‘The danger now is that we are actually listening to and accepting the decisions of machines. When the computer says no, we increasingly accept that as gospel. So, we’re listening now to something that is perpetuating, or even accentuating, the biases that already exist in society.’

Mike says the growth of AI can have much bigger, systemic ramifications for the lives of people of colour in the UK. The implications of racist technology go far beyond who does and who doesn’t get to use hand soap.

AI is involved in decisions about where to deploy police officers and in deciding who is likely to take part in criminal activity and reoffend. He says in the future we will increasingly see AI playing a part in things like hospital admissions, school exclusions and HR hiring processes.

Perpetuating racism in these areas has the potential to cause serious, long-lasting harm to minorities. Mike says it’s vital that more black and minority people enter this sector to diversify the pool of talent and help to eradicate the problematic biases.

‘If we don’t have a system that can see us and give us the same opportunities, the impact will be huge. If we don’t get involved in this industry, our long-term livelihoods will be impacted,’ explains Mike.

‘It’s no secret that within six years, pretty much 98% of human consumer transactions will go through machines. And if these machines don’t see us, minorities, then everything will be affected for us. Everything.’

An immediate concern for many campaigners, equality activists and academics is the deployment and roll out of facial recognition as a power for the police.

In February, the Metropolitan Police began operational use of facial recognition CCTV, with vans stationed outside a large shopping centre in east London, despite widespread criticism about the methods.

A paper last year found that using artificial intelligence to fight crime could raise the risk of profiling bias. The research warned that algorithms might judge people from disadvantaged backgrounds as a greater risk.

‘The Metropolitan Police is the largest police force outside of China to roll it out,’ explains Kimberly McIntosh, senior policy officer at the Runnymede Trust. ‘We all want to stay safe, but giving the green light to letting dodgy tech turn our public spaces into surveillance zones should be treated cautiously.’

Kimberly points to research that shows that facial recognition software has trouble identifying the faces of women and black people.

‘Yet roll-outs in areas like Stratford have significant black populations,’ she says. ‘There is currently no law regulating facial recognition in the UK. What is happening to all that data?

‘93% of the Met’s matches have wrongly flagged innocent people. The Equality and Human Rights Commission is right – the use of this technology should be paused. It is not fit for purpose.’
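
A figure like that is less surprising than it sounds once base rates are considered: when genuine matches are rare, even a fairly accurate system flags mostly innocent people. A minimal sketch of the arithmetic, with illustrative assumptions rather than the Met’s actual parameters:

# Base-rate arithmetic: what share of positive flags are false positives?
prevalence = 1 / 1000    # assumed: 1 in 1,000 scanned faces is on the watchlist
sensitivity = 0.80       # assumed true-positive rate
specificity = 0.99       # assumed true-negative rate

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
print(f"Share of flags that are innocent: {false_pos / (true_pos + false_pos):.0%}")  # ~93%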

Kimberly’s example shows how the inaccuracies and inherent biases of artificial intelligence can have real-world consequences for people of colour – in this case, it is already contributing to their disproportionate criminalisation.

The ways in which technological racism could personally and systemically harm people of colour are numerous and wildly varied.

Racial bias in technology already exists in society, even in the smaller, more innocuous ways that you might not even notice.

‘There was a time where, if you typed “black girl” into Google, all it would bring up was porn,’ explains Mike.

‘Google is a trusted source of information, so we can’t overstate the impact that search results like these have on how people perceive the world and minorities. Is it any wonder that black women are persistently hypersexualised when online search results are backing up these ideas?

‘Right now, if you Google “cute baby”, you will only see white babies in the results. So again, there are these more pervasive messages being pushed out there that speak volumes about the worth and value of minorities in society.’

Mike is now raising money to gather data scientists together for a new project. His aim is to train a machine that will be able to make sure other machines aren’t racist.

‘We need diversity in the people creating the algorithms. We need diversity in the data. And we need approaches to make sure that those biases don’t carry on,’ says Mike. ‘So, how do you teach a kid not to be racist? The same way you will teach a machine not to be racist, right?

‘Some companies say to me, “Well, we don’t put race in our feature set” – the feature set being the data used to train the algorithms. So they think it doesn’t apply to them. But that is just as meaningless and unhelpful as saying they don’t see race.’
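
The reason leaving race out of the feature set is not enough is that other features can act as proxies for it. A minimal sketch with synthetic data – the postcode variable here is a hypothetical proxy, deliberately constructed to correlate with the protected attribute:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
race = rng.integers(0, 2, n)                         # protected attribute (0/1)
postcode = race + rng.normal(scale=0.3, size=n)      # correlated proxy feature

# Race itself is never in the feature set...
clf = LogisticRegression().fit(postcode.reshape(-1, 1), race)
# ...yet it can be recovered from the proxy with high accuracy.
print("race recovered from postcode alone:", clf.score(postcode.reshape(-1, 1), race))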

Just as humans have to acknowledge race and racism in order to beat them, so too do machines, algorithms and artificial intelligence.

‘If we are teaching a machine about human behaviour, it has got to include our prejudices, and strategies that spot them and fight against them.’

Mike says that discussing racism and existing biases can be hard for people with power, particularly when their companies have a distinct lack of employees with relevant lived experiences. But he says making it less personal can actually make it easier for companies to address.

‘The current definition of racism is very individual and very easy to shrug off – people can so easily say, “Well, that’s not me, I’m not racist”, and that’s the end of that conversation,’ says Mike.

‘If you change the definition of racism to a pattern of behaviour – like an algorithm itself – that’s a whole different story. You can see what is recurring, the patterns that pop up. Suddenly, it’s not just me that’s racist, it’s everything. And that’s the way it needs to be addressed on a wider scale.’

All of us are increasingly dependent on technology to get through our lives. It’s how we connect with friends, pay for food, order new clothes. And on a wider scale, technology already governs so many of our social systems.

Technology companies must ensure that in this race towards a more digital-led world, ethnic minorities are not being ignored or treated as collateral damage.

Technological advancements are meaningless if their systems only serve to uphold archaic prejudices.

This series is an in-depth look at racism in the UK in 2020.

We aim to look at how, where and why racist attitudes and biases impact people of colour from all walks of life.

It's vital to improve the language we have to talk about racism and start the difficult conversations about inequality.

Read this article:
The race problem with AI: 'Machines are learning to be racist' - Metro.co.uk