Artificial Intelligence Crowdsourcing Competition for Injury Surveillance – EC&M

By Sydney Webb, PhD; Carlos Siordia, PhD; Stephen Bertke, PhD; Diana Bartlett, MPH, MPP; and Dan Reitz

In 2018, NIOSH, the Bureau of Labor Statistics (BLS), and the Occupational Safety and Health Administration (OSHA) contracted the National Academies of Sciences (NAS) to conduct a consensus study on improving the cost-effectiveness and coordination of occupational safety and health (OSH) surveillance systems. The NAS report recommended that the federal government use recent advancements in machine learning and artificial intelligence (AI) to automate the processing of data in OSH surveillance systems.

The main source of OSH information on fatal and non-fatal workplace incidents comes from the unstructured free-text injury narratives recorded in surveillance systems. For example, an employer may report an injury as "worker fell from the ladder after reaching out for a box." For decades, humans have read these injury narratives to assign standardized codes using the U.S. Bureau of Labor Statistics (BLS) Occupational Injury and Illness Classification System (OIICS). Coding these injury narratives to analyze data is expensive, time consuming, and fraught with coding errors.

AI, namely machine learning text classification, offers a solution to this problem. If algorithms can be developed to read the injury narratives, data can be pulled from these surveillance systems in a fraction of the time of hand coding.
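
The article does not publish any of the competition code, but the general approach it describes (supervised text classification that maps free-text injury narratives to OIICS codes) can be sketched with standard tools. The snippet below is a minimal, hypothetical illustration in Python using scikit-learn; the narratives and codes are invented, and it is not the NIOSH baseline script or any winning entry.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented narratives and OIICS-style event codes, for illustration only
narratives = [
    "worker fell from the ladder after reaching for a box",
    "employee slipped on wet floor and strained lower back",
    "technician cut finger on sheet metal edge",
    "worker fell from scaffolding while carrying materials",
]
oiics_codes = ["431", "422", "111", "431"]

# TF-IDF features over words and word pairs, fed into a linear classifier
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(narratives, oiics_codes)

# Predict a code for a new, unseen narrative
print(model.predict(["employee fell off a ladder while stacking boxes"]))
```

A real system would be trained on hundreds of thousands of coded narratives and evaluated on held-out records, which is how accuracy figures like those quoted below are measured.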

NIOSH developed an AI algorithm to apply OIICS codes based on injury narratives from a hospital emergency department surveillance system. However, it was not clear how well this algorithm performed, or whether it could be improved. To see if better coding algorithms could be developed, NIOSH turned to crowdsourcing.

While not unique to AI, crowdsourcing involves asking the "crowd," or members of the public with a variety of skill sets, to provide their unique solutions to a problem. The approach results in a large number of potential solutions that can be assessed to identify those that work best. Generally, the best crowd solutions are better than the original solution. In this case, NIOSH worked with two crowds, one internal to CDC and one external to CDC, to propose better solutions than NIOSH's initial coding algorithm.

Before conducting an external competition, a team of 17 researchers from NIOSH, the Centers for Disease Control and Prevention (CDC), BLS, OSHA, the Federal Emergency Management Agency (FEMA), the Census Bureau, the National Institutes of Health (NIH), and the Consumer Product Safety Commission hosted a competition for staff at CDC. A total of 19 employees competed to develop the best algorithm to code worker injury narratives. The team received nine algorithms, five of which outperformed the NIOSH baseline script, which had an accuracy of 81%. The winning algorithm from the internal crowdsourcing competition reached 87% accuracy, an improvement of 6 percentage points.

In October 2019, NIOSH, together with the National Aeronautics and Space Administration (NASA), hired a Tournament Lab vendor, Topcoder, to host the external crowdsourcing competition. This was the first-ever external crowdsourcing competition from CDC and NIOSH, and it was partially funded through the CDC Innovation Fund Challenge. The competition drew on Topcoder's global community of data science experts to develop a Natural Language Processing (NLP) algorithm to classify occupational work-related injury records according to OIICS.

Like the internal competition, the external competition was a success. There were 961 submissions from 388 registrants representing over 26 countries (32% United States, 21% India). Those participating self-identified as having degrees in fields including computer science, computer engineering, chemistry, data science, and economics. The competition produced 21% more registrants and 66% more submissions than the average Topcoder competition. The best submissions achieved nearly 90% accuracy, surpassing the 87% accuracy achieved during the internal competition.

The 1st place external crowdsource winner was Raymond van Veneti, a doctoral student in numerical mathematics at the University of Amsterdam. Second place was awarded to a senior data scientist at the Sberbank AI lab in Russia; 3rd place to a developer and data scientist from China; 4th place to a biostatistician at the School of Medicine at Emory University in Atlanta, GA; and 5th place to a full-stack engineer from Bangalore, India.

External crowdsource 1st place winner Raymond van Veneti. Courtesy of NIOSH

The external competition and the resulting algorithm support improving efficiency and reducing costs associated with coding occupational safety and health surveillance data. Ultimately, it is hoped that the improved algorithm will contribute to greater worker safety and health. The NIOSH project team will work with the 1st prize winner's script to make an easy-to-use web tool for public use. In the interim, the top 5 winning solutions are available on GitHub.

For more information, visit https://blogs.cdc.gov.

Original post:
Artificial Intelligence Crowdsourcing Competition for Injury Surveillance - EC&M

Artificial Intelligence Contributes Higher in Healthcare Compared to Other Industries – EnterpriseTalk

A KPMG survey has revealed that 53% of healthcare executives believe the healthcare industry is ahead of most other sectors in the adoption of artificial intelligence (AI)

The latest report from KPMG states that more than half (53%) of executives say that, when it comes to the adoption of AI, the healthcare sector is well ahead of other industries. As per the report, 37% of healthcare executives believe factors like cost and skill barriers are slowing AI implementation in the healthcare industry.

The adoption of AI and automation in hospital systems has increased exponentially since 2017, the report found. Nearly 90% of respondents believe AI is already creating efficiencies in their operations, and 91% say AI is expanding patient access to care. Some 68% of respondents say AI will be effective in diagnosing patient illnesses, while 47% believe diagnostics will see a significant impact by 2022. Respondents also said AI would have a positive effect on process automation: around 40% think AI will assist providers with better X-rays and CT scans. According to the KPMG survey, AI will continue to advance the digitization of healthcare. With the help of AI technologies, 41% of respondents expect improved records management, while 48% believe the most significant impact of AI will be in biometric-related applications.

In the field of healthcare and diagnostics, studies have shown that AI can assist doctors in making informed decisions and enhance patient diagnostics, even to the extent of identifying cancer. Nearly half of healthcare executives said their institutions offer AI training courses to employees, and 67% say their employees support AI adoption.

On the flip side, there has also been suspicion that AI has increased the overall cost of healthcare; more than half of the survey's respondents feel this way. This suggests that healthcare executives are still trying to determine the most cost-effective areas in which to use AI tools. Two of the major concerns for healthcare companies are privacy and security. According to the survey, 75% of respondents have concerns that AI could threaten the privacy and security of patient data, while 86% say their companies are taking steps to protect patient privacy as they implement AI.

Healthcare leaders agree that AI will play a key role in improving care delivery, with 90% of respondents saying they believe AI will improve the patient experience. The results show that once leaders address key implementation issues, the benefits of AI could outweigh the potential risks. Applying AI to unstructured data will also be quite useful in diagnosis and in producing more accurate prognoses of health issues. Supported by doctors, AI could well be the closest tool we have to increasing the accuracy of healthcare analysis and delivering far more error-free results in the days to come.

Read this article:
Artificial Intelligence Contributes Higher in Healthcare Compared to Other Industries - EnterpriseTalk

Artificial Intelligence + Robotic Process Automation: The Future of Business – Wire19

India's only conference on Intelligent Process Automation, designed around tech enthusiasts

80+ Delegates, 30+ Eminent Speakers, 2 Keynote Presentations, 10 Industrial Presentations, 3 Panel Discussions, 5 Sponsors & 17 Partners

RPA and AI are among the top technologies gaining ground in the global business space, alongside cloud, mobile application development and the Internet of Things. Artificial Intelligence and Robotic Process Automation are expected to bring about major changes in knowledge and skill requirements. It is therefore essential for aspirants to be prepared for new-age job roles in the near future.

Related read: 5 decisions CEOs need to make in 2020 for embracing technologies faster

At the Intelligent Process Automation Summit, network and benchmark with leading experts championing concepts, theories and applications in the automation domain. Map and design a winning automation strategy and discuss failure-free implementation. This exclusive opportunity is your chance to gather insights from leading industry experts.

Date & Venue: 4th & 5th March 2020 | Ramada Powai, Mumbai

Head over to the #IPASummit 2020 to confirm your participation!

Network, Benchmark & Innovate like never before at the #IPASummit with:

Keynote Speaker-

They will be joined by 30+ eminent speakers from top companies, who will share their experience, knowledge & expertise on how to put theory into action for AI + RPA.

The Intelligent Process Automation Summit is backed by Title Partner: Softomotive, Gold Partner: Nividous, Silver Partner: Tricentis, Exhibit Partners: NeoSOFT Technologies & Cygnet Infotech, Gift Partner: Spa La Vie, Association Partner: Analytics Society of India, Media Partners: Automation Connect, GIBF, The CEO Magazine, Analytics Insight, Business Connect India, Innovative Zone, The CEO Story, Electronics Media, Timestech.in, WIRE19, Business News This Week, Free Press Journal, CIO Insider India & Silicon India.

Must read: Whats missing from your growth strategy for 2020

Join the Intelligent Process Automation Summit to keep pace with the rapid evolution of products and processes, network with the new-age automation talent pool, gain diversified exposure to new ideas from industry leaders and experts, and conduct a thorough assessment of automation projects.

For Delegate Registration / Partnership or more Information Visit http://bit.ly/ipa-summit

See the rest here:
Artificial Intelligence + Robotic Process Automation: The Future of Business - Wire19

Artificial Intelligence (AI) and medicine | Interviews – The Naked Scientists

Chris Smith and Phil Sansom delve into the world of artificial Intelligence (AI) to find out how this emerging technology is changing the way we practise medicine...

Mike - I think this is an area where AI stands a really good chance of making quite dramatic improvements to very large numbers of people's lives.

Carolyn - Save lives and reduce medical complications.

Andre - Solid algorithms aiding physicians in some of their greatest challenges.

Beth - That's a concern - when machine-learning algorithms learn the wrong things.

Andrew - Frankly revolutionary productivity that we are now starting to see from these AI approaches in drug design.

Lee - It will replace all manual labor in all research laboratories. And then suddenly everyone can collaborate.

What is AI?

Chris - AI - artificial intelligence. For many, it's a term straight out of sci-fi, conjuring up visions of utopias or dystopias; from films ranging from The Terminator, to I, Robot, to, well, the film AI!

Phil - But what was previously sci-fi is now closer to reality. AI technology exists, and there's a brand new frontier where it's being applied to the world of healthcare. We're seeing AI helping to diagnose cancer, AI designing new medicines, and even AI predicting a person's medical future.

Chris - But this isn't the AI you see in the movies. In the words of Kent University computer scientist Colin Johnson, this is more software than Schwarzenegger...

Colin - When scientists say AI, they often mean some piece of code that's running on a computer and it's taking some inputs. So if it was doing medical diagnosis, it might be taking scans and processing those and trying to generalise from it. So it will take, say, a thousand examples of these scans and the diagnosis that people had and build what's called a model, a kind of mathematical formula, that tells it how to predict when it sees a new example.

Phil - In some ways, these predictive algorithms are just an extension of the tools scientists have always used to analyse data: statistics. The only difference is how complex and layered they can get.

Colin - AI varies in complexity from things that can run on your laptop to things that require huge networks of computers. One approach that's particularly been common in recent years has been deep learning. Let's talk about that in the context of computer vision - computers learning to see and recognise. And deep learning would start by recognising colours and lines, and then the next layer would recognise shapes, circles, corners, textures, and so on. All the way up to the final layer where it's recognising whole objects. It's able, for example, to tell apart cats and dogs.
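
Colin's layered description corresponds closely to how a small convolutional network is written in code. The sketch below is a generic Python/Keras illustration of that idea, not any system mentioned in this programme; the layer sizes, the image size and the cat-versus-dog output are placeholder assumptions.

```python
import tensorflow as tf

# Early layers pick up colours and edges; deeper layers combine them into
# shapes, textures and eventually whole objects (here, two classes: cat or dog).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # trained with model.fit(images, labels) once labelled data is available
```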

Phil - Could I download an AI to my computer?

Colin - You could download some code to do AI, what are called open source projects, projects that are made publicly available.

Phil - And code comes in lines, right? How many lines of code would I be getting?

Colin - Thousands and thousands of lines of code. But I think the complexity is not necessarily in the code as much as in the data that you'd need to train it.

Phil - Okay. Say I wanted a dog identification robot.

Colin - Yup.

Phil - And I had a picture of every dog in the world. How close would I be to the top AI systems that exist in the world today?

Colin - Pretty similar, to within probably a couple of percent of what something that was trained on a huge supercomputer could do. And that's facilitating revolutions like self driving cars. The ability to recognise road signs and pedestrians and other vehicles needs to happen in a small machine that can sit inside your car.

Phil - But while I could make my laptop very good at identifying objects in pictures, apparently there are other jobs it would find much more difficult - like identifying language.

Colin - They're very good at translation, but they're very bad at converting language into something that we might think of as understanding, particularly visual understanding. Can a crocodile run a steeplechase? That's a piece of language. We immediately convert that into an image of a crocodile trying to jump over large hurdles and we know that that's not possible. But for a current AI system that doesn't have that capacity for visualisation.

Phil - Are you saying Colin, that my dog translation robot isn't as easy to get?

Colin - I don't think you can do that. No, I don't think we could translate the language of dogs.

MEDICAL CARE

Chris - Phil, I'm sorry that Colin crushed your dreams of dog dialogue - but you must admit, the degree to which these algorithms can effectively learn from the data they're given is pretty astounding. It's also why some people refer to this as machine learning rather than the more general term AI.

Phil - It seems that computer vision - recognising patterns in images - is one of the places where machine learning excels. This is where healthcare comes in, because doctors spend lots of time examining scans or images. At Stanford University, Andre Esteva is applying machine learning to the diagnosis of skin cancer.

Andre - So we built computer vision algorithms that could, given an image of someone's skin, detect any lesions that might be concerning, and upon zooming into those lesions, diagnose them.

Phil - And does it work?

Andre - It worked really well, yes. We demonstrated that the algorithms are actually as effective as dermatologists at identifying if a lesion was benign or malignant.

Chris - To create algorithms that are as good as actual doctors, Andre had to teach them, by feeding them a large amount of data...

Andre - We collected a dataset of 130,000 images that were comprised of over 2000 different diseases.

Chris - Some of those images were used to train the algorithm, and others were used to test it afterwards, to ensure it actually worked.

Andre - The algorithms that we developed got a really good sense which ones are more concerning, which ones are less, and with that we were then able to fine tune it to work specifically well on skin cancers.
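
Andre's description (collect labelled images, hold some back for testing, then fine-tune on the cancer classes) follows a standard transfer-learning recipe. The sketch below shows that recipe with a generic pretrained network in Keras; the directory layout, the benign/malignant head and the choice of base model are assumptions for illustration, not details of the published dermatology work.

```python
import tensorflow as tf

# Hypothetical folders of labelled images, split into training and held-out test sets
train_ds = tf.keras.utils.image_dataset_from_directory("skin_images/train", image_size=(224, 224))
test_ds = tf.keras.utils.image_dataset_from_directory("skin_images/test", image_size=(224, 224))

base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                         input_shape=(224, 224, 3))
base.trainable = False  # first pass: reuse pretrained features, train only the new head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # scale pixels as the base expects
    base,
    tf.keras.layers.Dense(2, activation="softmax"),      # e.g. benign vs malignant
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)

base.trainable = True   # fine-tune: unfreeze the base and train slowly on the skin images
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=2)

model.evaluate(test_ds)  # held-out images estimate real-world accuracy
```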

Chris - And not only could the AI distinguish a cancerous lesion from a normal one, it could even diagnose multiple lesions at once.

Andre - We actually built an AI that could take such a patch of skin with many lesions, and automatically zoom in on the ones that were most concerning.

Chris - This is just one example of how AI can help doctors with their work. Around the world, researchers are training algorithms to analyse scans and other medical information, including the DNA of cancers, to track how the disease behaves and make predictions about the best treatments. Critically, in each of these examples, AI isn't replacing a doctor so much as helping a doctor with the heavy lifting.

Andre - I often describe AI as having a precocious resident following you around in clinic, being able to provide second opinions and surface questions which you might not have considered.

AI IS SPECIFIC

Phil - With all these achievements, it's tempting to imagine robot doctors of the future. But according to Oxford University computer scientist Mike Wooldridge, author of the book The Road to Conscious Machines, that's unlikely...

Mike - In the last decade, we have seen breakthroughs in artificial intelligence, but you need to be very careful when you talk about a breakthrough. Those breakthroughs are in tiny, narrow little areas.

Colin - So a system that's built and trained to do, say, medical diagnosis won't be the same artificial intelligence system that's, say, playing a game of chess.

Phil - That's Colin Johnson again. Andre Esteva's skin cancer AI, for example, won't become sentient - in fact it can't even do what Kris's blood flow algorithm can do.

Colin - Current AI systems are very specific and they don't have motivations. They're doing exactly what they are told to do.

Mike - It can't explain what it's doing. It can't generalise its strategies and explain them to you or me. It can't tie its shoe laces or cook an omelette or ride a bicycle. We can do all of those things. Human beings have a much, much richer, much more general intelligence and capability than anything we can build now or anything that we're likely able to build in the near future.

Mike - I think it is extremely unlikely that there will be some kind of intelligence explosion as happens in the Terminator films - you know, the idea that intelligence suddenly multiplies overnight, machines become sentient, and it's out of our control. Why isn't it very likely? Because we've been trying to build intelligent machines for the last 70 years and frankly, despite the fact that they can do some very narrow tasks very well, they are actually not that smart.

GLOBAL HEALTH AND PREDICTING THE FUTURE

Chris - So AI is not without its limitations, but there are some truly massive problems that it can help us to tackle. Mike Wooldridge again.

Mike - If you look at what makes healthcare expensive, one of the key challenges is expertise. Training up a doctor takes a long time. There aren't very many people who can do it. It requires a very special set of skills. It's very, very expensive, very, very time consuming. What we can do with AI is we can capture that expertise and we can get that expertise out to people in places where, at the moment, it's just impossible.

Chris - Crucially, the poorer parts of the world - where medical care is in short supply - might really benefit from software that eases some of doctors' burdens.

Mike - A nice example from here in Oxford is in a company called Ultromics. They do ultrasound scans for hearts. Now, if you've ever looked at those ultrasound scans, it's impossible to figure out what's going on. The people that have the ability to interpret those ultrasound scans and detect abnormalities, that skill is very, very scarce. What Ultromics have done is they've taken records of ultrasound scans over a decade long period, and they've basically given that information to AI programs and they built systems that can detect abnormalities on these ultrasound scans. And they've got approval from the FDA, the Food and Drug Administration in the United States, so they can go live with this technology. And what that means is that a doctor in a remote part of the world with a handheld ultrasound scanner connected to their smartphone, they can do an ultrasound scan and they don't have to have that expertise themselves. That scan can be uploaded securely to a repository in Oxford, automatically analysed and they get that information back. So what that means is we'll be able to get healthcare out to huge numbers of people that just don't have it at the moment.

Chris - We're in very early days here, because a lot of these technologies are right now getting off the ground. That's partly because they rely on a) a certain amount of IT infrastructure, and b) a good supply of data that applies to the patients.

Mike - And I know a lot of people are concerned about the idea of an AI program doing healthcare for them. That is I think, a rather first-world concern. I think for a lot of people in the world, the choice isn't between a person looking at your ultrasound scan or an AI program looking at your ultrasound scan. It's the AI program or nothing. And that I think is a real huge potential win for AI technologies in the decades ahead.

Chris - And moving beyond diagnosis; some are starting to use AI to predict the future. Carolyn McGregor from Ontario Tech University is doing groundbreaking work here in paediatrics.

Carolyn - We can monitor premature infants, and those born ill at term by monitoring their breathing, their heart rate, and their oxygen levels in their tiny bodies. We use AI to detect and predict when the behaviors of these are changing, and we classify the changes into the likely set of conditions causing the change. This has great potential to save lives and reduce medical complications.

Chris - The project is called Artemis, and it's particularly important because of how vulnerable these babies are.

Carolyn - The challenge for these preterm infants in particular is that they're trying to complete their development outside of the womb, and doing that presents them with many challenges; it means that they're susceptible to many different conditions that they can develop, and many challenges in the development of various organs.

Chris - Artemis is designed to run in real-time, to help doctors with information that a human would find difficult to process - which, like the example of ultrasound scans earlier, could ease the burden on doctors in poorer countries.

Carolyn - What we're looking to do currently is deploy a version of Artemis for a hospital in India. Now this is interesting because we're demonstrating how we can use the same techniques to support infants in low-income settings. This is very important, as the health outcomes for preterm infants in countries like India and areas of Africa are much worse than in Western countries.

Chris - AI seems to do a pretty good job of predicting medical futures in many different ways - as long as it has the right data. Which, according to Mike Wooldridge, we're beginning to give it.

Mike - We will be able to monitor our physiology on a 24 hour a day, seven day a week basis and that information is going to enable us to manage our health on a much better basis than we can do now. I have colleagues who think that you will be able to detect the onset of dementia just by the way that you use your smartphone. Just by looking at the pattern of usage, by the way that you search for a contact in your contact list or the way that you scan your email. As those patterns change, as you start to get the very, very early signs of dementia, it could be that the smartphone is going to be able to detect that on your behalf, long before there would be any sort of formal diagnosis.

Phil - Coming up after the break - AI that can invent new medicines, and peering inside the black box.

DATA AND BLACK BOXES

Phil - After all this talk about predicting your medical future with huge amounts of your personal data, it's worth briefly taking a step back. Cambridge University's Beth Singler researches the implications of the machine learning revolution.

Beth - AI also doesn't work unless you have large amounts of data, so it cannot progress in particular directions unless it has access to human subject data. Large companies are probably less of a concern than some of the user-chosen apps; there are something like 320,000 medical apps available through app stores, and that's a concern as well. We need to be protective of our data going forward.

Phil - And not only do you need to trust who has your data, but once the data goes in, it's often a complete mystery what the algorithms will do with it. Colin Johnson again, followed by Beth.

Colin - One concern is that they are a black box. You don't understand what's going on within them. The understanding is very distributed across thousands or millions of little mathematical formulae and little pieces of data, and this is potentially problematic because if you're using these systems for something important, like making medical diagnoses or making decisions about job applications, it can't explain necessarily why it's made the decision it has.

Beth - They don't always learn the things you want them to learn. So for example, in looking at cases of pneumonia, a deduction was made using an AI system that actually, people with asthma shouldn't be treated as much because looking at the historical data, people with asthma seem to do better overall when they caught pneumonia. But actually what was happening was humans were triaging them more, giving them more attention because they had asthma.

Phil - A human would probably have made that link - but a computer just sees the data in black and white.

Beth - Every piece of data that we would want to put into these systems is either short-cuttable in that way, or comes laden with its own human-inputted biases. So for example, in the case of women seeking treatment for pain: historically, women are less likely to receive painkilling medicine in response to pain than men are, and they're more likely to have to go back and back and back again to the GP. So all that data gets into the system, to the extent that any kind of machine learning system is going to say, if you're female, you don't need treatment in the same way as if you're male. So this kind of algorithmic bias is something we need to be really careful about.
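
Beth's examples are about patterns hiding in historical data, and one practical first step against them is a simple audit of how a model's outputs differ across groups. The sketch below is a deliberately tiny, hypothetical Python illustration; the column names and records are invented, and real bias audits involve far more careful statistics and domain review.

```python
import pandas as pd

# Invented records: the model's treatment recommendation for each (hypothetical) patient
records = pd.DataFrame({
    "sex": ["F", "F", "F", "M", "M", "M"],
    "model_recommends_treatment": [0, 1, 0, 1, 1, 1],
})

# Recommendation rate by group: a large unexplained gap is a red flag worth investigating
rates = records.groupby("sex")["model_recommends_treatment"].mean()
print(rates)
```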

DRUG DISCOVERY

Chris - When you give an algorithm data about people, any biases in that data can affect a person's health outcomes. But there's a whole other area of medical science where the relevant data isn't about individual people, but where AI could go on to save lives on a massive scale. We're talking about drug discovery - inventing brand new medicines. Mike Wooldridge.

Mike - The pharmaceutical industry, although it's ultimately about designing and building new drugs, more than anything I think it's the quintessential knowledge-based industry. It relies heavily on processing large amounts of data and being able to make extrapolations from that data. And so I think it's very, very well positioned to be able to make use of new artificial intelligence techniques and machine learning techniques in designing those drugs and understanding their consequences.

Chris - This area in particular has recently become a massive, multi-billion dollar industry. Every big pharma company is getting in on the action. And it's starting to pay off, because recently a company called Exscientia announced a world first.

Andrew - This is the first time drug designed by AI will be tested in humans: DSP-1181, just starting phase one clinical trials, for the treatment of obsessive compulsive disorder.

Chris - That's Exscientia CEO Andrew Hopkins. To create their drug, they used complex machine learning techniques inspired by the way evolution works in nature.

Andrew - We can generate millions of potential ideas inside the computer. And then we can use all of the data that we can collect from patterns, from published scientific articles; we can take all that data and we can build predictive models. But actually, one of the real challenges we also face is that whenever we're starting a new project, it's actually just on the boundary, or sometimes just outside, the limits of our ability to predict with machine-learning models. So therefore we need a different set of algorithms to help us in this learning phase. It's a set of maths we call active learning. And active learning, actually, is not just about picking the fittest compound, but about selecting the most informative compounds to then make and test, and improve our models, and improve our predictions. And this is actually why we've seen the frankly revolutionary productivity that we are now starting to see from these AI approaches in drug design. We discovered the drug candidate molecule that's now going into the clinic in about 12 months, a fifth of the time it normally takes.
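
The active-learning idea Andrew outlines (pick the most informative compound to test next, not the one the model currently rates best, then retrain) can be sketched with a generic uncertainty-sampling loop. Everything below is an invented, simplified Python illustration: the random feature vectors stand in for compounds, the rule "closest to 0.5 probability is most informative" is just one common heuristic, and none of it reflects Exscientia's actual algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
pool = rng.normal(size=(500, 16))                       # candidate compounds as feature vectors
activity = (pool[:, 0] + pool[:, 1] > 0).astype(int)    # stand-in for an assay result

# Start with a handful of "tested" compounds from each class
tested = list(np.where(activity == 1)[0][:5]) + list(np.where(activity == 0)[0][:5])

for _ in range(20):                                     # each round: train, pick, "test", repeat
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(pool[tested], activity[tested])
    probs = model.predict_proba(pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)                   # near 0.5 = model is least sure
    uncertainty[tested] = np.inf                        # never re-pick an already tested compound
    tested.append(int(np.argmin(uncertainty)))          # "make and test" the most informative one

print("compounds tested:", len(tested))
```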

Chris - Part of the reason drug design normally takes so much longer is because making a drug isn't just about helping the body in a specific way - it's also crucial to simultaneously avoid harming the body by hitting the wrong target. Essentially, it's about designing a key that fits only one lock and doesn't accidentally open any others...

Andrew - It's not just about designing a specific key to fit a specific lock. We also need to design that key so it avoids fitting maybe 21,000 other locks, which is effectively the number of proteins expressed by the human genome. Because by hitting those other proteins, it potentially causes side effects. So what we have then is a very difficult design problem, which potentially runs into a very large number of dimensions. This is exactly the type of problem where we believe artificial intelligence can be used to satisfy this large number of design objectives.

Chris - Other objectives include making sure the drug can actually be manufactured easily, and that it can be taken up by the body. With so many potential pitfalls, it was particularly important that Exscientia's algorithms were not a complete black box.

Andrew - The beauty of the algorithms is that we can then trace the contribution that every atom is making to all the design objectives which we are designing against.

Chris - Their new drug, DSP-1181, isn't ready for the shelves yet - clinical trials take many years, and this is a part that the algorithms definitely should not be doing.

Andrew - How a drug is designed - whether it's by humans or artificial intelligence or a combination of the two - that does not change how we want to then test for safety, and test for efficacy. One thing that's important is to know which are the really important battles that AI can make a difference to. And we can make a difference to how rapidly we can discover compounds, the cost it may take to discover a new medicine, and the speed of bringing it to the clinic. But we must also remember that human biology is incredibly complex. It would be a mistake for people to think that AI can allow us to predict all the possibilities of how a medicine may interact with the human body.

CHEMPUTER

Chris - In the next few years, we might see more and more drugs designed using this kind of evolution-inspired AI. And soon after, there might be some basic manufacture and testing by AI as well - thanks to devices like Lee Cronin's Chemputer.

Lee - The chemputer is the world's first general purpose programmable robot that makes molecules on demand. The reason I set out to make this was actually to make a chemical internet that would help me search for the origin of life, believe it or not. We couldn't get funding for that on its own, and I figured that the same technology we use to search for biology would also be very good in drug discovery and making molecules and personalising medicine.

Chris - Like the AI that works in medical diagnosis, the chemputer was originally designed to take the grunt work out of chemistry so the chemist could be free to do the interesting parts. It consists of both software and hardware.

Lee - It looks like a normal chemistry set actually, round bottom flasks, conical flasks, test tubes, pipes and things.

Lee - We have to feed in some chemicals, like putting ink into a printer, and also we put in a code and that code has two parts to it. One is a graph which is literally understanding where those chemicals have to be moved to. And the other is like a recipe - like cooking a souffle - what temperature, for how long, and what ingredients must be added together in what order. So we can make the perfect chemical souffle, if you like, every time, correctly.
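
Lee's two-part description (a graph of where chemicals can move, plus a recipe of timed steps) can be pictured as a simple data structure. The toy example below is purely illustrative Python: the vessel names, operations and quantities are invented, and the real chemputer uses its own chemical programming language and hardware drivers rather than anything this simple.

```python
# A toy synthesis "program": a connectivity graph plus an ordered recipe
synthesis = {
    "graph": {                        # which vessels are plumbed to which
        "reagent_flask_A": ["reactor"],
        "reagent_flask_B": ["reactor"],
        "reactor": ["separator", "waste"],
    },
    "recipe": [                       # timed, ordered steps, like a cooking recipe
        {"op": "add", "from": "reagent_flask_A", "to": "reactor", "volume_ml": 25},
        {"op": "add", "from": "reagent_flask_B", "to": "reactor", "volume_ml": 10},
        {"op": "heat", "vessel": "reactor", "temp_c": 80, "minutes": 120},
        {"op": "transfer", "from": "reactor", "to": "separator"},
    ],
}

for step in synthesis["recipe"]:
    print(step)                       # a real controller would drive pumps and heaters here
```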

Chris - The result works like a 3D printer for molecules - but Lee started to apply AI to help the chemputer course-correct.

Lee - A bit like how an automated car works, the chemputer can drive perfectly when all the instructions are correct, but what about if something goes wrong or something is not quite as expected? Because we've put some sensors into the chemputer, it can feed back and say, "Oh, there's something a bit wrong here with the heating" or "we don't need to stay at this temperature for quite as long as we thought. Let's make another decision." And so what we've been doing in the last few years is integrating AI into the chemputer.

Chris - This combination of sensors and machine learning meant that the chemputer could start learning from, and experimenting on its own recipes.

Lee - Now we don't tell the robot to make molecules. We tell it to make molecules that have properties. Say we want a blue thing or a nano thing. We're able to dial this in and make a sensor for a blue nano thing and then the chemputer is able, if you like, to search chemical space randomly to start with, and then use a series of algorithms to focus in to say, is that bluer? Make it bluer, more nano, more blue. Yes, hit stop. And it's literally the ability to make a closed loop system where you have molecular discovery, synthesis and testing in a continuous workflow.
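
The closed loop Lee describes (synthesise, measure with a sensor, decide whether the result is "bluer", adjust, repeat) is essentially an optimisation loop wrapped around a measurement. The sketch below is a hypothetical Python stand-in: the scoring function fakes a sensor reading, and the random search is only one of many algorithms such a system might use.

```python
import random

random.seed(1)

def measured_blueness(recipe):
    # Placeholder for a real sensor measurement on the synthesised product
    return -((recipe["dye_ml"] - 3.2) ** 2) - ((recipe["temp_c"] - 60.0) ** 2) / 100.0

best = {"dye_ml": 1.0, "temp_c": 40.0}
best_score = measured_blueness(best)

for _ in range(200):                                    # discovery-synthesis-test loop
    trial = {"dye_ml": best["dye_ml"] + random.uniform(-0.5, 0.5),
             "temp_c": best["temp_c"] + random.uniform(-5.0, 5.0)}
    score = measured_blueness(trial)
    if score > best_score:                              # "is that bluer? keep it"
        best, best_score = trial, score

print("best recipe found:", best)
```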

Chris - At this point, the chemputer not only does the grunt work of a chemist; it does the chemist's full job. Lee is even looking into teaching it to look through research papers and pick up new techniques, by translating them into its own chemical language.

Lee - That was the vision for our initial paper that it would literally be able to play the literature. Almost like taking vinyl records, digitising them, putting them onto Spotify.

Chris - And if the machine can do the full job of a chemist, that includes trying to synthesise new medicines. Lee already has one working on short biological molecules called peptides.

Lee - Now peptides are a good example because peptides are made by robots already, but our chemputer not only makes peptides but it can do any other type of chemistry on the peptide that you want. And that's getting the biochemists really excited, because we can start to dream up new types of drug molecules that maybe can look at the iron pumping system in the cell, or certain receptors at the membranes in the cell.

THE FUTURE

Visit link:
Artificial Intelligence (AI) and medicine | Interviews - The Naked Scientists

Could artificial intelligence have predicted the COVID-19 coronavirus? – Euronews

The use of artificial intelligence is now the norm in many industries, from integrating the technology in autonomous vehicles for safety, to AI algorithms being used to improve advertising campaigns. But, by using it in healthcare, could it also help us predict the outbreak of a virus such as the COVID-19 coronavirus?

Since the first cases were seen at the end of December 2019, coronavirus has spread from Wuhan, China, to 34 countries around the world, with more than 80,000 cases recorded. A hospital was built in 10 days to provide the 1,000 beds needed for those who had fallen victim to the virus in Wuhan; 97 per cent of reported cases are in China.

The World Health Organisation (WHO) has said the world should prepare for a global coronavirus pandemic. The virus can be spread from person to person via respiratory droplets expelled when an infected person coughs or sneezes. According to the WHO: "Common signs of infection include respiratory symptoms, fever, cough, shortness of breath and breathing difficulties. In more severe cases, infection can cause pneumonia, severe acute respiratory syndrome, kidney failure, and even death."

AI developers have suggested that the technology could have been used to flag irregular symptoms before clinicians realised there was a developing problem. AI could alert medical institutions to spikes in the number of people suffering from the same symptoms, giving them two to four weeks' advance warning, which in turn could allow them time to test for a cure and keep the public better informed.

As the virus continues to spread, AI is now being used to help predict where in the world it will strike next. The technology sifts through news stories and air traffic information, in order to detect and monitor the spread of the virus.
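
The "spike in similar symptoms" idea mentioned above comes down to comparing today's reports against a recent baseline. The snippet below is a deliberately crude, hypothetical Python illustration; the counts and the two-times-baseline threshold are invented, and real systems also mine news text, search trends and air-traffic data with far more sophisticated models.

```python
# Hypothetical daily counts of patients reporting the same symptom
daily_reports = [12, 9, 14, 11, 13, 10, 12, 15, 41, 67]

window = 7                       # compare each day against the previous week
for day in range(window, len(daily_reports)):
    baseline = sum(daily_reports[day - window:day]) / window
    if daily_reports[day] > 2 * baseline:          # crude spike rule
        print(f"day {day}: {daily_reports[day]} reports vs baseline ~{baseline:.0f} - alert")
```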

Read more:
Could artificial intelligence have predicted the COVID-19 coronavirus? - Euronews

Artificial intelligence and machine learning for data centres and edge computing to feature at Datacloud Congress 2020 in Monaco – Data Economy

Vertiv EMEA president Giordano Albertazzi looks back on data center expansion in the Nordics and the region's role as an efficient best execution venue for the future.

At the start of the new year it's natural to look to the future. But it's also worth taking some time to think back to the past.

Last year was not only another period of strong data center growth globally, and in the Nordic region specifically, but also the end of a decade of sustained digital transformation.

There have been dramatic shifts over the last ten years, but the growth in hyperscale facilities is one of the most defining, and one with which the Nordic region is very well acquainted.

According to figures from industry analysts Synergy Research, the total number of hyperscale sites has tripled since 2013 and there are now more than 500 such facilities worldwide.

And it seems that growth shows no signs of abating. According to Synergy, in addition to the 504 current hyperscale data centers, a further 151 are at various stages of planning or building.

A good number of those sites will be located in the Nordics if recent history is anything to go by. The region has already seen significant investment from cloud and hyperscale operators such as Facebook, AWS and Apple. Google was also one of the early entrants and invested $800 million in its Hamina, Finland facility in 2010. It recently announced plans to invest a further $600 million in an expansion of that site.

I was lucky enough to speak at the recent DataCloud Nordics event at the end of last year. My presentation preceded that of Google's country manager, Google Cloud, Denmark and Finland, Peter Harden, who described the company's growth plans for the region. Hamina, Finland is one of Google's most sustainable facilities, thanks in no small part to its Nordic location, which enables 100% renewable energy and innovative sea water cooling.

Continuing that theme of sustainability, if the last decade has been about keeping pace with data demand, then the next ten years will be about continued expansion, but importantly efficient growth in the right locations, using the right technology and infrastructure. The scale of growth being predicted (billions of new edge devices, for example) will necessitate a sustainable approach.

That future, we at Vertiv and others believe, will be based around putting workloads where they make most sense from a cost, risk, latency, security and efficiency perspective. Or, as industry analysts 451 Research put it: the Best Execution Venue (BEV), a slightly unwieldy term but an accurate one. BEV refers to the specific IT infrastructure an app or workload should run on (cloud, on-premise or at the edge, for example) but could equally apply to the geographic location of data centers.

In that BEV future, the Nordics will become increasingly important for hosting a variety of workloads, but the sweet spot could be those that are less latency sensitive, high performance compute (HPC) for example, and can therefore benefit from the stable, renewable and cheap power as well as the abundance of free cooling. Several new sub-sea cables coming online in the near future will also address some of the connectivity issues the region has faced.


A recent study by the Nordic Council of Ministers estimates that approximately EUR 2.2 bn has been invested in the Nordics on initiated data centre construction works over the last 12 to 18 months (2018), mainly within hyperscale and cloud infrastructure. This number could exceed EUR 4 bn annually within the next five to seven years because of increasing market demand and a pipeline of planned future projects.

Vertiv recently conducted some forward-looking research that appears to reinforce the Nordics' future potential. Vertiv first conducted its Data Center 2025 research back in 2014 to understand where the industry thought it was headed. In 2019, we updated that study to find out how attitudes had shifted in the intervening five years, a halfway point, if you will, between 2014 and 2025.

The survey of more than 800 data center experts covers a range of technology areas, but let's focus on a few that are important and relevant to the Nordics.

We mentioned the edge a little earlier when talking about BEV. Vertiv has identified four key edge archetypes that cover the edge use cases that our experts believe will drive edge deployments in the future. According to the 2025 research, of those participants who have edge sites today, or expect to have edge sites in 2025, 53% expect the number of edge sites they support to grow by at least 100%, with 20% expecting an increase of 400% or more.

So along with providing a great venue for future colo and cloud growth, the Nordics, like other regions, is also likely to see strong edge growth. That edge demand will require not only new data center form-factors such as prefabricated modular (PFM) data center designs but also monitoring and management software and specialist services.

Another challenge around edge compute, and the core for that matter, is energy availability and, increasingly, access to clean, renewable energy.

The results of the 2025 research revealed that respondents are perhaps more realistic and pragmatic about the importance of and access to clean power than back in 2014. Participants in the original survey projected 22% of data center power would come from solar and an additional 12% from wind by 2025. That's a little more than one-third of data center power from these two renewable sources, which seemed like an unrealistic projection at the time.

This year's numbers for solar and wind (13% and 8% respectively) seem more realistic. However, importantly for Nordic countries with an abundance of hydropower, participants in this year's survey expect hydro to be the largest energy source for data centers in 2025.

The Data Center 2025 research also looked at one of the other big drivers for building capacity in the Nordics: access to efficient cooling.

According to the 2025 survey, around 42% of respondents expect future cooling requirements to be met by mechanical cooling systems. Liquid cooling and outside air also saw growth, from 20% in 2014 to 22% in 2019, likely driven by the more extreme rack densities being observed today. This growth in the use of outside air obviously benefits temperate locations like the Nordics.

In summary, if the last ten years have been about simply keeping up with data center demand, the next ten years will be about adding purposeful capacity in the most efficient, sustainable and cost-effective way: the right data center type, thermal and power equipment, and location for the right workloads.

If the past is anything to go by, the Nordics will have an important role to play in that future.

See the original post:
Artificial intelligence and machine learning for data centres and edge computing to feature at Datacloud Congress 2020 in Monaco - Data Economy

Artificial Intelligence, is the Future of Human Resources. – 401kTV

Artificial intelligence (AI) takes the lead over intelligent automation (IA). Intelligent automation is the combination of robotic process automation and artificial intelligence to automate processes, according to a recent article on the topic in HR Dive, a publication for human resources professionals. Organizations that embrace intelligent automation may experience a return on investment of 200% or more, according to an Everest Group report cited by HR Dive. However, that doesn't mean organizations can expect a reduction in headcount, according to the report. In fact, projections of a reduction in workforce thanks to intelligent automation may be highly exaggerated, the Everest Group noted.

The Everest Group identified eight companies it called "Pinnacle Enterprises", companies distinguished by their advanced intelligent automation capabilities and their superior outcomes. These companies generated about 140% ROI and reported more than 60% cost savings, thanks to artificial intelligence and intelligent automation. The companies the Everest Group identified as Pinnacle Enterprises also experienced a 67% improvement in operational metrics, compared to the 48% improvement reported by other organizations. The Pinnacle Enterprises also experienced improvements in their top lines, time-to-market, and customer and employee experiences as a result of using artificial intelligence and intelligent automation in their businesses, according to the Everest Group report.

Technology, particularly artificial intelligence, and now, intelligent automation, is infiltrating businesses little by little, particularly in the human resources space. In fact, artificial intelligence in HR has been cited as a top employee benefits trend for 2020. It's a trend employers would do well to pay attention to, especially since cost savings and ROI seem to be significant potential positive outcomes of adopting such technologies.

Technology such as artificial intelligence and intelligent automation makes human resources more efficient. According to a Hackett Group report from 2019, HR in organizations that leverage automation technology can do more with fewer resources, an important distinction in a department that's often considered the heart of an organization, and that typically has more work than staff to complete it. In addition, the utilization of artificial intelligence and intelligent automation is a hallmark of a distinguished organization. Per the Hackett Group data, cited by HR Dive, world-class HR organizations leverage [artificial intelligence] and, as a result, have costs that are 20% lower than non-digital organizations and provide required services with 31% fewer employees.

Despite the apparent benefits, not everyone is a fan of automated technologies such as artificial intelligence and intelligent automation. Professors at the Wharton School of the University of Pennsylvania and ESSEC Business School, an international higher education institution located in France, Singapore and Morocco, cautioned employers about the potential downsides of using artificial intelligence and intelligent automation in human resources functions. Specifically, they warned that artificial intelligence could create problems for human resources because it's unable to measure some HR functions and infrequent employee activities (they generate little data), can provoke negative employee reactions, and is constrained by ethical and legal considerations. However, human resources professionals are finding some success in using artificial intelligence and intelligent automation to perform functions such as searching through resumes for keywords and assisting with other recruiting functions, for example.

Despite the concerns of some, it's likely that artificial intelligence and intelligent automation will continue to command a presence in human resources. As such, automation will prompt organizations to make a heftier investment in talent, noted a study by MIT Sloan Management Review and Boston Consulting Group's BCG GAMMA and BCG Henderson Institute. The study found that employers who successfully embrace artificial intelligence and intelligent automation will build technology teams in-house and rely less on external vendors. They'll also poach artificial intelligence talent from other companies and upskill current employees to be on the front lines of the automation movement. Artificial intelligence and intelligent automation are here to stay, and they're only getting more pervasive, especially in human resources and employee benefits. Employers should be ready.

Steff C. Chalk is Executive Director of The Retirement Advisor University, a collaboration with UCLA Anderson School of Management Executive Education. Steff also serves as Executive Director of The Plan Sponsor University and is current faculty of The Retirement Adviser University.

See the original post:
Artificial Intelligence, is the Future of Human Resources. - 401kTV

Artificial Intelligence learns the value of teamwork to form efficient football teams – University of Southampton

Published: 27 February 2020

Machine learning experts from the University of Southampton are optimising football team selection by using AI to value teamwork between pairs of players.

The new approach uses historic performance data to identify which player combinations are most important to a team, generating insights that can help select teams' most efficient line-ups and identify suitable transfer targets.

The study, led by PhD student Ryan Beal in the Agents, Interaction and Complexity (AIC) Group, has developed a number of teamwork metrics that can accurately predict team performance statistics, including passes, shots on target and goals.

Researchers presented their findings and hosted an AI in Team Sports workshop at this month's Association for the Advancement of Artificial Intelligence (AAAI) Conference in New York.

"We have tested our methods from games in the 2018 FIFA World Cup and the last two seasons of the English Premier League," Ryan says. "We found that we could select teams using the AI in a similar fashion to human managers and then also suggest changes that would improve the team.

"When looking at the results for the Premier League, the teamwork analysis identified Aymeric Laporte as one of the key players for Manchester City. He has been injured for much of this season which may explain their downturn in form compared to last season."

The Southampton team have used a number of machine learning techniques to assess teamwork values from the historic data and found that teams with higher teamwork levels are more likely to win. They then trained an optimisation method to assess the teamwork between pairs of players and compute a number of new metrics that they compare in their latest paper.
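
The paper's actual metrics are not reproduced in this article, but the underlying idea (score pairs of players from historical event data, then score a candidate line-up by summing over its pairs) can be sketched simply. The Python below is an invented illustration: the pass events are made up, and counting successful passes is only a stand-in for the richer teamwork metrics the researchers describe.

```python
from collections import Counter
from itertools import combinations

# Invented pass events: (passer, receiver)
passes = [("Laporte", "Walker"), ("Walker", "De Bruyne"), ("De Bruyne", "Sterling"),
          ("Laporte", "Walker"), ("Sterling", "De Bruyne"), ("Walker", "Sterling")]

# Direction-agnostic pair totals as a crude "teamwork" score for each pair
pair_counts = Counter(frozenset(p) for p in passes)

def lineup_teamwork(lineup):
    """Sum the pairwise scores over every pair in a candidate line-up."""
    return sum(pair_counts.get(frozenset(pair), 0) for pair in combinations(lineup, 2))

print(lineup_teamwork(["Laporte", "Walker", "De Bruyne", "Sterling"]))
```

Selecting a line-up then becomes an optimisation problem: searching over eligible squads for the combination with the highest predicted performance.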

"While this work could be used as a tool to assist football managers, we think that the approach could also be extended into other domains where teamwork between humans is important, such as emergency response or in security," Ryan says.

Ryan has also presented his work to sporting industry experts at the StatsBomb Innovation in Football Conference at Stamford Bridge in October.

Ryan's work is supported by UK Research and Innovation (UKRI) and AXA Research Fund. The work was done in collaboration with Narayan Changder (NIT Durgapur), Professor Tim Norman and Professor Gopal Ramchurn.

Team sport performance is one of several innovative AI research topics being explored in the AIC Group. In 2018, Gopal and Dr Tim Matthews revealed how machine learning algorithms can accurately predict team and player performance to finish in the top 1% of the Fantasy Premier League game, outperforming close to six million human players.

Link:
Artificial Intelligence learns the value of teamwork to form efficient football teams - University of Southampton

Artificial Intelligence will take over Liverpool's World Museum this summer – The Guide Liverpool

27/02/2020

In its first UK showing outside London, AI: More than Human surveys the creative and scientific developments within artificial intelligence (AI) through extraordinary international artworks and a plethora of interactive and immersive playful experiences.

Taking visitors on a unique and unexpected journey, this exhibition will explore the complex relationship between humans and technology in the past, the present and what we can expect in the future.

AI: More than Human will examine what it means to be human when sophisticated technology such as AI is changing so much around us, and asks big questions: What is consciousness? Will machines ever outsmart a human? How can humans and machines work collaboratively? Today, we are on the cusp of our next great era as a species, the augmented age, and this exhibition will connect the visitor to a world far beyond our natural senses.

Anne Fahy, Head of World Museum, said: "World Museum explores millions of years of Earth's history and the human activity that has shaped it. In this fascinating exhibition, we see just how long and how important the epic story of AI has been in human development.

"Featuring remarkable work by leading scientists, researchers and artists, AI: More Than Human is an unmissable taste of the breadth of creativity that is being generated and inspired by algorithms and machines. It demonstrates the opportunity for us to push our creative boundaries, and the potential of exciting collaborations between humans and machines. It allows visitors to consider their own relationship with AI, and what the future may look like.

"With so many opportunities for visitors to interact with the works, it is an exhibition for curious minds of all ages who want to play, experience and understand this ubiquitous technology."

Neil McConnon, Head of Barbican International Enterprises, said: "Barbican is delighted to have an opportunity to collaborate with World Museum, Liverpool to stage AI: More than Human. We hope it inspires, informs and provides a space to discover and reflect on the multitude of complex implications of AI on our lives, both liberating and daunting."

AI: More Than Human is divided into four sections, starting with The Dream of AI, which looks at the origins and history of AI. This section focusses on the religious traditions of Judaism and Shintoism, the sciences of Arabic alchemy and early mathematics, and Gothic philosophies. It looks at how these beliefs and philosophies continue to influence our perception of and interaction with technology today, and how our fascination with creating beings goes far back to ancient times.

Section 2, Mind Machines, explains how AI has developed through history, charting the groundbreaking work of some of AI's founding figures, such as Ada Lovelace, Charles Babbage and Alan Turing. Mind Machines documents pioneering computing moments, including when AI was used to beat a pro chess player and even a human contestant on US game show Jeopardy. A special commission explores the story of DeepMind's AlphaGo, the first computer to defeat a professional human player of Go, the Chinese strategy game with origins going back 3,000 years. In addition to these immersive and interactive installations, this section presents artworks that use and respond to the ways AI sees images or understands language and movement, such as Anna Ridler's Myriad and Mosaic Virus and Mario Klingemann's piece, Circuit Training.

Another exhibition highlight is Sony's 2018 robot puppy aibo, which gradually develops a unique and innovative personality from its database of memories, which visitors are encouraged to contribute to by interacting and playing with aibo.

Section 3, Data Worlds, examines the capability of AI to change society, as well as looking at ethical issues such as bias, truth, control and privacy. It looks at AI's role in fields such as healthcare, journalism and retail. It features Learning to See by artist Memo Akten, who has worked with Nexus Studios to create an interactive work that invites visitors to manipulate everyday objects to illustrate how a neural network (a series of algorithms) can be fooled into seeing the world as a painting. Within this section, scientist, activist and founder of the Algorithmic Justice League, Joy Buolamwini, examines racial and gender bias using facial analysis software as part of Gender Shades, a project to reveal how prejudice can find its way into technology.

The final section, Endless Evolution, looks to the future of the human race, and where artificial life fits in. It features work by Massive Attack, who encoded their seminal album Mezzanine into synthetic DNA; Justine Emard's beguiling Co(AI)xistence, exploring communication between human and machine; and Yuri Suzuki's Electronium, enabling visitors to compose with AI. This final section explores the seemingly endless possibilities of AI to shape our lives.

Threaded throughout the exhibition are specially commissioned installations where visitors are encouraged to interact and engage. These include 2065, an open-world video game set on a virtual island by Lawrence Lek; Universal Everything's Future You, an uncanny installation where visitors can interact with an AI version of themselves; Es Devlin's PoemPortraits, which brings together art, design, poetry and machine learning; and Chris Salter's Totem, a 14-metre light installation that gives the feeling of a living, breathing entity.

AI has evolved to provide many benefits in every aspect of life, from fashion to art, music, medicine and even human rights. Our relationship with it is becoming more complex, and through this playful, interactive exhibition we aim to explore how artificial intelligence now permeates our lives. Fusing innovation, art and technology, visitors are invited to immerse themselves in this multi-sensory exhibition.

Continue reading here:
Artificial Intelligence will take over Liverpool's World Museum this summer - The Guide Liverpool

Learn how to start using artificial intelligence in your newsroom (before it is too late) – Journalism.co.uk

The upcoming Newsrewired conference, taking place on 4 June at MediaCityUK, will feature a workshop where delegates will learn how to start implementing artificial intelligence (AI) in their everyday journalistic work.

The session will be led by Charlie Beckett, a professor in the Department of Media and Communications and founding director of Polis, the London School of Economics' international journalism think-tank.

Professor Beckett is the author of the study "New powers, new responsibilities. A global survey of journalism and artificial intelligence". He said that newsrooms have between two and five years to develop a meaningful strategy or risk falling behind their competitors.

"This is a marathon, not a sprint but theyve got to start running now.

"Youve got two years to start running and at least working out your route and if youre not active within five years, youre going to lose the window of opportunity. If you miss that, youll be too late," he said in an article for Journalism.co.uk.

"It's really clear if you look at other industries that AI is shaping customer behaviour. People expect personalisation, be that in retail or housing, for production, supply or content creation. They use AI because of the efficiencies that it generates and how it enhances the services or products it offers."

Charlie Beckett is currently leading the Polis Journalism and AI project. He was director of the LSE's Truth, Trust and Technology Commission that reported on the misinformation crisis in 2018.

He is the author of "SuperMedia" (Wiley Blackwell, 2008) that set out how journalism is being transformed by technological and other changes. His second book "WikiLeaks: News In The Networked Era" (Polity 2012) described the history and significance of WikiLeaks and the wider context of new kinds of disruptive online journalism.

He was an award-winning filmmaker and editor at LWT, BBC and ITN.

To take advantage of our early-bird offer, book your ticket before 28 February 2020 and save £50.

If you like our news and feature articles, you can sign up to receive our free daily (Mon-Fri) email newsletter (mobile friendly).

View post:
Learn how to start using artificial intelligence in your newsroom (before it is too late) - Journalism.co.uk