Applying Artificial Intelligence in the Fight Against The Coronavirus – HIT Consultant

Dr. Ulrik Kristensen, Senior Market Analyst at Signify Research

Drug discovery is a notoriously long, complex and expensive process requiring the concerted efforts of the world's brightest minds. The complexity of understanding human physiology and molecular mechanisms grows with every new research paper published and every new compound tested. As the world faces a new challenge in trying both to adapt to and to defend itself against the coronavirus, artificial intelligence is offering new hope that a cure might be developed faster than ever before.

In this article, we will present some of the technologies being developed and applied in today's drug discovery process, working side by side with scientists, tracking new findings, and assisting in the creation of new compounds and potential vaccines. In addition, we will examine how the industry is applying AI in the fight against the coronavirus.

Start-ups focusing on the use of artificial intelligence in drug development and clinical trials have seen significant investment in recent years, and vendors focusing specifically on drug design and discovery received the majority of the total $5.2B in funding observed between 2012 and 2019.

Information Engines

Information engines are the fundamental machinery behind applications in both drug discovery and clinical trials, serving as the basic information aggregation and synthesis layer on which the other applications draw their insights, conclusions and prescriptive functions. The information available to scientists is increasing exponentially, so the purpose of the information engines being developed today is to help scientists aggregate and keep up with all this information and pull out the data most likely to be relevant for a specific study.

The types of information going into these engines vary broadly. An advanced information engine integrates information from multiple sources such as scientific research publications, medical records, doctors' journals, biomedical information such as known drug targets, ligand information and disease-specific information, historical clinical trial data, patent information from molecules currently being investigated at global pharma companies, proprietary enterprise data from internal research studies at the individual pharma client, genomic sequencing data, radiology imaging data, cohort data and even other real-world evidence such as societal and environmental data.

In a recent analyst insight, we discussed how these information engines are being applied in clinical trials to enhance success rates and reduce associated trial costs. When it comes to the upstream processes relating to drug discovery, their purpose is to synthesize and analyze these vast amounts of information to help the scientist understand disease mechanisms and select the most promising targets, drug candidates or biomarkers; or, as we will see in the next section, to assist the drug design application in creating molecular designs or optimizing a compound with desired properties. Information is typically presented via a knowledge graph that visualizes the relationships between diseases, genes, drugs and other data points, which the researcher then uses for target identification, biomarker discovery or other research areas.
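At its simplest, a knowledge graph of this kind can be modelled as a set of subject-relation-object triples that a researcher queries for connections. The sketch below is purely illustrative: all entity and relation names are hypothetical examples, not taken from any vendor's product or real research data.

```python
from collections import defaultdict

# Toy knowledge graph as (subject, relation, object) triples.
# Every name here is invented for illustration.
triples = [
    ("GeneX", "associated_with", "DiseaseY"),
    ("DrugA", "inhibits", "GeneX"),
    ("DrugA", "approved_for", "DiseaseZ"),
    ("DiseaseY", "shares_pathway_with", "DiseaseZ"),
]

# Index edges by subject for quick neighbourhood queries.
graph = defaultdict(list)
for s, r, o in triples:
    graph[s].append((r, o))

def neighbours(entity):
    """Return every (relation, entity) edge leaving `entity`."""
    return graph[entity]

# A researcher exploring DrugA's connections:
print(neighbours("DrugA"))  # [('inhibits', 'GeneX'), ('approved_for', 'DiseaseZ')]
```

Real information engines add weights, provenance and inference on top of this basic structure, but the triple store is the common core.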

Drug Design

AI-based drug design applications are involved directly with the molecular structure of the drugs. They draw data and insights from information engines to help generate novel drug candidates, to validate or optimize drug candidates, or to repurpose existing drugs for new therapeutic areas.

For target identification, machine learning is used to predict potential disease targets, and an AI triage step then typically orders targets by chemical opportunity, safety and druggability, presenting the most promising targets first. This information is then fed into the drug design application, which optimizes the compounds with desired properties before they are selected for synthesis. Experimental data from the selected compounds can then be fed back into the model to generate additional data for optimization.
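A ranking step like the triage described above can be sketched as a weighted score over the named criteria. The weights, target names and scores below are all invented for illustration; real systems learn these from data rather than hard-coding them.

```python
# Hypothetical triage: rank candidate targets by a weighted score over
# druggability, safety and chemical opportunity (all numbers invented).
targets = {
    "TargetA": {"druggability": 0.9, "safety": 0.6, "opportunity": 0.8},
    "TargetB": {"druggability": 0.5, "safety": 0.9, "opportunity": 0.4},
    "TargetC": {"druggability": 0.7, "safety": 0.7, "opportunity": 0.9},
}
weights = {"druggability": 0.4, "safety": 0.3, "opportunity": 0.3}

def score(props):
    """Weighted sum of a target's properties."""
    return sum(weights[k] * props[k] for k in weights)

# Present targets ranked with the most promising first.
ranked = sorted(targets, key=lambda t: score(targets[t]), reverse=True)
print(ranked)  # ['TargetA', 'TargetC', 'TargetB']
```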

For drug repurposing, existing drugs approved for specific therapeutic areas are compared against possible similar pathways and targets in alternative diseases, which creates an opportunity for additional revenue from already developed pharmaceuticals. It also offers potential relief for rare disease areas where developing a new compound wouldn't be profitable. Additionally, keeping repurposing in mind during the development of a new drug, rather than adopting a disease-specific mindset, may result in more profitable multi-purpose pharmaceuticals entering the market in the coming years.

Recent substantial investment in AI for drug development has given start-ups the manpower and resources to develop their technologies. Compared with AI in medical imaging, total investment has been more than four-fold higher, even though the number of funded start-ups is equivalent between the two industries. This makes the average deal size for AI in drug development 3.5 times bigger than in medical imaging. The funding has been spent on significantly building out capacity, and the total number of employees across these AI start-ups is now close to 10,000 globally.

A strong focus for start-up vendors is to create tight partnerships with the pharma industry. For many still in the early product development stages, this gives them the ability to test and optimize their solutions and to create proof-of-concept as a basis for additional deals.

For the more established start-ups, partnerships with the pharmaceutical industry turn the initial investments into revenue in the form of subscription or consulting charges, and potential milestone payments for new drug candidates, preparing the company for further investments, IPO, acquisition or continued success as a separate company. Pharmaceutical companies with high numbers of publicly announced AI partnerships include AstraZeneca, GSK, Sanofi, Merck, Janssen, and Pfizer, but many more are actively pursuing such opportunities today.

Many AI start-ups are therefore in the phase where they have a solution ready and are either looking for further partnerships or would like to showcase their solution and capabilities. The COVID-19 pandemic has come as an important test for many of these vendors, giving them the chance to demonstrate the value of their technologies and, hopefully, help the world get through this crisis faster.

Understanding the structures of the proteins on the coronavirus's surface can form the basis of a drug or vaccine. Google DeepMind has been using its artificial intelligence engine to quickly predict the structures of six proteins linked to the coronavirus, and although these have not been experimentally verified, they may still contribute to the research ultimately leading to therapeutics.

Hong Kong-based Insilico Medicine took the next step in finding possible treatments, using its AI algorithms to design new molecules that could potentially limit the virus's ability to replicate. Using existing data on the similar virus which caused the SARS outbreak in 2003, it published structures of six new molecules that could potentially treat COVID-19. Also, Germany-based Innoplexus has used its drug discovery information engine to design a novel molecule candidate with a high binding affinity to a target protein on the coronavirus while maintaining drug-likeness criteria such as bioavailability, absorption and toxicity. Other AI players following similar strategies to identify new targets and molecules include Pepticom, Micar Innovation, Acellera, MAbSilico, InveniAI and Iktos, and further initiatives are announced daily.

It is important to remember that even if AI helps researchers identify targets and design new molecules faster, clinical testing and regulatory approval will still take about a year. So, while waiting for a vaccine or a new drug to be developed, other teams are looking at existing drugs on the market that could be repurposed to treat COVID-19. BenevolentAI used its machine learning-based information engine to search for already approved drugs that could block the infection process. After analyzing chemical properties, medical data and scientific literature, it identified Baricitinib, typically used to treat moderate and severe rheumatoid arthritis, as a potential candidate to treat COVID-19. The theory is that the drug would prevent the virus from entering cells by inhibiting endocytosis and, in combination with antiviral drugs, would reduce viral infectivity and replication and dampen the inflammatory response that causes some of the COVID-19 symptoms.

But although a lot is happening in the industry right now and there are many suggestions as to what might work as a therapy for COVID-19, both from existing drugs already on the market and from new molecules being designed by the AI drug developers, the scientific and medical community, as well as regulators, will not neglect the scientific method. Suggestions and new ideas are essential for progress, but so is rigor in testing and validation of hypotheses. A systematic approach, fuelled by accelerated findings using AI and bright minds in collaboration, will lead to a better outcome.

About Dr. Ulrik Kristensen

Dr. Ulrik Kristensen is a Senior Market Analyst at Signify Research, an independent supplier of market intelligence and consultancy to the global healthcare technology industry. Ulrik is part of the Healthcare IT team and leads the research covering Drug Development, Oncology, and Genomics. Ulrik holds an MSc in Molecular Biology from Aarhus University and a Ph.D. from the University of Strasbourg.


Artificial Intelligence in Retail Market Projected to Grow with a CAGR of 35.9% Over the Forecast Period, 2019-2025 – ResearchAndMarkets.com – Yahoo…

The "Artificial Intelligence in Retail Market by Product (Chatbot, Customer Relationship Management), Application (Programmatic Advertising), Technology (Machine Learning, Natural Language Processing), Retail (E-commerce and Direct Retail)- Forecast to 2025" report has been added to ResearchAndMarkets.com's offering.

The artificial intelligence in retail market is expected to grow at a CAGR of 35.9% from 2019 to 2025 to reach $15.3 billion by 2025.
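The report's headline figures can be sanity-checked with the standard compound-growth formula: a market reaching $15.3 billion in 2025 at a 35.9% CAGR from 2019 implies a base-year value of roughly $2.4 billion. Note the 2019 base is our back-calculated estimate, not a figure stated in the report.

```python
# Back-calculate the implied 2019 market size from the report's
# stated 2025 value and CAGR: base = final / (1 + CAGR) ** years.
final_value = 15.3          # $B in 2025, per the report
cagr = 0.359                # 35.9% compound annual growth rate
years = 2025 - 2019         # 6 years of compounding

implied_2019_base = final_value / (1 + cagr) ** years
print(round(implied_2019_base, 2))  # ≈ 2.43 ($B)
```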

The growth in the artificial intelligence in retail market is driven by several factors, such as the rising number of internet users, increasing adoption of smart devices, rapid adoption of technological advances across the retail chain, and increasing adoption of multi-channel or omnichannel retailing strategies. In addition, factors such as increasing awareness of AI and big data & analytics, the consistent proliferation of the Internet of Things, and enhanced end-user experience are also contributing to market growth. However, the high cost of transformation and a lack of infrastructure are the major factors expected to hinder market growth during the forecast period.

The study offers a comprehensive analysis of the global artificial intelligence in retail market with respect to various types.

The global artificial intelligence in retail market is segmented on the basis of product (chatbot, customer relationship management, inventory management), application (programmatic advertising, market forecasting), technology (machine learning, natural language processing, computer vision), retail (e-commerce and direct retail), and geography.

The predictive merchandising segment accounted for the largest share of the overall artificial intelligence in retail market in 2019, mainly due to growing demand among retailers for customer behavior tracking solutions. However, the in-store visual monitoring and surveillance segment is expected to witness rapid growth during the forecast period, as it helps reduce shoplifting, one of the major sources of financial loss for stores.

An in-depth analysis of the geographical scenario of the market provides detailed qualitative and quantitative insights into five regions: North America, Europe, Asia Pacific, Latin America, and the Middle East and Africa. In 2019, North America commanded the largest share of the global artificial intelligence in retail market, followed by Europe and Asia Pacific. The large share of this region is mainly attributed to its open-minded approach to smart technologies, a high technology adoption rate, the presence of key players and start-ups, and increased internet access. However, factors such as rapid growth in spending power, a young population, and government initiatives supporting digitalization are helping Asia Pacific register the fastest growth in the global artificial intelligence in retail market.

Key Topics Covered:

1. Introduction

1.1. Market Definition

1.2. Market Ecosystem

1.3. Currency and Limitations

1.3.1. Currency

1.3.2. Limitations

1.4. Key Stakeholders

2. Research Methodology

2.1. Research Approach

2.2. Data Collection & Validation

2.2.1. Secondary Research

2.2.2. Primary Research

2.3. Market Assessment

2.3.1. Market Size Estimation

2.3.2. Bottom-Up Approach

2.3.3. Top-Down Approach

2.3.4. Growth Forecast

2.4. Assumptions for the Study

3. Executive Summary

3.1. Overview

3.2. Market Analysis, by Product Offering

3.3. Market Analysis, by Application

3.4. Market Analysis, by Learning Technology

3.5. Market Analysis, by Type

3.6. Market Analysis, by End-User

3.7. Market Analysis, by Deployment Type

3.8. Market Analysis, by Geography

3.9. Competitive Analysis

4. Market Insights

4.1. Introduction

4.2. Market Dynamics

4.2.1. Drivers

4.2.2. Restraints

4.2.3. Opportunities

4.2.4. Challenges

4.2.5. Trends

5. Artificial Intelligence in Retail Market, by Product Type

5.1. Introduction

5.2. Solutions

5.2.1. Chatbot

5.2.2. Recommendation Engines

5.2.3. Customer Behaviour Tracking

5.2.4. Visual Search

5.2.5. Customer Relationship Management

5.2.6. Price Optimization

5.2.7. Supply Chain Management

5.2.8. Inventory Management

5.3. Services

5.3.1. Managed Services

5.3.2. Professional Services

6. Artificial Intelligence in Retail Market, by Application


6.1. Introduction

6.2. Predictive Merchandising

6.3. Programmatic Advertising

6.4. In-Store Visual Monitoring & Surveillance

6.5. Market Forecasting

6.6. Location-Based Marketing

7. Artificial Intelligence in Retail Market, by Learning Technology

7.1. Introduction

7.2. Machine Learning

7.3. Natural Language Processing

7.4. Computer Vision

8. Artificial Intelligence in Retail Market, by Type

8.1. Introduction

8.2. Offline Retail

8.2.1. Brick & Mortar Stores

8.2.2. Supermarkets & Hypermarkets

8.2.3. Specialty Stores

8.3. Online Retail

9. Artificial Intelligence in Retail Market, by End-User

9.1. Introduction

9.2. Food & Groceries

9.3. Health & Wellness

9.4. Automotive

9.5. Electronics & White Goods

9.6. Fashion & Clothing

9.7. Other

10. Artificial Intelligence in Retail Market, by Deployment Type

10.1. Introduction

10.2. Cloud

10.3. On-Premise

11. Global Artificial Intelligence in Retail Market, by Geography

11.1. Introduction

11.2. North America

11.3. Europe

11.4. Asia-Pacific

11.5. Latin America

11.6. Middle East & Africa

12. Competitive Landscape

12.1. Competitive Growth Strategies

12.1.1. New Product Launches


The race problem with AI: 'Machines are learning to be racist' – Metro.co.uk

Artificial intelligence (AI) is already deeply embedded in so many areas of our lives. Society's reliance on AI is set to increase at a pace that is hard to comprehend.

AI isn't the kind of technology that is confined to futuristic science fiction movies: the robots you've seen on the big screen that learn how to think, feel, fall in love, and subsequently take over humanity. No, AI right now is much less dramatic and often much harder to identify.

At its simplest, artificial intelligence is machine learning. And our devices do this all the time. Every time you input data into your phone, your phone learns more about you and adjusts how it responds to you. Apps and computer programmes work the same way too.

Any digital programme that displays learning, reasoning or problem solving is displaying artificial intelligence. So even something as simple as a game of chess on your desktop counts as artificial intelligence.

The problem is that the starting point for artificial intelligence always has to be human intelligence. Humans programme the machines to learn and develop in a certain way which means they are passing on their unconscious biases.

The tech and computer industry is still overwhelmingly dominated by white men. In 2016, there were ten large tech companies in Silicon Valley, the global epicentre of technological innovation, that did not employ a single black woman. Three companies had no black employees at all.

When there is no diversity in the room, it means the machines are learning the same biases and internal prejudices of the majority white workforces that are developing them.

And, with a starting point that is grounded in inequality, machines are destined to develop in ways that perpetuate the mistreatment of and discrimination against people of colour. In fact, we are already seeing it happen.

In 2017, a video went viral on social media of a soap dispenser that would only automatically release soap onto white hands.

The dispenser was created by a company called Technical Concepts, and the flaw occurred because no one on the development team thought to test the product on dark skin.

A study in March last year found that driverless cars are more likely to drive into black pedestrians, again because their technology has been designed to detect white skin, so they are less likely to stop for black people crossing the road.

It would be easy to chalk these high-profile viral incidents up as individual errors, but data and AI specialist Mike Bugembe says it would be a mistake to think of these problems in isolation. He says they are indicative of a much wider issue with racism in technology, one that is likely to spiral in the next few years.

'I can give you so many examples of where AI has been prejudiced or racist or sexist,' Mike tells Metro.co.uk.

'The danger now is that we are actually listening to and accepting the decisions of machines. When "computer says no", we increasingly accept that as gospel. So we're listening now to something that is perpetuating, or even accentuating, the biases that already exist in society.'

Mike says the growth of AI can have much bigger, systemic ramifications for the lives of people of colour in the UK. The implications of racist technology go far beyond who does and who doesn't get to use hand soap.

AI is involved in decisions about where to deploy police officers and in deciding who is likely to take part in criminal activity and reoffend. He says that in the future we will increasingly see AI playing a part in things like hospital admissions, school exclusions and HR hiring processes.

Perpetuating racism in these areas has the potential to cause serious, long-lasting harm to minorities. Mike says it's vital that more black and minority people enter this sector to diversify the pool of talent and help to eradicate the problematic biases.

'If we don't have a system that can see us and give us the same opportunities, the impact will be huge. If we don't get involved in this industry, our long-term livelihoods will be impacted,' explains Mike.

'It's no secret that within six years, pretty much 98% of human consumer transactions will go through machines. And if these machines don't see us, minorities, then everything will be affected for us. Everything.'

An immediate concern for many campaigners, equality activists and academics is the deployment and roll-out of facial recognition as a policing power.

In February, the Metropolitan Police began operational use of facial recognition CCTV, with vans stationed outside a large shopping centre in east London, despite widespread criticism about the methods.

A paper last year found that using artificial intelligence to fight crime could raise the risk of profiling bias. The research warned that algorithms might judge people from disadvantaged backgrounds as a greater risk.

'The Metropolitan Police is the largest police force outside of China to roll it out,' explains Kimberly McIntosh, senior policy officer at the Runnymede Trust. 'We all want to stay safe, but giving the green light to letting dodgy tech turn our public spaces into surveillance zones should be treated cautiously.'

Kimberly points to research that shows that facial recognition software has trouble identifying the faces of women and black people.

'Yet roll-outs in areas like Stratford have significant black populations,' she says. 'There is currently no law regulating facial recognition in the UK. What is happening to all that data?'

'93% of the Met's matches have wrongly flagged innocent people. The Equality and Human Rights Commission is right: the use of this technology should be paused. It is not fit for purpose.'

Kimberly's example shows how the inaccuracies and inherent biases of artificial intelligence can have real-world consequences for people of colour; in this case, it is already contributing to their disproportionate criminalisation.

The ways in which technological racism could personally and systemically harm people of colour are numerous and wildly varied.

Racial bias in technology already exists in society, even in the smaller, more innocuous ways that you might not even notice.

'There was a time where if you typed "black girl" into Google, all it would bring up was porn,' explains Mike.

'Google is a trusted source of information, so we can't overstate the impact that search results like these have on how people perceive the world and minorities. Is it any wonder that black women are persistently hypersexualised when online search results are backing up these ideas?'

'Right now, if you Google "cute baby", you will only see white babies in the results. So again, there are these more pervasive messages being pushed out there that speak volumes about the worth and value of minorities in society.'

Mike is now raising money to gather data scientists together for a new project. His aim is to train a machine that will be able to make sure other machines aren't racist.

'We need diversity in the people creating the algorithms. We need diversity in the data. And we need approaches to make sure that those biases don't carry on,' says Mike. 'So, how do you teach a kid not to be racist? The same way you will teach a machine not to be racist, right?'

'Some companies say, well, we don't put race in our feature set, which is the data used to train the algorithms. So they think it doesn't apply to them. But that is just as meaningless and unhelpful as saying they don't see race.'

Just as humans have to acknowledge race and racism in order to beat it, so too do machines, algorithms and artificial intelligence.

'If we are teaching a machine about human behaviour, it has got to include our prejudices, and strategies that spot them and fight against them.'

Mike says that discussing racism and existing biases can be hard for people with power, particularly when their companies have a distinct lack of employees with relevant lived experiences. But he says making it less personal can actually make it easier for companies to address.

'The current definition of racism is very individual and very easy to shrug off. People can so easily say, "Well, that's not me, I'm not racist," and that's the end of that conversation,' says Mike.

'If you change the definition of racism to a pattern of behaviour, like an algorithm itself, that's a whole different story. You can see what is recurring, the patterns that pop up. Suddenly, it's not just me that's racist, it's everything. And that's the way it needs to be addressed on a wider scale.'

All of us are increasingly dependent on technology to get through our lives. It's how we connect with friends, pay for food, order new clothes. And on a wider scale, technology already governs so many of our social systems.

Technology companies must ensure that in this race towards a more digital-led world, ethnic minorities are not being ignored or treated as collateral damage.

Technological advancements are meaningless if their systems only serve to uphold archaic prejudices.

This series is an in-depth look at racism in the UK in 2020.

We aim to look at how, where and why racist attitudes and biases impact people of colour from all walks of life.

It's vital to improve the language we have to talk about racism and start the difficult conversations about inequality.

We want to hear from you - if you have a personal story or experience of racism that you would like to share get in touch: metrolifestyleteam@metro.co.uk



New blood test study uses artificial intelligence to identify cancer. But it's not ready for patients yet. – Cancer Research UK – Science Blog

Credit: Vascular Development Laboratory and EM Unit

A blood test that can detect over 50 cancer types is big news this week.

There's a lot of excitement around the latest research, published in Annals of Oncology. And it's easy to see why.

Scientists have used machine learning to help identify if someone has cancer based on tiny bits of tumour DNA floating in their blood. Which could open the door to a blood test that can detect and identify multiple types of cancer.

But it's not there yet. And in the blood test buzz, some news articles have missed out crucial details.

The team looked for differences in the DNA shed from cancer cells and healthy cells into the blood.

They focused on differences in chemical tags that sit on top of DNA in cells, called methyl groups. These groups are usually spread evenly across the DNA in cells, but in cancer cells they tend to cluster at different points. And it's this distinction the scientists wanted to exploit.

They trained a machine learning algorithm, a type of artificial intelligence that picks up patterns and signals, to detect differences between methylation patterns in DNA from cancer and non-cancer cells.

The algorithm was trained on 3,052 samples from people with and without cancer from two large databases.

And once the program was fired up and ready to go, the team tested its cancer-spotting ability on a different set of 1,264 samples, half of which were from people with cancer.
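The train-then-test workflow described above can be illustrated with a deliberately tiny toy model. Everything below is invented: a single made-up "methylation clustering" score stands in for genome-wide methylation patterns, and a simple threshold rule stands in for the study's far richer machine learning model. The point is only the separation between training data and held-out test data.

```python
# Toy illustration of the train/test split workflow. Feature values
# and the threshold rule are invented; the real study used
# genome-wide methylation patterns and a much richer model.
train = [  # (methylation_clustering_score, has_cancer)
    (0.9, True), (0.8, True), (0.7, True),
    (0.2, False), (0.3, False), (0.1, False),
]

# "Training": place the decision threshold midway between class means.
pos = [x for x, y in train if y]
neg = [x for x, y in train if not y]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(score):
    """Classify a sample as cancer if its score exceeds the threshold."""
    return score > threshold

# Evaluate on a held-out set the model never saw during training.
test = [(0.85, True), (0.15, False), (0.6, True), (0.4, False)]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(round(threshold, 2), accuracy)
```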

Any test with the goal of being able to detect cancers at their earliest stages in people without symptoms must strike the right balance between picking up cancer (sensitivity) and not giving false positives (specificity). We've blogged before about what makes a good cancer test, as well as the efforts to develop a cancer blood test.

How do you assess a cancer test?

Researchers look at three main things when assessing a new diagnostic test: its specificity, its sensitivity, and how both hold up in the population the test will actually be used in.

Firstly, the good news: fewer than 1% of people without cancer were wrongly identified as having the disease. Which is a good sign for the specificity of this test.

And when it came to detecting cancer, across all types of cancer, the test correctly identified the disease in 55% of cases. This is a measure of the test's sensitivity.

But there was a huge variation in sensitivity depending on the type of cancer and how advanced the disease was. The test was better at picking up more advanced cancer, which makes sense more advanced cancers typically shed more DNA into the bloodstream.

If we look at the numbers, across all cancer types the test correctly detected the disease in 93% of those with stage 4 cancer, but only 18% of early, stage 1 cancers.
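The two measures discussed above are simple ratios over a test's confusion counts. The functions below show the standard definitions; the example counts are invented round numbers chosen to match the percentages reported above, not the study's actual raw data.

```python
# Sensitivity and specificity from raw confusion counts.
# Example counts are invented for illustration.
def sensitivity(true_pos, false_neg):
    """Fraction of people with cancer that the test catches."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of cancer-free people the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# e.g. 55 of 100 cancers detected, 99 of 100 healthy people cleared:
print(sensitivity(55, 45))  # 0.55, matching the 55% overall figure
print(specificity(99, 1))   # 0.99, i.e. under 1% false positives
```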

An important consideration is that the study was only testing if the algorithm could detect cancer in patients who were already known to have cancer. According to the researchers, these figures may change if the test was used on a wider, general population.

Encouragingly for a multicancer test, when the researchers looked at a smaller number of samples to explore if the test helped them identify where the cancer was growing, the algorithm was able to predict the location in 96% of samples, and it was accurate in 93%.

First things first: although the sample numbers are big, they become a lot smaller when you break them down by cancer type and stage. Some cancer types were particularly poorly represented, with only 1 or 2 samples included in the final analysis, so there's more work to do there. Based on this, it's a bit too soon to say that the test can pick up 50 cancer types.

And if the plan is to use this as a screening tool, then the researchers will need to do more to study people who didn't have symptoms when they were diagnosed. The current study included people who were symptomatic as well as people without symptoms.

And the participant data lacked variation in age, race and ethnicity. Between 83 and 87% of all the samples used to train and test the algorithm were Caucasian.

The big conclusion is that these results are encouraging and should be taken forward into bigger studies. But it's important to put the results in context: they're a step in the right direction, and there are a lot of steps between this study and a fully-fledged cancer test.

According to the research team, they plan to validate the results using samples from US and UK studies, as well as to begin to examine whether the test could be used to screen for cancer. We look forward to seeing the results.

Our head of early detection research, Dr David Crosby, sums it up nicely: 'Although this test is still at an early stage of development, the initial results are encouraging. And if the test can be fine-tuned to be more efficient at catching cancers in their earliest stages, it could become a tool for early detection.

'But more research is needed to improve the test's ability to catch early cancers, and we still need to explore how it might work in a real cancer screening scenario.'

Lilly



Artificial Intelligence: IDTechEx Research on State-Of-The-Art and Commercialisation Status in Diagnostics and Triage – Yahoo Finance

BOSTON, March 31, 2020 /PRNewswire/ -- Artificial intelligence (AI) is revolutionizing medical diagnostics. State-of-the-art results have already demonstrated that software can achieve fast and accurate image-based diagnostics for various conditions affecting the skin, eye, ear, lung, breast, and so on. These technological advancements can help automate the diagnosis and triage processes: speeding up referrals, especially in urgent cases; freeing up expert resources; offering the best accuracy everywhere regardless of skill levels; and making these processes more widely available. This is a ground-breaking development with far-reaching consequences. Naturally, many innovators are scrambling to capitalize on these advancements.

The report "Digital Health & Artificial Intelligence 2020: Trends, Opportunities, and Outlook" from emerging technology research firm IDTechEx examines this trend. It considers the movement towards digital and AI applications in health, and outlines the state of the art in AI-based diagnosis of various conditions affecting the skin, eye, heart, breast, brain, lung, blood, genetic disorders and so on. The data sources employed are diverse, including dermoscopic images, fundus images, OCT, CT, CTA, echocardiograms, electrocardiograms, mammography, pathology slides, low-resolution mobile phone pictures and more. The report then identifies and highlights companies seeking to capitalize on these technology advances to automate the diagnostic and triage process.

Furthermore, this report considers the trend of digital health more generally. It provides a detailed overview of the ecosystem and offers insights into the key trends, opportunities and outlooks for all aspects of digital health, including telehealth and telemedicine, remote patient monitoring, digital therapeutics (digiceuticals / software as a medical device), diabetes management, consumer genetic testing, the smart home as a carer, and AI in diagnostics.

Ground-breaking technology

Significant funding is flowing to start-ups and the R&D teams of large corporations that develop AI tools to accelerate and/or improve the detection and classification of various diseases based on numerous data sources, ranging from RGB images to CT scans, ECG signals, mammograms and pathology slides. State-of-the-art results demonstrate that software can do these tasks faster, cheaper, and often more accurately than trained experts and professionals.

This is an important development which, if successful, can have far-reaching consequences: it can make diagnostics much more widely available, and it can free up medical experts' time to focus on more complex tasks that currently sit beyond the capabilities of AI-based automation. The technology is making leaps forward today, but it is only one piece of the puzzle, and many other challenges will need to be overcome before such software tools are widely adopted. However, the direction of travel is clear.

This trend is on the rise today because (a) the availability of digitized medical data sources is rapidly increasing, offering excellent algorithm training feedstock, and (b) advancements in AI algorithms, especially trained deep neural networks, are enabling software to tackle tasks it hitherto could not.


The IDTechEx report "Digital Health & Artificial Intelligence 2020: Trends, Opportunities, and Outlook" outlines many such advancements and identifies some of the key companies pursuing each opportunity. The remainder of this article briefly outlines two specific cases: eye disease and skin disease.

Eye Disease

Diabetic retinopathy is a complication of diabetes that affects the eye. Researchers from India have recently shown that software can accurately interpret retinal fundus photographs, enabling a large-scale screening program to detect diabetic retinopathy. The software is trained to make multiple binary classifications, allocating a risk level to each patient. The algorithm was trained and tuned on more than 140k images, and it matched or exceeded the sensitivity and specificity achieved by trained human graders, reaching 92.1% sensitivity and 95.2% specificity.
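The sensitivity and specificity figures quoted throughout this article follow from the standard confusion-matrix definitions. A minimal sketch in Python; the counts below are invented to reproduce the quoted percentages, not the study's actual data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # fraction of diseased eyes flagged
    specificity = tn / (tn + fp)  # fraction of healthy eyes cleared
    return sensitivity, specificity

# Hypothetical counts for a screening run (illustrative only):
sens, spec = sensitivity_specificity(tp=921, fn=79, tn=952, fp=48)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
# prints sensitivity=92.1%, specificity=95.2%
```

The trade-off between the two is what the screening-program designer tunes: a lower decision threshold catches more disease (higher sensitivity) at the cost of more false alarms (lower specificity).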

Naturally, there is a strong business case here, and many are seeking to capitalize on it. One example is IDx, based in Iowa in the US, which has designed and developed an algorithm to detect diabetic retinopathy. Its AI system achieves a sensitivity of 87% and a specificity of 90%. As early as 2017, it was tested at 10 sites across the US on 900 patients.

A very insightful test in eye clinics is OCT (optical coherence tomography), which creates high-resolution (5 µm) 3D maps of the back of the eye that require expert analysis to interpret. OCT is now one of the most common imaging procedures, with 5.35 million OCT scans performed in the US Medicare population in 2014 alone. This creates a backlog in processing and triage, and such delays can be harmful when they cause avoidable treatment delays for urgent cases.

DeepMind (Google) has demonstrated an algorithm that can automate the triage process based on 3D OCT images. Its design has some unique features: it consists of two stages, (1) a segmentation network and (2) a classification network. The first network outputs a labelled tissue-segmentation map. From the segmented map, the second network outputs a diagnosis probability for over 50 sight-threatening eye conditions and provides a referral suggestion. The first network was trained on 877 sparsely, manually segmented images and the second on 14,884 tissue maps with confirmed diagnoses and referral decisions. This database is one of the best-curated medical eye databases worldwide.

This two-stage design is beneficial because when the OCT machine or image definition changes, only the first network needs to be retrained, which helps the algorithm become more universally applicable. In an end-to-end network, the entire model would need to be retrained.
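The two-stage design described above can be sketched as a pipeline. The classes below are illustrative placeholders, not DeepMind's actual networks; the point is the interface: only the segmentation stage ever sees raw scanner output, so a new scanner only requires retraining stage one.

```python
# Placeholder two-stage triage pipeline (illustrative, not DeepMind's code).

class SegmentationNet:
    """Stage 1: raw OCT scan -> device-independent tissue map."""
    def predict(self, oct_scan):
        # Placeholder: label every voxel with tissue class 0 (background).
        return [[0 for _ in row] for row in oct_scan]

class ClassificationNet:
    """Stage 2: tissue map -> referral-class probabilities."""
    CONDITIONS = ["urgent", "semi-urgent", "routine", "observation"]

    def predict(self, tissue_map):
        # Placeholder: uniform probabilities over referral classes.
        p = 1.0 / len(self.CONDITIONS)
        return {c: p for c in self.CONDITIONS}

def triage(oct_scan, seg_net, cls_net):
    """End-to-end triage: scan -> tissue map -> referral suggestion."""
    tissue_map = seg_net.predict(oct_scan)
    probs = cls_net.predict(tissue_map)
    return max(probs, key=probs.get)  # refer under the most probable class

scan = [[0.1, 0.2], [0.3, 0.4]]  # toy 2x2 "scan"
decision = triage(scan, SegmentationNet(), ClassificationNet())
```

Swapping in a new scanner means replacing only `SegmentationNet`; `ClassificationNet` keeps consuming the same tissue-map representation, which is the portability benefit the article describes.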

DeepMind demonstrated that the performance of its AI in making referral recommendations reaches or exceeds that of experts across a range of sight-threatening retinal diseases. The error rate on referral decisions is 5.5%, matching or exceeding specialists even when the specialists are given fundus images and patient notes in addition to the OCT. Furthermore, the AI beat all retina specialists and optometrists on sensitivity and specificity in referring urgent cases. This is clearly only a first step, but an important one that truly opens the door.

Skin Disease

Researchers at Heidelberg have already demonstrated that trained deep neural networks, in this case based on Google's Inception v4 CNN architecture, can recognize melanoma in dermoscopy images. They showed that the software achieves 10 percentage points more specificity than human clinicians when its sensitivity is set to match theirs, and that the machine can reach a high 95% sensitivity at 63.8% specificity.
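Comparing specificity "at matched sensitivity", as the Heidelberg study does, amounts to choosing an operating point on the ROC curve: sweep the decision threshold until the target sensitivity is met, then read off the specificity. A minimal sketch, using invented scores rather than the study's data:

```python
def specificity_at_sensitivity(scores, labels, target_sensitivity):
    """Find the highest threshold whose sensitivity meets the target,
    then report the specificity at that threshold.
    scores: model melanoma probabilities; labels: 1 = melanoma."""
    positives = sum(labels)
    negatives = len(labels) - positives
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        if tp / positives >= target_sensitivity:
            tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
            return tn / negatives
    return 0.0

# Invented scores for illustration, not the Heidelberg study's data:
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    1,    0,    1,    0,    0,    0]
spec = specificity_at_sensitivity(scores, labels, target_sensitivity=0.75)
```

Raising the target sensitivity generally forces the threshold down and the specificity with it, which is why the study fixes sensitivity before comparing specificity between machine and clinicians.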

This is a promising result showing that such diagnostics can be automated. Indeed, multiple companies are automating cancer detection. One example is SkinVision, from the Netherlands, which seeks to offer a skin cancer risk rating based on relatively low-quality smartphone images. It trained its algorithm on more than 131k images from 31k users in multiple countries, with the risk rankings of the training images annotated by dermatologists. Studies show that the algorithm scores 95.1% sensitivity in detecting (pre)malignant conditions at 78.3% specificity. These are good results, although the specificity may need to improve, as false alarms could unnecessarily worry some patients.

The business cases are not limited to cancer detection. Haut.AI is an Estonian company that proposes using images to track skin dynamics and offer recommendations. For example, its AI can act as a simple and accurate predictor of chronological age using just anonymized images of eye corners. The networks were trained on 8,414 anonymized high-resolution images of eye corners labelled with the correct chronological age. For people aged 20 to 80 in a specific population, the machine reaches a mean absolute error of 2.3 years.
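The 2.3-year figure is a mean absolute error (MAE): the average absolute gap between predicted and true ages. A quick sketch with invented predictions:

```python
def mean_absolute_error(predicted, actual):
    """Mean absolute error, the metric behind the '2.3 years' figure."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Invented predictions for illustration, not Haut.AI's data:
mae = mean_absolute_error([31, 44, 58], [29, 47, 60])
# mae -> (2 + 3 + 2) / 3, roughly 2.33 years
```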

There are naturally many more start-ups active in this field. Some firms focus on health diagnostics, whilst others seek to use AI to create tailored skincare regimes and product recommendations. The path to market, and the regulatory barriers, will naturally differ for each target function.

To learn more about this exciting field, please see IDTechEx's report "Digital Health & Artificial Intelligence 2020: Trends, Opportunities, and Outlook" by visiting www.IDTechEx.com/digitalhealth. This report outlines the state of the art in the use of AI in diagnosing a range of medical conditions, identifies and discusses the progress of various companies seeking to commercialize such technological advances, and considers the trend of digital health more generally.

To connect with others on this topic, register for The IDTechEx Show! USA 2020, November 18-19 2020, Santa Clara, USA. Presenting the latest emerging technologies at one event, with six concurrent conferences and a single exhibition covering 3D Printing and 3D Electronics, Electric Vehicles, Energy Storage, Graphene & 2D Materials, Healthcare, Internet of Things, Printed Electronics, Sensors and Wearable Technology. Please visit http://www.IDTechEx.com/USA to find out more.

IDTechEx guides your strategic business decisions through its Research, Consultancy and Event products, helping you profit from emerging technologies. For more information on IDTechEx Research and Consultancy contact research@IDTechEx.com or visit http://www.IDTechEx.com.

Media Contact:

Jessica Abineri, Marketing Coordinator, press@IDTechEx.com, +44-(0)-1223-812300

View original content:http://www.prnewswire.com/news-releases/artificial-intelligence-idtechex-research-on-state-of-the-art-and-commercialisation-status-in-diagnostics-and-triage-301032810.html

SOURCE IDTechEx


58m Tempo by DLBA: superyacht optimized with artificial intelligence in every system – Yacht Harbour

DLBA Naval Architects has created a new artificial intelligence concept that makes it possible for some systems to be operated without any human intervention.

DLBA has selected a 58m superyacht concept to develop internally as an autonomous yacht. The result, TEMPO, will be a study in all the vessel systems where artificial intelligence can be used to enhance an owner's experience onboard.

There are three main areas where autonomous technology can be used in the maritime world - navigational autonomy, equipment health monitoring, and mechanical and electrical systems automation. All reduce the need for human input while increasing efficiency.

Navigational autonomy relieves the workload on the vessel operator, and unmanned vessels have been operating in the commercial and military space for years. Hull, mechanical and electrical automation is like having an onboard engineering team at your fingertips. By ensuring elements at the sub-system level are AI-ready, the vessel can be kept operating efficiently at peak performance.

The number and complexity of auxiliary systems and equipment onboard yachts is increasing year-on-year, and with that comes an increasing demand on the crew's time to interpret feedback from those systems; equipment health monitoring lessens this demand.


The Limitations of Artificial Intelligence in Businesses – AZoRobotics

Written by AZoRobotics, Apr 1, 2020

Businesses are often tempted to employ a range of technologies, including artificial intelligence (AI), to enhance performance, reduce labor costs, and improve the bottom line, which is a logical aim.


However, before opting for automation that can potentially risk the jobs of humans, business owners should carefully assess their operations.

According to Chris Meyer, a professor of practice and the director of undergraduate education at the Lally School of Management at Rensselaer Polytechnic Institute, the same method should not be used when applying AI to each business.

Meyer has studied this topic and has now detailed his findings in a recent conceptual paper published in a special issue of the Journal of Service Management on AI and Machine Learning in Service Management.

AI has the potential to upend our ideas about what tasks are uniquely suited to humans, but poorly implemented or strategically inappropriate service automation can alienate customers, and that will hurt businesses in the long term.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Based on Meyer's findings, the decision to utilize AI or automation has to be strategic. For example, if a company competes by providing an array of service offerings that shift from one client to another, or by offering a considerable amount of human interaction, then it will experience a lower success rate if human experts are replaced with AI technologies.

Meyer further observed that the reverse is also true: Businesses that restrict customer interaction and choice will witness better success if they decide to automate.

Business leaders planning to migrate to automation should cautiously assess their strategies for handling knowledge resources. Before investing in AI, companies should first understand whether it is a strategically viable option to use algorithms and digital technologies in the place of human interaction and judgment.

The ideas are of use to managers, as they suggest where and how to use automation or human service workers based on ideas that are both sound and practical. Managers need guidance. Like any form of knowledge, AI and all forms of service automation have their place, but managers need good models to know where that place is.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Meyer also established that in businesses where reputation and trust are vital to fostering and sustaining a client base, people will probably be more effective than automated technologies.

On the other hand, in businesses where human biases are specifically dangerous to the service provision, AI will serve as a comparatively better tool for companies.

Meyer further stressed that many businesses will eventually use a combination of automation and human skills to compete effectively. Even AI that can manage highly complicated jobs works optimally alongside humans, and vice versa.

Automation and human workers can and should be used together. But the extent of automation must fit with the business's strategic approach to customers.

Chris Meyer, Professor of Practice and Director of Undergraduate Education, The Lally School of Management, Rensselaer Polytechnic Institute

Source: https://rpi.edu/


VA Looking to Expand Usage of Artificial Intelligence Data – GovernmentCIO Media

The agency is looking at how to best apply curated data sets to new use cases.

The Department of Veterans Affairs is closer to expanding its use of artificial intelligence and developing novel use cases.

Looking back on the early stages of the VA's newly launched artificial intelligence program, the department's Director of AI, Gil Alterovitz, noted ongoing questions about how best to leverage AI data sets for secondary uses.

"One of the interesting challenges is often that data is collected for maybe one reason, and it may be used for analyzing and finding results for that one particular reason. But there may be other uses for that data as well. So when you get to secondary uses you have to examine a number of challenges," he said at AFCEA's Automation Transformation conference.

Some of the most pressing concerns the VA's AI program has encountered include how best to apply curated data sets to newfound use cases, as well as how to properly navigate consent of use for proprietary medical data.

Considering the specificity of use cases, particularly for advanced medical diagnostics and predictive analytics, Alterovitz has proposed releasing broader ecosystems of data sets that can be chosen and applied depending on the demands of specific AI projects.

"There's a lot to think about with data sets and how they work together. Rather than release one data set, consider releasing an ecosystem of data sets that are related," he said. "Imagine, for example, someone is searching for a trial you have information about. Consider the patient looking for the trial, the physician, the demographics, pieces of information about the trial itself, where it's located. Having all that put together makes for an efficient use case and allows us to better work together."

Alterovitz also discussed the value of combining structured and unstructured data sets in AI projects, a methodology that Veterans Affairs has found to provide stronger results than using structured data alone.

"When you look at unstructured data, there have been a number of studies in health care looking at medical records where if you look at only structured data or only unstructured data individually, you don't get as much of a predictive capability, whether for diagnostics or prognostics, as by combining them," he said.
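The combination Alterovitz describes can be sketched as simple feature concatenation: turn free-text notes into crude keyword features and join them with structured fields in one vector for a downstream model. The keywords and fields below are invented for illustration; real systems use far richer text representations.

```python
# Illustrative structured + unstructured feature combination.
KEYWORDS = ["pain", "smoker", "fatigue"]  # hypothetical note keywords

def text_features(note):
    """Crude unstructured features: keyword presence flags."""
    note = note.lower()
    return [1.0 if kw in note else 0.0 for kw in KEYWORDS]

def combined_features(structured, note):
    """structured: numeric fields (e.g. age, BMI);
    returns one vector a predictive model can consume."""
    return list(structured) + text_features(note)

vec = combined_features([67.0, 28.4], "Long-time smoker reporting chest pain.")
# vec -> [67.0, 28.4, 1.0, 1.0, 0.0]
```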

Beyond refining and expanding these data applications methodologies, the VA also appears attentive to how to best leverage proprietary medical data while protecting personally identifying information.

The solution appears to lie in creating synthetic data sets that mimic the statistical parameters and overall metrics of a given data set while obscuring the particularities of the original data set it was sourced from.

"How do you make data available considering privacy and other concerns?" Alterovitz said. "One area is synthetic data, essentially looking at the statistics of the underlying data and creating a new data set that has the same statistics, but can't be identified, because it generates at the individual level a completely different data set with similar statistics."
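A minimal sketch of the synthetic-data idea, assuming a simple Gaussian model of the underlying statistics; production synthetic-data tools fit much richer models, but the principle of matching summary statistics while generating entirely new records is the same:

```python
import random
import statistics

def synthesize(real, n, seed=0):
    """Generate n synthetic values with approximately the same mean and
    standard deviation as `real`, containing none of the original
    records. Gaussian model is an assumption for this sketch."""
    mu = statistics.mean(real)
    sigma = statistics.stdev(real)
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.gauss(mu, sigma) for _ in range(n)]

real_ages = [34, 41, 55, 62, 48, 39, 71, 58]  # invented patient ages
fake_ages = synthesize(real_ages, n=1000)
```

The released `fake_ages` support the same aggregate analyses as the originals, but no entry corresponds to a real patient.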

Similarly, introducing controlled variation within a given data set can remove the possibility of identifying the patient source: "You can take the data, and then vary that information so that it's not the exact same information you received, but is maybe 20% different. This makes it so you can show it's statistically not possible to identify that given patient with confidence."
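The "maybe 20% different" idea can be sketched as bounded random perturbation of each numeric field. This is purely an illustration of the principle, not the VA's actual method:

```python
import random

def perturb(record, fraction=0.2, seed=0):
    """Nudge each numeric field by up to +/- `fraction` of its value so
    the released record stays statistically similar but no longer
    matches any patient exactly. Illustrative sketch only."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [v * (1 + rng.uniform(-fraction, fraction)) for v in record]

original = [72.0, 140.0, 5.6]   # e.g. age, systolic BP, HbA1c (invented)
released = perturb(original)
```

Each released value stays within 20% of the original, preserving aggregate statistics while breaking exact linkage to the source record.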

Going forward, the VA appears intent on solving these quandaries so as to best inform expanded AI research.

"A lot of the data we have wasn't originally designed for AI. How you make it designed and ready for use in AI is a challenge, and one that has a number of different potential avenues," Alterovitz concluded.


Hampstead Theatre to show three more plays online for free – Camden New Journal newspapers website

Hampstead Theatre shut a fortnight ago because of coronavirus

THREE productions from Hampstead Theatre are to be screened online for free.

Mike Bartlett's Wild, a 2016 play inspired by the American whistleblower Edward Snowden, can be watched from tonight (Monday) until April 5 through the theatre's website.

Beth Steel's Wonderland, a witty drama set in the 1984-85 miners' strike, will be available from 10am on Monday, April 6 until 10pm on April 12.

Howard Brenton's Drawing the Line (2013), about the chaotic partitioning of India in 1947, will follow the week after.

Artistic Director of Hampstead Theatre, Roxana Silbert, said: "I hope these productions offer audiences entertainment, connection and nourishment in a time of uncertainty and isolation. These three plays all shine a light on turbulent points in our international history which, along with acknowledging the worst of human behaviour, celebrate the ingenuity, humour, compassion and resilience of the best."

All three productions were originally live-streamed from Hampstead Theatre and were available to watch on the Guardian's website for 72 hours. The plays have been made available with the permission of the King's Cross media giant.

Hampstead Theatre, which shut on March 16 due to health advice, has already screened one of its earlier plays through Instagram.

Visit www.hampsteadtheatre.com and the Guardian website for more details.


Performance artist Brian Feldman returns to Orlando for a ‘social distancing’ version of three shows – Orlando Weekly

As coronavirus has canceled live entertainment worldwide, we've seen countless performers attempting to convert theatrical experiences into digital streams, with varying degrees of success. But if anyone might be able to capitalize creatively on this crazy cultural moment, it could be Orlando Weekly's favorite performance artist, Brian Feldman. After all, this was the guy who sealed himself inside a Skill Crane arcade machine and performed musicals over the telephone long before social distancing was a thing.

Feldman has been quiet for the first quarter of 2020, but he's returning to the virtual stage this April Fools' Day with an online triple feature. At noon, you can watch as Brian Feldman Writes His Last Will & Testament live on Facebook (facebook.com/brianfeldmanprojects), followed at 6 p.m. with a one-shot Social Distancing Dinner edition of The Feldman Dynamic, featuring his parents and sibling sharing a meal over Jitsi Meet. The evening concludes at 7:30 p.m. with the first-ever online-only presentation of #txtshow, Feldman's signature interactive performance piece. You can register free for all three "pay what you can" events at brianfeldman.com. Since we couldn't meet at a vegan restaurant, a Disney theme park or any of our other usual hangouts, Feldman emailed me these thoroughly virus-scanned replies to my questions about being a performance artist in the midst of a pandemic.

Where are you passing your "stay at home" quarantine?

I've been sleeping on the couch and hanging in the living room of Studio 6107 (the family apartment) in Sanford, where [at the time of this interview] there actually is no "stay at home" quarantine.

How have you been spending your time while stuck inside?

You know, save for the lack of daily bike rides, it hasn't been all that much different from when everyone's not at home in quarantine. I've been at the computer somewhat obsessively reading the news, scrolling through Twitter on my phone, texting and WhatsApping friends to check in and see how they're doing, listening to songs to wash your hands to, watching people adapt shows for Facebook and Zoom, falling into YouTube spirals, eating my usual one meal a day yet somehow washing more dishes than anybody else (I am the Dishwasher, after all), forgetting to take a shower some days, arguing with my Dad much more than I should (I'm sorry, Dad), and just trying my best to stay optimistic about the future. There's also a 50-inch flatscreen TV here, which I've turned on a total of one time.

What are some of the notable possessions you'll be including in your Last Will & Testament?

While it's no David Geffen yacht, it is like #48hYardSale, only with all the stuff I just could never part with. There are museum-worthy paintings, items from my childhood, boxes upon boxes of photo prints, negatives and slides; Warhol-esque time capsules and other pop culture artifacts I've probably hung onto for too long. All of my performance archives: project posters, signage (including the portable marquee that I retired after The Most Expensive Gas in America, which still has the gas prices on it), programs, tickets, props (the Orlando Weekly box I was inside of), wardrobe (most notably The Singing Menorah costume and Hannah's wedding dress from Marries Anybody: Part II), handwritten notes, hundreds of buttons and other ephemera. And, of course, The Skill Crane Kid machine. Now that belongs in a museum.

How is your family doing, and do you think social distancing will improve The Feldman Dynamic?

Early articles and reviews written about The Feldman Dynamic really played up the whole "dysfunctional family" angle. But the truth is, there is literally no way you can be a truly dysfunctional family and pull off a live theatrical presentation like this. While there've certainly been moments, many moments actually, when none of us has wanted to continue doing this "show" (in quotes, since it's a relative term), it's continued to go on. Now, the show must go online. That stated, we had my Mom on FaceTime for the sixth night of 8 Wards of Chanukah up in D.C., and people told me they hated that.

#txtshow seems tailor-made for our current moment; how do you think performing remotely will impact the experience?

I'd have to agree that the show is more relevant now than ever. But doing the show online is something I've been reluctant to do since immediately following that very first performance at the Kerouac House, when people were already encouraging me to livestream it.

My resistance has always stemmed from feeling that it's vital for the audience to be in the room where it happens, so that everyone can see, hear and react to each other. When something shocking or surprising is said, it's always beneficial to know that someone in that space with you right then and there wrote it, and made the character say it. Making it anonymous via two screens (Twitter and, in this case, Jitsi Meet) ultimately may or may not work. But I guess we'll find that out together!

Any advice for other artists interested in using Jitsi Meet for performances?

Yes. Don't do it! Stick with Zoom and leave Jitsi Meet for me and Edward Snowden.

So, in researching possible video conferencing platforms to utilize for projects during this period of #TheaterAtHome, the main thing I focused on was selecting one that'd be extremely easy to use, extremely free without a time limit, and which offered an assurance that I could hear all audio in a single source from every single person in the room at the same time. You know, like traditional theater.

Zoom, which everyone's using and I almost went with, doesn't always allow everyone to be heard clearly at once, and when on Speaker View it jumps back and forth, which I didn't think would work for The Feldman Dynamic (especially when everyone's talking simultaneously) or #txtshow (when it's really helpful to be able to hear the silence, and not just have everyone on mute). Ultimately, we'll find out if going with one of my best friend's top suggestions (Jitsi Meet) was the best choice when we do it live!

Do you have any upcoming projects or plans?

I was originally scheduled to travel to Goa from May through June to shoot another micro-budget feature with the same team with which I shot a film in Chennai, India, called Goodbye, White Guy, which has yet to be released. Ideally, if the world ever returns to normal, a notable festival will accept it and audiences will finally get to see what I look like after not shaving, eating that much, changing my clothes or taking a shower for days on end. Oh wait, that sounds like the plot of last week.

Depending on how long this thing lasts (answer: September 2021, at least), I might finally stage my long joked-about project, Brian Feldman Reads the Phone Book. Assuming I can find one. Honestly, I have no idea what I'm gonna do next. And Baruch Hashem for that!

As a theme park fan, what are you most looking forward to when the attractions reopen?

Is it too on-the-nose for me to say I'd like to visit Carousel of Progress, sing "the song" and hope that nothing breaks in the process? If it is, since I've unfortunately had to go gluten-free since my last visit to the parks, and since it doesn't look like I'm going to be spending (or making) all that much money for the foreseeable future, perhaps with my stimulus payment check I'll be able to afford one of everything from Erin McKenna's Bakery NYC at Disney Springs? If not that, then Dole Whips for everybody!

More importantly, I'm most looking forward to seeing everyone on my Facebook feed breathe a massive sigh of relief.

How many times did you wash your hands today?

More times than Lady Macbeth.

skubersky@orlandoweekly.com
