Artificial Intelligence That Can Evolve on Its Own Is Being Tested by Google Scientists – Newsweek

Computer scientists working for a high-tech division of Google are testing how machine learning algorithms can be created from scratch, then evolve naturally, based on simple math.

Experts behind Google's AutoML suite of artificial intelligence tools have now showcased fresh research which suggests the existing software could potentially be updated to "automatically discover" completely unknown algorithms while also reducing human bias during the data input process.

According to ScienceMag, the software, known as AutoML-Zero, resembles the process of evolution, with code improving every generation with little human interaction.

Machine learning tools are "trained" to find patterns in vast amounts of data while automating such processes and constantly being refined based on past experience.

But researchers say this comes with drawbacks that AutoML-Zero aims to fix. Namely, the introduction of bias.

"Human-designed components bias the search results in favor of human-designed algorithms, possibly reducing the innovation potential of AutoML," their team's paper states. "Innovation is also limited by having fewer options: you cannot discover what you cannot search for."

The analysis, which was published last month on arXiv, is titled "AutoML-Zero: Evolving Machine Learning Algorithms From Scratch" and is credited to a team working in Google's Brain division.

"The nice thing about this kind of AI is that it can be left to its own devices without any pre-defined parameters, and is able to plug away 24/7 working on developing new algorithms," Ray Walsh, a computer expert and digital researcher at ProPrivacy, told Newsweek.

As noted by ScienceMag, AutoML-Zero is designed to create a population of 100 "candidate algorithms" by combining basic random math, then testing the results on simple tasks such as image differentiation. The best performing algorithms then "evolve" by randomly changing their code.

The results, which are variants of the most successful algorithms, then get added to the general population as older and less successful algorithms are left behind, and the process repeats. The population grows significantly, in turn giving the system more novel algorithms to work with.
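To make the loop described above concrete, the sketch below shows a minimal evolutionary search of this kind. It is an illustration only, not Google's AutoML-Zero code: the instruction set, mutation rule, and fitness function are placeholder assumptions.

```python
import random

OPS = ["add", "sub", "mul", "mean", "dot"]  # stand-ins for basic math operations
POPULATION_SIZE = 100                        # the "candidate algorithms" mentioned above

def random_algorithm():
    """A candidate algorithm, represented here as a random list of basic operations."""
    return [random.choice(OPS) for _ in range(random.randint(3, 10))]

def mutate(algorithm):
    """Randomly change one instruction, mimicking the 'random code change' step."""
    child = list(algorithm)
    child[random.randrange(len(child))] = random.choice(OPS)
    return child

def fitness(algorithm):
    """Score the candidate on a simple task (placeholder for e.g. image-classification accuracy)."""
    return random.random()

population = [random_algorithm() for _ in range(POPULATION_SIZE)]
for generation in range(1000):
    sample = random.sample(population, 10)   # compare a small random group
    parent = max(sample, key=fitness)        # keep its best performer
    population.append(mutate(parent))        # its mutated variant joins the population
    population.pop(0)                        # the oldest candidate is left behind
```

The real system evaluates candidates on actual learning tasks and mutates genuine programs, but the keep-the-best, mutate, discard-the-oldest cycle has the same shape.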

Haran Jackson, the chief technology officer (CTO) at Techspert, who has a PhD in Computing from the University of Cambridge, told Newsweek that AutoML tools are typically used to "identify and extract" the most useful features from datasets, and this approach is a welcome development.

"As exciting as AutoML is, it is restricted to finding top-performing algorithms out of the, admittedly large, assortment of algorithms that we already know of," he said.

"There is a sense amongst many members of the community that the most impressive feats of artificial intelligence will only be achieved with the invention of new algorithms that are fundamentally different to those that we as a species have so far devised.

"This is what makes the aforementioned paper so interesting. It presents a method by which we can automatically construct and test completely novel machine learning algorithms."

Jackson, too, said the approach taken was similar to the process of natural selection first described by Charles Darwin, noting how the Google team was able to introduce "mutations" into the set of algorithms.

"The mutated algorithms that did a better job of solving real-world problems were kept alive, with the poorly-performing ones being discarded," he elaborated.

"This was done repeatedly, until a set of high-performing algorithms was found. One intriguing aspect of the study is that this process 'rediscovered' some of the neural network algorithms that we already know and use. It's extremely exciting to see if it can turn up any algorithms that we haven't even thought of yet, the impact of which to our daily lives may be enormous." Google has been contacted for comment.

The development of AutoML was previously praised by Alphabet's CEO Sundar Pichai, who said it had been used to improve an algorithm that could detect the spread of breast cancer to adjacent lymph nodes. "It's inspiring to see how AI is starting to bear fruit," he wrote in a 2018 blog post.

The Google Brain team members who collaborated on the paper said the concepts in the most recent research were a solid starting point, but stressed that the project is far from over.

"Starting from empty component functions and using only basic mathematical operations, we evolved linear regressors, neural networks, gradient descent... multiplicative interactions. These results are promising, but there is still much work to be done," the scientists' preprint paper noted.

Walsh told Newsweek: "The developers of AutoML-Zero believe they have produced a system that has the ability to output algorithms human developers may never have thought of.

"According to the developers, due to its lack of human intervention AutoML-Zero has the potential to produce algorithms that are more free from human biases. This theoretically could result in cutting-edge algorithms that businesses could rely on to improve their efficiency.

"However, it is worth bearing in mind that for the time being the AI is still proof of concept and it will be some time before it is able to output the complex kinds of algorithms currently in use. On the other hand, the research [demonstrates how] the future of AI may be algorithms produced by other machines."

Should I Stay or Should I Go? Artificial Intelligence (And The Clash) has the Answer to Your Employee Access Dilemma. – Security Boulevard

What happens when employees have access to data, apps or services that they shouldn't? Best case scenario: they might know the salaries of all their colleagues and company execs. Worst case scenario: malicious actors exploit that access and extract sensitive business data, causing millions of dollars in damage and irreparable harm to brand reputation.

In past blogs, I wrote how security starts with protecting users and that by verifying the user we greatly reduce the attack surface from all humans to just those you actually trust (aka your employees). I also wrote that we want to make sure every device is being used in a secure manner. In other words, by validating every device, we reduce the attack surface even more by limiting the devices that gain access from billions of computers, phones, or tablets to just the select few in the user's possession.

Verifying users and validating devices represent steps one and two on the road to Zero Trust. But while this combination drastically improves security posture, more layers are necessary to eliminate the risk of fraudulent access. Just because a person is who they say they are and is using a trusted device doesn't mean that they should have broad access rights beyond what they need to do their job. Whether by accident or malicious intent, insiders can still misuse their access or share access with people whom they shouldn't.

To stop this from happening, you need to vastly reduce the risk associated with the access rights each user has. We do this by limiting user access (even for verified users and validated devices) to only those apps and resources that they need to do their job, and only when they specifically need them. This is step number three, which completes the trinity of a Zero Trust security approach: verify every user, validate their devices, and intelligently limit their access.
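As a rough illustration of those three steps working together, the sketch below combines user verification, device validation, and scoped, time-limited access in a single decision function. The data model, role table, and business-hours window are hypothetical; real Next-Gen Access products express this as policy, not application code.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class AccessRequest:
    user_verified: bool        # step 1: identity proven (e.g. MFA passed)
    device_validated: bool     # step 2: device is trusted and healthy
    requested_app: str
    user_role: str
    request_time: datetime

# Hypothetical policy: which roles may reach which apps, and when.
ROLE_ENTITLEMENTS = {
    "marketing": {"crm", "analytics"},
    "engineering": {"source_control", "ci"},
}
BUSINESS_HOURS = (time(7, 0), time(19, 0))

def allow(request: AccessRequest) -> bool:
    """Verify the user, validate the device, then intelligently limit access."""
    if not (request.user_verified and request.device_validated):
        return False
    allowed_apps = ROLE_ENTITLEMENTS.get(request.user_role, set())
    if request.requested_app not in allowed_apps:
        return False
    start, end = BUSINESS_HOURS
    return start <= request.request_time.time() <= end
```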

Companies typically grant access to necessary apps and resources as they onboard employees. When an employee moves on, either up the ranks or out the door, we tend to forget about those original grants. We're all guilty of this. For example, I'm now head of marketing at Idaptive, so I shouldn't have access to our product source code the same way I did back when I was a product manager. The accumulation of access to data, apps, and services creates serious risks. Instead, we must tailor that access to just what a person needs for the job they perform today and automatically remove that access when they leave.

That's easier said than done for IT teams (and sometimes HR) who historically had to manually provision and deprovision users or at least manually write the rules for role-based access control programs. Someone had to tell IT that an employee's role had changed, and then IT would have to figure out how that relates to the access that they should or shouldn't have. We often refer to this process as lifecycle management, and provisioning is just one piece of this mammoth responsibility that enterprise teams are tasked with managing.

The role of lifecycle management in the Zero Trust model is critically important because it determines who has which rights on which systems and applications. You can ensure that a user only has access to what he needs to do his job, create reliable reports, and audit those rights at any given time.

IT staff knows that accounts are difficult to manage because:

Some form of automation and automatic deprovisioning is required. Combining self-service, workflow, and provisioning automation can ensure that users only receive the access they need, help them be productive quickly, and automatically remove their access as their roles change or when they leave the company.
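A minimal sketch of that kind of automation is shown below: the grants a user should hold are recomputed from their current role on every HR event, so access is added when a role needs it and revoked when it no longer does. The role-to-entitlement table and the grant/revoke calls are placeholders for whatever directory or identity system is actually in use.

```python
# Hypothetical role-based provisioning: entitlements follow the user's current role.
ROLE_ACCESS = {
    "product_manager": {"source_code", "roadmap_tool"},
    "marketing_lead": {"crm", "web_analytics"},
    "terminated": set(),  # leaving the company removes everything
}

def reconcile(user: str, new_role: str, current_grants: set[str]) -> set[str]:
    """Return the grants the user should have, revoking anything the new role doesn't need."""
    desired = ROLE_ACCESS.get(new_role, set())
    for app in current_grants - desired:
        print(f"revoke {app} from {user}")   # stand-in for a deprovisioning API call
    for app in desired - current_grants:
        print(f"grant {app} to {user}")      # stand-in for a provisioning API call
    return desired

# Example: a product manager becomes head of marketing and loses source code access.
grants = reconcile("alice", "marketing_lead", {"source_code", "roadmap_tool"})
```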

Even if you don't have hands-on experience with lifecycle management, it's not hard to see how this spreadsheet-style or "swivel chair" approach to provisioning can snowball into something both time-consuming and error-prone, leading to an accumulation of access over time. And when employees have access to things they shouldn't, attackers know that a simple phishing attempt is all it takes to gain insider access and wreak havoc on business systems.

If you're saying right now that there has to be a secure, more efficient and maybe even automated way to do this, you'd be right. The answer lies within a Zero Trust approach powered by Next-Gen Access identity technology.

With Provisioning and Lifecycle Management you can enable users to request access to applications from the app catalog of pre-integrated applications, provide specific users the ability to approve or reject these access requests, and automatically create, update, and deactivate accounts based on roles in your user directory. Provisioning enables users to be productive on day one with the appropriate access, authorization, and client configuration across their devices.

Lifecycle Management should also seamlessly import identities from your preferred HR system or application, including Workday, UltiPro, BambooHR, or SuccessFactors, and provision them (typically) to Active Directory. This enables you to unify your provisioning and HR workflows and have an HR-driven primary system of record for user data across all your applications.

By way of example, with Active Directory (AD) synchronization for Microsoft Office 365, you can keep your AD accounts and Office 365 accounts in sync and automatically provision and deprovision user accounts, groups, and group memberships to simplify Office 365 license management.

Lifecycle Management not only can save IT teams a great deal of time and frustration, but it can ultimately save companies from crippling data breaches. Such is the power of intelligently limiting access as part of a Zero Trust framework.

Insilico enters into a research collaboration with Boehringer Ingelheim to apply novel generative artificial intelligence system for discovery of…

HONG KONG, April 14, 2020 /PRNewswire/ -- Insilico Medicine is pleased to announce that it has entered into a research collaboration with Boehringer Ingelheim to utilize Insilico's generative machine learning technology and proprietary Pandomics Discovery Platform with the aim of identifying potential therapeutic targets implicated in a variety of diseases.

"Insilico Medicine is very impressed with the Research Beyond Borders group at Boehringer Ingelheim capabilities in the search of potential drug targets. In this collaboration, Insilico will provide additional AI capabilities to discover novel targets for a variety of diseases to benefit the patients worldwide. We are very happy to partner with such an advanced group," said Alex Zhavoronkov, PhD, founder, and CEO of Insilico Medicine.

"We believe that Insilico's exclusive Pandomics platform will provide huge boost to our ability to explore and identify drug targets. We look forward to using AI to significantly improve the drug discovery process and contribute to human health," said from Dr. Weiyi Zhang, Head of External Innovation Hub, Boehringer Ingelheim GreaterChina.

In September 2019, Insilico Medicine announced a $37 million round led by prominent biotechnology and AI investors.

About Insilico Medicine

Since 2014, Insilico Medicine has focused on generative models, reinforcement learning (RL), and other modern machine learning techniques for the generation of new molecular structures with specified parameters, generation of synthetic biological data, target identification, and prediction of clinical trial outcomes. Since its inception, Insilico Medicine has raised over $52 million, published over 70 peer-reviewed papers, applied for over 20 patents, and received multiple industry awards.

Website: http://insilico.com/

Media Contact
For further information, images or interviews, please contact: ai@insilico.com

About Boehringer Ingelheim

Improving the health of humans and animals is the goal of the research-driven pharmaceutical company Boehringer Ingelheim. The focus in doing so is on diseases for which no satisfactory treatment option exists to date. The company therefore concentrates on developing innovative therapies that can extend patients' lives. In animal health, Boehringer Ingelheim stands for advanced prevention.

Family-owned since it was established in 1885, Boehringer Ingelheim is one of the pharmaceutical industry's top 20 companies. Some 50,000 employees create value through innovation daily across the three business areas of human pharmaceuticals, animal health and biopharmaceuticals. In 2018, Boehringer Ingelheim achieved net sales of around 17.5 billion euros. R&D expenditure of almost 3.2 billion euros corresponded to 18.1 per cent of net sales.

As a family-owned company, Boehringer Ingelheim plans in generations and focuses on long-term success. The company therefore aims at organic growth from its own resources with simultaneous openness to partnerships and strategic alliances in research. In everything it does, Boehringer Ingelheim naturally adopts responsibility towards mankind and the environment.

More information about Boehringer Ingelheim can be found on http://www.boehringer-ingelheim.com or in our annual report: http://annualreport.boehringer-ingelheim.com

LucidHealth and Riverain Technologies Are Committed to the Delivery of Advanced Radiology Through Artificial Intelligence – BioSpace

MIAMISBURG, Ohio--(BUSINESS WIRE)-- LucidHealth, a physician-owned and led radiology company, announced today that it is using the FDA-cleared ClearRead CT by Riverain Technologies, an artificial intelligence (AI) imaging software solution for the early detection of lung disease. LucidHealth is one of the first radiology companies in the Midwest to incorporate AI through its partnership with Riverain Technologies.

"LucidHealth is committed to advancing the quality of community radiology patient care by combining leading radiologist expertise with cutting-edge artificial intelligence. Riverain's ClearRead in combination with LucidHealth's RadAssist workflow is just such an example," said Peter Lafferty, M.D., Chief of Physician Integration at LucidHealth.

"We are proud to be working with LucidHealth as an AI vendor," said Steve Worrell, CEO at Riverain Technologies. "Our ClearRead CT suite allows LucidHealth radiologists to provide quicker, more accurate readings, to work even more efficiently and to generate higher-quality reports for better patient outcomes."

Riverain Technologies designs advanced AI imaging software used by leading international healthcare organizations. Riverain ClearRead solutions significantly improve a clinician's ability to accurately and efficiently detect disease in thoracic CT and X-ray images and more successfully address the challenges of early detection of lung disease. Powered by machine learning and advanced modeling, the patented, FDA-cleared ClearRead software tools are deployed in the clinic or the cloud and are powered by the most advanced AI methods available to the medical imaging market.

About LucidHealth:

LucidHealth is a physician-owned and led radiology management company. We partner with radiology groups to provide the technology and resources to increase the strategic value of their practices nationwide. Our belief is that all patients should have access to the highest quality of subspecialized imaging care, regardless of facility size or location. Our mission is to empower independent radiology groups to deliver world-class, subspecialized care to all patients within the communities they serve. For more information, please visit http://www.lucidhealth.com.

About Riverain Technologies:

Dedicated to the early detection of lung disease, Riverain believes the opportunities for machine learning and software solutions in healthcare are at an unprecedented level. Never before has the opportunity to do more with less been so great. We believe that these software tools incorporate an increasing degree of intelligence that will facilitate decision making which leads to greater efficiency and effectiveness in patient outcomes. Riverain Technologies is excited to be part of the advances in machine learning and scalability of technology that will bring efficiency and accuracy to physicians and, ultimately, improved patient care. For more information, please visit https://www.riveraintech.com/

New Bright Pattern AI Survey Finds 78% of Companies Have or Plan to Deploy AI In Their Call Center – Associated Press

Press release content from PR Newswire. The AP news staff was not involved in its creation.

SOUTH SAN FRANCISCO, Calif., April 14, 2020 /PRNewswire/ -- Adoption of artificial intelligence continues to increase in U.S. contact centers. According to Canam Research, 78% of contact centers in the U.S. report plans to deploy artificial intelligence in their contact center in the next 3 years, with an overwhelming number (97%) of survey respondents planning to use artificial intelligence to support agents as opposed to 7% who plan to use AI to replace some or all of their current call center staff. Top uses of artificial intelligence include bots, self-service, and AI for quality management.

These insights stem from a survey sponsored by Bright Pattern, the leading provider of AI-powered omnichannel cloud contact center software for innovative enterprises. The survey examined the current state of U.S. contact centers' usage of, and preferences around, artificial intelligence in the contact center. Bright Pattern surveyed companies of all sizes and industries in the 2020 Contact Center AI Benchmark Trend Report.

Survey Respondents' Top Goals for Implementing AI:

"Everyone has been talking about AI for improving the customer experience, but few companies know where to start," said Ted Hunting, Senior Vice President Marketing, Bright Pattern. "We conducted this research to better understand what customers need. It resulted in the creation of our BrightStart Solution Packs for AI, which help customers immediately deploy AI in their contact centers."

Call Center AI Key Findings:

Find out more about the current state of AI in the contact center by downloading the 2020 Contact Center AI Benchmark Trend Report.

Survey Methodology

Bright Pattern commissioned third-party research consultancy Canam to conduct an online survey of over 300 U.S. contact center executives from a total pool of 14 industry categories.

Bright Pattern announced initial customer experience survey findings in April and will continue to release additional insights in the coming months. For more details about the survey methodology or to receive a free copy of the report, contact the Bright Pattern media relations team at marketing@brightpattern.com.

About Bright Pattern

Bright Pattern provides the simplest and most powerful AI-powered contact center for innovative midsize and enterprise companies. With the purpose of making customer service brighter, easier, and faster than ever before, Bright Pattern offers the only true omnichannel cloud platform with embedded AI that can be deployed quickly and nimbly by business users, without costly professional services. Bright Pattern allows companies to offer an effortless, personal, and seamless customer experience across channels like voice, text, chat, email, video, messengers, and bots. Bright Pattern also allows companies to measure and act on every interaction on every channel via its embedded AI omnichannel quality management capability. The company was founded by a team of industry veterans who pioneered the leading contact center solutions and today are delivering architecture for the future with an advanced cloud-first approach. Bright Pattern's cloud contact center solution is used globally in over 26 countries and 12 languages.

SOURCE Bright Pattern

Artificial Intelligence at UBS - Current Applications and Initiatives – Emerj

UBS is a Swiss multinational investment banking and financial services company ranked 30th on S&P Global's list of the top 100 banks. In addition to investment banking and wealth management, the company is looking to improve its tech stack through several AI projects.

Our AI Opportunity Landscape research in financial services uncovered the following three AI initiatives at UBS:

We begin our coverage of UBS' AI initiatives with their project for a virtual financial assistant for their banking clients.

UBS partnered with IBM and Digital Humans (formerly FaceMe) to create a virtual financial assistant for its customers. The virtual assistant is a conversational interface built with IBM's Watson Natural Language Understanding solution. Watson runs primarily on natural language processing technology, which is an approach to AI that enables the extraction and analysis of written text and human speech. Digital Humans provided the 3D character model for the avatar, which represents the assistant on-screen.
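For readers curious what a call to Watson Natural Language Understanding looks like, the sketch below uses IBM's ibm-watson Python SDK to analyze a customer utterance. The API key, service URL, and sample text are placeholders, and this is an assumption about tooling, not a description of UBS's actual integration.

```python
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import Features, EntitiesOptions, KeywordsOptions
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
nlu = NaturalLanguageUnderstandingV1(version="2021-08-01", authenticator=authenticator)
nlu.set_service_url("https://api.us-south.natural-language-understanding.watson.cloud.ibm.com")  # placeholder

# Analyze a customer utterance the way a virtual assistant front end might.
response = nlu.analyze(
    text="I lost my credit card yesterday and need a replacement.",
    features=Features(entities=EntitiesOptions(), keywords=KeywordsOptions()),
).get_result()
print(response["keywords"])  # extracted terms the assistant could route on
```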

The video below explains how Watson Natural Language Understanding works:

UBS developed two distinct digital avatars. One avatar, named Fin, is built for managing simple tasks such as helping a customer cancel and replace a credit card. The second avatar, Daniel, can purportedly answer investment questions. IBM claims Watson affords UBS the following capabilities:

UBS also started an internal initiative with the goal of solving liquidity issues within foreign exchange using machine learning. In 2018, the bank announced its ORCA direct solution, which purportedly helped its employees execute foreign exchange transactions more quickly.

The bank's software could automatically decide the best digital channel by which to execute a foreign exchange deal. This may save the bank a significant amount of time, as it would be particularly difficult to optimize for a bank with access to so many separate trading channels.

Additionally, these platforms may run on different pricing metrics, and banks may incur certain fees depending on the type of trade they are making. UBS updated the solution to ORCA Pro in 2019, which it claims can now act as a single-dealer platform.

This platform is linked to UBS' optimization engine, which helps reduce the disparity between the expected price and the price at which a trade is executed. For example, if a given deal is made weeks after a UBS financial advisor last spoke to the client, ORCA Pro might be able to discern that the bid/ask spread for the deal has fluctuated without either party noticing.
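A toy version of that channel-selection logic might look like the sketch below, which scores each venue by its quoted spread plus fees and routes the deal to the cheapest one. The scoring rule and the venue data are illustrative assumptions; UBS has not published how ORCA ranks channels.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    bid: float          # quoted bid for the currency pair
    ask: float          # quoted ask
    fee_bps: float      # venue fee in basis points

def execution_cost(channel: Channel, notional: float) -> float:
    """Estimated cost of routing the deal to this venue: half-spread plus fees."""
    mid = (channel.bid + channel.ask) / 2
    half_spread = (channel.ask - channel.bid) / 2 / mid
    return notional * (half_spread + channel.fee_bps / 10_000)

def best_channel(channels: list[Channel], notional: float) -> Channel:
    return min(channels, key=lambda c: execution_cost(c, notional))

venues = [
    Channel("ECN-A", bid=1.0840, ask=1.0843, fee_bps=0.2),
    Channel("Bank-B", bid=1.0841, ask=1.0845, fee_bps=0.0),
]
print(best_channel(venues, notional=5_000_000).name)
```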

UBS claims their ORCA Direct and Pro solutions provide the following capabilities to their staff:

UBS' third AI initiative is their partnership with vendor Attivio to develop an NLP-enabled search engine for their wealth management, asset management, and investment banking services. Attivio refers to this NLP-based solution as cognitive search, which can be understood as an AI-powered enterprise search application.
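As a rough picture of how such a search differs from plain keyword lookup, the sketch below ranks documents by vector similarity to the query using TF-IDF from scikit-learn. It is a stand-in only; Attivio's product layers machine-learned relevance, entity extraction, and enterprise connectors on top of ideas like this.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Quarterly wealth management performance review for high-net-worth clients",
    "Asset management fee schedule and fund prospectus",
    "Investment banking deal pipeline summary",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)

def search(query: str, top_k: int = 2):
    """Rank documents by similarity to the query instead of exact keyword match."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vectors)[0]
    ranked = sorted(zip(scores, documents), reverse=True)
    return ranked[:top_k]

for score, doc in search("client portfolio performance"):
    print(f"{score:.2f}  {doc}")
```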

The short, 1-minute video below explains how machine learning can enable enterprise search and provide context for more detailed results:

The vendor claims UBS developed this application to facilitate the following capabilities:

Financial services companies need to understand what their competitors are doing with AI if they hope to compete in the same domains and win the customers their competitors are trying to court with more convenient experiences and more financially lucrative wealth management services.

Leaders at large financial services companies use Emerj AI Opportunity Landscapes to discover where AI can bring powerful ROI in areas like wealth and asset management, customer service, fraud detection, and more, so they can win market share well into the future. Learn more about Emerj Research Services.

Header Image Credit: UBS

Artificial intelligence used to measure impact of Coronavirus on American construction – News – GCR

Camera firm OxBlue has used artificial intelligence (AI) to analyze construction site data and determine the drop in construction productivity across America due to the Coronavirus pandemic.

OxBlue is using data from commercial construction projects, which excludes single-family residential construction.

Using near real-time field data and comparing it to previous activity across all 50 states, OxBlue has determined that US construction declined by 5% throughout March 2020, based on a weighted average of the construction volume for each state.
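The weighted average referred to above can be made concrete with a short calculation: each state's percentage change is weighted by its share of national construction volume, so a steep drop in a small state moves the headline figure less than a modest drop in a large one. The state figures below are invented placeholders, not OxBlue's data.

```python
# Hypothetical (state, share of national construction volume, % change in activity) rows.
states = [
    ("New York",     0.12, -43.0),
    ("Pennsylvania", 0.05, -77.0),
    ("Texas",        0.15,  +2.0),
    ("Other states", 0.68,  +1.5),
]

weighted_change = sum(share * change for _, share, change in states) / \
                  sum(share for _, share, _ in states)
print(f"National change in construction activity: {weighted_change:.1f}%")
```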

The analysis found:

The two states with the most severe decline in activity were subject to Covid-19 quarantine restrictions: Pennsylvania's order to close non-life-sustaining businesses meant building work was reduced by 77%, and Michigan experienced a 74% drop in construction work after ordering residents to stop work on March 23rd.

Twelve states that had yet to issue Coronavirus restrictions saw an increase in productivity.

States with high construction output also saw large declines in activity, such as a 43% decline in New York, which ordered all non-essential construction to stop, and a 57% drop in building work in Massachusetts.

OxBlue found that construction in America's northeast declined the most, by 34%.

Images courtesy of OxBlue

Artificial Intelligence and the Insurer – Lexology

No longer used solely by innovative technology companies, AI is now of strategic importance to more risk-averse sectors such as healthcare, retail banking, and even insurance. Built upon DAC Beachcroft's depth of experience in advising across the insurance market, this article explores a few ways in which artificial intelligence is changing the insurance industry.

How might AI change insurance?

Artificial intelligence (AI) is an increasingly pervasive aspect of modern life, thanks to its role in a wide variety of applications. The technological advancement and applicability of AI systems have exploded due to cheaper data storage costs, increased computing resources, and an ever-growing output of and demand for consumer data. As such, we expect to see change in several critical aspects of the insurance industry.

Of course, it is important to note that insurance is a large and complex industry. Even in light of the perceived advantages discussed above, insurers may not always find it easy to integrate AI within products or backend systems. A Capgemini survey revealed that as of 2018, only 2 per cent of insurers worldwide have seen full-scale implementation of AI within their business, with a further 34% still in ideation stages. Furthermore, there are important ethical considerations which have yet to be addressed, with critics warning that AI could lead to detrimental outcomes, especially in relation to personal data privacy and hyper-personalised risk assessments. While more work needs to be done to understand the various implications of AI in insurance, it nevertheless remains an important and fascinating space to watch.

Eyenuk Successfully Fulfills Contract Awarded by Public Health England for Artificial Intelligence Grading of Retinal Images – BioSpace

60,000 Patient Image Sets from 6 Different Diabetic Eye Screening Programmes Analyzed Using EyeArt AI Eye Screening System

LOS ANGELES--(BUSINESS WIRE)-- Eyenuk, Inc., a global artificial intelligence (AI) medical technology and services company and the leader in real-world applications for AI Eye Screening, announced that it has successfully fulfilled the contract awarded by Public Health England (PHE) to use Eyenuk's EyeArt AI Eye Screening System to grade 60,000 patient image sets from 6 different National Health Service (NHS) Diabetic Eye Screening Programmes in England.

Diabetic retinopathy (DR) is a vision-threatening complication of diabetes and a leading cause of preventable vision loss globally.1 In England, an estimated 4.6 million people are living with diabetes, one-third of whom are at risk of developing DR. Diabetes has become a growing health concern as the number of people diagnosed with diabetes in the U.K. has more than doubled in the last 20 years.2

The U.K. has been leading the world in diabetic retinopathy screening, achieving patient uptake rates of over 80% (screening nearly 2.5 million diabetes patients annually),3 as compared with most parts of the world where typically less than half of diabetes patients receive annual eye screening.4 As a result, diabetic retinopathy is no longer the leading cause of blindness in the working age group in England.5 However, the growing diabetes population poses significant challenges ahead.

Public Health England (PHE) is an executive agency of the Department of Health and Social Care (DH) that oversees the NHS national health screening programmes. An independent Health Technology Assessment from the Moorfields Eye Hospital to determine the screening performance and cost-effectiveness of multiple DR detection AI solutions was conducted and published in 2016.6 Subsequently, PHE initiated a tender process seeking to commission an automated retinal image grading software to grade 60,000 patient image sets from multiple diabetic eye screening programmes.

At the end of the competitive tender process, the contract was awarded to Eyenuk.7 The National Diabetic Eye Screening Programme (NDESP) identified 6 local diabetic eye screening (DES) programmes to participate in the project with Eyenuk. The project aim was to compare the number of image sets categorised as having no disease, as determined by human graders (manual programme grading), with the number as determined by the EyeArt AI eye screening system. Results from this latest real-world analysis, together with results from previous assessments, have shown that the EyeArt system has excellent agreement with manual grading, as well as high sensitivity and specificity for detecting diabetic retinopathy.
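Agreement, sensitivity, and specificity in a comparison like this come straight from a 2x2 table of AI decisions against the manual-grading reference. The sketch below shows the arithmetic on invented counts that happen to sum to 60,000 image sets; the actual PHE results are not reproduced here.

```python
# Hypothetical counts comparing AI output to manual programme grading (the reference).
true_positive  = 4_700   # disease per reference, flagged by AI
false_negative =   120   # disease per reference, missed by AI
true_negative  = 48_000  # no disease per reference, cleared by AI
false_positive = 7_180   # no disease per reference, flagged by AI

sensitivity = true_positive / (true_positive + false_negative)
specificity = true_negative / (true_negative + false_positive)
agreement = (true_positive + true_negative) / (
    true_positive + false_negative + true_negative + false_positive
)
print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} agreement={agreement:.1%}")
```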

"Eyenuk was honored to have been awarded the PHE contract for diabetic retinopathy grading, and we are gratified that our EyeArt AI system delivered excellent results when compared with six DES programmes in England," said Kaushal Solanki, Ph.D., founder and CEO of Eyenuk. "We look forward to expanding our work in the U.K. with hope to support all diabetic eye screening programmes in the future."

The independent Health Technology Assessment (HTA) from Moorfields Eye Hospital, involving more than 20,000 patients, was conducted to determine the screening performance and cost-effectiveness of multiple automated retinal image analysis systems. This study demonstrated that the EyeArt AI System delivered much higher sensitivity (i.e., patient safety) for DR screening than the other automated DR screening technologies investigated, and that its use is a cost-effective alternative to the current, purely manual grading approach. The HTA also demonstrated that EyeArt performance was not affected by ethnicity, gender, or camera type.

About the EyeArt AI Eye Screening System

The EyeArt AI Eye Screening System provides fully automated DR screening, including retinal imaging, DR grading on international standards and the option of immediate reporting, during a diabetic patient's regular office visit. Once the patient's fundus images have been captured and submitted to the EyeArt AI System, the DR screening results are available in a PDF report in less than 60 seconds.

The EyeArt AI System was developed with funding from the U.S. National Institutes of Health (NIH) and is validated by the U.K. National Health Service (NHS). The EyeArt AI System has CE marking as a class IIa medical device in the European Union and a Health Canada license. In the U.S., the EyeArt AI System is limited by federal law to investigational use. It is designed to be General Data Protection Regulation (GDPR) and Health Insurance Portability and Accountability Act of 1996 (HIPAA) compliant.

VIDEO: Learn more about the EyeArt AI Eye Screening System for Diabetic Retinopathy

About Eyenuk, Inc.

Eyenuk, Inc. is a global artificial intelligence (AI) medical technology and services company and the leader in real-world AI Eye Screening for autonomous disease detection and AI Predictive Biomarkers for risk assessment and disease surveillance. Eyenuk's first product, the EyeArt AI Eye Screening System, is the most extensively validated AI technology for autonomous detection of DR. Eyenuk is on a mission to screen every eye in the world to ensure timely diagnosis of life- and vision-threatening diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, stroke risk, cardiovascular risk and Alzheimer's disease. Find Eyenuk online on its website, Twitter, Facebook, and LinkedIn.

http://www.eyenuk.com

1 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4657234/
2 https://www.diabetes.org.uk/about_us/news/diabetes-prevalence-statistics
3 https://www.gov.uk/government/publications/diabetic-eye-screening-2016-to-2017-data
4 K. Fitch, T. Weisman, T. Engel, A. Turpcu, H. Blumen, Y. Rajput, and P. Dave. Longitudinal commercial claims-based cost analysis of diabetic retinopathy screening patterns. Am Health Drug Benefits. 2015;8(6):300-308.
5 G. Liew, M. Michaelides, C. Bunce. A comparison of the causes of blindness certifications in England and Wales in working age adults (16-64 years), 1999-2000 with 2009-2010. BMJ Open Vol. 4 (2014), No. 2
6 Adnan Tufail, Venediktos V Kapetanakis, Sebastian Salas-Vega, Catherine Egan, Caroline Rudisill, Christopher G Owen, Aaron Lee, et al. An Observational Study to Assess If Automated Diabetic Retinopathy Image Assessment Software Can Replace One or More Steps of Manual Imaging Grading and to Determine Their Cost-Effectiveness. Health Technology Assessment 20, no. 92 (December 2016). https://doi.org/10.3310/hta20920
7 https://www.contractsfinder.service.gov.uk/Notice/13b069bd-97b4-40b6-ac66-337d1526d1e6

COVID-19 and privacy: artificial intelligence and contact tracing in combatting the pandemic – Lexology

COVID-19 is having a debilitating effect on people's health and their economic well-being. People are being forced by social distancing/isolating edicts and provincial emergency closure orders to stay home. As we appear to be slowly emerging from the first wave of this health and economic emergency, people are rightly asking how we can gradually start to re-open the economy and resume semblances of normalcy without triggering substantial negative health rebounds or violating privacy norms or rights.

Governments, medical practitioners, researchers, policy-makers and others have been feverishly pursuing solutions to this challenge. Medical solutions such as vaccines and treatment methods including the use of antibodies and experimental medications such as placenta-based cell-therapy are being pursued with understandable urgency. Testing for COVID-19 and persons with COVID-19 antibodies to identify lower risk groups of individuals for whom the emergency measures could be relaxed is an obvious strategy being debated. German researchers are planning to introduce immunity certificates which theoretically could be used to identify some of these individuals. So far these conversations about testing have focused only on voluntary and not mandatory testing for the virus thus not implicating privacy concerns, at least insofar as the testing results are used only for diagnosing and treating the individuals tested.

Artificial intelligence solutions

Artificial intelligence technologies are being used in varied ways to combat the pandemic. For example, AI has been used to identify and track the spread of the virus. A Canadian company, BlueDot, was among the first in the world to identify the emerging risk from COVID-19 in Hubei province and to publish a first scientific paper on COVID-19, accurately predicting its global spread using its proprietary models. AI technologies such as chatbots are being used as virtual assistants to provide information about the virus. AI is also being used to help diagnose the disease, including via the use of diagnostic robots, to predict which patients will likely develop severe symptoms requiring treatment, to develop drugs, and to find cures, including through searches for clues buried in heaps of scientific literature. Data-mining operations have been conducted on large datasets to build predictive computer models to provide real-time information about health services, showing where demand is rising and where critical equipment needs to be deployed. AI has also found uses in monitoring for crowd formations to help enforce social distancing rules. Some of these uses raise privacy compliance issues as they involve, amongst other things, the collection, use, aggregation, analysis and disclosure to third parties of datasets that may or may not include de-identified or re-identifiable data.

Other uses of AI for tracking and public surveillance purposes also raise privacy compliance issues and, depending on who is conducting these activities and for what purposes, issues under the Canadian Charter of Rights and Freedoms. Examples include tracking and surveillance using location data stored on or generated by smartphones, scanning public spaces for potentially affected people using fever-detecting infrared cameras, and facial recognition and other computer vision surveillance technologies.

Contact tracing solutions

A solution that is increasingly being relied upon is COVID-19 contact tracing. Public Health Ontario defined contact tracing, in an online notice linking to a Government of Canada website portal soliciting volunteers for the National COVID-19 Volunteer Recruitment Campaign, as "a process that is used to identify, educate and monitor individuals who have had close contact with someone who is infected with a virus. These individuals are at a higher risk of becoming infected and sharing the virus with others. Contact tracing can help the individuals understand their risk and limit further spread of the virus."

Contact tracing as an epidemic control measure is not new. It is infectious disease control 101, often deployed against other illnesses such as measles, SARS, typhoid, meningococcal disease and sexually transmitted infections like AIDS. The use of smartphones and various other technologies to help identify and trace individuals with various diseases has also been proposed in connection with other diseases such as Ebola.

Contact tracing using location tracking capabilities to combat COVID-19 has already been implemented in other countries such as South Korea and Taiwan. It has also been deployed in China using a plugin app to the ubiquitous WeChat and Alipay apps. Use of the app was not compulsory in general, but was required to move between certain areas and public spaces. A central database collected user data, which was analyzed using AI tools.

Singapore deployed its TraceTogether mobile application to enable community-driven contact tracing, where participating devices exchange proximity information whenever the app detects another device with TraceTogether installed. It uses Bluetooth Relative Signal Strength Indicator (RSSI) readings between devices across time to approximate the proximity and duration of an encounter between two users. This proximity and duration information is stored in encrypted form on a person's phone for 21 days on a rolling basis. No location data is collected. If a person unfortunately falls ill with COVID-19, the Ministry of Health (MOH) would work with the individual to map out 14 days' worth of activity for contact tracing. And if the person has the TraceTogether app installed, he/she is required by law to assist in the mapping of his/her movements and interactions and may be asked to produce any document or record in his/her possession, including data stored by any apps on the person's phone.
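A simplified model of that on-device log is sketched below: each detected device is recorded with its Bluetooth signal strength, and entries older than 21 days are dropped on a rolling basis. Encryption, the BlueTrace protocol details, and the RSSI-to-proximity calibration are omitted; the values shown are assumptions.

```python
from collections import deque
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)   # rolling window described above

class EncounterLog:
    def __init__(self):
        self.entries = deque()

    def record(self, peer_id: str, rssi_dbm: int, seen_at: datetime):
        """Store an encounter; RSSI is a rough proxy for proximity (stronger = closer)."""
        self.entries.append({"peer": peer_id, "rssi": rssi_dbm, "time": seen_at})
        self.prune(seen_at)

    def prune(self, now: datetime):
        """Drop anything older than the 21-day retention window."""
        while self.entries and now - self.entries[0]["time"] > RETENTION:
            self.entries.popleft()

log = EncounterLog()
log.record("anon-device-42", rssi_dbm=-62, seen_at=datetime.now())
```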

The European Data Protection Supervisor (EDPS) has also called for a pan-European mobile app to track the spread of the virus in EU countries.

It may not be realistically possible to stem the COVID-19 virus and return to a semblance of normalcy without using sophisticated contact tracing technology. It would take an army of coronavirus trackers to attempt to curb the spread of the disease using traditional contact tracing techniques. Further, even if contact tracing technologies do not replace humans, they could speed up the process of tracking down possibly infected contacts and play a vital role in controlling the epidemic. A research article published in Science concluded:

" that viral spread is too fast to be contained by manual contact tracing, but could be controlled if this process was faster, more efficient and happened at scale. A contact-tracing App which builds a memory of proximity contacts and immediately notifies contacts of positive cases can achieve epidemic control if used by enough people. By targeting recommendations to only those at risk, epidemics could be contained without need for mass quarantines (lock-downs) that are harmful to society. "

Organizations, recognizing the challenges in combatting the pandemic, have started to propose privacy-sensitive mobile phone based contact tracing solutions that could potentially be used in Canada. MIT researchers, for example, are developing a system that augments manual contact tracing by public health officials, while purporting to preserve the privacy of individuals. The system relies on short-range Bluetooth signals emitted from people's smartphones. These signals represent random strings of numbers, likened to "chirps" that other nearby smartphones can remember hearing. If a person tests positive, he/she can upload the list of chirps the person's phone has put out in the past 14 days to a database. Other people can then scan the database to see if any of those chirps match the ones picked up by their phones. If there's a match, a notification will inform that person that they may have been exposed to the virus, and will include information from public health authorities on next steps to take.
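The matching step in that design can be illustrated in a few lines: the phone keeps the chirps it has overheard, downloads the chirps published by confirmed cases, and checks for overlap locally, so no central party learns who was near whom. Key rotation, the cryptographic derivation of chirps, and upload authorization are all omitted from this sketch.

```python
import secrets

def new_chirp() -> str:
    """A random identifier broadcast over Bluetooth; rotated frequently in real designs."""
    return secrets.token_hex(16)

# Chirps this phone has overheard from nearby devices over the past 14 days.
heard_chirps = {new_chirp() for _ in range(200)}

# Chirps uploaded to the public database by people who tested positive
# (two overlaps are planted here so the example produces a match).
positive_chirps = {new_chirp() for _ in range(50)} | set(list(heard_chirps)[:2])

# The check happens on the phone: any overlap means possible exposure.
matches = heard_chirps & positive_chirps
if matches:
    print(f"Possible exposure: {len(matches)} matching identifier(s); follow public health guidance.")
```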

Last week Google and Apple announced they are jointly launching a comprehensive solution that includes application programming interfaces (APIs) and operating system-level technology to assist in enabling contact tracing while reportedly maintaining strong protections for user privacy. In May, both companies plan to release APIs that will enable interoperability between Android and iOS devices using apps from public health authorities. These official apps will be available for users to download via their respective app stores. Later, Apple and Google will work to enable a broader Bluetooth-based contact tracing platform by building this functionality into the underlying platforms, which would allow more individuals to participate, if they choose to opt in, as well as enable interaction with a broader ecosystem of apps and government health authorities. According to Apple and Google, "Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze."

A diagram of how the Apple/Google solution is intended to work is shown below.

As part of the partnership, Google and Apple released draft technical documentation, including information on how user privacy will be maintained, in their Bluetooth and cryptography specifications and framework documentation. The privacy-enhancing features are described as follows: explicit user consent is required; the solution does not collect personally identifiable information or user location data; the identities of people you have been in contact with never leave your phone; people who test positive are not identified to other users, Google or Apple; and the app will only be used for contact tracing by public health authorities for COVID-19 pandemic management.

The UK Government confirmed that the UK's National Health Service (NHS) is also working on a contact tracing system with two technology companies. NHSX, the technological branch of the NHS, has reportedly been working on the software alongside Apple and Google. Experts in clinical safety and digital ethics are also involved. Pre-release testing is scheduled for next week. Apple also launched COVID-19 screening tools built in collaboration with the U.S. Centers for Disease Control and Prevention (CDC), Federal Emergency Management Agency (FEMA), and the White House. It promises that the tools include strong privacy and security protections and that Apple will never sell the data it collects.

It is unclear which contact tracing technologies the governments of Canada, the provinces or organizations operating in Canada will deploy. However, as contact tracing solutions using mobile phone technologies all involve at least some collection, use, and disclosure of personal data, their adoption will necessarily be influenced by a variety of factors, including who implements the solutions (e.g. government health authorities and/or private organizations), whether the operators are subject to privacy laws, and whether they are given any special immunities from liability under emergency orders.

Privacy law issues

Canada has a myriad of federal and provincial laws across the country that could apply to any proposed contact tracing solution. Much would depend on the public or private entities, or combinations of organizations, that would be involved.

Federally, the Privacy Act applies to departments and ministries of the Government of Canada. This legislation includes provisions that regulate the uses and disclosures of personal information under the control of a government institution. The Privacy Act applies to Health Canada. (Health Canada also regulates medical devices under the Food and Drugs Act. Consideration may need to be given as to whether a contact tracing system, which can include software as a medical device (SaMD) and medical device data systems (MDDS), requires Health Canada approval.) Canada's comprehensive privacy legislation, PIPEDA, could also be implicated if, for example, personal information is collected, used or disclosed by an organization in the course of commercial activities.

There are also a myriad of provincial laws that could apply. There are comprehensive privacy regimes in Quebec, Alberta, and British Columbia, and health privacy laws such as those in the provinces of Ontario, New Brunswick, Newfoundland and Labrador, and Nova Scotia. There are also privacy statutes that apply to provincial institutions. For example, in Ontario the Personal Health Information Protection Act (PHIPA) applies to health information custodians, which include physicians, hospitals, and medical officers of health. The Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) applies to various institutions including municipalities and boards of health. There are also statutory and common law invasion of privacy claims across the country.

While there are some similarities between privacy laws across the country, there are also key differences. This includes differences in the standards for obtaining consents from individuals and the types of exemptions federal and provincial authorities and private organizations might look for. There is not, for example, a common framework like there is in the European Union under the GDPR which contains specific exemptions for processing data including when processing is necessary for reasons of substantial public interest and specific exemptions for health data. (This is one area that may be ripe for reform in Canada.)

There are numerous privacy considerations that could be taken into account in evaluating the adoption of technologies to tackle the COVID-19 epidemic. As for contact tracing technologies, the factors may include the architecture and protocols used by the solution, who has access to any data (including public authorities) and for what purposes, whether the use of the solution is voluntary or mandatory, whether the data is encrypted, whether users are anonymous, what is revealed by infected users to individuals they come into contact with, whether the system can be exploited by external parties, and how reliable and secure the system is.

Concluding remarks

All Canadians must certainly share a common goal of overcoming this pandemic. Until a vaccine is publicly available, measures to resume at least some of the economic and other activities that have been shut down will need to be considered. It seems likely that innovative new technologies such as artificial intelligence and contact tracing technologies could be deployed to foster this.

Artificial intelligence and contact tracing tools will not be the panacea that alone will solve this crisis. Artificial intelligence can be helpful, but one has to be cautious about evaluating over-hyped claims about what AI can achieve and whether AI firms have the data and expertise to deliver on their promises. Experience with contact tracing, such as in Singapore, has shown shortcomings, including the potential for not flagging cases where the virus has spread and for producing false positives. Moreover, we won't be able to re-open the country without much more, including widespread testing programs.

Privacy laws should not impede uses of technologies that can help ameliorate this emergency situation and which maintain an appropriate balance of privacy interests. Privacy laws in Canada have always recognized the need for balancing of interests. Privacy, as a moral or legal principle, does not trump all other laws or interests.

Ethical arguments for using mobile phone based contact tracing in privacy-sensitive ways were cogently expressed by the University of Oxford researchers in the Science research article referred to above:

" Successful and appropriate use of the App relies on it commanding well-founded public trust and confidence. This applies to the use of the App itself and of the data gathered. There are strong, well-established ethical arguments recognizing the importance of achieving health benefits and avoiding harm. These arguments are particularly strong in the context of an epidemic with the potential for loss of life on the scale possible with COVID-19. Requirements for the intervention to be ethical and capable of commanding the trust of the public are likely to comprise the following. i. Oversight by an inclusive and transparent advisory board, which includes members of the public. ii. The agreement and publication of ethical principles by which the intervention will be guided. iii. Guarantees of equity of access and treatment. iv. The use of a transparent and auditable algorithm. v. Integrating evaluation and research in the intervention to inform the effective management of future major outbreaks. vi. Careful oversight of and effective protections around the uses of data. vii. The sharing of knowledge with other countries, especially low- and middle-income countries. viii. Ensuring that the intervention involves the minimum imposition possible and that decisions in policy and practice are guided by three moral values: equal moral respect, fairness, and the importance of reducing suffering. "

Some have argued that abridgements of privacy and democratic rights even in emergency situations create risks that measures may become permanent or be hard to reverse. However, in a thoughtful article recently published in the MIT Technology Review by Genevieve Bell, the director of the Autonomy, Agency, and Assurance Institute at the Australian National University and a senior fellow at Intel, the author concludes that the present circumstances justify a response to this pandemic that should be subject to a sunset clause.

" The speed of the virus and the response it demands shouldnt seduce us into thinking we need to build solutions that last forever. Theres a strong argument that much of what we build for this pandemic should have a sunset clausein particular when it comes to the private, intimate, and community data we might collect. The decisions we make to opt in to data collection and analysis now might not resemble the decisions we would make at other times. Creating frameworks that allow a change in values and trade-off calculations feels important too.There will be many answers and many solutions, and none will be easy. We will trial solutions here at the ANU, and I know others will do the same. We will need to work out technical arrangements, update regulations, and even modify some of our long-standing institutions and habits. And perhaps one day, not too long from now, we might be able to meet in public, in a large gathering, and share what we have learned, and what we still need to get rightfor treating this pandemic, but also for building just, equitable, and fair societies with no judas holes in sight. "

First published @ barrysookman.com. This update is part of our continuing efforts to keep you informed about COVID-19. Follow our COVID-19 hub for the latest updates and considerations for your business.
