Eyenuk Successfully Fulfills Contract Awarded by Public Health England for Artificial Intelligence Grading of Retinal Images – BioSpace

60,000 Patient Image Sets from 6 Different Diabetic Eye Screening Programmes Analyzed Using EyeArt AI Eye Screening System

LOS ANGELES--(BUSINESS WIRE)-- Eyenuk, Inc., a global artificial intelligence (AI) medical technology and services company and the leader in real-world applications for AI Eye Screening, announced that it has successfully fulfilled the contract awarded by Public Health England (PHE) to use Eyenuk's EyeArt AI Eye Screening System to grade 60,000 patient image sets from 6 different National Health Service (NHS) Diabetic Eye Screening Programmes in England.

Diabetic retinopathy (DR) is a vision-threatening complication of diabetes and a leading cause of preventable vision loss globally.1 In England, an estimated 4.6 million people are living with diabetes, one-third of whom are at risk of developing DR. Diabetes has become a growing health concern as the number of people diagnosed with diabetes in the U.K. has more than doubled in the last 20 years.2

The U.K. has been leading the world in diabetic retinopathy screening, achieving patient uptake rates of over 80% (screening nearly 2.5 million diabetes patients annually),3 as compared with most parts of the world where typically less than half of diabetes patients receive annual eye screening.4 As a result, diabetic retinopathy is no longer the leading cause of blindness in the working age group in England.5 However, the growing diabetes population poses significant challenges ahead.

Public Health England (PHE) is an executive agency of the Department of Health and Social Care (DH) that oversees the NHS national health screening programmes. An independent Health Technology Assessment, conducted at Moorfields Eye Hospital to determine the screening performance and cost-effectiveness of multiple AI solutions for DR detection, was published in 2016.6 Subsequently, PHE initiated a tender process seeking to commission automated retinal image grading software to grade 60,000 patient image sets from multiple diabetic eye screening programmes.

At the end of the competitive tender process, the contract was awarded to Eyenuk.7 The National Diabetic Eye Screening Programme (NDESP) identified 6 local diabetic eye screening (DES) programmes to participate in the project with Eyenuk. The project aim was to compare the number of image sets categorised as having no disease, as determined by human graders (manual programme grading), with the number as determined by the EyeArt AI eye screening system. Results from this latest real-world analysis, together with results from previous assessments, have shown that the EyeArt system delivers excellent agreement, sensitivity, and specificity for detecting diabetic retinopathy.

"Eyenuk was honored to have been awarded the PHE contract for diabetic retinopathy grading, and we are gratified that our EyeArt AI system delivered excellent results when compared with six DES programmes in England," said Kaushal Solanki, Ph.D., founder and CEO of Eyenuk. "We look forward to expanding our work in the U.K., with the hope of supporting all diabetic eye screening programmes in the future."

The independent Health Technology Assessment (HTA) from Moorfields Eye Hospital, involving more than 20,000 patients, was conducted to determine the screening performance and cost-effectiveness of multiple automated retinal image analysis systems. This study demonstrated that the EyeArt AI System delivered much higher sensitivity (i.e., patient safety) for DR screening than the other automated DR screening technologies investigated, and that its use is a cost-effective alternative to the current, purely manual grading approach. The HTA also demonstrated that EyeArt performance was not affected by ethnicity, gender, or camera type.

About the EyeArt AI Eye Screening System

The EyeArt AI Eye Screening System provides fully automated DR screening, including retinal imaging, DR grading based on international standards, and the option of immediate reporting, during a diabetic patient's regular office visit. Once the patient's fundus images have been captured and submitted to the EyeArt AI System, the DR screening results are available in a PDF report in less than 60 seconds.

The EyeArt AI System was developed with funding from the U.S. National Institutes of Health (NIH) and is validated by the U.K. National Health Service (NHS). The EyeArt AI System has CE marking as a class IIa medical device in the European Union and a Health Canada license. In the U.S., the EyeArt AI System is limited by federal law to investigational use. It is designed to be General Data Protection Regulation (GDPR) and Health Insurance Portability and Accountability Act of 1996 (HIPAA) compliant.


About Eyenuk, Inc.

Eyenuk, Inc. is a global artificial intelligence (AI) medical technology and services company and the leader in real-world AI Eye Screening for autonomous disease detection and AI Predictive Biomarkers for risk assessment and disease surveillance. Eyenuk's first product, the EyeArt AI Eye Screening System, is the most extensively validated AI technology for autonomous detection of DR. Eyenuk is on a mission to screen every eye in the world to ensure timely diagnosis of life- and vision-threatening diseases, including diabetic retinopathy, glaucoma, age-related macular degeneration, stroke risk, cardiovascular risk and Alzheimer's disease. Find Eyenuk online on its website, Twitter, Facebook, and LinkedIn.

http://www.eyenuk.com

1. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4657234/
2. https://www.diabetes.org.uk/about_us/news/diabetes-prevalence-statistics
3. https://www.gov.uk/government/publications/diabetic-eye-screening-2016-to-2017-data
4. K. Fitch, T. Weisman, T. Engel, A. Turpcu, H. Blumen, Y. Rajput, and P. Dave. Longitudinal commercial claims-based cost analysis of diabetic retinopathy screening patterns. Am Health Drug Benefits. 2015;8(6):300-308.
5. G. Liew, M. Michaelides, C. Bunce. A comparison of the causes of blindness certifications in England and Wales in working age adults (16-64 years), 1999-2000 with 2009-2010. BMJ Open 4, no. 2 (2014).
6. Adnan Tufail, Venediktos V. Kapetanakis, Sebastian Salas-Vega, Catherine Egan, Caroline Rudisill, Christopher G. Owen, Aaron Lee, et al. An Observational Study to Assess If Automated Diabetic Retinopathy Image Assessment Software Can Replace One or More Steps of Manual Imaging Grading and to Determine Their Cost-Effectiveness. Health Technology Assessment 20, no. 92 (December 2016). https://doi.org/10.3310/hta20920
7. https://www.contractsfinder.service.gov.uk/Notice/13b069bd-97b4-40b6-ac66-337d1526d1e6



COVID-19 and privacy: artificial intelligence and contact tracing in combatting the pandemic – Lexology

COVID-19 is having a debilitating effect on people's health and their economic well-being. People are being forced by social distancing/isolating edicts and provincial emergency closure orders to stay home. As it looks like we may slowly be emerging from the first wave of this health and economic emergency, people are rightly asking how we can gradually start to re-open the economy and resume some semblance of normalcy without triggering substantial negative health rebounds or violating privacy norms or rights.

Governments, medical practitioners, researchers, policy-makers and others have been feverishly pursuing solutions to this challenge. Medical solutions such as vaccines and treatment methods, including the use of antibodies and experimental medications such as placenta-based cell therapy, are being pursued with understandable urgency. Testing for COVID-19, and for COVID-19 antibodies, to identify lower-risk groups of individuals for whom the emergency measures could be relaxed is an obvious strategy being debated. German researchers are planning to introduce immunity certificates, which theoretically could be used to identify some of these individuals. So far, these conversations about testing have focused only on voluntary, not mandatory, testing for the virus, thus not implicating privacy concerns, at least insofar as the testing results are used only for diagnosing and treating the individuals tested.

Artificial intelligence solutions

Artificial intelligence technologies are being used in varied ways to combat the pandemic. For example, AI has been used to identify and track the spread of the virus. A Canadian company, BlueDot, was among the first in the world to identify the emerging risk from COVID-19 in Hubei province and to publish a first scientific paper on COVID-19, accurately predicting its global spread using its proprietary models. AI technologies such as chatbots are being used as virtual assistants to provide information about the virus. AI is also being used to help diagnose the disease, including via the use of diagnostic robots, to predict which patients will likely develop severe symptoms requiring treatment, to develop drugs, and to find cures, including through searches for clues buried in heaps of scientific literature. Data-mining operations have been conducted on large datasets to build predictive computer models that provide real-time information about health services, showing where demand is rising and where critical equipment needs to be deployed. AI has also found uses in monitoring for crowd formations to help enforce social distancing rules. Some of these uses raise privacy compliance issues as they involve, amongst other things, the collection, use, aggregation, analysis and disclosure to third parties of datasets that may or may not include de-identified or re-identifiable data.

Other uses of AI for tracking and public surveillance purposes also raise privacy compliance issues and, depending on who is conducting these activities and for what purposes, issues under the Canadian Charter of Rights and Freedoms. Examples include tracking and surveillance using location data stored on or generated by smartphones, scanning public spaces for potentially affected people using fever-detecting infrared cameras, and facial recognition and other computer-vision surveillance technologies.

Contact tracing solutions

A solution that is increasingly being relied upon is COVID-19 contact tracing. Public Health Ontario defined contact tracing, in an online notice linking to a Government of Canada website portal soliciting volunteers for the National COVID-19 Volunteer Recruitment Campaign, as "a process that is used to identify, educate and monitor individuals who have had close contact with someone who is infected with a virus. These individuals are at a higher risk of becoming infected and sharing the virus with others. Contact tracing can help the individuals understand their risk and limit further spread of the virus."

Contact tracing as an epidemic control measure is not new. It is infectious disease control 101, often deployed against other illnesses such as measles, SARS, typhoid, meningococcal disease and sexually transmitted infections like AIDS. The use of smartphones and various other technologies to help identify and trace individuals with infectious diseases has also been proposed or deployed in connection with other diseases such as Ebola.

Contact tracing using location tracking capabilities to combat COVID-19 has already been implemented in other countries such as South Korea and Taiwan. It has also been deployed in China using a plugin app to the ubiquitous WeChat and Alipay apps. Use of the plugin was not compulsory in general, but was required to move between certain areas and public spaces. A central database collected user data, which was analyzed using AI tools.

Singapore deployed its TraceTogether mobile application to enable community-driven contact tracing, where participating devices exchange proximity information whenever an app detects another device with the TraceTogether app installed. It uses Bluetooth Relative Signal Strength Indicator (RSSI) readings between devices across time to approximate the proximity and duration of an encounter between two users. This proximity and duration information is stored in an encrypted form on a person's phone for 21 days on a rolling basis. No location data is collected. If a person unfortunately falls ill with COVID-19, the Ministry of Health (MOH) would work with the individual to map out 14 days' worth of activity, for contact tracing. And if the person has the TraceTogether app installed, he/she is required by law to assist in the activity mapping of his/her movements and interactions and may be asked to produce any document or record in his/her possession, including data stored by any apps in the person's phone.
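A minimal sketch of how RSSI readings can be turned into a rough proximity and duration estimate is shown below. The path-loss constants, the 2-metre threshold and the one-sample-per-minute rate are illustrative assumptions, not TraceTogether's actual calibration.

```python
# Illustrative constants -- real deployments calibrate these per device model.
MEASURED_POWER_DBM = -59   # assumed RSSI at 1 metre
PATH_LOSS_EXPONENT = 2.0   # free-space propagation assumption
CLOSE_CONTACT_METRES = 2.0
SAMPLE_INTERVAL_S = 60     # assume one RSSI sample per minute

def estimate_distance_m(rssi_dbm: float) -> float:
    """Convert an RSSI reading to an approximate distance (log-distance model)."""
    return 10 ** ((MEASURED_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def close_contact_minutes(rssi_samples: list) -> float:
    """Approximate how long two devices spent within the close-contact radius."""
    close = sum(1 for r in rssi_samples if estimate_distance_m(r) <= CLOSE_CONTACT_METRES)
    return close * SAMPLE_INTERVAL_S / 60

# Example: a 5-minute encounter where the phones drifted apart halfway through.
print(close_contact_minutes([-55, -58, -60, -75, -80]))  # -> 3.0 minutes in close contact
```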

The European Data Protection Supervisor (EDPS) has also called for a pan-European mobile app to track the spread of the virus in EU countries.

It may not be realistically possible to stem the COVID-19 virus and return to a semblance of normalcy without using sophisticated contact tracing technology. It would take an army of coronavirus trackers to attempt to curb the spread of the disease using traditional contact tracing techniques. Further, even if contact tracing technologies do not replace humans, they could speed up the process of tracking down possibly infected contacts and play a vital role in controlling the epidemic. A research article published in Science concluded:

" that viral spread is too fast to be contained by manual contact tracing, but could be controlled if this process was faster, more efficient and happened at scale. A contact-tracing App which builds a memory of proximity contacts and immediately notifies contacts of positive cases can achieve epidemic control if used by enough people. By targeting recommendations to only those at risk, epidemics could be contained without need for mass quarantines (lock-downs) that are harmful to society. "

Organizations, recognizing the challenges in combatting the pandemic, have started to propose privacy-sensitive mobile phone based contact tracing solutions that could potentially be used in Canada. MIT researchers, for example, are developing a system that augments manual contact tracing by public health officials, while purporting to preserve the privacy of individuals. The system relies on short-range Bluetooth signals emitted from people's smartphones. These signals represent random strings of numbers, likened to "chirps" that other nearby smartphones can remember hearing. If a person tests positive, he/she can upload the list of chirps the person's phone has put out in the past 14 days to a database. Other people can then scan the database to see if any of those chirps match the ones picked up by their phones. If there's a match, a notification will inform that person that they may have been exposed to the virus, and will include information from public health authorities on next steps to take.
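The broadcast-and-match idea can be illustrated with a short sketch. The token length, rotation interval and function names below are assumptions chosen for illustration, not the MIT team's actual protocol.

```python
import secrets

def generate_chirps(days: int = 14, per_day: int = 96) -> list:
    """Generate random, unlinkable tokens ('chirps') to broadcast over Bluetooth.
    per_day=96 assumes the token rotates every 15 minutes."""
    return [secrets.token_hex(16) for _ in range(days * per_day)]

def check_exposure(heard_chirps: set, published_positive_chirps: set) -> bool:
    """A phone compares the chirps it overheard against those uploaded by users
    who tested positive; a non-empty intersection means possible exposure."""
    return bool(heard_chirps & published_positive_chirps)

# Toy usage: Alice tests positive and uploads her chirps; Bob's phone heard one of them.
alice_chirps = set(generate_chirps())
bob_heard = {next(iter(alice_chirps)), secrets.token_hex(16)}
print(check_exposure(bob_heard, alice_chirps))  # True -> Bob's phone shows a notification
```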

Last week Google and Apple announced they are jointly launching a comprehensive solution that includes application programming interfaces (APIs) and operating system-level technology to assist in enabling contact tracing while reportedly maintaining strong protections for user privacy. In May, both companies plan to release APIs that will enable interoperability between Android and iOS devices using apps from public health authorities. These official apps will be available for users to download via their respective app stores. Later, Apple and Google will work to enable a broader Bluetooth-based contact tracing platform by building this functionality into the underlying platforms, which would allow more individuals to participate, if they choose to opt in, as well as enable interaction with a broader ecosystem of apps and government health authorities. According to Apple and Google, "Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze."


As part of the partnership, Google and Apple released draft technical documentation, including Bluetooth and cryptography specifications and framework documentation, with information on how user privacy will be maintained. The privacy-enhancing features are described as follows: explicit user consent is required; the solution doesn't collect personally identifiable information or user location data; the people you've been in contact with never leave your phone; people who test positive are not identified to other users, Google or Apple; and the app will only be used for contact tracing by public health authorities for COVID-19 pandemic management.
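A toy illustration of the general rolling-identifier idea behind such systems, deriving short-lived Bluetooth beacons from a secret daily key so that observers cannot link broadcasts, is sketched below. It does not follow the published Apple/Google cryptographic specification; the key sizes, derivation function and interval counts are assumptions.

```python
import hashlib
import hmac
import secrets

def daily_key() -> bytes:
    """A fresh secret key generated on-device each day (it stays on the phone)."""
    return secrets.token_bytes(16)

def rolling_identifier(day_key: bytes, interval_index: int) -> bytes:
    """Derive a short-lived broadcast identifier from the daily key.
    Observers can't link identifiers without the key; if the user later tests
    positive and the daily key is published, other phones can re-derive the
    identifiers and match them against what they overheard."""
    return hmac.new(day_key, interval_index.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

key = daily_key()
beacons = [rolling_identifier(key, i) for i in range(144)]  # one per 10-minute interval
print(len(set(beacons)))  # 144 distinct, unlinkable-looking identifiers for the day
```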

The UK Government confirmed that the UK's National Health Service (NHS) is also working on a contact tracing system with two technology companies. NHSX, the technological branch of the NHS, has reportedly been working on the software alongside Apple and Google. Experts in clinical safety and digital ethics are also involved. Pre-release testing is scheduled for next week. Apple also launched COVID-19 screening tools built in collaboration with the U.S. Centers for Disease Control and Prevention (CDC), Federal Emergency Management Agency (FEMA), and the White House. It promises that the tools include strong privacy and security protections and that Apple will never sell the data it collects.

It is unclear what contact tracing technologies the governments of Canada, the provinces or organizations operating in Canada will deploy. However, as contact tracing solutions using mobile phone technologies all involve at least some collection, use, and disclosure of personal data, their adoption will necessarily be influenced by a variety of factors, including who implements the solutions (e.g., governments, health authorities and/or private organizations), and whether the operators are subject to privacy laws, or are given any special immunities from liability under emergency orders.

Privacy law issues

Canada has a myriad of federal and provincial laws across the country that could apply to any proposed contact tracing solution. Much would depend on the public or private entities, or combinations of organizations, that would be involved.

Federally, the Privacy Act applies to departments and ministries of the Government of Canada. This legislation includes provisions that regulate the uses and disclosures of personal information under the control of a government institution. The Privacy Act applies to Health Canada. (Health Canada also regulates medical devices under the Food and Drugs Act. Consideration may need to be given as to whether a contact tracing system, which can include software as a medical device (SaMD) and medical device data systems (MDDS), requires Health Canada approval.) Canada's comprehensive privacy legislation, PIPEDA, could also be implicated if, for example, personal information is collected, used or disclosed by an organization in the course of commercial activities.

There are also a myriad of provincial laws that could apply. There are comprehensive privacy regimes in Quebec, Alberta, and British Columbia, and health privacy laws such as those in the provinces of Ontario, New Brunswick, Newfoundland and Labrador and Nova Scotia. There are also privacy statutes that apply to provincial institutions. For example, in Ontario the Personal Health Information Protection Act (PHIPA) applies to health information custodians, which include physicians, hospitals, and medical officers of health. The Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) applies to various institutions including municipalities and boards of health. There are also statutory and common law invasion of privacy claims across the country.

While there are some similarities between privacy laws across the country, there are also key differences. These include differences in the standards for obtaining consents from individuals and in the types of exemptions federal and provincial authorities and private organizations might look for. There is not, for example, a common framework like there is in the European Union under the GDPR, which contains specific exemptions for processing data, including when processing is necessary for reasons of substantial public interest, and specific exemptions for health data. (This is one area that may be ripe for reform in Canada.)

There are numerous privacy considerations that could be taken into account in evaluating the adoption of technologies to tackle the COVID-19 epidemic. As for contact tracing technologies, the factors may include the architecture and protocols used by the solution, who has access to any data (including public authorities) and for what purposes, whether the use of the solution is voluntary or mandatory, whether the data is encrypted, whether users are anonymous, what is revealed by infected users to individuals they come into contact with, whether the system can be exploited by external parties, and how reliable and secure the system is.

Concluding remarks

All Canadians must certainly share a common goal of overcoming this pandemic. Until a vaccine is publicly available, measures to resume at least some of the economic and other activities that have been shut down will need to be considered. It seems likely that innovative new technologies such as artificial intelligence and contact tracing technologies could be deployed to foster this.

Artificial intelligence and contact tracing tools will not be the panacea that alone will solve this crisis. Artificial intelligence can be helpful, but one has to be cautious about evaluating over-hyped claims about what AI can achieve and whether AI firms have the data and expertise to deliver on their promises. Experience with contact tracing, such as in Singapore, has shown shortcomings, including the potential for not flagging cases where the virus has spread and for producing false positives. Moreover, we won't be able to re-open the country without much more, including widespread testing programs.

Privacy laws should not impede uses of technologies that can help ameliorate this emergency situation and which maintain an appropriate balance of privacy interests. Privacy laws in Canada have always recognized the need for balancing of interests. Privacy, as a moral or legal principle, does not trump all other laws or interests.

Ethical arguments for using mobile phone based contact tracing in privacy sensitive ways were cogently expressed by the University of Oxford researchers of the Science research article referred to above:

" Successful and appropriate use of the App relies on it commanding well-founded public trust and confidence. This applies to the use of the App itself and of the data gathered. There are strong, well-established ethical arguments recognizing the importance of achieving health benefits and avoiding harm. These arguments are particularly strong in the context of an epidemic with the potential for loss of life on the scale possible with COVID-19. Requirements for the intervention to be ethical and capable of commanding the trust of the public are likely to comprise the following. i. Oversight by an inclusive and transparent advisory board, which includes members of the public. ii. The agreement and publication of ethical principles by which the intervention will be guided. iii. Guarantees of equity of access and treatment. iv. The use of a transparent and auditable algorithm. v. Integrating evaluation and research in the intervention to inform the effective management of future major outbreaks. vi. Careful oversight of and effective protections around the uses of data. vii. The sharing of knowledge with other countries, especially low- and middle-income countries. viii. Ensuring that the intervention involves the minimum imposition possible and that decisions in policy and practice are guided by three moral values: equal moral respect, fairness, and the importance of reducing suffering. "

Some have argued that abridgements of privacy and democratic rights even in emergency situations create risks that measures may become permanent or be hard to reverse. However, in a thoughtful article recently published in the MIT Technology Review by Genevieve Bell, the director of the Autonomy, Agency, and Assurance Institute at the Australian National University and a senior fellow at Intel, the author concludes that the present circumstances justify a response to this pandemic that should be subject to a sunset clause.

" The speed of the virus and the response it demands shouldnt seduce us into thinking we need to build solutions that last forever. Theres a strong argument that much of what we build for this pandemic should have a sunset clausein particular when it comes to the private, intimate, and community data we might collect. The decisions we make to opt in to data collection and analysis now might not resemble the decisions we would make at other times. Creating frameworks that allow a change in values and trade-off calculations feels important too.There will be many answers and many solutions, and none will be easy. We will trial solutions here at the ANU, and I know others will do the same. We will need to work out technical arrangements, update regulations, and even modify some of our long-standing institutions and habits. And perhaps one day, not too long from now, we might be able to meet in public, in a large gathering, and share what we have learned, and what we still need to get rightfor treating this pandemic, but also for building just, equitable, and fair societies with no judas holes in sight. "

First published @ barrysookman.com.


AI and the coronavirus fight: How artificial intelligence is taking on COVID-19 – ZDNet

As the COVID-19 coronavirus outbreak continues to spread across the globe, companies and researchers are looking to use artificial intelligence as a way of addressing the challenges of the virus. Here are just some of the projects using AI to address the coronavirus outbreak.

Using AI to find drugs that target the virus

A number of research projects are using AI to identify drugs that were developed to fight other diseases but which could now be repurposed to take on coronavirus. By studying the molecular setup of existing drugs with AI, companies want to identify which ones might disrupt the way COVID-19 works.

BenevolentAI, a London-based drug-discovery company, began turning its attentions towards the coronavirus problem in late January. The company's AI-powered knowledge graph can digest large volumes of scientific literature and biomedical research to find links between the genetic and biological properties of diseases and the composition and action of drugs.


The company had previously been focused on chronic disease, rather than infections, but was able to retool the system to work on COVID-19 by feeding it the latest research on the virus. "Because of the amount of data that's being produced about COVID-19 and the capabilities we have in being able to machine-read large amounts of documents at scale, we were able to adapt [the knowledge graph] so to take into account the kinds of concepts that are more important in biology, as well as the latest information about COVID-19 itself," says Olly Oechsle, lead software engineer at BenevolentAI.

While a large body of biomedical research has built up around chronic diseases over decades, COVID-19 only has a few months' worth of studies attached to it. But researchers can use the information that they have to track down other viruses with similar elements, see how they function, and then work out which drugs could be used to inhibit the virus.

"The infection process of COVID-19 was identified relatively early on. It was found that the virus binds to a particular protein on the surface of cells called ACE2. And what we could with do with our knowledge graph is to look at the processes surrounding that entry of the virus and its replication, rather than anything specific in COVID-19 itself. That allows us to look back a lot more at the literature that concerns different coronaviruses, including SARS, etc. and all of the kinds of biology that goes on in that process of viruses being taken in cells," Oechsle says.

The system suggested a number of compounds that could potentially have an effect on COVID-19 including, most promisingly, a drug called Baricitinib. The drug is already licensed to treat rheumatoid arthritis. The properties of Baricitinib mean that it could potentially slow down the process of the virus being taken up into cells and reduce its ability to infect lung cells. More research and human trials will be needed to see whether the drug has the effects AI predicts.

Shedding light on the structure of COVID-19

DeepMind, the AI arm of Google's parent company Alphabet, is using data on genomes to predict organisms' protein structure, potentially shedding light on which drugs could work against COVID-19.

DeepMind has released a deep-learning library called AlphaFold, which uses neural networks to predict how the proteins that make up an organism curve or crinkle, based on their genome. Protein structures determine the shape of receptors in an organism's cells. Once you know what shape the receptor is, it becomes possible to work out which drugs could bind to them and disrupt vital processes within the cells: in the case of COVID-19, disrupting how it binds to human cells or slowing the rate it reproduces, for example.

After training up AlphaFold on large genomic datasets, which demonstrate the links between an organism's genome and how its proteins are shaped, DeepMind set AlphaFold to work on COVID-19's genome.

"We emphasise that these structure predictions have not been experimentally verified, but hope they may contribute to the scientific community's interrogation of how the virus functions, and serve as a hypothesis generation platform for future experimental work in developing therapeutics," DeepMind said. Or, to put it another way, DeepMind hasn't tested out AlphaFold's predictions outside of a computer, but it's putting the results out there in case researchers can use them to develop treatments for COVID-19.

Detecting the outbreak and spread of new diseases

Artificial-intelligence systems were thought to be among the first to detect that the coronavirus outbreak, back when it was still localised to the Chinese city of Wuhan, could become a full-on global pandemic.

It's thought that AI-driven HealthMap, which is affiliated with the Boston Children's Hospital, picked up the growing cluster of unexplained pneumonia cases shortly before human researchers, although it only ranked the outbreak's seriousness as 'medium'.

"We identified the earliest signs of the outbreak by mining in Chinese language and local news media -- WeChat, Weibo -- to highlight the fact that you could use these tools to basically uncover what's happening in a population," John Brownstein, professor of Harvard Medical School and chief innovation officer at Boston Children's Hospital, told the Stanford Institute for Human-Centered Artificial Intelligence's COVID-19 and AI virtual conference.

Human epidemiologists at ProMed, an infectious-disease-reporting group, published their own alert just half an hour after HealthMap, and Brownstein also acknowledged the importance of human virologists in studying the spread of the outbreak.

"What we quickly realised was that as much it's easy to scrape the web to create a really detailed line list of cases around the world, you need an army of people, it can't just be done through machine learning and webscraping," he said. HealthMap also drew on the expertise of researchers from universities across the world, using "official and unofficial sources" to feed into theline list.

The data generated by HealthMap has been made public, to be combed through by scientists and researchers looking for links between the disease and certain populations, as well as containment measures. The data has already been combined with data on human movements, gleaned from Baidu, to see how population mobility and control measures affected the spread of the virus in China.

HealthMap has continued to track the spread of coronavirus throughout the outbreak, visualising its spread across the world by time and location.

Spotting signs of a COVID-19 infection in medical images

Canadian startup DarwinAI has developed a neural network that can screen X-rays for signs of COVID-19 infection. While using swabs from patients is the default for testing for coronavirus, analysing chest X-rays could offer an alternative to hospitals that don't have enough staff or testing kits to process all their patients quickly.

DarwinAI released COVID-Net as an open-source system, and "the response has just been overwhelming", says DarwinAI CEO Sheldon Fernandez. More datasets of X-rays were contributed to train the system, which has now learnt from over 17,000 images, while researchers from Indonesia, Turkey, India and other countries are all now working on COVID-19. "Once you put it out there, you have 100 eyes on it very quickly, and they'll very quickly give you some low-hanging fruit on ways to make it better," Fernandez said.
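For readers unfamiliar with the approach, the sketch below shows the general shape of a convolutional classifier for chest X-rays. It is a deliberately tiny stand-in, not COVID-Net itself; the class labels and image size are assumptions.

```python
import torch
import torch.nn as nn

class TinyXRayNet(nn.Module):
    """A deliberately small CNN for single-channel chest X-rays resized to 224x224.
    Three assumed output classes: normal / non-COVID pneumonia / COVID-19."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyXRayNet()
dummy_batch = torch.randn(4, 1, 224, 224)   # 4 fake grayscale X-rays
print(model(dummy_batch).shape)             # torch.Size([4, 3]) -> one score per class
```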

The company is now working on turning COVID-Net from a technical implementation to a system that can be used by healthcare workers. It's also now developing a neural network for risk-stratifying patients that have contracted COVID-19 as a way of separating those with the virus who might be better suited to recovering at home in self-isolation, and those who would be better coming into hospital.

Monitoring how the virus and lockdown is affecting mental health

Johannes Eichstaedt, assistant professor in Stanford University's department of psychology, has been examining Twitter posts to estimate how COVID-19, and the changes that it's brought to the way we live our lives, is affecting our mental health.

Using AI-driven text analysis, Eichstaedt queried over two million tweets hashtagged with COVID-related terms during February and March, and combined them with other datasets on relevant factors, including the number of cases, deaths, demographics and more, to illuminate the virus' effects on mental health.
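A simplified sketch of this kind of hashtag filtering and topic counting is shown below. The hashtags, topic terms and column names are assumptions for illustration, not the study's actual pipeline.

```python
import pandas as pd

COVID_TAGS = {"#covid19", "#coronavirus", "#covid_19"}
TOPIC_TERMS = {"adapting": ["stay home", "wash", "distanc"],
               "economy": ["economy", "jobs", "unemploy"]}

def topic_counts(tweets: pd.DataFrame) -> pd.DataFrame:
    """Count topic-term mentions per region for tweets carrying COVID hashtags.
    Expects columns: 'text', 'hashtags' (list of str), 'region'."""
    covid = tweets[tweets["hashtags"].apply(lambda tags: bool(COVID_TAGS & set(tags)))]
    rows = []
    for topic, terms in TOPIC_TERMS.items():
        hits = covid["text"].str.lower().str.contains("|".join(terms))
        rows.append(covid[hits].groupby("region").size().rename(topic))
    return pd.concat(rows, axis=1).fillna(0)

# Toy data standing in for the roughly two million real tweets.
df = pd.DataFrame({
    "text": ["Please wash your hands", "Worried about the economy", "Nice weather"],
    "hashtags": [["#covid19"], ["#coronavirus"], ["#sunny"]],
    "region": ["urban", "rural", "urban"],
})
print(topic_counts(df))
```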

The analysis showed that much of the COVID-19-related chat in urban areas was centred on adapting to living with, and preventing the spread of, the infection. Rural areas discussed adapting far less, which the psychologist attributed to the relative prevalence of the disease in urban areas compared to rural, meaning those in the country have had less exposure to the disease and its consequences.


There are also differences in how the young and old are discussing COVID-19. "In older counties across the US, there's talk about Trump and the economic impact, whereas in young counties, it's much more problem-focused coping; the one language cluster that stands out there is that in counties that are younger, people talk about washing their hands," Eichstaedt said.

"We really need to measure the wellbeing impact of COVID-19, and we very quickly need to think about scalable mental healthcare and now is the time to mobilise resources to make that happen," Eichstaedt told the Stanford virtual conference.

Forecasting how coronavirus cases and deaths will spread across cities and why

Google-owned machine-learning community Kaggle is setting a number of COVID-19-related challenges to its members, including forecasting the number of cases and fatalities by city as a way of identifying exactly why some places are hit worse than others.

"The goal here isn't to build another epidemiological model there are lots of good epidemiological models out there. Actually, the reason we have launched this challenge is to encourage our community to play with the data and try and pick apart the factors that are driving difference in transmission rates across cities," Kaggle's CEO Anthony Goldbloom told the Stanford conference.

Currently, the community is working on a dataset of infections in 163 countries from two months of this year to develop models and interrogate the data for factors that predict spread.

Most of the community's models have been producing feature-importance plots to show which elements may be contributing to the differences in cases and fatalities. So far, said Goldbloom, latitude and longitude are showing up as having a bearing on COVID-19 spread. The next generation of machine-learning-driven feature-importance plots will tease out the real reasons for geographical variances.
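A minimal sketch of how such a feature-importance plot is typically produced is shown below. The columns and values are made up for illustration, and a random forest stands in for whatever models the Kaggle notebooks actually use.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Illustrative columns only -- the real notebooks pull in far richer data.
data = pd.DataFrame({
    "latitude":       [31.2, 45.4, 1.35, 51.5, -33.9],
    "longitude":      [121.5, -75.7, 103.8, -0.1, 18.4],
    "pop_density":    [3800, 330, 8350, 5700, 1500],
    "mean_temp_c":    [8.0, -5.0, 27.0, 6.0, 22.0],
    "cases_per_100k": [12.0, 4.0, 2.5, 18.0, 3.0],   # target variable
})

X, y = data.drop(columns="cases_per_100k"), data["cases_per_100k"]
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# These values are what a typical feature-importance plot visualises.
importances = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importances)
```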

"It's not the country that is the reason that transmission rates are different in different countries; rather, it's the policies in that country, or it's the cultural norms around hugging and kissing, or it's the temperature. We expect that as people iterate on their models, they'll bring in more granular datasets and we'll start to see these variable-importance plots becoming much more interesting and starting to pick apart the most important factors driving differences in transmission rates across different cities. This is one to watch," Goldbloom added.


You Can't Spell Creative Without A.I. – The New York Times

This article is part of our latest Artificial Intelligence special report, which focuses on how the technology continues to evolve and affect our lives.

Steve Jobs once described personal computing as "a bicycle for the mind."

His idea that computers can be used as intelligence amplifiers that offer an important boost for human creativity is now being given an immediate test in the face of the coronavirus.

In March, a group of artificial intelligence research groups and the National Library of Medicine announced that they had organized the world's scientific research papers about the virus so the documents, more than 44,000 articles, could be explored in new ways using a machine-learning program designed to help scientists see patterns and find relationships to aid research.

"This is a chance for artificial intelligence," said Oren Etzioni, the chief executive of the Allen Institute for Artificial Intelligence, a nonprofit research laboratory that was founded in 2014 by Paul Allen, the Microsoft co-founder.

"There has long been a dream of using A.I. to help with scientific discovery, and now the question is, can we do that?"

The new advances in software applications that process human language lie at the heart of a long-running debate over whether computer technologies such as artificial intelligence will enhance or even begin to substitute for human creativity.

The programs are in effect artificial intelligence Swiss Army knives that can be repurposed for a host of different practical applications, ranging from writing articles, books and poetry to composing music, language translation and scientific discovery.

In addition to raising questions about whether machines will be able to think creatively, the software has touched off a wave of experimentation and has also raised questions about new challenges to intellectual property laws and concerns about whether they might be misused for spam, disinformation and fraud.

The Allen Institute program, Semantic Scholar, began in 2015. It is an early example of this new class of software that uses machine-learning techniques to extract meaning from and identify connections between scientific papers, helping researchers more quickly gain in-depth understanding.

Since then, there has been a rapid set of advances based on new language processing techniques, leading a variety of technology firms and research groups to introduce competing programs known as language models, each more powerful than the last.

What has been in effect an A.I. arms race reached a high point in February, when Microsoft introduced Turing-NLG (natural language generation), named after the British mathematician and computing pioneer Alan Turing. The machine-learning behemoth consists of 17 billion parameters, or weights, which are numbers that are arrived at after the program was trained on an immense library of human-written texts, effectively more than all the written material available on the internet.

As a result, significant claims have been made for the capability of language models, including the ability to write plausible-sounding sentences and paragraphs, as well as draw and paint and hold a believable conversation with a human.

"Where we've seen the most interesting applications has really been in the creative space," said Ashley Pilipiszyn, a technical director at OpenAI, an independent research group based in San Francisco that was founded as a nonprofit research organization to develop socially beneficial artificial intelligence-based technology and later established a for-profit corporation.

Early last year, the group announced a language model called GPT-2 (generative pretrained transformer), but initially did not release it publicly, saying it was concerned about potential misuse in creating disinformation. But near the end of the year, the program was made widely available.

"Everyone has innate creative capabilities," she said, "and this is a tool that helps push those boundaries even further."

Hector Postigo, an associate professor at the Klein College of Media and Communication at Temple University, began experimenting with GPT-2 shortly after it was released. His first idea was to train the program to automatically write a simple policy statement about ethics policies for A.I. systems.

After fine-tuning GPT-2 with a large collection of human-written articles, position papers, and laws collected in 2019 on A.I., big data and algorithms, he seeded the program with a single sentence: "Algorithmic decision-making can pose dangers to human rights."

The program created a short essay that began, "Decision systems that assume predictability about human behavior can be prone to error. These are the errors of a data-driven society." It concluded, "Recognizing these issues will ensure that we are able to use the tools that humanity has entrusted to us to address the most pressing rights and security challenges of our time."
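For readers who want to experiment, the sketch below seeds the publicly released GPT-2 model with the same kind of prompt using the Hugging Face transformers library. It skips the fine-tuning step Mr. Postigo performed on his own corpus, so the output will differ from his.

```python
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Algorithmic decision-making can pose dangers to human rights."
outputs = generator(prompt, max_length=80, num_return_sequences=1)
print(outputs[0]["generated_text"])  # prompt plus the model's continuation
```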

Mr. Postigo said the new generation of tools would transform the way people create as authors.

"We already use autocomplete all the time," he said. "The cat is already out of the bag."

Since his first experiment, he has trained GPT-2 to compose classical music and write poetry and rap lyrics.

That poses the question of whether the programs are genuinely creative. And if they are able to create works of art that are indistinguishable from human works, will they devalue those created by humans?

A.I. researchers who have worked in the field for decades said that it was important to realize that the programs were simply assistive and that they were not creating artistic works or making other intellectual achievements independently.

The early signs are that the new tools will be quickly embraced. The Semantic Scholar coronavirus webpage was viewed more than 100,000 times in the first three days it was available, Dr. Etzioni said. Researchers at Google Health, Johns Hopkins University, the Mayo Clinic, the University of Notre Dame, Hewlett Packard Labs and IBM Research are using the service, among others.

Jerry Kaplan, an artificial-intelligence researcher who was involved with two of Silicon Valley's first A.I. companies, Symantec and Teknowledge, during the 1980s, pointed out that the new language modeling software was actually just a new type of database retrieval technology, rather than an advance toward any kind of thinking machine.

"Creativity is still entirely on the human side," he said. "All this particular tool is doing is making it possible to get insights that would otherwise take years of study."

Although that may be true, philosophers have begun to wonder whether these new tools will permanently change human creativity.

Brian Smith, a philosopher and a professor of artificial intelligence at the University of Toronto, noted that although students are still taught how to do long division by hand, calculators now are universally used for the task.

"We once used rooms full of human computers to do these tasks manually," he said, noting that nobody would want to return to that era.

In the future, however, it is possible that these new tools will begin to take over much of what we consider creative tasks such as writing, composing and other artistic ventures.

"What we have to decide is, what is at the heart of our humanity that is worth preserving," he said.


Addressing the gender bias in artificial intelligence and automation – OpenGlobalRights


Twenty-five years after the adoption of the Beijing Declaration and Platform for Action, significant gender bias in existing social norms remains. For example, as recently as February 2020, the Indian Supreme Court had to remind the Indian government that its arguments for denying women command positions in the Army were based on stereotypes. And gender bias is not merely a male problem: a recent UNDP report entitled Tackling Social Norms found that about 90% of people (both men and women) hold some bias against women.

Gender bias and various forms of discrimination against women and girls pervade all spheres of life. Women's equal access to science and information technology is no exception. While the challenges posed by the digital divide and under-representation of women in STEM (science, technology, engineering and mathematics) continue, artificial intelligence (AI) and automation are posing new challenges to achieving substantive gender equality in the era of the Fourth Industrial Revolution.

If AI and automation are not developed and applied in a gender-responsive way, they are likely to reproduce and reinforce existing gender stereotypes and discriminatory social norms. In fact, this may already be happening (un)consciously. Let us consider a few examples:

Despite the potential for such gender bias, the growing crop of AI standards does not adequately integrate a gender perspective. For example, the Montreal Declaration for the Responsible Development of Artificial Intelligence does not make an explicit reference to integrating a gender perspective, while the AI4People's Ethical Framework for a Good AI Society mentions diversity/gender only once. Both the OECD Council Recommendation on AI and the G20 AI Principles stress the importance of AI contributing to reducing gender inequality, but provide no details on how this could be achieved.

The Responsible Machine Learning Principles do embrace bias evaluation as one of the principles. This siloed approach to gender is also adopted by companies like Google and Microsoft, whose AI Principles underscore the need to "avoid creating or reinforcing unfair bias" and to "treat all people fairly", respectively. Companies related to AI and automation should adopt a gender-responsive approach across all principles to overcome inherent gender bias. Google should, for example, embed a gender perspective in assessing which new technologies are socially beneficial or how AI systems are built and tested for safety.

What should be done to address the gender bias in AI and automation? The gender framework for the UN Guiding Principles on Business and Human Rights could provide practical guidance to states, companies and other actors. The framework involves a three-step cycle: gender-responsive assessment, gender-transformative measures and gender-transformative remedies. The assessment should be able to respond to differentiated, intersectional, and disproportionate adverse impacts on women's human rights. The consequent measures and remedies should be transformative in that they should be capable of bringing change to patriarchal norms, unequal power relations, and gender stereotyping.

States, companies and other actors can take several concrete steps. First, women should be active participants, rather than mere passive beneficiaries, in creating AI and automation. Women and their experiences should be adequately integrated in all steps related to design, development and application of AI and automation. In addition to proactively hiring more women at all levels, AI and automation companies should engage gender experts and women's organisations from the outset in conducting human rights due diligence.

Second, the data that informs algorithms, AI and automation should be sex-disaggregated; otherwise, the experiences of women will not inform these technological tools, which in turn might continue to internalise existing gender biases against women. Moreover, even data related to women should be guarded against any inherent gender bias.

Third, states, companies and universities should plan for and invest in building capacity of women to achieve smooth transition to AI and automation. This would require vocational/technical training at both education and work levels.

Fourth, AI and automation should be designed to overcome gender discrimination and patriarchal social norms. In other words, these technologies should be employed to address challenges faced by women such as unpaid care work, gender pay gap, cyber bullying, gender-based violence and sexual harassment, trafficking, breach of sexual and reproductive rights, and under-representation in leadership positions. Similarly, the power of AI and automation should be employed to enhance womens access to finance, higher education and flexible work opportunities.

Fifth, special steps should be taken to make women aware of their human rights and the impact of AI and automation on their rights. Similar measures are needed to ensure that remedial mechanisms, both judicial and non-judicial, are responsive to gender bias, discrimination, patriarchal power structures, and asymmetries of information and resources.

Sixth, states and companies should keep in mind the intersectional dimensions of gender discrimination; otherwise their responses, despite good intentions, will fall short of using AI and automation to accomplish gender equality. Low-income women, single mothers, women of colour, migrant women, women with disabilities, and non-heterosexual women all may be affected differently by AI and automation and would have differentiated needs or expectations.

Finally, all standards related to AI and automation should integrate a gender perspective in a holistic manner, rather than treating gender as merely a bias issue to be managed.

Technologies are rarely gender neutral in practice. If AI and automation continue to ignore womens experiences or to leave women behind, everyone will be worse off.


Banking and payments predictions 2020: Artificial intelligence – Verdict

Artificial intelligence (AI) refers to software-based systems that use data inputs to make decisions on their own. Machine learning is an application of AI that gives computer systems the ability to learn and improve from data without being explicitly programmed.
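A minimal illustration of "learning from data without being explicitly programmed": the toy model below infers a fraud rule from labelled examples rather than from hand-written logic. The feature choices and values are assumptions for illustration only.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy labelled history: [transaction amount, hour of day] -> fraud (1) or not (0).
X = [[900, 3], [15, 13], [2500, 2], [40, 18], [1200, 1], [25, 12]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier(random_state=0).fit(X, y)   # no hand-written fraud rules
print(model.predict([[1800, 2], [30, 14]]))                # learned behaviour: [1 0]
```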

2019 saw financial institutions explore a broad range of possible AI use cases in both customer-facing and back-office processes, increasing budgets, headcounts, and partnerships. 2020 will see increased focus on separating the marketing story from actual business impact in order to place bigger bets in fewer areas. This will help banks scale proven AI across the enterprise to forge competitive advantage.

Artificial intelligence will re-invigorate digital money management, helping incumbents drip-feed highly personalised spending tips to build trust and engagement in the absence of in-person interaction. Features like predictive insights around cashflow shortfalls, alerts on upcoming bill payments, and various "what if" scenarios when trying on different financial products give customers transparency around their options and the risks they face. This service will act as an always-on, in-your-pocket, predictive advisor.

AI-enhanced customer relationship management (CRM) will help digital banks optimise product recommendations to rival the conversion rates of best-in-class online retailers. These product suggestions won't come across as sales pitches, but rather as valuable advice received, such as a pre-approved loan before a cash shortfall or an option to remortgage to fund home improvements. This will help incumbents build customer advocacy and trust as new entrants vie for attention.

AI-powered onboarding, when combined with voice and facial recognition technologies, will help incumbents make themselves much easier to do business with, especially at the initial point of conversion but also thereafter at each moment of authentication. AI will offer particular support through Know Your Customer (KYC) processes, helping incumbents keep pace with new entrants. Standard Bank in South Africa, for example, used WorkFusion's AI capabilities to reduce the customer onboarding time from 20 days to just five minutes.

Banks' heavy compliance burden will continue to drive AI. Last year, large global banks such as OCBC Bank, Commonwealth Bank, Wells Fargo, and HSBC made big investments in areas such as automated data management, reporting, anti-money laundering (AML), compliance, automated regulation interpretation, and mapping. Increasingly, partnering with artificial intelligence-enabled regtech firms will help incumbents reduce operational risk and enhance reporting quality.

As artificial intelligence becomes more embedded into all areas of customers' lives, concerns around the "black box" driving decisions will grow, with more demands for explainable AI. As it is, customers with little or no digital footprint are less visible to applications that rely on data to profile people and assess risk. Traditional banks' credit-risk algorithms often disproportionately exclude black and Hispanic groups in the US, as well as women, because these groups have historically earned less over their lifetimes.

In 2020, senior management will be held directly accountable for the decisions of AI-enabled algorithms. This will drive increased focus on data quality to feed the algorithms, and perhaps limits on the use of the most dynamic machine learning techniques because of their regulatory opacity.

This is an edited extract from the Banking & Payments Predictions 2020 Thematic Research report produced by GlobalData Thematic Research.

GlobalData is this website's parent business intelligence company.


How Artificial Intelligence is helping the fight against COVID-19 – Health Europa

The artificial intelligence (AI) tool has been shown to accurately predict which patients newly infected with the COVID-19 virus will go on to develop severe respiratory disease.

The new coronavirus, named SARS-CoV-2, had infected 735,560 patients worldwide as of March 30. According to the World Health Organization, the illness has caused more than 34,830 deaths to date, more often among older patients with underlying health conditions.

The study, published in the journal Computers, Materials & Continua, was led by NYU Grossman School of Medicine and the Courant Institute of Mathematical Sciences at New York University, in partnership with Wenzhou Central Hospital and Cangnan People's Hospital, both in Wenzhou, China.

The study has revealed the best indicators of future severity and found that they were not as expected.

Corresponding author Megan Coffee, clinical assistant professor in the Division of Infectious Disease & Immunology at NYU Grossman School of Medicine, said: "While work remains to further validate our model, it holds promise as another tool to predict the patients most vulnerable to the virus, but only in support of physicians' hard-won clinical experience in treating viral infections."

"Our goal was to design and deploy a decision-support tool using AI capabilities, mostly predictive analytics, to flag future clinical coronavirus severity," says co-author Anasse Bari, PhD, a clinical assistant professor in computer science at the Courant Institute. "We hope that the tool, when fully developed, will be useful to physicians as they assess which moderately ill patients really need beds, and who can safely go home, with hospital resources stretched thin."

For the study, demographic, laboratory, and radiological findings were collected from 53 patients as each tested positive for COVID-19 in January 2020 at the two Chinese hospitals. In a minority of patients, severe symptoms, including pneumonia, developed within a week.

The researchers wanted to find out whether AI techniques could help to accurately predict which patients with the virus would go on to develop Acute Respiratory Distress Syndrome or ARDS, the fluid build-up in the lungs that can be fatal in the elderly.

To do this, they designed computer models that make decisions based on the data fed into them, with the programmes getting smarter the more data they consider. Specifically, the study used decision trees that track a series of decisions between options and model the potential consequences of choices at each step in a pathway.

The AI tool found that changes in three features (levels of the liver enzyme alanine aminotransferase (ALT), reported myalgia, and haemoglobin levels) were the most accurate predictors of subsequent severe disease. Together with other factors, the team reported being able to predict the risk of ARDS with up to 80% accuracy.
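The study's underlying data and model are not reproduced in this article; purely as an illustration of the decision-tree approach and the three features named above, a minimal sketch using scikit-learn on synthetic patient records might look like the following (all values and the outcome rule are hypothetical):

```python
# Illustrative sketch only: a decision-tree classifier of the kind described above,
# trained on hypothetical feature vectors (ALT level, myalgia flag, haemoglobin level).
# The study's real data and model are not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical cohort: columns are [ALT (U/L), reported myalgia (0/1), haemoglobin (g/dL)]
X = np.column_stack([
    rng.normal(35, 15, 200),   # ALT, mildly elevated in some patients
    rng.integers(0, 2, 200),   # reported myalgia
    rng.normal(14, 1.5, 200),  # haemoglobin
])
# Hypothetical outcome: 1 = progressed to severe disease, 0 = did not
y = ((X[:, 0] > 40) & (X[:, 1] == 1) | (X[:, 2] > 15.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```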

ALT levels, which rise dramatically as diseases like hepatitis damage the liver, were only slightly higher in patients with COVID-19, yet still featured prominently in predictions of severity. Deep muscle aches (myalgia) were also more common and have been linked by past research to higher general inflammation in the body.

Lastly, higher levels of haemoglobin, the iron-containing protein that enables blood cells to carry oxygen to bodily tissues, were also linked to later respiratory distress. Could this be explained by other factors, like unreported smoking of tobacco, which has long been linked to increased haemoglobin levels?

Of the 33 patients at Wenzhou Central Hospital interviewed on smoking status, the two who reported having smoked also said they had quit.

Limitations of the study, say the authors, included the relatively small data set and the limited clinical severity of disease in the population studied.

"I will be paying more attention in my clinical practice to our data points, watching patients more closely if they, for instance, complain of severe myalgia," adds Coffee. "It's exciting to be able to share data with the field in real time when it can be useful. In all past epidemics, journal papers were only published well after the infections had waned."

Read more here:
How Artificial Intelligence is helping the fight against COVID-19 - Health Europa

Spending in Artificial Intelligence to accelerate across the public sector due to automation and social distancing compliance needs in response to…

April 9, 2020 - LONDON, UK: Prior to the COVID-19 pandemic, the IDC (International Data Corporation) Worldwide Artificial Intelligence Spending Guide had forecast European artificial intelligence (AI) spending of $10 billion for 2020 and healthy growth at a 33% CAGR through 2023. With the COVID-19 outbreak, IDC expects a variety of changes in spending in 2020. AI solutions deployed in the cloud will see strong uptake, showing that companies are looking to deploy intelligence in the cloud to become more efficient and agile.
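For context, and assuming the compounding runs from the 2020 base through 2023, the quoted figures imply roughly $23.5 billion of European AI spending by 2023. A one-line check of that arithmetic (the base and growth rate simply restate IDC's published numbers):

```python
# Compound-growth check of the forecast quoted above: a $10bn 2020 base growing at a 33% CAGR.
# Assumes three years of compounding (2020 -> 2023); the exact base year of IDC's CAGR is not stated here.
base_2020_bn = 10.0
cagr = 0.33
implied_2023_bn = base_2020_bn * (1 + cagr) ** 3
print(f"Implied 2023 European AI spending: ${implied_2023_bn:.1f}bn")  # roughly $23.5bn
```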

"Following the COVID-19 outbreak, many industries such as transportation and personal and consumer services will be forced to revise their technology investments downwards," said Andrea Minonne, senior research analyst at IDC Customer Insights & Analysis. "On the other hand, AI is a technology that can play a significant role in helping businesses and societies deal with and solve large scale disruption caused by quarantines and lockdowns. Of all industries, the public sector will experience an acceleration of AI investments. Hospitals are looking at AI to speed up COVID-19 diagnosis and testing and to provide automated remote consultations to patients in self-isolation through chatbots. At the same time, governments will use AI to assess social distancing compliance"

In the IDC report What is the Impact of COVID-19 on the European IT Market? (IDC #EUR146175020, April 2020), we assessed the impact of COVID-19 across 181 European companies and found that, as of March 23, 16% of European companies believe automation through AI and other emerging technologies can help them minimize the impact of COVID-19. With large-scale lockdowns in place, worker shortages and supply chain disruptions will drive automation needs across manufacturing.

Applying intelligence to automate processes is a crucial response to the COVID-19 crisis. Automation not only allows European companies to digitally transform but also helps them make prompt, data-driven decisions and improve business efficiency. IDC expects a surge in adoption of automated COVID-19 diagnosis in healthcare, saving time for both doctors and patients. As the virus spreads quickly, labor shortages in industries where product demand is surging can become a critical problem. For that reason, companies are renovating their hiring processes, applying a mix of intelligent automation and virtualization. Companies will also aim to automate their supply chains to maintain agility and avoid production bottlenecks, especially in industries with vast supplier networks. With customer service centers severely restricted, automation will be a crucial part of remote customer engagement, and chatbots will help customers in self-isolation get the support they need without long waits.

"As a short-term response to the COVID-19 crisis, AI can play a crucial part in automating processes and limiting human involvement to a necessary minimum," said Petr Vojtisek, research analyst at IDC Customer Insights & Analysis. "In the longer term, we might observe an increase in AI adoption for companies that otherwise wouldn't consider it, both for competitive and practical reasons."

IDC's Worldwide Semiannual Artificial Intelligence Spending Guide provides guidance on the expected technology opportunity around the AI market across nine regions. Segmented by 32 countries, 19 industries, 27 use cases, and 6 technologies, the guide provides IT vendors with insight into this rapidly growing market and how the market will develop over the coming years.


Read the original:
Spending in Artificial Intelligence to accelerate across the public sector due to automation and social distancing compliance needs in response to...

CORRECTION – Labelbox Awarded Artificial Intelligence Contract by Department of Defense – Yahoo Finance

Leading provider of training data platforms for machine learning, Labelbox receives prestigious SBIR contract from AFWERX for U.S. Air Force

SAN FRANCISCO, April 09, 2020 (GLOBE NEWSWIRE) -- This release for Labelbox corrects and replaces the release issued today at 7:00 am ET with the headline "Labelbox Awarded Artificial Intelligence Grant by Department of Defense." The word "grant" has been replaced in the headline, subheadline, and release body with the word "contract." The corrected release follows.

Labelbox, the world's leading training data platform, is among an elite selection of artificial intelligence companies to receive a contract from the Department of Defense to support national security as the U.S. scrambles to stay ahead of its rivals.

While some in Silicon Valley balk at working with the government, Labelbox's founders are vocal about their belief that technology companies have a responsibility to help the U.S. maintain its technological advantage in the face of competition from nation states.

"I grew up in a poor family, with limited opportunities and little infrastructure," said Manu Sharma, CEO and one of Labelbox's co-founders, who was raised in a village in India near the Himalayas. He said that opportunities afforded by the U.S. have helped him achieve more success in ten years than multiple generations of his family back home. "We've made a principled decision to work with the government and support the American system," he said.

Labelbox is a software platform that allows data science teams to manage the data used to train supervised-learning models. Supervised learning is a branch of artificial intelligence that uses labeled data to train algorithms to recognize patterns in images, audio, video or text. After being fed millions of labeled pictures of mobile missile launchers from satellite imagery, for example, a supervised-learning system will learn to pick out missile launchers in pictures it has never seen.
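As a minimal sketch of that supervised-learning workflow (generic, and not Labelbox's platform, its API, or any defence use case), the example below trains a classifier on scikit-learn's bundled labeled digit images and then scores it on images it has never seen:

```python
# Minimal supervised-learning sketch: labeled examples in, a trained pattern recogniser out.
# Uses scikit-learn's bundled handwritten-digit images as a stand-in for labeled imagery;
# this illustrates the general workflow only, not Labelbox's platform or any DoD system.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # 8x8 grayscale images with human-assigned labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=2000)     # learns a mapping from pixels to labels
clf.fit(X_train, y_train)                   # "training data" = the labeled examples

print(f"accuracy on unseen images: {clf.score(X_test, y_test):.2f}")
```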

For data science teams to work better with each other and with labelers around the world, they need a platform and tools. Without those things, managing large sets of data quickly becomes overwhelming. Labelbox solves that problem by facilitating collaboration, rework, quality assurance, model evaluation, audit trails, and model-assisted labeling in one platform. The platform is tailored for computer vision systems but can handle all forms of data. The platform also helps with billing and time management.

"Labelbox is an integrated solution for data science teams to not only create the training data but also to manage it in one place," said Sharma. "It's the foundational infrastructure for customers to build their machine learning pipeline."

The company won an Air Force AFWERX Phase 1 Small Business Innovation Research (SBIR) contract to conduct feasibility studies on how to integrate the Labelbox platform with various stakeholders in the Air Force. Labelbox recently hired a representative in Washington, D.C., to manage the process.

The Small Business Innovation Research (SBIR) program is a highly competitive program that encourages domestic small businesses to engage in Federal Research and Development. The United States Department of Defense is the largest of 11 federal agencies participating in the program. Air Force Innovation Hub Network (AFWERX) is a United States Air Force program intended to engage innovators and entrepreneurs in developing effective solutions to challenges faced by the service.

About Labelbox
Founded in 2018 and based in San Francisco, Labelbox is a collaborative training data platform for machine learning applications. Instead of building their own expensive and incomplete homegrown tools, companies rely on Labelbox as the training data platform that acts as a central hub for data science teams to interface with dispersed labeling teams. Better ways to input and manage data translate into higher-quality training data and more accurate machine-learning models. Labelbox has raised $39 million in capital from leading VCs in Silicon Valley. For more information, visit: https://www.Labelbox.com/

Editorial Contact
Lonn Johnston for Labelbox
+1 650.219.7764
lonn@flak42.com

Original post:
CORRECTION - Labelbox Awarded Artificial Intelligence Contract by Department of Defense - Yahoo Finance

Neuromorphic Chips: The Third Wave of Artificial Intelligence – Analytics Insight

Traditional computing is approaching its limits. Without fundamental innovation it will be difficult to move past the current technology threshold, so major design transformations with improved performance are needed to change the way we view computers. Moore's law (formulated by Gordon Moore in 1965) states that the number of transistors in a dense integrated circuit doubles roughly every two years while their price halves, but the law is now losing its validity. Hardware and software experts have therefore converged on two candidate solutions: quantum computing and neuromorphic computing. While quantum computing has made major strides, neuromorphic computing remained largely in the lab until Intel announced its neuromorphic chip, Loihi. This may signal the third wave of artificial intelligence.

The first generation of AI was defined by rule-based systems that emulated classical logic to draw reasoned conclusions within specific, narrowly defined problem domains; it was well suited to monitoring processes and improving efficiency, for example. The second generation used deep learning networks to analyze content and data, and was largely concerned with sensing and perception. The third generation draws parallels to human thought processes, such as interpretation and autonomous adaptation. In short, it mimics the spiking of neurons in the human nervous system, relying on densely connected transistors that mimic the activity of ion channels. This allows chips to integrate memory, computation, and communication at higher speed and complexity, with better energy efficiency.
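To make the idea of spiking concrete, here is a generic leaky integrate-and-fire neuron in plain Python. This is a textbook software model, not anything Intel has published about its hardware, and all constants are illustrative:

```python
# Generic leaky integrate-and-fire (LIF) neuron: the membrane potential integrates input
# current, leaks back toward rest, and emits a spike when it crosses a threshold.
# Parameters are illustrative textbook values, not those of any neuromorphic chip.
dt, t_max = 0.1, 100.0                                       # time step and duration (ms)
tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0   # membrane constants (ms, mV)
input_current = 20.0                                         # constant injected current (arbitrary units)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    # Euler update: leak toward resting potential plus the injected current
    v += dt * (-(v - v_rest) + input_current) / tau
    if v >= v_thresh:            # threshold crossing -> record a spike, then reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {t_max:.0f} ms")
```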

Loihi is Intel's fifth-generation neuromorphic chip. Built on a 14-nanometer process with a 60 mm² die, it contains over 2 billion transistors, as well as three managing Lakemont cores for orchestration. It includes a programmable microcode engine for on-chip training of asynchronous spiking neural networks (SNNs). In total, it packs 128 neuromorphic cores. Each core has a built-in learning module, and the chip holds a total of around 131,000 computational neurons that communicate with one another, allowing it to understand stimuli. On March 16, Intel and Cornell University showcased a new system demonstrating the chip's ability to learn and recognize 10 hazardous materials by smell, even in the presence of data noise and occlusion. According to their joint paper in Nature Machine Intelligence, the system can be used to detect explosives, narcotics, polymers, and other harmful substances, as well as signs of smoke and carbon monoxide. It can purportedly do this faster and more accurately than sniffer dogs, potentially replacing them. The researchers achieved this by training the chip on a circuit diagram modeled on biological olfaction. The underlying dataset was created by circulating ten chemicals (including acetone, ammonia, and methane) through a wind tunnel and recording the signals from a set of 72 chemical sensors.
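The Loihi work itself ran spiking networks on-chip; purely to illustrate the shape of the data described above (vectors of 72 sensor readings mapped to one of ten chemicals), a conventional classifier on synthetic sensor profiles could be sketched as follows:

```python
# Illustration of the data layout described above: vectors of 72 chemical-sensor
# readings mapped to one of 10 odor classes. Synthetic data and a conventional
# classifier stand in for the real wind-tunnel recordings and the on-chip SNN.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
n_per_class, n_sensors, n_odors = 30, 72, 10

# Each odor gets a characteristic sensor-response profile, plus noise standing in for drift/occlusion.
profiles = rng.normal(0.0, 1.0, (n_odors, n_sensors))
X = np.vstack([profiles[k] + rng.normal(0.0, 0.5, (n_per_class, n_sensors))
               for k in range(n_odors)])
y = np.repeat(np.arange(n_odors), n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(f"odor classification accuracy on held-out samples: {clf.score(X_test, y_test):.2f}")
```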

This technology has many potential applications, such as identifying harmful substances at airports and detecting diseases or toxic fumes in the air. Notably, the chip constantly re-wires its internal network to allow different types of learning. Future versions could transform traditional computers into machines that learn from experience and make cognitive decisions, adaptive in the way human senses are. On top of that, it uses a fraction of the energy of current state-of-the-art systems, and it is predicted to displace graphics processing units (GPUs).

Although Loihi may soon become a household name, it is not the only effort of its kind. The neuromorphic approach is being investigated by IBM, HPE, MIT, Purdue, Stanford, and others. IBM is in the race with its TrueNorth chip, which has 4,096 cores, each with 256 neurons, and each neuron with 256 synapses for communicating with the others. Germany's Jülich Research Centre's Institute of Neuroscience and Medicine and the UK's Advanced Processor Technologies Group at the University of Manchester are working on a low-grade supercomputer called SpiNNaker, short for Spiking Neural Network Architecture. It is designed to simulate so-called cortical microcircuits, and hence the human brain cortex, to help us understand complex diseases such as Alzheimer's.

Who knows what computational trends we will see in the coming years? One thing is sure: the team at Analytics Insight will keep a close watch.

More here:
Neuromorphic Chips: The Third Wave of Artificial Intelligence - Analytics Insight