Fujitsu, AIST and RIKEN Achieve Unparalleled Speed on MLPerf HPC Machine Learning Processing Benchmark – HPCwire

TOKYO, Nov. 19, 2020. Fujitsu, the National Institute of Advanced Industrial Science and Technology (AIST), and RIKEN today announced a performance milestone in supercomputing, achieving the highest performance and claiming the top positions on the MLPerf HPC benchmark. The MLPerf HPC benchmark measures large-scale machine learning processing at a level requiring supercomputers. The parties achieved these outcomes leveraging approximately half of the AI-Bridging Cloud Infrastructure (ABCI) supercomputer system, operated by AIST, and about 1/10 of the resources of the supercomputer Fugaku, which is currently under joint development by RIKEN and Fujitsu.

Utilizing about half the computing resources of its system, ABCI achieved processing speeds 20 times faster than other GPU-type systems. That is the highest performance among supercomputers based on GPUs, computing devices specialized in deep learning. Similarly, about 1/10 of Fugaku was utilized to set a record for CPU-type supercomputers consisting of general-purpose computing devices only, achieving a processing speed 14 times faster than that of other CPU-type systems.

The results were presented as MLPerf HPC v0.7 on November 18th (November 19th Japan Time) at the 2020 International Conference for High Performance Computing, Networking, Storage, and Analysis (SC20) event, which is currently being held online.

Background

MLPerf HPC is a performance competition based on two benchmark programs: CosmoFlow, which predicts cosmological parameters, and DeepCAM, which identifies abnormal weather phenomena. ABCI ranked first among all registered systems in the CosmoFlow benchmark, using about half of the whole ABCI system, and Fugaku ranked second with a measurement using about 1/10 of the whole system. The ABCI system delivered 20 times the performance of the other GPU-type systems, while Fugaku delivered 14 times the performance of the other CPU-type systems. ABCI achieved first place among all registered systems in the DeepCAM benchmark program as well, also with about half of the system. In this way, ABCI and Fugaku overwhelmingly dominated the top positions, demonstrating the superior technological capabilities of Japanese supercomputers in the field of machine learning.

Fujitsu, AIST, RIKEN, and Fujitsu Laboratories Limited will release to the public the software stack, including the library and the AI framework that accelerate the large-scale machine learning processing developed for this measurement. This move will make it easier to use large-scale machine learning with supercomputers, while its use in analyzing simulation results is anticipated to contribute to the detection of abnormal weather phenomena and to new discoveries in astrophysics. As a core platform for building Society 5.0, it will also contribute to solving social and scientific issues, as it is expected to expand to applications such as the creation of general-purpose language models that require enormous computational performance.

About MLPerf HPC

MLPerf is a machine learning benchmark community established in May 2018 for the purpose of creating a performance list of systems running machine learning applications. MLPerf developed MLPerf HPC as a new machine learning benchmark to evaluate the performance of machine learning calculations using supercomputers. It is used for supercomputers around the world and is expected to become a new industry standard. MLPerf HPC v0.7 evaluated performance on two real applications, CosmoFlow and DeepCAM, to measure large-scale machine learning performance requiring the use of a supercomputer.

All measurement data are available on the following website: https://mlperf.org/

Comments from the Partners

Fujitsu, Executive Director, Naoki Shinjo: The successful construction and optimization of the software stack for large-scale deep learning processing, executed in close collaboration with AIST, RIKEN, and many other stakeholders, made this achievement a reality, helping us to claim the top position in the MLPerf HPC benchmark, an important milestone for the HPC community. I would like to express my heartfelt gratitude to all concerned for their great cooperation and support. We are confident that these results will pave the way for the use of supercomputers for increasingly large-scale machine learning processing tasks and contribute to many research and development projects in the future, and we are proud that Japan's research and development capabilities will help lead global efforts in this field.

Hirotaka Ogawa, Principal Research Manager, Artificial Intelligence Research Center, AIST: ABCI was launched on August 1, 2018 as an open, advanced, and high-performance computing infrastructure for the development of artificial intelligence technologies in Japan. Since then, it has been used in industry-academia-government collaboration and by a diverse range of businesses, to accelerate R&D and verification of AI technologies that utilize high computing power, and to advance social utilization of AI technologies. The overwhelming results of MLPerf HPC, the benchmark for large-scale machine learning processing, showed the world the high level of technological capabilities of Japan's industry-academia-government collaboration. AIST's Artificial Intelligence Research Center is promoting the construction of large-scale machine learning models with high versatility and the development of its application technologies, with the aim of realizing easily constructable AI. We expect that these results will be utilized in such technological development.

Satoshi Matsuoka, Director General, RIKEN Center for Computational Science: In this memorable first MLPerf HPC, Fugaku, Japan's top CPU supercomputer, along with AIST's ABCI, Japan's top GPU supercomputer, exhibited extraordinary performance and results, serving as a testament to Japan's ability to compete at an exceptional level on the global stage in the area of AI research and development. I only regret that we could not achieve the same overwhelming performance as we did for HPL-AI while remaining compliant with the inaugural regulations of the MLPerf HPC benchmark. In the future, as we continue to further improve the performance of Fugaku, we will make ongoing efforts to take advantage of Fugaku's super large-scale environment in the area of high-performance deep learning in cooperation with various stakeholders.

About Fujitsu

Fujitsu is a leading Japanese information and communication technology (ICT) company offering a full range of technology products, solutions and services. Approximately 130,000 Fujitsu people support customers in more than 100 countries. We use our experience and the power of ICT to shape the future of society with our customers. Fujitsu Limited (TSE:6702) reported consolidated revenues of 3.9 trillion yen (US$35 billion) for the fiscal year ended March 31, 2020. For more information, please see http://www.fujitsu.com.

About National Institute of Advanced Industrial Science & Technology (AIST)

AIST, established in 1882, is the largest public research institute in Japan. The research fields of AIST cover all industrial sciences, e.g., electronics, materials science, life science, metrology, etc. Our missions are bridging the gap between basic science and industrialization and solving social problems facing the world. We maintain several open innovation platforms to contribute to these missions, where researchers from companies, university professors, and graduate students, as well as AIST researchers, come together to achieve our missions. The most recently established open innovation platform is the Global Zero Emission Research Center, which contributes to achieving a zero-emission society in collaboration with foreign researchers. https://www.aist.go.jp/index_en.html

About RIKEN Center for Computational Science

RIKEN is Japan's largest comprehensive research institution, renowned for high-quality research in a diverse range of scientific disciplines. Founded in 1917 as a private research foundation in Tokyo, RIKEN has grown rapidly in size and scope, today encompassing a network of world-class research centers and institutes across Japan, including the RIKEN Center for Computational Science (R-CCS), the home of the supercomputer Fugaku. As the leadership center of high-performance computing, the R-CCS explores the science of computing, by computing, and for computing. The outcomes of this exploration, such as open-source software technologies, are its core competence. The R-CCS strives to enhance this core competence and to promote these technologies throughout the world.

Source: Fujitsu


SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD – Sports Video Group

This fall SVG will be presenting a series of White Papers covering the latest advancements and trends in sports-production technology. The full series of SVG's Tech Insight White Papers can be found in the SVG Fall SportsTech Journal.

Following the height of the 2020 global pandemic, live sports are starting to re-emerge worldwide albeit predominantly behind closed doors. For the majority of sports fans, video is the only way they can watch and engage with their favorite teams or players. This means the quality of the viewing experience itself has become even more critical.

With UHD being adopted by both households and broadcasters around the world, there is a marked expectation around visual quality. To realize these expectations in the immediate term, it will be necessary for some years to up-convert from HD to UHD when creating 4K UHD sports channels and content.

This is not so different from the early days of HD, when SD sports-related content had to be up-converted to HD. In the intervening years, however, machine learning as a technology has progressed sufficiently to be a serious contender for performing better up-conversions than more conventional techniques specifically designed for TV content.

Ideally, we want to process HD content into UHD with a simple black box arrangement.

The problem with conventional up-conversion, though, is that it does not offer an improved resolution, so does not fully meet the expectations of the viewer at home watching on a UHD TV. The question, therefore, becomes: can we do better for the sports fan? If so, how?

UHD is a progressive scan format, with the native TV formats being 3840×2160, known as 2160p59.94 (usually abbreviated to 2160p60) or 2160p50. The corresponding HD formats, with the frame/field rates set by region, are either progressive 1280×720 (720p60 or 720p50) or interlaced 1920×1080 (1080i30 or 1080i25).

Conversion from HD to UHD for progressive images at the same rate is fairly simple. It can be achieved using spatial processing only. Traditionally, this might use a bi-cubic interpolation filter (a two-dimensional interpolation commonly used for photographic image scaling). This uses a grid of 4×4 source pixels and interpolates intermediate locations in the center of the grid. The conversion from 1280×720 to 3840×2160 requires a 3× scaling factor in each dimension and is almost the ideal case for an upsampling filter.
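As a rough illustration, this spatial-only path can be sketched with an off-the-shelf cubic interpolator. This is a sketch using SciPy's `ndimage.zoom`, not the broadcast-grade filter described in the text, and the tiny 8×8 array stands in for a real 720×1280 luma plane:

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_bicubic(hd_frame: np.ndarray, factor: int = 3) -> np.ndarray:
    """Spatially upscale a single-channel frame with cubic interpolation.

    720p -> 2160p is a clean 3x factor in each dimension, so a separable
    cubic (order=3) filter interpolates the intermediate pixel locations.
    """
    return zoom(hd_frame, factor, order=3)

# A tiny stand-in for a 720p luma plane (a real frame would be 720x1280).
frame = np.linspace(0.0, 1.0, 64).reshape(8, 8)
uhd = upscale_bicubic(frame, 3)
print(uhd.shape)  # -> (24, 24)
```

Because the filter only interpolates between existing samples, the output is smooth but contains no detail beyond what the HD source already carried.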

These types of filters can only interpolate, producing a better result than nearest-neighbor or bi-linear interpolation, but one that does not have the appearance of higher resolution.

Machine Learning (ML) is a technique whereby a neural network learns patterns from a set of training data. Images are large, and it becomes unfeasible to create neural networks that process this data as a complete set. So, a different structure is used for image processing, known as Convolutional Neural Networks (CNNs). CNNs are structured to extract features from the images by successively processing subsets of the source image, and then process the features rather than the raw pixels.

Up-conversion process with neural network processing

The inbuilt non-linearity, in combination with feature-based processing, means CNNs can invent data not in the original image. In the case of up-conversion, we are interested in the ability to create plausible new content that was not present in the original image, but that doesn't modify the nature of the image too much. The CNN used to create the UHD data from the HD source is known as the Generator CNN.

When input source data needs to be propagated through the whole chain, possibly with scaling involved, then a specific variant of a CNN known as a Residual Network (ResNet) is used. A ResNet has a number of stages, each of which includes a contribution from a bypass path that carries the input data. For this study, a ResNet with scaling stages towards the end of the chain was used as the Generator CNN.
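The bypass idea behind a ResNet stage can be sketched in a few lines. This is a minimal illustration with a fixed 3×3 averaging kernel standing in for a learned filter; a real ResNet stage would use learned, multi-channel convolutions:

```python
import numpy as np
from scipy.ndimage import convolve

def residual_block(x: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """One ResNet stage: a transform of the input plus a bypass of the input.

    The bypass path means the stage only has to learn a correction (the
    'residual'), and it keeps the source data propagating through the
    whole chain even when many stages are stacked.
    """
    feature = np.maximum(convolve(x, kernel, mode="nearest"), 0.0)  # conv + ReLU
    return x + feature  # bypass contribution added back

x = np.random.default_rng(0).random((8, 8))
kernel = np.full((3, 3), 1.0 / 9.0)  # stand-in for a learned 3x3 filter
y = residual_block(x, kernel)
```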

For the Generator CNN to do its job, it must be trained with a set of known data patches of reference images and a comparison is made between the output and the original. For training, the originals are a set of high-resolution UHD images, down-sampled to produce HD source images, then up-converted and finally compared to the originals.

The difference between the original and synthesized UHD images is calculated by the compare function with the error signal fed back to the Generator CNN. Progressively, the Generator CNN learns to create an image with features more similar to original UHD images.
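The training pipeline described above (downsample the UHD originals, up-convert, compare) can be sketched in NumPy. The nearest-neighbor upsampler here is only a stand-in for the untrained Generator CNN, and the 12×12 patch stands in for a real UHD training patch:

```python
import numpy as np

def downsample(uhd: np.ndarray, f: int = 3) -> np.ndarray:
    """Block-average an original UHD patch down to an HD training source."""
    h, w = uhd.shape
    return uhd.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def upsample_nearest(hd: np.ndarray, f: int = 3) -> np.ndarray:
    """Stand-in for the Generator CNN's output early in training."""
    return np.repeat(np.repeat(hd, f, axis=0), f, axis=1)

def compare(original: np.ndarray, synthesized: np.ndarray) -> float:
    """Pixel-wise error signal fed back to the generator (mean abs diff)."""
    return float(np.abs(original - synthesized).mean())

uhd_patch = np.random.default_rng(1).random((12, 12))  # original UHD patch
hd_source = downsample(uhd_patch)                      # derived HD source
error = compare(uhd_patch, upsample_nearest(hd_source))
```

Training drives this error signal down, so the generator progressively produces patches whose features look more like the UHD originals.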

The training process is dependent on the data set used for training, and the neural network tries to fit the characteristics seen during training onto the current image. This is intriguingly illustrated in Google's AI Blog [1], where a neural network presented with a random noise pattern introduces shapes like the ones used during training. It is important that a diverse, representative content set is used for training. Patches from about 800 different images were used for training during MediaKind's research.

The compare function affects the way the Generator CNN learns to process the HD source data. It is easy to calculate a sum of absolute differences between original and synthesized images, but this causes an issue due to training set imbalance: real pictures have large proportions with relatively little fine detail, so the data set is biased towards regenerating a result very similar to that of a bicubic interpolation filter.

This doesn't really achieve the objective of creating plausible fine detail.

Generative Adversarial Networks (GANs) are a relatively new concept [2], in which a second neural network, known as the Discriminator CNN, is used and is itself trained during the training process of the Generator CNN. The Discriminator CNN learns to detect the difference between features that are characteristic of original UHD images and synthesized UHD images. During training, the Discriminator CNN sees either an original UHD image or a synthesized UHD image, with the detection correctness fed back to the discriminator and, if the image was a synthesized one, also fed back to the Generator CNN.

Each CNN is attempting to beat the other: the Generator by creating images that have characteristics more like originals, while the Discriminator becomes better at detecting synthesized images.
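The two competing objectives can be written as standard binary cross-entropy terms on the discriminator's outputs. The probabilities below are illustrative numbers, not measured values from the study:

```python
import numpy as np

def discriminator_loss(p_real: float, p_fake: float) -> float:
    """Discriminator objective: push p_real -> 1 on originals and
    p_fake -> 0 on synthesized images (binary cross-entropy on both)."""
    return -0.5 * (np.log(p_real) + np.log(1.0 - p_fake))

def generator_loss(p_fake: float) -> float:
    """Generator objective: fool the discriminator (push p_fake -> 1),
    so this loss falls as synthesized images look more like originals."""
    return -np.log(p_fake)

# Early in training the discriminator easily spots synthesized images...
early = generator_loss(p_fake=0.05)
# ...and as the generator improves, its loss drops.
late = generator_loss(p_fake=0.6)
```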

The result is the synthesis of feature details that are characteristic of original UHD images.

With a GAN approach, there is no real constraint to the ability of the Generator CNN to create new detail everywhere. This means the Generator CNN can create images that diverge from the original image in more general ways. A combination of both compare functions can offer a better balance, retaining the detail regeneration, but also limiting divergence. This produces results that are subjectively better than conventional up-conversion.

Conversion from 1080i60 to 2160p60 is necessarily more complex than from 720p60. Starting from 1080i, there are three basic approaches to up-conversion: de-interlacing to progressive frames and then up-converting spatially; up-converting each field independently; or feeding data from multiple fields directly into the network so that it learns the de-interlacing itself.

Training data is required here, which must come from 2160p video sequences. These are downsampled to create a set of fields, with each field coming from one frame in the original 2160p sequence, so the fields are not temporally co-located.

Surprisingly, results from field-based up-conversion tended to be better than using de-interlaced frame conversion, despite the use of sophisticated motion-compensated de-interlacing: the frame-based conversion was dominated by artifacts from the de-interlacing process. However, potentially useful data from the opposite fields did not contribute to the result, so the field-based approach missed data that could produce a better result.

A solution to this is to use multiple fields data as the source data directly into a modified Generator CNN, letting the GAN learn how best to perform the deinterlacing function. This approach was adopted and re-trained with a new set of video-based data, where adjacent fields were also provided.

This led to both high visual spatial resolution and good temporal stability. These are, of course, best viewed as a video sequence; however, an example of one frame from a test sequence shows the comparison:

Comparison of a sample frame from different up-conversion techniques against original UHD

Up-conversion using a hybrid GAN with multiple fields was effective across a range of content, but is especially relevant for the visual sports experience to the consumer. This offers a realistic means by which content that has more of the appearance of UHD can be created from both progressive and interlaced HD source, which in turn can enable an improved experience for the fan at home when watching a sports UHD channel.

[1] A. Mordvintsev, C. Olah and M. Tyka, "Inceptionism: Going Deeper into Neural Networks," 2015. [Online]. Available: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

[2] I. Goodfellow et al., "Generative Adversarial Nets," Neural Information Processing Systems Proceedings, vol. 27, 2014.


Machine learning removes bias from algorithms and the hiring process – PRNewswire

Arena Analytics' Chief Data Scientist unveils a cutting-edge technique that removes latent bias from algorithmic models.

Currently, the primary methods of reducing the impact of bias on models have been limited to adjusting input data or adjusting models after the fact to ensure there is no disparate impact.

Recent reporting from the Wall Street Journal confirmed these as the most recent advances, concluding, "It's really up to the software engineers and leaders of the company to figure out how to fix it [or] go into the algorithm and tweak some of the main factors it considers in making its decisions."

For several years, Arena Analytics was also limited to these approaches, but that all changed 9 months ago. Up until then, Arena removed all data from the models that could correlate to protected classifications and then measured demographic parity.

"These efforts brought us in line with EEOC compliance thresholds - also known as the or 80% rule," explains Myra Norton, President/COO of Arena. "But we've always wanted to go further than a compliance threshold.We've wanted to surface a MORE diverse slate of candidates for every role in a client organization.And that's exactly what we've accomplished, now surpassing 95% in our representation of different classifications."

Chief Data Scientist Patrick Hagerty will explain at MLConf the way he and his team have leveraged techniques known as adversarial networks, an aspect of Generative Adversarial Networks (GANs), tools that pit one algorithm against another.

"Arena's primary model predicts the outcomes our clients want, and Model Two is a Discriminator designed to predict a classification," says Hagerty. "The Discriminator attempts to detect the race, gender, background, and any other protected class data of a person. This causes the Predictor to adjust and optimize while eliminating correlations with the classifications the Discriminator is detecting."

Arena trained models to do this until achieving what's known as the Nash Equilibrium: the point at which neither the predictor nor the discriminator can improve further against the other.
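Arena's actual models are not public, but the shape of the objective described above (minimize the prediction loss while penalizing how well an adversary can recover a protected attribute from the predictor's scores) can be sketched with a simple correlation proxy standing in for the Discriminator. All names and numbers here are illustrative:

```python
import numpy as np

def adversary_signal(scores: np.ndarray, protected: np.ndarray) -> float:
    """Proxy for the Discriminator: how recoverable the protected class
    is from the Predictor's scores (absolute Pearson correlation)."""
    return float(abs(np.corrcoef(scores, protected)[0, 1]))

def combined_objective(task_loss: float, scores, protected, lam=1.0) -> float:
    """Predictor objective: task loss plus a penalty that grows when the
    adversary can detect the protected class in the scores."""
    return task_loss + lam * adversary_signal(scores, protected)

protected = np.array([0, 0, 1, 1, 0, 1, 0, 1])
correlated = 0.1 + 0.8 * protected.astype(float)  # scores that leak the class
decorrelated = np.array([0.4, 0.6, 0.4, 0.6, 0.5, 0.5, 0.5, 0.5])  # equal group means
```

Training toward the equilibrium drives the adversary signal toward zero, so the surviving predictive signal (e.g., commuting distance stripped of its protected-class correlate) can be used safely.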

Arena's technology has helped industrious individuals find a variety of jobs - from RNs to medtechs, caregivers to cooks, concierge to security. Job candidates who Arena predicted for success include veterans with no prior experience in healthcare or senior/assisted living, recent high school graduates whose plans to work while attending college were up-ended, and former hospitality sector employees who decided to apply their dining service expertise to a new setting.

"We succeeded in our intent to reduce bias and diversify the workforce, but what surprised us was the impact this approach had on our core predictions. Data once considered unusable, such as commuting distance, we can now analyze because we've removed the potentially-associated protected-class-signal," says Michael Rosenbaum, Arena's founder and CEO. "As a result, our predictions are stronger AND we surface a more diverse slate of candidates across multiple spectrums. Our clients can now use their talent acquisition function to really support and lead out front on Diversity and Inclusion."

About Arena: Arena (https://www.arena.io/) applies predictive analytics and machine learning to solve talent acquisition challenges. Learning algorithms analyze a large amount of data to predict with high levels of accuracy the likelihood of different outcomes occurring, such as someone leaving, being engaged, having excellent attendance, and more. By revealing each individual's likely outcomes in specific positions, departments, and locations, Arena is transforming the labor market from one based on perception and unconscious bias to one based on outcomes. Arena is currently growing dramatically within the healthcare and hospitality industries and expanding its offerings to other people-intensive industries. For more information contact [emailprotected]arena.io

SOURCE Arena



Using machine learning to track the pandemic’s impact on mental health – MIT News

Dealing with a global pandemic has taken a toll on the mental health of millions of people. A team of MIT and Harvard University researchers has shown that they can measure those effects by analyzing the language that people use to express their anxiety online.

Using machine learning to analyze the text of more than 800,000 Reddit posts, the researchers were able to identify changes in the tone and content of language that people used as the first wave of the Covid-19 pandemic progressed, from January to April of 2020. Their analysis revealed several key changes in conversations about mental health, including an overall increase in discussion about anxiety and suicide.

"We found that there were these natural clusters that emerged related to suicidality and loneliness, and the amount of posts in these clusters more than doubled during the pandemic as compared to the same months of the preceding year, which is a grave concern," says Daniel Low, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT and the lead author of the study.

The analysis also revealed varying impacts on people who already suffer from different types of mental illness. The findings could help psychiatrists, or potentially moderators of the Reddit forums that were studied, to better identify and help people whose mental health is suffering, the researchers say.

"When the mental health needs of so many in our society are inadequately met, even at baseline, we wanted to bring attention to the ways that many people are suffering during this time, in order to amplify and inform the allocation of resources to support them," says Laurie Rumker, a graduate student in the Bioinformatics and Integrative Genomics PhD Program at Harvard and one of the authors of the study.

Satrajit Ghosh, a principal research scientist at MIT's McGovern Institute for Brain Research, is the senior author of the study, which appears in the Journal of Medical Internet Research. Other authors of the paper include Tanya Talkar, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT; John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center; and Guillermo Cecchi, a principal research staff member at the IBM Thomas J. Watson Research Center.

A wave of anxiety

The new study grew out of the MIT class 6.897/HST.956 (Machine Learning for Healthcare), in MIT's Department of Electrical Engineering and Computer Science. Low, Rumker, and Talkar, who were all taking the course last spring, had done some previous research on using machine learning to detect mental health disorders based on how people speak and what they say. After the Covid-19 pandemic began, they decided to focus their class project on analyzing Reddit forums devoted to different types of mental illness.

"When Covid hit, we were all curious whether it was affecting certain communities more than others," Low says. "Reddit gives us the opportunity to look at all these subreddits that are specialized support groups. It's a really unique opportunity to see how these different communities were affected differently as the wave was happening, in real-time."

The researchers analyzed posts from 15 subreddit groups devoted to a variety of mental illnesses, including schizophrenia, depression, and bipolar disorder. They also included a handful of groups devoted to topics not specifically related to mental health, such as personal finance, fitness, and parenting.

Using several types of natural language processing algorithms, the researchers measured the frequency of words associated with topics such as anxiety, death, isolation, and substance abuse, and grouped posts together based on similarities in the language used. These approaches allowed the researchers to identify similarities between each group's posts after the onset of the pandemic, as well as distinctive differences between groups.
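The frequency-measurement step can be sketched with the standard library alone. The term list and toy posts below are illustrative, not the study's actual lexicon or data:

```python
from collections import Counter
import re

# Hypothetical topic lexicon, standing in for the study's anxiety-related terms.
ANXIETY_TERMS = {"anxious", "anxiety", "worried", "panic", "isolation"}

posts = [
    "feeling anxious and worried about everything lately",
    "the isolation is making my anxiety so much worse",
    "finally paid off my credit card this month",
]

def term_frequency(post: str, terms: set) -> int:
    """Count topic-associated words in one post (case-insensitive tokens)."""
    tokens = re.findall(r"[a-z']+", post.lower())
    return sum(1 for t in tokens if t in terms)

# Per-post counts, plus an overall tally across the corpus.
counts = [term_frequency(p, ANXIETY_TERMS) for p in posts]
overall = Counter(t for p in posts for t in re.findall(r"[a-z']+", p.lower())
                  if t in ANXIETY_TERMS)
```

Tracking these counts per subreddit and per month is what lets changes (such as a rise in anxiety-related language from January to April) be quantified.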

The researchers found that while people in most of the support groups began posting about Covid-19 in March, the group devoted to health anxiety started much earlier, in January. However, as the pandemic progressed, the other mental health groups began to closely resemble the health anxiety group, in terms of the language that was most often used. At the same time, the group devoted to personal finance showed the most negative semantic change from January to April 2020, and significantly increased the use of words related to economic stress and negative sentiment.

They also discovered that the mental health groups affected the most negatively early in the pandemic were those related to ADHD and eating disorders. The researchers hypothesize that without their usual social support systems in place, due to lockdowns, people suffering from those disorders found it much more difficult to manage their conditions. In those groups, the researchers found posts about hyperfocusing on the news and relapsing back into anorexia-type behaviors since meals were not being monitored by others due to quarantine.

Using another algorithm, the researchers grouped posts into clusters such as loneliness or substance use, and then tracked how those groups changed as the pandemic progressed. Posts related to suicide more than doubled from pre-pandemic levels, and the groups that became significantly associated with the suicidality cluster during the pandemic were the support groups for borderline personality disorder and post-traumatic stress disorder.

The researchers also found the introduction of new topics specifically seeking mental health help or social interaction. "The topics within these subreddit support groups were shifting a bit, as people were trying to adapt to a new life and focus on how they can go about getting more help if needed," Talkar says.

While the authors emphasize that they cannot implicate the pandemic as the sole cause of the observed linguistic changes, they note that there was much more significant change during the period from January to April in 2020 than in the same months in 2019 and 2018, indicating the changes cannot be explained by normal annual trends.

Mental health resources

This type of analysis could help mental health care providers identify segments of the population that are most vulnerable to declines in mental health caused by not only the Covid-19 pandemic but other mental health stressors such as controversial elections or natural disasters, the researchers say.

Additionally, if applied to Reddit or other social media posts in real-time, this analysis could be used to offer users additional resources, such as guidance to a different support group, information on how to find mental health treatment, or the number for a suicide hotline.

"Reddit is a very valuable source of support for a lot of people who are suffering from mental health challenges, many of whom may not have formal access to other kinds of mental health support, so there are implications of this work for ways that support within Reddit could be provided," Rumker says.

The researchers now plan to apply this approach to study whether posts on Reddit and other social media sites can be used to detect mental health disorders. One current project involves screening posts in a social media site for veterans for suicide risk and post-traumatic stress disorder.

The research was funded by the National Institutes of Health and the McGovern Institute.


The consistency of machine learning and statistical models in predicting clinical risks of individual patients – The BMJ

Now, imagine a machine learning system with an understanding of every detail of that person's entire clinical history and the trajectory of their disease. With the clinician's push of a button, such a system would be able to provide patient-specific predictions of expected outcomes if no treatment is provided, to support the clinician and patient in making what may be life-or-death decisions. [1] This would be a major achievement. The English NHS is currently investing £250 million in Artificial Intelligence (AI). Part of this AI work could help to identify patients most at risk of diseases such as heart disease or dementia, allowing for earlier diagnosis and cheaper, more focused, personalised prevention. [2] Multiple papers have suggested that machine learning outperforms statistical models in tasks including cardiovascular disease risk prediction. [3-6] We tested whether this is true, using prediction of cardiovascular disease as an exemplar.

Risk prediction models have been implemented worldwide in clinical practice to help clinicians make treatment decisions. As an example, guidelines by the UK National Institute for Health and Care Excellence recommend that statins be considered for patients with a predicted 10-year cardiovascular disease risk of 10% or more. [7] This is based on the estimation of QRISK, which was derived using a statistical model. [8] Our research evaluated whether the predictions of cardiovascular disease risk for an individual patient would be similar if another model, such as a machine learning model, were used, as different predictions could lead to different treatment decisions for a patient.

An electronic health record dataset was used for this study, with similar risk factor information used across all models. Nineteen different prediction techniques were applied, including 12 families of machine learning models (such as neural networks) and seven statistical models (such as Cox proportional hazards models). The various models had similar population-level performance (C-statistics of about 0.87 and similar calibration). However, the predictions of individual CVD risk varied widely between and within different types of machine learning and statistical models, especially in patients with higher CVD risks. Most of the machine learning models tested in this study do not take censoring (i.e., loss to follow-up over the 10 years) into account by default. This resulted in these models substantially underestimating cardiovascular disease risk.
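To see why ignoring censoring biases risk downward, consider a small simulation (illustrative only, not the BMJ study's code): a naive classifier-style estimate treats every patient lost to follow-up as event-free for the full 10 years, while a Kaplan-Meier estimate removes censored patients from the risk set at the time they are lost.

```python
import random

random.seed(0)

# Simulate 10-year follow-up: each patient has a true event time and an
# independent censoring (loss-to-follow-up) time.
n = 20000
patients = []
for _ in range(n):
    event_time = random.expovariate(1 / 40)    # true time to CVD event
    censor_time = random.expovariate(1 / 15)   # time lost to follow-up
    t = min(event_time, censor_time, 10.0)
    observed_event = event_time <= min(censor_time, 10.0)
    patients.append((t, observed_event))

# Naive "classifier" estimate: treat every censored patient as event-free,
# as a standard ML classifier trained on a 10-year event label would.
naive_risk = sum(e for _, e in patients) / n

# Kaplan-Meier estimate: censored patients simply leave the risk set
# instead of being counted as event-free for the full 10 years.
patients.sort()
at_risk, surv = n, 1.0
for t, event in patients:
    if event:
        surv *= (at_risk - 1) / at_risk
    at_risk -= 1
km_risk = 1.0 - surv

print(f"naive 10-year risk estimate: {naive_risk:.3f}")
print(f"Kaplan-Meier estimate:       {km_risk:.3f}")
```

With heavy loss to follow-up, the naive estimate sits well below the censoring-aware one, which is the underestimation pattern the study describes.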

The level of consistency within and between models should be assessed before they are used for treatment decision making, as an arbitrary choice of technique and model could lead to a different treatment decision.

So, can a push of a button provide patient-specific risk prediction estimates by machine learning? Yes, it can. But should we use such estimates for patient-specific treatment decision making if these predictions are model-dependent? Machine learning may be helpful in some areas of healthcare, such as image recognition, and could be as useful as statistical models on population-level prediction tasks. But in terms of predicting risk for individual decision making, we think a lot more work needs to be done. Perhaps the claim that machine learning will revolutionise healthcare is a little premature.

Yan Li, doctoral student of statistical epidemiology, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Matthew Sperrin, senior lecturer in health data science, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Darren M Ashcroft, professor of pharmacoepidemiology, Centre for Pharmacoepidemiology and Drug Safety, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester.

Tjeerd Pieter van Staa, professor in health e-research, Health e-Research Centre, Health Data Research UK North, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester.

Competing interests: None declared.

References:

Visit link:
The consistency of machine learning and statistical models in predicting clinical risks of individual patients - The BMJ

Free Webinar | Machine Learning and Data Analytics in the Pandemic Era – MIT Sloan


Read this article:
Free Webinar | Machine Learning and Data Analytics in the Pandemic Era - MIT Sloan

Google Introduces New Analytics with Machine Learning and Predictive Models – IBL News

IBL News | New York

Google announced the introduction of its new Google Analytics, with machine learning at its core and a privacy-centric design. It is built on the foundation of the App + Web property presented last year.

The goal of the search giant is to help users get better ROI and improve their marketing decisions. This follows a survey from Forrester Consulting which found that improving the use of analytics is a top priority for marketers.

The machine learning models will alert users to trends in the data, such as products seeing rising demand, and help anticipate future actions from customers. "For example, it calculates churn probability so you can more efficiently invest in retaining customers at a time when marketing budgets are under pressure," says Vidhya Srinivasan, Vice President, Measurement, Analytics, and Buying Platforms at Google, in a blog post.
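Google has not published the model behind this churn metric, but the idea can be sketched as a probabilistic classifier over engagement signals. The sketch below is purely hypothetical: the feature names, weights, and churn definition are invented for illustration and are not Google's.

```python
import math

# Hypothetical engagement features per user. The weights and bias are
# invented for illustration; a real system would learn them from data.
WEIGHTS = {
    "days_since_last_session": 0.12,   # longer gaps -> higher churn risk
    "sessions_28d": -0.30,             # frequent visits -> lower churn risk
    "purchases_90d": -0.80,            # recent purchases -> lower churn risk
}
BIAS = -1.0

def churn_probability(user):
    """Logistic model: probability the user goes inactive in the near term."""
    z = BIAS + sum(WEIGHTS[k] * user[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

engaged = {"days_since_last_session": 1, "sessions_28d": 12, "purchases_90d": 2}
lapsing = {"days_since_last_session": 21, "sessions_28d": 1, "purchases_90d": 0}

print(f"engaged user churn probability: {churn_probability(engaged):.2f}")
print(f"lapsing user churn probability: {churn_probability(lapsing):.2f}")
```

Scores like these can then be used to build audiences of at-risk customers, which is the retention use case the blog post describes.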

It also adds new predictive metrics indicating the potential revenue that can be earned from a particular group of customers. "This allows you to create audiences to reach higher-value customers and run analyses to better understand why some customers are likely to spend more than others, so you can take action to improve your results," wrote Vidhya Srinivasan.

The new Google Analytics provides customer-centric measurement, including conversions from YouTube video views, Google and non-Google paid channels, search, social, and email. The setup works with or without cookies or identifiers.

The new features come by default for new web properties. To replace an existing setup, Google encourages users to create a new Google Analytics 4 property (previously called an App + Web property). Enterprise marketers are currently using a beta of an Analytics 360 version with SLAs and advanced integrations with tools like BigQuery.

Original post:
Google Introduces New Analytics with Machine Learning and Predictive Models - IBL News

PathAI and Gilead Report Data from Machine Learning Model Predictions of Liver Disease Progression and Treatment Response at AASLD’s The Liver Meeting…

BOSTON (PRWEB) November 06, 2020

PathAI, a global provider of AI-powered technology applied to pathology research, today announced the results of a research collaboration with Gilead that retrospectively analyzed liver biopsies from participants in clinical trials evaluating treatments for NASH or CHB (1). Using digitized hematoxylin and eosin (H&E)-, picrosirius red-, and trichrome-stained biopsy slides, PathAI's machine learning (ML) models were able to accurately predict changes in features traditionally used as markers for liver disease progression in clinical practice and clinical trials, including fibrosis, steatosis, hepatocellular ballooning, and inflammation. The new results will be presented in an oral presentation and four poster sessions at The Liver Meeting Digital Experience (TLMdX), which will be held from November 13-16, 2020.

The data build upon PathAI's previous success in retrospectively staging liver biopsies from clinical trials by showing that ML models may uncover patterns of histological features that correlate with disease progression or treatment response. Furthermore, ML models were able to estimate the hepatic venous pressure gradient (HVPG) in study subjects with NASH-related cirrhosis and quantify fibrosis heterogeneity from digitized slides, measures that are not reliably captured by traditional pathology methods. After appropriate clinical validation, these new tools could be useful in staging disease more accurately than can be done with current approaches.

"We continue to use machine learning to advance our understanding of liver diseases, including NASH and hepatitis B, as a foundation for developing new methods to track disease progression and assess response to therapeutics," said PathAI co-founder and Chief Executive Officer Andy Beck, MD, PhD. "Our long-standing partnership with Gilead continues to demonstrate the power of AI-based pathology to support development efforts to bring new therapies to patients."

Highlights include:

"Data presented at AASLD demonstrate the potential of machine learning approaches to improve our assessment of liver disease severity, reduce the variability of human interpretation of liver biopsies, and identify novel features associated with disease progression," said Rob Myers, MD, Vice President, Liver Inflammation/Fibrosis, Gilead Sciences. "We are proud of our ongoing partnership with PathAI and look forward to continued collaboration toward our shared goals of enhancing research efforts and improving outcomes of patients with liver disease."

The antiviral drug TDF effectively suppresses hepatitis B virus in patients with CHB, but a small subset of patients have persistently elevated serum ALT despite virologic suppression. ML models were applied to biopsy data from registrational studies of TDF to examine this small subgroup of non-responders. Analyses of the ML-model-predicted histologic features showed that persistently elevated ALT after five years of TDF treatment is associated with a higher steatosis score at baseline (BL) and increases in steatosis during follow-up. These data suggest that subjects with elevated ALT despite TDF treatment may have underlying fatty liver disease that impacts biochemical response.
Machine Learning Enables Quantitative Assessment of Histopathologic Signatures Associated with ALT Normalization in Chronic Hepatitis B Patients Treated with Tenofovir Disoproxil Fumarate (TDF) – Oral Abstract #18

ML models were deployed on biopsies from registrational trials of TDF in CHB to identify cellular and tissue-based phenotypes associated with HBV DNA and hepatitis B e-antigen (HBeAg). The study demonstrated that proportionate areas of ML-model-predicted hepatocellular ballooning at BL and Yr 5, and lobular inflammation at Yr 5, were higher in subjects who did not achieve virologic suppression. In addition, lymphocyte density across the tissue and within regions of lobular inflammation correlated with HBeAg loss, supporting the importance of an early immune response for viral clearance.
Machine Learning Based Quantification of Histology Features from Patients Treated for Chronic Hepatitis B Identifies Features Associated with Viral DNA Suppression and HBeAg Loss – Poster Number #0848

Standard manual methods for staging liver fibrosis have limited sensitivity and reproducibility. Application of an ML model to evaluate changes in fibrosis in response to treatment in the STELLAR and ATLAS trials enabled development of the DELTA (Deep Learning Treatment Assessment) Liver Fibrosis Score. This scoring method accounts for the heterogeneity in fibrosis severity that can be detected by ML models and reflects changes in fibrotic patterns that occur in response to treatment. Application of the DELTA Liver Fibrosis Score to biopsies from the Phase 2b ATLAS trial demonstrated a reduction in fibrosis in response to treatment with the investigational combination of cilofexor and firsocostat that was not detected by standard staging methods.
Validation of a Machine Learning-Based Approach (DELTA Liver Fibrosis Score) for the Assessment of Histologic Response in Patients with Advanced Fibrosis Due to NASH – Poster Number #1562
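PathAI has not published the internals of the DELTA score. As a purely illustrative sketch of the underlying idea, assuming hypothetical patch-level fibrosis predictions from a model, a slide-level summary can capture heterogeneity that a single ordinal stage misses:

```python
from statistics import mean, pstdev

def heterogeneity_summary(patch_scores):
    """Summarize per-patch continuous fibrosis predictions (0..4 stage scale)
    into a slide-level mean plus a dispersion term. Purely illustrative;
    this is not PathAI's DELTA Liver Fibrosis Score."""
    return {"mean_stage": mean(patch_scores),
            "dispersion": pstdev(patch_scores)}

# Two hypothetical biopsies with the same average stage but different
# spatial heterogeneity across tissue patches.
uniform = [2.0, 2.1, 1.9, 2.0, 2.0, 2.0]
patchy  = [0.5, 3.5, 0.4, 3.6, 0.5, 3.5]

u = heterogeneity_summary(uniform)
p = heterogeneity_summary(patchy)
print(u)
print(p)
```

Both toy biopsies would receive the same single stage from a manual read, while the dispersion term separates them, which is the kind of information a heterogeneity-aware score can exploit.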

Integration of tissue transcriptomic data with histologic information is likely to reveal new insights into disease. Using liver tissue obtained during the STELLAR trials evaluating NASH subjects with advanced fibrosis, RNA-seq-generated, tissue-level gene expression profiles were integrated with ML-predicted histology. This analysis revealed five key genes strongly correlated with proportionate areas of portal inflammation and bile ducts, features that are themselves predictive of disease progression in NASH. High levels of expression of these genes were associated with an increased risk of progression to cirrhosis in subjects with bridging (F3) fibrosis (hazard ratio [HR] 2.1; 95% CI 1.25, 3.49) and liver-related clinical events among those with cirrhosis (HR 4.05; 95% CI 1.4, 14.36).
Integration of Machine Learning-Based Histopathology and Hepatic Transcriptomic Data Identifies Genes Associated with Portal Inflammation and Ductular Proliferation as Predictors of Disease Progression in Advanced Fibrosis Due to NASH – Poster Number #595

The severity of portal hypertension, as assessed by HVPG, predicts the risk of hepatic complications in patients with liver disease but is not simple to measure. ML models were trained on images of 320 trichrome-stained liver biopsies from a phase 2b trial of investigational simtuzumab in subjects with compensated cirrhosis due to NASH to recognize patterns of fibrosis that correlate with centrally-read HVPG measurements. Deployed on a test set of slides, ML-calculated HVPG scores strongly correlated with measured HVPG and could discriminate subjects with clinically significant portal hypertension (HVPG ≥ 10 mm Hg).
A Machine Learning Model Based on Liver Histology Predicts the Hepatic Venous Pressure Gradient (HVPG) in Patients with Compensated Cirrhosis Due to Nonalcoholic Steatohepatitis (NASH) – Poster Number #1471

(1) Trials include STELLAR, ATLAS, and NCT01672879 for investigation of NASH therapies, and registrational studies GS-US-174-102/103 for tenofovir disoproxil fumarate [TDF] for CHB.

About PathAI
PathAI is a leading provider of AI-powered research tools and services for pathology. PathAI's platform promises substantial improvements to the accuracy of diagnosis and the efficacy of treatment of diseases like cancer, leveraging modern approaches in machine and deep learning. Based in Boston, PathAI works with leading life sciences companies and researchers to advance precision medicine. To learn more, visit pathai.com.

Share article on social media or email:

Originally posted here:
PathAI and Gilead Report Data from Machine Learning Model Predictions of Liver Disease Progression and Treatment Response at AASLD's The Liver Meeting...

AI Recognizes COVID-19 in the Sound of a Cough – Machine Learning Times – The Predictive Analytics Times

Originally published in IEEE Spectrum, Nov 4, 2020.

Based on a cellphone-recorded cough, machine learning models accurately detect coronavirus even in people with no symptoms.

Again and again, experts have pleaded that we need more and faster testing to control the coronavirus pandemic, and many have suggested that artificial intelligence (AI) can help. Numerous COVID-19 diagnostics in development use AI to quickly analyze X-ray or CT scans, but these techniques require a chest scan at a medical facility.

Since the spring, research teams have been working toward anytime, anywhere apps that could detect coronavirus in the bark of a cough. In June, a team at the University of Oklahoma showed it was possible to distinguish a COVID-19 cough from coughs due to other infections, and now a paper out of MIT, using the largest cough dataset yet, identifies asymptomatic people with a remarkable 100 percent detection rate.
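The headline numbers in screening studies like this are sensitivity (the detection rate reported above) and specificity (how often healthy people are correctly cleared). As a toy illustration of how those rates are computed from binary predictions, with made-up labels rather than data from the MIT study:

```python
def screening_metrics(y_true, y_pred):
    """Sensitivity (detection rate) and specificity for a binary screen.
    y_true / y_pred: 1 = COVID-positive, 0 = negative."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}

# Toy labels: a screen can reach 100% sensitivity while still producing
# false positives, which is why specificity matters for mass screening.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

metrics = screening_metrics(y_true, y_pred)
print(metrics)
```

In this toy case every infected person is flagged (sensitivity 1.0), yet one of six healthy people is also flagged, a trade-off any free, large-scale screening deployment would have to manage.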

If approved by the FDA and other regulators, COVID-19 cough apps, in which a person records themselves coughing on command, could eventually be used for free, large-scale screening of the population.

With potential like that, the field is rapidly growing: Teams pursuing similar projects include a Bill and Melinda Gates Foundation-funded initiative, Cough Against Covid, at the Wadhwani Institute for Artificial Intelligence in Mumbai; the Coughvid project out of the Embedded Systems Laboratory of the École Polytechnique Fédérale de Lausanne in Switzerland; and the University of Cambridge's COVID-19 Sounds project.

The fact that multiple models can detect COVID in a cough suggests that there is no such thing as a truly asymptomatic coronavirus infection: physical changes always occur that change the way a person produces sound. "There aren't many conditions that don't give you any symptoms," says Brian Subirana, director of the MIT Auto-ID lab and co-author on the recent study, published in the IEEE Open Journal of Engineering in Medicine and Biology.

To continue reading this article, click here.

Original post:
AI Recognizes COVID-19 in the Sound of a Cough – Machine Learning Times - The Predictive Analytics Times

Post Covid-19 Impact on Machine Learning in Communication Sales, Price, Revenue, Gross Margin and Market Share 2020 to 2026 Amazon, IBM, Microsoft,…

The global Machine Learning in Communication Market has been studied by a set of researchers for a defined forecast period of 2020 to 2026. This study provides insights to stakeholders in the market landscape and includes an in-depth analysis of various aspects of the market: an overview section with market segmentation, a regional analysis, and a competitive outlook for the forecast period. All these sections have been analyzed in detail to arrive at accurate and credible conclusions about the market's future trajectory. The overview section also covers the definition, classification, and primary applications of the product/service to provide larger context for the report's audience.

Key Players

The global Machine Learning in Communication Market report has provided a profiling of significant players that are impacting the trajectory of the market with their strategies for expansion and retaining of market share. The major vendors covered: Amazon, IBM, Microsoft, Google, Nextiva, Nexmo, Twilio, Dialpad, Cisco, RingCentral, and more

Get a free sample copy @ https://www.reportsandmarkets.com/sample-request/global-machine-learning-in-communication-market-2019-by-company-regions-type-and-application-forecast-to-2024?utm_source=icotodaymagazine&utm_medium=39

Market Dynamics

The report on the global Machine Learning in Communication Market includes a section that discusses various market dynamics, providing deeper insight into the relationships between these dynamics and their impact on the market's functioning. These dynamics include the factors providing impetus to the market for growth and expansion over the forthcoming years, as well as factors poised to challenge the market's growth over the forecast period. These factors are expected to reveal certain hidden trends that aid in a better understanding of the market over the forecast period.

Market Segmentation

The global Machine Learning in Communication Market has been studied for a detailed segmentation based on different aspects, providing insight into the functioning of each segmental market. This segmentation has enabled the researchers to study the relationship between the growth of these singular segments and the comprehensive market growth rate. It has also enabled various stakeholders in the global Machine Learning in Communication Market to gain insights and make accurate, relevant decisions. A regional analysis of the market has been conducted for the segments of North America, Asia Pacific, Europe, Latin America, and the Middle East & Africa.

Research Methodology

The global Machine Learning in Communication Market has been analyzed using Porter's Five Forces model to gain precise insight into the true potential of the market's growth. Further, a SWOT analysis of the market has helped reveal different opportunities for expansion present in the market environment.

If you have any special requirements for this Machine Learning in Communication Market report, please let us know, and we can provide a custom report.

Inquire more about this report @ https://www.reportsandmarkets.com/enquiry/global-machine-learning-in-communication-market-2019-by-company-regions-type-and-application-forecast-to-2024?utm_source=icotodaymagazine&utm_medium=39

About Us

ReportsAndMarkets.com aggregates globally available market research and company reports from reputed market research firms that are pioneers in their respective domains. We are a completely autonomous group and serve our clients by offering the most trustworthy research material available, as we know this is an essential aspect of market research.

Contact Us

Sanjay Jain

Manager, Partner Relations & International Marketing

http://www.reportsandmarkets.com

Ph: +1-352-353-0818 (US)

See original here:
Post Covid-19 Impact on Machine Learning in Communication Sales, Price, Revenue, Gross Margin and Market Share 2020 to 2026 Amazon, IBM, Microsoft,...