BioSig and Mayo Clinic Collaborate on New R&D Program to Develop Transformative AI and Machine Learning Technologies for its PURE EP System – BioSpace

Westport, CT, Feb. 02, 2021 (GLOBE NEWSWIRE) --

BioSig Technologies, Inc. (NASDAQ: BSGM) ("BioSig" or the "Company"), a medical technology company commercializing an innovative signal processing platform designed to improve signal fidelity and uncover the full range of ECG and intra-cardiac signals, today announced a strategic collaboration with the Mayo Foundation for Medical Education and Research to develop next-generation AI- and machine learning-powered software for its PURE EP System.

The new collaboration will include an R&D program that will expand the clinical value of the Company's proprietary hardware and software with advanced signal processing capabilities and aim to develop novel technological solutions by combining the electrophysiological signals delivered by the PURE EP and other data sources. The development program will be conducted under the leadership of Samuel J. Asirvatham, M.D., Mayo Clinic's Vice-Chair of Innovation and Medical Director, Electrophysiology Laboratory, and Alexander D. Wissner-Gross, Ph.D., Managing Director of Reified LLC.

The global market for AI in healthcare is expected to grow from $4.9 billion in 2020 to $45.2 billion by 2026, at an estimated compound annual growth rate (CAGR) of 44.9% [1]. According to Accenture, key clinical health AI applications, when combined, can potentially create $150 billion in annual savings for the United States healthcare economy by 2026 [2].

"AI-powered algorithms that are developed on superior data from multiple biomarkers could drastically improve the way we deliver therapies, and therefore may help address the rising global demand for healthcare," commented Kenneth L. Londoner, Chairman and CEO of BioSig Technologies, Inc. "We believe that combining the clinical science of Mayo Clinic with the best-in-class domain expertise of Dr. Wissner-Gross and the technical leadership of our engineering team will enable us to develop powerful applications and help pave the way toward improved patient outcomes in cardiology and beyond."

"Artificial intelligence presents a variety of novel opportunities for extracting clinically actionable information from existing electrophysiological signals that might otherwise be inaccessible. We are excited to contribute to the advancement of this field," said Dr. Wissner-Gross.

BioSig announced its partnership with Reified LLC, a provider of advanced artificial intelligence-focused technical advisory services to the private sector, in late 2019. The new research program builds upon the progress achieved by this collaboration in 2020, which included an abstract, "Computational Reconstruction of Electrocardiogram Lead Placement," presented during the 2020 Computing in Cardiology Conference in Rimini, Italy, and the development of an initial suite of electrophysiological analytics for the PURE EP System.

BioSig signed a 10-year collaboration agreement with Mayo Clinic in March 2017. In November 2019, the Company announced that it signed three new patent and know-how license agreements with the Mayo Foundation for Medical Education and Research.

About BioSig Technologies

BioSig Technologies is a medical technology company commercializing a proprietary biomedical signal processing platform designed to improve signal fidelity and uncover the full range of ECG and intra-cardiac signals (www.biosig.com).

The Company's first product, the PURE EP System, is a computerized system intended for acquiring, digitizing, amplifying, filtering, measuring and calculating, displaying, recording and storing electrocardiographic and intracardiac signals for patients undergoing electrophysiology (EP) procedures in an EP laboratory.

Forward-looking Statements

This press release contains forward-looking statements. Such statements may be preceded by the words "intends," "may," "will," "plans," "expects," "anticipates," "projects," "predicts," "estimates," "aims," "believes," "hopes," "potential" or similar words. Forward-looking statements are not guarantees of future performance, are based on certain assumptions, and are subject to various known and unknown risks and uncertainties, many of which are beyond the Company's control and cannot be predicted or quantified; consequently, actual results may differ materially from those expressed or implied by such forward-looking statements. Such risks and uncertainties include, without limitation, risks and uncertainties associated with (i) the geographic, social and economic impact of COVID-19 on our ability to conduct our business and raise capital in the future when needed; (ii) our inability to manufacture our products and product candidates on a commercial scale on our own, or in collaboration with third parties; (iii) difficulties in obtaining financing on commercially reasonable terms; (iv) changes in the size and nature of our competition; (v) loss of one or more key executives or scientists; and (vi) difficulties in securing regulatory approval to market our products and product candidates. More detailed information about the Company and the risk factors that may affect the realization of forward-looking statements is set forth in the Company's filings with the Securities and Exchange Commission (SEC), including the Company's Annual Report on Form 10-K and its Quarterly Reports on Form 10-Q. Investors and security holders are urged to read these documents free of charge on the SEC's website at http://www.sec.gov. The Company assumes no obligation to publicly update or revise its forward-looking statements as a result of new information, future events or otherwise.

[1] Artificial Intelligence in Healthcare Market with COVID-19 Impact Analysis by Offering, Technology, End-Use Application, End User and Region - Global Forecast to 2026; MarketsandMarkets

[2] Artificial Intelligence (AI): Healthcare's New Nervous System; https://www.accenture.com/us-en/insight-artificial-intelligence-healthcare

The future of software testing: Machine learning to the rescue – TechBeacon

The last decade has seen a relentless push to deliver software faster. Automated testing has emerged as one of the most important technologies for scaling DevOps, companies are investing enormous time and effort to build end-to-end software delivery pipelines, and containers and their ecosystem are delivering on their early promise.

The combination of delivery pipelines and containers has helped high performers deliver software faster than ever. That said, many organizations are still struggling to balance speed and quality. Many are stuck trying to make headway with legacy software, large test suites, and brittle pipelines. So where do you go from here?

In the drive to release quickly, end users have become software testers. But they no longer want to be your testers, and companies are taking note. Companies now want to ensure that quality is not compromised in the pursuit of speed.

Testing is one of the top DevOps controls that organizations can leverage to ensure that their customers engage with a delightful brand experience. Others include access control, activity logging, traceability, and disaster recovery. Our company's research over the past year indicates that slow feedback cycles, slow development loops, and developer productivity will remain the top priorities over the next few years.

Quality and access control are preventative controls, while the others are reactive. There will be an increasing focus on quality in the future because it prevents customers from having a bad experience. Thus, delivering value fast (or better yet, delivering the right value at the right quality level, fast) is the key trend that we will see this year and beyond.

Here are the five key trends to watch.

Test automation efforts will continue to accelerate. A surprising number of companies still have manual tests in their delivery pipeline, but you can't deliver fast if you have humans in the critical path of the value chain, slowing things down. (The exception is exploratory testing, where humans are a must.)

Automating manual tests is a long process that requires dedicated engineering time. While many organizations have at least some test automation, there's more that needs to be done. That's why automated testing will remain one of the top trends going forward.

As teams automate tests and adopt DevOps, quality must become part of the DevOps mindset. That means quality will become a shared responsibility of everyone in the organization.

Figure 2. Top performers shift tests around to create new workflows. They shift left for earlier validation and right to speed up delivery. Source: Launchable

Teams will need to become more intentional about where tests land. Should they shift tests left to catch issues much earlier, or should they add more quality controls to the right? On the "shift-right" side of the house, practices such as chaos engineering and canary deployments are becoming essential.

Shifting large test suites left is difficult because you don't want to introduce long delays while running tests in an earlier part of your workflow. Many companies tag some tests from a large suite to run in pre-merge, but the downside is that these tests may or may not be relevant to a specific change set. Predictive test selection (see trend 5 below) provides a compelling solution for running just the relevant tests.

Over the past six to eight years, the industry has focused on connecting various tools by building robust delivery pipelines. Each of those tools generates a heavy exhaust of data, but that data is being used minimally, if at all. We have moved from "craft" or "artisanal" solutions to the "at-scale" stage in the evolution of tools in delivery pipelines.

The next phase is to bring smarts to the tooling. Expect to see an increased emphasis by practitioners on making data-driven decisions.

There are two key problems in testing: not enough tests, and too many of them. Test-generation tools take a shot at the first problem.

To create a UI test today, you must either write a lot of code or have a tester click through the UI manually, which is an incredibly painful and slow process. To relieve this pain, test-generation tools use AI to create and run UI tests on various platforms.

For example, one tool my team explored uses a "trainer" that lets you record actions on a web app to create scriptless tests. While scriptless testing isn't a new idea, what is new is that this tool "auto-heals" tests in lockstep with the changes to your UI.

Another tool that we explored has AI bots that act like humans. They tap buttons, swipe images, type text, and navigate screens to detect issues. Once they find an issue, they create a ticket in Jira for the developers to take action on.

More testing tools that use AI will gain traction in 2021.

AI has other uses for testing apart from test generation. For organizations struggling with the runtimes of large test suites, an emerging technology called predictive test selection is gaining traction.

Many companies have thousands of tests that run all the time. Testing a small change might take hours or even days of waiting for feedback. While more tests are generally good for quality, they also mean that feedback comes more slowly.

To date, companies such as Google and Facebook have developed machine-learning algorithms that process incoming changes and run only the tests that are most likely to fail. This is predictive test selection.

What's amazing about this technology is that you can run between 10% and 20% of your tests to reach 90% confidence that a full run will not fail. This allows you to reduce a five-hour test suite that normally runs post-merge to 30 minutes on pre-merge, running only the tests that are most relevant to the source changes. Another scenario would be to reduce a one-hour run to six minutes.
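The mechanics can be sketched in miniature. The toy selector below is an illustrative assumption, not Launchable's, Google's, or Facebook's actual system: it learns from historical CI runs how often each test fails when a given file changes, then runs only the tests whose estimated failure probability clears a threshold.

```python
from collections import defaultdict

class PredictiveTestSelector:
    """Toy predictive test selection: estimate, from historical CI runs,
    how often each test fails when a given source file changes, then run
    only the tests whose estimated failure probability clears a threshold."""

    def __init__(self):
        self.runs = defaultdict(int)   # (file, test) -> times test ran for a change to file
        self.fails = defaultdict(int)  # (file, test) -> times test failed for such a change

    def record(self, changed_files, test, failed):
        """Record one historical observation from a CI run."""
        for f in changed_files:
            self.runs[(f, test)] += 1
            if failed:
                self.fails[(f, test)] += 1

    def failure_prob(self, changed_files, test):
        """Max, over the changed files, of the observed failure rate for this test."""
        probs = [self.fails[(f, test)] / self.runs[(f, test)]
                 for f in changed_files if self.runs[(f, test)] > 0]
        return max(probs, default=0.0)

    def select(self, changed_files, all_tests, threshold=0.1):
        """Keep only the tests deemed likely to fail for this change set."""
        return [t for t in all_tests
                if self.failure_prob(changed_files, t) >= threshold]

# Train on a small (invented) history of (changed files, test, failed) observations.
sel = PredictiveTestSelector()
sel.record(["parser.py"], "test_parser", failed=True)
sel.record(["parser.py"], "test_ui", failed=False)
sel.record(["ui.py"], "test_ui", failed=True)

# For a new change touching parser.py, only the relevant test is selected.
print(sel.select(["parser.py"], ["test_parser", "test_ui"]))  # ['test_parser']
```

Real systems replace the simple per-file failure rate with a trained ML model over richer features (code distance, test age, flakiness), but the shape of the decision is the same.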

Expect predictive test selection to become more mainstream in 2021.

Automated testing is taking over the world. Even so, many teams are struggling to make the transition. Continuous quality culture will become part of the DevOps mindset. Tools will continue to become smarter. Test-generation tools will help close the gap between manual and automated testing.

But as teams add more tests, they face real problems with test execution time. While more tests help improve quality, they often become a roadblock to productivity. Machine learning will come to the rescue as we roll into 2021.

A Nepalese Machine Learning (ML) Researcher Introduces Papers-With-Video Browser Extension Which Allows Users To Access Videos Related To Research…

Amit Chaudhary, a machine learning (ML) researcher from Nepal, has recently introduced a browser extension that allows users to directly access videos related to research papers published on the platform arXiv.

ArXiv has become an essential resource for new machine learning (ML) papers. Initially, in 1991, it was launched as a storage site for physics preprints. In 2001 it was renamed arXiv and has since been hosted by Cornell University. ArXiv has received close to 2 million submissions across various scientific research fields.

Amit obtained publicly released videos from 2020 ML conferences. He then indexed the videos and reverse-mapped them to the relevant arXiv links through pyarxiv, a dedicated wrapper for the arXiv API. The Google Chrome extension creates a video icon next to the paper title on the arXiv abstract page, enabling users to identify and access available videos related to the paper directly.
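The reverse-mapping step can be illustrated with a minimal sketch. The data and helper names below are hypothetical, and the pyarxiv calls that fetch real metadata are omitted; the core idea is simply to normalize titles so that a conference talk can be matched to its paper despite small formatting differences.

```python
import re

def normalize(title):
    """Lowercase and strip punctuation/whitespace so that small formatting
    differences between a talk title and a paper title do not block a match."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def map_videos_to_papers(videos, papers):
    """videos: {talk title: video URL}; papers: {arXiv id: paper title}.
    Returns {arXiv id: video URL} for exact matches on normalized titles."""
    by_title = {normalize(t): url for t, url in videos.items()}
    return {arxiv_id: by_title[normalize(title)]
            for arxiv_id, title in papers.items()
            if normalize(title) in by_title}

# Hypothetical data for illustration only.
videos = {"Attention Is All You Need!": "https://example.com/talk1"}
papers = {"1706.03762": "Attention is all you need"}
print(map_videos_to_papers(videos, papers))  # {'1706.03762': 'https://example.com/talk1'}
```

The harder problem the article mentions, pairing papers and videos whose titles differ substantially, would require fuzzy matching on top of this exact-match baseline.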

Many research teams are creating videos to accompany their papers. These videos can act as a guide, providing demos and other valuable information about the research. In many cases, the videos are created as an alternative to traditional in-person presentations at AI conferences. This is useful in current circumstances, as almost all panels have moved to virtual formats due to the Covid-19 pandemic.

The Papers-With-Video extension enables direct video links for around 3.7k arXiv ML papers. Amit aims to figure out how to effectively pair papers and videos that are related but have different titles, and with this he hopes to expand coverage to 8k videos. He has solicited community feedback and has already tweaked the extension's functionality based on user remarks and suggestions.

The browser extension is not available on the Google Chrome Web Store yet. However, one can find the extension, installation guide, and further information on GitHub.

GitHub: https://github.com/amitness/papers-with-video

Paper List: https://gist.github.com/amitness/9e5ad24ab963785daca41e2c4cfa9a82

University of Exeter: Speeding up machine learning by means of light – India Education Diary

An international team of researchers has developed a next-generation computer accelerator chip that processes data using light rather than electronics.

Scientists have developed a pioneering new approach that will rapidly speed up machine learning using light.

An international team of researchers from the Universities of Münster, Oxford, Exeter and Pittsburgh, the École Polytechnique Fédérale de Lausanne (EPFL) and IBM Research Zurich has developed a next-generation computer accelerator chip that processes data using light rather than electronics.

The results are published in the leading scientific journal Nature on Wednesday, January 6th.

Professor C. David Wright of the University of Exeter, who leads the EU project Fun-COMP, which funded this work, said: "Conventional computer chips are based on electronic data transfer and are comparatively slow, but light-based processors such as that developed in our work enable complex mathematical tasks to be processed at speeds hundreds or even thousands of times faster, and with hugely reduced energy consumption."

The team of researchers, led by Prof. Wolfram Pernice from the Institute of Physics and the Center for Soft Nanoscience at the University of Münster, combined integrated photonic devices with phase-change materials (PCMs) to deliver super-fast, energy-efficient matrix-vector (MV) multiplications.

MV multiplications lie at the heart of modern computing, from AI and machine learning to neural network processing, and the imperative to carry out such calculations at ever-increasing speeds, but with ever-decreasing energy consumption, is driving the development of a whole new class of processor chips: so-called tensor processing units (TPUs).
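For readers unfamiliar with the operation: an MV multiplication takes the dot product of each matrix row with a vector, and a dense neural-network layer is little more than one MV multiply plus a bias and an activation. A plain-Python sketch of the arithmetic (the photonic chip performs this same math optically and in parallel, which is not modeled here):

```python
def matvec(W, x):
    """Matrix-vector product: the core operation a TPU (photonic or
    electronic) accelerates. Each output element is one dot product."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def dense_layer(W, x, bias):
    """One neural-network layer = matrix-vector multiply + bias + ReLU."""
    return [max(0.0, y + b) for y, b in zip(matvec(W, x), bias)]

W = [[0.5, -1.0], [2.0, 1.0]]  # toy 2x2 weight matrix
x = [1.0, 2.0]
print(matvec(W, x))                   # [-1.5, 4.0]
print(dense_layer(W, x, [0.0, 0.5]))  # [0.0, 4.5]
```

Every layer of a large network repeats this operation over far bigger matrices, which is why accelerating MV multiplication pays off so directly.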

The team developed a new type of photonic TPU, one capable of carrying out multiple MV multiplications simultaneously and in parallel, using a chip-based frequency comb as a light source along with wavelength-division multiplexing.

The matrix elements were stored using PCMs, the same material currently used for re-writable DVD and Blu-ray optical discs, making it possible to preserve matrix states without the need for an energy supply.

In their experiments, the team used their photonic TPU in a so-called convolutional neural network for the recognition of handwritten numbers and for image filtering. "Our study is the first to apply frequency combs in the field of artificial neural networks," says Prof. Wolfram Pernice.

"Our results could have a wide range of applications," explained Prof. Harish Bhaskaran from the University of Oxford, a key member of the team. "A photonic TPU could quickly and efficiently process huge data sets used for medical diagnoses, such as those from CT, MRI and PET scanners," he continued.

Further applications could also be found in self-driving vehicles, which depend on fast evaluation of data from multiple sensors, as well as in IT infrastructure such as cloud computing.

Harnessing the power of machine learning for improved decision-making – GCN.com

INDUSTRY INSIGHT

Across government, IT managers are looking to harness the power of artificial intelligence and machine learning techniques (AI/ML) to extract and analyze data to support mission delivery and better serve citizens.

Practically every large federal agency is executing some type of proof of concept or pilot project related to AI/ML technologies. "The government's AI toolkit is diverse and spans the federal administrative state," according to a report commissioned by the Administrative Conference of the United States (ACUS). Nearly half of the 142 federal agencies canvassed have experimented with AI/ML tools, the report, "Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies," states.

Moreover, AI tools are already improving agency operations across the full range of governance tasks, including enforcing regulatory mandates, adjudicating government benefits and privileges, monitoring and analyzing risks to public safety and health, providing weather forecasting information, and extracting information from the trove of government data to address consumer complaints.

Agencies with mature data science practices are further along in their AI/ML exploration. However, because agencies are at different stages in their digital journeys, many federal decision-makers still struggle to understand AI/ML. They need a better grasp of the skill sets and best practices needed to derive meaningful insights from data powered by AI/ML tools.

Understanding how AI/ML works

AI mimics human cognitive functions such as the ability to sense, reason, act and adapt, giving machines the ability to act intelligently. Machine learning is a component of AI that involves training algorithms or models to make predictions about data they have yet to observe. ML models are not programmed like conventional algorithms. They are trained using data -- such as words, log data, time-series data or images -- and make predictions about which actions to perform.

Within the field of machine learning, there are two main types of tasks: supervised and unsupervised.

With supervised learning, data analysts have prior knowledge of what the output values for their samples should be. The AI system is specifically told what to look for, so the model is trained until it can detect underlying patterns and relationships. For example, an email spam filter is a machine learning program that can learn to flag spam after being given examples of spam emails that are flagged by users and examples of regular non-spam emails. The examples the system uses to learn are called the training set.
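The supervised setup can be sketched with a toy word-count spam scorer (not a production filter; all training data below is invented for illustration). The labeled examples play the role of the training set described above.

```python
from collections import Counter

def train(examples):
    """examples: list of (text, is_spam). Returns per-label word counts --
    the 'training set' statistics this simple spam filter learns from."""
    counts = {True: Counter(), False: Counter()}
    for text, is_spam in examples:
        counts[is_spam].update(text.lower().split())
    return counts

def is_spam(counts, text, smoothing=1):
    """Label text as spam if its words are, in aggregate, seen more often
    in spam than in non-spam training examples (add-one smoothing)."""
    spam_score = ham_score = 0.0
    for word in text.lower().split():
        spam_score += counts[True][word] + smoothing
        ham_score += counts[False][word] + smoothing
    return spam_score > ham_score

training_set = [
    ("win a free prize now", True),
    ("claim your free prize", True),
    ("meeting notes attached", False),
    ("lunch at noon", False),
]
model = train(training_set)
print(is_spam(model, "free prize inside"))       # True
print(is_spam(model, "notes from the meeting"))  # False
```

The key property of supervised learning is visible here: the desired output (spam or not) is known for every training example, and the model only generalizes those labels to new inputs.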

Unsupervised learning looks for previously undetected patterns in a dataset with no pre-existing labels and with a minimum of human supervision. For instance, data points with similar characteristics can be automatically grouped into clusters for anomaly detection, such as in fraud detection or identifying defective mechanical parts in predictive maintenance.
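The unsupervised case can be sketched just as compactly: group 1-D values into clusters wherever there is a large gap, with no labels at all, and flag points that land in tiny clusters as anomalies. The transaction amounts are invented for illustration, and real systems use richer clustering over many features.

```python
def cluster_1d(values, gap=10.0):
    """Group sorted 1-D values into clusters wherever consecutive points
    are more than `gap` apart -- no labels, no supervision."""
    values = sorted(values)
    clusters = [[values[0]]]
    for v in values[1:]:
        if v - clusters[-1][-1] > gap:
            clusters.append([])
        clusters[-1].append(v)
    return clusters

def anomalies(values, gap=10.0, min_size=2):
    """Points that end up in very small clusters are flagged as anomalies,
    e.g. suspicious transaction amounts in fraud detection."""
    return [v for c in cluster_1d(values, gap) if len(c) < min_size for v in c]

amounts = [20.0, 21.5, 22.0, 23.0, 480.0]  # hypothetical transaction amounts
print(cluster_1d(amounts))  # [[20.0, 21.5, 22.0, 23.0], [480.0]]
print(anomalies(amounts))   # [480.0]
```

Note that nothing told the algorithm which amounts were "bad"; the outlier emerges purely from the structure of the data, which is the defining trait of unsupervised learning.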

Supervised, unsupervised in action

It is not a matter of which approach is better. Both supervised and unsupervised learning are needed for machine learning to be effective.

Both approaches were applied recently to help a large defense financial management and comptroller office resolve over $2 billion in unmatched transactions in an enterprise resource planning system. Many tasks required significant manual effort, so the organization implemented a robotic process automation solution to automatically access data from various financial management systems and process transactions without human intervention. However, RPA fell short when data variances exceeded tolerance for matching data and documents, so AI/ML techniques were used to resolve the unmatched transactions.

The data analyst team used supervised learning with the preexisting rules that had produced these transactions. The team then provided additional value by applying unsupervised ML techniques to find patterns in the data that they were not previously aware of.

To get a better sense of how AI/ML can help agencies better manage data, it is worth considering these three steps:

Data analysts should think of these steps as a continuous loop. If the output from unsupervised learning is meaningful, they can incorporate it into the supervised learning modeling. Thus, they are involved in a continuous learning process as they explore the data together.

Avoiding pitfalls

It is important for IT teams to realize they cannot just feed data into machine learning models, especially with unsupervised learning, which is a little more art than science. That is where humans really need to be involved. Also, analysts should avoid over-fitting models in an attempt to derive too much insight.

Remember: AI/ML and RPA are meant to augment humans in the workforce, not merely replace people with autonomous robots or chatbots. To be effective, agencies must strategically organize around the right people, processes and technologies to harness the power of innovative technologies such as AI/ML to achieve the performance they need at scale.

About the Author

Samuel Stewart is a data scientist with World Wide Technology.

Read the original post:
Harnessing the power of machine learning for improved decision-making - GCN.com

AI: This COVID machine-learning tool helps swamped hospitals pick the right treatment – ZDNet

Spain has been one of the European states worst hit by the COVID-19 pandemic, with more than 1.7 million detected cases. Despite the second wave of infections that has hit the country over the past few months, the Hospital Clínic in Barcelona has succeeded in halving mortality among its coronavirus patients using artificial intelligence.

The Catalan hospital has developed a machine-learning tool that can predict when a COVID patient will deteriorate and how to customize that individual's treatment to avoid the worst outcome.

"When you have a sole patient who's in a critical state, you can take special care of them. But when there are 700 of them, you need this kind of tool," says Carol García-Vidal, a physician specialized in infectious diseases and IDIBAPS researcher who has led the development of the tool.

Before the pandemic, the hospital had already been working on software to turn variable data into an analyzable form. So when the hospital started to receive COVID patients in March, it put the system to work analyzing three trillion pieces of structured and anonymized data from 2,000 patients.

The goal was to train it to recognize patterns and check what treatments were the most effective for each patient and when they should be administered.

That work underlined to García-Vidal and her team that the virus doesn't manifest itself in the same way in everyone. "There are patients with an inflammatory response, patients with coagulopathies and patients who develop superinfections," García-Vidal tells ZDNet. Each group needs different drugs and thus a personalized treatment.

Thanks to an EIT Health grant, the AI system has been developed into a real-time dashboard display on physicians' computers that has become one of their everyday tools. Under the supervision of an epidemiologist, the tool enables patients to be classified and offered a more personalized treatment.

"Nobody has done this before," says García-Vidal, who notes that the researchers recently added two more patterns to the system: patients who are stable and can leave the hospital, thus freeing a bed, and patients who are more likely to die. The predictions are 90% accurate.

"It's very useful for physicians with less experience and those who have a specialty that's nothing to do with COVID, such as gynecologists or traumatologists," she says. As in many countries, doctors from all specialist areas were called in to treat patients during the first wave of the pandemic.

The system is also being used during the current second wave because, according to García-Vidal, the number of patients in intensive care in Catalan hospitals has jumped. The plan is to make the tool available to other hospitals.

Meanwhile, the Barcelona Supercomputing Center (BSC) is also analyzing a set of data corresponding to 3,000 medical cases generated by the Hospital Clínic during the acute phase of the pandemic in March.

The aim is to develop a model based on deep-learning neural networks that will look for common patterns and generate predictions on the evolution of symptoms. The objective is to know whether a patient is likely to need a ventilator system or be directly sent to intensive care.

Some data, such as age, sex, vital signs and medication given, is structured, but other data isn't, because it consists of text written in natural language in the form of, for example, hospital discharge and radiology reports, BSC researcher Marta Villegas explains.

Supercomputing brings the computational capacity and power to extract essential information from these reports and train models based on neural networks to predict the evolution of the disease as well as the response to treatments given the previous conditions of the patients.

This approach, based on natural language processing, is also being tested at a hospital in Madrid.

Safe Internet: WOT uses machine learning and crowdsourcing to protect your phone and tablet – PhoneArena

Advertorial by WOT: the opinions expressed in this story may not reflect the positions of PhoneArena!

WOT is available in the form of an Android app or extension for Firefox, Opera, Chrome, and even the Samsung browser. This means you can use it on absolutely any Android device in your household, plus the family desktop PC.

In order to ensure its protection is always up to date, WOT utilizes a mixture of crowdsourcing, machine learning, and third-party blacklists. It will analyze user behavior and compare it against databases of known scams to make sure it's constantly on top of its game.

If you subscribe to premium ($2.49 per month on an annual plan), you gain access to WOT's superb Anti-Phishing shield, which will keep a lookout for clever scams. Premium users also have no limit on how many apps they can lock, and gain an auto-scanning feature, which will automatically check new Wi-Fi networks and apps for security flaws.

Global Machine Learning (ML) Platforms Market Growth, Size, Analysis, Outlook by 2020 – Trends, Opportunities and Forecast to 2025 – AlgosOnline

A recent report added by Market Study Report, LLC, on 'Machine Learning (ML) Platforms Market' provides a detailed analysis of the industry size, revenue forecasts and geographical landscape pertaining to this business space. Additionally, the report highlights the primary obstacles and latest growth trends adopted by key players that form a part of the competitive spectrum of this business.

The research study on the Machine Learning (ML) Platforms market projects this industry to garner substantial proceeds by the end of the projected duration, with a commendable growth rate liable to be registered over the estimated timeframe. Elucidating a pivotal overview of this business space, the report includes information pertaining to the remuneration presently held by this industry, in tandem with a meticulous illustration of the Machine Learning (ML) Platforms market segmentation and the growth opportunities prevailing across this vertical.

Request a sample Report of Machine Learning (ML) Platforms Market at: https://www.marketstudyreport.com/request-a-sample/3132103?utm_source=algosonline.com&utm_medium=SHR

A brief run-through of the industry segmentation encompassed in the Machine Learning (ML) Platforms market report:

Competitive landscape:

Companies involved:

Vital pointers enumerated:

The Machine Learning (ML) Platforms market report provides an outline of the vendor landscape that includes companies such as

The study mentions the products manufactured by these esteemed companies as well the product price prototypes, profit margins, valuation accrued, and product sales.

Ask for Discount on Machine Learning (ML) Platforms Market Report at: https://www.marketstudyreport.com/check-for-discount/3132103?utm_source=algosonline.com&utm_medium=SHR

Geographical landscape:

Regions involved: USA, Europe, Japan, China, India, South East Asia

Vital pointers enumerated:

Segmented into USA, Europe, Japan, China, India, South East Asia, as per the regional spectrum, the Machine Learning (ML) Platforms market apparently covers most of the pivotal geographies, claims the report, which compiles a highly comprehensive analysis of the geographical arena, including details about the product consumption patterns, revenue procured, as well as the market share that each zone holds.

The study presents details regarding the consumption market share and product consumption growth rate of the regions in question, in tandem with the geographical consumption rate with regards to the products and the applications.

Product landscape:

Product types involved:

Vital pointers enumerated:

The Machine Learning (ML) Platforms market report enumerates information with respect to every product type among

Application landscape:

Application sectors involved:

Vital pointers enumerated:

The Machine Learning (ML) Platforms market report, with respect to the application spectrum, splits the industry into

The Machine Learning (ML) Platforms market report also includes substantial information about the driving forces shaping the commercialization landscape of the industry, as well as the latest trends prevailing in the market. Also included in the study is a list of the challenges this industry will face over the forecast period.

Other parameters, such as the market concentration ratio across numerous concentration classes over the projected timeline, are presented in the report as well.

For More Details On this Report: https://www.marketstudyreport.com/reports/global-machine-learning-ml-platforms-market-growth-status-and-outlook-2020-2025

Related Reports:

Global Transformer Dismantling & Recycling Services Market Growth (Status and Outlook) 2020-2025: The Transformer Dismantling & Recycling Services Market report covers manufacturers' data, including shipments, price, revenue, gross profit, interview records, and business distribution, enabling buyers to better understand their competitors. The report also covers all regions and countries of the world, showing regional development status, including market size, volume, and value, as well as price data. It additionally covers different industry customers' information, which is critical for manufacturers. Read More: https://www.marketstudyreport.com/reports/global-transformer-dismantling-recycling-services-market-growth-status-and-outlook-2020-2025

Read More Reports On: https://www.marketwatch.com/press-release/manganese-sulfate-market-trends-and-opportunities-by-types-and-application-in-grooming-regions-2021-01-06?tesla=y

Contact Us:
Corporate Sales, Market Study Report LLC
Phone: 1-302-273-0910
Toll Free: 1-866-764-2150
Email: [emailprotected]

Read more:
Global Machine Learning (ML) Platforms Market Growth, Size, Analysis, Outlook by 2020 - Trends, Opportunities and Forecast to 2025 - AlgosOnline

ECMarker: interpretable machine learning model identifies gene expression biomarkers predicting clinical outcomes and reveals molecular mechanisms of…

This article was originally published here

Bioinformatics. 2020 Nov 6:btaa935. doi: 10.1093/bioinformatics/btaa935. Online ahead of print.

ABSTRACT

MOTIVATION: Gene expression and regulation, a key molecular mechanism driving human disease development, remains elusive, especially at early stages. Integrating the increasing amount of population-level genomic data and understanding gene regulatory mechanisms in disease development are still challenging. Machine learning has emerged to solve this, but many machine learning methods were typically limited to building an accurate prediction model as a black box, barely providing biological and clinical interpretability from the box.

RESULTS: To address these challenges, we developed an interpretable and scalable machine learning model, ECMarker, to predict gene expression biomarkers for disease phenotypes and simultaneously reveal underlying regulatory mechanisms. Particularly, ECMarker is built on the integration of semi- and discriminative-restricted Boltzmann machines, a neural network model for classification allowing lateral connections at the input gene layer. This interpretable model is scalable without needing any prior feature selection and enables directly modeling and prioritizing genes and revealing potential gene networks (from lateral connections) for the phenotypes. With application to the gene expression data of non-small-cell lung cancer patients, we found that ECMarker not only achieved a relatively high accuracy for predicting cancer stages but also identified the biomarker genes and gene networks implying the regulatory mechanisms in the lung cancer development. In addition, ECMarker demonstrates clinical interpretability as its prioritized biomarker genes can predict survival rates of early lung cancer patients (P-value < 0.005). Finally, we identified a number of drugs currently in clinical use for late stages or other cancers with effects on these early lung cancer biomarkers, suggesting potential novel candidates on early cancer medicine.

AVAILABILITY AND IMPLEMENTATION: ECMarker is open source as a general-purpose tool at https://github.com/daifengwanglab/ECMarker.

CONTACT: [emailprotected]

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.

PMID:33305308 | DOI:10.1093/bioinformatics/btaa935

Link:
ECMarker: interpretable machine learning model identifies gene expression biomarkers predicting clinical outcomes and reveals molecular mechanisms of...

4 tips to upgrade your programmatic advertising with Machine Learning – Customer Think

Lomit Patel, VP of growth at IMVU and best-selling author of Lean AI, shares lessons learned and practical advice for app marketers to unlock open budgets and sustainable growth with machine learning.

The first step in the automation journey is to identify where you and your team stand. In his book Lean AI: How Innovative Startups Use Artificial Intelligence to Grow, Lomit introduces the Lean AI Autonomy Scale, which ranks companies from 0 to 5 based on their level of AI & automation adoption.

A lot of companies aren't fully relying on AI and automation to power their growth strategies. In fact, on a Lean AI Autonomy Scale from 0 to 5, most companies are at stage 2 or 3, where they rely on the AI of some of their partners without fully harnessing the potential of these tools.

Here's how app marketers can start working their way up to level 5:

Put your performance strategy to the test by setting the right indicators. Marketers' KPIs should be geared towards measuring growth. Identify the metrics that show what's driving more high-quality user conversions and revenue, such as:

Analyzing data is a critical step towards measuring success through the right KPIs. When getting data ready to be automated and processed with AI, marketers should make sure:

The better the data, the more effective the decisions it will allow you to take. By aggregating data, marketers gain a comprehensive view of their efforts, which in turn leads to a better understanding of success metrics.

"You've got to make sure that you're giving them [partners] the right data so that their algorithms can optimize towards your outcomes and clearly define what success is." – Lomit Patel.

The role of AI is not to replace jobs or people, but to replace tasks that people do, letting them focus on the things they are good at.

With Lean AI, the machine does a lot of the heavy lifting, allowing marketers to process data and surface insights in a way that wasn't possible before, and with more data, the accuracy rate continues to go up.

It can be used to:

"With our AI machine, we're constantly testing different audiences, creatives, bids, budgets, and moving all of those different dials. On average, we're generally running about ten thousand experiments at scale. A majority of those are based on creatives; it's become a much bigger lever for us." – Lomit Patel.

There's a reason why growth partners have been around for a long time. For a lot of companies, the hassle of taking all marketing operations in-house doesn't make sense. At first, building a huge in-house data science team might seem like a great way to start leveraging AI, but:

Performance partners bring experience from working with multiple players across a number of verticals, making it easier to identify and implement the most effective automation strategy for each marketer. Their knowledge about industry benchmarks and best practices goes a long way in helping marketers outscore their competitors.

Last but not least, once you find the right partners, set them up for success by sharing the right data.

These recommendations are the takeaways from the first episode of App Marketers Unplugged. Created by Jampp, this video podcast series connects industry leaders and influencers to discuss challenges and trends with their peers.

Watch the full App Marketers Unplugged session with Lomit Patel to learn more about how Lean AI can help you gain user insights more efficiently and what marketers need to sail through the automation journey.

Read more here:
4 tips to upgrade your programmatic advertising with Machine Learning - Customer Think

DIY Camera Uses Machine Learning to Audibly Tell You What it Sees – PetaPixel

Adafruit Industries has created a machine learning camera built with the Raspberry Pi that can identify objects extremely quickly and audibly tell you what it sees. The group has listed all the necessary parts you need to build the device at home.

The camera is based on Adafruit's BrainCraft HAT add-on for the Raspberry Pi 4 and uses TensorFlow Lite object recognition software to recognize what it is seeing. According to Adafruit's website, it's compatible with both the 8-megapixel Pi camera and the 12.3-megapixel interchangeable-lens version of the module.

While interesting on its own, DIY Photography makes a solid point by explaining a more practical use case for photographers:

You could connect a DSLR or mirrorless camera from its trigger port into the Pi's GPIO pins, or even use a USB connection with something like gPhoto, to have it shoot a photo or start recording video when it detects a specific thing enter the frame.

A camera that is capable of recognizing what it is looking at could be used to only take a photo when a specific object, animal, or even a person comes into the frame. That would mean it could have security system or wildlife monitoring applications. Whenever you might wish your camera knew what it was looking at, this kind of technology would make that a reality.

You can find all the parts needed to build your own version of this device on Adafruit's website. They have also published an easy machine learning guide for the Raspberry Pi, as well as a guide on running TensorFlow Lite.

(via DPReview and DIY Photography)

Continued here:
DIY Camera Uses Machine Learning to Audibly Tell You What it Sees - PetaPixel

Ethical Machine Learning as a Wicked Problem Machine Learning Times – The Predictive Analytics Times

By: Sherril Hayes, Executive Director, Analytics and Data Science Institute and Professor of Conflict Management, Analytics & Data Science Institute, College of Computing and Software Engineering, Kennesaw State University

In the 1950s and 1960s, the social and behavioral sciences were at the cutting edge of innovation. Scientific techniques and quantitative analyses were being applied to some of the most pressing social problems. The thinking was: if NASA can put men in space, why can't we use these techniques to solve the problems of housing discrimination and school desegregation? Despite the investment, effort, and professionalization of these fields, the consensus was that they were failing. Why? In 1973 Horst Rittel, a mathematician and Professor in the Science of Design at UC Berkeley, and his colleague Melvin Webber introduced the


Read this article:
Ethical Machine Learning as a Wicked Problem Machine Learning Times - The Predictive Analytics Times

SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD – Sports Video Group

This fall SVG will be presenting a series of White Papers covering the latest advancements and trends in sports-production technology. The full series of SVG's Tech Insight White Papers can be found in the SVG Fall SportsTech Journal HERE.

Following the height of the 2020 global pandemic, live sports are starting to re-emerge worldwide albeit predominantly behind closed doors. For the majority of sports fans, video is the only way they can watch and engage with their favorite teams or players. This means the quality of the viewing experience itself has become even more critical.

With UHD being adopted by both households and broadcasters around the world, there is a marked expectation around visual quality. To realize these expectations in the immediate term, it will be necessary for some years to up-convert from HD to UHD when creating 4K UHD sports channels and content.

This is not so different from the early days of HD, when SD sports content had to be up-converted to HD. In the intervening years, however, machine learning has progressed sufficiently to be a serious contender for performing better up-conversions than more conventional techniques specifically designed for TV content.

Ideally, we want to process HD content into UHD with a simple black box arrangement.

The problem with conventional up-conversion, though, is that it does not offer an improved resolution, so does not fully meet the expectations of the viewer at home watching on a UHD TV. The question, therefore, becomes: can we do better for the sports fan? If so, how?

UHD is a progressive scan format, with the native TV formats being 3840×2160, known as 2160p59.94 (usually abbreviated to 2160p60) or 2160p50. The corresponding HD formats, with the frame/field rates set by region, are either progressive 1280×720 (720p60 or 720p50) or interlaced 1920×1080 (1080i30 or 1080i25).

Conversion from HD to UHD for progressive images at the same rate is fairly simple. It can be achieved using spatial processing only. Traditionally, this might use a bi-cubic interpolation filter (a 2-dimensional interpolation commonly used for photographic image scaling). This uses a grid of 4×4 source pixels and interpolates intermediate locations in the center of the grid. The conversion from 1280×720 to 3840×2160 requires a 3x scaling factor in each dimension and is almost the ideal case for an upsampling filter.

These types of filters can only interpolate, producing a result better than nearest-neighbor or bi-linear interpolation, but the image does not have the appearance of higher resolution.
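The interpolation-only limit can be seen in a minimal sketch (using separable linear interpolation on a tiny array rather than the bi-cubic filter described above): interpolated output values never leave the range of the source pixels, so no new detail can appear.

```python
import numpy as np

def upscale3x_linear(img):
    """Separable linear interpolation to 3x the size.

    A simplified stand-in for the bi-cubic filter described above:
    like bi-cubic, it can only interpolate between existing pixels.
    """
    h, w = img.shape
    ys = np.linspace(0, h - 1, 3 * h)
    xs = np.linspace(0, w - 1, 3 * w)
    # Interpolate rows first, then columns (separable filtering).
    tmp = np.stack([np.interp(ys, np.arange(h), img[:, c]) for c in range(w)], axis=1)
    out = np.stack([np.interp(xs, np.arange(w), tmp[r, :]) for r in range(3 * h)], axis=0)
    return out

hd = np.array([[0.0, 1.0],
               [1.0, 0.0]])
uhd = upscale3x_linear(hd)
# The 6x6 output stays within the source value range: nothing new is invented.
```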

Machine Learning (ML) is a technique whereby a neural network learns patterns from a set of training data. Images are large, and it becomes unfeasible to create neural networks that process this data as a complete set. So a different structure, known as a Convolutional Neural Network (CNN), is used for image processing. CNNs extract features from the image by successively processing subsets of the source image, and then process the features rather than the raw pixels.
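The feature-extraction step at the heart of a CNN can be sketched with a single hand-written convolution in plain numpy (illustrative only; a real CNN learns many such kernels from data and stacks them in layers):

```python
import numpy as np

def conv2d_valid(img, kernel):
    # One "valid" 2-D convolution (cross-correlation, as in deep learning):
    # slide the kernel over the image and sum the elementwise products.
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A hand-written horizontal-gradient kernel; a trained CNN would learn
# kernels like this rather than having them specified.
edge_kernel = np.array([[1.0, -1.0]])
img = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))  # vertical edge in the middle
features = conv2d_valid(img, edge_kernel)    # responds only at the edge
```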

Up-conversion process with neural network processing

The inbuilt non-linearity, in combination with feature-based processing, means CNNs can invent data not in the original image. In the case of up-conversion, we are interested in the ability to create plausible new content that was not present in the original image, but that doesn't modify the nature of the image too much. The CNN used to create the UHD data from the HD source is known as the Generator CNN.

When input source data needs to be propagated through the whole chain, possibly with scaling involved, then a specific variant of a CNN known as a Residual Network (ResNet) is used. A ResNet has a number of stages, each of which includes a contribution from a bypass path that carries the input data. For this study, a ResNet with scaling stages towards the end of the chain was used as the Generator CNN.
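The bypass path of a ResNet stage can be sketched in a few lines of numpy (the `tanh` transform here is a stand-in for the learned convolutional layers, not the architecture used in the study):

```python
import numpy as np

def residual_stage(x, weights):
    # One ResNet stage: a learned transform plus a bypass path that
    # carries the input forward unchanged.
    return np.tanh(x @ weights) + x

rng = np.random.default_rng(0)
x = rng.random((1, 4))
w = rng.random((4, 4)) * 0.1
y = residual_stage(x, w)
# With zero weights the transform vanishes and the stage passes x through,
# which is what lets source data propagate through the whole chain.
```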

For the Generator CNN to do its job, it must be trained with a set of known data: patches of reference images, with a comparison made between the output and the original. For training, the originals are a set of high-resolution UHD images, down-sampled to produce HD source images, then up-converted and finally compared to the originals.

The difference between the original and synthesized UHD images is calculated by the compare function with the error signal fed back to the Generator CNN. Progressively, the Generator CNN learns to create an image with features more similar to original UHD images.
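A minimal sketch of this training setup shows the data flow: down-sample a UHD original to make the HD source, up-convert it, and compute the compare error that would be fed back. The Generator CNN is replaced here by a placeholder nearest-neighbour upscaler, and the box-filter down-sampler is an assumption, not the study's actual pipeline.

```python
import numpy as np

def downsample2x(uhd):
    # 2x2 box-filter average: stand-in for producing HD training sources
    # from high-resolution UHD originals.
    h, w = uhd.shape
    return uhd.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def l1_compare(original_uhd, synthesized_uhd):
    # Pixel-wise compare function whose error signal is fed back to the
    # Generator CNN during training.
    return np.abs(original_uhd - synthesized_uhd).mean()

rng = np.random.default_rng(0)
original = rng.random((8, 8))            # UHD reference patch
hd_source = downsample2x(original)       # down-sampled training input
# Placeholder "generator": nearest-neighbour up-conversion.
synthesized = hd_source.repeat(2, axis=0).repeat(2, axis=1)
loss = l1_compare(original, synthesized)
```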

The training process is dependent on the data set used for training, and the neural network tries to fit the characteristics seen during training onto the current image. This is intriguingly illustrated in Google's AI Blog [1], where a neural network presented with a random noise pattern introduces shapes like the ones used during training. It is important that a diverse, representative content set is used for training. Patches from about 800 different images were used for training during MediaKind's research.

The compare function affects the way the Generator CNN learns to process the HD source data. It is easy to calculate a sum of absolute differences between the original and the synthesized image, but this causes an issue due to training-set imbalance: real pictures have large proportions with relatively little fine detail, so the data set is biased towards regenerating a result very similar to that of a bicubic interpolation filter.

This doesn't really achieve the objective of creating plausible fine detail.

Generative Adversarial Networks (GANs) are a relatively new concept [2], in which a second neural network, known as the Discriminator CNN, is used and is itself trained during the training of the Generator CNN. The Discriminator CNN learns to detect the difference between features that are characteristic of original UHD images and synthesized UHD images. During training, the Discriminator CNN sees either an original UHD image or a synthesized UHD image, with the detection correctness fed back to the discriminator and, if the image was a synthesized one, also fed back to the Generator CNN.

Each CNN is attempting to beat the other: the Generator by creating images that have characteristics more like originals, while the Discriminator becomes better at detecting synthesized images.

The result is the synthesis of feature details that are characteristic of original UHD images.

With a GAN approach, there is no real constraint to the ability of the Generator CNN to create new detail everywhere. This means the Generator CNN can create images that diverge from the original image in more general ways. A combination of both compare functions can offer a better balance, retaining the detail regeneration, but also limiting divergence. This produces results that are subjectively better than conventional up-conversion.
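A sketch of such a combined compare function, with an assumed binary-cross-entropy adversarial term and an illustrative weighting (`adv_weight` is a made-up value, not taken from the study):

```python
import numpy as np

def bce(pred, target):
    # Binary cross-entropy for the adversarial (Discriminator) term.
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

def generator_loss(disc_on_fake, original, synthesized, adv_weight=0.01):
    # Combined compare function: the pixel term limits divergence from the
    # original, while the adversarial term rewards fooling the Discriminator.
    pixel = np.abs(original - synthesized).mean()
    adversarial = bce(disc_on_fake, np.ones_like(disc_on_fake))
    return pixel + adv_weight * adversarial

original = np.zeros((4, 4))
synthesized = np.zeros((4, 4))
disc_on_fake = np.array([0.5])   # Discriminator is unsure about the fake
loss = generator_loss(disc_on_fake, original, synthesized)
```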

Conversion from 1080i60 to 2160p60 is necessarily more complex than from 720p60. Starting from 1080i, there are three basic approaches to up-conversion:

Training data is required here, which must come from 2160p video sequences. This enables a set of fields to be created, which are then downsampled, with each field coming from one frame in the original 2160p sequence, so the fields are not temporally co-located.

Surprisingly, results from field-based up-conversion tended to be better than using de-interlaced frame conversion, despite using sophisticated motion-compensated de-interlacing: the frame-based conversion being dominated by the artifacts from the de-interlacing process. However, it is clear that potentially useful data from the opposite fields did not contribute to the result, and the field-based approach missed data that could produce a better result.

A solution to this is to use multiple fields data as the source data directly into a modified Generator CNN, letting the GAN learn how best to perform the deinterlacing function. This approach was adopted and re-trained with a new set of video-based data, where adjacent fields were also provided.
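Splitting a frame into its two interlaced fields, the raw material supplied to the modified Generator CNN, is straightforward (a minimal sketch; how the study actually packaged adjacent fields as inputs is not specified here):

```python
import numpy as np

def split_fields(frame):
    # Interlaced video: top field = even lines, bottom field = odd lines.
    return frame[0::2, :], frame[1::2, :]

frame = np.arange(16).reshape(4, 4)
top, bottom = split_fields(frame)
# Adjacent fields like these become extra inputs to the modified
# Generator CNN, which then learns the de-interlacing implicitly.
```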

This led to both high visual spatial resolution and good temporal stability. These are, of course, best viewed as a video sequence, however an example of one frame from a test sequence shows the comparison:

Comparison of a sample frame from different up-conversion techniques against original UHD

Up-conversion using a hybrid GAN with multiple fields was effective across a range of content, but is especially relevant for the visual sports experience to the consumer. This offers a realistic means by which content that has more of the appearance of UHD can be created from both progressive and interlaced HD source, which in turn can enable an improved experience for the fan at home when watching a sports UHD channel.

[1] A. Mordvintsev, C. Olah and M. Tyka, "Inceptionism: Going Deeper into Neural Networks," 2015. [Online]. Available: https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html

[2] I. J. Goodfellow et al., "Generative Adversarial Nets," Neural Information Processing Systems Proceedings, vol. 27, 2014.

Read more:
SVG Tech Insight: Increasing Value of Sports Content Machine Learning for Up-Conversion HD to UHD - Sports Video Group

Survey: Machine learning will (eventually) help win the war against financial crime – Compliance Week

It is still early days for many institutions, but what is clear is the anti-money laundering (AML) function is on the runway to using ML to fight financial crime. The benefits of ML are indisputable, though financial institutions (FIs) vary in levels of adoption.

Guidehouse and Compliance Week tapped into the ICA's network of 150,000-plus global regulatory and financial compliance professionals for the survey, which canvassed 364 compliance professionals (including 229 employed by financial institutions, or 63 percent of all respondents) to determine the degree to which FIs are using ML. It highlights the intended and realized program benefits of ML implementation; top enterprise risks and pitfalls in adopting and integrating ML to fight financial crime; and satisfaction with results post-implementation. The results also offer insights into what kinds of impediments are holding organizations back from full buy-in.

About a quarter of all surveyed respondents (24 percent) reported working at major FIs with assets at or over $40 billion; this cohort, hereafter referred to as large FIs, represents the bleeding edge of ML in AML. More than half (58 percent) have dedicated budgets for ML, and 39 percent are frontrunners in the industry, having developed or fully bought in on ML products already.

Nearly two-thirds (62 percent) of all respondents are individuals working in AML/Bank Secrecy Act (BSA) or compliance roles; this cohort, hereafter referred to as industry stakeholders, represents the population of users in the process of operationalizing ML in fighting financial crime at their respective institutions.

If large FIs are on the front line in ML adoption, then industry stakeholders are taking up the rearguard. Unlike respondents in the large FIs cohort, the majority of professionals in the industry stakeholders cohort are refraining from taking action steps around ML projects focused on fighting financial crime at this time. Nearly a third (32 percent) are abstaining from talking about ML at all at their institutions; another third (33 percent) are just talking about it, i.e., they have no dedicated budget, proof of concept, or products under development just yet.

Nonetheless, there is nearly universal interest in ML among large FIs: 80 percent say they hope to reduce risk with its help, and 61 percent report they have realized this benefit already, demonstrating a compelling ROI.

While large FIs are confident in testing the ML waters, many remain judicious in how much they are willing to spend. Dedicated budgets for ML in AML remain conservative; nearly two-thirds of large FIs (61 percent) budgeted $1 million or less, pre-pandemic, toward implementing ML solutions in AML. The most frequently occurring response, at just over one-third, was a budget of less than $500,000 (34 percent).

Working with modest budgets, large FIs are relying on their own bandwidth and expertise to build ML technology: 71 percent are building their own in-house solution, eschewing any off-the-shelf technology, and more than half (54 percent) are training internal staff rather than hiring outside consultants.

"With the larger banks, there's just a tendency to look inward first. I'm a big proponent of leveraging commercially available products," says Tim Mueller, partner in the Financial Crimes practice at Guidehouse. Mueller predicts vendor solutions will become more popular as the external market matures and better options become available. "I think that's the only way for this to work down-market," he adds.

A key driver of ML in the AML function has been the allure of enabling a real-time and continuous Know Your Customer (KYC) process. More than half of all surveyed respondents (55 percent) state improving KYC is the top perceived benefit to their organizations in operationalizing ML to fight financial crime, including 54 percent of large FIs and 59 percent of industry stakeholders.

This trend suggests the challenges associated with the KYC process modestly outweigh competing AML priorities as those most in need of an efficiency upgrade. From customer due diligence (CDD) to customer risk-ranking to enhanced due diligence (EDD) to managing increased regulatory scrutiny, the demands of KYC are both laborious and time-intensive. Banks want to harness a way to work smarter, not harder. ML technology may provide a viable means.

"ML is getting applied in the areas of greatest pain for financial institutions," notes Mueller, referring to respondents' apparent keenness to improve the KYC process. "There's the area of greatest pain, and that usually represents the area of greatest potential." When asked which additional areas have the greatest potential, Mueller cites transaction monitoring and customer risk rating.

The truth, however, is each area of the AML program is part of a larger puzzle; the pieces interconnect. For instance, an alert generated by a transaction-monitoring system about a potentially suspicious customer is not generated in a vacuum, but rather is based on the adequacy of the FI's customer risk assessment processes. Because of the cyclical nature of an AML program, applying ML to one area could potentially translate into a holistic improvement to the program overall.

"It's really important to remember this: The area of pain is EDD and CDD, and the area of potential is AML transaction monitoring, and making sure you've got the right alerts. Guess what? The alerts are based on the CDD and EDD. They are interdependent," points out Salvatore LaScala, partner in the AML division of Guidehouse.

While ML takes considerable time to implement and fine-tune (a typical runway is 6 to 12 months, Mueller says), a reduction of risk can be realized relatively quickly.

For organizations that have implemented ML to fight financial crime, reducing risk is overwhelmingly the key benefit realized. Nearly two-thirds (61 percent) of large FIs state their companies have realized the benefit of reducing risk since deploying ML to fight financial crime. What is somewhat puzzling, however, is only 44 percent of large FIs state they have realized efficiency gains.

A similar incongruity is found among the industry stakeholders: 61 percent state they have effectively reduced risk, but only 51 percent indicate they have achieved efficiency gains.

If the adoption of ML has increased institutions' effectiveness at reducing risk in AML, why does it appear efficiency gains are lagging? Shouldn't effectiveness and efficiency go hand in hand?

Mueller says no. Effectiveness comes first. From the perspective of an AML professional working at an FI: "You spend a lot of money implementing machine learning and AI," Mueller explains. "You spend a lot of time. You have a lot of SMEs (subject matter experts) dedicated to making sure it's working correctly. You get it implemented; then you must watch it work; then you have to improve it over time. You're not always going to see efficiency gains right away."

LaScala says, While FIs have made tremendous effectiveness and efficiency strides in leveraging machine learning for transaction monitoring, we believe that they will enjoy even greater success leveraging it for customer due diligence in the future. As the technology evolves, we expect that FIs will be able to review potentially high-risk customer populations more frequently and comprehensively, with less effort.

Fifty-one percent of respondents at large FIs and 45 percent of industry stakeholders cite only partial satisfaction with the results of deploying ML. This reaction may be an indicator that the use of ML in this capacity/area is still emerging.

There has been an increase in the number of false matches in name-screening and transaction monitoring cases that end as risk-irrelevant, noted an AML associate working at a large commercial bank headquartered in Europe that conducts business in the Middle East.

No clear results, remarked a chief AML officer working in wealth management at a small FI that is headquartered and conducts business in Europe.

ML is good. However, it is not efficient in full coverage, another AML associate, who indicated s/he does not work at an FI, said. Manpower is still needed for several products of compliance such as enhanced due diligence.

While the lukewarm endorsement of ML from respondents does not surprise Mueller, it does disappoint him. "I do think there are significant gains to be had there, both from an effectiveness and an efficiency perspective," Mueller maintains. He believes the lack of satisfaction from users may result from unrealistic expectations and poor communication at the outset of development.

"If people are starting more with [the mindset of], 'Hey, this is our strategy, we're ready to go, let's launch into this,' then leadership will expect big things right out of the gate, and that's hard to accomplish with anything, much less with something that's so data-driven and that takes so long to develop," Mueller says. "Instead they need to start with a small project and achieve success. Then the strategy can be defined using that success as a starting point."

FIs will continue to increase investment and reliance on ML to bolster their financial crime prevention and detection efforts, LaScala adds. We believe that these advanced technologies will ultimately become widely adopted so long as they are transparent and can be explained to the regulator. In fact, someday not far off, systems deploying ML might actually be a regulatory expectation.

Excerpt from:
Survey: Machine learning will (eventually) help win the war against financial crime - Compliance Week

Xbox Series X is more suited to machine learning than PS5 says David Cage – MSPoweruser – MSPoweruser

Xbox Series X may have one additional advantage over Sony's PlayStation 5 console: machine learning.

In an interview with WCCFTech, Quantic Dream CEO David Cage revealed that the design of Microsoft's Xbox Series X gives it the advantage in machine learning compared to the PlayStation 5.

Cage revealed that while the slightly better CPU and beefier GPU of the Xbox Series X give Microsoft a slight edge over the PS5, it's really the machine learning capabilities of the Xbox console that may help it succeed against the PlayStation's faster SSD.

"The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia's DLSS," Cage explained.

However, the PlayStation-focused developer also explained that Sony has consistently punched above its weight to deliver great-looking games on not-so-powerful hardware in the past.

"I think that the pure analysis of the hardware shows an advantage for Microsoft, but experience tells us that hardware is only part of the equation: Sony showed in the past that their consoles could deliver the best-looking games because their architecture and software were usually very consistent and efficient."

In a previous interview, Cage explained that he believes the split nature of Xbox Series X and Xbox Series S is confusing for consumers and developers.

Read more:
Xbox Series X is more suited to machine learning than PS5, says David Cage - MSPoweruser

Altruist: A New Method To Explain Interpretable Machine Learning Through Local Interpretations of Predictive Models – MarkTechPost

Artificial intelligence (AI) and machine learning (ML) have been the digital world's trendsetters in recent times. Although ML models can make accurate predictions, the logic behind those predictions often remains unclear to users, and a lack of evaluation and selection criteria makes it difficult for the end-user to select the most appropriate interpretation technique.

How do we extract insights from the models? Which features should be prioritized while making predictions, and why? These questions remain prevalent. Interpretable Machine Learning (IML) is an outcome of the questions mentioned above: a layer on top of ML models that helps human beings understand the procedure and logic behind a model's inner workings.

Ioannis Mollas, Nick Bassiliades, and Grigorios Tsoumakas have introduced a new methodology to make IML more reliable and understandable for end-users. Altruist, a meta-learning method, aims to help the end-user choose an appropriate technique based on feature importance by providing interpretations through logic-based argumentation.
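Altruist's logic-based argumentation machinery is detailed in the paper and repository linked below. Purely as an illustrative sketch of the underlying idea (scoring how strongly each feature locally influences a single prediction), a simple perturbation-based importance estimate might look like this; the model, data, and perturbation size are hypothetical stand-ins, not the authors' method:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Fit a black-box model whose individual predictions we want to interpret.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_importance(model, x, eps=0.5):
    """Score each feature of one instance x by nudging it and measuring
    how much the predicted class-1 probability moves."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    deltas = []
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        deltas.append(abs(model.predict_proba(x_pert.reshape(1, -1))[0, 1] - base))
    return np.array(deltas)

scores = local_importance(model, X[0])
top_feature = int(scores.argmax())  # the locally most influential feature
```

A meta-learner in the spirit of Altruist would then argue over scores like these from several interpretation techniques to help the user pick the most trustworthy one.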

The meta-learning methodology is composed of the following components:

Paper: https://arxiv.org/pdf/2010.07650.pdf

Github: https://github.com/iamollas/Altruist

Related


See the original post:
Altruist: A New Method To Explain Interpretable Machine Learning Through Local Interpretations of Predictive Models - MarkTechPost

Retracing the evolution of classical music with machine learning – Design Products & Applications

05 February 2021

Researchers in EPFL's Digital and Cognitive Musicology Lab in the College of Humanities used an unsupervised machine learning model to reveal how modes such as major and minor have changed throughout history.

Many people may not be able to define what a minor mode is in music, but most would almost certainly recognise a piece played in a minor key. That's because we intuitively differentiate the set of notes belonging to the minor scale, which tend to sound dark, tense, or sad, from those in the major scale, which more often connote happiness, strength, or lightness.

But throughout history, there have been periods when multiple other modes were used in addition to major and minor or when no clear separation between modes could be found at all.

Understanding and visualising these differences over time is what Digital and Cognitive Musicology Lab (DCML) researchers Daniel Harasim, Fabian Moss, Matthias Ramirez, and Martin Rohrmeier set out to do in a recent study, which has been published in the open-access journal Humanities and Social Sciences Communications. For their research, they developed a machine learning model to analyze more than 13,000 pieces of music from the 15th to the 19th centuries, spanning the Renaissance, Baroque, Classical, early Romantic, and late-Romantic musical periods.

"We already knew that in the Renaissance [1400-1600], for example, there were more than two modes. But for periods following the Classical era [1750-1820], the distinction between the modes blurs together. We wanted to see if we could nail down these differences more concretely," Harasim explains.

Machine listening (and learning)

The researchers used mathematical modelling to infer both the number and characteristics of modes in these five historical periods in Western classical music. Their work yielded novel data visualizations showing how musicians during the Renaissance period, like Giovanni Pierluigi da Palestrina, tended to use four modes, while the music of Baroque composers, like Johann Sebastian Bach, revolved around the major and minor modes. Interestingly, the researchers could identify no clear separation into modes in the complex music written by late-Romantic composers, like Franz Liszt.

Harasim explains that the DCML's approach is unique because it is the first time that unlabelled data have been used to analyse modes. This means that the pieces of music in their dataset had not been previously categorized into modes by a human.

"We wanted to know what it would look like if we gave the computer the chance to analyse the data without introducing human bias. So, we applied unsupervised machine learning methods, in which the computer 'listens' to the music and figures out these modes on its own, without metadata labels."

Although much more complex to execute, this unsupervised approach yielded especially interesting results which are, according to Harasim, more cognitively plausible with respect to how humans hear and interpret music.

"We know that musical structure can be very complex and that musicians need years of training. But at the same time, humans learn about these structures unconsciously, just as a child learns a native language. That's why we developed a simple model that reverse engineers this learning process, using a class of so-called Bayesian models that are used by cognitive scientists, so that we can also draw on their research."
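The study's actual model is Bayesian, as Harasim describes. Purely as a toy illustration of the unsupervised setup, one could represent each piece by a 12-bin pitch-class histogram and let a mixture model discover "modes" with no labels at all (the profiles and data below are synthetic inventions, not the DCML's dataset or method):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two synthetic "mode" profiles: relative weights over the 12 pitch classes.
major_profile = np.array([5, 1, 3, 1, 4, 3, 1, 5, 1, 3, 1, 3], dtype=float)
minor_profile = np.array([5, 1, 3, 4, 1, 3, 1, 5, 3, 1, 3, 1], dtype=float)

def sample_pieces(profile, n_pieces, notes_per_piece=200):
    """Draw note counts per piece and normalize to a pitch-class histogram."""
    counts = rng.multinomial(notes_per_piece, profile / profile.sum(), size=n_pieces)
    return counts / counts.sum(axis=1, keepdims=True)

pieces = np.vstack([sample_pieces(major_profile, 50),
                    sample_pieces(minor_profile, 50)])

# Unsupervised: the mixture model infers the groupings without mode labels.
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(pieces)
```

Varying the number of components per historical period, and comparing fits, mirrors in miniature the question the researchers asked of their 13,000-piece corpus.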

From class project to publication, and beyond

Harasim notes with satisfaction that this study has its roots in a class project that he and his co-authors Moss and Ramirez did together as students in EPFL professor Robert West's course, Applied Data Analysis. He hopes to take the project even further by applying their approach to other musical questions and genres.

"For pieces within which modes change, it would be interesting to identify exactly at what point such changes occur. I would also like to apply the same methodology to jazz, which was the focus of my PhD dissertation, because the tonality in jazz is much richer than just two modes."

See original here:
- Retracing the evolution of classical music with machine learning - Design Products & Applications

VA Aims To Reduce Administrative Tasks With AI, Machine Learning – Nextgov

Officials at the Department of Veterans Affairs are looking to increase efficiency and optimize their clinicians' professional capabilities by deploying advanced artificial intelligence and machine learning technologies.

In a November presolicitation, the VA seeks to gauge market readiness for advanced healthcare device manufacturing, spanning prosthetic solutions, surgical instruments, and personalized digital health assistant technology, as well as artificial intelligence and machine learning capabilities.

Dubbed Accelerating VA Innovation and Learning, or AVAIL, the program is looking to supplement and support agency health care operations, according to Amanda Purnell, an innovation specialist with the VA.

"What we are trying to do is utilize AI and machine learning to remove the administrative burden of tasks," she told Nextgov.

The technology requested by the department will be tailored to areas where a computer can do a better, more efficient job than a human, and thereby give people back time to complete demanding tasks that require human judgement.

Some of the areas where the AI and machine learning technology could be implemented include surgical preplanning, manufacturing submissions, and 3D printing, along with injection molding to produce plastic medical devices and other equipment.

Purnell also said that the VA is looking for technology that can handle the bulk of document analyses. Using machine learning and natural language processing to scan and detect patterns in medical images, such as CT scans, MRIs, and dermatology scans, is one of the ways the VA aims to digitize its administrative workload.

Staff at the VA are currently tasked with looking through faxes and other clinical data to route them to the right place. AVAIL would use natural language processing to manage these operations, adding human review when necessary.
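As a hedged sketch of how such a triage pipeline could work in principle (the routing categories, training snippets, and confidence threshold below are invented for illustration and are not the VA's actual system), a text classifier can propose a destination and flag low-confidence items for a person:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical routing categories with tiny stand-in training snippets.
docs = [
    "patient referral for cardiology consult",
    "referral request for orthopedic surgery",
    "pharmacy refill authorization lisinopril",
    "prescription refill request metformin",
    "claim for disability benefits appeal",
    "benefits claim supporting evidence",
]
labels = ["referral", "referral", "pharmacy", "pharmacy", "claims", "claims"]

router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(docs, labels)

incoming = "refill authorization for atorvastatin prescription"
predicted = router.predict([incoming])[0]
confidence = router.predict_proba([incoming]).max()
needs_human_review = confidence < 0.5  # uncertain items are queued for a person
```

In a real deployment, anything falling under the threshold would go to the human review Purnell describes rather than being routed automatically.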

Purnell said that the forthcoming technology would emphasize streamlining processes that are better and faster done by machines, allowing humans to do something that is "more kind of human-meaningful," and also allowing clinicians to "operate to the top of their license."

She noted that machines are highly adept at scanning and analyzing images with AI. The VA procedure would likely have the AI technology to do a preliminary scan, followed by a human clinician to make their expert opinion based on results.

With machine learning handling the bulk of these processes along with other manufacturing and designing needs, clinicians and surgeons within the VA could focus more on applying their medical and surgical skills. Purnell used the example of a prosthetist getting more time to foster a human connection with a client rather than oversee other health care devices and manufacturing details.

"It is making sure humans are used to their best advantage, and that we're using technology to augment the human experience," she said.

The AVAIL program also stands to improve the ongoing modernization effort of the VA's beleaguered electronic health record (EHR) system, which has suffered deployment hiccups thanks to difficult interfaces and budget constraints.

The AI and machine learning technology outlined in the presolicitation could also support new EHR infrastructure and focus on an improved user experience, mainly with an improved platform interface and other accessibility features.

Purnell underscored that having AI manage form processing and data sharing capabilities, including veteran claims and benefits, is another beneficial use case.

"We're alleviating that admin burden and increasing the experience both for veterans and our clinicians, in that veterans are getting more facetime with our clinicians and clinicians are doing more of what they are trained to do," Purnell said.

Read this article:
VA Aims To Reduce Administrative Tasks With AI, Machine Learning - Nextgov

Meet the AAS Keynote Speakers: Dr. Brian Nord – Astrobites

In this series of posts, we sit down with a few of the keynote speakers of the 237th AAS meeting to learn more about them and their research. You can see a full schedule of their talks here, and read our other interviews here!

You might have noticed a rise in the number of astronomy publications (and a corresponding increase in the number of Astrobites!) about machine learning (ML). Over the last decade, ML has become a powerful statistical tool, but as ML expert Dr. Brian Nord knows, it's not a one-size-fits-all solution. "You can apply machine learning to everything, which is not always the best idea," Nord says, then smiles. "Which took me a few years to learn."

So what exactly is the role of ML in astronomy? Nord (who has an impressive list of positions: Scientist at Fermilab, CASE Scientist in the Department of Astronomy and Astrophysics and Senior Member of the Kavli Institute of Cosmological Physics at the University of Chicago, and co-founder of the Deep Skies Lab) plans to discuss this exact question in his plenary talk at #AAS237.

"I'd like to talk about the winding road that machine learning has gone through in astronomy," Nord says. "I think there's a lot to learn from the path that was taken as a community, and I want to review that and give a sense of where I think we'll get the most use out of deep learning." But Nord's plenary talk will also go one step further. "The other part will be: what's the role of the scientist who develops deep learning tools [...] in the societal implications of the work we're doing? If I'm trying to improve an algorithm to make it better at recognizing galaxies, how much am I also contributing to algorithms that are good at facial recognition, which we know is biased against people of color? What role do I play as a scientist? What role do I play as a Black scientist?"

Nords own research in machine learning has taken three primary directions: using ML to analyze data, using ML to design experiments, and studying ML itself.

Nord, a cosmologist by training, has often used ML to classify objects like gravitational lenses, which are useful for cosmology (see, for example, this recent Astrobite).

"One of the big things that you might want to do with machine learning is detect rare objects. Strong gravitational lenses are a rare type of object in the way they look on the sky," Nord says. "The field [of gravitational lens studies], for a better part of the last 30 or 40 years, has been mostly using human visual inspection to detect whether something is a strong lens or not. As you start thinking about [Rubin Observatory] and JWST and other big telescopes and surveys, that's not really gonna cut it anymore."

Machine learning is useful not just for analyzing data, but also for getting data: specifically, improving experiment design. The Dark Energy Spectroscopic Instrument (DESI) is a massive spectrograph that is just beginning to obtain spectra for tens of millions of galaxies. As DESI was being built in the 2010s, Nord says, "some colleagues and I started asking [...] Why aren't we simulating the entire instrument at once?" Nord and colleagues built SPectrOscopic KEn Simulation (SPOKES), a tool that simulates the instrument from end to end and lets you see the scope of our knowledge (our "ken") about the instrument. Now, Nord is thinking even bigger. "I'm starting to ask questions like: Should we be fully operating surveys manually?" Nord says. Perhaps ML can be used with tools like SPOKES to automate experiment design.

Finally, and perhaps most importantly, Nord tries to understand ML itself, and the problems inherent in ML approaches. Deep learning is not great at giving error estimates that are immediately interpretable to a physicist. "It doesn't come out in terms of, you know, statistical uncertainty and systematic uncertainty," Nord says. He recently worked with a postdoc to test different ML methods to simulate a simple pendulum system. "We showed that if you just take these [algorithms] off the shelf and think you're going to get an answer out of it, you're not. So we're trying to develop new tools to do this uncertainty quantification."
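One common recipe for the kind of uncertainty quantification Nord describes is a deep ensemble: train several networks and treat their disagreement as an uncertainty signal. A minimal sketch on toy pendulum-like data follows (this is an illustrative technique, not the specific method from Nord's work):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Toy noisy oscillation standing in for pendulum measurements.
X = np.linspace(0, 6, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0.0, 0.1, size=200)

# Deep ensemble: same architecture, different random initializations.
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=s).fit(X, y)
    for s in range(5)
]

# Compare an in-distribution point with one far outside the training range.
X_test = np.array([[3.0], [12.0]])
preds = np.stack([m.predict(X_test) for m in ensemble])
mean, spread = preds.mean(axis=0), preds.std(axis=0)
# The ensemble members disagree far more at x=12, flagging low confidence there.
```

The spread is only a heuristic stand-in for the statistical and systematic uncertainties physicists want, which is exactly the gap Nord's group is working on.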

So what is the path moving forward for ML as a statistical tool for astronomy? People should be asking what their ultimate goal is, and working backwards from there, Nord explains. "So if your goal is to classify and find things, then the main thing one would need to worry about is bias from the training sample to the test sample. That's not a solved problem, but there are tools to mitigate that [...]. But if we want to go beyond that, into actually getting a measurement, then we need to be careful that we're not treating deep learning like it's anything magic. It's a tool that we need to really deeply understand."

We also need to think about how ML can be used as a tool in power structures, both in science and in society at large. Nord points out this 2016 ProPublica article that discusses how, in several US states, judges can use closed-source ML algorithms to help decide how long to send someone to jail, and how these algorithms are biased against people of color, especially Black people.

"When we see a new technology come about that is even half as disruptive as artificial intelligence, I think one of the questions that we should be asking is: How will this be used to concentrate power? And, alongside that, how can it be used to equilibrate power dynamics?" Nord says. "The further along we get in the development of artificial intelligence as a fundamental tool, in science and in other places in society, without asking those questions, I think the landscape of our opportunity for using it as a chance to equilibrate power changes. [...] I think there's still time to get the word out, but it feels like we're losing time. I don't want to lose this opportunity."

How did Nord end up studying ML and cosmology? "I started in physics in college, and at the end of college I was going to do string theory," Nord says. But as he began his PhD at the University of Michigan, he found that string theory started to sit "farther and farther away from the questions I wanted to ask about the universe." So he went into cosmology, which he felt provided a more direct conduit to "philosophical questions about existence and my place in the universe and my childhood dreams of space travel," and he eventually decided to switch into the subfield of strong lensing after his PhD. That's when he got into ML. "I faced this problem that made me sad, to have to hand-scan tens of thousands of images, [and] I started thinking about deep learning."

Nord is grateful for his experiences switching fields. "I'm glad that I tested different kinds of science. [...] You don't need to like all of physics. You can pick one thing, and then like it and do it." For students who are trying to pick a field to work in, Nord suggests thinking carefully about "the sociology of the scientific community that [you're] in. [...] If the project's great but the advisor is terrible, you're gonna have to live with that for years to come."

Nord also encourages current students to remember that at the end of the day, working in physics and astronomy is just a job. "You can love it, but it doesn't have to be the thing that owns our lives. I think there's this idea in academia that if you're not a full part of it in every single way, if you don't give up yourself completely, then you're not worthy. And I think that's terrible," he says. "Even as a PI, that's still out there. [...] The systems that we work in still dictate a lot of the power."

As a result, people who choose to work to change these systems should be careful when setting their expectations. "The institution of the academy has significant flaws that allow people to be disenfranchised and oppressed," Nord says. "Systemic racism exists in academia, full stop, it's still there. Systemic misogyny exists in academia, full stop, it's still there." And even though equity, diversity, and inclusion efforts sometimes claim to work against these flaws, they are not equivalent to justice. "Those three terms in tandem form this conceptual framework that are often used to, either in purpose or by accident, detract from actual justice efforts."

Finally, Nord reminds students that while the goal of science is to learn about nature in an objective and unbiased way, "scientists are not ourselves objective. And when we try to convince anyone that we are, we just look more and more foolish. This subjectivity that we have, it exists because we're human and we're social creatures, so we need to accept that and figure out ways to create a just community for ourselves."

Interested in machine learning in astronomy and society? Check out Dr. Nord's plenary talk at 3:10 PM ET on Monday, January 11 at #AAS237!

Astrobite edited by: Gloria Fonseca Alvarez

Featured image credit: American Astronomical Society

About Mia de los Reyes: I'm a grad student at Caltech, where I study the chemical compositions of nearby dwarf galaxies. Before coming to sunny California, I spent a year as a postgrad at the University of Cambridge, studying star formation in galaxies. Now that I've escaped to warmer climates, my hobbies include rock climbing, aerial silks, and finding free food on campus.

Continue reading here:
Meet the AAS Keynote Speakers: Dr. Brian Nord - Astrobites

4Paradigm Defends its Championship in China’s Machine Learning Platform Market in the 1st Half of 2020, According to IDC – Yahoo Finance

4Paradigm has held a leadership position from 2018 through the first half of 2020

BEIJING, Jan. 21, 2021 /PRNewswire/ -- IDC, a premier global provider of market intelligence, has recently published its China AI Software and Application (2020 H1) Report (hereinafter referred to as the "Report"), in which 4Paradigm, an AI innovator recognized for its software standardization, industrial coverage, and solid customer base, is shown to have led China's machine learning platform market from 2018 through the first half of 2020 with an expanding market share, ahead of leading vendors such as Alibaba, Tencent, Baidu, and Huawei.

The Report looks back at China's AI market in 2020: from 2015 to 2020, every year saw new drivers emerging in the AI market, with the landscape continuously evolving from cognition, to exploration, to deep application, and then to scale-up. The AI market has been unprecedentedly prosperous since 2020, as both awareness of and investment in AI and data intelligence in China were boosted by pandemic control, new infrastructure initiatives, and the impact of international trade frictions. Since the second half of 2020, a series of government policies, such as the digital transformation of state-owned enterprises and the launch of intelligent computing centers, is expected to galvanize AI growth to new heights.

Looking to the future, Yanxia Lu, Chief AI Analyst of IDC China, says, "Market opportunities generated from continual AI implementation are just around the corner. To further expand market share, it is necessary to leverage technological leadership and product innovation for new market opportunities, to explore replicable and scalable application scenarios, and to unite partners with industrial know-how for the deployment of these technologies in enterprises."

The IDC report recognizes the advantages of 4Paradigm's machine learning platform and AutoML products in technological accumulation, enterprise-level product layout, commercial implementation performance, and the AI industrial ecosystem, making it an important benchmark for enterprises choosing a machine learning platform.


4Paradigm has built a full-stack AutoML algorithm layout spanning perceptive, cognitive, and decision-making algorithms, enabling enterprises to improve key decision-making performance and to scale up AI scenario deployment with a low threshold and high efficiency across all-dimensional observation, accurate orientation, and optimized decision-making.

4Paradigm released four products this year: Sage AIOS, an enterprise AI operating system; Sage HyperCycle ML, a fully automatic tool for scaled-up AI development; Sage CESS, a one-stop intelligent operation platform; and Sage One, a full-lifecycle AI computing power platform. Together they form a full-stack AI product matrix covering computing power, OS, production platform, and business system.

To help enterprises address the booming demand for moving online, 4Paradigm continues to provide online, intelligent, and precise operation capabilities to numerous prominent enterprises and organizations in China and abroad, among which are Bank of Communications, Industrial Bank, Huaxia Bank, Guosen Securities, Laiyifen, Feihe, China Academy of Railway Sciences, DHL, Zegna, Budweiser China, and Kérastase, enabling them to embrace digital transformation and seize new opportunities online.

With over 200 partners in 15 sectors, 4Paradigm is experiencing rapid increase in its eco partners and industrial coverage on the basis of existing ecosystem.

Despite the unprecedented boom in the AI market, enterprises face mounting challenges in their intelligent transformation, including the high development threshold of AI, low implementation efficiency, and poor business value. At the FutureScape China ICT Market Forecast Forum, an annual IDC event held recently, Zhenshan Zhong, Vice President of IDC China, offered detailed insights on IDC's ten predictions for the AI market in China from 2021 to 2025, among which AutoML (automated machine learning) ranks top. IDC holds that AutoML will lower the threshold of AI development and make inclusive AI a reality. It expects that the number of data analysts and modelling scientists using AutoML technology in end-to-end machine learning platforms, from data preparation to model deployment, will double by 2023.

Through products embedding AutoML technology and a rigorous implementation methodology, 4Paradigm has built systematic AutoML implementation solutions and pathways, which have enabled the successful implementation of over 10,000 AI applications for enterprises in finance, retail, healthcare, manufacturing, internet, media, government, energy, and telecommunications, among other sectors, with positive feedback from leaders and innovators in the tide of transformation. In the future, 4Paradigm will continue to promote the implementation of machine learning platforms and AutoML products in more industries and scenarios, helping more enterprises in their journey of intelligent transformation and upgrade for higher business efficiency while removing obstacles and boosting social productivity.

http://www.4paradigm.com

SOURCE 4Paradigm

Read more here:
4Paradigm Defends its Championship in China's Machine Learning Platform Market in the 1st Half of 2020, According to IDC - Yahoo Finance