The Worldwide Industry for Machine Learning in the Life Sciences is Expected to Reach $20.7 Billion by 2027 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Global Markets for Machine Learning in the Life Sciences" report has been added to ResearchAndMarkets.com's offering.

This report highlights the current and future market potential for machine learning in life sciences and provides a detailed analysis of the competitive environment, regulatory scenario, drivers, restraints, opportunities and trends in the market. The report also covers market projections from 2022 through 2027 and profiles key market players.

Companies Mentioned

The publisher analyzes each technology in detail, determines major players and current market status, and presents forecasts of growth over the next five years. Scientific challenges and advances, including the latest trends, are highlighted. Government regulations, major collaborations, recent patents and factors affecting the industry from a global perspective are examined.

Key machine learning in life sciences technologies and products are analyzed to determine present and future market status, and growth is forecast from 2022 to 2027. An in-depth discussion of strategic alliances, industry structures, competitive dynamics, patents and market driving forces is also provided.

Artificial intelligence (AI) is a term used to identify a scientific field that covers the creation of machines (e.g., robots) as well as computer hardware and software aimed at reproducing wholly or in part the intelligent behavior of human beings. AI is considered a branch of cognitive computing, a term that refers to systems able to learn, reason and interact with humans. Cognitive computing is a combination of computer science and cognitive science.

ML algorithms are designed to perform tasks such as data browsing, extracting information that is relevant to the scope of the task, discovering rules that govern the data, making decisions and predictions, and accomplishing specific instructions. As an example, ML is used in image recognition to identify the content of an image after the machine has been instructed to learn the differences among many different categories of images.

There are several types of ML algorithms, the most common of which are nearest neighbor, naive Bayes, decision trees, Apriori algorithms, linear regression, case-based reasoning, hidden Markov models, support vector machines (SVMs), clustering, and artificial neural networks. Artificial neural networks (ANNs) have achieved great popularity in recent years for high-level computing.

They are modeled to act similarly to the human brain. The most basic type of ANN is the feedforward network, which is formed by an input layer, a hidden layer and an output layer, with data moving in one direction from the input layer to the output layer, while being transformed in the hidden layer.
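To make that structure concrete, here is a minimal sketch of such a feedforward pass in Python; the layer sizes, random weights, and sigmoid activation are illustrative choices, not taken from the report.

```python
# A minimal sketch of the feedforward network described above, using NumPy.
# Layer sizes, weights, and the sigmoid activation are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# One input layer (4 features), one hidden layer (8 units), one output layer (3 classes).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    """Data flows one way: input -> hidden (transformation) -> output."""
    hidden = sigmoid(x @ W1 + b1)   # transformation in the hidden layer
    return hidden @ W2 + b2         # output layer

x = rng.normal(size=(1, 4))         # a single example with 4 features
print(forward(x))
```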

Report Includes

Key Topics Covered:

Chapter 1 Introduction

Chapter 2 Summary and Highlights

Chapter 3 Market Overview

3.1 Introduction

3.1.1 Understanding Artificial Intelligence in Healthcare

3.1.2 Artificial Intelligence in Healthcare Evolution and Transition

Chapter 4 Impact of the Covid-19 Pandemic

4.1 Introduction

4.1.1 Impact of Covid-19 on the Market

Chapter 5 Market Dynamics

5.1 Market Drivers

5.1.1 Investment in AI Health Sector

5.1.2 Rising Chronic Diseases

5.1.3 Advanced, Precise Results

5.1.4 Increasing Research and Development Budget

5.2 Market Restraints and Challenges

5.2.1 Reluctance Among Medical Practitioners to Adopt AI-Based Technologies

5.2.2 Privacy and Security of User Data

5.2.3 Hackers and Machine Learning

5.2.4 Ambiguous Regulatory Guidelines for Medical Software

5.3 Market Opportunities

5.3.1 Untapped Potential in Emerging Markets

5.4 Value Chain Analysis

Chapter 6 Market Breakdown by Offering

Chapter 7 Market Breakdown by Deployment Mode

Chapter 8 Market Breakdown by Application

Chapter 9 Market Breakdown by Region

Chapter 10 Regulations and Finance

Chapter 11 Competitive Landscape

Chapter 12 Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/oqwcnh


MLOps Company Iterative Sees Steady Growth in First Half of 2022 – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, announced that it has seen steady growth in the first half of the year, including explosive adoption of the DVC extension for VS Code and Iterative Tools School enrollment.

Announced in June, the DVC Extension for Visual Studio Code allows users of all technical backgrounds to create, compare, visualize, and reproduce machine learning experiments. Through Git and Iterative's DVC, the extension makes experiments easily reproducible, unlike traditional experiment tracking tools that just stream metrics. Since its launch, the extension has been installed more than 8,500 times and has five stars on the Visual Studio Marketplace.

Iterative has also seen growth in enrollment for the Iterative Tools School since it was announced in March. The school is a free online course for data scientists to learn how to use Iterative tools, including DVC, CML, and Iterative Studio. Enrollment has kept a steady 30% monthly growth, with over 1,800 students currently enrolled in the program.

"DVC users have increased 50% since the start of 2022, and the steady growth of both the VS Code extension and student enrollment validates that we are on the right track when it comes to creating tooling to bridge the gap between data science and software engineering teams," said Dmitry Petrov, co-founder and CEO at Iterative. "We remain committed to our mission to deliver the best developer experience for machine learning teams by creating an ecosystem of open, modular ML tools."

Iterative's DVC brings agility, reproducibility, and collaboration into the existing data science workflow. DVC provides users with a Git-like interface for versioning data and models, bringing version control to machine learning and solving the challenges of reproducibility. DVC is built on top of Git, allowing users to create lightweight metafiles and enabling the system to handle large files, which can't be stored in Git. It works with remote storage for large files in the cloud.
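From the consumer side, that Git-like workflow can be sketched with DVC's Python API; the repository URL, file path, and revision tag below are hypothetical placeholders, not anything from the announcement.

```python
# A minimal sketch of reading a DVC-versioned file through DVC's Python API
# (dvc.api). The repo URL, path, and tag are hypothetical; the large file
# itself lives in remote storage while Git holds only the metafile.
import dvc.api

with dvc.api.open(
    "data/train.csv",                           # path tracked by a .dvc metafile
    repo="https://github.com/example/project",  # hypothetical Git repo
    rev="v1.0",                                 # any Git revision: tag, branch, commit
) as f:
    print(f.readline())                         # peek at the versioned data
```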

Also from Iterative, CML is an open-source library for implementing continuous integration and delivery (CI/CD) in machine learning projects. Users can automate parts of their development workflow, including model training and evaluation, comparing ML experiments across their project history, and monitoring changing datasets.

Additionally, Iterative's Machine Learning Engineering Management (MLEM) tool is modular by design and fits into any organization's software development workflows based on Git and CI/CD, without engineers having to transition to a separate machine learning deployment and registry tool. This allows teams to use a similar process across both ML models and applications for deployment, eliminating duplication in processes and code. Teams are then able to build a model registry in hours rather than days.

Together, CML and DVC provide ML engineers with a number of features and benefits that support data provenance, machine learning model management and automation. DVC and CML are open-source tools available for free. Iterative also provides a commercial offering that encompasses all of its open-source Unix-philosophy tools in one collaboration service called Iterative Studio.

Founded in 2018, Iterative has seen its tools log more than 10 million sessions and earn more than 14,000 stars on GitHub. Iterative now has more than 300 contributors across its different tools.

About Iterative

Iterative.ai, the company behind Iterative Studio and the popular open-source tools DVC, CML, MLEM, and the DVC Extension for VS Code, enables data science teams to build models faster and collaborate better with data-centric machine learning tools. Iterative's developer-first approach to MLOps delivers model reproducibility and enables governance and automation across the ML lifecycle, all integrated tightly with software development workflows. Iterative is a remote-first company, backed by True Ventures, Afore Capital, and 468 Capital. For more information, visit Iterative.ai.


Domino's MLops release focuses on GPUs and deep learning, offers multicloud preview – VentureBeat


Domino Data Lab, maker of an end-to-end MLops (machine learning operations) platform, is announcing its latest release, version 5.3, today. The delivery includes new support for ML model inferencing on GPU (graphics processing unit) systems and a collection of new connectors. Along with that, the company is beginning a private preview of its Nexus hybrid and multicloud capabilities, first announced in June.


GPUs can make lots of ML and deep learning operations go faster because they parallelize massive workloads, which is exactly what training complex deep learning models or numerous ML models entails. For this reason, Domino has long supported GPUs for model training.

But in the case of deep learning specifically, GPUs can benefit inferencing (generating predictions from the trained model) as well, and it is this scenario that Domino newly supports in version 5.3. Perhaps an easier way of thinking about this is that Domino now supports operationalization of deep learning beyond development, extending into production deployment. Given all the new announcements that came out of Nvidia's GPU Technology Conference (GTC) last month, Domino's timing here is especially good.
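As a generic illustration of GPU inferencing (not Domino's platform API, which the article doesn't detail), a minimal PyTorch sketch might look like this, assuming a trained model and CUDA-capable hardware:

```python
# A generic sketch of GPU inferencing with PyTorch; it illustrates the idea
# rather than Domino's product. Model and input shapes are placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.to(device).eval()                       # weights onto the GPU, inference mode

batch = torch.randn(32, 128, device=device)  # a batch resident on the GPU
with torch.no_grad():                         # no gradients needed for inference
    predictions = model(batch).argmax(dim=1)
print(predictions.cpu())
```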



Then there's the matter of new connectors, including enhanced connectivity for Amazon Web Services S3 and brand-new connectors for Teradata and Trino. Usually, new connectors are not newsworthy; delivering them is just a typical, incremental enhancement that most data platforms add at regular intervals. But there are a couple of tidbits here that are worth pointing out.

Coverage of a mature, well-established data warehouse platform like Teradata shows a maturation in MLops itself. Because MLops platforms are new, they often prioritize connectivity to newer data platforms, like Snowflake, for which Domino already had support. But adding a Teradata connector means MLops and Domino are addressing even the most conservative enterprise accounts, where artificial intelligence (AI) will arguably have its biggest, even if not its earliest, impact. It's good to see the rigor of MLops make its way around all parts of the market.


Connecting to Trino, an open-source federated query engine derived from Presto development work at Facebook, is important in a different way. Trino in turn provides connectivity to all of its target data platforms, including NoSQL databases like MongoDB and Apache Cassandra, data lake standards like Delta Lake and Apache Iceberg, streaming data platforms like Apache Kafka, analytics stores like Apache Druid and ClickHouse, and even productivity data sources like Google Sheets.
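As a sketch of what that federation means in practice, here is a minimal query through Trino's Python client (the `trino` package); the host, catalog, and table names are hypothetical.

```python
# A minimal sketch of querying through Trino's Python client. Host, catalog,
# and table names are hypothetical; the point is that one connection can
# reach any catalog Trino federates.
import trino

conn = trino.dbapi.connect(
    host="trino.example.internal",  # hypothetical coordinator
    port=8080,
    user="data-scientist",
    catalog="mongodb",              # could equally be kafka, iceberg, delta, ...
    schema="analytics",
)
cur = conn.cursor()
cur.execute("SELECT user_id, score FROM model_features LIMIT 10")
for row in cur.fetchall():
    print(row)
```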


Finally, there are Domino's Nexus hybrid/multicloud capabilities, which allow Domino to deploy model training environments across on-premises infrastructure and the three major public clouds, with costing information for each, all from a proverbial single pane of glass. And because Nexus works across cloud regions, it can also support restricting access to data by geography, to enforce data sovereignty policies and comply with corresponding regulations.

At this time, Nexus is available only to participants in Domino's private preview. But progress is progress: private previews advance to public previews, and public previews eventually progress to general availability (GA). Speaking of GA, Domino 5.3 is generally available now, according to the company, and interested customers can sign up for the Nexus private preview.



Machine learning tool could help people in rough situations make sure their water is good to drink – ZME Science

Imagine for a moment that you don't know if your water is safe to drink. It may be, it may not be; just trying to visualize that situation brings a great deal of discomfort, doesn't it? That's the situation 2.2 billion people find themselves in on a regular basis.

Chlorine can help with that. Chlorine kills pathogens in drinking water and, at an optimum level, makes water safe to drink. But it's not always easy to estimate the optimum amount of chlorine. For instance, if you put chlorine into a piped water distribution system, that's one thing. But if you chlorinate water in a tank, and then people come and take that water home in containers, it's a different matter, because this water is more prone to recontamination, so you need more chlorine in this type of water. But how much? The problem gets even more complicated because if water stays in place too long, chlorine can also decay.
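The decay described here is often approximated with first-order kinetics, C(t) = C0 * exp(-k*t); a small sketch under that textbook assumption follows. The rate constant is illustrative, and the study's point is precisely that, in practice, it varies too much for a fixed model.

```python
# A sketch of the classic first-order chlorine decay model,
# C(t) = C0 * exp(-k * t). The rate constant k is illustrative only.
import math

def frc_after(c0_mg_per_l: float, k_per_hour: float, hours: float) -> float:
    """Free residual chlorine remaining after `hours` of storage."""
    return c0_mg_per_l * math.exp(-k_per_hour * hours)

# Water leaving a tank at 1.0 mg/L with an assumed decay rate of 0.1 / hour:
for t in (0, 6, 12, 24):
    print(f"{t:2d} h -> {frc_after(1.0, 0.1, t):.2f} mg/L")
```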

This is particularly a problem in refugee camps, many of which suffer from a severe water crisis.

"Ensuring sufficient free residual chlorine (FRC) up to the time and place water is consumed in refugee settlements is essential for preventing the spread of waterborne illnesses," write the authors of the new study. "Water system operators need accurate forecasts of FRC during the household storage period. However, factors that drive FRC decay after the water leaves the piped distribution system vary substantially, introducing significant uncertainty when modeling point-of-consumption FRC."

To estimate the right amount of FRC, a team of researchers from York University's Lassonde School of Engineering used a machine learning algorithm to estimate chlorine decay.

They focused on refugee camps, which often face problems regarding drinking water, and collected 2,130 water samples from Bangladesh from June to December 2019, noting the level of chlorine and how it decayed. Then, the algorithm was used to develop probabilistic forecasting of how safe the water is to drink.

AI is particularly good at this type of problem: deriving statistical likelihoods of events from a known data set. In fact, the team combined AI with methods routinely used for weather forecasting. You input parameters such as the local temperature, water quality, and the condition of the pipes, and the system forecasts how safe the water is to drink at a given moment. The model estimates how likely it is for the chlorine to be at a certain level and outputs a range of probabilities, which the researchers say is preferable because it allows water operators to plan better.
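The paper's exact models aren't reproduced here, but as a sketch of the general idea, a probabilistic forecast can be built by fitting one quantile regressor per probability level; the feature names and data below are synthetic placeholders.

```python
# A sketch of probabilistic forecasting via quantile regression, in the
# spirit of (but not identical to) the study's methods. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
# Hypothetical features: storage time (h), temperature (C), initial FRC (mg/L).
X = rng.uniform([0, 15, 0.2], [24, 40, 2.0], size=(500, 3))
y = X[:, 2] * np.exp(-0.08 * X[:, 0]) + rng.normal(0, 0.05, 500)  # toy decay + noise

# One model per quantile yields a predictive range instead of a point estimate.
models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, y)
    for q in (0.1, 0.5, 0.9)
}

sample = np.array([[12.0, 30.0, 1.0]])  # 12 h storage, 30 C, 1.0 mg/L at the tap
for q, m in models.items():
    print(f"q={q}: predicted FRC {m.predict(sample)[0]:.2f} mg/L")
```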

"These techniques can enable humanitarian responders to ensure sufficient FRC more reliably at the point-of-consumption, thereby preventing the spread of waterborne illnesses," the authors write.

It's not the first time AI has been used to try and help the world's less fortunate. In fact, many in the field believe that's where AI can make the most difference. Raj Reddy, one of the pioneers of AI, recently spoke at the Heidelberg Laureate Forum, explaining that he's most interested in AI being used for the world's least fortunate people and noting that this type of technology can "move the plateau" and improve the lives of the people who need it most.

A World Bank analysis suggests machine learning can be useful in helping developing countries rebuild after the pandemic, noting that software solutions such as AI can help countries overcome existing infrastructure gaps more quickly and efficiently. However, other studies suggest that without policy intervention, AI risks exacerbating economic inequality instead of bridging it.

No doubt, the technology has the ability to solve real problems where it's needed most. But more research such as this is needed to find out how AI can address specific challenges.

The study has been published in PLoS Water.


Developing Machine-Learning Apps on the Raspberry Pi Pico – Design News

Starting on Monday, October 24, and running through October 28, Design News will present the free course, Developing Machine-Learning Applications on the Raspberry Pi Pico. Each class runs an hour, beginning at 2:00 Eastern. You can also earn IEEE Professional Development Hours for participating. If you are unable to attend on this schedule, the course will be available on demand.

The Raspberry Pi Pico is a versatile, low-cost development board suited to many applications. Course instructor Jacob Beningo will explain how to get up and running with the Raspberry Pi Pico. He'll focus mainly on how to develop machine-learning applications and deploy them to the Pico, using gesture detection as an example application. Attendees will walk away understanding machine learning, the Pico, and best practices for working with both.

Related: Learn DC Motor Controls with the Raspberry Pi 2040 Pico

Here's a day-by-day breakdown of Developing Machine-Learning Applications on the Raspberry Pi Pico:

Day 1: Getting Started with the Raspberry Pi Pico and Machine Learning

Related: 3 Tips for Rapid Prototyping with the Raspberry Pi Pico

In this session, we will introduce the Raspberry Pi Pico development board, based on the low-cost, high-feature RP2040 microcontroller. We will explore the Pico board features and why the board is well suited for machine-learning applications. Attendees will walk away understanding the Pico board and the fundamentals of machine learning on microcontroller-based devices.

Day 2: Machine-Learning Tools and Process Flow

There are a wide variety of tools developers use to deploy machine-learning models to the Raspberry Pi Pico. In this session, we will explore the various tools embedded software developers might be interested in using. Attendees will also learn about the general machine-learning process flow and how it fits within the standard embedded software programming model.

Day 3: Collecting Sensor Data Using Edge Impulse

Before a developer creates a machine-learning model, they must first collect the data used by the model. This session will explore how to connect and collect sensor data using Edge Impulse. We'll discuss how much data to collect and the various options for doing so. Attendees will walk away understanding how to prepare their data for training and eventual deployment to the Pico.

Day 4: Designing and Testing a Machine-Learning Model

With sensor data now collected, developers will want to use their data to train and test a machine-learning model. In this session, we will use the data gathered in the previous session to train a model. Attendees will learn how to train a model and examine the training results in order to get the desired outcomes from their model.
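Edge Impulse automates this step in its own tooling; purely as an illustrative sketch of what training a small gesture classifier involves, with hypothetical window sizes and synthetic data standing in for real accelerometer recordings:

```python
# An illustrative sketch of training a small gesture classifier, standing in
# for what Edge Impulse automates. Window size, classes, and data are
# hypothetical placeholders, not the course's actual pipeline.
import numpy as np
import tensorflow as tf

# Pretend accelerometer windows: 100 samples x 3 axes, three gesture classes.
X = np.random.randn(600, 100, 3).astype("float32")
y = np.random.randint(0, 3, size=600)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(100, 3)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Examining the training history is how you judge whether the model converged.
history = model.fit(X, y, validation_split=0.2, epochs=5, verbose=0)
print(history.history["val_accuracy"])
```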

Day 5: Deploying Machine-Learning Models and Next Steps

In this session, we will use the model we trained in the last session and deploy it to the Raspberry Pi Pico. We'll investigate several methods to deploy the model and test how well the model works on the Raspberry Pi Pico. Attendees will see how to deploy the model and learn about the next steps for using the Raspberry Pi Pico for machine-learning applications.

Meet your instructor: Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost and time to market. He has published more than 300 articles on embedded software development techniques, has published several books, is a sought-after speaker and technical trainer, and holds three degrees, including a Master of Engineering from the University of Michigan.

Digi-Key Continuing Education Center, presented by Design News, will get you up to working speed quickly in a host of technologies you've been meaning to study but haven't had the time for, all without leaving the comfort of your lab or office. Our faculty of expert tutors has divided the interdisciplinary world of design engineering into five dimensions: microcontrollers (basic and advanced), sensors, wireless, power, and lighting.

You can register for the free class here.


Arctoris welcomes on board globally recognized experts in Machine Learning, Chemical Computation, and Alzheimer’s Disease – Business Wire

OXFORD, England--(BUSINESS WIRE)--Arctoris Ltd, a tech-enabled biopharma platform company, has appointed three globally recognized experts in Alzheimer's disease, Machine Learning applied to closed loop discovery, and automated chemistry as members of its Scientific Advisory Board: Professor John Davis (University of Oxford), Professor Rafael Gómez-Bombarelli (MIT), and Dr Teodoro Laino (IBM Research).

"We are delighted to see Professor Davis, Professor Gómez-Bombarelli and Dr Laino join our company's Scientific Advisory Board at this exciting moment in Arctoris' growth," said Martin-Immanuel Bittner MD DPhil FRSA FIBMS, CEO and Co-Founder of Arctoris. "Professor Davis is a world-renowned expert in dementia research, while Professor Gómez-Bombarelli and Dr Laino are pioneers in chemical computation and accelerated discovery. These two are key areas for our technology and our pipeline development, and we are grateful for the support of these highly distinguished individuals."

John Davis is the Chief Scientific Officer of the Centre for Medicines Discovery (Oxford) and Director of Business Development for the Alzheimer's Research UK Drug Discovery Alliance. He has over 25 years of drug discovery expertise all the way from target identification to successful clinical proof of concept for a range of drug candidates in neurological disorders. Following postdoctoral training at the Ludwig Institute and the Salk Institute, he joined GlaxoSmithKline, where he led a variety of non-clinical pharmacology research departments for pain and neurodegenerative diseases. In 2010, Professor Davis co-founded the spinout company Convergence Pharmaceuticals, which he later left to become Director of Discovery for Selcia, and CSO and co-founder of Cypralis before joining the University of Oxford.

Rafael Gómez-Bombarelli is the Jeffrey Cheah Assistant Professor in Engineering in MIT's Department of Materials Science and Engineering. His research is focused on accelerated discovery cycles and machine learning approaches for molecular design and optimisation. Professor Gómez-Bombarelli's work has been published in journals such as Science, Nature Chemistry, and Nature Materials, and has been featured in MIT Technology Review and the Wall Street Journal. He earned a BS, MS and PhD in chemistry from the Universidad de Salamanca, followed by postdoctoral work at Heriot-Watt University, Harvard University and Kyulux North America before taking up his post at MIT.

Teodoro Laino leads the chemical computation and automated synthetic chemistry efforts at the Department of Cognitive Computing and Industry Solutions at the IBM Research Zurich Laboratory. He is interested in the application of machine learning to chemistry and materials science problems with the purpose of developing scalable, tech-enabled solutions to significantly improve chemical synthesis (e.g., IBM RXN for chemistry). A chemist by background, Dr Laino has a PhD in computational chemistry, after which he worked as a post-doctoral researcher at the University of Zurich developing algorithms for molecular dynamics simulation.

New Scientific Advisory Board member John Davis said, "Neurodegenerative diseases, and especially Alzheimer's disease, are an area of significant unmet clinical need. As a company, Arctoris' strategy and focus for the development of its pipeline of assets is governed by the very best scientific and clinical evidence in the field, and I am pleased to support their scientific and leadership team with therapeutic area expertise."

"Finding efficient ways to speed up the Design-Make-Test-Analyse cycle is crucial for the rapid development of new and improved materials or treatments. Together with Arctoris, my team at IBM Research has spent the past two years exploring potential integrations between our own IBM Research Accelerated Discovery Platform and Arctoris' own flagship technology, Ulysses. Today, I am thrilled to announce that I will be serving as an expert on automated synthesis and machine learning modelling in chemistry in the Scientific Advisory Board of Arctoris, further strengthening our mutually beneficial collaboration," said Teodoro Laino.

"My research focuses on accelerated discovery and molecular design optimization, and I am convinced that Arctoris' Ulysses platform and their approach to combining wet lab and dry lab approaches are the future for small molecule drug discovery. I am excited to contribute my expertise in closed loop and machine learning approaches to accelerated scientific discovery, where Arctoris is building a truly unique platform and company," said Rafael Gómez-Bombarelli.

ABOUT ARCTORIS LTD

Arctoris is a tech-enabled biopharma platform company founded and headquartered in Oxford, UK, with its US operations based in Boston and its Asia-Pacific operations based in Singapore. Arctoris combines robotics and Machine Learning with a world-class team for accelerated small molecule discovery. Ulysses, the unique technology platform developed by Arctoris, enables the company and its partners to conduct their R&D from target to hit, lead, and candidate significantly faster, and with considerably improved data quality and depth. The company's end-to-end automation platform is capable of generating large and precise datasets across hundreds of experiment types and assays. The resulting data assets are captured and passed through automated analytical pipelines and feed directly into Arctoris' Machine Learning capabilities, creating powerful predictive models capable of identifying superior molecules faster. Bringing together the expertise of seasoned biotech and pharma veterans with its proprietary technologies, Arctoris achieves higher success rates and an accelerated progression of programs towards the clinic. Arctoris pursues an internal pipeline of assets in oncology and neurodegeneration and also collaborates with select biotech and pharma partners in the US, Europe, and Asia-Pacific, including several Top 10 Pharma.

For the latest updates, visit http://www.arctoris.com and follow us on LinkedIn.


Machine vision breakthrough: This device can see ‘millions of colors’ – Northeastern University

An interdisciplinary team of researchers at Northeastern has built a device that can recognize millions of colors using new artificial intelligence techniques, a massive step, they say, in the field of machine vision, a highly specialized space with broad applications for a range of technologies.

The machine, which researchers call A-Eye, is capable of analyzing and processing color far more accurately than existing machines, according to a paper detailing the research published in Materials Today. The ability of machines to detect, or see, color is an increasingly important feature as industry and society more broadly become more automated, says Swastik Kar, associate professor of physics at Northeastern and co-author of the research.

"In the world of automation, shapes and colors are the most commonly used items by which a machine can recognize objects," Kar says.

The breakthrough is twofold. Researchers were able to engineer a two-dimensional material whose special quantum properties, when built into an optical window used to let light into the machine, can process a rich diversity of color with very high accuracy, something practitioners in the field haven't been able to achieve before.

Additionally, A-Eye is able to accurately recognize and reproduce seen colors with zero deviation from their original spectra, thanks also to the machine-learning algorithms developed by a team of AI researchers helmed by Sarah Ostadabbas, an assistant professor of electrical and computer engineering at Northeastern. The project is the result of a unique collaboration between Northeastern's quantum materials and Augmented Cognition labs.

The essence of the technological discovery centers on the quantum and optical properties of the class of materials called transition metal dichalcogenides. Researchers have long hailed these unique materials as having virtually unlimited potential, with many electronic, optoelectronic, sensing and energy storage applications.

"This is about what happens to light when it passes through quantum matter," Kar says. "When we grow these materials on a certain surface, and then allow light to pass through that, what comes out of this other end, when it falls on a sensor, is an electrical signal which then [Ostadabbas's] group can treat as data."

As it relates to machine vision, there are numerous industrial applications for this research tied to, among other things, autonomous vehicles, agricultural sorting and remote satellite imaging, Kar says.

"Color is used as one of the principal components in recognizing good from bad, go from no-go, so there's a huge implication here for a variety of industrial uses," Kar says.

Machines typically recognize color by breaking it down, using conventional RGB (red, green, blue) filters, into its constituent components, then use that information to essentially guess at, and reproduce, the original color. When you point a digital camera at a colored object and take a photo, the light from that object flows through a set of detectors with filters in front of them that differentiate the light into those primary RGB colors.

"You can think about these color filters as funnels that channel the visual information or data into separate boxes, which then assign artificial numbers to natural colors," Kar says.

"So if you're just breaking it down into three components [red, green, blue], there are some limitations," Kar says.

Instead of using filters, Kar and his team used transmissive windows made of the unique two-dimensional material.

"We are making a machine recognize color in a very different way," Kar says. "Instead of breaking it down into its principal red, green and blue components, when a colored light appears, say, on a detector, instead of just seeking those components, we are using the entire spectral information. And on top of that, we are using some techniques to modify and encode them, and store them in different ways. So it provides us with a set of numbers that help us recognize the original color much more uniquely than the conventional way."
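A toy sketch of the contrast Kar describes, collapsing a spectrum to three RGB numbers versus matching on the full spectral vector; all spectra and filter curves below are synthetic, purely for illustration.

```python
# A toy sketch contrasting RGB binning with full-spectrum matching. Spectra
# and filter curves are synthetic; nothing here is from the A-Eye device.
import numpy as np

rng = np.random.default_rng(7)
wavelengths = np.linspace(400, 700, 61)          # visible range, 5 nm steps

def gaussian(center, width):
    return np.exp(-((wavelengths - center) / width) ** 2)

# Crude RGB filter responses and a small library of "known" colors.
rgb_filters = np.stack([gaussian(c, 40) for c in (610, 540, 460)])
library = {f"color_{i}": gaussian(rng.uniform(420, 680), 25) for i in range(50)}

unknown = library["color_17"] + rng.normal(0, 0.01, wavelengths.size)

def nearest(query, project):
    dists = {name: np.linalg.norm(project(query) - project(spec))
             for name, spec in library.items()}
    return min(dists, key=dists.get)

# Full-spectrum matching keeps all 61 numbers; RGB filtering keeps only 3.
print("full spectrum:", nearest(unknown, lambda s: s))
print("rgb filtered :", nearest(unknown, lambda s: rgb_filters @ s))
```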

As the light passes through these windows, the machine processes the color as data; built into it are machine-learning models that look for patterns in order to better identify the corresponding colors the device analyzes, Ostadabbas says.

"A-Eye can continuously improve color estimation by adding any corrected guesses to its training database," the researchers wrote.

Davoud Hejazi, a Northeastern physics Ph.D. student, contributed to the research.



RBI plans to extensively use artificial intelligence, machine learning to improve regulatory supervision – ETCIO

The Reserve Bank is planning to extensively use advanced analytics, artificial intelligence and machine learning to analyse its huge database and improve regulatory supervision on banks and NBFCs.

For this purpose, the central bank is also looking to hire external experts.

While the RBI is already using AI and ML in supervisory processes, it now intends to scale this up to ensure that the benefits of advanced analytics accrue to the Department of Supervision in the central bank.

The supervisory jurisdiction of the RBI extends over banks, urban cooperative banks (UCB), NBFCs, payment banks, small finance banks, local area banks, credit information companies and select all India financial institutions.

It undertakes continuous supervision of such entities with the help of on-site inspections and off-site monitoring.

The central bank has floated an expression of interest (EoI) for engaging consultants in the use of Advanced Analytics, Artificial Intelligence and Machine Learning for generating supervisory inputs.

"Taking note of the global supervisory applications of AI & ML applications, this Project has been conceived for use of Advance Analytics and AI/ML to expand analysis of huge data repository with RBI and externally, through the engagement of external experts, which is expected to greatly enhance the effectiveness and sharpness of supervision," it said.

Among other things, the selected consultant will be required to explore and profile data with a supervisory focus.

The objective is to enhance the data-driven surveillance capabilities of the Reserve Bank, the EoI said.

Most of these techniques are still exploratory; however, they are rapidly gaining popularity and scale.

On the data collection side, AI and ML technologies are used for real-time data reporting, effective data management and dissemination.

For data analytics, these are being used for monitoring supervised firm-specific risks, including liquidity risks, market risks, credit exposures and concentration risks; misconduct analysis; and mis-selling of products.
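The EoI does not name specific methods; as a hedged illustration of the kind of firm-specific risk surveillance described, an unsupervised outlier detector over invented supervisory metrics might look like this:

```python
# An illustrative sketch of data-driven supervisory surveillance: flagging
# outlier institutions with an unsupervised model. The RBI's EoI does not
# name a method; features and thresholds here are invented placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical quarterly metrics per entity: liquidity ratio, credit
# concentration, market-risk exposure.
metrics = rng.normal(loc=[1.2, 0.3, 0.5], scale=[0.1, 0.05, 0.1], size=(200, 3))
metrics[:3] += [[-0.5, 0.4, 0.6]]          # a few artificially stressed entities

detector = IsolationForest(contamination=0.02, random_state=0).fit(metrics)
flags = detector.predict(metrics)          # -1 marks anomalous entities

print("entities flagged for closer inspection:", np.where(flags == -1)[0])
```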


Artificial intelligence may improve suicide prevention in the future – EurekAlert

The loss of any life can be devastating, but the loss of a life from suicide is especially tragic.

Around nine Australians take their own life each day, and it is the leading cause of death for Australians aged 15-44. Suicide attempts are more common, with some estimates stating that they occur up to 30 times as often as deaths.

"Suicide has large effects when it happens. It impacts many people and has far-reaching consequences for family, friends and communities," says Karen Kusuma, a UNSW Sydney PhD candidate in psychiatry at the Black Dog Institute, who investigates suicide prevention in adolescents.

Ms Kusuma and a team of researchers from the Black Dog Institute and the Centre for Big Data Research in Health recently investigated the evidence base of machine learning models and their ability to predict future suicidal behaviours and thoughts. They evaluated the performance of 54 machine learning algorithms previously developed by researchers to predict suicide-related outcomes of ideation, attempt and death.

The meta-analysis, published in the Journal of Psychiatric Research, found machine learning models outperformed traditional risk prediction models in predicting suicide-related outcomes, which have traditionally performed poorly.

"Overall, the findings show there is a preliminary but compelling evidence base that machine learning can be used to predict future suicide-related outcomes with very good performance," Ms Kusuma says.

Identifying individuals at risk of suicide is essential for preventing and managing suicidal behaviours. However, risk prediction is difficult.

In emergency departments (EDs), risk assessment tools such as questionnaires and rating scales are commonly used by clinicians to identify patients at elevated risk of suicide. However, evidence suggests they are ineffective in accurately predicting suicide risk in practice.

"While there are some common factors shown to be associated with suicide attempts, what the risks look like for one person may look very different in another," Ms Kusuma says. "But suicide is complex, with many dynamic factors that make it difficult to assess a risk profile using this assessment process."

A post-mortem analysis of people who died by suicide in Queensland found, of those who received a formal suicide risk assessment, 75 per cent were classified as low risk, and none was classified as high risk. Previous research examining the past 50 years of quantitative suicide risk prediction models also found they were only slightly better than chance in predicting future suicide risk.

"Suicide is a leading cause of years of life lost in many parts of the world, including Australia. But the way suicide risk assessment is done hasn't developed recently, and we haven't seen substantial decreases in suicide deaths. In some years, we've seen increases," Ms Kusuma says.

Despite the shortage of evidence in favour of traditional suicide risk assessments, their administration remains a standard practice in healthcare settings to determine a patient's level of care and support. Those identified as having a high risk typically receive the highest level of care, while those identified as low risk are discharged.

"Using this approach, unfortunately, the high-level interventions aren't being given to the people who really need help. So we must look to reform the process and explore ways we can improve suicide prevention," Ms Kusuma says.

Ms Kusuma says there is a need for more innovation in suicidology and a re-evaluation of standard suicide risk prediction models. Efforts to improve risk prediction have led to her research using artificial intelligence (AI) to develop suicide risk algorithms.

"Having AI that could take in a lot more data than a clinician would be able to better recognise which patterns are associated with suicide risk," Ms Kusuma says.

In the meta-analysis study, machine learning models outperformed the benchmarks set previously by traditional clinical, theoretical and statistical suicide risk prediction models. They correctly predicted 66 per cent of people who would experience a suicide outcome and correctly predicted 87 per cent of people who would not experience a suicide outcome.
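Those two figures correspond to sensitivity and specificity; here is a small sketch of how they are computed from a confusion matrix (the predictions below are illustrative, not the study's data).

```python
# A sketch of the two headline metrics: sensitivity (share of actual
# positives predicted correctly, 66% in the meta-analysis) and specificity
# (share of actual negatives predicted correctly, 87%). Data are illustrative.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # 1 = suicide-related outcome
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])  # a hypothetical model's calls

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # analogous to the 66% figure
specificity = tn / (tn + fp)   # analogous to the 87% figure
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```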

"Machine learning models can predict suicide deaths well relative to traditional prediction models and could become an efficient and effective alternative to conventional risk assessments," Ms Kusuma says.

The strict assumptions of traditional statistical models do not bind machine learning models. Instead, they can be flexibly applied to large datasets to model complex relationships between many risk factors and suicidal outcomes. They can also incorporate responsive data sources, including social media, to identify peaks of suicide risk and flag times where interventions are most needed.

"Over time, machine learning models could be configured to take in more complex and larger data to better identify patterns associated with suicide risk," Ms Kusuma says.

The use of machine learning algorithms to predict suicide-related outcomes is still an emerging research area, with 80 per cent of the identified studies published in the past five years. Ms Kusuma says future research will also help address the risk of aggregation bias found in algorithmic models to date.

"More research is necessary to improve and validate these algorithms, which will then help progress the application of machine learning in suicidology," Ms Kusuma says. "While we're still a way off implementation in a clinical setting, research suggests this is a promising avenue for improving suicide risk screening accuracy in the future."

Study: "The performance of machine learning models in predicting suicidal ideation, attempts, and deaths: A meta-analysis and systematic review," Journal of Psychiatric Research, 29 September 2022. The authors declare no conflict of interest.



Google turns to machine learning to advance translation of text out in the real world – TechCrunch

Google is giving its translation service an upgrade with a new machine learning-powered addition that will allow users to more easily translate text that appears in the real world, like on storefronts, menus, documents, business cards and other items. Rather than covering up the original text with the translation, the new feature smartly overlays the translated text on top of the image, while also rebuilding the pixels underneath with an AI-generated background to make the process of reading the translation feel more natural.

"Often it's that combination of the word plus the context, like the background image, that really brings meaning to what you're seeing," explained Cathy Edwards, VP and GM of Google Search, in a briefing ahead of today's announcement. "You don't want to translate a text to cover up that important context that can come through in the images," she said.


To make this process work, Google is using a machine learning technology known as generative adversarial networks, otherwise known as GAN models, the same technology that powers the Magic Eraser feature to remove objects from photos taken on Google Pixel smartphones. This advancement will allow Google to now blend the translated text into even very complex images, making the translation feel natural and seamless, the company says. It should seem as if you're looking at the item or object itself with translated text, not an overlay obscuring the image.
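As a sketch of the adversarial idea itself (the generic technique, not Google's production model), one GAN training step pits a generator against a discriminator:

```python
# A minimal sketch of the adversarial setup behind GANs: a generator tries to
# fool a discriminator, which learns to tell real data from generated data.
# Shapes and layers are illustrative toys, not Google's inpainting model.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784)      # stand-in for a batch of real image patches
noise = torch.randn(32, 16)

# Discriminator step: real patches labeled 1, generated patches labeled 0.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator call fakes "real".
loss_g = bce(D(G(noise)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(f"loss_d={loss_d.item():.3f}, loss_g={loss_g.item():.3f}")
```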

The feature is another development that seems to point to Google's plans to further invest in the creation of new AR glasses, as the ability to translate text in the real world could be a key selling point for such a device. The company noted that every month, people use Google to translate text and images over a billion times in more than 100 languages. It also began testing AR prototypes in public settings this year with a handful of employees and trusted testers, it said.

While there's obvious demand for better translation, it's not clear if users will prefer to use their smartphone for translations rather than special eyewear. After all, Google's first entry into the smartglasses space, Google Glass, ultimately failed as a consumer product.

Google didn't speak to its long-term plans for the translation feature today, noting only that it would arrive sometime later this year.
