Tecton Reports Record Demand for Its Machine Learning Feature Platform as It Raises $100 Million in Funding Led by Kleiner Perkins With Participation…

SAN FRANCISCO, July 12, 2022 (GLOBE NEWSWIRE) -- Tecton, the leading ML feature platform company, today announced record demand for its platform and for Feast, the most popular open source feature store.

"We believe that any company should be able to develop reliable operational ML applications and easily adopt real-time capabilities no matter the use case at hand or the engineering resources on staff. This new funding will help us further build and strengthen both Tectons feature platform for ML and the Feast open source feature store, enabling organizations of all sizes to build and deploy automated MLinto live, customer-facing applications and business processes, quickly and at scale, said Mike Del Balso, co-founder and CEO of Tecton.

Tecton was founded by the creators of Uber's Michelangelo platform to make world-class ML accessible to every company. Tecton is a fully managed ML feature platform that orchestrates the complete lifecycle of features, from transformation to online serving, with enterprise-grade SLAs. The platform enables ML engineers and data scientists to automate the transformation of raw data, generate training data sets and serve features for online inference at scale. Whether organizations are building batch pipelines or already including real-time features in their ML initiatives, Tecton solves the many data and engineering hurdles that keep development times painfully high and, in many cases, prevent predictive applications from ever reaching production at all.

4 Components of Tecton's Feature Platform

Major Company Milestones

2020:

2021:

2022:

Tecton Raises $100 Million in Series C Funding

Today Tecton also announced that it has raised $100 million in Series C funding, bringing the total raised to $160 million. This round was led by new investor Kleiner Perkins, with participation from strategic investors Databricks and Snowflake Ventures, previous investors Andreessen Horowitz and Sequoia Capital, and new investors Bain Capital Ventures and Tiger Global. Tecton plans to use the money to further deliver on customer value and to scale both its engineering and go-to-market teams.

"We expect the software we use today to be highly personalized and intelligent. While ML makes this possible, it remains far from reality, as the enabling infrastructure is prohibitively difficult to build for all but the most advanced companies," said Bucky Moore, partner at Kleiner Perkins. "Tecton makes this infrastructure accessible to any team, enabling them to build ML apps faster. As this continues to accelerate their growth trajectory, we are proud to partner with Mike, Kevin and team to pioneer and lead this exciting new space."

"The investment in Tecton is a natural fit for Databricks Ventures as we look to extend the lakehouse ecosystem with best-in-class solutions and support companies that align with our mission to simplify and democratize data and AI," said Andrew Ferguson, Head of Databricks Ventures. "We're excited to deepen our partnership with the Tecton team and look forward to delivering continued innovation for our joint customers."

"Together, Tecton and Snowflake enable data teams to securely and reliably store, process and manage the complete lifecycle of ML features for production in Snowflake, making it easier for users across data science, engineering and analyst teams to collaborate and work from a single source of data truth," said Stefan Williams, VP Corporate Development and Snowflake Ventures at Snowflake. "This investment expands our partnership and is the latest example of Snowflake's commitment to helping our customers effortlessly get the most value from their data."

Additional Resources

About Tecton

Tecton's mission is to make world-class ML accessible to every company. Tecton's feature platform for ML enables data scientists to turn raw data into production-ready features, the predictive signals that feed ML models. The founders created the Uber Michelangelo ML platform, and the team has extensive experience building data systems for industry leaders like Google, Facebook, Airbnb and Uber. Tecton is backed by Andreessen Horowitz, Bain Capital Ventures, Kleiner Perkins, Sequoia Capital and Tiger Global, as well as by strategic investors Databricks and Snowflake Ventures. Tecton is the main contributor to and committer of Feast, the leading open source feature store. For more information, visit https://www.tecton.ai or follow @tectonAI.

Media and Analyst Contact:
Amber Rowland
amber@therowlandagency.com
+1-650-814-4560



A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/3b873f04-a539-44c0-8a98-69cde58f42ad


How Is The Healthcare Sector Being Revolutionized By Machine Learning 2022 – Inventiva

How Is the Healthcare Sector Being Revolutionized by Machine Learning?

Machine learning is set to change the healthcare sector. What if you were told that machines would soon carry out surgery? Yes, machine learning has advanced quickly, to the point that it may soon be possible to perform medical procedures with little to no assistance from a doctor. In 2022, machine learning is already being employed extensively in the healthcare sector.

The first thing that springs to mind when one hears the words artificial intelligence or machine learning is robots, but machine learning is far more involved than that. Machine learning has advanced in every conceivable industry and transformed numerous businesses, including finance, retail, and healthcare. This article will discuss how machine learning is changing the healthcare sector. So let's get down to business right away.

Through the application of machine learning and artificial intelligence, a system can learn from its mistakes and improve over time. The main aim is to enable computers to learn autonomously, without help from human input. Data observations, pattern discovery, and future decision-making are the first steps in the learning process. In India, machine learning has begun to take hold.

A fundamental component of artificial intelligence is machine learning, which enables computers to learn from the past and predict the future.

It involves data exploration and pattern matching with little human assistance. Machine learning is mainly applied through four techniques:

1. Supervised Learning:

Supervised learning is a machine learning technique that requires supervision, much like a student-teacher relationship. In supervised learning, a machine is trained on data that has already been correctly labelled with the desired outputs. Whenever new data is entered into the system, supervised learning algorithms draw on that labelled sample data to predict accurate results.

It is classified into two different categories of algorithms:

Technology allows individuals to collect or produce data based on experience. Using labelled data points from the training set, supervised learning operates much as people learn from examples. It helps address challenging computational problems and optimise the performance of models using experience.
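As a purely illustrative sketch (none of this code comes from the article), the Python snippet below trains a scikit-learn classifier on a labelled toy dataset and then predicts labels for data it has not seen, which is the essence of supervised learning:

```python
# Minimal supervised-learning illustration: a classifier is trained on
# labelled examples, then predicts labels for data it has not seen before.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)          # labelled medical-style dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                          # learn from the labelled samples

predictions = model.predict(X_test)                  # predict labels for unseen data
print(f"Accuracy on held-out data: {accuracy_score(y_test, predictions):.2f}")
```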

2. Unsupervised Learning:

Unlike supervised learning, unsupervised learning allows a machine to be trained without the need to classify or explicitly label data. Even without any labelled training data, it seeks to form groups of unsorted data based on patterns and differences. Since there is no supervision in unsupervised learning, the machine is not given any sample outputs. As a result, it can only discover hidden structures in unlabelled data.
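For contrast, here is a minimal unsupervised-learning sketch (again illustrative only, not from the article): k-means clustering groups unlabelled points purely from structure in the data.

```python
# Minimal unsupervised-learning illustration: k-means groups unlabelled
# points into clusters using only the patterns in the data itself.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # labels are discarded

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X)          # group the unsorted data into 3 clusters

print(clusters[:10])                      # cluster assignment for the first 10 points
```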

3. Semi-Supervised Learning:

Combining supervised and unsupervised learning techniques is known as semi-supervised learning. It is used to get around the shortcomings of both supervised and unsupervised learning.

In the semi-supervised learning approach, both labelled and unlabelled data are used to train the machine, typically a sizable number of unlabelled cases and a few labelled examples. Some well-known real-world applications of semi-supervised learning are speech analysis, web content classification, protein sequence classification, and text document classification.

4. Reinforcement Learning:

Reinforcement learning is a feedback-based machine learning technique that does not require labelled data. An agent learns how to behave by performing actions and observing how they affect the environment, receiving a reward for each constructive action and a penalty for each destructive one. Since there is no training data in reinforcement learning, agents can only learn from their own experience.

Even as new technologies constantly emerge, machine learning remains in use across several different industries.

Machine learning is essential because it helps companies create new products and gives them a picture of consumer behaviour trends and operational business patterns.

Machine learning is fundamental to the operations of many of today's leading businesses, like Facebook, Google, and Uber. Machine learning has become a major point of competitive difference for many firms.

Machine learning has a number of real-world uses that produce tangible business outcomes, including time and money savings, that could significantly impact your company's future. One mainly observes a significant impact in the customer care sector, where machine learning enables humans to complete tasks more quickly and effectively. Through virtual assistant solutions, machine learning automates actions that would usually require a human to complete them, such as resetting a password or checking an account's balance. This frees up valuable agent time so they can concentrate on the high-touch, complex decision-making tasks that humans excel at but that machines struggle with.

There have been many breakthroughs in the healthcare sector, and machine learning has further improved the efficiency of healthcare firms.

Although machine learning has come a long way, a doctor's brain remains the best machine learning tool in the healthcare sector. Many doctors are concerned that machine learning will take over the healthcare sector.

The focus should instead be on how doctors can use machine learning as a tool to enhance clinical practice and supplement patient care. Even if machine learning could replace doctors entirely, patients would still require a human touch and attentive care.

Machine learning is making inroads into several businesses, and this trend appears set to continue indefinitely. It has also begun to demonstrate its abilities in the healthcare sector. Some of the ways it is used there are:

Scientists are already developing machine learning algorithms that forecast disease or enable early diagnosis of illness. Feebris, a technology startup based in the UK, is developing artificial intelligence algorithms to accurately identify complicated respiratory disorders. The Computer Science and Artificial Intelligence Lab at MIT has created a novel deep learning-based prediction model that can forecast the onset of breast cancer up to five years in the future.

Since it started in 1980, the use of robotics in healthcare has been expanding quickly. Although many people still find the idea of a robot performing surgery unsettling, it will soon become normal practice.

In hospitals, robotics is also used to monitor patients and notify nurses when human intervention is necessary.

A robotic assistant can find a blood vessel and draw a patient's blood with minimal discomfort and concern. In pharmaceutical labs, robots also prepare and dispense drugs and vaccinations. In large facilities, robotic carts transport medical supplies. As for humans being replaced by robots, that won't be happening anytime soon; robotics can only help doctors, never take their place.

Medical imaging diagnostics involves creating a visual depiction of tissue or internal organs to monitor health and to diagnose and treat disorders. It also aids in the creation of anatomy and physiology databases. Using medical imaging technology like ultrasound and MRI can prevent the need for surgical procedures.

Machine learning algorithms can be trained to recognise the subtleties in CT scans and MRIs and can process enormous numbers of medical images quickly. A deep learning team from the US, France, and Germany has developed an algorithm that can diagnose skin cancer more precisely than a dermatologist.

Because of its advantages, machine learning is becoming increasingly popular among healthcare organisations. These advantages include:

One of machine learning's greatest strengths is its ability to precisely recognise patterns in data that may be impossible for a human to spot. It can process enormous amounts of data and patterns quickly and efficiently.

Because maintaining health data requires a lot of work, machine learning is used to streamline the process and reduce the time and effort needed. Machine learning is driving cutting-edge technology for keeping smart data records in the modern world.

Machine learning adapts by gaining knowledge from patterns and data over time. Its main advantage is that it can easily execute procedures while requiring little human intervention.

Despite its advantages, machine learning also has drawbacks. Among them are:

Machine learning trains its algorithms on enormous data sets, since it adapts through patterns in the data. That information must be accurate and of high quality.

For machine learning to produce correct results, its algorithms need enough time to absorb and adjust to the patterns in the data. It also functions better with more computing power.

Machine learning is error-prone: it requires a vast quantity of data and may not perform as intended if not given enough of it. Any inaccurate data fed to the machine may produce an undesirable result.

The advancement of machine learning will enable the automatic early detection of most ailments. It will also improve the efficiency and accuracy of disease detection to lessen the strain on doctors. Future healthcare will change thanks to AI and machine learning.

Machine learning has grown quickly in every industry, including navigation, business, retail, and banking. However, success in the healthcare sector is challenging due to the scarcity of high-calibre scientists and the limited availability of data. Numerous challenges still need to be addressed before machine learning reaches its potential there.

The use of machine learning in the healthcare sector has grown in popularity and usage. By simplifying their tasks, ML benefits patients and physicians in various ways. Automating medical billing, offering clinical decision support, and creating clinical care standards are some of the most popular uses of machine learning, and numerous other applications are currently being investigated and developed.

Future developments in machine learning (ML) applications in the healthcare industry will greatly improve people's quality of life.

Edited by Prakriti Arora




Harnessing the power of artificial intelligence – UofSC News & Events – SC.edu

On an early visit to the University of South Carolina, Amit Sheth was surprised when 10 deans showed up for a meeting with him about artificial intelligence.

Sheth, the incoming director of the university's Artificial Intelligence Institute at the time, thought he would need to sell the deans on the idea. Instead, it was they who pitched the importance of artificial intelligence to him.

"All of them were telling me why they are interested in AI, rather than me telling them why they should be interested in AI," Sheth said in a 2020 interview with the university's Breakthrough research magazine. "The awareness of AI was already there, and the desire to incorporate AI into the activities that their faculty and students do was already on the campus."

Since the university announced the institute in 2019, that interest has only grown. There are now dozens of researchers throughout campus exploring how artificial intelligence and machine learning can be used to advance fields from health care and education to manufacturing and transportation. On Oct. 6, faculty will gather at the Darla Moore School of Business for a panel discussion on artificial intelligence led by Julius Fridriksson, vice president for research.

South Carolina's efforts stand out in several ways: the collaborative nature of research, which involves researchers from many different colleges and schools; a commitment to harnessing the power of AI in an ethical way; and the university's commitment to projects that will have a direct, real-world impact.

This week, as the Southeastern Conference marks AI in the SEC Day, we look at some of the remarkable efforts of South Carolina researchers in the area of artificial intelligence.


Collaboration will advance cardiac health through AI – EurekAlert

ITHACA, N.Y. -- Employing artificial intelligence to help improve outcomes for people with cardiovascular disease is the focus of a three-year, $15 million collaboration among Cornell Tech, the Cornell Ann S. Bowers College of Computing and Information Science (Cornell Bowers CIS) and NewYork-Presbyterian, with physicians from its affiliated medical schools, Weill Cornell Medicine and Columbia University Vagelos College of Physicians and Surgeons (Columbia University VP&S).

The Cardiovascular AI Initiative, to be funded by NewYork-Presbyterian, was launched this summer in a virtual meeting featuring approximately 40 representatives from the institutions.

"AI is poised to fundamentally transform outcomes in cardiovascular health care by providing doctors with better models for diagnosis and risk prediction in heart disease," said Kavita Bala, professor of computer science and dean of Cornell Bowers CIS. "This unique collaboration between Cornell's world-leading experts in machine learning and AI and outstanding cardiologists and clinicians from NewYork-Presbyterian, Weill Cornell Medicine and Columbia will drive this next wave of innovation for long-lasting impact on cardiovascular health care."

"NewYork-Presbyterian is thrilled to be joining forces with Cornell Tech and Cornell Bowers CIS to harness advanced technology and develop insights into the prediction and prevention of heart disease to benefit our patients," said Dr. Steven J. Corwin, president and chief executive officer of NewYork-Presbyterian. "Together with our world-class physicians from Weill Cornell Medicine and Columbia, we can transform the way health care is delivered."

The collaboration aims to improve heart failure treatment, as well as predict and prevent heart failure. Researchers from Cornell Tech and Cornell Bowers CIS, along with physicians from Weill Cornell Medicine and Columbia University VP&S, will use AI and machine learning to examine data from NewYork-Presbyterian in an effort to detect patterns that will help physicians predict who will develop heart failure, inform care decisions and tailor treatments for their patients.

"Artificial intelligence and technology are changing our society and the way we practice medicine," said Dr. Nir Uriel, director of advanced heart failure and cardiac transplantation at NewYork-Presbyterian, an adjunct professor of medicine in the Greenberg Division of Cardiology at Weill Cornell Medicine and a professor of medicine in the Division of Cardiology at Columbia University Vagelos College of Physicians and Surgeons. "We look forward to building a bridge into the future of medicine, and using advanced technology to provide tools to enhance care for our heart failure patients."

The Cardiovascular AI Initiative will develop advanced machine-learning techniques to learn and discover interactions across a broad range of cardiac signals, with the goal of improving recognition accuracy for heart failure and extending the state of care beyond current, codified clinical decision-making rules. It will also use AI techniques to analyze raw time-series (EKG) and imaging data.

"Major algorithmic advances are needed to derive precise and reliable clinical insights from complex medical data," said Deborah Estrin, the Robert V. Tishman '37 Professor of Computer Science, associate dean for impact at Cornell Tech and a professor of population health science at Weill Cornell Medicine. "We are excited about the opportunity to partner with leading cardiologists to advance the state of the art in caring for heart failure and other challenging cardiovascular conditions."

Researchers and clinicians anticipate the data will help answer questions around heart failure prediction, diagnosis, prognosis, risk and treatment, and guide physicians as they make decisions related to heart transplants and left ventricular assist devices (pumps for patients who have reached end-stage heart failure).

Future research will tackle the important task of heart failure and disease prediction, to facilitate earlier intervention for those most likely to experience heart failure, and preempt progression and damaging events. Ultimately this would also include informing the specific therapeutic decisions most likely to work for individuals.

At the initiative launch, Bala spoke of Cornell's Radical Collaboration initiative in AI and the key areas in which she sees AI, a discipline in which Cornell ranks near the top of U.S. universities, playing a major role in the future.

"We identified health and medicine as one of Cornell's key impact areas in AI," she said, "so the timing of this collaboration could not have been more perfect. We are excited for this partnership as we consider high-risk, high-reward, long-term impact in this space."

-30-

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.


DC’s Future AI is working on the ‘holy grail’ of artificial intelligence – Technical.ly

Startups are born in any number of places: offices, homes, restaurants, planes, trains and automobiles. But in the case of Charles Simon's Future AI, a DC-based AI company, it was born on a boat.

After sailing all around the world, Simon said, he was docked at the Annapolis, Maryland yacht base in 2018 when he began writing his company's initial software. That eventually became a product called Brain Simulator, an open-source tool used by people the world over.

A few years later, in January 2022, Simon and cofounder Andre Slabber raised $2 million to launch Future AI, an artificial intelligence company focused on general intelligence. In the time since, the company has grown to 20 employees, most of whom are developers, and is about to begin its beta launch.

General intelligence, according to Simon, is the branch of AI that focuses on emulating what neurons do in a human brain. Most machine learning-focused AI concentrates on finding and analyzing data and correlations, which is useful to several industries, but those algorithms lack some of the basic human brain functions that we use every day.

"It doesn't know the things that you and I take for granted," Simon told Technical.ly. "We know that things exist in reality, we know that there is a reality or, at least, we assume there's a reality because it sure looks like it. The machine learning direction is simply not targeted to go there."

Those functions include things like knowing cause and effect, i.e. knowing your impact on reality and the future. Future AI is working on developing software that understands multi-sensory input, as well as a data structure that can support any kind of information. It is additionally building an internal mental model, which replicates how human brains are generally aware of what's around them even when they're not directly looking at something (need an example? Picture the wall that's likely behind you without looking at it).

Its Sallie product, which just launched, attempts to make those brain functions a reality through AI. Sallie is a small pod that features a camera, computer, body and sensory technology to explore, talk, hear and touch things. Sallie, Simon emphasized, is still a work in progress; while she has the beginnings of those abilities, she largely exists as a toy or entertainment at the moment.

The software officially launched Friday, and Future AI is building a sign-up list for beta testing that is set to begin in the fourth quarter. Simon hopes to ramp up development in 2023 and ultimately expects to have a project with more complete general intelligence in four to five years.

Still, Simon stressed that the technology is very new. It in no way interrupts the work of machine learning but instead adds new capabilities. And as a former electrical engineer who developed machine learning-based software for neurological systems, work he'd love to keep in the machine learning sphere, he would know.

"Artificial general intelligence has been a holy grail of computer science since the 1950s, and when it exists, it will permeate and revolutionize all of computing," Simon said.


Machine learning begins to understand the human gut – University of Michigan News

The robot in the Venturelli Lab that creates the microbial communities used to train and test the algorithms. Image courtesy: Venturelli Lab

Study: Recurrent neural networks enable design of multifunctional synthetic human gut microbiome dynamics (DOI: 10.7554/eLife.73870)

The communities formed by human gut microbes can now be predicted more accurately with a new computer model developed in a collaboration between biologists and engineers, led by the University of Michigan and the University of Wisconsin.

The making of the model also suggests a route toward scaling from the 25 microbe species explored to the thousands that may be present in human digestive systems.

"Whenever we increase the number of species, we get an exponential increase in the number of possible communities," said Alfred Hero, the John H. Holland Distinguished University Professor of Electrical Engineering and Computer Science at the University of Michigan and co-corresponding author of the study in the journal eLife.

"That's why it's so important that we can extrapolate from the data collected on a few hundred communities to predict the behaviors of the millions of communities we haven't seen."

While research continues to unveil the multifaceted ways that microbial communities influence human health, probiotics often don't live up to the hype. We don't have a good way of predicting how the introduction of one strain will affect the existing community. But machine learning, an approach to artificial intelligence in which algorithms learn to make predictions based on data sets, could help change that.

"Problems of this scale required a complete overhaul in terms of how we model community behavior," said Mayank Baranwal, adjunct professor of systems and control engineering at the Indian Institute of Technology, Bombay, and co-first author of the study.

He explained that the new algorithm could map out the entire landscape of 33 million possible communities in minutes, compared to the days to months needed for conventional ecological models.

Integral to this major step was Ophelia Venturelli, assistant professor of biochemistry at the University of Wisconsin and co-corresponding author of the study. Venturelli's lab runs experiments with microbial communities, keeping them in low-oxygen environments that mimic the environment of the mammalian gut.

Her team created hundreds of different communities with microbes that are prevalent in the human large intestine, emulating the healthy state of the gut microbiome. They then measured how these communities evolved over time and the concentrations of key health-relevant metabolites, or chemicals produced as the microbes break down foods.

"Metabolites are produced in very high concentrations in the intestines," Venturelli said. "Some are beneficial to the host, like butyrate. Others have more complex interactions with the host and gut community."

The machine learning model enabled the team to design communities with desired metabolite profiles. This sort of control may eventually help doctors discover ways to treat or protect against diseases by introducing the right microbes.

While human gut microbiome research has a long way to go before it can offer this kind of intervention, the approach developed by the team could help get there faster. Machine learning algorithms are often produced with a two-step process: accumulate the training data, then train the algorithm. But the feedback step added by Hero and Venturelli's team provides a template for rapidly improving future models.

Hero's team initially trained the machine learning algorithm on an existing data set from the Venturelli lab. The team then used the algorithm to predict the evolution and metabolite profiles of new communities that Venturelli's team constructed and tested in the lab. While the model performed very well overall, some of the predictions identified weaknesses in the model performance, which Venturelli's team shored up with a second round of experiments, closing the feedback loop.

"This new modeling approach, coupled with the speed at which we could test new communities in the Venturelli lab, could enable the design of useful microbial communities," said Ryan Clark, co-first author of the study, who was a postdoctoral researcher in Venturelli's lab when he ran the microbial experiments. "It was much easier to optimize for the production of multiple metabolites at once."

The group settled on a long short-term memory neural network for the machine learning algorithm, which is good for sequence prediction problems. However, like most machine learning models, the model itself is a black box. To figure out what factors went into its predictions, the team used the mathematical map produced by the trained algorithm. It revealed how each kind of microbe affected the abundance of the others and what kinds of metabolites it supported. They could then use these relationships to design communities worth exploring through the model and in follow-up experiments.
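The study's code is not reproduced here; as a rough, hypothetical sketch of the kind of architecture described, the PyTorch snippet below defines a small LSTM that reads a time series of abundances for a 25-species community and predicts the next abundances plus a handful of metabolite concentrations. All layer sizes and the metabolite count are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch (not the authors' code): an LSTM that takes a time series
# of species abundances and predicts the next abundances plus metabolite levels.
import torch
import torch.nn as nn

N_SPECIES, N_METABOLITES, HIDDEN = 25, 4, 64   # sizes are illustrative assumptions

class CommunityLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=N_SPECIES, hidden_size=HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_SPECIES + N_METABOLITES)

    def forward(self, abundances):            # abundances: (batch, time, N_SPECIES)
        out, _ = self.lstm(abundances)
        return self.head(out[:, -1])          # predict the next state from the last step

model = CommunityLSTM()
batch = torch.rand(8, 10, N_SPECIES)          # 8 communities observed over 10 time points
prediction = model(batch)                     # shape: (8, N_SPECIES + N_METABOLITES)
print(prediction.shape)
```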

The model can also be applied to different microbial communities beyond medicine, including accelerating the breakdown of plastics and other materials for environmental cleanup, production of valuable compounds for bioenergy applications, or improving plant growth.

This study was supported by the Army Research Office and the National Institutes of Health.

Hero is also the R. Jamison and Betty Williams Professor of Engineering, and a professor of biomedical engineering and statistics. Venturelli is also a professor of bacteriology and chemical and biological engineering. Clark is now a senior scientist at Nimble Therapeutics. Baranwal is also a scientist in the division of data and decision sciences at Tata Consultancy Services Research and Innovation.


Podcast: Why Deep Learning Could Expedite the Next AI Winter Machine Learning Times – The Machine Learning Times

Welcome to the next episode of The Machine Learning Times Executive Editor Eric Siegel's podcast, The Doctor Data Show. Click here for all episodes and links to listen on your preferred platform. Podcast episode description: Deep learning, the most important advancement in machine learning, could inadvertently expedite the next AI winter. The problem is that, although it increases value and capabilities, it may also be having the effect of increasing hype even more. This episode covers four reasons deep learning increases the hype-to-value ratio of machine learning.



Using machine learning to assess the impact of deep trade agreements | VOX, CEPR Policy Portal – voxeu.org

Holger Breinlich, Valentina Corradi, Nadia Rocha, João M.C. Santos Silva, Thomas Zylkin 08 July 2022

Preferential trade agreements (PTAs) have become more frequent and increasingly complex in recent decades, making it important to assess how they impact trade and economic activity. Modern PTAs contain a host of provisions besides tariff reductions in areas as diverse as services trade, competition policy, or public procurement. To illustrate this proliferation of non-tariff provisions, Figure 1 shows the share of PTAs in force and notified to the WTO up to 2017 that cover selected policy areas. More than 40% of the agreements include provisions such as investment, movement of capital and technical barriers to trade. And more than two-thirds of agreements cover areas such as competition policy or trade facilitation.

Figure 1 Share of PTAs that cover selected policy areas

Note: Figure shows the share of PTAs that cover a policy area. Source: Hofmann, Osnago and Ruta (2019).

Recent research has tried to move beyond estimating the overall impact of PTAs on trade and tried to establish the relative importance of individual PTA provisions (e.g. Kohl et al. 2016, Mulabdic et al. 2017, Dhingra et al. 2018, Regmi and Baier 2020). However, such attempts face the difficulty that the number of provisions included in PTAs is very large compared to the number of PTAs available to study (see Figure 2), making it difficult to separate their individual impacts on trade flows.

Figure 2 The number of provisions in PTAs over time

Source: Mattoo et al. (2020).

Researchers have tried to address the growing complexity of PTAs in different ways. For example, Mattoo et al. (2017) use the count of provisions in an agreement as a measure of its depth and check whether the increase in trade flows after a given PTA is related to this measure. Dhingra et al. (2018) group provisions into categories (such as services, investment, and competition provisions) and examine the effect of these provision bundles on trade flows. Obviously, these approaches come at the cost of not allowing the identification of the effect of individual provisions within each group.

In recent research (Breinlich et al. 2022), we instead adapt a technique from the machine learning literature, the least absolute shrinkage and selection operator (lasso), to the context of selecting the most important provisions and quantifying their impact. More precisely, we adapt the rigorous lasso method of Belloni et al. (2016) to the estimation of state-of-the-art gravity models for trade (e.g. Yotov et al. 2016, Weidner and Zylkin 2021).[1]

Unlike traditional estimation methods such as least squares and maximum likelihood, which optimise the in-sample fit of the estimated model, the lasso balances in-sample fit with parsimony to optimise out-of-sample fit, simultaneously selecting the more important regressors and estimating their effect on trade flows. In our context, the lasso works by shrinking the effects of individual provisions towards zero and progressively removing those that do not have a significant impact on the fit of the model (for an intuitive description, see Breinlich et al. 2021; for more details, see Breinlich et al. 2022). The rigorous lasso of Belloni et al. (2016), a relatively recent variant of the lasso, refines this approach by taking into account the idiosyncratic variance of the data and by only keeping variables that are found to have a statistically large impact on the fit of the model.
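To make the shrinkage idea concrete, here is a deliberately simplified sketch. It uses scikit-learn's cross-validated lasso on synthetic provision dummies rather than the rigorous (plug-in) lasso inside a Poisson gravity model that the paper actually employs, so it only illustrates how L1 shrinkage zeroes out unimportant regressors.

```python
# Simplified illustration only: L1 shrinkage on synthetic provision dummies.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n_pairs, n_provisions = 2000, 305                    # 305 essential provisions, as in the column
provisions = rng.integers(0, 2, size=(n_pairs, n_provisions)).astype(float)

true_effects = np.zeros(n_provisions)
true_effects[[3, 57, 120]] = [0.4, 0.3, 0.25]        # assume only a few provisions matter
log_trade = provisions @ true_effects + rng.normal(scale=0.5, size=n_pairs)

lasso = LassoCV(cv=5, random_state=0).fit(provisions, log_trade)
selected = np.flatnonzero(lasso.coef_)               # provisions with non-zero coefficients
print("Provisions kept by the lasso:", selected)
```

Note that cross-validated tuning, as used in this toy sketch, tends to retain many more variables than the plug-in penalty, which is one reason the column reports that the rigorous lasso yields a far more parsimonious selection.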

Because the rigorous lasso tends to favour very parsimonious models, it may miss some important provisions. To address this issue, we introduce two methods to identify potentially important provisions that may have been missed by the rigorous lasso. One of the methods, which we call the iceberg lasso, involves regressing each of the provisions selected by the rigorous lasso on all other provisions, with the purpose of identifying relevant variables that were initially missed due to their collinearity with the provisions selected in the initial step. The other method, termed the bootstrap lasso, augments the set of variables selected by the rigorous (plug-in) lasso with the variables selected when the rigorous lasso is bootstrapped.
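A correspondingly simplified sketch of the iceberg-lasso step (synthetic data and an ordinary cross-validated lasso, not the authors' implementation): each first-stage provision is regressed on all remaining provisions so that strongly collinear provisions can be flagged.

```python
# Illustrative sketch of the "iceberg lasso" idea: regress each selected
# provision on all other provisions to surface collinear provisions.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n_ptas, n_provisions = 283, 305                 # counts taken from the column
provisions = rng.integers(0, 2, size=(n_ptas, n_provisions)).astype(float)
provisions[:, 10] = provisions[:, 3]            # make provision 10 collinear with provision 3

first_stage_selected = [3]                      # pretend the first stage kept provision 3

for p in first_stage_selected:
    others = np.delete(np.arange(n_provisions), p)
    fit = LassoCV(cv=5, random_state=0).fit(provisions[:, others], provisions[:, p])
    extra = others[np.flatnonzero(fit.coef_)]   # provisions that track provision p
    print(f"Provisions correlated with provision {p}:", extra)
```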

We use the World Bank's database on deep trade agreements, where we observe 283 PTAs and 305 essential provisions grouped into the 17 categories detailed in Figure 1.[2] The rigorous lasso selects eight provisions most strongly associated with increased trade flows following the implementation of the respective PTAs. As detailed in Table 1, these provisions are in the areas of anti-dumping, competition policy, technical barriers to trade, and trade facilitation.

Table 1 Provisions selected by the rigorous lasso

Building on these results, the iceberg lasso procedure identifies a set of 42 provisions, and the bootstrap lasso identifies between 30 and 74 provisions that may impact trade, depending on how it is implemented. Therefore, the iceberg lasso and bootstrap lasso methods select sets of provisions that are small enough to be interpretable and large enough to give us some confidence that they include the more relevant provisions. In contrast, the more traditional implementation of the lasso based on cross-validation selects 133 provisions.

Reassuringly, both the iceberg lasso and bootstrap lasso select similar sets of provisions, mainly related to anti-dumping, competition policy, subsidies, technical barriers to trade, and trade facilitation. Therefore, although our results do not have a causal interpretation and, consequently, we cannot be certain of exactly which provisions are more important, we can be reasonably confident that provisions in these areas stand out as having a positive effect on trade.

Besides identifying the set of provisions that are more likely to have an impact on trade, our methods also provide an estimate of the increase in trade flows associated with the selected provisions. We use these results to estimate the effects of different PTAs that have already been implemented. Table 2 summarises the estimated effects for selected PTAs obtained using the different methods we introduce. As, for example, in Baier et al. (2017, 2019), we find a wide variety of effects, ranging from very large impacts in agreements that include many of the selected provisions to no effect at all in agreements that do not include any.[3]

Table 2 also shows that different methods can lead to substantially different estimates, and therefore these results need to be interpreted with caution. As noted above, our results do not have a causal interpretation. Therefore the accuracy of the predicted effects of individual PTAs will depend on whether the selected provisions have a causal impact on trade or serve as a signal of the presence of provisions that have a causal effect. When this condition holds, the predictions based on this method are likely to be reasonably accurate, and in Breinlich et al. (2022), we report simulation results suggesting that this is the case. However, it is possible to envision scenarios where predictions based on our methods fail dramatically; for example, it could be the case that a PTA is incorrectly measured to have zero impact despite having many of the true causal provisions. Finally, we note that our results can also be used to predict the effects of new PTAs, but the same caveats apply.

Table 2 Partial effects for selected PTAs estimated by different methods

We have presented results from an ongoing research project in which we have developed new methods to estimate the impact of individual PTA provisions on trade flows. By adapting techniques from the machine learning literature, we have developed data-driven methods to select the most important provisions and quantify their impact on trade flows. While our approach cannot fully resolve the fundamental problem of identifying the provisions with a causal impact on trade, we were able to make considerable progress. In particular, our results show that provisions related to anti-dumping, competition policy, subsidies, technical barriers to trade, and trade facilitation procedures are likely to enhance the trade-increasing effect of PTAs. Building on these results, we were able to estimate the effects of individual PTAs.

Authors' note: This column updates and extends Breinlich et al. (2021). See also Fernandes et al. (2021).

Baier, S L, Y V Yotov and T Zylkin (2017), "One size does not fit all: On the heterogeneous impact of free trade agreements", VoxEU.org, 28 April.

Baier, S L, Y V Yotov and T Zylkin (2019), "On the Widely Differing Effects of Free Trade Agreements: Lessons from Twenty Years of Trade Integration", Journal of International Economics 116: 206-228.

Belloni, A, V Chernozhukov, C Hansen and D Kozbur (2016), "Inference in High Dimensional Panel Models with an Application to Gun Control", Journal of Business & Economic Statistics 34: 590-605.

Breinlich, H, V Corradi, N Rocha, M Ruta, J M C Santos Silva and T Zylkin (2021), "Using Machine Learning to Assess the Impact of Deep Trade Agreements", in A M Fernandes, N Rocha and M Ruta (eds), The Economics of Deep Trade Agreements, CEPR Press.

Breinlich, H, V Corradi, N Rocha, M Ruta, J M C Santos Silva and T Zylkin (2022), "Machine Learning in International Trade Research - Evaluating the Impact of Trade Agreements", CEPR Discussion paper 17325.

Dhingra, S, R Freeman and E Mavroeidi (2018), "Beyond tariff reductions: What extra boost to trade from agreement provisions?", LSE Centre for Economic Performance Discussion Paper 1532.

Fernandes, A, N Rocha and M Ruta (2021), "The Economics of Deep Trade Agreements: A New eBook", VoxEU.org, 23 June.

Hofmann, C, A Osnago and M Ruta (2019), "The Content of Preferential Trade Agreements", World Trade Review 18(3): 365-398.

Kohl, T, S Brakman and H Garretsen (2016), "Do trade agreements stimulate international trade differently? Evidence from 296 trade agreements", The World Economy 39: 97-131.

Mattoo, A, A Mulabdic and M Ruta (2017), "Trade creation and trade diversion in deep agreements", Policy Research Working Paper Series 8206, World Bank, Washington, DC.

Mattoo, A, N Rocha and M Ruta (2020), Handbook of Deep Trade Agreements, Washington, DC: World Bank.

Mulabdic, A, A Osnago and M Ruta (2017), "Deep integration and UK-EU trade relations," World Bank Policy Research Working Paper Series 7947.

Regmi, N and S Baier (2020), "Using Machine Learning Methods to Capture Heterogeneity in Free Trade Agreements," mimeograph.

Weidner, M and T Zylkin (2021), "Bias and Consistency in Three-Way Gravity Models", Journal of International Economics: 103513.

Yotov, Y V, R Piermartini, J A Monteiro and M Larch (2016), An advanced guide to trade policy analysis: The structural gravity model, Geneva: World Trade Organization.

[1] Our approach complements the one adopted by Regmi and Baier (2020), who use machine learning tools to construct groups of provisions and then use these clusters in a gravity equation. The main difference between the two approaches is that Regmi and Baier (2020) use what is called an unsupervised machine learning method, which uses only information on the provisions to form the clusters. In contrast, we select the provisions using a supervised method that also considers the impact of the provisions on trade.

[2] Essential provisions in PTAs include the set of substantive provisions (those that require specific integration/liberalisation commitments and obligations) plus the disciplines on procedures, transparency, enforcement or objectives that are required to achieve the substantive commitments (Mattoo et al. 2020).

[3] It is worth noting that lasso based on the traditional cross-validation approach leads to extremely dispersed estimates of trade effects, with some of them being clearly implausible. This further illustrates the superiority of the methods we propose.


Reforming Prior Authorization with AI and Machine Learning – insideBIGDATA

Healthcare providers are growing increasingly comfortable with using AI-enabled software to improve patient care, from analyzing medical imaging to managing chronic diseases. While health plans have been slower to adopt AI and machine learning (ML), many are beginning to rely on these technologies in administrative areas such as claims management, and 62% of payers rank improving their AI/ML capabilities as an extremely high priority.

The process by which health plans manage the cost of members' benefits is especially ripe for technological innovation. Health plans often require providers to obtain advance approval, or prior authorization (PA), for a wide range of procedures, services, and medications. The heavily manual PA process drives unnecessary resource cost and delays in care, which can lead to serious adverse events for patients.

In recent years, there has been an emphasis on reducing the administrative burden of PAs via digitization. Some health plans are moving beyond automation by leveraging AI and ML technologies to redefine the care experience, helping their members receive evidence-based, high-value care as quickly as possible. These technologies are able to streamline the administrative tasks of PA while continually refining customized, patient-specific care paths to drive better outcomes, ease provider friction, and accelerate patient access.

Providing clinical context for PA requests

Traditionally, PA requests are one-off transactions, disconnected from the patient's longitudinal history. Physicians enter the requested clinical information, which is already captured in the electronic health record (EHR), into the health plan's PA portal and await approval or denial. Although FHIR standards have provided new interoperability for the exchange of clinical data, these integrations are rarely sufficient to complete a PA request, as much of the pertinent information resides in unstructured clinical notes.

Using natural language processing, ML models can automatically extract this patient-specific data from the EHR, providing the health plan with a more complete patient record. By using ML and interoperability to survey the patient's unique clinical history, health plans can better contextualize PA requests in light of the patient's past and ongoing treatment.
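As a toy illustration of the idea (production systems rely on trained clinical NLP models rather than hand-written rules, and none of the labels or example text below come from the article), a rule-based spaCy pipeline can already pull structured facts out of a free-text note:

```python
# Toy illustration only: rule-based extraction of structured facts from a
# free-text clinical note; real PA platforms use trained clinical NLP models.
import spacy

nlp = spacy.blank("en")                              # no pretrained model required
ruler = nlp.add_pipe("entity_ruler")
ruler.add_patterns([
    {"label": "CONDITION", "pattern": [{"LOWER": "atrial"}, {"LOWER": "fibrillation"}]},
    {"label": "PROCEDURE", "pattern": [{"LOWER": "mri"}]},
    {"label": "MEDICATION", "pattern": [{"LOWER": "apixaban"}]},
])

note = "Patient with atrial fibrillation, currently on apixaban; cardiac MRI requested."
doc = nlp(note)
extracted = [(ent.text, ent.label_) for ent in doc.ents]
print(extracted)
# e.g. [('atrial fibrillation', 'CONDITION'), ('apixaban', 'MEDICATION'), ('MRI', 'PROCEDURE')]
```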

Anticipating the entire episode of care

An AI-driven authorization process can also identify episode-based care paths based on the patients diagnosis, suggesting additional services that might be appropriate for a bundled authorization. Instead of submitting separate PAs for the same patient, physicians can submit a consolidated authorization for multiple services across a single episode of care, receiving up-front approval.

Extracted clinical data can also help health plans develop more precise adjudication rules for these episode-based care paths. Health plans can create patient sub-populations that share clinical characteristics, enabling the direct comparison of patient cohorts in various treatment contexts. As patient data is collected, applied ML algorithms can identify the best outcomes for specific clinical scenarios. Over time, an intelligent authorization platform can aggregate real-world data to test and refine condition-specific care paths for a wide range of patient populations.

Influencing care choices to improve outcomes

Health plans can also use AI to encourage physicians to make the most clinically appropriate, high-value care decisions. As a PA request is entered, ML models can evaluate both the completeness and the appropriateness of the provided information in real time. For example, an ML model might detect that a physician has neglected to provide imaging records within the clinical notes, triggering an automated prompt for that data.

An ML model can also detect when the providers PA request deviates from best practices, triggering a recommendation for an alternative care choice. For example, an intelligent authorization platform might suggest that a physician select an outpatient setting instead of an inpatient setting based on the type of procedure and the clinical evidence. By using AI to help physicians build a more clinically appropriate case, health plans can reduce denials and decrease unnecessary medical expenses, while also improving patient outcomes.

Of course, for these clinical recommendations to be accepted by physicians, health plans must provide greater transparency into the criteria they use. While 98% of health plans attest that they use peer-reviewed, evidence-based criteria to evaluate PA requests, 30% of physicians believe that PA criteria are rarely or never evidence-based. To win physician trust, health plans that use technology to provide automatically generated care recommendations must also provide full transparency into the evidence behind their medical necessity criteria.

Prioritizing cases for faster clinical review

Finally, the application of advanced analytics and ML can help health plans drive better PA auto-determination rates by identifying which requests require a clinical review and which do not. This technology can also help case managers prioritize their workload, as it enables the flagging of high-impact cases as well as cases which are less likely to impact patient outcomes or medical spend.

Using a health plan's specific policy guidelines, an intelligent authorization platform can use ML and natural language processing to detect evidence that the criteria have been met, linking relevant text within the clinical notes to the plan's policy documentation. Reviewers can quickly pinpoint the correct area of focus within the case, speeding their assessment.

The application of AI and ML to the onerous PA process can relieve both physicians and health plans of the repetitive, manual administrative work involved in submitting and reviewing these requests. Most importantly, these intelligent technologies transform PA from a largely bureaucratic exercise into a process that is capable of ensuring that patients receive the highest quality of care, as quickly and painlessly as possible.

About the Author

Niall O'Connor is the chief technology officer at Cohere Health, a utilization management technology company that aligns patients, physicians, and health plans on evidence-based treatment plans at the point of diagnosis.




In Ukraine, machine-learning algorithms and big data scans used to identify war-damaged infrastructure – United Nations Development Programme

The dynamics of a crisis can change quickly, requiring critical information to inform decision-making in a timely fashion. If there is too little information, it is usually of no use; if there is too much, extensive resources and timely processing may be needed to generate actionable insights.

In Ukraine, identifying the size, type and scope of damaged infrastructure is essential for determining which locations and people are in need, and for informing the allocation of resources needed for rebuilding. Inquiries about the date, time, location, cause and type of damage are generally part of such an assessment. At times, obtaining the most accurate and timely information can be a challenge.

To help address this issue, the UNDP Country Office in Ukraine is developing a model that uses machine learning and natural language processing techniques to analyse thousands of reports and extract the most relevant information in time to inform strategic decisions.

Classifying key infrastructure

Text mining is a common data science technique; the added value of this model is its customized ability to analyse text from report narratives and then classify the reports into key infrastructure types. The process relies on ACLED, an open-source database that collates global real-time data. For the pilot testing of its infrastructure assessment model, UNDP used 8,727 reports on military attacks and subsequent events, time-stamped between 24 February and 24 June 2022 (the first four months of the war).

Notably absent from the ACLED database was a taxonomy to categorize the broad range of infrastructure references. Such classification saves time in information processing and can help narrow the scope of an assessment when there are specific areas of interest and priorities.

Drawing on its combined experience from other crisis zones, UNDP developed a model to classify the range of damaged infrastructure into nine categories: industrial, logistics, power/electricity, telecom, agriculture, health, education, shelter and businesses.

If a report, for example, indicated that a residential building in Kyiv was destroyed by military action, the model would classify the reported event in the most appropriate category - in this case, shelter.

The mechanics of the model

A set of relevant keywords was chosen for each of the nine infrastructure types. The keywords were then compared to the text in the reports. Both the keywords (used to represent a particular type of infrastructure) and the reports were transformed into numerical vectors, so that each type of infrastructure had one vector and each report had one vector.

The main goal was to measure the similarity between the two vectors, known as cosine similarity: the closer a report vector is to an infrastructure-type vector, the stronger the semantic relationship between them.

These examples further illustrate the approach:

Text: On 26 February 2022, a bridge was blown up near village of Stoyanka, Kyiv.

The model indicated a valid 34 percent similarity with the Logistics classification.

Text: On 19 May 2022, a farmer on a tractor hit a mine near Mazhuhivka village, Chernihiv region as a result of which he suffered a leg injury.

The model indicated a valid 32 percent similarity with the Agriculture classification.

A minimum threshold of 18 percent was set to determine the validity of the semantic relationship between an infrastructure type and a report. Both examples meet this threshold.
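A minimal sketch of this matching step, assuming a simple TF-IDF text representation and made-up keyword lists (the article does not specify UNDP's exact vectorization choice), follows:

```python
# Minimal sketch of the keyword-vs-report cosine-similarity matching step.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical keyword lists for three of the nine infrastructure types.
infrastructure_keywords = {
    "logistics": "bridge road railway port convoy transport",
    "agriculture": "farmer tractor field crop grain harvest",
    "shelter": "residential building apartment house civilian home",
}

report = "On 26 February 2022, a bridge was blown up near village of Stoyanka, Kyiv."

corpus = list(infrastructure_keywords.values()) + [report]
vectors = TfidfVectorizer().fit_transform(corpus)

keyword_vecs, report_vec = vectors[:-1], vectors[-1:]
scores = cosine_similarity(report_vec, keyword_vecs).ravel()

for label, score in zip(infrastructure_keywords, scores):
    flag = "valid" if score >= 0.18 else "below threshold"   # 18 percent threshold from the article
    print(f"{label}: {score:.2f} ({flag})")
```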

Besides pairing a report with its corresponding infrastructure type, the model by default has also helped identify the actors involved, the time, the specific location and the cause of each instance of infrastructure damage. These attributes are already included in ACLED, but the direct association between a report and an infrastructure type turns this basic information into more actionable insight.

The snapshot below is a data visualization of the model in action, showing the geographical distribution of the infrastructure damage by type that can be further mapped to understand the causes and actors involved. These insights also play a crucial role in designing response strategies, particularly with respect to the safety and security of the assessment team on the ground.

Replicating the model for different contexts

The utility of this Machine Learning model extends beyond classifying infrastructure types. It can be leveraged in broader humanitarian and development contexts. For this reason, the UNDP Country Office in Ukraine is already replicating the model using more real-time and varied data obtained from Twitter to conduct sentiment analysis and better understand the needs and concerns of affected groups.

The traditional way of manually processing information is not only labour-intensive, but it may fall short of delivering the timely insights needed for informed decision-making, especially given the volume of digital information available nowadays. As the war in Ukraine highlights, being able to uncover timely insights means saving lives.

As an alternative, this model offers speed and efficiency, which can help reduce operational costs in several situations. UNDP's Decision Support Unit, which coordinates assessments internally and in collaboration with a range of partners, is supporting the development of this model. The Infrastructure Semantic Damage Detector is publicly accessible at tinyurl.com/semdam.

For more information, contact Aladdin at aladdin.shamoug@undp.org.
