How to Beat Analysts and the Stock Market with Machine Learning – Knowledge@Wharton

Analyst expectations of firms' earnings are on average biased upwards, and that bias varies over time and across stocks, according to new research by experts at Wharton and elsewhere. They have developed a machine-learning model to generate a statistically optimal and unbiased benchmark for earnings expectations, which is detailed in a new paper titled "Man vs. Machine Learning: The Term Structure of Earnings Expectations and Conditional Biases." According to the paper, the model has the potential to deliver profitable trading strategies: to buy low and sell high. When analyst expectations are too pessimistic, investors should buy the stock. When analyst expectations are excessively optimistic, investors can sell their holdings or short stocks, as price declines are forecast.

"[With the machine-learning model], we can predict how the prices of the stocks will behave based on whether or not the analyst forecast is too optimistic or too pessimistic," said Wharton finance professor Jules H. van Binsbergen, who is one of the paper's authors. His co-authors are Xiao Han, a doctoral student at the University of Edinburgh Business School; and Alejandro Lopez-Lira, a finance professor at the BI Norwegian Business School.

The researchers found that the biases of analysts increase with the forecast horizon: the further away the earnings announcement date, the larger the bias. On average, however, analysts revise their expectations downwards as the date of the earnings announcement approaches. "These revisions induce negative cross-sectional stock predictability," the researchers write, explaining that stocks with more optimistic expectations earn lower subsequent returns. At the same time, corporate managers have more information about their own firms than investors have, and can use that informational advantage by issuing fresh stock, Binsbergen and his co-authors note.

The Opportunity to Profit

Comparing analysts' earnings expectations with the benchmarks provided by the machine-learning algorithm reveals the degree of analysts' biases, and the window of opportunity that opens. Binsbergen explained how investors could profit from their machine-learning model. "With our machine-learning model, we can measure the mistakes that the analysts are making by taking the difference between what they're forecasting and what our machine-learning forecast estimates," he said.

Using that arbitrage opportunity, investors could short-sell stocks for which analysts are overly optimistic, and book their profits when the prices come down to realistic levels as the earnings announcement date approaches, said Binsbergen. Similarly, they could buy stocks for which analysts are overly pessimistic, and sell them for a profit when their prices rise to levels that correspond with earnings that turn out to be higher than forecast, he added.

Binsbergen identified two main findings of the latest research. One is that how optimistic analysts are varies substantially over time. "Sometimes the bias is higher, and sometimes it is lower. That holds for the aggregate, but also for individual stocks," he said. "With our method, you can track over time the stocks for which analysts are too optimistic or too pessimistic. That said, there are more stocks for which analysts are optimistic than ones for which they're pessimistic," he added.

The second finding of the study is that "there is quite a lot of difference between stocks in how biased the analysts are," said Binsbergen. "So, it's not that we're just making one aggregate statement, that on average for all stocks the analysts are too optimistic."

Capital-raising Window for Corporations

Corporations, too, could use the machine-learning algorithm's measure of analysts' biases. "If you are a manager of a firm who is aware of those biases, then in fact you can benefit from that," said Binsbergen. "If the price is high, you can issue stocks and raise money." Conversely, if analysts' negative biases push down the price of a stock, they serve as a signal for the firm to avoid issuing fresh stock at that time.

When analysts' biases lift or depress a stock's price, it implies that the markets seem to be buying the analysts' forecasts and are not yet correcting them for over-optimism or over-pessimism, Binsbergen said. With the machine-learning model that he and his co-authors have developed, "you can have a profitable investment strategy," he added. "That also means that the managers of the firms whose stock prices are overpriced can issue stocks. When the stock is underpriced they can either buy back stocks, or at least refrain from issuing stocks."

For their study, the researchers used information from firms' balance sheets, macroeconomic variables, and analysts' predictions. They constructed forecasts for annual earnings one and two years ahead; similarly, they used forecasts one, two, and three quarters ahead for quarterly earnings. With the benchmark expectation provided by their machine-learning algorithm, they then calculated the bias in expectations as the difference between the analysts' forecasts and the machine-learning forecasts.
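
The bias measure itself is simple arithmetic: the analyst forecast minus the machine-learning benchmark forecast. Below is a minimal sketch of that calculation and of the buy-low, sell-high rule the article describes; the column names, sample numbers, and signal threshold are illustrative assumptions, not taken from the paper.

import numpy as np
import pandas as pd

# Hypothetical per-stock analyst consensus forecasts and machine-learning
# benchmark forecasts for the same fiscal horizon (illustrative values).
df = pd.DataFrame({
    "ticker": ["AAA", "BBB", "CCC"],
    "analyst_eps_forecast": [2.10, 1.50, 0.80],
    "ml_eps_forecast": [1.80, 1.55, 0.95],
})

# Conditional bias: analyst forecast minus the machine-learning benchmark.
df["bias"] = df["analyst_eps_forecast"] - df["ml_eps_forecast"]

# Toy trading rule in the spirit of the article: short stocks with overly
# optimistic analyst forecasts, buy those with overly pessimistic ones.
threshold = 0.10  # assumed cutoff for a "large" bias
df["signal"] = np.select(
    [df["bias"] > threshold, df["bias"] < -threshold],
    ["short", "buy"],
    default="hold",
)
print(df[["ticker", "bias", "signal"]])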

Everything About Pipelines In Machine Learning and How Are They Used? – Analytics India Magazine

In machine learning, building a predictive model for a classification or regression task involves many steps, from exploratory data analysis to visualization and transformation. Many transformation steps are performed to pre-process the data and get it ready for modelling, such as missing-value treatment, encoding the categorical data, or scaling/normalizing the data. We perform all these steps and build a machine learning model, but when making predictions on the testing data we have to repeat the same steps that were performed while preparing the data.

With so many steps to follow, team members working on a big project can easily lose track of these transformations. To resolve this, we introduce pipelines, which hold every step that is performed, from the first transformation through fitting the data on the model.

Through this article, we will explore pipelines in machine learning and will also see how to implement them for a better understanding of all the transformation steps.

What will we learn from this article?

A pipeline is nothing but an object that holds all the processes that take place, from data transformations to model building. Suppose that while building a model we encode the categorical data, then scale/normalize the data, and finally fit the training data to the model. If we design a pipeline for this task, the pipeline object will hold all these transformation steps, and we just need to call the pipeline object; every step that is defined will then be carried out.

This is very useful when a team is working on the same project. Defining the pipeline gives the team members a clear understanding of the different transformations taking place in the project. There is a class named Pipeline in sklearn that allows us to do this. All the steps in a pipeline are executed sequentially: every intermediate step must implement both fit and transform, whereas the last step only needs to implement fit, since it is usually the estimator being trained on the data.

When we fit data on the pipeline, each step is fitted in turn and used to transform the data before passing it to the next step; the final step is simply fitted. When making predictions using the pipeline, all the transform steps are applied again before the final prediction step.

Implementation of the pipeline is very easy and mainly involves four steps: importing the required libraries and the data set, splitting the data into training and testing sets, defining the pipeline, and fitting the pipeline and scoring it.

Let us now practically understand the pipeline and implement it on a data set. We will first import the required libraries and the data set. We will then split the data set into training and testing sets, followed by defining the pipeline and then calling the fit and score functions. Refer to the code below for the same.
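
Here is a minimal sketch of that setup, assuming the Pima Indians Diabetes data set mentioned in the conclusion is available as a local CSV file with an Outcome target column (the file name and column name are assumptions):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

# Load the data set and separate the features from the target column.
data = pd.read_csv("pima_indians_diabetes.csv")
X = data.drop(columns=["Outcome"])
y = data["Outcome"]

# Split the data into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Define the pipeline: a StandardScaler step (sc) followed by a
# Random Forest Classifier step (rfcl), as described below.
sc = StandardScaler()
rfcl = RandomForestClassifier()
pipe = Pipeline([("sc", sc), ("rfcl", rfcl)])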

We have defined the pipeline with the object name pipe, and this name can be changed by the programmer. We have defined the sc object for StandardScaler and rfcl for the Random Forest Classifier.

pipe.fit(X_train,y_train)

print(pipe.score(X_test, y_test))

We may not want to define objects for each step, like sc and rfcl for StandardScaler and the Random Forest Classifier, since there can sometimes be many different transformations to perform. For this, we can make use of make_pipeline, which can be imported from sklearn.pipeline. Refer to the below example for the same.

from sklearn.pipeline import make_pipeline

pipe = make_pipeline(StandardScaler(), RandomForestClassifier())

We have just passed the estimators directly in this case and have not defined named objects for them. Now let's see the steps present in this pipeline.

print(pipe.steps)

pipe.fit(X_train,y_train)

print(pipe.score(X_test, y_test))

Conclusion

Through this article, we discussed pipeline construction in machine learning, and how pipelines can help different people working on the same project avoid confusion and get a clear understanding of each step that is performed one after another. We then discussed the steps for building a pipeline with two steps, i.e. scaling and the model, and implemented the same on the Pima Indians Diabetes data set. At last, we explored another way of defining a pipeline: building it using make_pipeline.

I am currently enrolled in a Post Graduate Program in Artificial Intelligence and Machine Learning. I am a data science enthusiast who likes to draw insights from data, and I am always amazed by the intelligence of AI. It's really fascinating to teach a machine to see and understand images, and the interest doubles when the machine can tell you what it just saw. This is where I say I am highly interested in Computer Vision and Natural Language Processing. I love exploring different use cases that can be built with the power of AI. I am the person who first develops something and then explains it to the whole community with my writings.

The secrets of small data: How machine learning finally reached the enterprise – VentureBeat

Over the past decade, big data has become Silicon Valley's biggest buzzword. When they're trained on mind-numbingly large data sets, machine learning (ML) models can develop a deep understanding of a given domain, leading to breakthroughs for top tech companies. Google, for instance, fine-tunes its ranking algorithms by tracking and analyzing more than one trillion search queries each year. It turns out that the Solomonic power to answer all questions from all comers can be brute-forced with sufficient data.

But there's a catch: Most companies are limited to small data; in many cases, they possess only a few dozen examples of the processes they want to automate using ML. If you're trying to build a robust ML system for enterprise customers, you have to develop new techniques to overcome that dearth of data.

Two techniques in particular, transfer learning and collective learning, have proven critical in transforming small data into big data, allowing average-sized companies to benefit from ML use cases that were once reserved only for Big Tech. And because just 15% of companies have deployed AI or ML already, there is a massive opportunity for these techniques to transform the business world.

Above: Using the data from just one company, even modern machine learning models are only about 30% accurate. But thanks to collective learning and transfer learning, Moveworks can determine the intent of employees' IT support requests with over 90% precision.

Image Credit: Moveworks

Of course, data isn't the only prerequisite for a world-class machine learning model; there's also the small matter of building that model in the first place. Given the short supply of machine learning engineers, hiring a team of experts to architect an ML system from scratch is simply not an option for most organizations. This disparity helps explain why a well-resourced tech company like Google benefits disproportionately from ML.

But over the past several years, a number of open source ML models, including the famous BERT model for understanding language, which Google released in 2018, have started to change the game. The complexity of creating a model the caliber of BERT, whose aptly named large version has about 340 million parameters, means that few organizations can even consider quarterbacking such an initiative. However, because it's open source, companies can now tweak that publicly available playbook to tackle their specific use cases.

To understand what these use cases might look like, consider a company like Medallia, a Moveworks customer. On its own, Medallia doesn't possess enough data to build and train an effective ML system for an internal use case, like IT support. Yet its small data does contain a treasure trove of insights waiting for ML to unlock them. And by leveraging new techniques to glean these insights, Medallia has become more efficient, from recognizing which internal workflows need attention to understanding the company-specific language its employees use when asking for tech support.

So here's the trillion-dollar question: How do you take an open source ML model designed to solve a particular problem and apply that model to a disparate problem in the enterprise? The answer starts with transfer learning, which, unsurprisingly, entails transferring knowledge gained from one domain to a different domain that has less data.

For example, by taking an open source ML model like BERT, designed to understand generic language, and refining it at the margins, it is now possible for ML to understand the unique language employees use to describe IT issues. And language is just the beginning, since we've only begun to realize the enormous potential of small data.

Above: Transfer learning leverages knowledge from a related domain, typically one with a greater supply of training data, to augment the small data of a given ML use case.

Image Credit: Moveworks
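
To make the pattern concrete, here is a minimal sketch of transfer learning in this spirit, assuming the Hugging Face transformers library and PyTorch: a pretrained BERT encoder is reused wholesale, and a small classification head is fine-tuned on a handful of domain-specific examples. The intents, utterances, and hyperparameters are invented for illustration; this shows the general technique, not Moveworks' actual system.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Start from a generic pretrained language model.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # e.g. two IT-support intents (assumed)
)

# A handful of labelled, company-specific utterances (invented examples).
texts = ["my vpn token expired", "need access to the sales dashboard"]
labels = torch.tensor([0, 1])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Fine-tune briefly: the pretrained weights already encode generic
# language, so only a few passes over the small data set are needed.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):
    optimizer.zero_grad()
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()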

More generally, this practice of feeding an ML model a very small and very specific selection of training data is called few-shot learning, a term that's quickly become one of the new big buzzwords in the ML community. Some of the most powerful ML models ever created, such as the landmark GPT-3 model with its 175 billion parameters (orders of magnitude more than BERT), have demonstrated an unprecedented knack for learning novel tasks with just a handful of examples as training.

Taking essentially the entire internet as its tangential domain, GPT-3 quickly becomes proficient at these novel tasks by building on a powerful foundation of knowledge, in the same way Albert Einstein wouldn't need much practice to become a master at checkers. And although GPT-3 is not open source, applying similar few-shot learning techniques will enable new ML use cases in the enterprise, ones for which training data is almost nonexistent.

With transfer learning and few-shot learning on top of powerful open source models, ordinary businesses can finally buy tickets to the arena of machine learning. But while training ML with transfer learning takes several orders of magnitude less data, achieving robust performance requires going a step further.

That step is collective learning, which comes into play when many individual companies want to automate the same use case. Whereas each company is limited to small data, third-party AI solutions can use collective learning to consolidate those small data sets, creating a large enough corpus for sophisticated ML. In the case of language understanding, this means abstracting sentences that are specific to one company to uncover underlying structures:

Above: Collective learning involves abstracting data, in this case sentences, with ML to uncover universal patterns and structures.

Image Credit: Moveworks

The combination of transfer learning and collective learning, among other techniques, is quickly redrawing the limits of enterprise ML. For example, pooling together multiple customers' data can significantly improve the accuracy of models designed to understand the way their employees communicate. Well beyond understanding language, of course, we're witnessing the emergence of a new kind of workplace, one powered by machine learning on small data.

causaLens launches the first causal AI platform – Business Wire

LONDON--(BUSINESS WIRE)--causaLens, a deep-tech company predicting and optimising the global economy, has released the world's first causal Artificial Intelligence (causal AI) enterprise platform. Businesses no longer have to rely on curve-fitting machine learning platforms unable to handle the complexity of today's world. They are invited to join the real AI revolution with a platform that understands cause and effect.

The causaLens platform defines a new category of machine intelligence. Its next generation AI engine harnesses an understanding of cause and effect relationships to directly optimise business KPIs.

"Businesses investing in the current form of machine learning (ML), including AutoML, have just been paying to automate a process that fits curves to data without an understanding of the real world. They are effectively driving forward by looking in the rear-view mirror," explains causaLens CEO Darko Matovski. "Our platform takes a radically different approach. Causal AI teaches machines to understand cause and effect, a necessary step to developing true AI. This allows our platform to autonomously operate at a new level of abstraction that explains to businesses what actions they need to take to achieve their objectives."

causaLens has a track record of breaking new ground, having pioneered automated machine learning (AutoML) for time series data. The causal AI platform retains the advantages of comprehensive automation, allowing thousands of data sets to be cleaned, sorted and monitored at the same time. However, it combines this with causal models and insights that are truly explainable, traditionally the sole province of domain experts. Unique human knowledge is harnessed through intuitive interfaces for human-machine partnerships.

Since its inception in 2017, causaLens has worked with a range of corporates across multiple industries. Customers include some of the world's largest Asset Managers, Hedge Funds, Tier-1 Investment Banks, Transportation and Logistics companies, and Energy and Commodity traders.

Masami Johnstone, Head of Information Services at CLS, whose products help clients navigate the changing Foreign Exchange marketplace, said: "The causaLens platform has enabled us to discover additional value in our data. Their causal AI technology autonomously finds valuable signals in huge datasets and has helped us to understand relationships between our data and other datasets."

Today's world is changing faster than ever before. Current state-of-the-art ML barely scratches the surface of what machines can do. Causal AI is the next huge step forward.

Demonstrations of the product can be requested via causaLens.com.

causaLens

causaLens is pioneering Causal AI, a new category of intelligent machines that understand cause and effect, a major step towards true AI. Its enterprise platform is used to transform leading businesses in Finance, IoT, Energy, Telecommunications and others.

What is Model Governance and How it Works for Enterprises? – Analytics Insight

Model governance refers to the overall framework of how an organization controls its model development and deployment workflow, including the rules, protocols, and controls for machine learning models in production, for example access control, testing, validation, and the tracing of model results.

Although machine learning projects impact organisations, they don't always reach their full potential, due to inefficiencies and mismanagement in the process. Model governance is a priority for organisations that want the highest possible return on their machine learning investment.

Tracking model outcomes permits biases to be detected and rectified. This matters because models that are programmed to keep learning may accidentally become biased and produce inaccurate or unethical results.

Governance is especially crucial for risk-involved models that manage financial portfolios. As these models can impact an individual's or organization's finances directly, it is essential to verify and correct any biases or incorrect learning within the model.

As machine learning is a relatively new discipline, there are still a lot of inefficiencies in ML processes that need to be addressed. Machine learning projects can miss out on essential value without model governance in place.

Managing model risk is vital to ensure that models involved with finances stay clear of dangerous hazards. These models are programmed to continue learning as they run; however, they can pick up biases from the data they are fed. A data set can create a bias that affects the decisions the model makes from that point on.

Model governance enables models to be audited and examined for speed, accuracy, and drift during production. It catches issues of model bias or inaccuracy, permitting risk-involved models to function smoothly.
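
As a toy illustration of one such audit, the sketch below flags accuracy drift by comparing a model's accuracy on recent production data against its validation baseline. The threshold and the synthetic data are invented; real governance tooling tracks many more signals (latency, input distribution shift, fairness metrics, and so on).

import numpy as np
from sklearn.metrics import accuracy_score

def check_accuracy_drift(y_true_recent, y_pred_recent,
                         baseline_accuracy, tolerance=0.05):
    # Flag the model when recent accuracy falls more than `tolerance`
    # below the accuracy measured at validation time.
    recent_accuracy = accuracy_score(y_true_recent, y_pred_recent)
    drifted = recent_accuracy < baseline_accuracy - tolerance
    return recent_accuracy, drifted

# Example: validation baseline of 0.90, recent predictions only ~80% right.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.8, y_true, 1 - y_true)
acc, drifted = check_accuracy_drift(y_true, y_pred, baseline_accuracy=0.90)
print(f"recent accuracy={acc:.2f}, drift flagged={drifted}")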

Here are a few cases that illustrate the importance of model governance:

As mentioned before, the most glaring instance of why model governance is crucial is in finance, but other industries require model governance as well. The banking industry uses machine learning models for many different processes that would otherwise be operated manually, like credit scoring, interest rate risk modelling, and derivatives pricing.

Credit scoring models aid the finance and banking industry in making decisions in the loan approval process by delivering predictive information concerning the potential for default or delinquency. This helps the bank determine the risk pricing it should use for the loan.

Interest rate risk models surveil earnings exposure to a range of potential market conditions and rate changes to measure risk. The purpose of such a model is to provide an overview of the potential dangers facing the account it is monitoring.

Derivatives pricing models estimate the value of assets by delivering a methodology for determining the cost of new as well as complex products without readily available market observations. This is helpful for both banks and investors in determining whether a product is worth investing in or not.

With its serverless microservice architecture for machine learning, Algorithmia offers one of the fastest routes from development to deployment. It allows organizations to govern their machine learning operations securely across a healthy machine learning lifecycle, managing MLOps with access controls to secure and audit machine learning models in production. Model governance is one of Algorithmia's benefits: it helps ensure model accuracy by governing models and testing for speed, accuracy, and drift.

Machine Learning in Education Market Incredible Possibilities, Growth Analysis and Forecast To 2025 – The Daily Chronicle

Latest Research Report: Machine Learning in Education industry

The Machine Learning in Education Market report provides accurate and strategic analysis of the Machine Learning in Education industry. The report closely examines each segment and its sub-segments before presenting a 360-degree view of the market. Market forecasts will provide deep insight into industry parameters by assessing growth, consumption, upcoming market trends and various price fluctuations.

This has brought along several changes in market conditions. This report also covers the impact of COVID-19 on the global market.

Machine Learning in Education Market competition by top manufacturers is as follows: IBM, Microsoft, Google, Amazon, Cognizant, Pearson, Bridge-U, DreamBox Learning, Fishtree, Jellynote, Quantum Adaptive Learning

Get a Sample PDF copy of the report @ https://reportsinsights.com/sample/12877

The global Machine Learning in Education Market research report gives growth rates and market value based on market dynamics and growth factors. Complete knowledge is based on the latest innovations in the industry, opportunities and trends. In addition to SWOT analysis by key suppliers, the report contains a comprehensive market analysis and major players' landscape. The type coverage in the market is: Cloud-Based, On-Premise

Market segment by applications covers: Intelligent Tutoring Systems, Virtual Facilitators, Content Delivery Systems, Interactive Websites, Others

Market segment by regions/countries, this report covers: North America, Europe, China, Rest of Asia Pacific, Central & South America, Middle East & Africa

To get this report at a discounted rate: https://reportsinsights.com/discount/12877

Important Features of the report:

Reasons for buying this report:

Access full report description, TOC, table of figures, chart, etc. @ https://reportsinsights.com/industry-forecast/Machine-Learning-in-Education-Market-12877

About Us:

Reports Insights is a leading research firm that offers contextual and data-centric research services to its customers across the globe. The firm assists its clients in strategizing business policies and accomplishing sustainable growth in their respective market domains. The firm provides consulting services, syndicated research reports, and customized research reports.

Contact Us:

(US) +1-214-272-0234

(APAC) +91-7972263819

Email:[emailprotected]

Sales:[emailprotected]

Algorithms may never really figure us out thank goodness – The Boston Globe

An unlikely scandal engulfed the British government last month. After COVID-19 forced the government to cancel the A-level exams that help determine university admission, the British education regulator used an algorithm to predict what score each student would have received on their exam. The algorithm relied in part on how each school's students had historically fared on the exam. Schools with richer children tended to have better track records, so the algorithm gave affluent students, even those on track for the same grades as poor students, much higher predicted scores. High-achieving, low-income pupils whose schools had not previously performed well were hit particularly hard. After threats of legal action and widespread demonstrations, the government backed down and scrapped the algorithmic grading process entirely. This wasn't an isolated incident: In the United States, similar issues plagued the International Baccalaureate exam, which used an opaque artificial intelligence system to set students' scores, prompting protests from thousands of students and parents.

These episodes highlight some of the pitfalls of algorithmic decision-making. As technology advances, companies, governments, and other organizations are increasingly relying on algorithms to predict important social outcomes, using them to allocate jobs, forecast crime, and even try to prevent child abuse. These technologies promise to increase efficiency, enable more targeted policy interventions, and eliminate human imperfections from decision-making processes. But critics worry that opaque machine learning systems will in fact reflect and further perpetuate shortcomings in how organizations typically function, including by entrenching the racial, class, and gender biases of the societies that develop these systems. When courts and parole boards have used algorithms to forecast criminal behavior, for example, they have inaccurately identified Black defendants as future criminals more often than their white counterparts. Predictive policing systems, meanwhile, have led the police to unfairly target neighborhoods with a high proportion of non-white people, regardless of the true crime rate in those areas. Companies that have used recruitment algorithms have found that they amplify bias against women.

But there is an even more basic concern about algorithmic decision-making. Even in the absence of systematic class or racial bias, what if algorithms struggle to make even remotely accurate predictions about the trajectories of individuals' lives? That concern gains new support in a recent paper published in the Proceedings of the National Academy of Sciences. The paper describes a challenge, organized by a group of sociologists at Princeton University, involving 160 research teams from universities across the country and hundreds of researchers in total, including one of the authors of this article. These teams were tasked with analyzing data from the Fragile Families and Child Wellbeing Study, an ongoing study that measures various life outcomes for thousands of families who gave birth to children in large American cities around 2000. It is one of the richest data sets available to researchers: It tracks thousands of families over time, and has been used in more than 750 scientific papers.

The task for the teams was simple. They were given access to almost all of this data and asked to predict several important life outcomes for a sample of families. Those outcomes included the child's grade point average, their grit (a commonly used measure of passion and perseverance), whether the household would be evicted, the material hardship of the household, and whether the parent would lose their job.

The teams could draw on almost 13,000 predictor variables for each family, covering areas such as education, employment, income, family relationships, environmental factors, and child health and development. The researchers were also given access to the outcomes for half of the sample, and they could use this data to hone advanced machine-learning algorithms to predict each of the outcomes for the other half of the sample, which the organizers withheld. At the end of the challenge, the organizers scored the 160 submissions based on how well the algorithms predicted what actually happened in these people's lives.
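
The evaluation design is the standard holdout scheme. A minimal synthetic sketch of it appears below, with random data standing in for the confidential survey and R-squared as the score; an R-squared near zero means the model does no better than guessing the average outcome, roughly what the best submissions achieved.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: many predictors, one continuous life outcome (e.g.
# GPA) that is mostly noise with a weak signal from a single variable.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))
y = 0.2 * X[:, 0] + rng.normal(size=2000)

# Outcomes were released for half the sample; the rest were withheld.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.5, random_state=0
)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Score predictions on the withheld half, as the organizers did.
print("holdout R^2:", r2_score(y_holdout, model.predict(X_holdout)))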

The results were disappointing. Even the best performing prediction models were only marginally better than random guesses. The models were rarely able to predict a student's GPA, for example, and they were even worse at predicting whether a family would get evicted, experience unemployment, or face material hardship. And the models gave almost no insight into how resilient a child would become.

In other words, even having access to incredibly detailed data and modern machine learning methods designed for prediction did not enable the researchers to make accurate forecasts. The results of the Fragile Families Challenge, the authors conclude with notable understatement, "raise questions about the absolute level of predictive performance that is possible for some life outcomes, even with a rich data set."

Of course, machine learning systems may be much more accurate in other domains; this paper studied the predictability of life outcomes in only one setting. But the failure to make accurate predictions cannot be blamed on the failings of any particular analyst or method. Hundreds of researchers attempted the challenge, using a wide range of statistical techniques, and they all failed.

These findings suggest that we should doubt that big data can ever perfectly predict human behavior and that policymakers working in criminal justice policy and child-protective services should be especially cautious. Even with detailed data and sophisticated prediction techniques, there may be fundamental limitations on researchers' ability to make accurate predictions. Human behavior is inherently unpredictable, social systems are complex, and the actions of individuals often defy expectations.

And yet, disappointing as this may be for technocrats and data scientists, it also suggests something reassuring about human potential. If life outcomes are not firmly pre-determined, if an algorithm, given a set of past data points, cannot predict a person's trajectory, then the algorithm's limitations ultimately reflect the richness of humanity's possibilities.

Bryan Schonfeld and Sam Winter-Levy are PhD candidates in politics at Princeton University.

The confounding problem of garbage-in, garbage-out in ML – Mint

One of the top 10 trends in data and analytics this year, as leaders navigate the covid-19 world, according to Gartner, is "augmented data management": the growing use of tools with ML/AI to clean and prepare robust data for AI-based analytics. Companies are currently striving to go digital and derive insights from their data, but the roadblock is bad data, which leads to faulty decisions. In other words: garbage in, garbage out.

"I was talking to a university dean the other day. The university had 20,000 students in its database, but only 9,000 students had actually passed out of the university," says Deleep Murali, co-founder and CEO of Bengaluru-based Zscore. This kind of faulty data has a cascading effect, because all kinds of decisions, including financial allocations, are based on it.

Zscore started out with the idea of providing AI-based business intelligence to global enterprises. But the startup soon ran into a bigger problem: the domino effect of unreliable data feeding AI engines. "We realized we were barking up the wrong tree," says Murali. "Then we pivoted to focus on automating data checks."

For example, an insurance company allocates a budget to cover 5,000 hospitals in its database, but it turns out that one-third of them are duplicates with a slight alteration in name. "So far in pilots we've run for insurance companies, we showed $35 million in savings, with just partial data. So it's a huge problem," says Murali.
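
One common way to catch such near-duplicates is fuzzy string matching. Below is a toy sketch of the idea using Python's standard library; the hospital names and the 0.9 similarity cutoff are invented, and a production system like Zscore's presumably applies far more sophisticated checks.

from difflib import SequenceMatcher

# The same hospital appearing under slightly altered names (invented).
hospitals = [
    "St. Mary's General Hospital",
    "St Marys General Hospital",
    "Sunrise Medical Centre",
    "Sunrise Medical Center",
    "Lakeside Clinic",
]

def similarity(a: str, b: str) -> float:
    # Compare lowercase, alphanumeric-only versions of the two names.
    norm = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

THRESHOLD = 0.9  # assumed cutoff for flagging a likely duplicate
for i in range(len(hospitals)):
    for j in range(i + 1, len(hospitals)):
        score = similarity(hospitals[i], hospitals[j])
        if score >= THRESHOLD:
            print(f"possible duplicate ({score:.2f}): "
                  f"{hospitals[i]} <-> {hospitals[j]}")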

EXPENSE & EFFORT

This is what prompted IBM chief Arvind Krishna to reveal that the top reason for its clients to halt or cancel AI projects was their data. He pointed out that 80% of an AI project involves collecting and cleansing data, but companies were reluctant to put in the effort and expense for it.

That was in the pre-covid era. "What's happening now is that a lot of companies are keen to accelerate their digital transformation. So customer traction is picking up from banks and insurance companies as well as the manufacturing sector," says Murali.

Data analytics tends to be on the fringes of a company's operations, rather than at its core. Zscore's product aims to change that by automating data flow and improving its quality. Use cases differ from industry to industry. For example, a huge drain on insurance companies is false claims, which can vary from absurdities like male pregnancies and braces for six-month-old toddlers to subtler cases like the same hospital receiving allocations under different names.

"We work with a leading insurance company in Australia, and claims leakage is its biggest source of loss. The moment you save anything in claims, it has a direct impact on revenue," says Murali. "Male pregnancies and braces for six-month-olds seem like simple leaks, but companies tend to ignore them. Legacy systems and rules haven't accounted for all the possibilities. But now a claim comes to our system and multiple algorithms spot anything suspicious. It's a parallel system to the existing claims processing system."

For manufacturing companies, buggy inventory data means placing orders for things they don't need. For example, there can be 15 different serial numbers for spanners. So you might order a spanner that's well-stocked, whereas the ones really required don't show up. "Companies each lose 12-15% of their revenue because of data issues such as duplicate or excessive inventory," says Murali.

These problems have been exacerbated in the age of AI, where algorithms drive decision-making. Companies typically lack the expertise to prepare data in a way that is suitable for machine-learning models. How data is labelled and annotated plays a huge role. Hence the need for supervised machine learning from tech companies like Zscore that can identify bad data and quarantine it.

TO THE ROOTS

Semantics and context analysis, along with the study of manual processes, help develop industry- or organization-specific solutions. "So far, 80-90% of data work has been manual. What we do is automate the identification of data ingredients, data workflows and root cause analysis to understand what's wrong with the data," says Murali.

A couple of years ago, Zscore got into cloud data management multinational NetApp's accelerator programme in Bengaluru. This gave it a foothold abroad with a NetApp client in Australia. It also opened the door to working with large financial institutions.

The Royal Commission of Australia, which is the equivalent of the RBI, had come down hard on the top four banks and financial institutions for passing on faulty information. Its report said decisions had to be based on the right data, and gave financial institutions 18 months to show progress. "This became motivation for us because these were essentially data-oriented problems," says Murali.

Malavika Velayanikal is a consulting editor with Mint. She tweets @vmalu.

AI in Machine Learning Market to Witness Tremendous Growth in Forecasted Period 2020-2027 – NJ MMA News

Reports published in Market Research Inc for the AI in Machine Learning market are spread out over several pages and provide the latest industry data, market future trends, enabling products and end users to drive revenue growth and profitability. Industry reports list and study key competitors and provide strategic industry analysis of key factors affecting market dynamics. This report begins with an overview of the AI in Machine Learning market. It provides a comprehensive analysis of all regional and major player segments, giving insight into current market conditions and future market opportunities, along with drivers, trending segments, consumer behavior, price factors, and market performance and estimates over the forecast period.

Request a pdf copy of this report at https://www.marketresearchinc.com/request-sample.php?id=16243

Key strategic manufacturers: GOOGLE, IBM, BAIDU, SOUNDHOUND, ZEBRA MEDICAL VISION, PRISMA (company profile, main business information, SWOT analysis, sales, revenue, price and gross margin, market share), TensorFlow, Caffe2 & Apache MXNet (market size & forecast, different demand markets by region, main consumer profile, etc.)

The report gives a complete insight into this industry, consisting of the qualitative and quantitative analysis provided for this market, along with prime development trends, competitive analysis, and vital factors that are predominant in the AI in Machine Learning market.

The report also targets local markets and key players who have adopted important strategies for business development. The data in the report is presented in statistical form to help you understand the mechanics. The AI in Machine Learning market report gathers thorough information from proven research methodologies and dedicated sources in many industries.

Avail a 40% discount on this report at https://www.marketresearchinc.com/ask-for-discount.php?id=16243

Key objectives of the AI in Machine Learning market report: study of the annual revenues and market developments of the major players that supply AI in Machine Learning; analysis of the demand for AI in Machine Learning by component; assessment of future trends and growth of architecture in the AI in Machine Learning market; assessment of the AI in Machine Learning market with respect to the type of application; study of the market trends in various regions and countries, by component, of the AI in Machine Learning market; study of contracts and developments related to the AI in Machine Learning market by key players across different regions; finalization of overall market sizes by triangulating the supply-side data, which includes product developments, supply chain, and annual revenues of companies supplying AI in Machine Learning across the globe.

Furthermore, the years considered for the study are as follows:

Historical year 2015-2019

Base year 2019

Forecast period 2020 to 2026

Table of Contents:

AI in Machine Learning Market Research Report
Chapter 1: Industry Overview
Chapter 2: Analysis of Revenue by Classifications
Chapter 3: Analysis of Revenue by Regions and Applications
Chapter 4: Analysis of Market Revenue Market Status
Chapter 5: Analysis of Industry Key Manufacturers
Chapter 6: Marketing Trader or Distributor Analysis of Market
Chapter 7: Development Trend of the AI in Machine Learning Market

Continue for TOC

If You Have Any Query, Ask Our Experts: https://www.marketresearchinc.com/enquiry-before-buying.php?id=16243

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence the other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best they can with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write Us: sales@marketresearchinc.com

https://www.marketresearchinc.com

Machine Learning in Medical Imaging Market 2020 : Analysis by Geographical Regions, Type and Application Till 2025 | Zebra, Arterys, Aidoc, MaxQ AI -…

Global Machine Learning in Medical Imaging Industry: growing at a significant CAGR during the forecast period 2020-2025

Latest Research Report on Machine Learning in Medical Imaging Market which covers Market Overview, Future Economic Impact, Competition by Manufacturers, Supply (Production), and Consumption Analysis

Understand the influence of COVID-19 on the Machine Learning in Medical Imaging Market with our analysts monitoring the situation across the globe. Request Now

The market research report on the global Machine Learning in Medical Imaging industry provides a comprehensive study of the various techniques and materials used in the production of Machine Learning in Medical Imaging market products. Starting from industry chain analysis to cost structure analysis, the report analyzes multiple aspects, including the production and end-use segments of the Machine Learning in Medical Imaging market products. The latest trends in the pharmaceutical industry have been detailed in the report to measure their impact on the production of Machine Learning in Medical Imaging market products.

Leading key players in the Machine Learning in Medical Imaging market are Zebra, Arterys, Aidoc, MaxQ AI, Google, Tencent, Alibaba

Get sample of this report @ https://grandviewreport.com/sample/21159

Product types: Supervised Learning, Unsupervised Learning, Semi-Supervised Learning, Reinforcement Learning

By application/end-user: Breast, Lung, Neurology, Cardiovascular, Liver

Regional Analysis for the Machine Learning in Medical Imaging Market

North America (the United States, Canada, and Mexico); Europe (Germany, France, UK, Russia, and Italy); Asia-Pacific (China, Japan, Korea, India, and Southeast Asia); South America (Brazil, Argentina, Colombia, etc.); the Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria, and South Africa)

Get Discount on Machine Learning in Medical Imaging report @ https://grandviewreport.com/discount/21159

This report comes along with an added Excel data-sheet suite taking quantitative data from all numeric forecasts presented in the report.

Research methodology: The Machine Learning in Medical Imaging market has been analyzed using an optimum mix of secondary sources and benchmark methodology, besides a unique blend of primary insights. The contemporary valuation of the market is an integral part of our market sizing and forecasting methodology. Our industry experts and panel of primary members have helped in compiling appropriate aspects with realistic parametric assessments for a comprehensive study.

What's in the offering: The report provides in-depth knowledge about the utilization and adoption of Machine Learning in Medical Imaging in various applications, types, and regions/countries. Furthermore, key stakeholders can ascertain the major trends, investments, drivers, vertical players' initiatives, government pursuits towards product acceptance in the upcoming years, and insights into commercial products present in the market.

Full Report Link @ https://grandviewreport.com/industry-growth/Machine-Learning-in-Medical-Imaging-Market-21159

Lastly, the Machine Learning in Medical Imaging Market study provides essential information about the major challenges that are going to influence market growth. The report additionally provides overall details about the business opportunities to key stakeholders to expand their business and capture revenues in the precise verticals. The report will help the existing or upcoming companies in this market to examine the various aspects of this domain before investing or expanding their business in the Machine Learning in Medical Imaging market.

Contact Us: Grand View Report

(UK) +44-208-133-9198

(APAC) +91-73789-80300

Email: [emailprotected]

Machine Learning Chips Market Dynamics Analysis to Grow at Cagr with Major Companies and Forecast 2026 – The Scarlet

Machine Learning Chips Market 2018: Global Industry Insights by Global Players, Regional Segmentation, Growth, Applications, Major Drivers, Value and Foreseen till 2024

The recently published research report sheds light on critical aspects of the global Machine Learning Chips market, such as the vendor landscape, competitive strategies, and market drivers and challenges, along with regional analysis. The report helps readers draw suitable conclusions and clearly understand the current and future scenario and trends of the global Machine Learning Chips market. The research study comes out as a compilation of useful guidelines for players to understand and define their strategies more efficiently in order to keep themselves ahead of their competitors. The report profiles leading companies of the global Machine Learning Chips market along with the emerging new ventures that are creating an impact on the global market with their latest innovations and technologies.

Request Sample Report @ https://www.marketresearchhub.com/enquiry.php?type=S&repid=2632983&source=atm

The recently published study includes information on key segmentation of the global Machine Learning Chips market on the basis of type/product, application, and geography (country/region). Each of the segments included in the report is studied in relation to different factors such as market size, market share, value, growth rate, and other quantitative information.

The competitive analysis included in the global Machine Learning Chips market study allows readers to understand the differences between players and how they are operating amongst themselves on a global scale. The research study gives deep insight into the current and future trends of the market, along with the opportunities for new players who are in the process of entering the global Machine Learning Chips market. Market dynamics analysis, such as market drivers and market restraints, is explained thoroughly in the most detailed and easiest possible manner. Companies can also find several recommendations to improve their business on the global scale.

Readers of the Machine Learning Chips Market report can also extract several key insights, such as the market size of various products and applications along with their market share and growth rate. The report also includes forecast data for the next five years and historical data for the past five years, along with the market shares of several key players.

Make An EnquiryAbout This Report @ https://www.marketresearchhub.com/enquiry.php?type=E&repid=2632983&source=atm

Global Machine Learning Chips Market by Companies:

The company profile section of the report offers great insights such as market revenue and market share of global Machine Learning Chips market. Key companies listed in the report are:

Market Segment Analysis: The research report includes specific segments by type and by application. Each type provides information about production during the forecast period of 2015 to 2026. The application segment also provides consumption during the forecast period of 2015 to 2026. Understanding the segments helps in identifying the importance of different factors that aid market growth.

Segment by type: Neuromorphic Chip, Graphics Processing Unit (GPU) Chip, Flash Based Chip, Field Programmable Gate Array (FPGA) Chip, Other

Segment by application: Robotics Industry, Consumer Electronics, Automotive, Healthcare, Other

Global Machine Learning Chips Market: Regional Analysis

The report offers in-depth assessment of the growth and other aspects of the Machine Learning Chips market in important regions, including the U.S., Canada, Germany, France, U.K., Italy, Russia, China, Japan, South Korea, Taiwan, Southeast Asia, Mexico, and Brazil. Key regions covered in the report are North America, Europe, Asia-Pacific, and Latin America.

The report has been curated after observing and studying various factors that determine regional growth, such as the economic, environmental, social, technological, and political status of the particular region. Analysts have studied the data on revenue, production, and manufacturers of each region. This section analyses region-wise revenue and volume for the forecast period of 2015 to 2026. These analyses will help the reader to understand the potential worth of investment in a particular region.

Global Machine Learning Chips Market: Competitive Landscape

This section of the report identifies various key manufacturers of the market. It helps the reader understand the strategies and collaborations that players are focusing on to combat competition in the market. The comprehensive report provides a significant microscopic look at the market. The reader can identify the footprints of the manufacturers by knowing about the global revenue of manufacturers, the global price of manufacturers, and production by manufacturers during the forecast period of 2015 to 2019.

The major players in the market include Wave Computing, Graphcore, Google Inc, Intel Corporation, IBM Corporation, Nvidia Corporation, Qualcomm, Taiwan Semiconductor Manufacturing, etc.

Global Machine Learning Chips Market by Geography:

You can Buy This Report from Here @ https://www.marketresearchhub.com/checkout?rep_id=2632983&licType=S&source=atm

Some of the Major Highlights of TOC covers in Machine Learning Chips Market Report:

Chapter 1: Methodology & Scope of Machine Learning Chips Market

Chapter 2: Executive Summary of Machine Learning Chips Market

Chapter 3: Machine Learning Chips Industry Insights

Chapter 4: Machine Learning Chips Market, By Region

Chapter 5: Company Profile

And Continue

Army looks to machine learning to predict, prevent injuries – GCN.com

The Army is harnessing a sensor and machine learning platform currently used by professional and collegiate sports teams to analyze individual soldiers biomechanics and predict and prevent physical injuries.

According to a March 2020 paper in Military Medicine, noncombat injuries are the leading cause of outpatient medical visits among active Army service members, accounting for nearly 60% of soldiers' limited duty days and 65% of soldiers who cannot deploy for medical reasons. Besides decreasing the number of soldiers available to deploy, these injuries are expensive to treat and can lead to service-connected disability compensation.

The Army's Mission and Installation Contracting Command will be using Sparta Science's Sparta Trac system to collect data on movements used in heavy physical training regimes. The system uses force plates, similar to large bathroom scales, that are equipped with sensors that assess an athlete's core and lower extremity strength. As athletes do various balance, jumping and plank exercises, the system collects and analyzes the data to create a movement signature and show the risk level for musculoskeletal injuries. It also designs customized workouts so soldiers can strengthen weak areas and avoid injuries. The diagnostic test takes five minutes, company officials wrote in an Aug. 18 column for Stars and Stripes.

Force plate technology was singled out for study by the military in the 2021 National Defense Authorization Act. The NDAA encouraged development of a tool that will check warfighters' physical fitness to determine combat readiness. Force plate technology and machine learning capabilities are an important part of that tool, according to the NDAA.

Although force plate systems are already used across the military, the NDAA tasked the Secretary of Defense to report on how many military units are using the systems, as well as whether the technology could be scaled to develop individual fitness programs for at-home and deployed warfighters.

About the Author

Mark Rockwell is a senior staff writer at FCW, whose beat focuses on acquisition, the Department of Homeland Security and the Department of Energy.

Before joining FCW, Rockwell was Washington correspondent for Government Security News, where he covered all aspects of homeland security from IT to detection dogs and border security. Over the last 25 years in Washington as a reporter, editor and correspondent, he has covered an increasingly wide array of high-tech issues for publications like Communications Week, Internet Week, Fiber Optics News, tele.com magazine and Wireless Week.

Rockwell received a Jesse H. Neal Award for his work covering telecommunications issues, and is a graduate of James Madison University.

Click here for previous articles by Rockwell. Contact him at [emailprotected] or follow him on Twitter at @MRockwell4.

Toward a machine learning model that can reason about everyday actions – MIT News

The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending.

Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them.

Their model did as well as or better than humans at two types of visual reasoning tasks: picking the video that conceptually best completes the set, and picking the video that doesn't fit. Shown videos of a dog barking and a man howling beside his dog, for example, the model completed the set by picking the crying baby from a set of five videos. Researchers replicated their results on two datasets for training AI systems in action recognition: MIT's Multi-Moments in Time and DeepMind's Kinetics.

"We show that you can build abstraction into an AI system to perform ordinary visual reasoning tasks close to a human level," says the study's senior author Aude Oliva, a senior research scientist at MIT, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab. "A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making."

As deep neural networks become expert at recognizing objects and actions in photos and video, researchers have set their sights on the next milestone: abstraction, and training models to reason about what they see. In one approach, researchers have merged the pattern-matching power of deep nets with the logic of symbolic programs to teach a model to interpret complex object relationships in a scene. Here, in another approach, researchers capitalize on the relationships embedded in the meanings of words to give their model visual reasoning power.

"Language representations allow us to integrate contextual information learned from text databases into our visual models," says study co-author Mathew Monfort, a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "Words like 'running,' 'lifting,' and 'boxing' share some common characteristics that make them more closely related to the concept 'exercising,' for example, than 'driving.'"

Using WordNet, a database of word meanings, the researchers mapped the relation of each action-class label in Moments and Kinetics to the other labels in both datasets. Words like "sculpting," "carving," and "cutting," for example, were connected to higher-level concepts like "crafting," "making art," and "cooking." Now when the model recognizes an activity like sculpting, it can pick out conceptually similar activities in the dataset.
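
The WordNet step can be illustrated with NLTK's WordNet interface. This is not the authors' code; `hypernym_chain` is a hypothetical helper showing only how an action word connects upward to more abstract concepts.

```python
# Not the authors' code: a quick look at how WordNet links an action word
# upward to higher-level concepts, using NLTK (requires nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def hypernym_chain(word: str) -> list[str]:
    """Walk from the first verb sense of `word` up its hypernym (is-a) chain."""
    sense = wn.synsets(word, pos=wn.VERB)[0]
    chain = [sense.name()]
    while sense.hypernyms():
        sense = sense.hypernyms()[0]
        chain.append(sense.name())
    return chain

print(hypernym_chain("sculpt"))  # climbs toward more abstract verbs

# WordNet also gives a crude similarity score between two action labels:
sculpt = wn.synsets("sculpt", pos=wn.VERB)[0]
carve = wn.synsets("carve", pos=wn.VERB)[0]
print(sculpt.path_similarity(carve))
```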

This relational graph of abstract classes is used to train the model to perform two basic tasks. Given a set of videos, the model creates a numerical representation for each video that aligns with the word representations of the actions shown in the video. An abstraction module then combines the representations generated for each video in the set to create a new set representation that is used to identify the abstraction shared by all the videos in the set.
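The two-stage design just described might look roughly like the following sketch, in which a single linear layer and random word vectors stand in for the paper's trained video encoder and language embeddings; both stand-ins, and mean-pooling as the "abstraction module," are assumptions made for illustration.

```python
# Schematic only: toy stand-ins for the paper's trained video and word networks.
import torch
import torch.nn.functional as F

EMBED = 64
video_encoder = torch.nn.Linear(512, EMBED)  # stand-in for a deep video net
word_vectors = {w: torch.randn(EMBED) for w in ["covering", "descending"]}

def encode_set(clips: torch.Tensor) -> torch.Tensor:
    """Embed each clip, then pool into one set-level vector.

    The paper's abstraction module is a learned combiner; mean-pooling is
    just the simplest possible stand-in."""
    return video_encoder(clips).mean(dim=0)

def shared_abstraction(clips: torch.Tensor) -> str:
    """Name the concept whose word vector best matches the set vector."""
    set_vec = encode_set(clips)
    return max(word_vectors,
               key=lambda w: F.cosine_similarity(set_vec, word_vectors[w], dim=0).item())

clips = torch.randn(4, 512)  # four clips, each as a 512-d feature vector
print(shared_abstraction(clips))
```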

To see how the model would do compared to humans, the researchers asked human subjects to perform the same set of visual reasoning tasks online. To their surprise, the model performed as well as humans in many scenarios, sometimes with unexpected results. In a variation on the set completion task, after watching a video of someone wrapping a gift and covering an item in tape, the model suggested a video of someone at the beach burying someone else in the sand.

"It's effectively covering, but very different from the visual features of the other clips," says Camilo Fosco, a PhD student at MIT who is co-first author of the study with PhD student Alex Andonian. "Conceptually it fits, but I had to think about it."

Limitations of the model include a tendency to overemphasize some features. In one case, it suggested completing a set of sports videos with a video of a baby and a ball, apparently associating balls with exercise and competition.

A deep learning model that can be trained to think more abstractly may be capable of learning with less data, the researchers say. Abstraction also paves the way toward higher-level, more human-like reasoning.

"One hallmark of human cognition is our ability to describe something in relation to something else: to compare and to contrast," says Oliva. "It's a rich and efficient way to learn that could eventually lead to machine learning models that can understand analogies and are that much closer to communicating intelligently with us."

Other authors of the study are Allen Lee from MIT, Rogerio Feris from IBM, and Carl Vondrick from Columbia University.

Go here to see the original:
Toward a machine learning model that can reason about everyday actions - MIT News

Machine Learning as a Service Market to Witness Astonishing Growth by 2026 | Amazon, Oracle Corporation, IBM and more – The News Brok

The report also tracks the latest Machine Learning as a Service Market dynamics, such as driving factors, restraining factors, and industry news like mergers, acquisitions, and investments. It provides market size (value and volume), market share, and growth rate by type and application, and combines both qualitative and quantitative methods to make micro and macro forecasts in different regions and countries.

Prominent players profiled in the study: Amazon, Oracle Corporation, IBM, Microsoft Corporation, Google Inc., Salesforce.Com, Tencent, Alibaba, UCloud, Baidu, Rackspace, SAP AG, Century Link Inc., CSC (Computer Science Corporation), Heroku, Clustrix, Xeround

Sample Report with Latest Industry Trends @ https://www.statsandreports.com/request-sample/314706-global-machine-learning-as-a-service-market-size-status-and-forecast-2019-2025

Understand the Global Machine Learning as a Service market with the assistance of our expert analysts, who track worldwide fluctuations. This market report will answer your queries regarding the growth of your business during the Covid-19 pandemic.

This report provides an overview of the Machine Learning as a Service market, containing global revenue, global production, sales, and CAGR. It also describes the Machine Learning as a Service product scope, market overview, market opportunities, market driving forces, and market risks. The forecast and analysis of the Machine Learning as a Service market by type, application, and region are also presented. The next part of the report provides a full-scale analysis of the competitive situation, sales, revenue, and global market share of major players in the Machine Learning as a Service industry. Basic information, as well as profiles, applications, and specifications of product market performance, along with a business overview, are offered.

Product Type: Private clouds, Public clouds, Hybrid cloud

Application: Personal, Business

Geographical Regions: North America, Europe, Central & South America, Asia-Pacific, and the Middle East & Africa, etc.

Get a Reasonable Discount on this Premium Report @ https://www.statsandreports.com/check-discount/314706-global-machine-learning-as-a-service-market-size-status-and-forecast-2019-2025

Machine Learning as a Service Market

Scope of the Machine Learning as a Service Report:

Worldwide Machine Learning as a Service Market 2020: market size value CAGR (XX %) and revenue (USD million) for the historical years (2016 to 2018) and forecast years (2020 to 2026), with SWOT analysis, industry analysis, demand, sales, market drivers, restraints, opportunities, and a forecast to 2026, are covered in this research report.

This report covers the current scenario and growth prospects of the Machine Learning as a Service Market for the period 2020-2026. The study is a professional and in-depth one, with tables and figures providing key statistics on the state of the industry, and is a valuable source of guidance and direction for companies and individuals interested in the domain.

Finally, all aspects of the Global Machine Learning as a Service Market are quantitatively as well as qualitatively assessed to study the global as well as regional market comparatively. This market study presents critical information and factual data about the market, providing an overall statistical study on the basis of market drivers, limitations and future prospects.

You can Buy This Report from Here: https://www.statsandreports.com/placeorder?report=314706-global-machine-learning-as-a-service-market-size-status-and-forecast-2019-2025&type=SingleUser

Thank you for reading our report. For more information on customization, please reach out to us. Our team will ensure the report is tailored according to your needs.

About Us

Stats and Reports is a global market research and consulting service provider specializing in offering a wide range of business solutions to its clients, including market research reports, primary and secondary research, demand forecasting services, focus group analysis and other services. We understand how important data is in today's competitive environment, and thus we have collaborated with the industry's leading research providers, who work continuously to meet the ever-growing demand for market research reports throughout the year.

Contact:

Stats and Reports
Mangalam Chamber, Office No 16, Paud Road
Sankalp Society, Kothrud, Pune, Maharashtra 411038
Phone: +1 650-646-3808
Email: [emailprotected]
Website: https://www.statsandreports.com
Follow Us on: LinkedIn | Twitter

More:
Machine Learning as a Service Market to Witness Astonishing Growth by 2026 | Amazon, Oracle Corporation, IBM and more - The News Brok

Iridium Unveils the World’s First ML Algorithms That Decode the Value of Investor Relations – AiThority

Four Different Machine Learning Algorithms Were Deployed to Analyze 9 Million Data Points Across 673 Global Banks, Including 65 GCC Banks, to Explain up to 98% of What Drives Bank Valuations

Iridium Quant Lens Shows That Investor Relations Adds up to 24.2% of GCC Bank Valuations

IR Quality Is the 3rd Most Important Factor Impacting Price/Tangible Book Value of GCC Banks

The quality of investor relations can add up to 24.2 percent to a listed company's market capitalization, according to a new data science project by Iridium Advisors that uses four different machine learning algorithms to calculate the impact of 30 financial and non-financial valuation drivers.

Oliver Schutzmann, CEO of Iridium Advisors, said: "Many boards and management teams in emerging markets have not yet invested sufficiently in investor relations because they do not fully understand the value it adds. Against this background, we sought to take a scientific and systematic approach to show how the business value they create can be translated into market value, and thereby quantify the value of investor relations. With the insights gained from Iridium Quant Lens Machine Learning algorithms, we can now help business leaders understand what exactly drives their market value and show them how to unlock material valuation potential."

The Iridium Quant Lens machine learning (ML) platform was built on the foundations of classic finance theory: that a company's stock price is derived through an evaluation of risk relative to return factors by equity market participants. In order to identify the financial and non-financial drivers of bank valuations, four different machine learning algorithms were deployed to consider 30 risk and return metrics, compiled from over 9 million data points and covering 673 banks globally. The Quant Lens algorithms were run separately for all banks and for 65 GCC banks over different time horizons ranging from 1 to 10 years.

Iridium's algorithms proved successful in decomposing valuation drivers and, in aggregate, explained 86 percent of valuation variability for the test data set and 91 percent for the full data set. Furthermore, some individual models, such as the 3-year models for GCC banks, explained up to 95 percent of the test data set and 98 percent of the full data set.
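
Iridium has not released its models, but the general shape of the exercise, fitting a model to bank risk/return metrics and asking how much valuation variability it explains, can be sketched with scikit-learn. All data, features, and numbers below are fabricated for illustration.

```python
# Synthetic sketch of the exercise: fit a model to bank metrics and report
# how much valuation variability (R^2) it explains. Not Iridium's code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.standard_normal((673, 30))  # 673 banks x 30 metrics, all synthetic
y = X[:, :5].sum(axis=1) + 0.3 * rng.standard_normal(673)  # fake P/TBV target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 2))
print("full R^2:", round(r2_score(y, model.predict(X)), 2))
# Feature importances are how a driver like "IR quality" would be ranked:
print("most important metric index:", int(model.feature_importances_.argmax()))
```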

A significant finding of this study was that the quality of investor relations, based on a classification into IR-Agnostic, IR-Basic and IR-Emerging archetypes, is a highly material factor consistently influencing valuations of GCC banks. In fact, for most models it was the third most important factor impacting price to tangible book value (P/TBV), and it explained 6% of share price variability on average.

In addition, the impact of upgrading investor relations is significant, with each upgrade step in a 2-stage upgrade path commanding a 12 percent valuation premium on average, and a complete move along the investor relations upgrade path adding 24 percent to market capitalization.
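
A quick arithmetic check on these figures: two 12 percent steps sum to exactly 24 percent, while compounding them would give about 25.4 percent, so the quoted total is consistent with roughly additive steps.

```python
# Sanity-check the quoted totals for a 2-stage, 12%-per-step upgrade path.
step = 0.12
print("additive total:  ", 2 * step)             # 0.24
print("compounded total:", (1 + step) ** 2 - 1)  # ~0.2544
```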

To illustrate the impact of IR Quality with real-world examples: one bank (Bank A) currently operates at an IR-Emerging level, which adds 0.16x to its P/TBV valuation. Given the bank's current market capitalization of USD 33 billion, this translates to almost USD 3 billion of its market value, or the equivalent of USD 220 million in net profits. Considering that the valuation uplift achieved at the IR-Emerging level is typically attainable with a USD 1.0 million annual IR budget, this is a compelling return on investment. The converse is true for low IR quality: Bank X currently operates at an IR-Agnostic level, which in fact subtracts 0.07x from its P/TBV valuation.

Go here to see the original:
Iridium Unveils the World's First ML Algorithms That Decode the Value of Investor Relations - AiThority

Grants totaling $4.6 million support the use of machine learning to improve outcomes of people with HIV – Brown University

PROVIDENCE, R.I. [Brown University] Over the past four decades of treating HIV/AIDS, two important facts have been established: HIV-positive patients need to be put on treatment as soon as they're diagnosed and then kept on an effective treatment plan. "This response can help turn HIV into a chronic but manageable disease and can essentially help people live normal, healthy lives," said Joseph Hogan, a professor of public health and of biostatistics at Brown University, who has been researching HIV/AIDS for 25 years.

Hogan is one of the principal investigators on two recently awarded grants from the National Institutes of Health, totaling nearly $4.6 million over five years, to support the creation and utilization of data-driven tools that will allow care programs in Kenya to meet these key treatment goals.

"If the system works as designed, then we have confidence that we'll improve the health outcomes of people with HIV," Hogan said.

The first part of the project involves using data science to understand what's called the HIV care cascade, said Hogan, who is the co-director of the biostatistics program for the Academic Model Providing Access to Healthcare (AMPATH), a consortium of 14 North American universities that collaborate with Moi University in Eldoret, Kenya, on HIV research, care and training.

Hogan will collaborate with longtime scientific partner Ann Mwangi, an associate professor of biostatistics at Moi University who received a Ph.D. in biostatistics from Brown in 2011. Using the AMPATH-developed electronic health record database, a team co-led by Hogan and Mwangi will develop algorithm-based statistical machine learning tools to predict when and why patients might drop out of care and when their viral load levels indicate they are at risk of treatment failure.
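
The AMPATH tools are still being built and their features are not public; the sketch below invents EHR fields to show only the general form of the loss-to-follow-up classifier the grant describes.

```python
# Hypothetical sketch, not the AMPATH system: a loss-to-follow-up classifier
# trained on made-up EHR fields.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

ehr = pd.DataFrame({  # invented per-patient extract
    "missed_visits_last_year": [0, 3, 1, 5, 0, 2],
    "months_on_treatment":     [24, 6, 18, 3, 36, 12],
    "last_viral_load_log10":   [1.3, 4.2, 1.6, 4.8, 1.2, 2.9],
    "dropped_out":             [0, 1, 0, 1, 0, 1],  # label to predict
})
X, y = ehr.drop(columns="dropped_out"), ehr["dropped_out"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV AUC:", cross_val_score(clf, X, y, cv=2, scoring="roc_auc").mean())

# At the point of care, the fitted model would score the patient in the room:
clf.fit(X, y)
print("dropout risk:", clf.predict_proba(X.iloc[[1]])[0, 1])
```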

These algorithms, Hogan said, will then be integrated into the electronic health record system to deliver the information at the point of care, through handheld tablets that the physicians can use when sitting in the exam room with the patient. In consultation with experts in user interface design, the team will assess and test the most effective ways to communicate the results of the algorithm to the care providers so that they can use them to make decisions about patient care, Hogan said.

The predictive modeling system the team is developing, Hogan said, will alert a physician to red flags in the patient's treatment plan at the point of care. This way, interventions can be developed to help a patient get to their treatment appointments, for example, before the patient needs to miss or cancel them. Or if a patient is predicted to have a high viral load, Hogan said, a clinician can refer them for additional monitoring to identify and treat the increase before it becomes a problem.

Original post:
Grants totaling $4.6 million support the use of machine learning to improve outcomes of people with HIV - Brown University

Utilizing Machine Learning on Internet Search Activity to Support the Diagnostic Process and Relapse Detection in Young Individuals With Early…

Psychiatry is nearly entirely reliant on patient self-reporting, and there are few objective and reliable tests or sources of collateral information available to help diagnostic and assessment procedures. Technology offers opportunities to collect objective digital data to complement patient experience and facilitate more informed treatment decisions. We aimed to develop computational algorithms based on internet search activity designed to support diagnostic procedures and relapse identification in individuals with schizophrenia spectrum disorders. We extracted 32,733 time-stamped search queries across 42 participants with schizophrenia spectrum disorders and 74 healthy volunteers between the ages of 15 and 35 (mean 24.4 years, 44.0% male), and built machine-learning diagnostic and relapse classifiers utilizing the timing, frequency, and content of online search activity. Classifiers predicted a diagnosis of schizophrenia spectrum disorders with an area under the curve value of 0.74 and predicted a psychotic relapse in individuals with schizophrenia spectrum disorders with an area under the curve of 0.71. Compared with healthy participants, those with schizophrenia spectrum disorders made fewer searches and their searches consisted of fewer words. Prior to a relapse hospitalization, participants with schizophrenia spectrum disorders were more likely to use words related to hearing, perception, and anger, and were less likely to use words related to health. Online search activity holds promise for gathering objective and easily accessed indicators of psychiatric symptoms. Utilizing search activity as collateral behavioral health information would represent a major advancement in efforts to capitalize on objective digital data to improve mental health monitoring.

Michael Leo Birnbaum, Prathamesh Param Kulkarni, Anna Van Meter, Victor Chen, Asra F Rizvi, Elizabeth Arenare, Munmun De Choudhury, John M Kane. Originally published in JMIR Mental Health (http://mental.jmir.org), 01.09.2020.
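
The study's classifiers are not published alongside the abstract; this toy Python version, with an invented word list and fabricated query logs, shows only the shape of the pipeline: per-participant timing/frequency/content features, then a classifier scored by AUC.

```python
# Toy version of the described pipeline; the lexicon and "participants" below
# are fabricated for illustration, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

PERCEPTION_WORDS = {"hear", "hearing", "voices", "seeing"}  # assumed lexicon

def featurize(queries: list[str], days_observed: int) -> list[float]:
    words = [w for q in queries for w in q.lower().split()]
    return [
        len(queries) / days_observed,                                     # frequency
        len(words) / max(len(queries), 1),                                # words per query
        sum(w in PERCEPTION_WORDS for w in words) / max(len(words), 1),   # content
    ]

logs = [  # (queries, observation window in days, label: 1 = patient group)
    (["weather today", "bus schedule to work"], 30, 0),
    (["why am i hearing voices", "hearing things at night"], 30, 1),
    (["cheap flights", "news", "football scores today"], 30, 0),
    (["voices wont stop", "hearing"], 30, 1),
]
X = np.array([featurize(q, d) for q, d, _ in logs])
y = np.array([label for _, _, label in logs])

clf = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```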

PubMed

Read more:
Utilizing Machine Learning on Internet Search Activity to Support the Diagnostic Process and Relapse Detection in Young Individuals With Early...

Machine learning and statistical prediction of patient quality-of-life after prostate radiation therapy. – UroToday

Thanks to advancements in diagnosis and treatment, prostate cancer patients have high long-term survival rates. Currently, an important goal is to preserve quality of life during and after treatment. The relationship between the radiation a patient receives and the subsequent side effects he experiences is complex and difficult to model or predict. Here, we use machine learning algorithms and statistical models to explore the connection between radiation treatment and post-treatment gastro-urinary function. Since only a limited number of patient datasets are currently available, we used image flipping and curvature-based interpolation methods to generate more data to leverage transfer learning. Using interpolated and augmented data, we trained a convolutional autoencoder network to obtain near-optimal starting points for the weights. A convolutional neural network then analyzed the relationship between patient-reported quality-of-life and radiation doses to the bladder and rectum. We also used analysis of variance and logistic regression to explore organ sensitivity to radiation and to develop dosage thresholds for each organ region. Our findings show no statistically significant association between radiation dose to the bladder and quality-of-life scores. However, we found a statistically significant association between the radiation applied to posterior and anterior rectal regions and changes in quality of life. Finally, we estimated radiation therapy dose thresholds for each organ. Our analysis connects machine learning methods with organ sensitivity, thus providing a framework for informing cancer patient care using patient-reported quality-of-life metrics.
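
No code accompanies the abstract, so the PyTorch skeleton below only mirrors the described pipeline (flip augmentation, a convolutional autoencoder for weight initialization, then reusing the encoder in a dose-to-outcome predictor); every shape and layer size is an assumption.

```python
# Skeleton of the described pipeline; all shapes and layer sizes are assumed.
import torch
import torch.nn as nn

class DoseAutoencoder(nn.Module):
    """Convolutional autoencoder pretrained on dose maps for weight initialization."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

dose_maps = torch.rand(32, 1, 64, 64)  # synthetic stand-ins for patient dose maps
# "Image flipping" augmentation, as mentioned in the abstract:
augmented = torch.cat([dose_maps, torch.flip(dose_maps, dims=[3])])

ae = DoseAutoencoder()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(5):  # token pretraining loop; a real run trains far longer
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(augmented), augmented)
    loss.backward()
    opt.step()

# Transfer: reuse the pretrained encoder inside a quality-of-life predictor.
predictor = nn.Sequential(ae.encoder, nn.Flatten(), nn.Linear(16 * 16 * 16, 1))
print(predictor(dose_maps[:4]).shape)  # one predicted QoL score per patient
```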

Computers in biology and medicine. 2020 Nov 28 [Epub ahead of print]

Zhijian Yang, Daniel Olszewski, Chujun He, Giulia Pintea, Jun Lian, Tom Chou, Ronald C Chen, Blerta Shtylla

New York University, New York, NY, 10012, USA; Applied Mathematics and Computational Science Program, University of Pennsylvania, Philadelphia, PA, 19104, USA., Carroll College, Helena, MT, 59625, USA; Computer, Information Science and Engineering Department, University of Florida, Gainesville, FL, 32611, USA., Smith College, Northampton, MA, 01063, USA., Simmons University, Boston, MA, USA; Department of Psychology, Tufts University, Boston, MA, 02111, USA., Department of Radiation Oncology, The University of North Carolina, Chapel Hill, NC, 27599, USA., Depts. of Computational Medicine and Mathematics, UCLA, Los Angeles, CA, 90095-1766, USA., Department of Radiation Oncology, University of Kansas Medical Center, Kansas City, KS, 66160, USA., Department of Mathematics, Pomona College, Claremont, CA, 91711, USA; Early Clinical Development, Pfizer Worldwide Research, Development, and Medical, Pfizer Inc, San Diego, CA, 92121, USA.

PubMed http://www.ncbi.nlm.nih.gov/pubmed/33333364

Read more:
Machine learning and statistical prediction of patient quality-of-life after prostate radiation therapy. - UroToday

Machine Learning in Finance Market Benefits, Forthcoming Developments, Business Opportunities & Future Investments to 2028 KSU | The Sentinel…

COVID-19 can affect the global economy in three main ways: by directly affecting production and demand, by creating supply chain and market disruption, and by its financial impact on firms and financial markets. The Global Machine Learning in Finance Market report covers and analyses the potential of the worldwide industry, providing statistics and information on market dynamics, market analysis, growth factors, key challenges, major drivers and restraints, opportunities and forecasts. This report presents a comprehensive overview, market shares, and growth opportunities of the market in 2021 by product type, application, key manufacturers, and key regions and countries.

Market Research Inc. proclaims a new addition of comprehensive data to its extensive repository, titled Machine Learning in Finance market. This informative data has been scrutinized using effective methodologies such as primary and secondary research techniques. This research report estimates the scale of the global Machine Learning in Finance market over the upcoming years. The recent trends, tools and methodologies have been examined to get a better insight into the businesses.

Request a sample copy of this report @:

https://www.marketresearchinc.com/request-sample.php?id=31104

Top key players: Ignite Ltd, Yodlee, Trill A

Additionally, it throws light on different dynamic aspects of the businesses, which helps in understanding their framework. The competitive landscape has been elaborated on the basis of profit margin, which helps in understanding the competitors at the domestic as well as the global level.

The global Machine Learning in Finance market has been studied by considering numerous attributes such as type, size, applications, and end-users. It includes investigations on the basis of current trends, historical records, and future prospects. This statistical data helps in making informed business decisions for the progress of the industries. For an effective and stronger business outlook, some significant case studies have been mentioned in this report.

Get a reasonable discount on this premium report @:

https://www.marketresearchinc.com/ask-for-discount.php?id=31104

Key Objectives of Machine Learning in Finance Market Report:

Study of the annual revenues and market developments of the major players that supply Machine Learning in Finance

Analysis of the demand for Machine Learning in Finance by component

Assessment of future trends and growth of architecture in the Machine Learning in Finance market

Assessment of the Machine Learning in Finance market with respect to the type of application

Study of the market trends in various regions and countries, by component, of the Machine Learning in Finance market

Study of contracts and developments related to the Machine Learning in Finance market by key players across different regions

Finalization of overall market sizes by triangulating the supply-side data, which includes product developments, supply chain, and annual revenues of companies supplying Machine Learning in Finance across the globe

Further information:

https://www.marketresearchinc.com/enquiry-before-buying.php?id=31104

In this study, the years considered to estimate the size of Machine Learning in Finance are as follows:

History Year: 2016-2019

Base Year: 2020

Forecast Year: 2021 to 2028

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product/service become the best it can be with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write Us: sales@marketresearchinc.com

https://www.marketresearchinc.com

Link:
Machine Learning in Finance Market Benefits, Forthcoming Developments, Business Opportunities & Future Investments to 2028 KSU | The Sentinel...

AutoML Alleviates the Process of Machine Learning Analysis – Analytics Insight

Machine Learning (ML) is constantly being adopted by diverse organizations eager to acquire answers and analysis. As adoption rapidly increases, it is often forgotten that machine learning has flaws that need to be addressed if it is to deliver reliable solutions.

Applications of artificial intelligence and machine learning are using new tools to find practical answers to difficult problems. Companies move forward with these emerging technologies to gain a competitive edge in their working style and systems. Through the process, organizations are learning a very important lesson: one strategy doesn't fit all. Business organizations want machine learning to analyze large volumes of data, which is complex and difficult. They neglect the fact that machine learning can't perform well on diverse, ill-suited data stores, and even when it does run, it may conclude with a wrong prediction.

Analysing unstructured and overwhelmingly large datasets with machine learning is dangerous. Machine learning might conclude with a wrong solution while performing predictive analysis on such data, and implementing that misconception in a company's working system might drag down its improvement. Many products that incorporate machine learning capabilities use predetermined algorithms and many diverse ways to handle data. However, each organization's data has different technical characteristics that might not go well with the existing machine learning configuration.

To address the problems where machine learning falls short, AutoML takes them head-on from the company's data analysis perspective. AutoML takes over the labour-intensive job of choosing and tuning machine learning models. The technology takes on many repetitive tasks where skilful problem definition and data preparation are needed. It reduces the need to understand algorithm parameters and shortens the compute time needed to produce better models.

Machine learning is an application of artificial intelligence that provides systems with the ability to automatically learn and improve from experience without being explicitly programmed. The technology focuses on the development of computer programs that can access data and use it to learn for themselves. A model is created and trained on a set of previously gathered data with known outcomes, and the model can then be used to make predictions on new data.

However, machine learning can't get accurate results all the time. Much depends on the data scientist handling the machine learning configurations and data inputs. A data scientist studies the input data and understands the desired output to solve business problems. They choose an apt mathematical algorithm from dozens of candidates, tune its parameters (called hyperparameters), and evaluate the resulting models. The data scientist has to adjust the algorithm's tuning parameters again and again until the machine learning model produces the desired result. If the results are not satisfactory, the data scientist might even start over from the very beginning.
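
That tuning loop is mechanical enough that parts of it can be automated even without full AutoML. Scikit-learn's grid search, for instance, fits the model once per hyperparameter combination and keeps the best; the dataset and grid below are illustrative choices, not a recommendation.

```python
# The manual tuning loop, mechanized with an exhaustive grid search.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)
grid = {"n_estimators": [50, 200], "max_depth": [3, None]}  # illustrative grid

search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3)
search.fit(X, y)  # one fit per combination per fold, best kept automatically
print(search.best_params_, round(search.best_score_, 3))
```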

Machine learning systems struggle to function when the data is too large or unorganised. Some of the other machine learning issues, each illustrated in the sketch after this list, are:

Classification: The process of labeling data can be thought of as a discrimination problem, modeling the similarities between groups.

Regression: Machine learning can struggle to predict the value of new, previously unseen data.

Clustering: Data can be divided into groups based on similarity and other measures of natural structure in the data, but human hands are still needed to assign names to the groups.
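
For concreteness, here is each of the three problem types in a few lines of scikit-learn on toy data; note that the clustering output is unnamed groups, matching the point above that humans still have to name them.

```python
# Classification, regression, and clustering side by side on toy data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LinearRegression, LogisticRegression

X, y = make_blobs(n_samples=60, centers=2, random_state=0)

labels = LogisticRegression().fit(X, y).predict(X)              # classification
values = LinearRegression().fit(X, y.astype(float)).predict(X)  # regression
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # clustering
print(labels[:5], values[:5].round(2), groups[:5])  # groups carry no names
```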

As mentioned earlier, machine learning alone can't address an organisation's datasets to find predictions. Here are some reasons why machine learning algorithms are challenging to choose and tune, and how AutoML can prove useful in such instances.

Choosing the right algorithm: It is not always obvious which algorithm will work well for building real-value predictions, anomaly detection, and classification models for a particular data set. Data scientists have to go through many well-known machine learning algorithms that could suit the real-world situation. It could take weeks or even months to come up with the right one.

Selecting relevant information: Data storage holds diverse data variables or predictors. Hence, it is hard to tell which of those data points are significant for making a decision. This process of selecting relevant information to include in data models is called feature selection.

Training machine learning models: One of the most difficult steps in machine learning is choosing the subset of data that will be used for training a machine learning model. In some cases, training against certain data variables or predictors can increase training time while actually reducing the accuracy of the ML model.

Automated machine learning (AutoML) basically involves automating the end-to-end process of applying machine learning to real-world problems that are actually relevant in the industry. AutoML makes well-educated guesses to select a suitable ML algorithm and effective initial hyperparameters. The technology tests the accuracy of training the chosen algorithms with those parameters, makes tiny adjustments, and tests the results again. AutoML also automates the creation of small, accurate subsets of data to use for those iterative refinements, yielding excellent results in a fraction of the time.
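
A bare-bones version of that loop might look like the sketch below: try several candidate algorithms on a small data subset, score each by cross-validation, and keep the best. Real AutoML systems search far more intelligently; this is only the skeleton of the idea.

```python
# Minimal AutoML-style loop: score a few candidate models, keep the winner.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_small, y_small = X[:200], y[:200]  # small subset keeps each trial fast

candidates = [
    make_pipeline(StandardScaler(), LogisticRegression()),
    RandomForestClassifier(n_estimators=50, random_state=0),
    GradientBoostingClassifier(random_state=0),
]
scores = [cross_val_score(m, X_small, y_small, cv=3).mean() for m in candidates]
best = max(range(len(candidates)), key=scores.__getitem__)
print(type(candidates[best]).__name__, round(scores[best], 3))
```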

In a nutshell, AutoML acts as the right tool to quickly choose, build and deploy machine learning models that deliver accurate results.

See the original post here:
AutoML Alleviates the Process of Machine Learning Analysis - Analytics Insight