Global Contextual Advertising Markets, 2019-2025: Advances in AI and Machine Learning to Boost Prospects for Real-Time Contextual Targeting

The "Contextual Advertising - Market Analysis, Trends, and Forecasts" report has been added to ResearchAndMarkets.com's offering.

The Contextual Advertising market worldwide is projected to grow by US$279.2 Billion over the analysis period, driven by a compound annual growth rate of 18.5%.

Activity-based Advertising, one of the segments analyzed and sized in this study, displays the potential to grow at over 18.6%. The shifting dynamics supporting this growth make it critical for businesses in this space to keep abreast of the changing pulse of the market. Poised to reach over US$166.2 Billion by the year 2025, Activity-based Advertising will bring in healthy gains, adding significant momentum to global growth.

Representing the developed world, the United States will maintain a 16.5% growth momentum. Within Europe, which continues to remain an important element in the world economy, Germany will add over US$10.6 Billion to the region's size and clout in the next 5 to 6 years. Over US$8.9 Billion worth of projected demand in the region will come from the rest of the European markets. In Japan, Activity-based Advertising will reach a market size of US$7 Billion by the close of the analysis period.

As the world's second-largest economy and the new game changer in global markets, China exhibits the potential to grow at 23.6% over the next couple of years, adding approximately US$69.7 Billion in addressable opportunity for aspiring businesses and their astute leaders.

These and many more need-to-know quantitative data points are presented in visually rich graphics, supporting the quality of strategic decisions, be it entry into new markets or allocation of resources within a portfolio.

Several macroeconomic factors and internal market forces will shape growth and development of demand patterns in emerging countries in Asia-Pacific, Latin America, and the Middle East. All research viewpoints presented are based on validated engagements with influencers in the market, whose opinions supersede all other research methodologies.

Competitors identified in this market include:

Key Topics Covered:

1. MARKET OVERVIEW

2. FOCUS ON SELECT PLAYERS

3. MARKET TRENDS & DRIVERS

4. GLOBAL MARKET PERSPECTIVE

For more information about this report visit https://www.researchandmarkets.com/r/q96k8q

View source version on businesswire.com: https://www.businesswire.com/news/home/20191219005420/en/

Contacts

ResearchAndMarkets.com
Laura Wood, Senior Press Manager
press@researchandmarkets.com
For E.S.T Office Hours Call 1-917-300-0470
For U.S./CAN Toll Free Call 1-800-526-8630
For GMT Office Hours Call +353-1-416-8900


There's No Such Thing As The Machine Learning Platform – Forbes

In the past few years, you might have noticed the increasing pace at which vendors are rolling out platforms that serve the AI ecosystem, namely addressing data science and machine learning (ML) needs. The Data Science Platform and Machine Learning Platform are at the front lines of the battle for the mind share and wallets of data scientists, ML project managers, and others who manage AI projects and initiatives. If you're a major technology vendor and you don't have some sort of big play in the AI space, then you risk rapidly becoming irrelevant. But what exactly are these platforms, and why is there such an intense market share grab going on?

The core of this insight is the realization that ML and data science projects are nothing like typical application or hardware development projects. Whereas hardware and software development have traditionally focused on the functionality of systems or applications, data science and ML projects are really about managing data, continuously evolving what is learned from that data, and iterating on data models. Typical development processes and platforms simply don't work from a data-centric perspective.

It should be no surprise, then, that technology vendors of all sizes are focused on developing platforms that data scientists and ML project managers will depend on to develop, run, operate, and manage their ongoing data models for the enterprise. To these vendors, the ML platform of the future is like the operating system, cloud environment, or mobile development platform of the past and present. If you can dominate market share for data science / ML platforms, you will reap rewards for decades to come. As a result, everyone with a dog in this fight is scrambling to own a piece of this market.

However, what does a Machine Learning platform look like? How is it the same as or different from a Data Science platform? What are the core requirements for ML platforms, and how do they differ from those of more general data science platforms? Who are the users of these platforms, and what do they really want? Let's dive deeper.

What is the Data Science Platform?

Data scientists are tasked with wrangling useful information from a sea of data and translating business and operational informational needs into the language of data and math. Data scientists need to be masters of statistics, probability, mathematics, and algorithms that help to glean useful insights from huge piles of information. A data scientist creates data hypotheses, runs tests and analyses of the data, and then translates the results for someone else in the organization to easily view and understand. So it follows that a pure data science platform would help craft data models, determine the best fit of information to a hypothesis, test that hypothesis, facilitate collaboration among teams of data scientists, and help manage and evolve the data model as information continues to change.
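The hypothesize-test-translate loop described above can be sketched in miniature. This is an illustrative example only (the data, threshold, and hypothesis are invented for this sketch, not drawn from the article): pose a hypothesis about a relationship in the data, quantify it, and report the result in plain language.

```python
# Minimal sketch of the data science loop: hypothesis -> test -> translate.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothesis: larger marketing spend is associated with higher sales.
# (Toy numbers, invented for illustration.)
spend = [10, 12, 15, 18, 22, 25]
sales = [101, 108, 118, 130, 141, 155]

r = pearson(spend, sales)
# Translate the numeric result into language a stakeholder can act on.
verdict = "supported" if r > 0.7 else "not supported"
print(f"correlation r = {r:.2f}; hypothesis {verdict}")
```

A real analysis would of course add significance testing and controls; the point here is only the shape of the workflow.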

Furthermore, data scientists don't focus their work in code-centric Integrated Development Environments (IDEs), but rather in notebooks. First popularized by academically oriented, math-centric platforms like Mathematica and Matlab, and now prominent in the Python, R, and SAS communities, notebooks are used to document data research and simplify reproducibility of results by allowing the notebook to run on different source data. The best notebooks are shared, collaborative environments where groups of data scientists can work together and iterate models over constantly evolving data sets. While notebooks don't make great environments for developing code, they make great environments to collaborate, explore, and visualize data. Indeed, the best notebooks are used by data scientists to quickly explore large data sets, assuming sufficient access to clean data.

However, data scientists can't perform their jobs effectively without access to large volumes of clean data. Extracting, cleaning, and moving data is not really the role of a data scientist, but rather that of a data engineer. Data engineers are challenged with taking data from a wide range of systems, in structured and unstructured formats, data which is usually not clean: missing fields, mismatched data types, and other data-related issues. In this way, the data engineer is an engineer who designs, builds, and arranges data. Good data science platforms also enable data scientists to easily leverage compute power as their needs grow. Instead of copying data sets to a local computer to work on them, platforms allow data scientists to easily access compute power and data sets with minimal hassle. A data science platform is challenged with providing these data engineering capabilities as well. As such, a practical data science platform will combine data science capabilities with the necessary data engineering functionality.

What is the Machine Learning Platform?

We just spent several paragraphs talking about data science platforms without even once mentioning AI or ML. Of course, the overlap is the use of data science techniques and machine learning algorithms applied to large sets of data for the development of machine learning models. The tools that data scientists use on a daily basis overlap significantly with the tools used by ML-focused scientists and engineers. However, these tools aren't the same, because the needs of ML scientists and engineers are not the same as those of more general data scientists and engineers.

Rather than just focusing on notebooks and the ecosystem to manage and work collaboratively with others on those notebooks, those tasked with managing ML projects need access to the range of ML-specific algorithms, libraries, and infrastructure to train those algorithms over large and evolving datasets. An ideal ML platform helps ML engineers and data scientists discover which machine learning approaches work best, tune hyperparameters, and deploy compute-intensive ML training across on-premises or cloud-based CPU, GPU, and/or TPU clusters, and it provides an ecosystem for managing and monitoring both unsupervised and supervised modes of training.

Clearly, a collaborative, interactive, visual system for developing and managing ML models in a data science platform is necessary, but it's not sufficient for an ML platform. As hinted above, one of the more challenging parts of making ML systems work is the setting and tuning of hyperparameters. The whole concept of a machine learning model is that it requires various parameters to be learned from the data: what machine learning actually learns are the parameters of the model, to which new data is then fitted. Hyperparameters, by contrast, are configurable values that are set prior to training an ML model and can't be learned from data. These hyperparameters control factors such as model complexity, speed of learning, and more. Different ML algorithms require different hyperparameters, and some don't need any at all. ML platforms help with the discovery, setting, and management of hyperparameters, among other things, including the algorithm selection and comparison that non-ML-specific data science platforms don't provide.
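As a concrete illustration of the parameter/hyperparameter distinction (a minimal invented sketch, not any vendor's platform): the weight w below is a parameter learned from data, while the learning rate and epoch count are hyperparameters fixed before training and chosen by a simple grid search over validation error.

```python
# Hyperparameters (lr, epochs) are set before training; the parameter w
# is learned from data by gradient descent on mean squared error.

def train(xs, ys, lr, epochs):
    """Learn a single weight w for the model y = w * x."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def mse(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Toy data generated by y = 3x (invented for illustration)
train_x, train_y = [1, 2, 3, 4], [3, 6, 9, 12]
val_x, val_y = [5, 6], [15, 18]

# Hyperparameter grid: these values are chosen *before* training, not learned.
grid = [(lr, epochs) for lr in (0.001, 0.01, 0.05) for epochs in (10, 100)]
best = min(grid, key=lambda hp: mse(train(train_x, train_y, *hp), val_x, val_y))
best_w = train(train_x, train_y, *best)
print("best hyperparameters:", best, "learned parameter w:", round(best_w, 3))
```

Real platforms automate exactly this kind of search (and smarter variants such as Bayesian optimization) over far larger grids and models.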

The different needs of big data, ML engineering, model management, operationalization

At the end of the day, ML project managers simply want tools to make their jobs more efficient and effective. But not all ML projects are the same. Some are focused on conversational systems, while others are focused on recognition or predictive analytics. Yet others are focused on reinforcement learning or autonomous systems. Furthermore, these models can be deployed (or operationalized) in various ways. Some models might reside in the cloud or on on-premises servers, while others are deployed to edge devices or run in offline batch modes. These differences in ML application, deployment, and needs between data scientists, engineers, and ML developers make the concept of a single ML platform not particularly feasible. It would be a jack of all trades and master of none.

As such, we see four different platforms emerging: one focused on the needs of data scientists and model builders; another focused on big data management and data engineering; a third focused on model scaffolding and building systems that interact with models; and a fourth focused on managing the model lifecycle, or ML Ops. The winners will focus on building out capabilities for each of these parts.

The Four Environments of AI (Source: Cognilytica)

The winners in the data science platform race will be the ones that simplify ML model creation, training, and iteration. They will make it quick and easy for companies to move from unintelligent systems to ones that leverage the power of ML to solve problems that previously could not be addressed by machines. Data science platforms that don't enable ML capabilities will be relegated to non-ML data science tasks. Likewise, big data platforms that inherently enable data engineering capabilities will be winners. Similarly, application development tools will need to treat machine learning models as first-class participants in their lifecycle, just like any other form of technology asset. Finally, the space of ML operations (ML Ops) is just now emerging and will no doubt be big news in the next few years.

When a vendor tells you they have an AI or ML platform, the right response is to ask, "Which one?" As you can see, there isn't just one ML platform, but rather different ones that serve very different needs. Make sure you don't get caught up in these vendors' marketing hype: compare what they say they have with what they actually have.


Israelis develop ‘self-healing’ cars powered by machine learning and AI – The Jerusalem Post

Even before autonomous vehicles become a regular sight on our streets, modern cars are quickly resembling sophisticated computers on wheels.

Increasingly connected vehicles come with as many as 150 million lines of code, far exceeding the 145,000 lines of code required to land Apollo 11 on the Moon in 1969. Self-driving cars could require up to one billion lines of code.

For manufacturers, passengers and repair shops alike, vehicles running on software rather than just machines represent an unprecedented world of highly complex mobility. Checking the engine, tires and brakes to find a fault will certainly no longer suffice.

Seeking to build trust in the new generation of automotive innovation, Tel Aviv-based start-up Aurora Labs has developed software for what it calls the "self-healing car": a proactive and remote system to detect and fix potential vehicle malfunctions, and update and validate in-car software without any downtime.

(From left) Aurora Labs co-founder & CEO Zohar Fox; co-founder & COO Ori Lederman; and EVP Marketing Roger Ordman (Credit: Aurora Labs)

"The automotive industry is facing its biggest revolution to date," Aurora Labs co-founder and chief operating officer Ori Lederman told The Jerusalem Post. "The most critical aspect of all that sophistication and software coming into the car is whether you can trust it, even before you hand over complete autonomy to the car. It poses a lot of challenges to car-makers."

New challenges, Lederman added, include whether software problems can be detected after selling the vehicle, whether problems can be solved safely and securely, and whether defects can be solved without interrupting car use. In 2018, some eight million vehicles were recalled in the United States due to software-based defects alone.

"The human body can detect when something is not quite right before you pass out," said executive vice president of marketing Roger Ordman. "The auto-immune system indicates something is wrong and what can be done to fix it: raise your temperature or white blood count. Sometimes the body can do a self-fix, and sometimes that's not enough and needs an external intervention. Our technology has the same kind of approach: detecting if something has started to go wrong before it causes a catastrophic failure, indicating exactly where that problem is, doing something to fix it, and keeping it running smoothly."

The company's Line-Of-Code Behavior technology, powered by machine learning and artificial intelligence, creates a deep understanding of what software is installed on over 100 vehicle Engine Control Units (ECUs), and the relationship between them. In addition to detecting software faults, the technology can enable remote, over-the-air software updates without any downtime.

Similar to silent updates automatically implemented by smartphone applications, Ordman added, car manufacturers will be able to update and continuously improve software running on connected vehicles. Of course, manufacturers will be required to meet stringent regulations, developed by bodies including the UNECE, concerning cybersecurity and over-the-air updates.

"When we joined forces and started developing the idea, we knew our technology was applicable to any connected, smart device or Internet of Things device," said Lederman. "The first vertical we wanted to start with is the one that needs us the most, and the biggest market. The need for detecting, managing, recovering and being transparent about software is by far the largest need in the automotive industry as they move from mechanical parts to virtual systems run by lines of code."

Rather than requiring mass recalls, Aurora Labs' self-healing software will be able to apply short-term fixes to ensure continued functionality and predictability, and subsequently implement comprehensive upgrades to the vehicle's systems.

The company, which has raised $11.5 million in fund-raising rounds since it was founded in 2016 by Lederman and CEO Zohar Fox, is currently working to implement its technology with some of the world's leading automotive industry players, including major car-makers in Germany, the United States, Korea and Japan. The fast-growing start-up also has offices in Michigan and the North Macedonian capital of Skopje, and owns a subsidiary near Munich.

"Customers ought to start being aware of how sophisticated their cars are," said Lederman. "When they buy a new car, they should want to ask the dealership whether it has the ability to detect, fix and recover, so they don't need to go to the dealership. It's something they would want to have."

Just as the safety performance of cars in Europe is ranked according to the five-star NCAP standard, Ordman believes there should be an additional star for software safety and security.

"There should be as many self-healing systems in place as possible to enable that, when inevitably something does go wrong, there are systems in place to detect and fix them and maintain uptime," said Ordman. "Does the software running in the vehicle have the right cybersecurity in place? Does it have the right recovery technologies in place? Can it continuously and safely improve over time? With these functionalities, you're not just dealing with five stars of the physical but adding another star for the software safety and security. It is about giving the trust to the consumer: I'm getting a car that will safeguard me and my family as I move forward."


The challenge in Deep Learning is to sustain the current pace of innovation, explains Ivan Vasilev, machine learning engineer – Packt Hub

If we talk about recent breakthroughs in the software community, machine learning and deep learning are major contenders: their usage, adoption, and experimentation have increased exponentially. Especially in the areas of computer vision, speech, and natural language processing and understanding, deep learning has made unprecedented progress. GANs, variational autoencoders, and deep reinforcement learning are also creating impressive AI results.

To know more about the progress of deep learning, we interviewed Ivan Vasilev, a machine learning engineer and researcher based in Bulgaria. Ivan is also the author of the book Advanced Deep Learning with Python. In this book, he teaches advanced deep learning topics like attention mechanism, meta-learning, graph neural networks, memory augmented neural networks, and more using the Python ecosystem. In this interview, he shares his experiences working on this book, compares TensorFlow and PyTorch, as well as talks about computer vision, NLP, and GANs.

Computer Vision and Natural Language Processing are two popular areas where a number of developments are ongoing. In his book, Advanced Deep Learning with Python, Ivan delves deep into these two broad application areas. "One of the reasons I emphasized computer vision and NLP," he clarifies, "is that these fields have a broad range of real-world commercial applications, which makes them interesting for a large number of people."

The other reason for focusing on computer vision, he says, is the natural (or human-driven, if you wish) progress of deep learning. "One of the first modern breakthroughs was in 2012, when a solution based on a convolutional network won that year's ImageNet competition by a large margin compared to any previous algorithm. Thanks in part to this impressive result, interest in the field was renewed and brought many other advances, including solving complex tasks like object detection and new generative models like generative adversarial networks. In parallel, the NLP domain saw its own wave of innovation with things like word vector embeddings and the attention mechanism."

There are two popular machine learning frameworks currently on par with each other: TensorFlow and PyTorch (both had new releases in the past month, TensorFlow 2.0 and PyTorch 1.3). There is an ongoing debate that pitches TensorFlow and PyTorch as rival technologies and communities. Ivan does not think there is a clear winner between the two libraries, which is why he has included them both in the book.

He explains, "On the one hand, it seems that the API of PyTorch is more streamlined and the library is more popular with the academic community. On the other hand, TensorFlow seems to have better cloud support and enterprise features. In any case, developers will only benefit from the competition. For example, PyTorch has demonstrated the importance of eager execution, and TensorFlow 2.0 now has much better support for eager execution, to the point that it is enabled by default. In the past, TensorFlow had internal competing APIs, whereas now Keras is promoted as its main high-level API. On the other hand, PyTorch 1.3 has introduced experimental support for iOS and Android devices and quantization (computation operations with reduced precision for increased efficiency)."

Ivan discusses his venture into the field of financial machine learning, being the author of an ML-oriented, event-based algorithmic trading library. However, financial machine learning (and stock price prediction in particular) is usually not the focus of mainstream deep learning research. One reason, Ivan states, is that "the field isn't as appealing as, say, computer vision or NLP. At first glance, it might even appear gimmicky to predict stock prices."

He adds, "Another reason is that quality training data isn't freely available and can be quite expensive to obtain. Even if you have such data, pre-processing it in an ML-friendly way is not a straightforward process, because the noise-to-signal ratio is a lot higher compared to images or text. Additionally, the data itself could have huge volume."

However, he counters, using ML in finance could have benefits besides the obvious (getting rich by trading stocks): "The participation of ML algorithms in the stock trading process can make the markets more efficient. This efficiency will make it harder for market imbalances to stay unnoticed for long periods of time. Such imbalances will be corrected early, thus preventing painful market corrections, which could otherwise lead to economic recessions."

Ivan has also given special emphasis to generative adversarial networks (GANs) in his book. Although extremely useful, GANs have recently been used to generate high-dimensional fake data that looks very convincing. Many researchers and developers have raised concerns about the negative repercussions of using GANs and wondered whether it is even possible to prevent and counter their misuse or abuse.

Ivan acknowledges that GANs may have unintended outcomes, but that shouldn't be the sole reason to discard them. He says, "Besides great entertainment value, GANs have some very useful applications and could help us better understand the inner workings of neural networks. But as you mentioned, they can be used for nefarious purposes as well. Still, we shouldn't discard GANs (or any algorithm with similar purpose) because of this, if only because the bad actors won't discard them. I think the solution to this problem lies beyond the realm of deep learning. We should strive to educate the public on the possible adverse effects of these algorithms, but also on their benefits. In this way we can raise awareness of machine learning and spark an honest debate about its role in our society."

Awareness and ethics go hand in hand. Ethics is one of the most important topics to emerge in machine learning and artificial intelligence over the last year. Ivan agrees that ethics and algorithmic bias in machine learning are of extreme importance. He says, "We can view the potential harmful effects of machine learning as either intentional or unintentional. For example, the bad actors I mentioned when we discussed GANs fall into the intentional category. We can limit their influence by striving to keep the cutting edge of ML research publicly available, thus denying them any unfair advantage of potentially better algorithms. Fortunately, this is largely the case now and hopefully will remain that way in the future."

"I don't think algorithmic bias is necessarily intentional," he says. "Instead, I believe that it is the result of the underlying injustices in our society, which creep into ML through either skewed training datasets or unconscious bias of the researchers. Although the bias might not be intentional, we still have a responsibility to put in a conscious effort to eliminate it."

"The field of ML exploded (in a good sense) a few years ago," says Ivan, "thanks to a combination of algorithmic and computer hardware advances. Since then, researchers have introduced new, smarter, and more elegant deep learning algorithms. But history has shown that AI can generate such great hype that even the impressive achievements of the last few years could fall short of the expectations of the general public."

So, in a broader sense, the challenge facing ML is to sustain the current pace of innovation. In particular, current deep learning algorithms fall short in some key areas of intelligence where humans excel. For example, neural networks have a hard time learning multiple unrelated tasks. They also tend to perform better when working with unstructured data (like images) than with structured data (like graphs).

Another issue is that neural networks sometimes struggle to remember long-distance dependencies in sequential data. Solving these problems might require new fundamental breakthroughs, and it's hard to estimate when such one-time events will occur. But even at its current level, ML can fundamentally change our society (hopefully for the better). For instance, in the next 5 to 10 years, we could see the widespread introduction of fully autonomous vehicles, which have the potential to transform our lives.

This is just a snapshot of some of the important focus areas in the deep learning ecosystem. You can check out more of Ivan's work in his book Advanced Deep Learning with Python, in which you will investigate and train CNN models with GPU-accelerated libraries like TensorFlow and PyTorch, and apply deep neural networks to state-of-the-art domains like computer vision, NLP, GANs, and more.

Ivan Vasilev started working on the first open source Java Deep Learning library with GPU support in 2013. The library was acquired by a German company, where he continued its development. He has also worked as a machine learning engineer and researcher in the area of medical image classification and segmentation with deep neural networks. Since 2017 he has focused on financial machine learning. He is working on a Python based platform, which provides the infrastructure to rapidly experiment with different ML algorithms for algorithmic trading. You can find him on Linkedin and GitHub.

Kaggle's Rachel Tatman on what to do when applying deep learning is overkill

Brad Miro talks TensorFlow 2.0 features and how Google is using it internally

François Chollet, creator of Keras, on TensorFlow 2.0 and Keras integration, tricky design decisions in deep learning, and more


Automation And Machine Learning: Transforming The Office Of The CFO – Forbes

By Steve Dunne, Staff Writer, Workday

In a recent McKinsey survey, only 13 percent of CFOs and other senior business executives polled said their finance organizations use automation technologies, such as robotic process automation (RPA) and machine learning. What's more, when asked how much return on investment the finance organization had generated from digitization and automation in the past 12 months, only 5 percent said it was a substantial return; the more common response was modest or minimal returns.

While that number may seem low right now, automation is coming to the finance function, and it will play a crucial role in furthering the CFO's position in the C-suite. Research suggests corporate finance teams spend about 80 percent of their time manually gathering, verifying, and consolidating data, leaving only about 20 percent for higher-level tasks, such as analysis and decision-making.

In its truest form, RPA will unleash a new wave of digital transformation in corporate finance. Instead of programming software to perform certain tasks automatically, RPA uses software robots to process transactions, monitor compliance, and audit processes automatically. This could slash the number of required manual tasks, helping to drive out errors and increase the efficiency of finance processes, handing time back to the CFO function to be more strategic.

According to the report "Companies Using AI Will Add More Jobs Than They Cut," companies that had automated at least 70 percent of their business processes generated more revenue than those that had automated less than 30 percent. In fact, the highly automated group was six times more likely to have revenue growth of 15 percent per year or more.

In the right hands, automation and machine learning can be a fantastic combination for CFOs to transform the finance function, yet success will depend on automating the right tasks. The first goal for a finance team should be to automate the repetitive, transactional tasks that consume the majority of its time. Doing this will free finance up to be more of a strategic advisor to the business. An Adaptive Insights survey found that over 40 percent of finance leaders say the biggest driver behind automation within their organizations is the demand for faster, higher-quality insights from executives and operational stakeholders.

Accenture's global talent and organization lead for financial services, Andrew Woolf, says the challenge for businesses is "to pivot their workforce to enter an entirely new world where human ingenuity meets intelligent technology to unlock new forms of growth."

Transaction processing is one of the major barriers preventing finance from achieving transformation and the ultimate goal of delivering a better business partnership. It's not surprising that it's the first port of call for CFOs looking toward automation.

"RPA combined with machine learning provides finance leaders with a great way of optimising the way they manage their accounting processes. This has been a painful area of finance for such a long time and can have a direct impact on an organization's cash flow," says Tim Wakeford, vice president of financials product strategy, EMEA, at Workday. "Finance spends a huge amount of time sifting through invoices and other documentation to manually correct errors in the general ledger, while machine learning could automate this, helping to intelligently match payments with invoices."

Machine learning can also mitigate financial risk by flagging suspect payments to vendors in real time. Internal and external fraud costs businesses billions of dollars each year. The current mechanism for mitigating such fraud is to rely on manual audits of a sample of invoices. This means looking at just a fraction of total payments, a proverbial needle-in-the-haystack approach to identifying fraud and mistakes. Machine learning can vastly increase the volume of invoices that can be checked and analyzed, helping ensure that organizations are not making duplicate or fraudulent payments.
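To make the idea concrete, here is a minimal sketch of two automated invoice checks of the kind described above: exact duplicate detection and a statistical flag for amounts far outside a vendor's usual range. This is illustrative only; the data, vendor names, and threshold are invented, and it is not Workday's actual system, which would use far richer features and learned models.

```python
# Two simple automated checks over a payment ledger: exact duplicates,
# and a leave-one-out z-score flag for unusually large payments.
from statistics import mean, stdev

payments = [
    {"vendor": "Acme", "invoice": "A-100", "amount": 1200.0},
    {"vendor": "Acme", "invoice": "A-101", "amount": 1150.0},
    {"vendor": "Acme", "invoice": "A-100", "amount": 1200.0},  # duplicate
    {"vendor": "Acme", "invoice": "A-102", "amount": 1250.0},
    {"vendor": "Acme", "invoice": "A-103", "amount": 9800.0},  # outlier
]

def find_duplicates(rows):
    """Flag any payment whose (vendor, invoice, amount) was already seen."""
    seen, dupes = set(), []
    for row in rows:
        key = (row["vendor"], row["invoice"], row["amount"])
        if key in seen:
            dupes.append(row)
        seen.add(key)
    return dupes

def find_outliers(rows, threshold=2.0):
    """Flag payments far from the vendor's other payments, measured in
    standard deviations computed without the payment under test."""
    outliers = []
    for row in rows:
        others = [r["amount"] for r in rows
                  if r["vendor"] == row["vendor"] and r is not row]
        if len(others) < 2:
            continue  # not enough history to judge
        mu, sigma = mean(others), stdev(others)
        if sigma and abs(row["amount"] - mu) / sigma > threshold:
            outliers.append(row)
    return outliers

dupes = find_duplicates(payments)
outliers = find_outliers(payments)
print(f"{len(dupes)} duplicate(s), {len(outliers)} outlier(s) flagged")
```

Unlike a sampled manual audit, checks like these run over every invoice; a production system would add learned models for vendor behavior, invoice matching, and fraud patterns.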

"Ensuring compliance with federal and international regulations is a critical issue for financial institutions, especially given the increasingly strict laws targeting money laundering and the funding of terrorist activities," explains David Axson, CFO strategies global lead at Accenture Strategy. At one large global bank, up to 10,000 staffers were responsible for identifying suspicious transactions and accounts that might indicate such illegal activity. To help in those efforts, the bank implemented an AI system that deploys machine-learning algorithms to segment the transactions and accounts and set the optimal thresholds for alerting people to potential cases that might require further investigation.

Read the second part of this story, "How Automation and Machine Learning Are Reshaping the Finance Function," which takes a closer look at how automation and machine learning can drive change.

This story was originally published on the Workday blog.


Original post:

Automation And Machine Learning: Transforming The Office Of The CFO - Forbes

Machine learning results: pay attention to what you don’t see – STAT

Even as machine learning and artificial intelligence are drawing substantial attention in health care, overzealousness for these technologies has created an environment in which other critical aspects of the research are often overlooked.

There's no question that the increasing availability of large data sources and off-the-shelf machine learning tools offers tremendous resources to researchers. Yet a lack of understanding about the limitations of both the data and the algorithms can lead to erroneous or unsupported conclusions.

Given that machine learning in the health domain can have a direct impact on people's lives, broad claims emerging from this kind of research should not be embraced without serious vetting. Whether conducting health care research or reading about it, make sure to consider what you don't see in the data and analyses.


One key question to ask is: Whose information is in the data and what do these data reflect?

Common forms of electronic health data, such as billing claims and clinical records, contain information only on individuals who have encounters with the health care system. But many individuals who are sick don't or can't see a doctor or other health care provider, and so are invisible in these databases. This may be true for individuals with lower incomes or those who live in rural communities with rising hospital closures. As University of Toronto machine learning professor Marzyeh Ghassemi said earlier this year:

Even among patients who do visit their doctors, health conditions are not consistently recorded. Health data also reflect structural racism, which has devastating consequences.

Data from randomized trials are not immune to these issues. As a ProPublica report demonstrated, black and Native American patients are drastically underrepresented in cancer clinical trials. This is important to underscore given that randomized trials are frequently highlighted as superior in discussions about machine learning work that leverages nonrandomized electronic health data.

In interpreting results from machine learning research, it's important to be aware that the patients in a study often do not represent the population we wish to draw conclusions about, and that the information collected is far from complete.

It has become commonplace to evaluate machine learning algorithms based on overall measures like accuracy or area under the curve. However, a single evaluation metric cannot capture the complexity of performance. Be wary of research that claims to be ready for translation into clinical practice but only presents a leaderboard of tools ranked by one metric.

As an extreme illustration, an algorithm designed to predict a rare condition found in only 1% of the population can be extremely accurate by labeling all individuals as not having the condition. This tool is 99% accurate, but completely useless. Yet, it may outperform other algorithms if accuracy is considered in isolation.
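That illustration is easy to verify in a few lines of Python. The `accuracy` and `recall` functions below are the textbook definitions, not any particular library's implementation:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred):
    """Fraction of true cases the classifier actually finds."""
    found = sum(p for t, p in zip(y_true, y_pred) if t == 1)
    return found / max(1, sum(y_true))

# 1,000 patients, 1% prevalence: 10 actually have the condition.
y_true = [1] * 10 + [0] * 990
# A "classifier" that simply labels everyone as negative.
y_pred = [0] * 1000

print(accuracy(y_true, y_pred))  # → 0.99  (looks excellent)
print(recall(y_true, y_pred))    # → 0.0   (finds no one with the condition)
```

The 99% accuracy and 0% recall of the same trivial rule is exactly why a single headline metric can mislead.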

What's more, algorithms are frequently not evaluated on multiple hold-out samples via cross-validation. Using only a single hold-out sample, as many published papers do, often yields a high-variance estimate and misleading performance metrics.
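A rough sketch of why multiple hold-out samples help, using toy data and a fixed prediction rule rather than any real model. The single hold-out score is one arbitrary draw; the spread across the five folds makes the estimate's variance visible:

```python
import random
import statistics

random.seed(0)

# Toy data: the label noisily follows the sign of a single feature.
data = []
for _ in range(200):
    x = random.gauss(0, 1)
    y = int(x + random.gauss(0, 1) > 0)
    data.append((x, y))

def evaluate(test_set):
    """Accuracy of the fixed rule 'predict 1 when the feature is positive'."""
    return sum((x > 0) == bool(y) for x, y in test_set) / len(test_set)

# Single hold-out sample: one arbitrary 20% slice of the data.
single_estimate = evaluate(data[:40])

# 5-fold cross-validation: every point is held out exactly once,
# and the spread of the fold scores exposes the estimate's variance.
fold_scores = [evaluate(data[i::5]) for i in range(5)]

print(round(single_estimate, 3),
      round(statistics.mean(fold_scores), 3),
      round(statistics.stdev(fold_scores), 3))
```

Reporting the mean and spread across folds, instead of one slice's score, is the minimal safeguard the paragraph above calls for.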

Beyond examining multiple overall metrics of performance for machine learning, we should also assess how tools perform in subgroups as a step toward avoiding bias and discrimination. For example, artificial intelligence-based facial recognition software performed poorly when analyzing darker-skinned women. Many measures of algorithmic fairness center on performance in subgroups.
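Computing per-subgroup performance requires nothing exotic. A minimal sketch with hypothetical group labels, showing how a healthy overall score can hide a subgroup's poor one:

```python
def accuracy(pairs):
    """Accuracy over (true_label, predicted_label, group) triples."""
    return sum(t == p for t, p, _ in pairs) / len(pairs)

# Hypothetical evaluation results, each tagged with a subgroup label.
results = [
    (1, 1, "A"), (0, 0, "A"), (1, 1, "A"), (0, 0, "A"),
    (1, 0, "B"), (1, 0, "B"), (0, 0, "B"), (1, 1, "B"),
]

overall = accuracy(results)
by_group = {g: accuracy([r for r in results if r[2] == g])
            for g in {"A", "B"}}

print(overall, by_group)  # overall 0.75 hides group B's 0.5 accuracy
```

Stratifying every reported metric this way is a first step toward the fairness checks the paragraph describes; formal fairness measures build on exactly these per-group quantities.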

Bias in algorithms has largely not been a focus in health care research. That needs to change. A new study found substantial racial bias against black patients in a commercial algorithm used by many hospitals and other health care systems. Other work developed algorithms to improve fairness for subgroups in health care spending formulas.

Subjective decision-making pervades research. Who decides what the research question will be, which methods will be applied to answering it, and how the techniques will be assessed all matter. Diverse teams are needed, and not only because they yield better results. As Rediet Abebe, a junior fellow of Harvard's Society of Fellows, has written, "In both private enterprise and the public sector, research must be reflective of the society we're serving."

The influx of so-called digital data that's available through search engines and social media may be one resource for understanding the health of individuals who do not have encounters with the health care system. There have, however, been notable failures with these data. But there are also promising advances using online search queries at scale where traditional approaches like conducting surveys would be infeasible.

Increasingly granular data are now becoming available thanks to wearable technologies such as Fitbit trackers and Apple Watches. Researchers are actively developing and applying techniques to summarize the information gleaned from these devices for prevention efforts.

Much of the published clinical machine learning research, however, focuses on predicting outcomes or discovering patterns. Although machine learning for causal questions in health and biomedicine is a rapidly growing area, we don't see a lot of this work yet because it is new. Recent examples of it include the comparative effectiveness of feeding interventions in a pediatric intensive care unit and the effectiveness of different types of drug-eluting coronary artery stents.

Understanding how the data were collected and using appropriate evaluation metrics will also be crucial for studies that incorporate novel data sources and those attempting to establish causality.

In our drive to improve health with (and without) machine learning, we must not forget to look for what is missing: What information do we not have about the underlying health care system? Why might an individual or a code be unobserved? What subgroups have not been prioritized? Who is on the research team?

Giving these questions a place at the table will be the only way to see the whole picture.

Sherri Rose, Ph.D., is associate professor of health care policy at Harvard Medical School and co-author of the first book on machine learning for causal inference, Targeted Learning (Springer, 2011).


Qualitest Acquires AI and Machine Learning Company AlgoTrace to Expand Its Offering – PRNewswire

LONDON, Dec. 12, 2019 /PRNewswire/ --Qualitest, the world's largest software testing and quality assurance company, has acquired AI and machine learning company AlgoTrace for an undisclosed amount. This acquisition marks the first step of Qualitest's growth strategy following an investment from Bridgepoint earlier this year.

The acquisition will allow Qualitest to radically expand the number of AI-powered testing solutions available to clients, as well as develop its capabilities in helping companies test and launch new AI-powered solutions with greater confidence and speed. As software grows in complexity and the pressure to launch faster and more frequently increases, companies that do not use AI to enhance their quality assurance will, according to Gartner, be at a significant disadvantage.

AlgoTrace's machine learning tools help brands answer business critical questions as they launch new software: what, where, when, and how to test and in what order to ensure consistently high quality. With multiple clients already using Qualitest's suite of AI-testing tools, this expansion of capabilities creates opportunities not only for new Qualitest clients, but also allows for the growth of existing relationships with current customers around the world.

Qualitest began working with the AlgoTrace team more than a year ago, with AlgoTrace's AI platform powering Qualitest's market-leading test predictor tool, which applies pioneering autonomous AI capabilities and predictive modeling to unstructured data without the need for code or complex interfaces. Following multiple successful joint projects, the teams saw that, together, they would be able to apply AlgoTrace's powerful prediction engine in a variety of ways across the software development lifecycle to improve quality and speed to market.

Qualitest's AI-testing solutions serve two main goals, both focused on increasing confidence and assurance. First, they assist and enhance quality assurance efforts, more rapidly giving brands high confidence that software releases will go smoothly. Second, they help companies that use AI in their own offerings gain confidence that their algorithms are generating correct, unbiased results.

Ron Ritter, CEO at AlgoTrace, said: "We are thrilled to be joining with Qualitest. Following successful implementations with the company in the past, we have complete faith that we will help Qualitest change the testing paradigm forever, enhancing their quality engineering with machine learning. While there is a lot of hype surrounding AI, we're deploying real, hard-nosed and practical tools that significantly change the rules."

Norm Merritt, CEO of Qualitest, said: "Applying AI to quality engineering is a perfect fit. Just as software becomes increasingly complex, the companies producing it are under competitive pressure to increase the speed and frequency of their rollouts. AI is the only way companies can scale software testing and quality engineering, and the AlgoTrace team have shown that they understand this. In our view, companies that do not use AI to improve quality will be at a significant disadvantage."

Aviram Shotten, Chief Knowledge and Innovation Officer at Qualitest, said: "Ron and his team are just the kind of innovators we love: smart, customer-obsessed and attacking a big market problem with cutting edge technology. This acquisition will not only help us accelerate AI adoption within quality engineering by providing a holistic solution to our clients, it provides an avenue for our teams to access AlgoTrace's unique expertise to build new models, tools and solutions to improve how technology is developed, tested and deployed."

About Qualitest

Qualitest is the world's largest independent managed services provider of quality assurance and testing solutions. As a strategic partner, Qualitest helps brands move beyond functional testing to adopt new innovations such as automation, AI, and crowd-sourced UX testing. It leverages its domain expertise across industries including financial services, media and entertainment, retail, consumer goods, technology, gaming, and telecom, among others. Qualitest's global service delivery platform spans the United States, Israel, the UK, India and Romania. To learn more about Qualitest, visit http://www.qualitestgroup.com.

About AlgoTrace

AlgoTrace was founded in 2016 as a data science company focused on building automated machine learning tools. It builds the tools data analysts and data scientists need to simplify and accelerate prediction modelling. Its software helps organizations make the right decisions based on clear inputs grounded in the facts and patterns discovered by its prediction engine. Its mission is to empower data scientists and analysts to create accurate and stable prediction models faster than ever.

SOURCE Qualitest


Industry Call to Define Universal Open Standards for Machine Learning Operations and Governance – MarTech Series

Defining open standards is essential for deploying and governing machine learning models at scale for enterprise businesses

Cloudera, the enterprise data cloud company, asks for industry participation in defining universal open standards for machine learning operations (MLOps) and machine learning model governance. By contributing to these standards, the community can help companies make the most of their machine learning platforms and pave the way for the future of MLOps.

"Machine learning models are already part of almost every aspect of our lives, from automating internal processes to optimizing the design, creation, and marketing behind virtually every product consumed," said Nick Patience, founder and research vice president, software at 451 Research. "As ML proliferates, the management of those models becomes challenging, as they have to deal with issues such as model drift and repeatability that affect productivity, security and governance. The solution is to create a set of universal, open standards so that machine learning metadata definitions, monitoring, and operations become normalized, the way metadata and data governance are standardized for data pipelines."


"At Cloudera, we don't want to solve the challenge of deploying and governing machine learning models at scale only for our customers; we agree it needs to be addressed at the industry level. Apache Atlas is the best-positioned framework to integrate data management and explainable, interoperable, and reproducible MLOps workflows," said Doug Cutting, chief architect at Cloudera. "The Apache Atlas project fits all the needs for defining ML metadata objects and governance standards. It is open source, extensible, and has pre-built governance features."
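As a rough illustration only — this is not Apache Atlas's actual type system, and every field name here is an assumption — the kind of model metadata such a standard would normalize might look like:

```python
from dataclasses import dataclass, field, asdict
from typing import Dict, List

@dataclass
class ModelMetadata:
    """Illustrative metadata record for a deployed ML model.

    Fields cover the concerns the standard targets: lineage
    (which data trained the model), reproducibility (version),
    and governance (owner, recorded metrics, tags).
    """
    name: str
    version: str
    training_data_uri: str       # lineage: dataset that produced the model
    features: List[str]
    metrics: Dict[str, float]    # evaluation results recorded at training time
    owner: str
    tags: List[str] = field(default_factory=list)

meta = ModelMetadata(
    name="fraud-scorer",
    version="1.4.0",
    training_data_uri="s3://example-bucket/payments/2019-11",
    features=["amount", "vendor_id", "country"],
    metrics={"auc": 0.91},
    owner="risk-ml-team",
    tags=["weekly-retrain"],
)
print(asdict(meta)["metrics"]["auc"])
```

Normalizing definitions like these across platforms is what would let monitoring and governance tooling interoperate the way data-pipeline governance tools already do.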

Industry Call for Standards

"Open source and open APIs have powered the growth of data science in business. But deploying and managing models in production is often difficult because of technology sprawl and siloing," said Peter Wang, CEO of Anaconda. "Open standards for ML operations can reduce the clutter of proprietary technologies and give businesses the agility to focus on innovation. We are very pleased to see Cloudera lead the charge for this important next step."

"As leaders in creating a machine learning-oriented data strategy across our organization, we know what is required to address the challenges of deploying ML models into production at scale and building an ML-driven business," said Daniel Stahl, SVP of model platforms at Regions Financial Corporation. "A fundamental set of model design principles enables the repeatable, transparent, and governed approaches necessary for scaling model development and deployment. We join Cloudera in calling for open industry standards for machine learning operations."


"At Santander, we focus on using machine learning to preemptively fight fraud and protect our customers," said Luan Vasconcelos Corumba, data science leader for fraud prevention at Santander Bank. "Because there are many different types of fraud across many channels, scaling and maintaining this effort requires dynamic approaches to monitoring and governing models, with sometimes hundreds of features to check on an ongoing weekly basis. We endorse these standards because establishing and implementing open universal standards for our production ML workflows will not only help us better protect our customers but will also enable our teams to drive adoption and deliver cost-effective, accurate predictions continuously."



Schneider Electric Wins ‘AI/ Machine Learning Innovation’ and ‘Edge Project of the Year’ at the 2019 SDC Awards – PRNewswire

LONDON, Dec. 12, 2019 /PRNewswire/ -- Schneider Electric, the leader in digital transformation of energy management and automation, has today announced that it has won two categories at the 2019 SDC Awards: 'AI/Machine Learning Innovation of the Year' and 'Edge Project of the Year.'

"I'm delighted to accept these prestigious awards on behalf of Schneider Electric," said Marc Garner, Vice President, Secure Power Division UK&I. "As the industry's next-generation data centre infrastructure management (DCIM) platform, EcoStruxure IT leverages AI and ML technologies to proactively prevent downtime in data centre and edge computing environments. The software also provides end-users and partners with increased visibility that streamlines servicing and improves both operational and energy efficiency, which was instrumental for the Wellcome Sanger Institute."

The award for 'AI/Machine Learning Innovation of the Year' was presented to Schneider Electric for their next-generation DCIM platform, EcoStruxure IT, which brings secure, vendor-agnostic, wherever-you-go monitoring for all IoT-enabled physical infrastructure assets. With the ability to integrate securely with other manufacturers' applications, the software delivers complete visibility into today's data centre and edge environments, from anywhere, at any time and on any device via the cloud.

In collaboration with Elite Channel Partner EfficiencyIT (EIT), Schneider Electric was awarded a second accolade, 'Edge Project of the Year', for work completed for prestigious customer the Wellcome Sanger Institute. The Wellcome Sanger Institute is one of the world leaders in genomic research, and its work addresses some of the biggest challenges in human disease, from cancer and malaria to measles and cholera.

Essential to the research function are the Institute's DNA sequencing machines, which produce terabytes of raw information each day. Due to the vast quantity of data, the criticality of local applications and the need for ultra-low latency, cloud hosting would present them with a number of complications and incur significant connectivity costs. The Institute, therefore, hosts Europe's largest, on-premise genomic data centre and uses its high-performance processing capabilities to store and analyse data in real-time.

Under the guidance of EIT, Sanger has deployed Schneider Electric's EcoStruxure IT to proactively manage the data centre, and to improve energy efficiency and resiliency. The campus has issues with power reliability, and any outage could result in loss of important genomic data and costly replacement of sequencing chemicals. Therefore, to protect the laboratory processes from downtime, the Institute has installed individual Schneider Electric Smart-UPS uninterruptible power supplies on each of its sequencers.

"EcoStruxure IT was selected due to its open-based architecture, which allows us to integrate with the technology already in place on campus, and because we considered it best-in-class for the Institute's requirements," said Simon Binley, Data Centre Manager, Wellcome Sanger Institute. "The platform provides us with increased visibility into the entire data centre environment and enables us to improve energy efficiency, meaning in time, more funding will be available for critical research that will benefit all of humankind."

To find out more about Schneider Electric's next-generation DCIM platform, EcoStruxure IT, please click here.

About Schneider Electric

At Schneider, we believe access to energy and digital is a basic human right. We empower all to make the most of their energy and resources, ensuring Life Is On everywhere, for everyone, at every moment.

We provide energy and automation digital solutions for efficiency and sustainability. We combine world-leading energy technologies, real-time automation, software and services into integrated solutions for Homes, Buildings, Data Centers, Infrastructure and Industries.

We are committed to unleash the infinite possibilities of an open, global, innovative community that is passionate about our Meaningful Purpose, Inclusive and Empowered values.

https://www.se.com/uk/en/

Hashtags: #LifeIsOn #EcoStruxure #edgecomputing #DCIM

SOURCE Schneider Electric


Machine Learning Answers: If Nvidia Stock Drops 10% A Week, What's The Chance It'll Recoup Its Losses In A Month? – Forbes

[Photo: Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., speaks at the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada. Photographer: David Paul Morris/Bloomberg]

We found that if Nvidia stock drops 10% or more in a week (5 trading days), there is a solid 36% chance it'll recover 10% or more over the next month (about 20 trading days).

Nvidia stock has seen significant volatility this year. While the company has been impacted by the broader correction in the semiconductor space and the trade war between the U.S. and China, the stock is being supported by a strong long-term outlook for GPU demand amid growing applications in Deep Learning and Artificial Intelligence.

Considering the recent price swings, we started with a simple question that investors could be asking about Nvidia stock: given a certain drop or rise, say a 10% drop in a week, what should we expect for the next week? Is it very likely that the stock will recover the next week? What about the next month or a quarter? You can test a variety of scenarios on the Trefis Machine Learning Engine to calculate, if Nvidia stock dropped, what's the chance it'll rise.

For example, after a 5% drop over a week (5 trading days), the Trefis machine learning engine says the chances of an additional 5% drop over the next month are about 40%. That is quite significant, and helpful to know for someone trying to recover from a loss. Knowing what to expect for almost any scenario is powerful, and it can help you avoid rash moves. Given the recent volatility in the market and the mix of macroeconomic events (including the trade war with China and interest rate easing by the U.S. Fed), we think investors can prepare better.
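The Trefis engine itself is proprietary, but the quantity it reports is an empirical conditional probability over historical windows. A hypothetical sketch on synthetic prices — the parameters mirror the 5% scenario discussed above, and none of this reproduces Trefis's data or results:

```python
import random

random.seed(42)

# Synthetic daily closing prices standing in for real price history.
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * (1 + random.gauss(0.0005, 0.02)))

def conditional_prob(prices, drop, rebound, week=5, month=20):
    """P(next `month`-day return >= `rebound`,
    given the past `week`-day return <= `drop`)."""
    events = hits = 0
    for t in range(week, len(prices) - month):
        weekly = prices[t] / prices[t - week] - 1
        if weekly <= drop:          # conditioning event: a 5% weekly drop
            events += 1
            monthly = prices[t + month] / prices[t] - 1
            hits += monthly >= rebound
    return hits / events if events else float("nan")

# Chance of a 5% one-month gain after a 5% one-week drop.
p = conditional_prob(prices, drop=-0.05, rebound=0.05)
print(round(p, 2))
```

Counting hits among conditioning windows like this is the simplest estimator of such a probability; a production engine would add real data, overlapping-window corrections, and uncertainty estimates.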

Below, we also discuss a few scenarios and answer common investor questions:

Question 1: Does a rise in Nvidia stock become more likely after a drop?

Answer:

Not really.

Specifically, chances of a 5% rise in Nvidia stock over the next month:

= 40% after Nvidia stock drops by 5% in a week.

versus,

= 44.5% after Nvidia stock rises by 5% in a week.

Question 2: What about the other way around, does a drop in Nvidia stock become more likely after a rise?

Answer:

No.

Specifically, chances of a 5% decline in Nvidia stock over the next month:

= 40% after NVIDIA stock drops by 5% in a week

versus,

= 27% after NVIDIA stock rises by 5% in a week

Question 3: Does patience pay?

Answer:

According to data and the Trefis machine learning engine's calculations, largely yes!

Given a drop of 5% in Nvidia stock over a week (5 trading days), while there is only about a 28% chance the stock will gain 5% over the subsequent week, there is a more than 58% chance this will happen within 6 months.

The table below shows the trend:

[Table: Trefis]

Question 4: What about the possibility of a drop after a rise if you wait for a while?

Answer:

After seeing a rise of 5% over 5 days, the chances of a 5% drop in Nvidia stock are about 30% over the subsequent quarter of waiting (60 trading days). However, this chance drops slightly to about 29% when the waiting period is a year (250 trading days).

