Zeroth-Order Optimisation And Its Applications In Deep Learning – Analytics India Magazine

Deep learning applications usually involve complex optimisation problems that are difficult to solve analytically. Often the objective function is not available in closed form, which means it permits only function evaluations and no gradient evaluations. This is where Zeroth-Order optimisation comes in.

Optimisation of this kind falls into the category of Zeroth-Order (ZO) optimisation of black-box models, where explicit expressions for the gradients are hard or infeasible to obtain.

Researchers from IBM Research and the MIT-IBM Watson AI Lab discussed the topic of Zeroth-Order optimisation at the ongoing Computer Vision and Pattern Recognition (CVPR) 2020 conference.

In this article, we will take a dive into what Zeroth-Order optimisation is and how this method can be applied in complex deep learning applications.

Zeroth-Order (ZO) optimisation is a subset of gradient-free optimisation that emerges in various signal processing and machine learning applications. ZO methods are essentially the gradient-free counterparts of first-order (FO) optimisation techniques: they approximate full or stochastic gradients through function-value-based gradient estimates.

Derivative-free methods for black-box optimisation have been studied by the optimisation community for many years. However, conventional derivative-free optimisation methods have two main shortcomings: difficulty scaling to large problems and a lack of convergence-rate analysis.

ZO optimisation has three main advantages over conventional derivative-free optimisation methods.

ZO optimisation has drawn increasing attention due to its success in solving emerging signal processing, machine learning and deep learning problems. It serves as a powerful and practical tool for evaluating the adversarial robustness of deep learning systems.

According to Pin-Yu Chen, a researcher at IBM Research, Zeroth-order (ZO) optimisation achieves gradient-free optimisation by approximating the full gradient via efficient gradient estimators.
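
As a concrete illustration of this idea, here is a minimal sketch of a two-point (random-direction) gradient estimator that uses only function evaluations; the test function, smoothing parameter and step size are illustrative choices, not taken from the researchers' work.

```python
import numpy as np

def zo_gradient_estimate(f, x, mu=1e-3, num_directions=20):
    """Estimate the gradient of f at x using only function evaluations.

    Averages two-point finite differences along random unit directions u_i:
        g ~= (d / q) * sum_i [(f(x + mu * u_i) - f(x)) / mu] * u_i
    where d is the dimension of x and q is the number of directions.
    """
    d = x.size
    fx = f(x)
    grad = np.zeros(d)
    for _ in range(num_directions):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)                      # random unit direction
        grad += (f(x + mu * u) - fx) / mu * u
    return d * grad / num_directions

# Toy usage: ZO gradient descent on a quadratic we can only query, not differentiate.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient_estimate(f, x)
print(np.round(x, 2))   # should end up close to the minimiser [1, 1, 1, 1, 1]
```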

Some recent important applications include the generation of prediction-evasive black-box adversarial attacks on deep neural networks, the generation of model-agnostic explanations from machine learning systems, and the design of gradient- or curvature-regularised robust ML systems in a computationally efficient manner. In addition, the use cases span automated ML and meta-learning, online network management with limited computation capacity, parameter inference of black-box or complex systems, and bandit optimisation, in which a player receives partial feedback in the form of loss function values revealed by her adversary.

Talking about the application of ZO optimisation to the generation of prediction-evasive adversarial examples that fool DL models, the researchers stated that most studies on the adversarial vulnerability of deep learning have been restricted to the white-box setting, where the adversary has complete access to and knowledge of the target system, such as the deep neural network itself.

In most cases, the internal states, configurations and operating mechanisms of deep learning systems are not revealed to practitioners, as with the Google Cloud Vision API, for instance. This gives rise to black-box adversarial attacks, where the adversary's only mode of interaction with the system is submitting inputs and receiving the corresponding predicted outputs.

As noted above, ZO optimisation is a practical tool for evaluating the adversarial robustness of deep learning and machine learning systems, and ZO-based methods for exploring the vulnerability of deep learning to black-box adversarial attacks are able to reveal the most susceptible features.

Such methods of ZO optimisation can be as effective as state-of-the-art white-box attacks, despite only having access to the inputs and outputs of the targeted deep neural networks. ZO optimisation can also generate explanations and provide interpretations of prediction results in a gradient-free and model-agnostic manner.

The interest in ZO optimisation has grown rapidly over the last few decades. According to the researchers, ZO optimisation has been increasingly embraced for solving big data and machine learning problems when explicit expressions of the gradients are difficult to compute or infeasible to obtain.



Key Trends Framing the State of AI and ML – insideBIGDATA

In this special guest feature, Rachel Roumeliotis, Vice President of Content Strategy at O'Reilly Media, provides a deep dive into which topics and terms are on the rise in the data science industry, and also touches on important technology trends and shifts in how people learn these technologies. Rachel leads an editorial team that covers a wide variety of programming topics, ranging from data and AI, to open source in the enterprise, to emerging programming languages. She has been working in technical publishing for 14+ years, acquiring content in many areas, including software development, UX, computer security and AI.

There's no doubt that artificial intelligence continues to be swiftly adopted by companies worldwide. In just the last few years, most companies that were evaluating or experimenting with AI have moved to using it in production deployments. When organizations adopt analytic technologies like AI and machine learning (ML), it naturally prompts them to ask questions that challenge them to think differently about what they know about their business across departments, from manufacturing, production and logistics, to sales, customer service and IT. An organization's use of AI and ML tools and techniques, and the various contexts in which it uses them, will change as it gains new knowledge.

O'Reilly's learning platform is a treasure trove of information about the trends, topics, and issues tech and business leaders need to know to do their jobs and keep their businesses running. We recently analyzed the platform's usage data to take a closer look at the most popular and most-searched topics in AI and ML. Below are some of the key findings that show where the state of AI and ML is, and where it is headed.

Unrelenting Growth in AI and ML

First and foremost, our analysis found that interest in AI continues to grow. Comparing 2018 to 2019, engagement in AI increased by 58%, far outpacing growth in the much larger machine learning topic, which increased by only 5% in 2019. Aggregated together, AI and ML topics account for nearly 5% of all usage activity on the platform. While this is just slightly less than high-level, well-established topics like data engineering (8% of usage activity) and data science (5% of usage activity), interest in AI and ML grew 50% faster than data science. Data engineering actually decreased about 8% over the same period due to declines in engagement with data management topics.

We also discovered early signs that organizations are experimenting with advanced tools and methods. Of our findings, engagement with unsupervised learning content is probably one of the most interesting. In unsupervised learning, an AI algorithm is trained to look for previously undetected patterns in a data set with no pre-existing labels or classifications and minimal human supervision or guidance. Usage of unsupervised learning topics grew by 53% in 2018 and by 172% in 2019.

But what's driving this growth? While the names of its methods (clustering and association) and its applications (neural networks) are familiar, unsupervised learning isn't as well understood as its supervised counterpart, which serves as the default ML strategy for most people and most use cases. The surge in unsupervised learning activity is likely driven by more sophisticated users who face use cases not easily addressed with supervised methods, yet who lack familiarity with its uses, benefits, and requirements.
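
To make the idea concrete, a minimal unsupervised workflow might look like the sketch below; the synthetic data and cluster count are made up purely for illustration.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabelled data: there is no target column, so the algorithm must find structure itself.
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(model.labels_[:10])        # cluster assignment discovered for each point
print(model.cluster_centers_)    # the four centres the algorithm found on its own
```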

Deep Learning Spurs Interest in Other Advanced Techniques

While deep learning cooled slightly in 2019, it still accounted for 22% of all AI and ML usage. We also suspect that its success has helped spur the resurrection of a number of other disused or neglected ideas. The biggest example is reinforcement learning, which has grown more than 1,500% since 2017.

Even with engagement rates dropping by 10% in 2019, deep learning itself remains one of the most popular ML methods among companies that are evaluating AI, with many companies choosing the technique to support production use cases. It may be that engagement with deep learning topics has plateaued because most people are already actively engaging with the technology.

Natural language processing is another topic that has shown consistent growth. While its growth rate isn't huge (it grew by 15% in 2018 and 9% in 2019), natural language processing accounts for about 12% of all AI and ML usage on our platform. This is around 6x the share of unsupervised learning and 5x the share of reinforcement learning usage, despite the significant growth these two topics have experienced over the last two years.

Not all AI/ML methods are treated equally, however. For example, interest in chatbots seems to be waning, with engagement decreasing by 17% in 2018 and by 34% in 2019. Chatbots were one of the first applications of AI, and the decline probably reflects the relative maturity of that application.

The growing engagement in unsupervised learning and reinforcement learning demonstrates that organizations are experimenting with advanced analytics tools and methods. These tools and techniques open up new use cases for businesses to experiment and benefit from, including decision support, interactive games, and real-time retail recommendation engines. We can only imagine that organizations will continue to use AI and ML to solve problems, increase productivity, accelerate processes, and deliver new products and services.



Eric and Wendy Schmidt back Cambridge University effort to equip researchers with A.I. skills – CNBC

Image: Google Executive Chairman Eric Schmidt (Win McNamee | Getty Images)

Schmidt Futures, the philanthropic foundation set up by billionaires Eric and Wendy Schmidt, is funding a new program at the University of Cambridge that's designed to equip young researchers with machine learning and artificial intelligence skills that have the potential to accelerate their research.

The initiative, known as the Accelerate Program for Scientific Discovery, will initially be aimed at researchers in science, technology, engineering, mathematics and medicine. However, it will eventually be available for those studying the arts, humanities and social sciences.

Some 32 PhD students will receive machine-learning training through the program in the first year, the university said, adding that the number will rise to 160 over five years. The aim is to build a network of machine-learning experts across the university.

"Machine learning and AI are increasingly part of our day-to-day lives, but they aren't being used as effectively as they could be, due in part to major gaps of understanding between different research disciplines," Professor Neil Lawrence, a former Amazon director who will lead the program, said in a statement.

"This program will help us to close these gaps by training physicists, biologists, chemists and other scientists in the latest machine learning techniques, giving them the skills they need."

The scheme will be run by four new early-career specialists, who are in the process of being recruited.

The Schmidt Futures donation will be used partly to pay the salaries of this team, which will work with the university's Department of Computer Science and Technology and external companies.

Guest lectures will be provided by research scientists at DeepMind, the London-headquartered AI research lab that was acquired by Google.

The size of the donation from Schmidt Futures has not been disclosed.

"We are delighted to support this far-reaching program at Cambridge," said Stuart Feldman, chief scientist at Schmidt Futures, in a statement. "We expect it to accelerate the use of new techniques across the broad range of research as well as enhance the AI knowledge of a large number of early-stage researchers at this superb university."


The key differences between rule-based AI and machine learning – The Next Web

Companies across industries are exploring and implementing artificial intelligence (AI) projects, from big data to robotics, to automate business processes, improve customer experience, and innovate product development. According to McKinsey, embracing AI promises considerable benefits for businesses and economies through its contributions to productivity and growth. But with that promise comes challenges.

Computers and machines don't come into this world with inherent knowledge or an understanding of how things work. Like humans, they need to be taught that a red light means stop and green means go. So, how do these machines actually gain the intelligence they need to carry out tasks like driving a car or diagnosing a disease?

There are multiple ways to achieve AI, and essential to them all is data. Without quality data, artificial intelligence is a pipedream. Data can be manipulated in two ways to achieve AI: through rules or through machine learning. Below are some best practices to help you choose between the two methods.

Long before AI and machine learning (ML) became mainstream terms outside of the high-tech field, developers were encoding human knowledge into computer systems as rules stored in a knowledge base. These rules define all aspects of a task, typically in the form of if-then statements (if A, then do B; else if X, then do Y).

While the number of rules that have to be written depends on the number of actions you want a system to handle (for example, 20 actions means manually writing and coding at least 20 rules), rules-based systems are generally lower effort, more cost-effective and less risky, since these rules won't change or update on their own. However, rules can limit AI capabilities with rigid intelligence that can only do what it has been written to do.
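
A minimal, made-up example of such a rules-based system is sketched below: every decision path is written by hand, so the system can never do anything its author did not anticipate.

```python
# A hand-written rules-based classifier; thresholds and categories are illustrative.
def route_support_ticket(ticket: dict) -> str:
    if "refund" in ticket["subject"].lower():
        return "billing"
    elif ticket["priority"] == "high" and ticket["customer_tier"] == "enterprise":
        return "escalation"
    elif "password" in ticket["subject"].lower():
        return "account-security"
    else:
        return "general-queue"

print(route_support_ticket({"subject": "Refund request", "priority": "low",
                            "customer_tier": "standard"}))   # -> billing
```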

While a rules-based system could be considered as having fixed intelligence, a machine learning system, in contrast, is adaptive and attempts to simulate human intelligence. There is still a layer of underlying rules, but instead of a human writing a fixed set, the machine has the ability to learn new rules on its own and discard ones that aren't working anymore.

In practice, there are several ways a machine can learn, but supervised training (in which the machine is given data to train on) is generally the first step in a machine learning program. Eventually, the machine will be able to interpret, categorize, and perform other tasks with unlabeled data or unknown information on its own.
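
For contrast with the hand-written rules above, here is a minimal supervised-learning sketch in which the "rules" are learned from labelled examples; the tickets, labels and model choice are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: the mapping from text to queue is learned, not coded.
subjects = ["Refund request for order 1123", "Cannot reset my password",
            "Invoice charged twice", "Locked out of my account"]
queues   = ["billing", "account-security", "billing", "account-security"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(subjects, queues)

print(model.predict(["Please refund the duplicate charge"]))   # likely ['billing']
```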

The anticipated benefits of AI are high, so the decisions a company makes early in its execution can be critical to success. Foundational is aligning your technology choices to the underlying business goals that AI was set forth to achieve. What problems are you trying to solve, or what challenges are you trying to meet?

The decision to implement a rules-based or machine learning system will have a long-term impact on how a company's AI program evolves and scales. Here are some best practices to consider when evaluating which approach is right for your organization:

When choosing a rules-based approach makes sense:

The promises of AI are real, but for many organizations, the challenge is where to begin. If you fall into this category, start by determining whether a rules-based or ML method will work best for your organization.

This article was originally published by Elana Krasner on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve, as well as the darker implications of new tech and what we need to look out for.

Published June 13, 2020 13:00 UTC


After Effects and Premiere Pro gain more ‘magic’ machine-learning-based features – Digital Arts

By Neil Bennett | June 16, 2020

Roto Brush 2 (above) makes masking easier in After Effects, while Premiere Rush and Pro will automatically reframe and detect scenes in videos.

Adobe has announced new features coming to its video post-production apps, on the date when it was supposed to be holding its Adobe Max Europe event in Lisbon, which was cancelled due to COVID-19.

These aren't available yet, unlike the new updates to Photoshop, Illustrator and InDesign, but are destined for future releases. We would usually expect these to coincide with the IBC conference in Amsterdam in September or Adobe Max in October, though both of these are virtual events this year.

The new tools are based on Adobe's Sensei machine-learning technology. Premiere Pro will gain the ability to identify cuts in a video and create timelines with cuts or markers from them, which is ideal if you've deleted a project and only have the final output, or are working with archive material.

A second-generation version of After Effects' Roto Brush enables you to automatically extract subjects from their background. You paint over the subject in a reference frame and the tech tracks the person or object through a scene to extract them.

Premiere Rush will be gaining Premiere Pro's Auto Reframe feature, which identifies key areas of video and frames around them when changing aspect ratio, for example when creating a square version of a video for Instagram or Facebook.

Also migrating to Rush from Pro will be an Effects panel, transitions and Pan and Zoom.



Reality Of Metrics: Is Machine Learning Success Overhyped? – Analytics India Magazine

In one of the most revealing research papers written in recent times, researchers from Cornell Tech and Facebook AI quash the hype around the success of machine learning. They opine, and even demonstrate, that the apparent trend of progress is overstated: so-called cutting-edge benchmark methods perform similarly to one another, even when they are a decade apart. In other words, the authors believe that metric learning algorithms have not made spectacular progress.

In this work, the authors demonstrate the importance of assessing algorithms more diligently and show how a few practices can make reported ML success better reflect reality.

Over the past decade, deep convolutional networks have made tremendous progress. They are applied almost everywhere in computer vision, from classification to segmentation to object detection and even generative models. But has the metric evaluation carried out to track this progress been leakproof? Were the techniques employed unaffected by improvements in deep learning methods?

The goal of metric learning is to map data to an embedding space, where similar data are close together, and the rest are far apart. So, the authors begin with the notion that the deep networks have had a similar effect on metric learning. And, the combination of the two is known as deep metric learning.
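
To ground the term, a minimal deep metric learning sketch is shown below: an embedding network trained with a triplet loss so that an anchor ends up closer to a same-class "positive" than to a different-class "negative". The architecture, margin and dummy batches are illustrative, not the setups benchmarked in the paper.

```python
import torch
import torch.nn as nn

# Embedding network and triplet loss (illustrative sizes and margin).
embed = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
triplet_loss = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(embed.parameters(), lr=1e-3)

# Dummy batches standing in for anchor / positive / negative examples.
anchor, positive, negative = (torch.randn(16, 128) for _ in range(3))

optimizer.zero_grad()
loss = triplet_loss(embed(anchor), embed(positive), embed(negative))
loss.backward()
optimizer.step()
print(float(loss))
```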

The authors then examined flaws in the current research papers, including the problem of unfair comparisons and the weaknesses of commonly used accuracy metrics. They then propose a training and evaluation protocol that addresses these flaws and then run experiments on a variety of loss functions.

For instance, one benchmark paper from 2017, the authors write, used ResNet50 and then claimed huge performance gains, while the competing methods used GoogleNet, which has significantly lower initial accuracy. The authors therefore conclude that much of the performance gain likely came from the choice of network architecture, and not from the proposed method. Practices such as these can put ML in the headlines, but when we look at how many of these state-of-the-art models are actually deployed, the reality is less impressive.

The authors underline the importance of keeping the parameters constant if one has to prove that a certain new algorithm outperforms its contemporaries.

To carry out the evaluations, the authors introduce settings that cover the following:

As the paper's results plot shows, the trends in reality aren't far from previous related work, which indicates that those who claim a dramatic improvement might not have been fair in their evaluation.

If a paper attempts to explain the performance gains of its proposed method, and it turns out that those performance gains are non-existent, then its explanation must be invalid as well.

The results show that when hyperparameters are properly tuned via cross-validation, most methods perform similarly to one another. This work, believe the authors, will lead to more investigation into the relationship between hyperparameters and datasets, and the factors related to particular dataset/architecture combinations.
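
A minimal sketch of what "properly tuned via cross-validation" means in practice is given below; the dataset, model and hyperparameter grid are illustrative stand-ins, not the paper's protocol.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Tune hyperparameters by cross-validation rather than hand-picking them, so a
# comparison between methods reflects the methods and not the tuning effort.
X, y = load_digits(return_X_y=True)
grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.0001]}
search = GridSearchCV(SVC(), grid, cv=5)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```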

According to the authors, this work exposes the following:

The authors conclude that if proper machine learning practices are followed, the results of metric learning papers will better reflect reality, which can lead to better work in impactful domains like self-supervised learning.



Google has found a way for machine learning algorithms to evolve themselves – Tech Wire Asia

Machine learning is a subset of artificial intelligence (AI) that gives computer systems the ability to automatically learn and improve from experience, rather than being explicitly programmed. It's now a hugely powerful tool that has been leveraged across a raft of completely different industries for several years already.

Machine learning is now used by banks to sift through hundreds of millions of transactions to detect fraud; its predictive analytics ability has been used in agriculture to comb through seasonal farming and weather data; machine learning even helps digital marketers to plan budget forecasts and research content trends. And those are just three of the millions of examples now in use each day.

The basic premise of machine learning is, in theory, simple. An algorithm is fed a dataset and is taught to respond in a certain way the next time it encounters similar data.

But in practice, it's very difficult, and that's why there's such demand for specialists like data scientists. Creating a machine learning algorithm requires numerous steps, from gathering and preparing data to setting evaluation protocols and developing benchmark models, before there is anything near a workable machine learning algorithm ready for deployment.

Even then, they may not work well enough, and that means going back to the drawing board. Machine learning requires an extensive list of skills including computer science and programming, mathematics and statistics, data science, deep learning, and problem-solving.

In short, machine learning is out of reach for many, and yet the rapid boom and endless applications emerging mean more and more businesses now want to get hands-on, whether that's to improve products and services for customers, or to make internal processes more efficient.

That surge of interest has led many to consider off-the-shelf machine learning solutions, and that is how automated machine learning came to be: to make ML accessible to non-ML experts.

Automated machine learning, or AutoML, reduces or completely removes the need for skilled data scientists to build machine learning models. Instead, these systems allow users to provide training data as an input, and receive a machine learning model as an output.

AutoML software companies may take a few different approaches. One approach is to take the data, train every kind of model, and pick the one that works best, as sketched below. Another is to build one or more models that combine the others, which sometimes gives better results.
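
A toy version of the "train every kind of model and keep the best" approach might look like the following sketch; the dataset and candidate models are illustrative assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Score a handful of candidate models by cross-validation and keep the winner.
X, y = load_breast_cancer(return_X_y=True)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```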

Despite its name, AutoML has so far relied heavily on human input to code the instructions and programs that tell a computer what to do. Users still have to code and tune algorithms to serve as building blocks for the machine to get started. There are pre-made algorithms that beginners can use, but it's not quite automatic.

But now a team of Google computer scientists believe they have come up with a new AutoML method that can generate the best possible algorithm for a specific function, without human intervention.

The new method is dubbed AutoML-Zero, which works by continuously trying algorithms against different tasks, and improving upon them using a process of elimination, much like Darwinian evolution.

AutoML-Zero greatly reduces the human element which had heavily influenced ML programs before, with more complex programs requiring sophisticated code written by hand. Limiting human involvement also helps remove bias and potential errors, especially when multiple iterative developments are involved.

Esteban Real, a software engineer at Google Brain, Research and Machine Intelligence, and lead author of the research, explained to Popular Mechanics: "Suppose your goal is to put together a house. If you had at your disposal pre-built bedrooms, kitchens, and bathrooms, your task would be manageable, but you are also limited to the rooms you have in your inventory."

"If instead you were to start out with bricks and mortar, then your job is harder, but you have more space for creativity."

Google's AutoML-Zero instead uses basic mathematics, much like other computer programming languages. AutoML-Zero appears to involve even less human intervention than Google's own AutoML offering, Cloud AutoML.

In a basic sense, Google developers have created a system that is able to churn out 100 randomly-generated algorithms and then identify which one works best. After several generations, the algorithms become better and better until the machine finds one that performs well enough to evolve.
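
A heavily simplified toy of this evolutionary search is sketched below. The real AutoML-Zero evolves small programs built from mathematical instructions; here each "algorithm" is just a list of numbers, and the fitness function, mutation rule and population sizes are chosen purely for illustration.

```python
import random

TARGET = [0.3, -1.2, 2.0, 0.7]          # hidden goal the population must discover

def fitness(candidate):
    # Higher is better: negative squared distance to the target.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate):
    child = candidate[:]
    child[random.randrange(len(child))] += random.gauss(0, 0.1)   # tweak one "instruction"
    return child

# Start from 100 random candidates, then select and mutate generation after generation.
population = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(100)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]                                   # keep the best fifth
    population = survivors + [mutate(random.choice(survivors)) for _ in range(80)]

print([round(x, 2) for x in max(population, key=fitness)])        # drifts toward TARGET
```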

New ground can be broken here, as the surviving algorithms can be tested against standard AI problems to gauge their ability to solve new ones.

The development team is working to eliminate any remaining human bias their method retains, as well as to solve a tricky scaling issue. If they are successful, Google might be able to introduce a full-scale version that provides machine learning capabilities to small-medium enterprises (SMEs) and non-ML developers.

And crucially, those machine learning applications will be free from human input.

Joe Devanesan | @thecrystalcrown

Joe's interest in tech began when, as a child, he first saw footage of the Apollo space missions. He still holds out hope to either see the first man on the moon, or Jetsons-style flying cars in his lifetime.


Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights

Last year, the fastest-growing job title in the world was that of the machine learning (ML) engineer, and this looks set to continue for the foreseeable future. According to Indeed, the average base salary of an ML engineer in the US is $146,085, and the number of machine learning engineer openings grew by 344% between 2015 and 2018. Machine learning engineers dominate the job postings around artificial intelligence (A.I.), with 94% of job advertisements that contain AI or ML terminology targeting machine learning engineers specifically.

This demonstrates that organizations understand how profound an effect machine learning promises to have on businesses and society. AI and ML are predicted to drive a Fourth Industrial Revolution that will see vast improvements in global productivity and open up new avenues for innovation; by 2030, it's predicted that the global economy will be $15.7 trillion richer solely because of developments from these technologies.

The scale of demand for machine learning engineers is also unsurprising given how complex the role is. The goal of machine learning engineers is to deploy and manage machine learning models, which process and learn from the patterns and structures in vast quantities of data, in applications running in production, unlocking real business value while ensuring compliance with corporate governance standards.

To do this, machine learning engineers have to sit at the intersection of three complex disciplines. The first discipline is data science, which is where the theoretical models that inform machine learning are created; the second discipline is DevOps, which focuses on the infrastructure and processes for scaling the operationalization of applications; and the third is software engineering, which is needed to make scalable and reliable code to run machine learning programs.

It's the fact that machine learning engineers have to be at ease in the language of data science, software engineering, and DevOps that makes them so scarce, and their value to organizations so great. A machine learning engineer has to have a deep skill-set; they must know multiple programming languages, have a very strong grasp of mathematics, and be able to understand and apply theoretical topics in computer science and statistics. They have to be comfortable taking state-of-the-art models, which may only work in a specialized environment, and converting them into robust and scalable systems that are fit for a business environment.

As a burgeoning occupation, the role of a machine learning engineer is constantly evolving. The tools and capabilities that these engineers have in 2020 are radically different from those available in 2015, and this is set to continue to evolve as the specialism matures. One of the best ways to understand what the role of a machine learning engineer means to an organization is to look at the challenges they face in practice, and how those challenges evolve over time.

Four major challenges that every machine learning engineer has to deal with are data provenance, good data, reproducibility, and model monitoring.

Across a model's development and deployment lifecycle, there's interaction between a variety of systems and teams. This results in a highly complex chain of data from a variety of sources. At the same time, there is a greater demand than ever for data to be audited and for there to be a clear lineage of its organizational uses. This is increasingly a priority for regulators, with financial regulators now demanding that all machine learning data be stored for seven years for auditing purposes.

This not only makes the data and metadata used in models more complex, it also makes the interactions between the constituent pieces of data far more complex. Machine learning engineers therefore need to put the right infrastructure in place to ensure the right data and metadata are accessible, all while making sure everything is properly organized.


In 2016, it was estimated that the US alone lost $3.1 trillion to bad data: data that's improperly formatted, duplicated, or incomplete. People and businesses across all sectors lose time and money because of this, but in a job that requires building and running accurate models reliant on input data, these issues can seriously jeopardize projects.

IBM estimates that around 80 percent of a data scientist's time is spent finding, cleaning up, and organizing the data they put into their models. Over time, however, increasingly sophisticated error and anomaly detection programs will likely be used to comb through datasets and screen out information that is incomplete or inaccurate.
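
A minimal sketch of that routine clean-up work, with made-up column names and records, might look like this:

```python
import pandas as pd

# Toy raw extract showing the usual problems: duplicates, mixed formats, missing values.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "signup_date": ["2020-01-05", "2020-01-05", "05/01/2020", None, "2020-02-29"],
    "spend": ["100", "100", "250.5", "80", None],
})

clean = (raw.drop_duplicates()                       # remove duplicated rows
            .assign(signup_date=lambda d: pd.to_datetime(d.signup_date, errors="coerce"),
                    spend=lambda d: pd.to_numeric(d.spend, errors="coerce"))
            .dropna())                               # drop records that stay incomplete
print(clean)
```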

This means that, as time goes on and machine learning capabilities continue to develop, machine learning engineers will have more tools in their belt to clean up the information their programs use, and will thus be able to spend more of their time putting together the ML programs themselves.

Reproducibility is often defined as the ability to keep a snapshot of the state of a specific machine learning model and to reproduce the same experiment with the exact same results, regardless of time and location. This involves a great deal of complexity, given that machine learning requires reproducibility of three components: 1) code, 2) artifacts, and 3) data. If one of these changes, then the result will change.

To add to this complexity, it's also necessary to keep reproducibility of entire pipelines that may consist of two or more of these atomic steps, which introduces an exponential level of complexity. For machine learning, reproducibility is important because it lets engineers and data scientists know that the results of a model can be relied upon when it is deployed live, as they will be the same whether the model is run today or in two years.
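
As one illustrative fragment of pinning down those three components, the sketch below fixes the random seeds, fingerprints the data and records the code revision; it assumes the project lives in a git repository, and all names are made up.

```python
import hashlib
import json
import random
import subprocess

import numpy as np

SEED = 42
random.seed(SEED)
np.random.seed(SEED)                                  # pin the randomness (code)

data = np.random.rand(1000, 8)                        # stand-in for the training data
snapshot = {
    "seed": SEED,
    "data_sha256": hashlib.sha256(data.tobytes()).hexdigest(),   # fingerprint the data
    "git_commit": subprocess.run(["git", "rev-parse", "HEAD"],   # record the exact code
                                 capture_output=True, text=True).stdout.strip(),
    "numpy_version": np.__version__,                             # pin a key artifact
}
with open("run_snapshot.json", "w") as fh:
    json.dump(snapshot, fh, indent=2)
```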

Designing infrastructure for machine learning that is reproducible is a huge challenge. It will continue to be a thorn in the side of machine learning engineers for many years to come. One thing that may make this easier in coming years is the rise of universally accepted frameworks for machine learning test environments, which will provide a consistent barometer for engineers to measure their efforts against.

It's easy to forget that the lifecycle of a machine learning model only begins when it's deployed to production. Consequently, a machine learning engineer not only needs to do the work of coding, testing, and deploying a model, but will also have to develop the right tools to monitor it.

The production environment of a model can often throw up scenarios the machine learning engineer didn't anticipate when creating it. Without monitoring and intervention after deployment, a model can end up being rendered dysfunctional or producing skewed results by unexpected data. Without accurate monitoring, results can slowly drift away from what is expected as the input data becomes misaligned with the data the model was trained on, producing less and less effective or logical results.
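
One common way to watch for this kind of drift is to compare the live distribution of a feature against the distribution the model was trained on; the sketch below uses a two-sample Kolmogorov-Smirnov test with made-up data and an arbitrary threshold.

```python
import numpy as np
from scipy.stats import ks_2samp

train_feature = np.random.normal(0.0, 1.0, size=5000)   # what the model saw in training
live_feature = np.random.normal(0.4, 1.0, size=5000)    # what production is sending now

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {statistic:.3f}): investigate or retrain.")
else:
    print("No significant drift in this feature.")
```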

Adversarial attacks on models, often far more sophisticated than tweets aimed at a chatbot, are of increasing concern, and it is clear that monitoring by machine learning engineers is needed to stop a model being rendered counterproductive by unexpected data. As more machine learning models are deployed, and as more economic output becomes dependent upon these models, this challenge is only going to grow in prominence for machine learning engineers.

One of the most exciting things about the role of the machine learning engineer is that it's a job that's still being defined and still faces so many open problems. That means machine learning engineers get the thrill of working in a constantly changing field that deals with cutting-edge problems.

Challenges such as data quality may be problems we can make major progress on in the coming years. Other challenges, such as monitoring, look set to become more pressing in the more immediate future. Given the constant flux of machine learning engineering as an occupation, it's little wonder that curiosity and an innovative mindset are essential qualities for this relatively new profession.

Alex Housley is CEO of Seldon.


Oracle Offers Machine Learning Workshop to Transform DBA Skills – Database Trends and Applications

AI and machine learning are turning a corner, marking this year with new and improved platforms and use cases. However, database administrators don't always have the tools and skills necessary to manage this new minefield of technology.

DBTA recently held a webinar featuring Charlie Berger, senior director of product management for machine learning, AI, and cognitive analytics at Oracle, who discussed an attainable, logical, evolutionary path for adding machine learning to users' Oracle data skills.

Operational DBAs spend a lot of time on maintenance, security, and reliability, Berger said. The Oracle Autonomous Database can help. It automates all database and infrastructure management, monitoring, tuning; protects from both external attacks and malicious internal users; and protects from all downtime including planned maintenance.

The Autonomous Database removes tactical drudgery, allowing more time for strategic contribution, according to Berger.

Machine learning allows algorithms to automatically sift through large amounts of data to discover hidden patterns, new insights, and make predictions, he explained.

Oracle Machine Learning extends Oracle Autonomous Database and enables users to build AI applications and analytics dashboards. OML delivers powerful in-database machine learning algorithms, automated ML functionality, and integration with open source Python and R.

From a database developer to a data scientist, Oracle can transform the data management platform into a combined/hybrid data management and machine learning platform.

There are 6 major steps to becoming a data scientist that include:

An archived on-demand replay of this webinar is available here.


Agxio offers AI-built-by-AI fully-automated machine learning platform free in global fight against COVID-19 – Development Bank of Wales

We share relevant third party stories on our website. This release was written and issued by Agxio.

A revolutionary new machine learning platform built entirely by the brilliance of AI could prove to be a vital weapon in the fight against coronavirus.

Apollo is a pioneering system to deliver a fully automated, AI-driven machine learning engine and is already being hailed as a game-changer.

Created by Agxio, a Cambridge and Aberystwyth-based applied AI innovation company, Apollo operates at beyond-human-scale performance, enabling the robotic platform to evaluate critical data and produce predictive models that solve real-world problems. It then optimises these, looking for patterns or configurations of parameters that human modellers may not even consider or have the patience to develop, and does so in a matter of hours.

With the appropriate data, Apollo and the power of machine learning can be used to analyse and predict the efficacy of potential vaccine combinations, outbreak trends, behavioural nudge factors, early warning indicators, medical images against risk indicators, and isolation rate projections, for example. The range of use cases for automated machine learning is however endless.

Importantly, the fully automated AI-driven engine doesn't require the user to be a programming expert or data science specialist, enabling an expert outside data science or machine learning to study ideas or data that would otherwise take years of experience to be able to apply.

Agxio, which is already backed by the Welsh Government through the Development Bank of Wales, is now offering free use of the platform, together with its technical support team, to all credible researchers, practitioners and government bodies working to defeat COVID-19 for the duration of the pandemic.

Agxio CEO and co-founder Dr Stephen Christie says: "What's different about Apollo is that this is AI built by AI, artificially intelligent machine learning. It's the machine building the machines, a series of robots building the best brains to answer targeted questions. Apollo is designed to focus on problems that are beyond human scale in dimension or complexity and is, without doubt, the most advanced approach of its kind."

"What would take a human literally weeks and months to do, Apollo can generate in minutes and hours. Machine learning is one of the most important tools and defining technologies of our generation, and Apollo is a complete game-changer in terms of accelerating the building of machine learning solutions."

"While humans naturally tend to have biases, Apollo doesn't have any and is additionally data-agnostic. Most importantly, Apollo has speed and accuracy, and right now we need both to be really responsive to the situation. Accurate evaluation of data is vital in the government's planning of next-step measures. And I think it is critical for the government to be using the best tools and techniques we have available at this time."

To that end, the Agxio team has additionally created a single COVID-19 data portal for the global community. Coviddata.io is open to any parties for augmentation as cases, data and innovations evolve.

Dr Christie, who was awarded Tech CEO of the Year in 2019 and 2020 (Innovation & Excellence Awards) and has additionally won Life Sciences Awards (EBA) two years running, explains: "If you are going to do anything around research and machine learning, data is critical, as is the sharing and pooling of that data in a properly trusted and curated form, and making the data accessible and available to researchers."

"When making projections on isolation rates and strategies, you need real data and an engine that is able to crunch that data in a structured way, which is Apollo. Secondly, you need the data to be carefully curated and comprehensive. If you don't have either of those, you're going to struggle to come up with the correct answer."

Agxio secured investment from the Development Bank of Wales in January 2020. Andrew Critchley, an Investment Executive with the Development Bank of Wales, adds: "As backers of Agxio, we are delighted to see the company offering free use of their Apollo platform and expertise to help with the fight against Covid-19."

"We've got to work together to beat this pandemic. Agxio's cutting-edge technology has the potential to help save lives; the impact could be global."

Apollo was originally developed as an expert system to enable arable farmers to analyse traditional and advanced IoT data to address the growing population's need for improved yields and disease resistance. However, it has since proved to be a powerful tool for a number of other applications, including fraud analytics, disease detection, economic anomalies, and bio-sequencing, automating the role of the data scientist to build optimal machine learning models against a target prediction. Data-agnostic, it can operate on numerical, textual and image data, both on and off premises.

Agxio is keen to hear from any data scientists and Python machine learning programmers who would like to volunteer support to researchers' projects. If you would like to put your COVID-19 initiative forward for access to the Apollo platform, or volunteer your technical expertise to projects, please contact Covid-19@agxio.com.

For more information please visit http://www.agxio.com.


The impact of machine learning on the legal industry – ITProPortal

The legal profession, the technology industry and the relationship between the two are in a state of transition. Computer processing power has doubled every year for decades, leading to an explosion in corporate data and increasing pressure on lawyers entrusted with reviewing all of this information.

Now, the legal industry is undergoing significant change, with the advent of machine learning technology fundamentally reshaping the way lawyers conduct their day-to-day practice. Indeed, whilst technological gains might once have had lawyers sighing at the ever-increasing stack of documents in the review pile, technology is now helping where it once hindered. For the first time ever, advanced algorithms allow lawyers to review entire document sets at a glance, releasing them from wading through documents and other repetitive tasks. This means legal professionals can conduct their legal review with more insight and speed than ever before, allowing them to return to the higher-value, more enjoyable aspect of their job: providing counsel to their clients.

In this article, we take a look at how this has been made possible.

Practicing law has always been a document- and paper-heavy task, but manually reading huge volumes of documentation is no longer feasible, or even sustainable, for advisors. Even conservatively, it is estimated that we create 2.5 quintillion bytes of data every day, propelled by the usage of computers, the growth of the Internet of Things (IoT) and the digitalisation of documents. Many lawyers have had no choice but to resort to sampling only 10 per cent of documents or, alternatively, to rely on third-party outsourcing to meet tight deadlines and resource constraints. Whilst this was the most practical response to these pressures, such methods risked jeopardising the quality of legal advice lawyers could give to their clients.

Legal technology was first developed in the early 1970s to take some of the pressure off lawyers. Most commonly, these platforms were grounded in Boolean search technology, requiring months or even years to build complex sets of rules. As well as being expensive and time-intensive, these systems were unable to cope with the unpredictable, complex and ever-changing nature of the profession, requiring significant time investment and bespoke configuration for every new challenge that arose. Not only did this mean lawyers were investing a lot of valuable time and resources training a machine, but the rigidity of these systems limited the advice they could give to their clients. For instance, trying to configure these systems to recognise bespoke clauses or subtle discrepancies in language was a near impossibility.

Today, machine learning has become advanced enough that it has many practical applications, a key one being legal document review.

Machine learning can be broadly categorised into two types: supervised and unsupervised machine learning. Supervised machine learning occurs when a human interacts with the system; in the case of the legal profession, this might be tagging a document or categorising certain types of documents, for example. The machine then builds on this human interaction to generate insights for the user.

Unsupervised machine learning is where the technology forms an understanding of a certain subject without any input from a human. For legal document review, unsupervised machine learning will cluster similar documents and clauses, along with clear outliers from those standards. Because the machine requires no a priori knowledge of what the user is looking for, the system may indicate anomalies or unknown unknowns: data which no one had set out to identify because they didn't know what to look for. This allows lawyers to uncover critical hidden risks in real time.
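
A toy sketch of such an unsupervised pass over a clause set is shown below; the clauses, the cluster count and the "furthest from its cluster centre" anomaly rule are illustrative assumptions, not a description of how Luminance works.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

clauses = [
    "The term of this agreement shall be two years.",
    "This agreement shall remain in force for a period of two years.",
    "Either party may terminate with thirty days written notice.",
    "Either party may terminate upon thirty days notice in writing.",
    "The supplier shall indemnify the buyer against all third-party claims.",
]

# Cluster similar clauses, then flag the clause furthest from its cluster centre.
X = TfidfVectorizer().fit_transform(clauses)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
distances = np.linalg.norm(X.toarray() - model.cluster_centers_[model.labels_], axis=1)

print("cluster labels:", model.labels_.tolist())
print("possible outlier:", clauses[int(np.argmax(distances))])
```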

It is the interplay between supervised and unsupervised machine learning that makes technology like Luminance so powerful. Whilst the unsupervised part can provide lawyers with an immediate insight into huge document sets, these insights only increase with every further interaction, with the technology becoming increasingly bespoke to the nuances and specialities of a firm.

This goes far beyond more simplistic contract review platforms. Machine learning algorithms, such as those developed by Luminance, are able to identify patterns and anomalies in a matter of minutes and can form an understanding of documents both individually and in their relationship to each other. Gone are the days of implicit bias being built into search criteria: since the machine surfaces all relevant information, it remains the responsibility of the lawyer to draw the all-important conclusions. But crucially, by using machine learning technology, lawyers are able to make decisions fully apprised of what is contained within their document sets; they no longer need to rely on methods such as sampling, where critical risk can lie undetected. Indeed, this technology is designed to complement the lawyer's natural patterns of working, for example providing results to a clause search within the document set rather than simply extracting lists of clauses out of context. This allows lawyers to deliver faster and more informed results to their clients, but crucially, the lawyer is still the one driving the review.

With the right technology, lawyers can cut out the lower-value, repetitive work and focus on complex, higher-value analysis to solve their clients' legal and business problems, resulting in time savings of at least 50 per cent from day one of the technology being deployed. This redefines the scope of what lawyers and firms can achieve, allowing them to take on cases which would have been too time-consuming or too expensive for the client if they were conducted manually.

Machine learning is offering lawyers more insight, control and speed in their day-to-day legal work than ever before, surfacing key patterns and outliers in huge volumes of data which would normally be impossible for a single lawyer to review. Whether it be for a due diligence review, a regulatory compliance review, a contract negotiation or an eDiscovery exercise, machine learning can relieve lawyers of time-consuming, lower-value tasks and instead frees them to spend more time solving the problems they have been extensively trained to solve.

In the years to come, we predict a real shift in these processes, with the latest machine learning technology advancing and growing exponentially, and lawyers spending more time providing valuable advice and building client relationships. Machine learning is bringing lawyers back to the purpose of their jobs, the reason they came into the profession and the reason their clients value their advice.

James Loxam, CTO, Luminance


AI/Machine Learning Market Size Analysis, Top Manufacturers, Shares, Growth Opportunities and Forecast to 2026 – Science In Me

New Jersey, United States: Market Research Intellect has added a new research report, titled "AI/Machine Learning Market Professional Survey Report 2020", to its vast collection of research reports. The AI/Machine Learning market is expected to grow positively over the forecast period 2020-2026.

The AI/Machine Learning market report studies past factors that helped the market to grow as well as, the ones hampering the market potential. This report also presents facts on historical data from 2011 to 2019 and forecasts until 2026, which makes it a valuable source of information for all the individuals and industries around the world. This report gives relevant market information in readily accessible documents with clearly presented graphs and statistics. This report also includes views of various industry executives, analysts, consultants, and marketing, sales, and product managers.

Market Segment as follows:

The global AI/Machine Learning Market report focuses strongly on key industry players to identify potential growth opportunities, and increased marketing activity is projected to accelerate market growth throughout the forecast period. Additionally, the market is expected to grow immensely throughout the forecast period owing to several primary factors fuelling the growth of this global market. Finally, the report provides detailed profiles and data analysis of leading AI/Machine Learning companies.

AI/Machine Learning Market by Regional Segments:

The chapter on regional segmentation describes the regional aspects of the AI/Machine Learning market. This chapter explains the regulatory framework that is expected to affect the entire market. It illuminates the political scenario of the market and anticipates its impact on the market for AI/Machine Learning .

The AI/Machine Learning Market research presents a study combining primary as well as secondary research. The report gives insights on the key factors concerned with generating and limiting AI/Machine Learning market growth. Additionally, the report also studies competitive developments, such as mergers and acquisitions, new partnerships, new contracts, and new product developments in the global AI/Machine Learning market. The past trends and future prospects included in this report make it highly comprehensible for analysis of the market. Moreover, the latest trends, product portfolio, demographics, geographical segmentation, and regulatory framework of the AI/Machine Learning market have also been included in the study.


Table of Contents

1 Introduction of AI/Machine Learning Market
1.1 Overview of the Market
1.2 Scope of Report
1.3 Assumptions

2 Executive Summary

3 Research Methodology
3.1 Data Mining
3.2 Validation
3.3 Primary Interviews
3.4 List of Data Sources

4 AI/Machine Learning Market Outlook
4.1 Overview
4.2 Market Dynamics
4.2.1 Drivers
4.2.2 Restraints
4.2.3 Opportunities
4.3 Porter's Five Force Model
4.4 Value Chain Analysis

5 AI/Machine Learning Market, By Deployment Model
5.1 Overview

6 AI/Machine Learning Market, By Solution
6.1 Overview

7 AI/Machine Learning Market, By Vertical
7.1 Overview

8 AI/Machine Learning Market, By Geography
8.1 Overview
8.2 North America
8.2.1 U.S.
8.2.2 Canada
8.2.3 Mexico
8.3 Europe
8.3.1 Germany
8.3.2 U.K.
8.3.3 France
8.3.4 Rest of Europe
8.4 Asia Pacific
8.4.1 China
8.4.2 Japan
8.4.3 India
8.4.4 Rest of Asia Pacific
8.5 Rest of the World
8.5.1 Latin America
8.5.2 Middle East

9 AI/Machine Learning Market Competitive Landscape
9.1 Overview
9.2 Company Market Ranking
9.3 Key Development Strategies

10 Company Profiles
10.1.1 Overview
10.1.2 Financial Performance
10.1.3 Product Outlook
10.1.4 Key Developments

11 Appendix
11.1 Related Research

Complete Report is Available @ https://www.marketresearchintellect.com/product/global-ai-machine-learning-market-size-and-forecast/?utm_source=SI&utm_medium=888

We also offer customization on reports based on specific client requirements:

1. Free country-level analysis for any 5 countries of your choice.

2. Free competitive analysis of any market players.

3. Free 40 analyst hours to cover any other data points.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes
Market Research Intellect
New Jersey (USA)
Tel: +1-650-781-4080

Email: [emailprotected]




Parasoft wins 2020 VDC Research Embeddy Award for Its Artificial Intelligence (AI) and Machine Learning (ML) Innovation – Yahoo Finance

Parasoft C/C++test is honored for its leading technology to increase software engineer productivity and achieve safety compliance

MONROVIA, Calif., April 7, 2020 /PRNewswire/ -- Parasoft, a global software testing automation leader for over 30 years, received the VDC Research Embeddy Award for 2020. The technology research and consulting firm annually recognizes cutting-edge software and hardware technologies in the embedded industry. This year, Parasoft C/C++test, a unified development testing solution for the safety and security of embedded C and C++ applications, was recognized for its new, innovative approach that expedites the adoption of software code analysis, increasing developer productivity and simplifying compliance with industry standards such as CERT C/C++, MISRA C 2012 and AUTOSAR C++14. To learn more about Parasoft C/C++test, please visit: https://www.parasoft.com/products/ctest.


"Parasoft has continued its investment in the embedded market, adding new products and personnel to boost its market presence. In addition to highlighting expanded partnerships and coding-standard support, the company announced the integration of AI capabilities into its static analysis engine. While defect prioritization systems have been part of static analysis solutions for well over ten years, Parasoft's solution takes the idea a step further. Their solution now effectively learns from past interactions with identified defects and the codebase to better help users triage new findings," states Chris Rommel, EVP, VDC Research Group.

Parasoft's latest innovation applies AI/Machine Learning to the process of reviewing static analysis findings. Static analysis is a foundational part of the quality process, especially in safety-critical development (e.g., ISO26262, IEC61508), and is an effective first step to establish secure development practices. A common challenge when deploying static analysis tools is dealing with the multitude of reported findings. Scans can produce tens of thousands of findings, and teams of highly qualified resources need to go through a time-consuming process of reviewing and identifying high-priority findings. This process leads to finding and reviewing critical issues late in the cycle, delaying the delivery, and worse, allowing insecure/unsafe code to become embedded into the codebase.

Parasoft leaps forward beyond the rest of the competitive market by having AI/ML take into account the context of both historical interactions with the code base and prior static analysis findings to predict relevance and prioritize new findings. This innovation helps organizations achieve compliance with industry standards and offers a unique application of AI/ML in helping organizations with the adoption of static analysis. This innovative technology builds on Parasoft's previous AI/ML innovations in the areas of Web UI, API, and Unit testing - https://blog.parasoft.com/what-is-artificial-intelligence-in-software-testing.

"We are extremely honored to have received this award, particularly in light of the competition, VDC's expertise and knowledge of the embedded market," said Mark Lambert, VP of Products at Parasoft. "We have always been committed to innovation led by listening to our customers and leveraging capabilities that will help drive them forward. This creativity has always driven Parasoft's development and is something that has been in the company's DNA from its founding."


About Parasoft (www.parasoft.com):Parasoft, the global leader in software testing automation, has been reducing the time, effort, and cost of delivering high-quality software to the market for the last 30+ years. Parasoft's tools support the entire software development process, from when the developer writes the first line of code all the way through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way. Parasoft's unique analytics platform aggregates data from across all testing practices, providing insights up and down the testing pyramid to enable organizations to succeed in today's most strategic development initiatives, including Agile/DevOps, Continuous Testing, and the complexities of IoT.

View original content to download multimedia:http://www.prnewswire.com/news-releases/parasoft-wins-2020-vdc-research-embeddy-award-for-its-artificial-intelligence-ai--and-machine-learning-ml-innovation-301036797.html

SOURCE Parasoft

Here is the original post:
Parasoft wins 2020 VDC Research Embeddy Award for Its Artificial Intelligence (AI) and Machine Learning (ML) Innovation - Yahoo Finance

Machine Learning as a Service Market Size Analysis, Top Manufacturers, Shares, Growth Opportunities and Forecast to 2026 – Science In Me

New Jersey, United States: Market Research Intellect has added a new research report titled "Machine Learning as a Service Market Professional Survey Report 2020" to its vast collection of research reports. The Machine Learning as a Service market is expected to grow positively over the forecast period 2020-2026.

The Machine Learning as a Service market report studies past factors that helped the market grow as well as those hampering its potential. This report also presents facts on historical data from 2011 to 2019 and forecasts until 2026, which makes it a valuable source of information for individuals and industries around the world. This report gives relevant market information in readily accessible documents with clearly presented graphs and statistics. This report also includes views of various industry executives, analysts, consultants, and marketing, sales, and product managers.

Key Players Mentioned in the Machine Learning as a Service Market Research Report:

Market Segment as follows:

The global Machine Learning as a Service Market report focuses closely on key industry players to identify potential growth opportunities, along with increased marketing activities that are projected to accelerate market growth throughout the forecast period. Additionally, the market is expected to grow immensely throughout the forecast period owing to some primary factors fuelling the growth of this global market. Finally, the report provides detailed profile and data information analysis of leading Machine Learning as a Service companies.

Machine Learning as a Service Market by Regional Segments:

The chapter on regional segmentation describes the regional aspects of the Machine Learning as a Service market. This chapter explains the regulatory framework that is expected to affect the entire market. It illuminates the political scenario of the market and anticipates its impact on the market for Machine Learning as a Service.

The Machine Learning as a Service Market research presents a study combining primary as well as secondary research. The report gives insights on the key factors concerned with generating and limiting Machine Learning as a Service market growth. Additionally, the report also studies competitive developments, such as mergers and acquisitions, new partnerships, new contracts, and new product developments in the global Machine Learning as a Service market. The past trends and future prospects included in this report make it highly comprehensible for the analysis of the market. Moreover, the latest trends, product portfolio, demographics, geographical segmentation, and regulatory framework of the Machine Learning as a Service market have also been included in the study.

Ask For Discount (Special Offer: Get 25% discount on this report) @ https://www.marketresearchintellect.com/ask-for-discount/?rid=195381&utm_source=SI&utm_medium=888

Table of Content

1 Introduction of Machine Learning as a Service Market: 1.1 Overview of the Market, 1.2 Scope of Report, 1.3 Assumptions

2 Executive Summary

3 Research Methodology: 3.1 Data Mining, 3.2 Validation, 3.3 Primary Interviews, 3.4 List of Data Sources

4 Machine Learning as a Service Market Outlook: 4.1 Overview, 4.2 Market Dynamics (4.2.1 Drivers, 4.2.2 Restraints, 4.2.3 Opportunities), 4.3 Porter's Five Forces Model, 4.4 Value Chain Analysis

5 Machine Learning as a Service Market, By Deployment Model: 5.1 Overview

6 Machine Learning as a Service Market, By Solution: 6.1 Overview

7 Machine Learning as a Service Market, By Vertical: 7.1 Overview

8 Machine Learning as a Service Market, By Geography: 8.1 Overview, 8.2 North America (8.2.1 U.S., 8.2.2 Canada, 8.2.3 Mexico), 8.3 Europe (8.3.1 Germany, 8.3.2 U.K., 8.3.3 France, 8.3.4 Rest of Europe), 8.4 Asia Pacific (8.4.1 China, 8.4.2 Japan, 8.4.3 India, 8.4.4 Rest of Asia Pacific), 8.5 Rest of the World (8.5.1 Latin America, 8.5.2 Middle East)

9 Machine Learning as a Service Market Competitive Landscape: 9.1 Overview, 9.2 Company Market Ranking, 9.3 Key Development Strategies

10 Company Profiles: 10.1.1 Overview, 10.1.2 Financial Performance, 10.1.3 Product Outlook, 10.1.4 Key Developments

11 Appendix: 11.1 Related Research

Complete Report is Available @ https://www.marketresearchintellect.com/product/global-machine-learning-as-a-service-market-size-and-forecast/?utm_source=SI&utm_medium=888

We also offer customization on reports based on specific client requirements:

1. Free country-level analysis for any 5 countries of your choice.

2. Free competitive analysis of any market players.

3. Free 40 analyst hours to cover any other data points.

About Us:

Market Research Intellect provides syndicated and customized research reports to clients from various industries and organizations with the aim of delivering functional expertise. We provide reports for all industries including Energy, Technology, Manufacturing and Construction, Chemicals and Materials, Food and Beverage and more. These reports deliver an in-depth study of the market with industry analysis, market value for regions and countries and trends that are pertinent to the industry.

Contact Us:

Mr. Steven Fernandes, Market Research Intellect, New Jersey (USA), Tel: +1-650-781-4080

Email: [emailprotected]

Get Our Trending Report

https://www.marketresearchblogs.com/

https://www.marktforschungsblogs.com/


Read more:
Machine Learning as a Service Market Size Analysis, Top Manufacturers, Shares, Growth Opportunities and Forecast to 2026 - Science In Me

Quantiphi Wins Google Cloud Social Impact Partner of the Year Award – AiThority

Awarded to recognize Google Cloud partners who have made a positive impact on the world

Quantiphi, an award-winning applied artificial intelligence and data science software and services company, announced today that it has been named 2019 Social Impact Partner of the Year by Google Cloud. Quantiphi was recognized for its achievements for working with nonprofits, research institutions, and healthcare providers, to leverage AI for Social Good.

"We are believers in the power of human acumen and technology to solve the world's toughest challenges. This award is a recognition of our mission-driven culture and our passion to apply AI for social good," said Asif Hasan, Co-founder, Quantiphi. "Partnering with Google Cloud has given us the opportunity to work with the world's leading nonprofit, healthcare, and research institutions, and we are truly humbled by this recognition."


"We're delighted to recognize Quantiphi's commitment to social impact," said Carolee Gearhart, Vice President, Worldwide Channel Sales at Google Cloud. "By applying its capabilities in AI and ML to important causes, Quantiphi has demonstrated how Google Cloud partners are contributing to positive change in the world."

A few initiatives that helped Quantiphi earn this recognition:


Quantiphi previously earned the Google Cloud Machine Learning Partner of the Year award twice in a row, for 2017 and 2018. It is a Premier Partner for Google Cloud and holds specializations in machine learning, data analytics, and marketing analytics.


Visit link:
Quantiphi Wins Google Cloud Social Impact Partner of the Year Award - AiThority

Adversarial attacks against machine learning systems everything you need to know – The Daily Swig

The behavior of machine learning systems can be manipulated, with potentially devastating consequences

In March 2019, security researchers at Tencent managed to trick a Tesla Model S into switching lanes.

All they had to do was place a few inconspicuous stickers on the road. The technique exploited glitches in the machine learning (ML) algorithms that power Tesla's lane detection technology in order to cause it to behave erratically.

Machine learning has become an integral part of many of the applications we use every day, from the facial recognition lock on iPhones to Alexa's voice recognition function and the spam filters in our emails.

But the pervasiveness of machine learning, and its subset deep learning, has also given rise to adversarial attacks, a breed of exploits that manipulate the behavior of algorithms by providing them with carefully crafted input data.

"Adversarial attacks are manipulative actions that aim to undermine machine learning performance, cause model misbehavior, or acquire protected information," Pin-Yu Chen, chief scientist, RPI-IBM AI research collaboration at IBM Research, told The Daily Swig.

Adversarial machine learning was studied as early as 2004. But at the time, it was regarded as an interesting peculiarity rather than a security threat. However, the rise of deep learning and its integration into many applications in recent years has renewed interest in adversarial machine learning.

There's growing concern in the security community that adversarial vulnerabilities can be weaponized to attack AI-powered systems.

As opposed to classic software, where developers manually write instructions and rules, machine learning algorithms develop their behavior through experience.

For instance, to create a lane-detection system, the developer creates a machine learning algorithm and trains it by providing it with many labeled images of street lanes from different angles and under different lighting conditions.

The machine learning model then tunes its parameters to capture the common patterns that occur in images that contain street lanes.

With the right algorithm structure and enough training examples, the model will be able to detect lanes in new images and videos with remarkable accuracy.
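To make the "tunes its parameters" step concrete, here is a minimal, hypothetical training-step sketch in PyTorch; the tiny network, the two-class labelling, and the data are illustrative stand-ins, not the architecture of any real lane-detection system.

```python
# Hedged sketch of the supervised training loop described above: the model's
# parameters are nudged so its predictions match the human-provided labels.
# The network and the "lane present"/"no lane" labelling are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for a real lane-detection network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),                       # e.g. "lane present" vs "no lane"
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One gradient step: adjust parameters toward the labelled answer."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                         # gradients of the loss w.r.t. parameters
    optimizer.step()                        # parameter update
    return loss.item()
```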

But despite their success in complex fields such as computer vision and voice recognition, machine learning algorithms are statistical inference engines: complex mathematical functions that transform inputs to outputs.

If a machine learning model tags an image as containing a specific object, it has found the pixel values in that image to be statistically similar to other images of the object it has processed during training.

Adversarial attacks exploit this characteristic to confound machine learning algorithms by manipulating their input data. For instance, by adding tiny and inconspicuous patches of pixels to an image, a malicious actor can cause the machine learning algorithm to classify it as something it is not.
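As an illustration of such pixel-level manipulation, the sketch below implements the well-known fast gradient sign method (FGSM) in PyTorch. It is a generic textbook example, not the specific technique used in the Tesla or stop-sign attacks described in this article.

```python
# Hedged sketch of one classic white-box perturbation (FGSM): each pixel is
# shifted slightly in the direction that increases the model's loss.
import torch

def fgsm_perturb(model, image, label, loss_fn, epsilon=0.01):
    """Return the image plus a tiny perturbation that pushes the model off its label."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep pixel values in a valid range
```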

Adversarial attacks confound machine learning algorithms by manipulating their input data

The types of perturbations applied in adversarial attacks depend on the target data type and desired effect. "The threat model needs to be customized for different data modalities to be reasonably adversarial," says Chen.

"For instance, for images and audio, it makes sense to consider small data perturbation as a threat model because it will not be easily perceived by a human but may make the target model misbehave, causing inconsistency between human and machine."

"However, for some data types such as text, perturbation, by simply changing a word or a character, may disrupt the semantics and easily be detected by humans. Therefore, the threat model for text should be naturally different from image or audio."

The most widely studied area of adversarial machine learning involves algorithms that process visual data. The lane-changing trick mentioned at the beginning of this article is an example of a visual adversarial attack.

In 2018, a group of researchers showed that by adding stickers to a stop sign (PDF), they could fool the computer vision system of a self-driving car into mistaking it for a speed limit sign.

Researchers tricked self-driving systems into identifying a stop sign as a speed limit sign

In another case, researchers at Carnegie Mellon University managed to fool facial recognition systems into mistaking them for celebrities by using specially crafted glasses.

Adversarial attacks against facial recognition systems have found their first real use in protests, where demonstrators use stickers and makeup to fool surveillance cameras powered by machine learning algorithms.

Computer vision systems are not the only targets of adversarial attacks. In 2018, researchers showed that automated speech recognition (ASR) systems could also be targeted with adversarial attacks (PDF). ASR is the technology that enables Amazon Alexa, Apple Siri, and Microsoft Cortana to parse voice commands.

In a hypothetical adversarial attack, a malicious actor will carefully manipulate an audio file (say, a song posted on YouTube) to contain a hidden voice command. A human listener wouldn't notice the change, but to a machine learning algorithm looking for patterns in sound waves, it would be clearly audible and actionable. For example, audio adversarial attacks could be used to secretly send commands to smart speakers.

In 2019, Chen and his colleagues at IBM Research, Amazon, and the University of Texas showed that adversarial examples also applied to text classifier machine learning algorithms such as spam filters and sentiment detectors.

Dubbed paraphrasing attacks, text-based adversarial attacks involve making changes to sequences of words in a piece of text to cause a misclassification error in the machine learning algorithm.

Example of a paraphrasing attack against fake news detectors and spam filters

Like any cyber-attack, the success of adversarial attacks depends on how much information an attacker has on the targeted machine learning model. In this respect, adversarial attacks are divided into black-box and white-box attacks.

"Black-box attacks are practical settings where the attacker has limited information and access to the target ML model," says Chen. "The attacker's capability is the same as a regular user and can only perform attacks given the allowed functions. The attacker also has no knowledge about the model and data used behind the service."


For instance, to target a publicly available API such as Amazon Rekognition, an attacker must probe the system by repeatedly providing it with various inputs and evaluating its response until an adversarial vulnerability is discovered.
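One common way such probing is framed in the research literature is to estimate the gradient of the model's output purely from the scores the service returns. The sketch below shows a simple finite-difference estimator; `query_model` is a hypothetical stand-in for the remote API, and the sampling scheme and constants are illustrative assumptions.

```python
# Hedged sketch of black-box probing: with only query access, an attacker can
# approximate how the model's score changes with the input by finite differences.
import numpy as np

def estimate_gradient(query_model, x, target_class, delta=1e-3, n_samples=50):
    """Estimate d(score)/d(x) from function evaluations alone (up to a constant scale)."""
    grad = np.zeros_like(x)
    base = query_model(x)[target_class]
    for _ in range(n_samples):
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)                       # random unit direction
        perturbed = query_model(x + delta * u)[target_class]
        grad += (perturbed - base) / delta * u       # directional difference
    return grad / n_samples
```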

"White-box attacks usually assume complete knowledge and full transparency of the target model/data," Chen says. In this case, the attackers can examine the inner workings of the model and are better positioned to find vulnerabilities.

"Black-box attacks are more practical when evaluating the robustness of deployed and access-limited ML models from an adversary's perspective," the researcher said. "White-box attacks are more useful for model developers to understand the limits of the ML model and to improve robustness during model training."

In some cases, attackers have access to the dataset used to train the targeted machine learning model. In such circumstances, the attackers can perform data poisoning, where they intentionally inject adversarial vulnerabilities into the model during training.

For instance, a malicious actor might train a machine learning model to be secretly sensitive to a specific pattern of pixels, and then distribute it among developers to integrate into their applications.

Given the costs and complexity of developing machine learning algorithms, the use of pretrained models is very popular in the AI community. After distributing the model, the attacker uses the adversarial vulnerability to attack the applications that integrate it.
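The poisoning step described above can be as simple as stamping a fixed pixel pattern onto a small fraction of the training images and relabelling them. The following NumPy sketch is a generic illustration (the array shapes, patch size, and poisoning rate are assumptions), not a reproduction of any specific attack.

```python
# Hedged sketch of data poisoning with a pixel-pattern trigger: a small white
# patch is stamped onto a fraction of the training images, and their labels are
# flipped to the attacker's chosen target class. Assumes images shaped
# (N, C, H, W) with values in [0, 1] and integer class labels.
import numpy as np

def poison_dataset(images, labels, target_class, rate=0.05):
    """Stamp a 3x3 trigger into `rate` of the images and relabel them."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, :, -3:, -3:] = 1.0      # white patch in the bottom-right corner
    labels[idx] = target_class
    return images, labels
```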

"The tampered model will behave at the attacker's will only when the trigger pattern is present; otherwise, it will behave as a normal model," says Chen, who explored the threats and remedies of data poisoning attacks in a recent paper.

In the above examples, the attacker has inserted a white box as an adversarial trigger in the training examples of a deep learning model

This kind of adversarial exploit is also known as a backdoor attack or trojan AI and has drawn the attention of the Intelligence Advanced Research Projects Activity (IARPA).

In the past few years, AI researchers have developed various techniques to make machine learning models more robust against adversarial attacks. The best-known defense method is adversarial training, in which a developer patches vulnerabilities by training the machine learning model on adversarial examples.
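A minimal sketch of what one adversarial-training step can look like is shown below; it reuses the hypothetical fgsm_perturb helper from the earlier sketch and is only one simple variant of the technique, not the definitive recipe.

```python
# Hedged sketch of an adversarial-training step: each batch is augmented with
# adversarial versions of itself (generated here with the fgsm_perturb sketch
# defined earlier) so the model learns to classify both correctly.
def adversarial_training_step(model, optimizer, loss_fn, images, labels):
    adv_images = fgsm_perturb(model, images, labels, loss_fn)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels) + loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```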

Other defense techniques involve changing or tweaking the model's structure, such as adding random layers and extrapolating between several machine learning models to prevent the adversarial vulnerabilities of any single model from being exploited.

"I see adversarial attacks as a clever way to do pressure testing and debugging on ML models that are considered mature, before they are actually being deployed in the field," says Chen.

"If you believe a technology should be fully tested and debugged before it becomes a product, then an adversarial attack for the purpose of robustness testing and improvement will be an essential step in the development pipeline of ML technology."


See more here:
Adversarial attacks against machine learning systems everything you need to know - The Daily Swig

Why neural networks struggle with the Game of Life – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

The Game of Life is a grid-based automaton that is very popular in discussions about science, computation, and artificial intelligence. It is an interesting idea that shows how very simple rules can yield very complicated results.

Despite its simplicity, however, the Game of Life remains a challenge to artificial neural networks, AI researchers at Swarthmore College and the Los Alamos National Laboratory have shown in a recent paper. Titled "It's Hard for Neural Networks To Learn the Game of Life," their research investigates how neural networks explore the Game of Life and why they often miss finding the right solution.

Their findings highlight some of the key issues with deep learning models and give some interesting hints at what could be the next direction of research for the AI community.

British mathematician John Conway invented the Game of Life in 1970. Basically, the Game of Life tracks the on or off state (the "life") of a series of cells on a grid across timesteps. At each timestep, the following simple rules define which cells come to life or stay alive, and which cells die or stay dead:

1. A live cell with fewer than two live neighbors dies (underpopulation).

2. A live cell with two or three live neighbors stays alive.

3. A live cell with more than three live neighbors dies (overpopulation).

4. A dead cell with exactly three live neighbors comes to life (reproduction).

Based on these four simple rules, you can adjust the initial state of your grid to create interesting stable, oscillating, and gliding patterns.

For instance, this is what's called the glider gun.

You can also use the Game of Life to create very complex patterns, such as this one.

Interestingly, no matter how complex a grid becomes, you can predict the state of each cell in the next timestep with the same rules.
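Those rules are short enough to write down directly. The following NumPy sketch advances a grid by one timestep; the wrap-around boundary is an implementation choice for brevity, not part of Conway's rules.

```python
# A minimal NumPy implementation of the update rule: each cell's next state
# depends only on its current state and its eight neighbors.
import numpy as np

def life_step(grid):
    """Advance a 2-D 0/1 grid one timestep under Conway's rules (toroidal edges)."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Alive next step: exactly 3 neighbors, or 2 neighbors and already alive.
    return ((neighbors == 3) | ((neighbors == 2) & (grid == 1))).astype(int)
```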

With neural networks being very good prediction machines, the researchers wanted to find out whether deep learning models could learn the underlying rules of the Game of Life.

"There are a few reasons the Game of Life is an interesting experiment for neural networks. We already know a solution," Jacob Springer, a computer science student at Swarthmore College and co-author of the paper, told TechTalks. "We can write down by hand a neural network that implements the Game of Life, and therefore we can compare the learned solutions to our hand-crafted one. This is not the case in…"

It is also very easy to adjust the flexibility of the problem in the Game of Life by modifying the number of timesteps in the future the target deep learning model must predict.

Also, unlike domains such as computer vision or natural language processing, if a neural network has learned the rules of the Game of Life, it will reach 100 percent accuracy. "There's no ambiguity. If the network fails even once, then it has not correctly learned the rules," Springer says.

In their work, the researchers first created a small convolutional neural network and manually tuned its parameters to be able to predict the sequence of changes in the Game of Life's grid cells. This proved that there's a minimal neural network that can represent the rules of the Game of Life.

Then, they tried to see if the same neural network could reach optimal settings when trained from scratch. They initialized the parameters to random values and trained the neural network on 1 million randomly generated examples of the Game of Life. The only way the neural network could reach 100 percent accuracy would be to converge on the hand-crafted parameter values. This would imply that the AI model had managed to parameterize the rules underlying the Game of Life.
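To give a flavour of how a tiny convolutional network can encode the rule, here is a hedged sketch: a single 3x3 convolution counts each cell's neighbors and copies its own state, and a simple threshold on those counts yields the next state (in a fully hand-crafted network that thresholding would itself be done by further small layers). This illustrates the general construction only; it is not the paper's exact architecture or weights.

```python
# Hedged sketch: one 3x3 convolution with fixed weights computes, per cell,
# (a) the neighbor count and (b) the cell's own state; a threshold then gives
# the next state. Expects a float tensor of 0s and 1s shaped (N, 1, H, W).
import torch
import torch.nn as nn

counter = nn.Conv2d(1, 2, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    counter.weight.zero_()
    counter.weight[0, 0] = 1.0          # channel 0: sum over the 3x3 block...
    counter.weight[0, 0, 1, 1] = 0.0    # ...excluding the centre = neighbor count
    counter.weight[1, 0, 1, 1] = 1.0    # channel 1: the cell's own state

def next_state(grid):
    neighbors, alive = counter(grid).unbind(dim=1)
    return ((neighbors == 3) | ((neighbors == 2) & (alive == 1))).float()
```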

But in most cases the trained neural network did not find the optimal solution, and the performance of the network decreased even further as the number of steps increased. The result of training the neural network was largely affected by the chosen set of training examples as well as the initial parameters.

Unfortunately, you never know what the initial weights of the neural network should be. The most common practice is to pick random values from a normal distribution, so settling on the right initial weights becomes a game of luck. As for the training dataset, in many cases it isn't clear which samples are the right ones, and in others, there's not much of a choice.

"For many problems, you don't have a lot of choice in dataset; you get the data that you can collect, so if there is a problem with your dataset, you may have trouble training the neural network," Springer says.

In machine learning, one of the popular ways to improve the accuracy of a model that is underperforming is to increase its complexity. And this technique worked with the Game of Life. As the researchers added more layers and parameters to the neural network, the results improved and the training process eventually yielded a solution that reached near-perfect accuracy.

But a larger neural network also means an increase in the cost of training and running the deep learning model.

On the one hand, this shows the flexibility of large neural networks. Although a huge deep learning model might not be the most optimal architecture to address your problem, it has a greater chance of finding a good solution. But on the other, it proves that there is likely to be a smaller deep learning model that can provide the same or better results, if you can find it.

These findings are in line with The Lottery Ticket Hypothesis, presented at the ICLR 2019 conference by AI researchers at MIT CSAIL. The hypothesis suggested that for each large neural network, there are smaller sub-networks that can converge on a solution if their parameters have been initialized on lucky, winning values, thus the lottery ticket nomenclature.

"The lottery ticket hypothesis proposes that when training a convolutional neural network, small lucky subnetworks quickly converge on a solution," the authors of the Game of Life paper write. "This suggests that rather than searching extensively through weight-space for an optimal solution, gradient-descent optimization may rely on lucky initializations of weights that happen to position a subnetwork close to a reasonable local minima to which the network converges."

"While Conway's Game of Life itself is a toy problem and has few direct applications, the results we report here have implications for similar tasks in which a neural network is trained to predict an outcome which requires the network to follow a set of local rules with multiple hidden steps," the AI researchers write in their paper.

These findings can apply to machine learning models used in logic or math solvers, weather and fluid dynamics simulations, and logical deduction in language or image processing.

"Given the difficulty that we have found for small neural networks to learn the Game of Life, which can be expressed with relatively simple symbolic rules, I would expect that most sophisticated symbol manipulation would be even more difficult for neural networks to learn, and would require even larger neural networks," Springer said. "Our result does not necessarily suggest that neural networks cannot learn and execute symbolic rules to make decisions; however, it suggests that these types of systems may be very difficult to learn, especially as the complexity of the problem increases."

The researchers further believe that their findings apply to other fields of machine learning that do not necessarily rely on clear-cut logical rules, such as image and audio classification.

For the moment, we know that, in some cases, increasing the size and complexity of our neural networks can solve the problem of poorly performing deep learning models. But we should also consider the negative impact of using larger neural networks as the go-to method to overcome impasses in machine learning research. One outcome can be greater energy consumption and carbon emissions caused by the compute resources required to train large neural networks. Another can be the collection of ever larger training datasets instead of relying on finding ideal distribution strategies across smaller datasets, which might not be feasible in domains where data is subject to ethical considerations and privacy laws. And finally, the general trend toward endorsing overcomplete and very large deep learning models can consolidate AI power in large tech companies and make it harder for smaller players to enter the deep learning research space.

"We hope that this paper will promote research into the limitations of neural networks so that we can better understand the flaws that necessitate overcomplete networks for learning. We hope that our result will drive development into better learning algorithms that do not face the drawbacks of gradient-based learning," the authors of the paper write.

"I think the results certainly motivate research into improved search algorithms, or for methods to improve the efficiency of large networks," Springer said.

Read more here:
Why neural networks struggle with the Game of Life - TechTalks

When Machines Design: Artificial Intelligence and the Future of Aesthetics – ArchDaily

When Machines Design: Artificial Intelligence and the Future of Aesthetics


Are machines capable of design? Though a persistent question, it is one that increasingly accompanies discussions on architecture and the future of artificial intelligence. But what exactly is AI today? As we discover more about machine learning and generative design, we begin to see that these forms of "intelligence" extend beyond repetitive tasks and simulated operations. They've come to encompass cultural production, and in turn, design itself.


When artificial intelligence was envisioned during the 1950s-60s, the goal was to teach a computer to perform a range of cognitive tasks and operations, similar to a human mind. Fast forward half a century, and AI is shaping our aesthetic choices, with automated algorithms suggesting what we should see, read, and listen to. It helps us make aesthetic decisions when we create media, from movie trailers and music albums to product and web designs. We have already felt some of the cultural effects of AI adoption, even if we aren't aware of it.

As educator and theorist Lev Manovich has explained, computers perform endless intelligent operations. "Your smartphone's keyboard gradually adapts to your typing style. Your phone may also monitor your usage of apps and adjust their work in the background to save battery. Your map app automatically calculates the fastest route, taking into account traffic conditions. There are thousands of intelligent, but not very glamorous, operations at work in phones, computers, web servers, and other parts of the IT universe." More broadly, it's useful to turn the discussion towards aesthetics and how these advancements relate to art, beauty and taste.

Usually defined as a set of "principles concerned with the nature and appreciation of beauty," aesthetics depend on who you are talking to. In 2018, Marcus Endicott described how, from the perspective of engineering, the traditional definition of aesthetics in computing could be termed "structural," such as an elegant proof or a beautiful diagram. A broader definition may include more abstract qualities of form and symmetry that "enhance pleasure and creative expression." In turn, as machine learning is gradually becoming more widely adopted, it is leading to what Endicott termed a "neural aesthetic." This can be seen in recent artistic hacks, such as DeepDream, NeuralTalk, and Stylenet.

Beyond these adaptive processes, there are other ways AI shapes cultural creation. Artificial intelligence has recently made rapid advances in the computation of art, music, poetry, and lifestyle. Manovich explains that AI has given us the option to automate our aesthetic choices (via recommendation engines), as well as assist in certain areas of aesthetic production, such as consumer photography, and automate experiences like the ads we see online. "Its use in helping to design fashion items, logos, music, TV commercials, and works in other areas of culture is already growing." But, as he concludes, human experts usually make the final decisions based on ideas and media generated by AI. And yes, the human vs. robot debate rages on.

According to The Economist, 47% of the work done by humans will have been replaced by robots by 2037, even work traditionally associated with a university education. The World Economic Forum estimated that between 2015 and 2020, 7.1 million jobs would be lost around the world, as "artificial intelligence, robotics, nanotechnology and other socio-economic factors replace the need for human employees." Artificial intelligence is already changing the way architecture is practiced, whether or not we believe it may replace us. As AI is augmenting design, architects are working to explore the future of aesthetics and how we can improve the design process.

In a tech report on artificial intelligence, Building Design + Construction explored how Arup had applied a neural network to a light rail design and reduced the number of utility clashes by over 90%, saving nearly 800 hours of engineering. In the same vein, the areas of site and social research that utilize artificial intelligence have been extensively covered, and examples are generated almost daily. We know that machine-driven procedures can dramatically improve the efficiency of construction and operations, like by increasing energy performance and decreasing fabrication time and costs. The neural network application from Arup extends to this design decision-making. But the central question comes back to aesthetics and style.

Designer and Fulbright fellow Stanislas Chaillou recently created a project at Harvard utilizing machine learning to explore the future of generative design, bias and architectural style. While studying AI and its potential integration into architectural practice, Chaillou built an entire generation methodology using Generative Adversarial Neural Networks (GANs). Chaillou's project investigates the future of AI through architectural style learning, and his work illustrates the profound impact of style on the composition of floor plans.

As Chaillou summarizes, architectural styles carry implicit mechanics of space, and there are spatial consequences to choosing a given style over another. In his words, style is not an ancillary, superficial or decorative addendum; it is at the core of the composition.

Artificial intelligence and machine learning are becoming increasingly important as they shape our future. If machines can begin to understand and affect our perceptions of beauty, we should work to find better ways to implement these tools and processes in the design process.

Architect and researcher Valentin Soana once stated that the digital in architectural design enables new systems where architectural processes can emerge through "close collaboration between humans and machines; where technologies are used to extend capabilities and augment design and construction processes." As machines learn to design, we should work with AI to enrich our practices through aesthetic and creative ideation.More than productivity gains, we can rethink the way we live, and in turn, how to shape the built environment.

See the original post:
When Machines Design: Artificial Intelligence and the Future of Aesthetics - ArchDaily

Machine learning techniques applied to crack CAPTCHAs – The Daily Swig

A newly released tool makes light work of solving human verification challenges

F-Secure says it's achieved 90% accuracy in cracking Microsoft Outlook's text-based CAPTCHAs using its AI-based CAPTCHA-cracking server, CAPTCHA22.

For the last two years, the security firm has been using machine learning techniques to train unique models that solve a particular CAPTCHA, rather than trying to build a one-size-fits-all model.

And, recently, it decided to try the system out on a CAPTCHA used by an Outlook Web App (OWA) portal.

The initial attempt, according to F-Secure, was comparatively unsuccessful, with the team finding that after manually labelling around 200 CAPTCHAs, it could only identify the characters with an accuracy of 22%.

The first issue to emerge was noise, with the team determining that the greyscale value of noise and text was always within two distinct and constant ranges. Tweaks to the tool helped filter out the noise.
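In practice, that kind of observation lends itself to a simple range filter. The sketch below keeps only pixels whose greyscale value falls inside an assumed "text" band; the threshold values are illustrative placeholders, not F-Secure's actual parameters.

```python
# Hedged sketch of range-based noise removal: if noise pixels and text pixels
# fall in two distinct greyscale bands, keeping only the text band cleans the
# image. Threshold values are illustrative.
import numpy as np

def strip_noise(grey_image, text_lo=0, text_hi=100):
    """Keep pixels whose greyscale value falls in the text range; blank the rest."""
    mask = (grey_image >= text_lo) & (grey_image <= text_hi)
    cleaned = np.full_like(grey_image, 255)   # white background
    cleaned[mask] = grey_image[mask]
    return cleaned
```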

The team also realized that some of the test CAPTCHAs had been labelled incorrectly, with confusion between, for example, l and I (lower case L and upper case i). Fixing this shortcoming brought the accuracy up to 47%.

More challenging, though, was handling the CAPTCHA submission to Outlook's web portal.

There was no CAPTCHA POST request, with the CAPTCHA instead sent as a value appended to a cookie. JavaScript was used to keylog the user as the answer to the CAPTCHA was typed.

"Instead of trying to replicate what occurred in JS, we decided to use Pyppeteer, a browsing simulation Python package, to simulate a user entering the CAPTCHA," said Tinus Green, a senior information security consultant at F-Secure.

"Doing this, the JS would automatically take care of the submission for us."
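A minimal Pyppeteer sketch of that idea follows; the portal URL and CSS selectors are hypothetical placeholders, and the real CAPTCHA22 workflow is more involved than this.

```python
# Hedged sketch of driving the portal with Pyppeteer so the page's own
# JavaScript handles the cookie/keylogging logic. Selectors and URL are
# hypothetical placeholders, not the actual OWA portal's.
import asyncio
from pyppeteer import launch

async def submit_captcha(url, captcha_text):
    browser = await launch(headless=True)
    page = await browser.newPage()
    await page.goto(url)
    # Typing character by character lets the page's keylogger script fire.
    await page.type('#captcha-input', captcha_text, {'delay': 50})
    await page.click('#submit-button')
    await browser.close()

# Example usage (URL and solved text supplied by the surrounding tooling):
# asyncio.get_event_loop().run_until_complete(submit_captcha(url, solved_text))
```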

Green added: "We could use this simulation software to solve the CAPTCHA whenever it blocked entries and once solved, we could continue with our conventional attack, hence automating the process once again."

"We have now also refactored CAPTCHA22 for a public release."

CAPTCHAs are challenge-response tests used by many websites in an attempt to distinguish between genuine requests to sign-up to or access web services by a human user and automated requests by bots.

Spammers, for example, attempt to circumvent CAPTCHAs in order to create accounts they can later abuse to distribute junk mail.

CAPTCHAs are something of a magnet for cybercriminals and security researchers, with web admins struggling to stay one step ahead.

Late last year, for example, PortSwigger Web Security uncovered a security weakness in Google's reCAPTCHA that allowed it to be partially bypassed by using Turbo Intruder, a research-focused Burp Suite extension, to trigger a race condition.

Soon after, a team of academics from the University of Maryland was able to circumvent Google's reCAPTCHA v2's anti-bot mechanism using a Python-based program called UnCaptcha, which could solve its audio challenges.

Green said: "There is a catch-22 between creating a CAPTCHA that is user friendly ('grandma safe', as we call it) and sufficiently complex to prevent solving through computers. At this point it seems as if the balance does not exist."

Web admins shouldn't, he says, give away half the required information through username enumeration, and users should be required to set strong passphrases conforming to NIST standards.

And, he adds: "Accept that accounts can be breached, and therefore implement MFA [multi-factor authentication] as an additional barrier."


Read the original post:
Machine learning techniques applied to crack CAPTCHAs - The Daily Swig

Artificial Intelligence (AI) in Supply Chain Market is projected to reach $21.8 billion by 2027, Growing at a CAGR of 45.3% from 2019- Meticulous…

London, June 03, 2020 (GLOBE NEWSWIRE) -- Artificial intelligence has emerged as one of the most potent technologies of the past few years and is transforming the landscape of almost all industry verticals. Although enterprise applications based on AI and machine learning (ML) are still in the nascent stages of development, they are gradually beginning to drive businesses' innovation strategies.

In the supply chain and logistics industry, artificial intelligence is gaining rapid traction among industry stakeholders. Players operating in the supply chain and logistics industry are increasingly realizing the potential of AI to solve the complexities of running a global logistics network. Adoption of artificial intelligence in the supply chain is ushering in a new era of industrial transformation, allowing companies to track their operations, enhance supply chain management productivity, augment business strategies, and engage with customers in the digital world.

The artificial intelligence in supply chain market is expected to grow at a CAGR of 45.3% from 2019 to 2027 to reach $21.8 billion by 2027. The growth in this market is mainly driven by rising awareness of artificial intelligence and big data & analytics and the widening implementation of computer vision in both autonomous & semi-autonomous applications. In addition, consistent technological advancements in the supply chain industry, rising demand for AI-based business automation solutions, and an evolving supply chain complementing growing industrial automation are further offering opportunities for vendors providing AI solutions in the supply chain industry. However, high deployment and operating costs and a lack of infrastructure hinder the growth of the artificial intelligence in supply chain market.

In this study, the global AI in supply chain market is segmented on the basis of component, application, technology, end user, and geography.

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=5064

Based on component, AI in supply chain market is broadly segmented into hardware, software, and services. The software segment commanded the largest share of the overall AI in supply chain market in 2019. This can be attributed to the increasing demand for AI-based platforms and solutions, as they offer supply chain visibility through software, which include inventory control, warehouse management, order procurement, and reverse logistics & tracking.

Based on technology, AI in supply chain market is broadly segmented into machine learning, computer vision, natural language processing, and context-aware computing. In 2019, the machine learning segment commanded the largest share of the overall AI in supply chain market. This growth can be attributed to the growing demand for AI-based intelligent solutions; increasing government initiatives; and the ability of AI solutions to efficiently handle and analyze big data and quickly scan, parse, and react to anomalies.

Based on application, AI in supply chain market is broadly segmented into supply chain planning, warehouse management, fleet management, virtual assistant, risk management, inventory management, and planning & logistics. In 2019, the supply chain planning segment commanded the largest share of the overall AI in supply chain market. The growth of this segment can be attributed to the increasing demand for enhancing factory scheduling & production planning and the evolving agility and optimization of supply chain decision-making. In addition, digitizing existing processes and workflows to reinvent the supply chain planning model is also contributing to the growth of this segment.

Based on end user, artificial intelligence in supply chain market is broadly segmented into manufacturing, food & beverage, healthcare, automotive, aerospace, retail, and consumer packaged goods sectors. The retail sector commanded the largest share of the overall AI in supply chain market in 2019. This can be attributed to the increase in demand for consumer retail products.

Click here to get the short-term and long-term impact of COVID-19 on this Market.

Please visit: https://www.meticulousresearch.com/product/artificial-intelligence-ai-in-supply-chain-market-5064/

Based on geography, the global artificial intelligence in supply chain market is categorized into five major geographies, namely North America, Europe, Asia Pacific, Latin America, and the Middle East & Africa. In 2019, North America commanded the largest share of the global artificial intelligence in supply chain market, followed by Europe, Asia-Pacific, Latin America, and the Middle East & Africa. The large share of the North American region is attributed to the presence of developed economies focusing on enhancing existing solutions in the supply chain space and the existence of major players in this market, along with a high willingness to adopt advanced technologies.

On the other hand, the Asia-Pacific region is projected to grow at the fastest CAGR during the forecast period. The high growth rate is attributed to rapidly developing economies in the region; the presence of a young and tech-savvy population; the growing proliferation of the internet of things (IoT); rising disposable income; increasing acceptance of modern technologies across several industries, including automotive, manufacturing, and retail; and the broadening implementation of computer vision technology in numerous applications. Furthermore, the growing adoption of AI-based solutions and services in supply chain operations, increasing digitalization in the region, and improving connectivity infrastructure are also playing a significant role in the growth of this market in the region.

The global AI in supply chain market is fragmented in nature and is characterized by the presence of several companies competing for market share. Some of the leading companies in the artificial intelligence in supply chain market are from a core technology background. These include IBM Corporation (U.S.), Microsoft Corporation (U.S.), Google LLC (U.S.), and Amazon.com, Inc. (U.S.). These companies are leading the market owing to their strong brand recognition, diverse product portfolios, strong distribution & sales networks, and strong organic & inorganic growth strategies. The other key players in the global artificial intelligence in supply chain market are Intel Corporation (U.S.), Nvidia Corporation (U.S.), Oracle Corporation (U.S.), Samsung (South Korea), LLamasoft, Inc. (U.S.), SAP SE (Germany), General Electric (U.S.), Deutsche Post DHL Group (Germany), Xilinx, Inc. (U.S.), Micron Technology, Inc. (U.S.), FedEx Corporation (U.S.), ClearMetal, Inc. (U.S.), Dassault Systèmes (France), and JDA Software Group, Inc. (U.S.), among others.

Download Free Sample Report Now @ https://www.meticulousresearch.com/download-sample-report/cp_id=5064

Amidst this crisis, Meticulous Research is continuously assessing the impact of the COVID-19 pandemic on various sub-markets and enabling global organizations to strategize for the post-COVID-19 world and sustain their growth. Let us know if you would like to assess the impact of COVID-19 on any industry here: https://www.meticulousresearch.com/custom-research.php

Related Reports:

Artificial Intelligence in Manufacturing Market by Component, Technology (ML, Computer Vision, NLP), Application (Cybersecurity, Robot, Planning), Industry (Electronics, Energy, Automotive, Metals and Machine, Food and Beverages) - Global Forecast to 2027

Automotive Artificial Intelligence (AI) Market by Component (Hardware, Software), Technology (Machine Learning, Computer Vision), Process (Signal Recognition, Image Recognition) and Application (Semi-Autonomous Driving) - Global Forecast to 2027

Artificial Intelligence in Healthcare Market by Product (Hardware, Software, Services), Technology (Machine Learning, Context-Aware Computing, NLP), Application (Drug Discovery, Precision Medicine), End User, and Geography - Global Forecast to 2025

Artificial Intelligence in Security Market by Offering (Hardware, Software, Service), Security Type (Network Security, Application Security), Technology (Machine Learning, NLP, Context Awareness), Solution, End-User, and Region - Global Forecast to 2027

Artificial Intelligence in Retail Market by Product (Chatbot, Customer Relationship Management), Application (Programmatic Advertising), Technology (Machine Learning, Natural Language Processing), Retail (E-commerce and Direct Retail) - Forecast to 2025

About Meticulous Research

The name of our company defines our services, strengths, and values. Since our inception, we have strived to research, analyze, and present critical market data with great attention to detail.

Meticulous Research was founded in 2010 and incorporated as Meticulous Market Research Pvt. Ltd. in 2013 as a private limited company under the Companies Act, 1956. Since its incorporation, with the help of its unique research methodologies, the company has become the leading provider of premium market intelligence in North America, Europe, Asia-Pacific, Latin America, and Middle East & Africa regions.

With meticulous primary and secondary research techniques, we have built strong capabilities in data collection, interpretation, and analysis, including qualitative and quantitative research, with the finest team of analysts. We design our meticulously analyzed, intelligent, and value-driven syndicated market research reports, custom studies, quick-turnaround research, and consulting solutions to address the business challenges of sustainable growth.

Read more here:
Artificial Intelligence (AI) in Supply Chain Market is projected to reach $21.8 billion by 2027, Growing at a CAGR of 45.3% from 2019- Meticulous...