What is machine learning (ML)? – Definition from WhatIs.com

Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so. Machine learning algorithms use historical data as input to predict new output values.

Recommendation engines are a common use case for machine learning. Other popular uses include fraud detection, spam filtering, malware threat detection, business process automation (BPA) and predictive maintenance.

Classical machine learning is often categorized by how an algorithm learns to become more accurate in its predictions. There are two basic approaches: supervised learning and unsupervised learning. The type of algorithm a data scientist chooses depends on the type of data they want to predict.

Supervised machine learning requires the data scientist to train the algorithm with both labeled inputs and desired outputs. Supervised learning algorithms are well suited to tasks such as classification and regression.
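To make this concrete, here is a minimal supervised-learning sketch in Python using scikit-learn (a library choice of ours; the article names no tools): a classifier is trained on labeled inputs with known outputs, then scored on examples it has not seen.

```python
# Minimal supervised learning sketch: learn from labeled examples,
# then evaluate predictions on held-out data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)          # features and known labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)  # trained on labeled inputs/outputs
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```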

Unsupervised ML algorithms do not require data to be labeled. They sift through unlabeled data to look for patterns that can be used to group data points into subsets. Unsupervised learning algorithms are well suited to tasks such as clustering, anomaly detection and dimensionality reduction.
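A matching unsupervised sketch (again scikit-learn, by assumption): k-means groups unlabeled points into clusters with no desired outputs given.

```python
# Minimal unsupervised learning sketch: discover groups in unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# two unlabeled blobs of points in 2-D
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # discovered group assignments
```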

Today, machine learning is used in a wide range of applications. Perhaps one of the most well-known examples of machine learning in action is the recommendation engine that powers Facebook's News Feed.

Facebook uses machine learning to personalize how each member's feed is delivered. If a member frequently stops to read a particular group's posts, the recommendation engine will start to show more of that group's activity earlier in the feed.

Behind the scenes, the engine is attempting to reinforce known patterns in the member's online behavior. Should the member change patterns and stop reading posts from that group in the coming weeks, the News Feed will adjust accordingly.
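To illustrate the pattern-reinforcement idea, here is a toy sketch in Python. Everything in it is hypothetical: the scoring, the decay factor and the source names are ours, not Facebook's.

```python
# Toy feed ranking: reinforce sources the member engages with,
# and let scores fade when engagement stops. Purely illustrative.
from collections import defaultdict

engagement = defaultdict(float)   # source -> engagement score

def record_read(source):
    """Reading a post from a source strengthens its score."""
    engagement[source] += 1.0

def decay():
    """Scores fade when the member stops reading a source."""
    for source in engagement:
        engagement[source] *= 0.9

def rank_feed(posts):
    """Show posts from higher-scoring sources earlier in the feed."""
    return sorted(posts, key=lambda p: engagement[p["source"]], reverse=True)

for _ in range(5):
    record_read("hiking group")   # member keeps reading this group
posts = [{"source": "news page"}, {"source": "hiking group"}]
print(rank_feed(posts)[0]["source"])   # -> "hiking group"
```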

In addition to recommendation engines, other uses for machine learning include the following:

Customer relationship management (CRM) software can use machine learning models to analyze email and prompt sales team members to respond to the most important messages first. More advanced systems can even recommend potentially effective responses.

Business intelligence (BI) and analytics vendors use machine learning in their software to identify potentially important data points, patterns of data points and anomalies.

Human resource information systems (HRIS) can use machine learning models to filter through applications and identify the best candidates for an open position.

In self-driving cars, machine learning algorithms can even make it possible for a semi-autonomous vehicle to recognize a partially visible object and alert the driver.

Virtual assistants typically combine supervised and unsupervised machine learning models to interpret natural speech and supply context.

The process of choosing the right machine learning model to solve a problem can be time-consuming if not approached strategically.

Step 1: Align the problem with potential data inputs that should be considered for the solution. This step requires help from data scientists and experts who have a deep understanding of the problem.

Step 2: Collect data, format it and label it if necessary. This step is typically led by data scientists, with help from data wranglers.
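A sketch of what step 2 might look like with pandas (a library choice of ours; the file name, columns and labeling rule below are all hypothetical):

```python
# Collect, format and label data. Every specific here is hypothetical.
import pandas as pd

df = pd.read_csv("transactions.csv")           # collect
df = df.dropna(subset=["amount", "country"])   # format: drop incomplete rows
df["amount"] = df["amount"].astype(float)

# label if necessary: a simple hand-written rule marks the examples
df["is_fraud"] = (df["amount"] > 10_000).astype(int)
df.to_csv("transactions_labeled.csv", index=False)
```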

Step 3: Choose which algorithm(s) to use and test them to see how well they perform. This step is usually carried out by data scientists.
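Step 3 might look like the following sketch, which compares a few candidate algorithms with cross-validation (scikit-learn by assumption):

```python
# Try several candidate algorithms and compare their cross-validated accuracy.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)   # 5-fold accuracy
    print(f"{name}: {scores.mean():.3f}")
```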

Step 4: Continue to fine-tune outputs until they reach an acceptable level of accuracy. This step is usually carried out by data scientists with feedback from experts who have a deep understanding of the problem.
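For step 4, one common fine-tuning technique is a hyperparameter grid search; the sketch below (scikit-learn by assumption) keeps the settings that score best under cross-validation.

```python
# Fine-tune by searching over hyperparameters and keeping the best settings.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 5, None], "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, f"{grid.best_score_:.3f}")
```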

Explaining how a specific ML model works can be challenging when the model is complex. In some vertical industries, data scientists must use simple machine learning models because it's important for the business to explain how every decision was made. This is especially true in industries with heavy compliance burdens, such as banking and insurance.

Complex models can produce accurate predictions, but explaining to a layperson how an output was determined can be difficult.
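As a sketch of why simple models are easier to defend, the snippet below (scikit-learn by assumption; the article names no tools) trains a shallow decision tree and prints its decision rules, so every prediction can be traced to an explicit, auditable path.

```python
# A shallow decision tree is a simple, explainable model: its decisions
# can be read out as human-readable if/then rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Every prediction follows an explicit, auditable path through these rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```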

While machine learning algorithms have been around for decades, they've attained new popularity as artificial intelligence has grown in prominence. Deep learning models, in particular, power today's most advanced AI applications.

Machine learning platforms are among enterprise technology's most competitive realms, with major vendors, including Amazon, Google, Microsoft and IBM, racing to sign customers up for platform services that cover the spectrum of machine learning activities: data collection, data preparation, data classification, model building, training and application deployment.

As machine learning continues to increase in importance to business operations and AI becomes ever more practical in enterprise settings, the machine learning platform wars will only intensify.

Continued research into deep learning and AI is increasingly focused on developing more general applications. Today's AI models require extensive training in order to produce an algorithm that is highly optimized to perform one task. But some researchers are exploring ways to make models more flexible and are seeking techniques that allow a machine to apply context learned from one task to future, different tasks.

Machine learning builds on centuries of advances in mathematics and computing:

1642 - Blaise Pascal invents a mechanical machine that can add, subtract, multiply and divide.

1679 - Gottfried Wilhelm Leibniz devises the system of binary code.

1834 - Charles Babbage conceives the idea for a general-purpose device that could be programmed with punched cards.

1842 - Ada Lovelace describes a sequence of operations for solving mathematical problems using Charles Babbage's theoretical punch-card machine and becomes the first programmer.

1847 - George Boole creates Boolean logic, a form of algebra in which all values can be reduced to the binary values of true or false.

1936 - English logician and cryptanalyst Alan Turing proposes a Universal Machine that could decipher and execute a set of instructions. His published proof is considered the basis of computer science.

1952 - Arthur Samuel creates a program to help an IBM computer get better at checkers the more it plays.

1959 - MADALINE becomes the first artificial neural network applied to a real-world problem: removing echoes from phone lines.

1985 - Terry Sejnowski and Charles Rosenberg's artificial neural network teaches itself to correctly pronounce 20,000 words in one week.

1997 - IBM's Deep Blue beats chess grandmaster Garry Kasparov.

1999 - A CAD prototype intelligent workstation reviews 22,000 mammograms and detects cancer 52% more accurately than radiologists did.

2006 - Computer scientist Geoffrey Hinton coins the term deep learning to describe neural network research.

2012 - An unsupervised neural network created by Google learns to recognize cats in YouTube videos with 74.8% accuracy.

2014 - A chatbot passes the Turing Test by convincing 33% of human judges that it was a Ukrainian teen named Eugene Goostman.

2016 - Google DeepMind's AlphaGo defeats world champion Lee Sedol at Go, often considered the most difficult board game in the world.

2016 - LipNet, DeepMind's artificial intelligence system, identifies lip-read words in video with an accuracy of 93.4%.

2019 - Amazon controls 70% of the U.S. market for virtual assistants.
