In the summer of 1955, while planning a now famous workshop at Dartmouth College, John McCarthy coined the term "artificial intelligence" to describe a new field of computer science. Rather than writing programs that tell a computer how to carry out a specific task, McCarthy pledged that he and his colleagues would instead pursue algorithms that could teach themselves how to do so. The goal was to create computers that could observe the world and then make decisions based on those observations: to demonstrate, that is, an innate intelligence.
The question was how to achieve that goal. Early efforts focused primarily on what's known as symbolic AI, which tried to teach computers how to reason abstractly. But today the dominant approach by far is machine learning, which relies on statistics instead. Although the approach dates back to the 1950s (one of the attendees at Dartmouth, Arthur Samuel, was the first to describe his work as "machine learning"), it wasn't until the past few decades that computers had enough storage and processing power for the approach to work well. The rise of cloud computing and customized chips has powered breakthrough after breakthrough, with research centers like OpenAI and DeepMind announcing stunning new advances seemingly every week.
The extraordinary success of machine learning has made it the default method of choice for AI researchers and experts. Indeed, machine learning is now so popular that it has effectively become synonymous with artificial intelligence itself. As a result, it's not possible to tease out the implications of AI without understanding how machine learning works, as well as how it doesn't.
The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy: each of these tasks demonstrates intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don't always realize that that's what we're doing.
Back in the 1950s, though, McCarthy and his colleagues did realize it. And they understood something else too: computers should be very good at computing probabilities. Transistors had only just been invented, and had yet to fully supplant vacuum tube technology. But it was clear even then that, with enough data, digital computers would be ideal for estimating a given probability. Unfortunately for the first AI researchers, their timing was a bit off. But their intuition was spot on, and much of what we now know as AI is owed to it. When Facebook recognizes your face in a photo, or Amazon Echo understands your question, they're relying on an insight that is over sixty years old.
The machine learning algorithm that Facebook, Google, and others all use is something called a deep neural network. Building on the prior work of Warren McCulloch and Walter Pitts, Frank Rosenblatt coded one of the first working neural networks in the late 1950s. Although today's neural networks are a bit more complex, the main idea is still the same: the best way to estimate a given probability is to break the problem down into discrete, bite-sized chunks of information, or what McCulloch and Pitts termed a "neuron." Their hunch was that if you linked a bunch of neurons together in the right way, loosely akin to how neurons are linked in the brain, then you should be able to build models that can learn a variety of tasks.
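To make the idea concrete, here is a minimal sketch of a single artificial neuron in Python. The inputs, weights, and bias are invented for illustration; in a real network they would be learned from data.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # Weigh each piece of evidence, sum it, and squash the total into
    # a probability-like value between 0 and 1 (a sigmoid activation).
    z = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Invented numbers: three input signals and the weights a trained
# network might have assigned them.
x = np.array([0.5, 0.1, 0.9])
w = np.array([0.8, -0.4, 0.3])
print(neuron(x, w, bias=0.1))  # ~0.67
```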
To get a feel for how neural networks work, imagine you wanted to build an algorithm to detect whether an image contained a human face. A basic deep neural network would have several layers of thousands of neurons each. In the first layer, each neuron might learn to look for one basic shape, like a curve or a line. In the second layer, each neuron would look at the first layer, and learn to see whether the lines and curves it detects ever make up more advanced shapes, like a corner or a circle. In the third layer, neurons would look for even more advanced patterns, like a dark circle inside a white circle, as happens in the human eye. In the final layer, each neuron would learn to look for still more advanced shapes, such as two eyes and a nose. Based on what the neurons in the final layer say, the algorithm will then estimate how likely it is that an image contains a face.
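As a rough sketch of that layered structure, the toy network below passes a vector of pixel values through several layers in sequence. The layer sizes are invented, and the weights are random and untrained, so the output is meaningless; the point is only the shape of the computation.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)

# Invented layer sizes: 1,024 pixels in, then layers that could learn
# lines, shapes, and face parts, down to one face/no-face output.
sizes = [1024, 256, 64, 16, 1]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes, sizes[1:])]

def forward(pixels):
    activation = pixels
    for w in weights[:-1]:
        activation = relu(activation @ w)  # each layer reads the one below
    z = activation @ weights[-1]
    return 1.0 / (1.0 + np.exp(-z))        # estimated probability of a face

print(forward(rng.random(1024)))  # arbitrary: these weights are untrained
```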
The magic of deep learning is that the algorithm learns to do all this on its own. The only thing a researcher does is feed the algorithm a bunch of images and specify a few key parameters, like how many layers to use and how many neurons should be in each layer, and the algorithm does the rest. At each pass through the data, the algorithm makes an educated guess about what type of information each neuron should look for, and then updates each guess based on how well it works. As the algorithm does this over and over, eventually it learns what information to look for, and in what order, to best estimate, say, how likely an image is to contain a face.
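That guess-and-update cycle is, at bottom, gradient descent. The sketch below shows the loop on the simplest possible model, a single neuron trained on synthetic data; the data, learning rate, and number of passes are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 100 examples of 10 features, with a hidden pattern
# (the first two features decide the label) for the model to discover.
X = rng.normal(size=(100, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

w = np.zeros(10)
for epoch in range(500):                 # repeated passes over the data
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # current guesses
    w -= 0.1 * X.T @ (p - y) / len(y)    # nudge each weight toward better

p = 1.0 / (1.0 + np.exp(-(X @ w)))
print(((p > 0.5) == y).mean())  # accuracy should end up close to 1.0
```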
What's remarkable about deep learning is just how flexible it is. Although there are other prominent machine learning algorithms too, albeit with clunkier names like "gradient boosting machines," none are nearly so effective across nearly so many domains. With enough data, deep neural networks will almost always do the best job of estimating how likely something is. As a result, they're often the best at mimicking intelligence too.
Yet as with machine learning more generally, deep neural networks are not without limitations. To build their models, machine learning algorithms rely entirely on training data, which means both that they will reproduce the biases in that data, and that they will struggle with cases that are not found in that data. Further, machine learning algorithms can also be gamed. If an algorithm is reverse engineered, it can be deliberately tricked into thinking that, say, a stop sign is actually a person. Some of these limitations may be resolved with better data and algorithms, but others may be endemic to statistical modeling.
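To see how such trickery can work, consider a deliberately simplified sketch: a tiny linear classifier is trained on synthetic "images," then fooled by nudging every pixel slightly in the direction that most hurts its score. The data and model here are invented; real attacks on deep networks follow the same gradient-based logic at much larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Train a tiny classifier on synthetic 100-pixel "images":
# bright images (mean pixel > 0.5) are labeled 1.
X = rng.random((200, 100))
y = (X.mean(axis=1) > 0.5).astype(float)
w, b = np.zeros(100), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

def prob(img):
    return 1 / (1 + np.exp(-(img @ w + b)))

x = X[y == 1][0]                              # an image the model gets right
x_adv = np.clip(x - 0.08 * np.sign(w), 0, 1)  # small, targeted nudge

print(prob(x), prob(x_adv))  # confidence should collapse after the nudge
```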
To glimpse how the strengths and weaknesses of AI will play out in the real world, it is necessary to describe the current state of the art across a variety of intelligent tasks. Below, I look at the state of play in speech recognition, image recognition, robotics, and reasoning in general.
Ever since digital computers were invented, linguists and computer scientists have sought to use them to recognize speech and text. Known as natural language processing, or NLP, the field once focused on hardwiring syntax and grammar into code. However, over the past several decades, machine learning has largely surpassed rule-based systems, thanks to everything from support vector machines to hidden Markov models to, most recently, deep learning. Apple's Siri, Amazon's Alexa, and Google's Duplex all rely heavily on deep learning to recognize speech or text, and represent the cutting edge of the field.
The specific deep learning algorithms at play have varied somewhat. Recurrent neural networks powered many of the initial deep learning breakthroughs, while hierarchical attention networks are responsible for more recent ones. What they all have in common, though, is that the higher levels of a deep learning network effectively learn grammar and syntax on their own. In fact, when several leading researchers recently set a deep learning algorithm loose on Amazon reviews, they were surprised to learn that the algorithm had not only taught itself grammar and syntax, but a sentiment classifier too.
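The sketch below shows the recurrence that gives these networks their power: each character is folded into a running hidden state that summarizes everything read so far. The sizes and weights are invented and untrained, so the state itself is meaningless here; the point is the loop.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, vocab = 16, 27                       # invented sizes: a-z plus space

W_xh = rng.normal(0, 0.5, (vocab, hidden))   # input-to-state weights
W_hh = rng.normal(0, 0.5, (hidden, hidden))  # state-to-state weights (the loop)

def step(char, h):
    # Fold one character into the running state.
    x = np.zeros(vocab)
    x[26 if char == " " else ord(char) - ord("a")] = 1.0
    return np.tanh(x @ W_xh + h @ W_hh)

h = np.zeros(hidden)
for c in "great product":
    h = step(c, h)
print(h[:4])  # `h` now summarizes the whole phrase
```

The "sentiment neuron" finding was, in effect, that after training, a single unit of such a state had learned to track positive versus negative tone on its own.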
Yet for all the success of deep learning at speech recognition, key limitations remain. The most important is that because deep neural networks only ever build probabilistic models, they don't understand language the way humans do; they can recognize that the sequences of letters k-i-n-g and q-u-e-e-n are statistically related, but they have no innate understanding of what either word means, much less the broader concepts of royalty and gender. As a result, there is likely to be a ceiling to how intelligent speech recognition systems based on deep learning and other probabilistic models can ever be. If we ever build an AI like the one in the movie Her, which was capable of genuine human relationships, it will almost certainly take a breakthrough well beyond what a deep neural network can deliver.
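A toy example makes the point. With hand-invented vectors standing in for learned word embeddings, "king" and "queen" score as closely related while "king" and "apple" do not, yet nothing in the arithmetic knows what a king is.

```python
import numpy as np

# Invented stand-ins for learned word embeddings; real ones are derived
# from co-occurrence statistics in large text corpora.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.8]),
    "apple": np.array([-0.2, 0.3, -0.5]),
}

def similarity(a, b):
    # Cosine similarity: how closely two word vectors point.
    va, vb = vectors[a], vectors[b]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

print(similarity("king", "queen"))  # high: the words share contexts
print(similarity("king", "apple"))  # near zero: they rarely do
```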
When Rosenblatt implemented his first neural network in 1958, he set it loose on images of dogs and cats. AI researchers have been focused on tackling image recognition ever since. By necessity, much of that time was spent devising algorithms that could detect pre-specified shapes in an image, like edges and polyhedrons, using the limited processing power of early computers. Thanks to modern hardware, however, the field of computer vision is now dominated by deep learning instead. When a Tesla drives safely in autopilot mode, or when Google's new augmented-reality microscope detects cancer in real time, it's because of a deep learning algorithm.
Convolutional neural networks, or CNNs, are the variant of deep learning most responsible for recent advances in computer vision. Developed by Yann LeCun and others, CNNs don't try to understand an entire image all at once, but instead scan it in localized regions, much the way a visual cortex does. LeCun's early CNNs were used to recognize handwritten numbers, but today the most advanced CNNs, such as capsule networks, can recognize complex three-dimensional objects from multiple angles, even those not represented in training data. Meanwhile, generative adversarial networks, the algorithm behind deep fake videos, typically use CNNs not to recognize specific objects in an image, but instead to generate them.
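The core operation is easy to sketch. Below, a hand-built three-by-three filter is slid across a toy image one patch at a time; it lights up where bright meets dark, a crude edge detector. In a real CNN the filter values are learned rather than hand-set, and hundreds of filters are stacked in layers.

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the filter over the image one local patch at a time; each
    # output value reports how strongly that patch matches the filter.
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-set vertical-edge detector.
edge_filter = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

image = np.zeros((6, 6))
image[:, :3] = 1.0                      # left half bright, right half dark
print(convolve2d(image, edge_filter))   # strong response at the boundary
```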
As with speech recognition, cutting-edge image recognition algorithms are not without drawbacks. Most importantly, just as all that NLP algorithms learn are statistical relationships between words, all that computer vision algorithms learn are statistical relationships between pixels. As a result, they can be relatively brittle. A few stickers on a stop sign can be enough to prevent a deep learning model from recognizing it as such. For image recognition algorithms to reach their full potential, they'll need to become much more robust.
What makes our intelligence so powerful is not just that we can understand the world, but that we can interact with it. The same will be true for machines. Computers that can learn to recognize sights and sounds are one thing; those that can learn to identify an object as well as how to manipulate it are another altogether. Yet if image and speech recognition are difficult challenges, touch and motor control are far more so. For all their processing power, computers are still remarkably poor at something as simple as picking up a shirt.
The reason: picking up an object like a shirt isn't just one task, but several. First you need to recognize a shirt as a shirt. Then you need to estimate how heavy it is, how its mass is distributed, and how much friction its surface has. Based on those guesses, you then need to estimate where to grasp the shirt and how much force to apply at each point of your grip, a task made all the more challenging because the shirt's shape and distribution of mass will change as you lift it up. A human does all this trivially. But for a computer, the uncertainty in any of those calculations compounds across all of them, making it an exceedingly difficult task.
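The arithmetic of compounding uncertainty is easy to illustrate. With made-up numbers, suppose each of five subtasks is estimated correctly 90 percent of the time:

```python
# Illustrative numbers only: five independent 90%-reliable estimates.
subtasks = ["recognize the shirt", "estimate its mass", "estimate friction",
            "choose grasp points", "modulate grip force"]
p = 1.0
for task in subtasks:
    p *= 0.90                       # each step multiplies in its own risk
    print(f"after '{task}': {p:.2f}")
# Five 90%-reliable steps leave roughly a 59% chance of overall success.
```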
Initially, programmers tried to solve the problem by writing programs that instructed robotic arms how to carry out each task step by step. However, just as rule-based NLP can't account for all possible permutations of language, there is also no way for rule-based robotics to run through all the possible permutations of how an object might be grasped. By the 1980s, it became increasingly clear that robots would need to learn about the world on their own and develop their own intuitions about how to interact with it. Otherwise, there was no way they would be able to reliably complete basic maneuvers like identifying an object, moving toward it, and picking it up.
The current state of the art is something called deep reinforcement learning. As a crude shorthand, you can think of reinforcement learning as trial and error. If a robotic arm tries a new way of picking up an object and succeeds, it rewards itself; if it drops the object, it punishes itself. The more the arm attempts its task, the better it gets at learning good rules of thumb for how to complete it. Coupled with modern computing, deep reinforcement learning has shown enormous promise. For instance, by simulating a variety of robotic hands across thousands of servers, OpenAI recently taught a real robotic hand how to manipulate a cube marked with letters.
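Stripped to its essentials, the reward-and-update loop looks like the sketch below. The three "grip strategies" and their success rates are invented, and real deep reinforcement learning replaces the lookup table with a neural network, but the trial-and-error logic is the same.

```python
import random

random.seed(0)
# Hypothetical grip strategies and their true success rates,
# which the agent does not know and must discover by trying.
true_success = {"pinch": 0.3, "wrap": 0.8, "scoop": 0.5}
value = {a: 0.0 for a in true_success}   # the agent's estimates
alpha, epsilon = 0.1, 0.2

for trial in range(2000):
    if random.random() < epsilon:        # sometimes explore a new grip
        action = random.choice(list(value))
    else:                                # otherwise exploit the best so far
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < true_success[action] else 0.0
    value[action] += alpha * (reward - value[action])  # learn from outcome

print(value)  # "wrap" should emerge with the highest estimated value
```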
Compared with prior research, OpenAI's breakthrough is tremendously impressive. Yet it also shows the limitations of the field. The hand OpenAI built didn't actually feel the cube at all, but instead relied on a camera. For an object like a cube, which doesn't change shape and can be easily simulated in virtual environments, such an approach can work well. But ultimately, robots will need to rely on more than just eyes. Machines with the dexterity and fine motor skills of a human are still a long way off.
When Arthur Samuel coined the term machine learning, he wasn't researching image or speech recognition, nor was he working on robots. Instead, Samuel was tackling one of his favorite pastimes: checkers. Since the game had far too many potential board moves for a rule-based algorithm to encode them all, Samuel devised an algorithm that could teach itself to efficiently look several moves ahead. The algorithm was noteworthy for working at all, much less for being competitive with human players. But it also anticipated the astonishing breakthroughs of more recent algorithms like AlphaGo and AlphaGo Zero, which have surpassed all human players at Go, widely regarded as the most intellectually demanding board game in the world.
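Samuel's actual program is not reproduced here, but the skeleton of lookahead search is easy to sketch: score a position by assuming both players pick their best moves for a few turns ahead. The toy game below (take one or two stones from a pile; whoever takes the last stone wins) stands in for checkers.

```python
def moves(pile):
    # Legal moves in the toy game: take 1 or 2 stones from the pile.
    return [pile - take for take in (1, 2) if pile - take >= 0]

def evaluate(pile, maximizing):
    # Whoever takes the last stone wins, so facing an empty pile means
    # the previous player just won. Non-terminal cutoffs score as even.
    if pile == 0:
        return -1 if maximizing else 1
    return 0

def minimax(state, depth, maximizing):
    # Look `depth` moves ahead, assuming both sides play their best.
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state, maximizing)
    scores = [minimax(s, depth - 1, not maximizing) for s in options]
    return max(scores) if maximizing else min(scores)

# From a pile of 7, search 6 moves deep for the best reply.
best = max(moves(7), key=lambda s: minimax(s, 6, False))
print(f"take {7 - best} stone(s)")  # leaves the opponent a losing pile of 6
```

Samuel's program paired this kind of lookahead with a learned evaluation function for non-terminal positions, which is what let it improve with play.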
As with robotics, the best strategic AI relies on deep reinforcement learning. In fact, the algorithm that OpenAI used to power its robotic hand also formed the core of its algorithm for playing Dota 2, a multi-player video game. Although motor control and gameplay may seem very different, both involve the same process: making a sequence of moves over time, and then evaluating whether they led to success or failure. Trial and error, it turns out, is as useful for learning to reason about a game as it is for manipulating a cube.
From Samuel on, the success of computers at board games has posed a puzzle to AI optimists and pessimists alike. If a computer can beat a human at a strategic game like chess, how much can we infer about its ability to reason strategically in other environments? For a long time, the answer was: very little. After all, most board games involve a single player on each side, each with full information about the game, and a clearly preferred outcome. Yet most strategic thinking involves cases where there are multiple players on each side, most or all players have only limited information about what is happening, and the preferred outcome is not clear. For all of AlphaGo's brilliance, you'll note that Google didn't then promote it to CEO, a role that is inherently collaborative and requires a knack for making decisions with incomplete information.
Fortunately, reinforcement learning researchers have recently made progress on both of those fronts. One team outperformed human players at Texas Hold'em, a poker game where making the most of limited information is key. Meanwhile, OpenAI's Dota 2 player, which coupled reinforcement learning with what's called a Long Short-Term Memory (LSTM) algorithm, made headlines for learning how to coordinate the behavior of five separate bots so well that they were able to beat a team of professional Dota 2 players. As the algorithms improve, humans will likely have a lot to learn about optimal strategies for cooperation, especially in information-poor environments. This kind of insight would be especially valuable for commanders in military settings, who sometimes have to make decisions without comprehensive information.
Yet there's still one challenge no reinforcement learning algorithm can ever solve. Since the algorithm works only by learning from outcome data, it needs a human to define what the outcome should be. As a result, reinforcement learning is of little use in the many strategic contexts in which the outcome is not always clear. Should corporate strategy prioritize growth or sustainability? Should U.S. foreign policy prioritize security or economic development? No AI will ever be able to answer such higher-order strategic questions, because, ultimately, they are moral or political questions rather than empirical ones. The Pentagon may lean more heavily on AI in the years to come, but it won't be taking over the situation room and automating complex tradeoffs any time soon.
From autonomous cars to multiplayer games, machine learning algorithms can now approach or exceed human intelligence across a remarkable number of tasks. The breakout success of deep learning in particular has led to breathless speculation about both the imminent doom of humanity and its impending techno-liberation. Not surprisingly, all the hype has led several luminaries in the field, such as Gary Marcus and Judea Pearl, to caution that machine learning is nowhere near as intelligent as it is made out to be, and that perhaps we should defer our deepest hopes and fears about AI until it is based on more than mere statistical correlations. Even Geoffrey Hinton, a researcher at Google and one of the godfathers of modern neural networks, has suggested that deep learning alone is unlikely to deliver the level of competence many AI evangelists envision.
Where the long-term implications of AI are concerned, the key question about machine learning is this: How much of human intelligence can be approximated with statistics? If all of it can be, then machine learning may well be all we need to get to a true artificial general intelligence. But it's very unclear whether that's the case. As far back as 1969, when Marvin Minsky and Seymour Papert famously argued that neural networks had fundamental limitations, even leading experts in AI have expressed skepticism that machine learning would be enough. Modern skeptics like Marcus and Pearl are only writing the latest chapter in a much older book. And it's hard not to find their doubts at least somewhat compelling. The path forward from the deep learning of today, which can mistake a rifle for a helicopter, is by no means obvious.
Yet the debate over machine learning's long-term ceiling is to some extent beside the point. Even if all research on machine learning were to cease, the state-of-the-art algorithms of today would still have an unprecedented impact. The advances that have already been made in computer vision, speech recognition, robotics, and reasoning will be enough to dramatically reshape our world. Just as happened in the so-called Cambrian explosion, when animals simultaneously evolved the ability to see, hear, and move, the coming decade will see an explosion in applications that combine the ability to recognize what is happening in the world with the ability to move and interact with it. Those applications will transform the global economy and politics in ways we can scarcely imagine today. Policymakers need not wring their hands just yet about how intelligent machine learning may one day become. They will have their hands full responding to how intelligent it already is.