To mark our 150th year, we're revisiting the Popular Science stories (both hits and misses) that helped define scientific progress, understanding, and innovation, with an added hint of modern context. Explore the Notable pages and check out all our anniversary coverage here.
Social psychologist Frank Rosenblatt had such a passion for brain mechanics that he built a computer model fashioned after a human brain's neural network, and trained it to recognize simple patterns. He called his IBM 704-based model Perceptron. A New York Times headline called it an "Embryo of Computer Designed to Read and Grow Wiser." Popular Science called Perceptrons "Machines that learn." At the time, Rosenblatt claimed it would be possible to build brains that could "reproduce themselves on an assembly line and which would be conscious of their existence." The year was 1958.
Many assailed Rosenblatt's approach to artificial intelligence as computationally impractical and hopelessly simplistic. A critical 1969 book co-written by Turing Award winner Marvin Minsky marked the onset of a period dubbed the "AI winter," when little funding was devoted to such research, a short revival in the early '80s notwithstanding.
In a 1989 Popular Science piece, "Brain-Style Computers," science and medical writer Naomi Freundlich was among the first journalists to anticipate the thaw of that long winter, which lingered into the '90s. Even before Geoffrey Hinton, considered one of the founders of modern deep learning techniques, published his seminal 1992 explainer in Scientific American, Freundlich's reporting offered one of the most comprehensive insights into what was about to unfold in AI over the next two decades.
The resurgence of more-sophisticated neural networks, wrote Freundlich, was largely due to the availability of low-cost memory, greater computer power, and more-sophisticated learning laws. Of course, the missing ingredient in 1989 was data: the vast troves of information, labeled and unlabeled, that today's deep-learning neural networks inhale to train themselves. It was the rapid expansion of the internet, starting in the late 1990s, that made big data possible and, coupled with the other ingredients noted by Freundlich, unleashed AI, nearly half a century after Rosenblatt's Perceptron debut.
I walked into the semicircular lecture hall at Columbia University and searched for a seat in the crowded tiered gallery. An excited buzz petered out to a few coughs and rustling paper as a young man wearing circular wire-rimmed glasses walked toward the lectern carrying a portable stereo tape player under his arm. Dressed in a tweed jacket and corduroys, he looked like an Ivy League student about to play us some of his favorite rock tunes. But instead, when he pushed the "on" button, a string of garbled baby talk (more specifically, baby-computer talk) came flooding out. At first unintelligible, really just bursts of sound, the child-robot voice repeated the string over and over until it became ten distinct words.
"This is a recording of a computer that taught itself to pronounce English text overnight," said Terrence Sejnowski, a biophysicist at Johns Hopkins University. A jubilant crowd broke into animated applause. Sejnowski had just demonstrated a learning computer, one of the first of a radically new kind of artificial-intelligence machine.
Called neural networks, these computers are loosely modeled after the interconnected web of neurons, or nerve cells, in the brain. They represent a dramatic change in the way scientists are thinking about artificial intelligence: a leaning toward a more literal interpretation of how the brain functions. The reason: Although some of today's computers are extremely powerful processors that can crunch numbers at phenomenal speeds, they fail at tasks a child does with ease, such as recognizing faces, learning to speak and walk, or reading printed text. According to one expert, the visual system of one human being can do more image processing than all the supercomputers in the world put together. These kinds of tasks require an enormous number of rules and instructions embodying every possible variable. Neural networks do not require this kind of programming; rather, like humans, they seem to learn by experience.
For the military, this means target-recognition systems, self-navigating tanks, and even smart missiles that chase targets. For the business world, neural networks promise handwriting- and face-recognition systems, as well as computer loan officers and bond traders. And for the manufacturing sector, quality-control vision systems and robot control are just two goals.
Interest in neural networks has grown exponentially. A recent meeting in San Diego drew 2,000 participants. More than 100 companies are working on neural networks, including several small start-ups that have begun marketing neural-network software and peripherals. Some computer giants, such as IBM, AT&T, Texas Instruments, Nippon Electric Co., and Fujitsu, are also going full speed ahead with research. And the Defense Advanced Research Projects Agency (DARPA) released a study last year recommending neural-network funding of $400 million over eight years, which would be one of the largest programs ever undertaken by the agency.
Ever since the early days of computer science, the brain has been a model for emerging machines. But compared with the brain, today's computers are little more than glorified calculators. The reason: A computer has a single processor operating on programmed instructions. Each task is divided into many tiny steps that are performed quickly, one at a time. This pipeline approach leaves computers vulnerable to a condition commonly found on California freeways: One stalled car (one unsolvable step) can back up traffic indefinitely. The brain, in contrast, is made up of billions of neurons, or nerve cells, each connected to thousands of others. A specific task enlists the activity of whole fields of neurons; the communication pathways among them lead to solutions.
The excitement over neural networks is not new, and neither are the brain makers. Warren S. McCulloch, a psychiatrist at the Universities of Illinois and Chicago, and his student Walter H. Pitts began studying neurons as logic devices in the early 1940s. They wrote an article outlining how neurons communicate with each other electrochemically: A neuron receives inputs from surrounding cells. If the sum of the inputs is positive and above a certain preset threshold, the neuron will fire. Suppose, for example, that a neuron has a threshold of two and has two connections, A and B. The neuron will be on only if both A and B are on. This is called a logical "and" operation. Another logic operation, called the "inclusive or," is achieved by setting the threshold at one: If either A or B is on, the neuron is on. If both A and B are on, the neuron is also on.
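The threshold behavior McCulloch and Pitts described can be sketched in a few lines of modern Python (our own illustration, not code from the era; the names are ours):

```python
def neuron(inputs, threshold):
    """A McCulloch-Pitts logic neuron: it fires (returns 1) when the
    sum of its inputs meets or exceeds the preset threshold."""
    return 1 if sum(inputs) >= threshold else 0

pairs = [(0, 0), (0, 1), (1, 0), (1, 1)]

# Threshold of two with connections A and B: on only if both are on ("and").
print([neuron([a, b], 2) for a, b in pairs])  # -> [0, 0, 0, 1]

# Threshold of one: on if either input, or both, is on ("inclusive or").
print([neuron([a, b], 1) for a, b in pairs])  # -> [0, 1, 1, 1]
```

The same unit computes either logic operation; only the threshold changes.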
In 1958 Cornell University psychologist Frank Rosenblatt used hundreds of these artificial neurons to develop a two-layer pattern-learning network called the perceptron. The key to Rosenblatt's system was that it learned. In the brain, learning occurs predominantly by modification of the connections between neurons. Simply put, if two neurons are active at once and they're connected, then the synapses (connections) between them will get stronger. This learning rule is called Hebb's rule, and it was the basis for learning in the perceptron. Using Hebb's rule, the network appears to learn by experience because connections that are used often are reinforced. The electronic analog of a synapse is a resistor, and in the perceptron resistors controlled the amount of current that flowed between transistor circuits.
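Hebb's rule is simple enough to state in code. A minimal sketch (ours, with an assumed learning rate): whenever the units on both ends of a connection are active together, the weight between them grows.

```python
def hebb_update(weight, pre, post, rate=0.1):
    """Hebb's rule: strengthen a connection when the pre- and
    post-synaptic units are active at the same time."""
    return weight + rate * pre * post

w = 0.0
# Four presentations; only the two where both units fire reinforce the link.
for pre, post in [(1, 1), (0, 1), (1, 1), (1, 0)]:
    w = hebb_update(w, pre, post)
print(round(w, 1))  # -> 0.2
```

Connections that are used often are reinforced; unused ones stay where they are.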
Other simple networks were also built at this time. Bernard Widrow, an electrical engineer at Stanford University, developed a machine called Adaline (for adaptive linear neurons) that could translate speech, play blackjack, and predict weather for the San Francisco area better than any weatherman. The neural network field was an active one until 1969.
In that year the Massachusetts Institute of Technology's Marvin Minsky and Seymour Papert, major forces in the rule-based AI field, wrote a book called Perceptrons that attacked the perceptron design as being too simple to be serious. The main problem: The perceptron was a two-layer system (input led directly into output) and learning was limited. "What Rosenblatt and others wanted to do basically was to solve difficult problems with a knee-jerk reflex," says Sejnowski.
The other problem was that perceptrons were limited in the logic operations they could execute, and therefore they could only solve clearly definable problems, deciding between an L and a T, for example. The reason: Perceptrons could not handle the third logic operation, called the "exclusive or." This operation requires that the logic unit turn on if either A or B is on, but not if both are.
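That limitation can be verified directly. In this sketch (ours, searching an integer grid of candidate weights), no single threshold unit reproduces the exclusive-or table, because no single line can separate the two "on" cases from the two "off" cases:

```python
# Truth table for exclusive or: on if exactly one of A, B is on.
xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def unit(a, b, wa, wb, theta):
    """A single threshold unit with weights wa, wb and threshold theta."""
    return 1 if wa * a + wb * b >= theta else 0

# Try every combination of weights and threshold on the grid; none
# matches the table, because XOR is not linearly separable.
found = any(
    all(unit(a, b, wa, wb, t) == out for (a, b), out in xor_table.items())
    for wa in range(-3, 4) for wb in range(-3, 4) for t in range(-3, 4)
)
print(found)  # -> False
```

Adding a hidden layer between input and output removes this restriction, which is exactly the direction the field later took.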
According to Tom Schwartz, a neural-network consultant in Mountain View, Calif., technology constraints limited the success of perceptrons. "The idea of a multilayer perceptron was proposed by Rosenblatt, but without a good multilayer learning law you were limited in what you could do with neural nets," he says. Minsky's book, combined with the perceptron's failure to achieve developers' expectations, squelched the neural-network boom. Computer scientists charged ahead with traditional artificial intelligence, such as expert systems.
During the "dark ages," as some call the 15 years between the publication of Minsky's Perceptrons and the recent revival of neural networks, some die-hard connectionists (neural-network adherents) prevailed. One of them was physicist John J. Hopfield, who splits his time between the California Institute of Technology and AT&T Bell Laboratories. A paper he wrote in 1982 described mathematically how neurons could act collectively to process and store information, comparing a problem's solution in a neural network with achieving the lowest energy state in physics. As an example, Hopfield demonstrated how a network could solve the traveling salesman problem (finding the shortest route through a group of cities), a problem that had long eluded conventional computers. This paper is credited with reinvigorating the neural-network field. "It took a lot of guts to publish that paper in 1982," says Schwartz. "Hopfield should be known as the fellow who brought neural nets back from the dead."
The resurgence of more-sophisticated neural networks was largely due to the availability of low-cost memory, greater computer power, and more-sophisticated learning laws. The most important of these learning laws is something called back-propagation, illustrated dramatically by Sejnowski's NetTalk, which I heard at Columbia.
With NetTalk and subsequent neural networks, a third layer, called the hidden layer, is added to the two-layer network. This hidden layer is analogous to the brain's interneurons, which map out pathways between the sensory and motor neurons. NetTalk is a neural-network simulation with 300 processing units (representing neurons) and over 10,000 connections arranged in three layers. For the demonstration I heard, the initial training input was a 500-word text of a first-grader's conversation. The output layer consisted of units that encoded the 55 possible phonemes (discrete speech sounds) in the English language. The output units can drive a digital speech synthesizer that produces sounds from a string of phonemes. When NetTalk saw the letter N (in the word "can," for example), it randomly (and erroneously) activated a set of hidden-layer units that signaled the output "ah." This output was then compared with a model (a correct letter-to-phoneme translation) to calculate the error mathematically. The learning rule, which is actually a mathematical formula, corrects this error by apportioning the blame: reducing the strengths of the connections between the hidden layer that corresponds to N and the output that corresponds to "ah." "At the beginning of NetTalk all the connection strengths are random, so the output that the network produces is random," says Sejnowski. "Very quickly as we change the weights to minimize error, the network starts picking up on the regular pattern. It distinguishes consonants and vowels, and can make finer distinctions according to particular ways of pronouncing individual letters."
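The "apportioning of blame" Sejnowski describes is the heart of back-propagation. A toy sketch (ours, vastly smaller than NetTalk: two inputs, two hidden units, one output) shows the mechanism: each step pushes the output error back through the hidden layer and nudges every connection to reduce it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Random starting weights: input -> hidden (2x2) and hidden -> output (2).
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_ho = [random.uniform(-1, 1) for _ in range(2)]

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_ih]
    output = sigmoid(sum(w * h for w, h in zip(w_ho, hidden)))
    return hidden, output

def backprop_step(x, target, rate=0.5):
    """One back-propagation step: the output error is apportioned back
    to the hidden units, and every connection is adjusted downhill."""
    global w_ho
    hidden, output = forward(x)
    # Error signal at the output (squared-error gradient times sigmoid slope).
    delta_out = (output - target) * output * (1 - output)
    # Each hidden unit's share of the blame, scaled by its own slope.
    delta_hid = [delta_out * w_ho[j] * hidden[j] * (1 - hidden[j])
                 for j in range(2)]
    w_ho = [w_ho[j] - rate * delta_out * hidden[j] for j in range(2)]
    for j in range(2):
        for i in range(2):
            w_ih[j][i] -= rate * delta_hid[j] * x[i]

x, target = [1.0, 0.0], 1.0
err_before = (forward(x)[1] - target) ** 2
for _ in range(100):
    backprop_step(x, target)
err_after = (forward(x)[1] - target) ** 2
print(err_after < err_before)  # -> True
```

As with NetTalk, the weights start random, so the first output is random; repeated corrections shrink the error step by step.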
Trained on 1,000 words, within a week NetTalk developed a 20,000-word dictionary. "The important point is that the network was not only able to memorize the training words, but it generalized. It was able to predict new words it had never seen before," says Sejnowski. "It's similar to how humans would generalize while reading Jabberwocky."
Generalizing is an important goal for neural networks. To illustrate this, Hopfield described a munition-identification problem he worked on two summers ago in Fort Monmouth, N.J. "Let's say a battalion needs to identify an unexploded munition before it can be disarmed," he says. "Unfortunately there are 50,000 different kinds of hardware it might be." A traditional computer would make the identification using a treelike decision process, says Hopfield. The first decision could be based on the length of the munition. But there's one problem: It turns out the munition's nose is buried in the sand, and obviously a soldier can't go out and measure how long it is. "Although you've got lots of information, there are always going to be pieces that you are not allowed to get. As a result you can't go through a treelike structure and make an identification."
Hopfield sees this kind of problem as approachable from a neural-network point of view: "With a neural net you could know ten out of thirty pieces of information about the munition and get an answer."
Besides generalizing, another important feature of neural networks is that they degrade gracefully. The human brain is in a constant state of degradation; one night spent drinking alcohol consumes thousands of brain cells. But because whole fields of neurons contribute to every task, the loss of a few is not noticeable. The same is true of neural networks. David Rumelhart, a psychologist and neural-network researcher at Stanford University, explains: "The behavior of the network is not determined by one little localized part, but in fact by the interactions of all the units in the network. If you delete one of the units, it's not terribly important." Deleting one of the components in a conventional computer, by contrast, will typically bring computation to a halt.
Although neural networks can be built from wires and transistors, according to Schwartz, "Ninety-nine percent of what people talk about in neural nets are really software simulations of neural nets run on conventional processors." Simulating a neural network means mathematically defining the nodes (processors) and weights (adaptive coefficients) assigned to it. "The processing that each element does is determined by a mathematical formula that defines the element's output signal as a function of whatever input signals have just arrived and the adaptive coefficients present in the local memory," explains Robert Hecht-Nielsen, president of Hecht-Nielsen Neurocomputer Corp.
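Hecht-Nielsen's description of a simulated processing element reduces to a few lines. A minimal sketch (our own naming, with a sigmoid squashing function standing in for the unspecified formula):

```python
import math

def element_output(inputs, coefficients):
    """A simulated processing element: its output is a function of the
    incoming signals and the adaptive coefficients in local memory."""
    net = sum(w * x for w, x in zip(coefficients, inputs))
    return 1.0 / (1.0 + math.exp(-net))  # sigmoid squashing function

# Two arriving input signals weighted by two adaptive coefficients.
out = element_output([1.0, 0.5], [0.8, -0.4])
print(0.0 < out < 1.0)  # -> True
```

A simulated network is just many such elements evaluated in sequence on a conventional processor; learning amounts to adjusting the coefficients.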
Some companies, such as Hecht-Nielsen Neurocomputer in San Diego, Synaptics Inc. in San Jose, Calif., and most recently Nippon Electric Co., are selling specially wired boards that link to conventional computers. The neural network is simulated on the board and then integrated via software with an IBM PC-type machine.
Other companies are providing commercial software simulations of neural networks. One of the most successful is Nestor, Inc., a Providence, R.I.-based company that developed a software package that allows users to simulate circuits on desktop computers. So far several job-specific neural networks have been developed. They include a signature-verification system; a network that reads handwritten numbers on checks; one that helps screen mortgage loans; a network that identifies abnormal heart rates; and another that can recognize 11 different aircraft, regardless of the observation angle.
Several military contractors, including Bendix Aerospace, TRW, and the University of Pennsylvania, are also going ahead with neural networks for signal processing: training networks to identify enemy vehicles by their radar or sonar patterns, for example.
Still, there are some groups concentrating on neural network chips. At Bell Laboratories a group headed by solid-state physicist Larry Jackel constructed an experimental neural-net chip that has 75,000 transistors and an array of 54 simple processors connected by a network of resistors. The chip is about the size of a dime. Also developed at Bell Labs is a chip containing 14,400 artificial neurons made of light-sensitive amorphous silicon and deposited as a thin film on glass. When a slide is projected on the film several times, the image gets stored in the network. If the network is then shown just a small part of the image, it will reconstruct the original picture.
Finally, at Synaptics, Caltech's Carver Mead is designing analog chips modeled after the human retina and cochlea.
According to Scott E. Fahlman, a senior research scientist at Carnegie Mellon University in Pittsburgh, Pa., building a chip for just one network can take two or three years. The problem is that the process of laying out all the interconnected wires requires advanced techniques. Simulating networks on digital machines allows researchers to search for the best architecture before committing to hardware.
"There are at least fifty different types of networks being explored in research or being developed for applications," says Hecht-Nielsen. "The differences are mainly in the learning laws implemented and the topology [detailed mapping] of the connections." Most of these networks are called feed-forward networks: information is passed forward in the layered network from inputs to hidden units and finally outputs.
John Hopfield is not sure this is the best architecture for neural nets. "In neurobiology there is an immense amount of feedback. You have connections coming back through the layers or interconnections within the layers. That makes the system much more powerful from a computational point of view."
That kind of criticism brings up the question of how closely neural networks need to model the brain. Fahlman says that neural-network researchers and neurobiologists are "loosely coupled." "Neurobiologists can tell me that the right number of elements to think about is tens of billions. They can tell me that the right kind of interconnection is one thousand or ten thousand to each neuron. And they can tell me that there doesn't seem to be a lot of flow backward through a neuron," he says. But unfortunately, he adds, they can't provide information about exactly what's going on in the synapse of the neuron.
Neural networks, according to the DARPA study, are a long way from achieving the connectivity of the human brain; at this point a cockroach looks like a genius. DARPA projects that in five years the electronic neurons of a neural network could approach the complexity of a bee's nervous system. That kind of complexity would allow applications like stealth-aircraft detection, battlefield surveillance, and target recognition using several sensor types. "Bees are pretty smart compared with smart weapons," commented Craig I. Fields, deputy director of research for the agency. "Bees can evade. Bees can choose routes and choose targets."
Some text has been edited to match contemporary standards and style.
From the archives: A forecast on artificial intelligence, from the 1980s and beyond - Popular Science