
Category Archives: Artificial Intelligence

Retailers must see artificial intelligence as more than bots – PaymentsSource

Posted: June 5, 2017 at 7:27 am

From the Amazon Echo's AI assistant, Alexa, to Sephora's Kik chatbot, businesses are using AI to enhance the user experience in a growing number of ways.

In the past year, chatbots have surged in popularity, helping retailers better connect with their customers. Beyond chatbots, AI has shown immense potential for improving and streamlining the customer experience.

Retailers can use the power of AI and predictive analytics to anticipate shoppers' needs, for example by analysing a shopper's order history and purchase frequency to identify repeat ordering patterns.

These tools help retailers estimate when a customer might place a new order, so they can send a reminder email with a personalized promotion or help the shopper set up an automated ordering option. This helps retailers build loyalty with customers, because it saves shoppers time and is tailored to their needs.
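
The simplest version of this kind of reorder prediction is just an average-interval estimate over a shopper's order history. The sketch below is illustrative only, not FuturePay's or any retailer's actual model; the function name and sample dates are made up.

```python
from datetime import date, timedelta

def predict_next_order(order_dates):
    """Estimate when a customer is likely to reorder, using the
    average gap between past orders (hypothetical helper)."""
    if len(order_dates) < 2:
        return None  # not enough history to infer a pattern
    ordered = sorted(order_dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return ordered[-1] + timedelta(days=round(avg_gap))

# A shopper who reorders roughly every month:
history = [date(2017, 1, 5), date(2017, 2, 3), date(2017, 3, 6)]
print(predict_next_order(history))  # around early April 2017: time the reminder email
```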

In the future, AI is expected to play an integral role in enabling merchants to build trust with their customers through their payment processors, by supporting reliable fraud prevention, accurate behavioral prediction and faster checkout. While AI and machine learning may not be accessible to every retailer today, the shift toward these capabilities is opening the door to partnerships and collaboration that bring AI to the wider industry.

According to FuturePay's recent report, The Big Ticket: What's Stopping Shoppers?, security is still a concern for customers shopping online. From storing payment information to entering contact details, 13% of shoppers admitted to abandoning a cart because of security concerns. AI has the potential to ease some of these worries. With a trusted partnership in place, retailers can leverage AI to identify fraud and stop it in its tracks. By using payment processors that rely on intuitive, human-like reasoning when evaluating transactions, business owners can spot inconsistencies in their customers' purchasing behavior and mitigate fraudulent transactions before it's too late.
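
Production fraud models are far more sophisticated, but the core idea of checking a new transaction against a customer's own purchasing history can be sketched in a few lines. Everything below, including the threshold, is an illustrative assumption rather than any payment processor's actual logic.

```python
import statistics

def flag_suspicious(past_amounts, new_amount, threshold=3.0):
    """Toy behavioral check: flag a transaction whose amount deviates
    sharply from this customer's usual spending (hypothetical helper)."""
    if len(past_amounts) < 5:
        return False  # too little history to judge
    mean = statistics.mean(past_amounts)
    spread = statistics.stdev(past_amounts) or 1.0
    z_score = abs(new_amount - mean) / spread
    return z_score > threshold

past = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]
print(flag_suspicious(past, 49.0))   # False: in line with past behavior
print(flag_suspicious(past, 900.0))  # True: hold for review before it's too late
```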

In addition to using past customer behavior to prevent fraud, AI chatbots are redefining the customer service experience offered by retailers. The increased popularity of mobile messaging and the demand for instant gratification have created an opportunity for brands to use chatbots as a more convenient, automated way to improve the customer experience.

A recent survey from IBM revealed that more than 65% of millennials in North America prefer interacting with bots to communicating with live customer service agents. Chatbots not only provide a channel where brands can better engage and acquire customers; the technology also introduces a unique way for retailers to deliver a personalized experience that may keep customers coming back for more.

While it's still a little early to say how AI will ultimately impact retailers, chatbots and AI are already changing how consumers connect with retailers, especially when it comes to the customer experience. To keep up with shoppers' desire for security, convenience and speed, however, all aspects of the customer experience must adapt, including payments, and AI has the potential to help streamline that process.

Denise Purtzer is vice president of business development for FuturePay.

More here:

Retailers must see artificial intelligence as more than bots - PaymentsSource


Artificial intelligence – wonderful and terrifying – will change life as we know it – CBC.ca

Posted: at 7:27 am

Sunday, June 4, 2017

"The year 2017 has arrived and we humans are still in charge. Whew!"

That reassuring proclamation came from a New Year's editorial in the Chicago Tribune.

If you haven't been paying attention to the news about artificial intelligence, and particularly its newest iteration called deep learning, then it's probably time you started. This technology is poised to completely revolutionize just about everything in our lives.

If it hasn't already.

Experts say Canadian workers could be in for some major upheaval over the next decade as increasingly intelligent software, robotics and artificial intelligence perform more sophisticated tasks in the economy. (CBC News)

Today, machines are able to "think" more like humans than most of us, even the scientists who study it, ever imagined.

They are moving into our workplaces, homes, cars, hospitals and schools, and they are making decisions for us. Big ones.

Artificial intelligence has enormous potential for good. But its galloping development has also given rise to fears of massive economic dislocation, even fears that these sentient computers might one day get so smart, we will no longer be able to control them.

To use an old-fashioned card-playing analogy, this is not a shuffle. It's a whole new deck, and one with a mind of its own.

Sunday Edition contributor Ira Basen has been exploring the frontiers of this remarkable new technology. His documentary is called "Into the Deep: The Promise and Perils of Artificial Intelligence."

Ira Basen June 2, 2017

Remember HAL?

The HAL 9000 computer was the super smart machine in charge of the Discovery One spacecraft in Stanley Kubrick's classic 1968 movie 2001: A Space Odyssey. For millions of moviegoers, it was their first look at a computer that could think and respond like a human, and it did not go well.

In one of the film's pivotal scenes, astronaut Dave Bowman tries to return from a mission outside the spacecraft, only to discover that HAL won't let him back in.

"Open the pod bay doors, please, HAL," Dave, one of astronauts, demands several times.

"I'm sorry Dave, I'm afraid I can't do that," HAL finally replies. "I know that you and Frank were planning to disconnect me, and I'm afraid that's something that I can't allow to happen."

Dave was eventually able to re-enter the spacecraft and disable HAL, but the image of a sentient computer going rogue and trying to destroy its creators has haunted many people's perceptions of artificial intelligence ever since.

For most of the past fifty years, those negative images haven't really mattered very much. Machines with the cognitive powers of HAL lay in the realm of science fiction. But not anymore. Today, artificial intelligence (AI) is the hottest thing going in the field of computer science.

Governments and industry are pouring billions of dollars into AI research. The most recent example is the Vector Institute, a new Toronto-based AI research lab announced with much fanfare in March and backed by about $170 million in funding from the Ontario and federal governments and big tech companies like Google and Uber.

The Vector Institute will focus on a particular subset of AI called "deep learning." It was pioneered by U of T professor Geoffrey Hinton, who is now the Chief Scientific Advisor at the Institute. Hinton and other deep learning researchers have been able to essentially mimic the architecture of the human brain inside a computer. They created artificial neural networks that work in much the same way as the vast networks of neurons in our brain, which, when triggered, allow us to think.

"Once your computer is pretending to be a neural net," Hinton explained in a recent interview in the Toronto office of Google Canada, where he is currently an Engineering Fellow, "you get it to be able to do a particular task by just showing it a whole lot of examples."

So if you want your computer to be able to identify a picture of a cat, you show it lots of pictures of cats. But it doesn't need to see every picture of a cat to be able to figure out what a cat looks like. This is not programming the way computers have traditionally been programmed. "What we can do," Hinton says, "is show it a lot of examples and have it just kind of get it. And that's a new way of getting computers to do things."
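
Hinton's "show it a whole lot of examples" point is easier to see in code than in prose. The toy sketch below, using scikit-learn and a made-up XOR task standing in for the cat pictures, trains a tiny neural network purely from labelled examples, with no explicit rules programmed in; it is an illustration of the idea, not the networks Hinton's group actually builds.

```python
# Instead of hand-coding rules, show a small neural network labelled
# examples and let it infer the pattern (here, XOR) on its own.
from sklearn.neural_network import MLPClassifier

examples = [[0, 0], [0, 1], [1, 0], [1, 1]]   # inputs
labels = [0, 1, 1, 0]                          # desired outputs

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=1000, random_state=1)
net.fit(examples, labels)                      # "show it a whole lot of examples"

# Once training converges, the net reproduces a pattern it was never
# explicitly programmed with -- it has "just kind of got it".
print(net.predict(examples))
```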

For people haunted by memories of HAL, or Skynet in the Terminator movies (another AI computer turned killing machine), the idea of computers being able to think for themselves, to "just kind of get it", in ways that even people like Geoffrey Hinton can't really explain, is far from reassuring.

They worry about "superintelligence": the point at which computers become more intelligent than humans and we lose control of our creations. It's this fear that has people like Elon Musk, the man behind the Tesla electric car, declaring that the "biggest existential threat" to the planet today is artificial intelligence. "With artificial intelligence," he asserts, "we are summoning the demon".

SHODAN, the malevolent artificial intelligence from System Shock 2. (Irrational Games/Electronic Arts)

People who work in AI believe these fears of superintelligence are vastly overblown. They argue we are decades away from superintelligence, and we may, in fact, never get there. And even if we do, there's no reason to think that our machines will turn against us.

Yoshua Bengio of the University of Montreal, one of the world's leading deep learning researchers, believes we should avoid projecting our own psychology onto the machines we are building.

"Our psychology is really a defensive one," he argued in a recent interview. "We are afraid of the rest of the world, so we try to defend from potential attacks." But we don't have to build that same defensive psychology into our computers. HAL was a programming error, not an inevitable consequence of artificial intelligence.

"It's not like by default an intelligent machine also has a will to survive against anything else,"Bengio concludes. "This is something that would have to be put in. So long as we don't put that in, they will be as egoless as a toaster, even though it could be much, much smarter than us.

So if we decide to build machines that have an ego and would kill rather than be killed then, well, we'll suffer from our own stupidity. But we don't have to do that."

Humans suffering from our own stupidity? When has that ever happened?

Feeling better?


Go here to read the rest:

Artificial intelligence - wonderful and terrifying - will change life as we know it - CBC.ca


Labour and Artificial Intelligence: Visions of despair, hope, and liberation – Hindustan Times

Posted: at 7:27 am

In the United States, job demographic data from censuses since the 1900s reveal a startling fact. Despite the two post-Industrial revolutions of electricity and computers, the occupations with the largest employment numbers are still jobs for drivers, retail workers, cashiers, secretaries, janitors and the like: old professions needing simple skills and mostly repetitive work. This lack of transition to newer jobs is a global phenomenon, especially in the global south. In India, for example, half of the working population works in agriculture.

One must grasp the significance of Artificial Intelligence (AI) in this context. Unlike technological upheavals of the past, AI is unique in that it can rather cheaply replace a vast spectrum of mental, creative, and intuitive human labour. With AI presiding over the mass extinction of repetitive jobs, precisely the sort employing the most workers, there is no precedent for newer jobs replacing them in large enough numbers.

There is no dearth of alarmist narratives around AI. But the danger of AI isn't that it will become hostile, or follow its instructions so literally and on such a scale that human existence itself is jeopardised. Science-fiction scenarios of rampant AI are interesting thought experiments, but already-existing AI is here, and it requires well-crafted policy.

Take driverless cars. The US Bureau of Labor Statistics lists six million professional drivers in the US as of 2016, and their jobs are in peril. Trains are easier to automate, and metros in Rome, London, and Paris, to name a few, are already transitioning. A 2016 MIT Technology Review article describes factories, warehouses, and farms developed in China that need minimal humans to operate. A US firm, WinterGreen Research Inc, projects that agricultural robots can become a $16 billion industry.

In India, Maruti Suzuki India Ltd already has one robot for every four workers at its Manesar and Gurgaon factories. McKinsey reported in 2017 that half of the Indian information technology workforce will become irrelevant in four years. Some industry watchers advocate retraining in social skills, under the prevalent but incorrect notion that machines cannot replicate human empathy and genius. AI, however, can perform creative or empathic types of labour. Caregiver robots will eventually enter nursing. The arts (including music) are subjects of AI research, with artificial creativity as a subfield. Even areas like journalism, teaching, and entertainment are not entirely immune. Sophisticated capabilities such as answering free-form questions in natural language are being actively researched, and they will dramatically change the service sector.

In medicine, auxiliary work is easy to automate, but the real challenge arrives when AI starts making better diagnoses than humans, which was demonstrated for cardiovascular diseases in an April 2017 paper. In law, the interns and junior lawyers who are the backbone of legal firms, doing tasks like discovery, can be replaced. Finally, US banking giants like BNY Mellon, BBVA, and American Express have spent hundreds of millions of dollars on AI research, and low-end banking jobs might get axed.

A 2016 study by Deloitte states that 35% of jobs in Britain are at high risk in the next two decades. A 2016 McKinsey report pegs the potential for automation in the US at 75% for food services, 40% in services, 35% in education, and 30% in administration. And unlike the first world, India lacks the robust welfare state needed to support our underpaid contractual labour when automation hits our shores.

Given that Artificial Intelligence is revolutionary, and imminent, what is to be done?

The worst policy is to do nothing. A broken labour market alongside the euphemistically named sharing economy, wherein monopolies own vast assets managed by AI and people only rent (think Uber-like services, operated via AI, not just for cars but for everything), presents a real danger of a regression to a system where only capital is needed and most of labour isn't. Futurists call this neo-feudalism. In an extreme case, much of the working population might become irrelevant to the economy, reduced to penury, and locked out of civilised sustenance.

A panacea which technocratic thought leaders are advocating is Universal Basic Income (UBI). The idea behind UBI is that we accept the unemployability of most humans and, to preserve a minimum living standard and the consumption the economy needs, streamline all welfare measures and pay a regular sum of money to everyone. This money could be, as Bill Gates suggested, obtained via the heavy taxation of automation, or it might come from public wealth, like land or oil.

The problem is that, in this trivial form, UBI does nothing to address the root cause: the control of vast productive forces by an ever-decreasing few. It gives up on most of the population, relegating them to an infantile, consumerist role, to be only fed and entertained, with no chance at social mobility. It does nothing to correct the pseudo-scarcity the market creates in what is otherwise an era of AI-led hyperproduction. This is a waste of the potential of both humanity and AI.

There is another way, however, which doesn't treat AI as a technological artefact separate from socio-political forces, but as a component of public policy. AI could be a public good, and not merely an awe-inspiring private resource that companies can dazzle us with. There's a need to challenge free-market fundamentalism, initiate international cooperation on AI policy, and start large, publicly funded, and distributed AI research, AI public works, and AI-centric education.

In the light of this, "who controls the AI?" becomes a vital question. A serious conversation is needed, especially in the third world, on how the AI-led production of the future should be managed: will it be democratically controlled, or driven by corporate shareholders? AI can conceivably and radically improve both distribution and productivity, augmenting individual and public affluence. In other words, AI need not be market driven; there is a case for conceptualising it as a public good used to realise a better redistribution, and as a critical tool for shaping and strengthening democratic institutions.

AI can upend the metaphorical gameboard and liberate labour. It is a historical opportunity.

Anupam Guha is with the University of Maryland

The views expressed are personal

View post:

Labour and Artificial Intelligence: Visions of despair, hope, and liberation - Hindustan Times


China gets smarter on artificial intelligence, Asia News & Top Stories … – The Straits Times

Posted: at 7:27 am

When Chinese Go master Ke Jie was comprehensively defeated by Google's artificial intelligence (AI) program AlphaGo on May 27 in the ancient canal town of Wuzhen, technology watchers around the world had to redraw the timeline in which AI rules the world.

The artificial mind had swept all three matches against the world's top player, a feat once thought impossible, given Go's deep complexity compared with previous AI wins, such as in chess or checkers.

The event also underscored continued Western dominance in the field of AI. AlphaGo was developed by the American search juggernaut's DeepMind unit, a British firm it acquired in 2014.

But China is today nipping at the heels of the United States, long the undisputed leader in AI technology, and may soon be poised to edge ahead as it brings products to market at a quicker pace.

One firm is using AI to analyse and approve loans more quickly and with a far lower default rate, while another that uses AI to enhance photos already has a user base of 1.3 billion, venture capitalist and former Microsoft and Google executive Lee Kai-Fu said at a commencement speech at Columbia University last month.

The quantity of papers published by Chinese researchers on branches of AI research such as deep learning - where a machine modelled after the human brain can learn by itself over time - already exceeds those by American scientists.

Chinese studies have also been cited more often than American ones, said a White House report last October that sought to draw attention to China's soaring capabilities in the field and the US' own eroding lead.

AI firms in China are also drawing unprecedented amounts of funding. While the US$2.6 billion (S$3.6 billion) they attracted last year is still a fraction of the US$17.9 billion that went to American AI firms, Shenzhen research firm ASKCI estimated that this is a 12-fold increase compared with three years ago.

Experts said three key ingredients make China's rise to the top in AI capability inevitable.

Its population of 1.3 billion provides a broad domestic market to test out new applications for AI, and also supplies the vast amounts of data that AI systems - like search engines before them - need to become more accurate, said Professor Zhang Wenqiang, director of the Robot Intelligence Lab of Fudan University.

"Firms such as Huawei and BAT (Baidu, Alibaba and Tencent) are also taking AI into uncharted territory in China, exploring possibilities in areas such as mobile payments," he told The Straits Times.

Having lost the PC and mobile Internet eras to the West, Beijing sees AI as a field it must dominate, and a key piece of the "Internet-plus" strategy that the government introduced in 2015 to use online technology to modernise its economy.

Besides putting in place fiscal policies last year to grow the sector, such as financial incentives to encourage greater use of industrial robots, Premier Li Keqiang drove home the point in March when he used the term "artificial intelligence" for the first time in his Government Work Report.

But amid the exuberance and high expectations, experts say China's AI sector may already be getting frothy.

The tussle with Silicon Valley for AI talent has seen salaries soar, while start-ups without any revenue are seeing valuations of as much as US$1 billion.

"As AI remains largely a technology tool and not a full-fledged product or platform, some companies are clearly overvalued," an analyst at Fortune Venture Capital told the China Money Network.

And even as it has made large strides in AI, China remains better at imitating and improving than at truly inventive, ground-breaking work, said Prof Zhang.

"The US remains superior in this area and in attracting AI talent from all over," he added. "China has to innovate on its education and immigration policies if it is to get to the next level."

Read more:

China gets smarter on artificial intelligence, Asia News & Top Stories ... - The Straits Times


Slack eyes artificial intelligence as it takes on Microsoft and Asian expansion – The Australian Financial Review

Posted: at 7:27 am

Noah Weiss, head of search, learning and intelligence at Slack, says the company is in a great position to take on the likes of Microsoft.

When former Google and Foursquare product specialist Noah Weiss joined workplace communication specialist Slack at the start of 2016, it was already vaunted as the world's hottest start-up, and enjoyed the kind of cool set aside for only the hottest of hot new things.

Described in some quarters as an email killer, the collaboration tool had evolved beyond being a co-worker chat tool to one that was attempting to redefine the way whole organisations and teams worked, shared information and applied their knowledge.

But the man who had helped Google define its "knowledge graph" of individuals' searches was brought in to ensure it stayed at the forefront in an era where artificial intelligence has slipped off the pages of science fiction and into the marketing brochures of almost every tech company on the planet.

Making his first visit to Australia over a year later, Weiss, Slack's head of search, learning and intelligence, tells The Australian Financial Review that the company has applied analytics and intelligence in such a way that it believes it can keep an edge in an eye-wateringly competitive field.

"A lot of people just love using Slack, because it felt like the tools that they used when they weren't at work, and we have now taken that further to the intelligent services, so that work systems feel as smart, convenient and proactive as the things you get to use on your phone away from the office," he says.

"It's kind of ironic that people are now able to do leisure more effectively than they can do work because their phone predicts what you want to do because it has all the data on you ... we have turned the unprecedented level of engagement that our users have to learn about what they do and who they do it with, so we can do interesting things to recycle it back to them and make them more effective at their jobs."

When he speaks of unprecedented levels of engagement, he refers to stats that show more than 5 million daily active users using Slack for more than two hours a day, and sending more than 70 messages per person per day.

In the same way that Google uses extensive user data to rank search results, Slack is now applying AI-like smarts when users look for information within it. Effectively, Slack is watching its users, learning how they do their jobs and working out what they want to know before they even think to ask.
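
A hypothetical illustration of that idea, not Slack's actual ranking code: score each result by its overlap with the query, then boost results from the channels this particular user works in most. All names and weights below are invented.

```python
from collections import Counter

def rank_messages(query, messages, user_channels):
    """Rank messages by keyword overlap with the query, boosted by how
    often this user interacts with each message's channel."""
    query_terms = set(query.lower().split())
    channel_weight = Counter(user_channels)  # e.g. {"#payments": 14, ...}

    def score(msg):
        overlap = len(query_terms & set(msg["text"].lower().split()))
        boost = 1 + channel_weight[msg["channel"]] / 10
        return overlap * boost

    return sorted(messages, key=score, reverse=True)

msgs = [
    {"channel": "#random",   "text": "quarterly invoice template attached"},
    {"channel": "#payments", "text": "updated invoice workflow for Q3"},
]
# A user who lives in #payments sees that channel's message ranked first.
print(rank_messages("invoice workflow", msgs, ["#payments"] * 14)[0]["channel"])
```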

This will feasibly progress to the automation of some of the purely process-driven tasks, or suggestions about how workers should be doing things better.

Weiss says there needs to be a balance between AI-driven communication and human interaction (he jokes about a recent Gmail conversation with a friend in which both came to realise that the other was using the suggested auto-replies), but says that once companies such as Slack perfect it, productivity should go through the roof.

"A lot of research into AI is is being published really openly both from the academic labs and and industry players, which is great for companies like us, which can use the public infrastructure to build these types of services as prices are dropping tremendously," he says.

"In a sense it has created a golden era for companies to create smart systems ... [which] means less people working on things that feel menial and rote, and hopefully more people getting to work on things that feel meaningful and creative and new."

Despite still being spoken of as a start-up, Slack is no small-time play. It has already raised just shy of $US540 million ($726 million) in external funding and is facing down some of the biggest companies in the world. While it is known in Australia as a competitor to Atlassian's HipChat product, it is also up against the likes of Facebook, Google and Microsoft.

Weiss says that Slack tends to view Atlassian more as a partner, through the integration of Atlassian's Jira software with Slack, and rarely comes across HipChat in a competitive conversation outside of Australia. He says Slack's main game is a head-to-head against US giant Microsoft for the future of corporate teamwork.

Late last year Microsoft seemingly went straight after Slack with the launch of Microsoft Teams, but Weiss says he is confident it is a fight Slack will win.

"Frankly I think Microsoft is by far the most credible competitor, in part because we present the biggest existential risk to Microsoft more so than even Google ... but the juxtaposition between us and Microsoft couldn't be bigger," he says.

"We are building an open platform and ecosystem, where we want everybody else to be able to build great experiences into Slack, whereas Microsoft is trying to sell a bundle of its products and keep competitors out ... We are happy to be on this side of technology where we're trying to help you have this connective tissue that pulls all of the best services together."

A practical example he uses to highlight this is a partnership with US software firm Salesforce, which enables sales executives to work with the specialist software from inside Slack. He says Microsoft's wish to force customers to use its own Salesforce competitor, Dynamics, means it will never allow integration with one of the most widely used systems in the world.

In the near term, Weiss says Slack will continue its growth in the Asia Pacific region, which accounts for 15 per cent of its global usage, with plans to open an office in Japan this year.

While the product has not yet evolved to operate in Japanese, he said the country is one of the fastest adopters of Slack globally.

"Most of the history about technology companies in Japan is being befuddled by them wondering how to get these very wealthy intelligent folks to use their services," Weiss says.

"Our experience has been the opposite as we never even tried to build it for them and they seem to love using it. So we intend to see how great it can be if we actually tryto help them use it better."

Read more:

Slack eyes artificial intelligence as it takes on Microsoft and Asian expansion - The Australian Financial Review


Timeline of artificial intelligence – Wikipedia

Posted: June 3, 2017 at 12:29 pm

Antiquity: Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora).[1]
Antiquity: Yan Shi presented King Mu of Zhou with mechanical men.[2]
Antiquity: Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it." Mosaic law prohibits the use of automatons in religion.[3]
384 BC–322 BC: Aristotle described the syllogism, a method of formal, mechanical thought.
1st century: Heron of Alexandria created mechanical men and other automatons.[4]
260: Porphyry of Tyros wrote Isagoge, which categorized knowledge and logic.[5]
~800: Geber develops the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.[6]
1206: Al-Jazari created a programmable orchestra of mechanical human beings.[7]
1275: Ramon Llull, Spanish theologian, invents the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. The method would be developed further by Gottfried Leibniz in the 17th century.[8]
~1500: Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.[9]
~1580: Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.[10]
Early 17th century: René Descartes proposed that the bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[11]
1623: Wilhelm Schickard drew a calculating clock on a letter to Kepler. This would be the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet).[12]
1641: Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".[13][14]
1642: Blaise Pascal invented the mechanical calculator,[15] the first digital calculating machine.[16]
1672: Gottfried Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division. He also invented the binary numeral system and envisioned a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.[17]
1726: Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations", whereby, using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[18] The machine is a parody of the Ars Magna, one of the inspirations of Gottfried Leibniz's mechanism.
1750: Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.[19]
1769: Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk.[20] The Turk was later shown to be a hoax, involving a human chess player.
1818: Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.[21]
1822–1859: Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.[22]
1837: The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.
1854: George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[23]
1863: Samuel Butler suggested that Darwinian evolution also applies to machines, and speculated that they will one day become conscious and eventually supplant humanity.[24]
1913: Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic.
1915: Leonardo Torres y Quevedo built a chess automaton, El Ajedrecista, and published speculation about thinking and automata.[25]
1923: Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This is the first use of the word "robot" in English.[26]
1920s and 1930s: Ludwig Wittgenstein and Rudolf Carnap lead philosophy into logical analysis of knowledge. Alonzo Church develops Lambda Calculus to investigate computability using recursive functional notation.
1931: Kurt Gödel showed that sufficiently powerful formal systems, if consistent, permit the formulation of true theorems that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is the reason why he is sometimes called the "father of theoretical computer science".
1941: Konrad Zuse built the first working program-controlled computers.[27]
1943: Warren Sturgis McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity", laying foundations for artificial neural networks.[28]
1943: Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "cybernetics". Wiener's popular book by that name was published in 1948.
1945: Game theory, which would prove invaluable in the progress of AI, was introduced with the 1944 paper Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
1945: Vannevar Bush published As We May Think (The Atlantic Monthly, July 1945), a prescient vision of the future in which computers assist humans in many activities.
1948: John von Neumann (quoted by E.T. Jaynes), in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Von Neumann was presumably alluding to the Church-Turing thesis, which states that any effective procedure can be simulated by a (generalized) computer.
1950: Alan Turing proposes the Turing Test as a measure of machine intelligence.[29]
1950: Claude Shannon published a detailed analysis of chess playing as search.
1950: Isaac Asimov published his Three Laws of Robotics.
1951: The first working AI programs were written in 1951 to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1952–1962: Arthur Samuel (IBM) wrote the first game-playing program,[30] for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[31]
1956: The first Dartmouth College summer AI conference is organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon.
1956: The name artificial intelligence is used for the first time as the topic of the second Dartmouth Conference, organized by John McCarthy.[32]
1956: The first demonstration of the Logic Theorist (LT), written by Allen Newell, J.C. Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim.
1957: The General Problem Solver (GPS) demonstrated by Newell, Shaw and Simon while at CMU.
1958: John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language.
1958: Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases.
1958: The Teddington Conference on the Mechanization of Thought Processes was held in the UK; among the papers presented were John McCarthy's Programs with Common Sense, Oliver Selfridge's Pandemonium, and Marvin Minsky's Some Methods of Heuristic Programming and Artificial Intelligence.
1959: John McCarthy and Marvin Minsky founded the MIT AI Lab.
Late 1950s, early 1960s: Margaret Masterman and colleagues at the University of Cambridge design semantic nets for machine translation.
1960s: Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960: Man-Computer Symbiosis by J.C.R. Licklider.
1961: James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.
1961: In Minds, Machines and Gödel, John Lucas[33] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
1961: Unimation's industrial robot Unimate worked on a General Motors automobile assembly line.
1963: Thomas Evans' program ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
1963: Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
1963: Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of the simple perceptrons of Rosenblatt.
1964: Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC) shows that computers can understand natural language well enough to solve algebra word problems correctly.
1964: Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
1965: J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.
1965: Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
1965: Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system.
1966: Ross Quillian (PhD dissertation, Carnegie Institute of Technology, now CMU) demonstrated semantic nets.
1966: Machine Intelligence workshop at Edinburgh, the first of an influential annual series organized by Donald Michie and others.
1966: A negative report on machine translation kills much work in natural language processing (NLP) for many years.
1967: The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) demonstrated interpreting mass spectra of organic chemical compounds. First successful knowledge-based program for scientific reasoning.
1968: Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics.
1968: Richard Greenblatt at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.
1968: Wallace and Boulton's program Snob (Comp. J. 11(2) 1968) for unsupervised classification (clustering) uses the Bayesian Minimum Message Length criterion, a mathematical realisation of Occam's razor.
1969: Stanford Research Institute (SRI): Shakey the Robot demonstrated combining animal locomotion, perception and problem solving.
1969: Roger Schank (Stanford) defined the conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
1969: Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program, and the basis of many PhD dissertations since, such as Bran Boguraev and David Carter at Cambridge.
1969: First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford.
1969: Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. This book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless, significant progress in the field continued (see below).
1969: McCarthy and Hayes started the discussion about the frame problem with their essay, "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
Early 1970s: Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.
1970: Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge.
1970: Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding.
1970: Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
1971: Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
1971: Work on the Boyer-Moore theorem prover started in Edinburgh.[34]
1972: Prolog programming language developed by Alain Colmerauer.
1972: Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
1973: The Assembly Robotics Group at the University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. (See Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.)
1973: The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.
1974: Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1975: Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
1975: Austin Tate developed the Nonlin hierarchical planning system, able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
1975: Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
1975: The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.
Mid-1970s: Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in natural language processing.
Mid-1970s: David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.
1976: Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures).
1976: Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
1978: Tom Mitchell, at Stanford, invented the concept of version spaces for describing the search space of a concept formation program.
1978: Herbert A. Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing".
1978: The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments.
1979: Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells".
1979: Jack Myers and Harry Pople at the University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge.
1979: Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming.
1979: The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
1979: BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion.
1979: Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance.
Late 1970s: Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.
1980s: Lisp machines developed and marketed. First expert system shells and commercial applications.
1980: First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford.
1981: Danny Hillis designs the Connection Machine, which utilizes parallel computing to bring new power to AI, and to computation in general. (He later founds Thinking Machines Corporation.)
1982: The Fifth Generation Computer Systems project (FGCS), an initiative by Japan's Ministry of International Trade and Industry, begun in 1982 to create a "fifth generation computer" (see history of computing hardware) which was supposed to perform much calculation utilizing massive parallelism.
1983: John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar.
1983: James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events.
Mid-1980s: Neural networks become widely used with the backpropagation algorithm (first described by Paul Werbos in 1974).
1985: The autonomous drawing program AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments).
1986: The team of Ernst Dickmanns at Bundeswehr University of Munich builds the first robot cars, driving up to 55 mph on empty streets.
1986: Barbara Grosz and Candace Sidner create the first computation model of discourse, establishing the field of research.[35]
1987: Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (cf. Doyle 1983).[36]
1987: Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; Nouvelle AI.
1987: Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc., Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by Alistair Davidson and Mary Chung, founders of the firm, with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.[37]
1989: Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network).
Early 1990s: TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.
1990s: Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
1991: The DART scheduling application deployed in the first Gulf War paid back DARPA's investment of 30 years in AI research.[38]
1993: Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second).
1993: Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years.
1993: ISX Corporation wins "DARPA contractor of the year"[39] for the Dynamic Analysis and Replanning Tool (DART), which reportedly repaid the US government's entire investment in AI research since the 1950s.[40]
1994: With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
1994: English draughts (checkers) world champion Tinsley resigned a match against the computer program Chinook. Chinook defeated the 2nd highest rated player, Lafferty. Chinook won the USA National Tournament by the widest margin ever.
1995: "No Hands Across America": a semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501 km) of the 2,849 miles (4,585 km). Throttle and brakes were controlled by a human driver.[41][42]
1995: One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars (only in a few critical situations did a safety driver take over). Active vision was used to deal with rapidly changing street scenes.
1997: The Deep Blue chess machine (IBM) defeats the (then) world chess champion, Garry Kasparov.
1997: First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators.
1997: Computer Othello program Logistello defeated the world champion Takeshi Murakami with a score of 6–0.
1998: Tiger Electronics' Furby is released, and becomes the first successful attempt at producing a type of AI to reach a domestic environment.
1998: Tim Berners-Lee published his Semantic Web Road map paper.[43]
1998: Leslie P. Kaelbling, Michael Littman, and Anthony Cassandra introduce the first method for solving POMDPs offline, jumpstarting widespread use in robotics and automated planning and scheduling.[44]
1999: Sony introduces an improved domestic robot similar to a Furby; the AIBO becomes one of the first artificially intelligent "pets" that is also autonomous.
Late 1990s: Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web.
Late 1990s: Demonstration of an Intelligent Room and Emotional Agents at MIT's AI Lab.
Late 1990s: Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.
2000: Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th-century novelty toy makers.
2000: Cynthia Breazeal at MIT publishes her dissertation on sociable machines, describing Kismet, a robot with a face that expresses emotions.
2000: The Nomad robot explores remote regions of Antarctica looking for meteorite samples.
2002: iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles.
2004: OWL Web Ontology Language W3C Recommendation (10 February 2004).
2004: DARPA introduces the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles for prize money.
2004: NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars.
2005: Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings.
2005: Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions.
2005: Blue Brain is born, a project to simulate the brain at molecular detail.[1]
2006: The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50), 14–16 July 2006.
2007: Philosophical Transactions of the Royal Society B (Biology), one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection.[45]
2007: Checkers is solved by a team of researchers at the University of Alberta.
2007: DARPA launches the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment.
2009: Google builds a self-driving car.[46]
2010: Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the Computer Vision group at Microsoft Research, Cambridge.[47][48]
2011: IBM's Watson computer defeated television game show Jeopardy! champions Rutter and Jennings.
2011: Apple's Siri, Google's Google Now and Microsoft's Cortana are smartphone apps that use natural language to answer questions, make recommendations and perform actions.
2013: Robot HRP-2, built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPA's Robotics Challenge Trials. HRP-2 scored 27 out of 32 points in 8 tasks needed in disaster response. The tasks are to drive a vehicle, walk over debris, climb a ladder, remove debris, walk through doors, cut through a wall, close valves and connect a hose.[49]
2013: NEIL, the Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images.[50]
2015: An open letter to ban the development and use of autonomous weapons is signed by Hawking, Musk, Wozniak and 3,000 researchers in AI and robotics.[51]
2015: Google DeepMind's AlphaGo defeated three-time European Go champion, 2 dan professional Fan Hui, by 5 games to 0.[52]
2016: Google DeepMind's AlphaGo defeated Lee Sedol 4–1. Lee Sedol is a 9 dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.[53] Before the match with AlphaGo, Lee Sedol was confident in predicting an easy 5–0 or 4–1 victory.[54]
2017: Google DeepMind's AlphaGo won 60–0 across rounds on two public Go websites, including 3 wins against world Go champion Ke Jie.[55]
2017: Libratus, designed by Carnegie Mellon professor Tuomas Sandholm and his grad student Noam Brown, won against four top players at no-limit Texas hold 'em, a very challenging version of poker. Unlike Go and chess, poker is a game in which some information is hidden (the cards of the other player), which makes it much harder to model.[56]

Read the rest here:

Timeline of artificial intelligence - Wikipedia

Posted in Artificial Intelligence | Comments Off on Timeline of artificial intelligence – Wikipedia

Artificial Intelligence: From The Cloud To Your Pocket – Seeking Alpha

Posted: at 12:29 pm

Artificial Intelligence ('AI') is a runaway success, and we think it is going to be one of the biggest, if not the biggest, drivers of future economic growth. There are major AI breakthroughs at a fundamental level, leading to a host of groundbreaking applications in autonomous driving, medical diagnostics, automatic translation, speech recognition and more.

Speech recognition, for instance, has accelerated markedly in the last year or so.

We're only at the beginning of these developments, which are unfolding in several overlapping stages.

We have described the development of specialist AI chips in an earlier article, where we already touched on the new phase emerging - the move of AI from the cloud to the device (usually the mobile phone).

This certainly isn't a universal movement; it mostly involves inference (applying trained algorithms to answer queries) rather than the more compute-heavy training (where the algorithms are improved through repeated iterations over massive amounts of data).

GPUs weren't designed with AI in mind, so in principle it isn't much of a stretch to assume that specialist AI chips will take performance higher, even if Nvidia is now designing new architectures like Volta at least partly with AI in mind. From Medium:

Although Pascal has performed well in deep learning, Volta is far superior because it unifies CUDA Cores and Tensor Cores. Tensor Cores are a breakthrough technology designed to speed up AI workloads. The Volta Tensor Cores can generate 12 times more throughput than Pascal, allowing the Tesla V100 to deliver 120 teraflops (a measure of GPU power) of deep learning performance... The new Volta-powered DGX-1 leapfrogs its previous version with significant advances in TFLOPS (170 to 960), CUDA cores (28,672 to 40,960), Tensor Cores (0 to 5120), NVLink vs. PCIe speed-up (5X to 10X), and deep learning training speed (1X to 3X).
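
Taking the quoted DGX-1 numbers at face value, the generational jump is easy to quantify. The snippet below simply computes the ratios from the figures in the quote (Nvidia's own marketing numbers, not independent benchmarks).

```python
# Ratios computed from the DGX-1 figures quoted above (Nvidia's own numbers).
specs = {
    "TFLOPS": (170, 960),
    "CUDA cores": (28_672, 40_960),
    "Deep learning training speed (relative)": (1, 3),
}

for name, (pascal_dgx1, volta_dgx1) in specs.items():
    print(f"{name}: {volta_dgx1 / pascal_dgx1:.1f}x")
# TFLOPS: 5.6x, CUDA cores: 1.4x, training speed: 3.0x
```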

However, while the systems-on-a-chip (SoCs) that drive mobile devices contain a GPU, these are pretty tiny compared to their desktop and server equivalents. There is room here too for adding intelligence locally (or, as the jargon has it, 'on the edge').

Advantages

Why would one want to put AI processing 'on the edge' (on the device rather than in the cloud)? There are a few reasons:

The privacy issue was best explained by SA contributor Mark Hibben:

The motivation for this is customer privacy. Currently, AI assistants such as Siri, Cortana, Google Assistant, and Alexa are all hosted in the cloud and require Internet connections to access. The simple reason for this is that AI functionality requires a lot of processing horsepower that only datacenters could provide. But this constitutes a potential privacy issue for users, since cloud-hosted AIs are most effective when they are observing the actions of the user. That way they can learn the users' needs and be more "assistive". This means that virtually every user action, including voice and text messaging, could be subject to such observation. This has prompted Apple to look for ways to host some AI functionality on the mobile device, where it can be locked behind the protection of Apple's redoubtable Secure Enclave. The barrier to this is simply the magnitude of the processing task.

Lower latency, and resilience to a lost internet connection, are crucial where life-and-death decisions have to be taken instantly, for instance in autonomous driving.

Security of devices might benefit from AI-driven behavioural malware detection, which could run more efficiently on specialist chips locally rather than via the cloud.

Specialist AI chips might also provide an energy advantage, since some AI applications already tax local resources (CPU, GPU) and/or depend on the cloud for data (a problem in scenarios where no Wi-Fi is available). We understand that this is one motivation for Apple (NASDAQ:AAPL) to develop its own AI chips.

But here are some of the challenges, very well explained by Google (NASDAQ:GOOG) (NASDAQ:GOOGL):

These low-end phones can be about 50 times slower than a good laptop - and a good laptop is already much slower than the data centers that typically run our image recognition systems. So how do we get visual translation on these phones, with no connection to the cloud, translating in real-time as the camera moves around? We needed to develop a very small neural net, and put severe limits on how much we tried to teach it - in essence, put an upper bound on the density of information it handles. The challenge here was in creating the most effective training data. Since we're generating our own training data, we put a lot of effort into including just the right data and nothing more.

One route is what Google is doing: optimizing these very small neural nets and feeding them just the right amount of data. However, if more resources were available locally on the device, these constraints would loosen. Hence the search for a mobile AI chip that is more efficient at handling these neural networks.
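
To make the "very small neural net" idea concrete, here is a minimal sketch of the kind of deliberately tiny model one might build and then shrink for on-device use. It assumes TensorFlow/TFLite purely as convenient tooling; the architecture, input size and class count are illustrative, not Google's actual translation model.

```python
# Minimal sketch: a deliberately small image-recognition net suitable for
# on-device inference, then shrunk further with post-training quantization.
# TensorFlow/TFLite is used only as a stand-in for whatever in-house tooling
# Google uses; the layer sizes and class count are illustrative.
import tensorflow as tf

def build_tiny_net(num_classes: int = 100) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 1)),          # small grayscale crops
        tf.keras.layers.Conv2D(8, 3, activation="relu"),   # few filters = few weights
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),          # avoids a huge dense layer
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_tiny_net()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Post-training quantization: weights stored in 8-bit, cutting model size
# roughly 4x and speeding up inference on mobile CPUs/DSPs.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("tiny_net.tflite", "wb") as f:
    f.write(tflite_model)
```

The low filter counts and the global-average-pooling layer exist purely to keep the parameter count, and therefore the memory and compute footprint, small enough for a phone; that is the trade-off the Google quote above is describing.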

ARM

ARM, now part of Japan's SoftBank (OTCPK:SFTBY), is adapting its architecture to produce better results for AI. One example is its DynamiQ architecture. From The Verge:

Dynamiq goes beyond offering just additional flexibility, and will also let chip makers optimize their silicon for tasks like machine learning. Companies will have the option of building AI accelerators directly into chips, helping systems manage data and memory more efficiently. These accelerators could mean that machine learning-powered software features (like Huawei's latest OS, which studies the apps users use most and allocates processing power accordingly) could be implemented more efficiently.

ARM claims that DynamiQ will deliver a 50-fold increase in "AI-related performance" over the next three to five years. That remains to be seen, but it is noteworthy that ARM is designing chips with AI in mind.

Qualcomm (NASDAQ:QCOM)

The major user of ARM designs is Qualcomm, and this company is also adding AI capabilities to its chips. It isn't adding hardware, but a machine-learning platform called Zeroth, also known as the Snapdragon Neural Processing Engine.

It's a software development kit that makes it easier to develop deep learning programs directly on mobile phones (and other devices powered by Snapdragon processors). Here is the selling point (The Verge):

This means that if companies want to build their own deep learning analytics, they won't have to rent servers to deliver their software to customers. And although running deep learning operations locally means limiting their complexity, the sort of programs you can run on your phone or any other portable device are still impressive. The real limitation will be Qualcomm's chips. The new SDK will only work with the latest Snapdragon 820 processors from the latter half of 2016, and the company isn't saying if it plans to expand its availability.
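
The NPE SDK itself is proprietary, but the general shape of running a pre-trained model locally is similar across mobile runtimes. The sketch below uses TensorFlow Lite's interpreter purely as a stand-in for an NPE-style runtime, and loads the hypothetical tiny_net.tflite file from the earlier sketch.

```python
# Illustrative on-device inference loop. TensorFlow Lite's interpreter stands
# in for an NPE-style runtime; "tiny_net.tflite" is the hypothetical model
# produced in the previous sketch.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="tiny_net.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one frame (shaped like the model's input) through the local model."""
    x = frame.astype(np.float32)[np.newaxis, ...]       # add batch dimension
    interpreter.set_tensor(input_info["index"], x)
    interpreter.invoke()                                 # runs entirely on-device
    scores = interpreter.get_tensor(output_info["index"])[0]
    return int(np.argmax(scores))

# Example: classify a random 64x64 grayscale frame (placeholder for camera input).
print(classify(np.random.rand(64, 64, 1)))
```

The point of The Verge's caveat is visible here: everything happens on the handset's own silicon, so the ceiling on model complexity is set by whatever chip the runtime targets.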

Snapdragons like the 825, the flagship 835 and some of the 600-tier chips incorporate some machine-learning capabilities. And Qualcomm isn't doing this all by itself either. From Qualcomm:

An exciting development in this field is Facebook's stepped up investment in Caffe2, the evolution of the open source Caffe framework. At this year's F8 conference, Facebook and Qualcomm Technologies announced a collaboration to support the optimization of Caffe2, Facebook's open source deep learning framework, and the Qualcomm Snapdragon neural processing engine (NPE) framework. The NPE is designed to do the heavy lifting needed to run neural networks efficiently on Snapdragon, leaving developers with more time and resources to focus on creating their innovative user experiences.

IBM (NYSE:IBM)

IBM is developing its own specialist AI chip called TrueNorth. It is a unique product that mirrors the design of neural networks. It will be like a 'brain on a phone', the size of the brain of a small rodent, packing 48 million electronic nerve cells. From Wired:

Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.

The chip won't be out for quite some time, but its main benefit is that it is exceptionally frugal with power. From Wired:

The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
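
Taking the quoted figures at face value, the gap is easy to put in perspective; the arithmetic below uses only the numbers in the quote, not independent measurements.

```python
# Back-of-the-envelope power comparison using only the figures quoted above.
truenorth_watts = 0.070          # ~70 milliwatts
intel_cpu_watts = (35, 140)      # quoted range for a standard Intel processor

low, high = (w / truenorth_watts for w in intel_cpu_watts)
print(f"Intel CPU draws roughly {low:.0f}x to {high:.0f}x more power than TrueNorth")
# -> roughly 500x to 2000x
```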

For now, it will do the less computationally heavy stuff involved in inferencing, not the training part of machine learning (feeding algorithms massive amounts of data in order to improve them). From Wired:

But the promise is that IBM's chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.

Given its minimal energy needs, IBM's TrueNorth is perhaps the prime candidate to add local intelligence to devices, even tiny ones. This could ultimately revolutionize the internet of things (IoT), which is itself still in its infancy and based on simple processors and sensors.

Adding intelligence to IoT devices and interconnecting these opens up distributed computing on a staggering scale, but speculation about its possibilities is best left for another time.

Apple

Apple is also working on an AI chip for mobile devices, the Apple Neural Engine. Little is known in detail; its purpose is to offload tasks from the CPU and GPU, saving battery and speeding up features such as face recognition, speech recognition and mixed reality.

Groq

Then there is the startup Groq, founded by some of the people who developed the Tensor Processing Unit (TPU) at Google. Unfortunately, at this stage very little is known about the company, apart from the fact that it is developing a TPU-like AI chip. Here is venture capitalist Chamath Palihapitiya (from CNBC):

There are no promotional materials or website. All that exists online are a couple SEC filings from October and December showing that the company raised $10.3 million, and an incorporation filing in the state of Delaware on Sept. 12. "We're really excited about Groq," Palihapitiya wrote in an e-mail. "It's too early to talk specifics, but we think what they're building could become a fundamental building block for the next generation of computing."

It's certainly a daring venture, as the cost of building a new chip company from scratch can be exorbitant, and the company faces well-established competitors in Google, Apple and Nvidia (NASDAQ:NVDA).

What is also unknown is whether the chip is for datacenters or smaller devices providing local AI processing.

Nvidia

Nvidia is the current leader in datacenter "AI" chips (strictly speaking, these are not AI-specific chips but GPUs, which do most of the massively parallel computing involved in training neural networks to improve the accuracy of the algorithms).

But it is building its own solution for local AI computing in the form of the Xavier SoC, which integrates a CPU, a CUDA GPU (now based on the new Volta architecture) and deep learning accelerators. It is built for the forthcoming Drive PX3 (autonomous driving).

Nvidia's Xavier will also feature its own form of TPU, which it calls a Tensor Core, built into the SoC.

The advantage for on-device computing in autonomous driving is clear - it reduces latency and the risk of loss of internet connection. Critical autonomous driving functions simply cannot rely on spotty internet connections or long latencies.

From what we understand, it's like a supercomputer in a box, but that's still too big (and, at around 20W, too power-hungry) for smartphones. Needless to say, autonomous driving is a big emerging market in its own right, and this kind of hardware tends to miniaturize over time; the Tensor Core itself will become smaller and less energy-hungry, so it might very well be applicable in other environments.

Conclusion

Before we get too excited, there are serious limitations to putting too much AI computing on small devices like smartphones. Here is Voicebot:

The third chip approach seems logical for on-device AI processing. However, few AI processes actually occur on-device today. Whether it is Amazon's Alexa or Apple's Siri, the language processing and understanding occurs in the cloud. It would be impressive if Apple could actually bring all of Siri's language understanding processing onto a mobile device, but that is unlikely in the near term. It's not just about analyzing the data, it's also about having access to information that helps you interpret and respond to requests. The cloud is well suited to these challenges.

Most AI requires massive amounts of computing power and massive amounts of data. While some of that can be shifted from the cloud to devices, especially where latency and reliable coverage are essential (autonomous driving), there are still significant limitations on what can be done locally.

However, the development of specialist AI chips for local (rather than cloud) use is only starting today, and a new and exciting market is opening up here, with big companies like Apple, Nvidia, STMicroelectronics (NYSE:STM) and IBM all at it. And the companies developing cloud AI chips, like Google and Groq, might very well crack this market too, as Google's TPU seems particularly efficient in terms of energy use.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Original post:

Artificial Intelligence: From The Cloud To Your Pocket - Seeking Alpha

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence: From The Cloud To Your Pocket – Seeking Alpha

How artificial intelligence optimizes recruitment – TNW

Posted: at 12:29 pm

While we worry about artificial intelligence driving us into unemployment, a lot of job positions remain vacant and a large percentage of applicants never hear back from employers.

This is largely due to the inefficiency of manual recruitment tools and processes, which make it extremely hard for employers to find the right candidates. Among the problems recruiters struggle with are company job boards and applicant tracking systems that fail to deliver, email threads that become unmanageable, resumes that get lost in the corporate hiring pipeline and online job posts that become cluttered with low-quality applications.

Fortunately, developments in artificial intelligence have created a huge potential to fix the problems of antiquated hiring systems and accelerate the process to make recruiters more productive.

A handful of software vendors are incorporating AI algorithms into their tools in order to automate tasks such as examining resumes, sending follow-up emails, or finding potential candidates for your company's new vacancies.

Beamery, a candidate relationship management platform, uses machine learning to enhance its clients' applicant tracking systems and build relationships with their candidates. Beamery searches across social media channels to find and parse information and fill the gaps in candidates' profiles.

The company, which provides its service to Facebook and VMware among others, uses data mining algorithms to keep track of interactions between candidates and employers to find the best candidates to engage. The service can help companies scale their recruitment efforts without the need for large teams.

Alexander Mann Solutions, a recruitment outsourcing and consultancy services provider that made early investments in AI, creates profiles of candidates by processing their resumes and extracting information that is publicly available on the web. The company uses AI to analyze the data and determine which candidates are best suited for each job role.

ThisWay Global, another recruitment platform, has tried to incorporate AI while avoiding bias, a problem that exists in both humans and machines. ThisWay focuses on gathering skills data instead of identifiable information such as gender, race and age, and it uses that information to match candidates to the employer's requirements.
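
As a rough illustration of what skills-only matching can look like (an assumption-laden sketch, not ThisWay Global's actual algorithm), candidates can be scored purely on the overlap between their skills and the job's requirements, so attributes like gender, race and age never enter the computation:

```python
# Illustrative skills-based matching (not ThisWay Global's actual algorithm):
# candidates are scored only on skill overlap with the job requirements.
def match_score(candidate_skills: set[str], required_skills: set[str]) -> float:
    """Jaccard similarity between a candidate's skills and the job's requirements."""
    if not candidate_skills and not required_skills:
        return 0.0
    overlap = candidate_skills & required_skills
    union = candidate_skills | required_skills
    return len(overlap) / len(union)

job = {"python", "sql", "etl", "airflow"}
candidates = {
    "candidate_a": {"python", "sql", "spark"},
    "candidate_b": {"java", "sql", "etl", "airflow", "python"},
}

ranked = sorted(candidates, key=lambda c: match_score(candidates[c], job), reverse=True)
print(ranked)  # candidate_b ranks first on skill overlap alone
```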

Automating mundane tasks will help recruiters perform better by freeing up their time and enabling them to engage applicants at a more personal level.

Another interesting development in the space is the advent of AI-powered assistants that help streamline the process of seeking jobs and hiring talent.

An example is Mya, a recruiting assistant that automates up to 75% of the recruiting process. On its front end, Mya provides a chatbot that applicants can communicate with through its native environment or popular messaging apps such as Facebook Messenger.

Instead of waiting for a recruiter, applicants can get immediate feedback on their applications. Mya uses natural language processing to examine candidate data and pose relevant questions to fill the gaps. Applicants in turn can query the assistant on topics such as company culture and the hiring process. Whenever Mya can't answer a question, it will ask human recruiters. The assistant constantly learns from its interactions to become more efficient at its work.

Mya subsequently processes the data to rank candidates based on several factors, including their qualifications and level of engagement.
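
A ranking of that sort can be as simple as a weighted sum over a few factors. The sketch below is hypothetical; the factor names and weights are assumptions for illustration, not Mya's actual model.

```python
# Hypothetical multi-factor candidate ranking; the factors and weights are
# illustrative assumptions, not Mya's actual model.
CANDIDATES = [
    {"name": "A", "qualification": 0.9, "engagement": 0.4},
    {"name": "B", "qualification": 0.7, "engagement": 0.9},
]
WEIGHTS = {"qualification": 0.7, "engagement": 0.3}

def score(candidate: dict) -> float:
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

for c in sorted(CANDIDATES, key=score, reverse=True):
    print(c["name"], round(score(c), 2))
# Prints B (0.76) then A (0.75): B ranks first under these weights.
```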

JobBot, another AI-powered chatbot, aims to optimize the recruitment of hourly workers, a labor market that is growing in demand and hard to manage. JobBot plugs into platforms such as Craigslist and Indeed and interviews applicants immediately after they apply. The assistant uses AI to assess and rank candidates and book interviews with staff.

Tests show that applicants who go through AI assistants are much more likely to get a response from employers.

Being more responsive to job applications will ultimately have a positive impact on a company's customer relationship management, because applicants who don't hear back from an employer are less likely to buy from it in the future.

Other assistants are focused on helping skilled workers find their next job. One example is Yodas, a bot that asks a series of questions, analyzes your skills and brings back job listings along with its own assessment of the employer. For the moment Yodas works for software engineers only, but the company plans to expand to other domains in the future.

Jobo, an HR chatbot, provides a similar service. You can provide it with your LinkedIn profile address and resume and let it search for jobs that fit your skill set and send you alerts. Alternatively, you can query Jobo for jobs in your area of expertise and apply directly through the conversational interface.

EstherBot, another interesting project, helps turn your resume into an interactive chatbot that interacts with potential employers.

Ironically, the same technology that is becoming known as a job destroyer might ease your way into your next job.

Read the original post:

How artificial intelligence optimizes recruitment - TNW

Posted in Artificial Intelligence | Comments Off on How artificial intelligence optimizes recruitment – TNW

Alexander Peysakhovich’s Theory on Artificial Intelligence – Pacific Standard

Posted: at 12:29 pm


He's a scientist in Facebook's artificial intelligence research lab, as well as a prolific scholar, having posted five papers in 2016 alone. He has a Ph.D. from Harvard University, where he won a teaching award, and has published articles in the New ...

Read the original:

Alexander Peysakhovich's Theory on Artificial Intelligence - Pacific Standard

Posted in Artificial Intelligence | Comments Off on Alexander Peysakhovich’s Theory on Artificial Intelligence – Pacific Standard

Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case – ACLU (blog)

Posted: at 12:29 pm

One of the biggest civil liberties issues raised by technology today is whether, when, and how we allow computer algorithms to make decisions that affect people's lives. We're starting to see this in particular in the criminal justice system. For the past several years the ACLU of Idaho has been involved in a fascinating case that, so far as I can tell, has received very little if any national coverage, but which raises issues that are core to the new era of big data we are entering.

The case, K.W. v. Armstrong, is a class action lawsuit brought by the ACLU representing about 4,000 Idahoans with developmental and intellectual disabilities who receive assistance from the state's Medicaid program. I spoke recently with Richard Eppink, Legal Director of the ACLU of Idaho, and he told me about the case:

It originally started because a bunch of people were contacting me and saying that the amount of assistance that they were being given each year by the state Medicaid program was being suddenly cut by 20 or 30 percent. I thought the case would be a simple matter of saying to the state, "Okay, tell us why these dollar figures dropped by so much."

What happens in this particular program is that each year you go to an assessment interview with an assessor who is a contractor with the Medicaid program, and they ask you a bunch of questions. The assessor plugs these into an Excel spreadsheet, and it comes out with this dollar figure amount, which is how much you can spend on your services that year.

But when we asked them how the dollar amounts were arrived at, the Medicaid program came back and said, "We can't tell you that, it's a trade secret."

And so that's what led to the lawsuit. We said you've got to release this, you can't just be coming up with these numbers using a secret formula. And then, within a couple of weeks of filing the case, the court agreed and told the state, yeah, you have to disclose that. In a ruling from the bench the judge said it's just a blatant due process violation to tell people you're going to reduce their health care services by $20,000 in a year for some secret reason. The judge also ruled on Medicaid Act grounds - there are requirements in the act that if you're going to reduce somebody's coverage, you have to explain why.

That was five years ago. And once we got their formula, we hired a couple of experts to dig into it and figure out what it was doing - how the whole process was working, both the assessment - the formula itself - and the data that was used to create it.

Eppink said the experts that they hired found big problems with what the state Medicaid program was doing:

There were a lot of things wrong with it. First of all, the data they used to come up with their formula for setting people's assistance limits was corrupt. They were using historical data to predict what was going to happen in the future. But they had to throw out two-thirds of the records they had before they came up with the formula, because of data entry errors and data that didn't make sense. So they were supposedly predicting what this population was going to need, but the historical data they were using was flawed, and they were only able to use a small subset of it. And bad data produces bad results.

A second thing is that the state itself had found in its own testing that there were problems - disproportionate results for different parts of the state that couldn't be explained.

And the third thing is that our experts found that there were fundamental statistical flaws in the way that the formula itself was structured.

Idaho's Medicaid bureaucracy was making arbitrary and irrational decisions with big impacts on people's lives, and fighting efforts to make it explain how it was reaching those decisions. This lack of transparency is unconscionable. Algorithms are often highly complicated, and when you marry them to human social/legal/bureaucratic systems, the complexity only skyrockets. That means public transparency is vital. The experience in Idaho only confirms this.
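
To make concrete why "bad data produces bad results", here is a purely hypothetical sketch (not Idaho's actual formula or data): if the records discarded as data-entry errors are disproportionately the high-cost ones, a budget formula fitted to what remains will systematically shortchange the people with the greatest needs.

```python
# Purely illustrative sketch (not Idaho's actual formula or data): if records
# flagged as "data errors" are mostly the high-cost ones, fitting a budget
# formula on the remaining rows underestimates high-needs budgets.
import random

random.seed(0)

# Hypothetical historical records: (needs_score, actual_annual_cost).
records = [(n % 60 + 1, 8_000 + 900 * (n % 60 + 1) + random.gauss(0, 4_000))
           for n in range(300)]

# Crude "cleaning" rule: treat any cost above $50,000 as a data-entry error
# and drop it. This selects on the outcome, truncating the costly cases.
kept = [(x, y) for x, y in records if y <= 50_000]

def fit_line(data):
    """Ordinary least squares for y = a + b*x."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    b = (sum((x - mx) * (y - my) for x, y in data)
         / sum((x - mx) ** 2 for x, _ in data))
    return my - b * mx, b

for label, data in [("full data", records), ("after 'cleaning'", kept)]:
    a, b = fit_line(data)
    print(f"{label}: predicted budget for needs_score=55 is {a + b * 55:,.0f}")
# The 'cleaned' fit predicts a noticeably lower budget for high-needs cases.
```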

I asked Eppink, if Idaho's decisionmaking system was so irrational, why did the state rely on it?

I don't actually get the sense they even knew how bad this was. It's just this bias we all have for computerized results - we don't question them. It's a cultural, maybe even biological thing, but when a computer generates something - when you have a statistician, who looks at some data, and comes up with a formula - we just trust that formula, without asking, hey wait a second, how is this actually working? So I think the state fell victim to this complacency that we have with computerized decisionmaking.

Secondly, I don't think anybody at the Medicaid program really thought about how this was working. When we took the depositions in the case I asked each person we deposed from the program to explain to me how they got from these assessment figures to this number, and everybody pointed a finger at somebody else: "I don't know that, but this other person does." So I would take a deposition from that other person, and that person pointed at somebody else, and eventually everybody was pointing around in a circle.

And so, that machine bias or complacency, combined with this idea that nobody really fully understood this - it was a lack of understanding of the process on the part of everybody; everybody assumed somebody else knew how it worked.

This, of course, is one of the time-honored horrors of bureaucracies: the fragmentation of intelligence that (as I have discussed) allows hundreds or thousands of intelligent, ethical individuals to behave in ways that are collectively stupid and/or unethical. I have written before about a fascinating paper by Danielle Citron entitled Technological Due Process, which looks at the problems and solutions that arise when translating human rules and policies into computer code. This case shows those problems in action.

So what are the solutions in this case? Eppink:

A couple years ago, after we'd done all that discovery and worked with the experts, we put it together in a summary judgment package for the judge. And last year the court held that the formula itself was so bad that it was unconstitutional - it violated due process - because it was effectively producing arbitrary results for a large number of people. And the judge ordered that the Medicaid program basically overhaul the way it was doing this. That includes regular testing, regular updating, and the use of quality data. And that's where we are now; they're in the process of doing that.

My hunch is that this kind of thing is happening a lot across the United States and across the world as people move to these computerized systems. Nobody understands them, they think that somebody else does - but in the end we trust them. Even the people in charge of these programs have this trust that these things are working.

And the unfortunate part, as we learned in this case, is that it costs a lot of money to actually test these things and make sure they're working right. It cost us probably $50,000, and I don't think that a state Medicaid program is going to be motivated to spend the money that it takes to make sure these things are working right. Or even these private companies that are running credit predictions, housing predictions, recidivism predictions - unless the cost is internalized on them through litigation, and it's understood that, hey, eventually somebody's going to have the money to test this, so it better be working.

As our technological train hurtles down the tracks, we need policymakers at the federal, state, and local level who have a good understanding of the pitfalls involved in using computers to make decisions that affect people's lives.

See the original post:

Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case - ACLU (blog)

Posted in Artificial Intelligence | Comments Off on Pitfalls of Artificial Intelligence Decisionmaking Highlighted In Idaho ACLU Case – ACLU (blog)
