
Category Archives: Artificial Intelligence

Is Artificial Intelligence Over-Hyped? – MediaPost Communications

Posted: July 7, 2017 at 2:12 am

Worldwide spending on cognitive and artificial intelligence (AI) systems is predicted to increase 59.3% year-over-year to reach $12.5 billion by the end of 2017, according to an International Data Corporation (IDC) spending guide. That number is forecast to almost quadruple by 2020, when spending on AI is predicted to reach more than $46 billion.

If personalization was the marketing buzzword of 2016, then 2017 is the year of artificial intelligence. As more cloud vendors tout their own AI systems, however, could AI be over-hyped?

Joe Stanhope, vice president and principal analyst at Forrester, says a cultural dissonance exists with AI, thanks to science fiction, and that many people have a preconceived notion about what artificial intelligence really is.

"A lot of people are talking a big game about AI and how it will change the world, but today it's only applied in extremely discrete ways," says Stanhope. "There's a lot of hype around it."

Stanhope says this overexposure creates a dissonance, compounded by marketers' trust issues with AI.

"Marketers have a right to be skeptical about artificial intelligence," says Stanhope, adding that it is imperative that they begin to educate themselves about AI, since it is highly complex and difficult to understand without a doctoral degree in statistics, math or engineering.

Stanhope recommends that marketers become "educated about AI techniques and algorithms to develop a functional understanding of how it works." By educating themselves, marketers can be more critical of vendors' AI-driven applications.

"AI gets thrown out quite a bit, but marketers need to get to the point where they can ask, 'what can your AI do for me now?'" says Stanhope. "You need to be able to ask, and they [vendors] need to be able to define and validate that question."

Although it may not be as exciting as changing the world, Stanhope says there are very realistic applications for AI today. "Humans have become the bottleneck in marketing," he says, and AI has the potential to make marketers and marketing better.

"AI is an efficiency play," says Stanhope, describing how it helps marketers manage data, experiment with segmentation, and take the human drudgery out of menial tasks. He recommends that marketers dip their toes into AI by applying it to one existing use case first, and then broaden the scope as new use cases become available and trust is built.

"You're not turning over the whole marketing team to a computer," says Stanhope.

Stanhope also recommends that marketers investigate whether their email service provider (ESP) offers some sort of AI function, as it is easier to evaluate an add-on solution than to find a completely new product.

View post:

Is Artificial Intelligence Over-Hyped? - MediaPost Communications

Posted in Artificial Intelligence | Comments Off on Is Artificial Intelligence Over-Hyped? – MediaPost Communications

A ‘Neurographer’ Puts the Art in Artificial Intelligence – WIRED

Posted: at 2:12 am

Claude Monet used brushes, Jackson Pollock liked a trowel, and Cartier-Bresson toted a Leica. Mario Klingemann makes art using artificial neural networks.

In the past few years this kind of software, loosely inspired by ideas from neuroscience, has enabled computers to rival humans at identifying objects in photos. Klingemann, who has worked part-time as an artist in residence at Google Cultural Institute in Paris since early 2016, is a prominent member of a new school of artists who are turning this technology inside out. He builds art-generating software by feeding photos, video, and line drawings into code borrowed from the cutting edge of machine learning research. Klingemann curates what spews out into collections of hauntingly distorted faces and figures, and abstracts. You can follow his work on a compelling Twitter feed.

"A photographer goes out into the world and frames good spots; I go inside these neural networks, which are like their own multidimensional worlds, and say, 'Tell me how it looks at this coordinate, now how about over here?'" Klingemann says. With tongue in cheek, he describes himself as a neurographer.

Klingemann's one big project for Google so far is an interactive online installation launched in November that uses image recognition to find visual connections between any two images in a giant collection covering thousands of years of art history, say, a Roman sculpture and a Frida Kahlo self-portrait. While working in secret on a sequel to that project at Google, Klingemann has been exploring the potential of neurography in public on his own time. Many of his recent creations were made with a technique trendy among machine learning researchers called generative adversarial networks, which, given the right source material, can teach themselves to fabricate strikingly realistic digital images and audio files.

Some computer science researchers are using the method to fill in missing details in patchy radio telescope images. Others are using it to generate synthetic health records, so systems can be trained to process such records without putting real patient data at risk. Klingemann has harnessed it to generate images that combine the styles of 19th century portraits and 21st century selfies, and to fabricate impressively realistic footage like this clip of 1960s French chanteuse Francoise Hardy.
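
For readers wondering what a generative adversarial network actually does, here is a minimal, hypothetical PyTorch sketch of the idea described above: a generator learns to fabricate images while a discriminator learns to tell real from fake, and each is trained against the other. The tiny networks and the random stand-in "training data" are illustrative only; this is not Klingemann's actual pipeline.

# A minimal GAN sketch (illustrative only; not Klingemann's actual setup).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: maps random noise to a flattened fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
# Discriminator: scores how "real" an image looks (as a logit).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    # 1) Teach the discriminator to separate real images from generated ones.
    fake = G(torch.randn(b, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Teach the generator to fool the discriminator.
    g_loss = bce(D(G(torch.randn(b, latent_dim))), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Stand-in "training data": random tensors in place of real artworks or photos.
for step in range(100):
    real = torch.rand(32, img_dim) * 2 - 1   # scaled to the generator's Tanh range
    train_step(real)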

Klingemann's work is in turn inspiring other artists. In a Barcelona show called My Artificial Muse earlier this month, artist Albert Barqué-Duran spent three days painting a fresco of an image Klingemann's software had generated from a stick figure modelled on John Everett Millais' famous painting Ophelia.

All of which raises the perennial question: Is this art? Klingemann says he and others using neural networks this way will have to gradually earn their place in the art world, just as video and digital artists had to do over the last several decades. "These new forms always have a hard time being accepted by the establishment," he says.

Right now Klingemann has to work hard to find the training data that will cause his neural networks to produce interesting results. He's built himself a Tinder-style interface to quickly work through piles of newly generated neurographs and find the few that strike him as any good. "I produce a thousand images and maybe two or three are great, 50 are promising, and the rest are just ugly or repetitive," he says.

As you may have noticed, the images he does select typically come with more than a little of the uncanny about them; Klingemann has lost count of the times he's been told the faces and figures his code generates are reminiscent of Francis Bacon's famously grotesque and disturbing work.

The comparison is apt. It's also evidence of how far artificial neural networks are from really understanding images or art, not that computers have warped minds. "I'm doing creepy right now because I can't do non-creepy, I wish I could," Klingemann says. "In two or three years, the creepiness will go away, which might make it more creepy because we won't be able to distinguish it from a photo or painted artwork." The uncanniest AI artist of all might be the one whose raw output doesn't look artificial.

Originally posted here:

A 'Neurographer' Puts the Art in Artificial Intelligence - WIRED

Posted in Artificial Intelligence | Comments Off on A ‘Neurographer’ Puts the Art in Artificial Intelligence – WIRED

What Does Baidu Have In Its Artificial Intelligence Pipeline? – Barron’s

Posted: at 2:12 am


As Barron's reports, Baidu COO Qi Lu said Baidu will go all in on AI and continues to emphasize AI (artificial intelligence) as the key strategic focus given the window of opportunity for Baidu. Its AI strategy will be empowered by DuerOS (conversational AI ...

Related coverage: "China can seize opportunity to lead global AI development, Baidu executives say" (CNBC); "Baidu embraces artificial intelligence, set to build an open ecosystem" (Global Times).

The rest is here:

What Does Baidu Have In Its Artificial Intelligence Pipeline? - Barron's

Posted in Artificial Intelligence | Comments Off on What Does Baidu Have In Its Artificial Intelligence Pipeline? – Barron’s

Google’s DeepMind Turns to Canada for Artificial Intelligence Boost – Fortune

Posted: July 5, 2017 at 11:12 pm

Google's high-profile artificial intelligence unit has a new Canadian outpost.

DeepMind, which Google bought in 2014 for roughly $650 million, said Wednesday that it would open a research center in Edmonton, Canada. The new research center, which will work closely with the University of Alberta, is the United Kingdom-based DeepMind's first international AI research lab.

DeepMind, now a subsidiary of Google parent company Alphabet (goog), recruited three University of Alberta professors to lead the new research lab. The professors, Rich Sutton, Michael Bowling, and Patrick Pilarski, will maintain their positions at the university while working at the new research office.

Sutton, in particular, is a noted expert in a subset of AI technologies called reinforcement learning, and was an advisor to DeepMind in 2010. With reinforcement learning, computers search for the best possible way to achieve a particular goal, learning from each attempt that fails.
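
As a rough, self-contained illustration of the reinforcement learning idea described above (try actions toward a goal, and learn from every failed attempt), here is a minimal tabular Q-learning sketch in Python on a made-up five-cell corridor. The environment and numbers are invented for illustration and have nothing to do with DeepMind's actual systems.

# Tabular Q-learning on a toy 5-cell corridor: start in cell 0, reward only in cell 4.
import random

n_states = 5
actions = [-1, +1]                       # step left or step right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate

def pick_action(s):
    # Explore occasionally; otherwise act greedily, breaking ties at random.
    if random.random() < epsilon:
        return random.choice(actions)
    best = max(Q[(s, a)] for a in actions)
    return random.choice([a for a in actions if Q[(s, a)] == best])

for episode in range(300):
    s = 0
    for _ in range(100):                 # cap the episode length
        a = pick_action(s)
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Learn from the outcome of this attempt, successful or not.
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s_next, b)] for b in actions) - Q[(s, a)])
        s = s_next
        if s == n_states - 1:            # goal reached: episode over
            break

# After training, the greedy policy should be "move right" in every non-goal cell.
print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})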

DeepMind has popularized reinforcement learning in recent years through its AlphaGo program, which has beaten the world's top players at the ancient Chinese board game Go. Google has also incorporated some of the reinforcement learning techniques used by DeepMind in its data centers to discover the best calibrations that result in lower power consumption.

"DeepMind has taken this reinforcement learning approach right from the very beginning, and the University of Alberta is the world's academic leader in reinforcement learning, so it's very natural that we should work together," Sutton said in a statement. "And as a bonus, we get to do it without moving."

DeepMind has also been investigated by the United Kingdom's Information Commissioner's Office (ICO) for failing to comply with the United Kingdom's Data Protection Act as it expands its technology into the healthcare space.

Information Commissioner Elizabeth Denham said in a statement on Monday that the office discovered a "number of shortcomings" in the way DeepMind handled patient data as part of a clinical trial that used its technology to detect, diagnose, and alert clinicians to kidney injuries. The ICO claims that DeepMind failed to explain to participants how it was using their medical data for the project.

DeepMind said Monday that it "underestimated the complexity" of the United Kingdom's National Health Service "and of the rules around patient data, as well as the potential fears about a well-known tech company working in health." DeepMind said it would now be more open with the public, patients, and regulators about how it uses patient data.

"We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole," DeepMind said in a statement. "We got that wrong, and we need to do better."

Original post:

Google's DeepMind Turns to Canada for Artificial Intelligence Boost - Fortune

Posted in Artificial Intelligence | Comments Off on Google’s DeepMind Turns to Canada for Artificial Intelligence Boost – Fortune

Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) – Breaking Defense

Posted: at 11:12 pm

A young Marine reaches out for a hand-launched drone.

In science fiction and real life alike, there are plenty of horror stories where humans trust artificial intelligence too much. They range from letting the fictional SkyNet control our nuclear weapons to letting Patriots shoot down friendly planes or letting Tesla Autopilot crash into a truck. At the same time, though, there's also a danger of not trusting AI enough.

As conflict on earth, in space, and in cyberspace becomes increasingly fast-paced and complex, the Pentagon's Third Offset initiative is counting on artificial intelligence to help commanders, combatants, and analysts chart a course through chaos: what we've dubbed the War Algorithm (click here for the full series). But if the software itself is too complex, too opaque, or too unpredictable for its users to understand, they'll just turn it off and do things manually. At least, they'll try: What worked for Luke Skywalker against the first Death Star probably won't work in real life. Humans can't respond to cyberattacks in microseconds or coordinate defense against a massive missile strike in real time. With Russia and China both investing in AI systems, deactivating our own AI may amount to unilateral disarmament.

Abandoning AI is not an option. Neither is abandoning human input. The challenge is to create an artificial intelligence that can earn humans' trust, an AI that seems transparent or even human.

Robert Work

Tradeoffs for Trust

"Clausewitz had a term called coup d'oeil, a great commander's intuitive grasp of opportunity and danger on the battlefield," said Robert Work, the outgoing Deputy Secretary of Defense and father of the Third Offset, at a Johns Hopkins AI conference in May. "Learning machines are going to give more and more commanders coup d'oeil."

Conversely, AI can speak the ugly truths that human subordinates may not. "There are not many captains that are going to tell a four-star COCOM (combatant commander) that idea sucks," Work said, "(but) the machine will say, you are an idiot, there is a 99 percent probability that you are going to get your ass handed to you."

Before commanders will take an AI's insights as useful, however, Work emphasized, they need to trust and understand how it works. That requires intensive operational test and evaluation, "where you convince yourself that the machines will do exactly what you expect them to, reliably and repeatedly," he said. "This goes back to trust."

Trust is so important, in fact, that two experts we heard from said they were willing to accept some tradeoffs in performance in order to get it: A less advanced and versatile AI, even a less capable one, is better than a brilliant machine you can't trust.

Army command post

The intelligence community, for instance, is keenly interested in AI that can help its analysts make sense of mind-numbing masses of data. But the AI has to help the analysts explain how it came to its conclusions, or they can never brief them to their bosses, explained Jason Matheny, director of the Intelligence Advanced Research Projects Agency. IARPA is the intelligence equivalent of DARPA, which is running its own explainable AI project. So, when IARPA held one recent contest for analysis software, Matheny told the AI conference, it barred entry to programs whose reasoning could not be explained in plain English.

"From the start of this program, (there) was a requirement that all the systems be explainable in natural language," Matheny said. "That ended up consuming about half the effort of the researchers, and they were really irritated... because it meant they couldn't in most cases use the best deep neural net approaches to solve this problem; they had to use kernel-based methods that were easier to explain."

Compared to cutting-edge but harder-to-understand software, Matheny said, "we got a 20-30 percent performance loss but these tools were actually adopted. They were used by analysts because they were explainable."
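
To make the trade-off Matheny describes concrete, here is a small, hypothetical scikit-learn comparison on synthetic data: a linear model whose weights can be read off and explained feature by feature, next to a harder-to-explain ensemble that may score somewhat higher. It is only a sketch of the explainability-versus-accuracy choice, not IARPA's actual evaluation.

# An explainable linear model vs. a harder-to-explain ensemble, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("linear model accuracy:", linear.score(X_te, y_te))
print("random forest accuracy:", forest.score(X_te, y_te))

# The linear model's reasoning can be stated feature by feature in plain language.
for i, weight in enumerate(linear.coef_[0]):
    direction = "raises" if weight > 0 else "lowers"
    print(f"feature_{i} {direction} the predicted probability (weight {weight:+.2f})")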

Transparent, predictable software isn't only important for analysts: It's also vital for pilots, said Igor Cherepinsky, director of autonomy programs at Sikorsky. Sikorsky's goal for its MATRIX automated helicopter is that the AI prove itself as reliable as flight controls for manned aircraft, failing only once in a billion flight hours. "It's the same probability as the wing falling off," Cherepinsky told me in an interview. By contrast, traditional autopilots are permitted much higher rates of failure, on the assumption a competent human pilot will take over if there's a problem.

Sikorsky's experimental unmanned UH-60 Black Hawk

To reach that higher standard, and just as important, to be able to prove they'd reached it, the Sikorsky team ruled out the latest AI techniques, just as IARPA had done, in favor of more old-fashioned deterministic programming. While deep learning AI can surprise its human users with flashes of brilliance or stupidity, deterministic software always produces the same output from a given input.

"Machine learning cannot be verified and certified," Cherepinsky said. Some algorithms (in use elsewhere) "we chose not to use even though they work on the surface; they're not certifiable, verifiable, and testable."

Sikorsky has used some deep learning algorithms in its flying laboratory, Cherepinsky said, and he's far from giving up on the technology, but he doesn't think it's ready for real-world use: "The current state of the art (is) they're not explainable yet."

Robots With A Human Face

Explainable, tested, transparent algorithms are necessary but hardly sufficient for making an artificial intelligence that people will trust. They help address our rational concerns about AI, but if humans were purely rational, we might not need AI in the first place. It's one thing to build AI that's trustworthy in general and in the abstract, quite another to get actual individual humans to trust it. The AI needs to communicate effectively with humans, which means it needs to communicate the way humans do, even think the way a human does.

"You see in artificial intelligence an increasing trend towards lifelike agents and a demand for those agents, like Siri, Cortana, and Alexa, to be more emotionally responsive, to be more nuanced in ways that are human-like," David Hanson, CEO of Hong Kong-based Hanson Robotics, told the Johns Hopkins conference. When we deal with AI and robots, he said, "intuitively, we think of them as life forms."

David Hanson with his Einstein robot.

Hanson makes AI toys like a talking Einstein doll and expressive talking heads like Han and Sophia, but he's looking far beyond such gadgets to the future of ever-more powerful AI. "How can we, if we make them intelligent, make them caring and safe?" he asked. "We need a global initiative to create benevolent super intelligence."

There's a danger here, however. It's called anthropomorphization, and we do it all the time. People chronically attribute human-like thoughts and emotions to our cats, dogs, and other animals, ignoring how they are really very different from us. But at least cats and dogs, and birds, and fish, and scorpions, and worms are, like us, animals. They think with neurons and neurotransmitters, they breathe air and eat food and drink water, they mate and breed, are born and die. An artificial intelligence has none of these things in common with us, and programming it to imitate humanity doesn't make it human. The old phrase putting lipstick on a pig understates the problem, because a pig is biochemically pretty similar to us. Think instead of putting lipstick on a squid, except that a squid is a close cousin to humanity compared to an AI.

With these worries in mind, I sought out Hanson after his panel and asked him about humanizing AI. There are three reasons, he told me: Humanizing AI makes it more useful, because it can communicate better with its human users; it makes AI smarter, because the human mind is the only template of intelligence we have; and it makes AI safer, because we can teach our machines not only to act more human but to be more human. "These three things combined give us better hope of developing truly intelligent adaptive machines sooner and making sure that they're safe when they do happen," he said.

This squid's thought process is less alien to you than an artificial intelligence would be.

Usefulness: On the most basic level, Hanson said, using robots and intelligent virtual agents with a human-like form makes them appealing. It creates a lot of uses for communicating and for providing value.

Intelligence: Consider convergent evolution in nature, Hanson told me. Bats, birds, and bugs all have wings, although they grow and work differently. Intelligence may evolve the same way, with AI starting in a very different place from humans but ending up awfully similar.

"We may converge on human level intelligence in machines by modeling the human organism," Hanson said. AI originally was an effort to match the capacities of the human mind in the broadest sense, (with) creativity, consciousness, and self-determination, "and we found that that was really hard, (but still) there's no better example of mind that we know of than the human mind."

Safety: Beyond convergent evolution is co-evolution, where two species shape each other over time, as humans have bred wolves into dogs and irascible aurochs into placid cows. As people and AI interact, Hanson said, people will naturally select for features that are desirable and can be understood by humans, "which then puts a pressure on the machines to get smarter, more capable, more understanding, more trustworthy."

Sorry, real robots won't be this cute and friendly.

By contrast, Hanson warned, if we fear AI and keep it at arm's length, it may develop unexpectedly deep in our networks, in some internet backbone or industrial control system where it has not co-evolved in constant contact with humanity. Putting them out of sight, out of mind means "we're developing aliens," he said, "and if they do become truly alive, and intelligent, creative, conscious, adaptive, but they're alien, they don't care about us."

"You may contain your machine so that it's safe, but what about your neighbor's machine? What about the neighbor nations? What about some hackers who are off the grid?" Hanson told me. "I would say it will happen, we don't know when. My feeling is that if we can get there first with a machine that we can understand, that proves itself trustworthy, that forms a positive relationship with us, that would be better."

Click to read the previous stories in the series:

Artificial Stupidity: When Artificial Intelligence + Human = Disaster

Artificial Stupidity: Fumbling The Handoff From AI To Human Control

Here is the original post:

Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) - Breaking Defense

Posted in Artificial Intelligence | Comments Off on Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes) – Breaking Defense

Baidu Is Partnering With Nvidia To ‘Accelerate’ Artificial Intelligence – Benzinga

Posted: at 11:12 pm

NVIDIA Corporation (NASDAQ: NVDA) and Baidu Inc (ADR) (NASDAQ: BIDU) announced a partnership Wednesday to unite their cloud computing services and artificial intelligence technology.

"We believe AI is the most powerful technology force of our time, with the potential to revolutionize every industry, Ian Buck, NVIDIA vice president and general manager of accelerated computing, said in a press release. Our collaboration aligns our exceptional technical resources to create AI computing platforms for all developers from academic research, startups creating breakthrough AI applications, and autonomous vehicles."

The companies will collaborate to infuse Baidu Cloud with NVIDIA Volta's deep learning capabilities, Baidu's self-driving vehicle platform with NVIDIA's Drive PX 2 AI, and NVIDIA's Shield TV with Baidu's DuerOS voice command program.

Additionally, Baidu will use NVIDIA HGX architecture and TensorRT software to support NVIDIA Tesla accelerators in its data centers.

"Baidu and NVIDIA will work together on our Apollo self-driving car platform, using NVIDIA's automotive technology, Baidu President and Chief Operations Officer Qi Lu said at the companys recent AI developer conference. We'll also work closely to make PaddlePaddle the best deep learning framework; advance our conversational AI system, DuerOS; and accelerate research at the Institute of Deep Learning."

NVIDIA is already a significant player in the autonomous vehicle and home assistant spaces, but the latest deal will provide greater exposure to Chinese automakers such as Changan, Chery Automobile Co., FAW Car Co. and Great Wall Motor.

Related Links:

The Bull Case For Nvidia: $300 Per Share?

Signs The Desktop GPU Market Is Oversaturated Keep Analyst Underweight On Nvidia

Follow this link:

Baidu Is Partnering With Nvidia To 'Accelerate' Artificial Intelligence - Benzinga

Posted in Artificial Intelligence | Comments Off on Baidu Is Partnering With Nvidia To ‘Accelerate’ Artificial Intelligence – Benzinga

Artificial intelligence better than scientists at choosing successful IVF embryos – The Independent

Posted: at 9:14 am

Scientists are using artificial intelligence (AI) to help predict which embryos will result in IVF success.

In a new study, AI was found to be more accurate than embryologists at pinpointing which embryos had the potential to result in the birth of a healthy baby.

Experts from Sao Paulo State University in Brazil have teamed up with Boston Place Clinic in London to develop the technology in collaboration with Dr Cristina Hickman, scientific adviser to the British Fertility Society.

They believe the inexpensive technique has the potential to transform care for patients and help women achieve pregnancy sooner.

During the process, the AI was trained on a series of images to learn what a good embryo looks like.

AI is able to recognise and quantify 24 image characteristics of embryos that are invisible to the human eye.

These include the size of the embryo, texture of the image and biological characteristics such as the number and homogeneity of cells.

During the study, which used cattle embryos, 48 images were evaluated three times each by embryologists and by the AI system.

The embryologists could not agree on their findings across the three evaluations, but the AI system's assessments were in complete agreement.
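
As a purely illustrative sketch of why a trained model can show the "complete agreement" reported above, the snippet below runs a stand-in scoring network three times on the same image: once training is finished and the network is run in inference mode, its output is a deterministic function of its input. The tiny network and random placeholder image are hypothetical, not the study's actual model or data.

# A stand-in "embryo scoring" network evaluated three times on the same image.
import torch
import torch.nn as nn

torch.manual_seed(0)
scorer = nn.Sequential(                      # hypothetical stand-in scoring model
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1), nn.Sigmoid(),           # output: a 0-1 "viability" score
)
scorer.eval()                                # inference mode: the weights are not changing

image = torch.rand(1, 1, 64, 64)             # placeholder for one embryo image
with torch.no_grad():
    scores = [scorer(image).item() for _ in range(3)]

print(scores)                                # three evaluations, three identical scores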

Stuart Lavery, director of the Boston Place Clinic, said the technology would not replace examining chromosomes in detail, which is thought to be a key factor in determining which embryos are normal or abnormal.

He said: Looking at chromosomes does work, but it is expensive and it is invasive to the embryo.

What we are looking for here is something that can be universal.

Instead of a human looking at thousands of images, actually a piece of software looks at them and is capable of learning all the time.

As we get data about which embryos produce a baby, that data will be fed back into the computer and the computer will learn.

What we have found is that the technique is much more consistent than an embryologist, it is more reliable.

It can also look for things that the human eye can't see.

We don't think it will replace genetic screening; we think it will be complementary to this type of screening.

Analysis of the embryo won't improve the chances of that particular embryo, but it will help us pick the best one.

We won't waste time on treatments that won't work, so the patient should get pregnant quicker.

He said work was under way to look back at images from parents who had genetic screening and became pregnant. Applying AI to those images will help the computer learn, he said.

Mr Lavery added: This is an innovative and exciting project combining state of the art embryology with new advances in computer modelling, all with the aim of selecting the best possible embryo for transfer to give all our patients the best possible chance of having a baby.

Although further work is needed to optimise the technique, we hope that a system will be available shortly for use in a clinical setting.

Continued here:

Artificial intelligence better than scientists at choosing successful IVF embryos - The Independent

Posted in Artificial Intelligence | Comments Off on Artificial intelligence better than scientists at choosing successful IVF embryos – The Independent

7 myths about AI that are holding your business back – VentureBeat

Posted: at 9:14 am

We can all agree that the use of AI in business is in its infancy and that it may be a long time until it becomes widespread. Businesses of all sizes may find it easier than they think to run early AI experiments that clarify their vision of how to accelerate their competitiveness. However, several myths stand in the way and need to be reflected upon. Let's dive into the most common ones.

AI is humanity's attempt to simulate our brain's intuition and put it on the fast track to experience and interpret the world for us. In the early 90s, the development of very narrow applications using AI concepts gave birth to what we now call machine learning (ML). Think of a computer playing checkers or an e-mail spam filter. Deep learning (DL) is making a comeback from its debut in the early 50s. Think of a computer telling you what is in an image or video or translating languages.

In summary, we say that DL is a subset of ML, which is a subset of the broad field we call AI. Your business can and eventually will use AI. The choice of which approach to use will depend on the problem to be solved and the data available.

While there is something magical about predicting an outcome from an input that the computer never saw, the magic ends there. If you try to use machine learning without minimally understanding the problem you want to solve, you will fail miserably. It's very important to think of your AI strategy as a portfolio of approaches to solving very hard problems you can't solve with traditional programming. Each problem may require completely different datasets and approaches to achieve meaningful results.

While it's true that whoever has the data will have an advantage in solving certain problems, no business should be trapped in the analysis paralysis around the question "do I have enough data?" Maybe you don't, but that doesn't mean you shouldn't try to attack a business problem using AI. There are some scenarios to keep in mind:

Most machine learning models are trained offline. Surprised? Things can get wildly out of control if you just feed more data to your model. By keeping humans in the loop, you can make sure your models will keep performing well. So, every time Siri, Alexa or the Google Assistant tell you they can't help you, but they are learning, it doesn't mean they are learning with you right then. However, the collection of inputs that didn't map to any result is highly valuable data to help you fill the important gaps with users. You will need to use it to retrain your model.
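
A minimal sketch of the human-in-the-loop pattern described above, with hypothetical names: the deployed model stays frozen, and any input it cannot map to a confident answer is logged so humans can label it and fold it into the next offline training run.

# Hypothetical human-in-the-loop wrapper around a frozen, offline-trained model.
unmapped_inputs = []   # queue of queries for humans to label before the next retraining run

def answer(query, model, threshold=0.6):
    """Return the model's answer, or defer and log the query when confidence is too low."""
    label, confidence = model(query)          # the deployed model is not updated here
    if confidence < threshold:
        unmapped_inputs.append(query)         # valuable data for the next offline training run
        return "Sorry, I can't help with that yet."
    return label

def toy_model(query):
    # Stand-in model that only "knows" two phrases.
    known = {"play music": ("music_intent", 0.9), "set alarm": ("alarm_intent", 0.95)}
    return known.get(query, ("unknown", 0.1))

print(answer("play music", toy_model))
print(answer("order pizza", toy_model))
print("queued for labeling:", unmapped_inputs)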

During training, a typical machine learning model will have an accuracy that asymptotically increases with the amount of data used to train it. After training, you will test the model with your evaluation set (a subset of the data you had at the beginning) and see how the model performs. You want a model that behaves well with both training data and new data. Sometimes an accuracy above 70% will be more than enough for practical applications, as long as you have a good plan to work out the situations where the model doesn't work well and improve your model over time.
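
Here is a small, hypothetical sketch of that train-then-evaluate workflow using scikit-learn: hold out an evaluation set, fit on the rest, and compare accuracy on both to see how the model behaves on data it has not seen. The dataset and the 70% bar are illustrative.

# Hold out an evaluation set and compare training vs. held-out accuracy.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

train_acc = model.score(X_train, y_train)
eval_acc = model.score(X_eval, y_eval)        # the number that matters for unseen data
print(f"training accuracy: {train_acc:.2f}, evaluation accuracy: {eval_acc:.2f}")

if eval_acc > 0.70:                           # an illustrative "good enough to start" bar
    print("Usable in practice, with a plan to handle the cases it still gets wrong.")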

The image above is from a mobile application that implements the ImageNet model for image recognition. As you can see, the photo on the left, taken from above the mouse, led to an unexpected result. By tilting the camera I managed to catch the right category, albeit at a small confidence percentage. Now imagine if the mobile application used the device's sensor information, like gyroscope data, and told me that I should tilt the camera in order to get a better result. It would've guided me to a better experience because it would've provided the machine learning model with a better input. Depending on how you design your application, you can also get valuable information from users that will help improve your model.

The cost of building your first AI project should be equivalent to the cost you had when you built your first mobile app, just to give you a tangible reference. In contrast, the cost of not building your first AI project soon will, rest assured, be much higher as time goes by.

Companies that treat AI as part of their portfolio of problem-solving tools will probably achieve compounding gains over time. They will, however, have to manage internal expectations around early results and treat experiments as bets worth making.

Mars Cyrillo is the VP of Machine Learning and Product Development at CI&T, a digital tech agency.

Above: The Machine Intelligence Landscape, featuring 288 companies. This article is part of our Artificial Intelligence series.

View post:

7 myths about AI that are holding your business back - VentureBeat

Posted in Artificial Intelligence | Comments Off on 7 myths about AI that are holding your business back – VentureBeat

Navigating the AI ethical minefield without getting blown up – Diginomica

Posted: at 9:14 am

It is 60 years since Artificial Intelligence (AI) was first recognised as an academic discipline, but it is only in the 21st Century that AI has caught both businesses' interest and the public's imagination.

Smartphones, smart hubs, and speech recognition have brought AI simulations to homes and pockets, autonomous vehicles are on our roads, and enterprise apps promise to reveal hidden truths about data of every size, and the people or behaviors it describes.

But AI doesn't just refer to a machine that is intelligent in terms of its operation, but also in terms of its social consequences. That's the alarm bell sounding in the most thought-provoking report on AI to appear recently: Artificial Intelligence and Robotics, a 56-page white paper published by UK-RAS, the umbrella body for British robotics research.

The upside of AI is easily expressed:

Current state-of-the-art AI allows for the automation of various processes, and new applications are emerging with the potential to change the entire workings of the business world. As a result, there is huge potential for economic growth.

One-third of the report explores the history of AI's development, which is recommended reading, but the authors get to the nitty gritty of its application right away:

A clear strategy is required to consider the associated ethical and legal challenges to ensure that society as a whole will benefit from AI, and its potential negative impact is mitigated from early on.

Neither the unrealistic enthusiasm, nor the unjustified fears of AI, should hinder its progress. [Instead] they should be used to motivate the development of a systemic framework on which the future of AI will flourish.

And AI is certainly flourishing, it adds:

The revenues of the AI market worldwide were around $260 billion in 2016 and this is estimated to exceed $3,060 billion by 2024. This has had a direct effect on robotic applications, including exoskeletons, rehabilitation, surgical robots, and personal care-bots. [...] The economic impact of the next 10 years is estimated to be between $1.49 and $2.95 trillion.

For vendors and their customers, AI is the new must-have differentiator. Yet in the context of what the report calls unrealistic enthusiasm about it, the need to understand AI's social impact is both urgent and overwhelming.

As AI, big data, and the related fields of machine learning, deep learning, and computer vision/object recognition rise, buyers and sellers are rushing to include AI in everything, from enterprise CRM to national surveillance programmes. An example of the latter is the FBI's scheme to record and analyse citizens' tattoos in order to establish if people who have certain designs inked on their skin are likely to commit crimes*.

Such projects should come with the label "Because we can".

In such a febrile environment, the risk is that the twin problems of confirmation bias in research and human prejudice in society become an automated pandemic: systems that are designed to tell people exactly what they want to hear; or software that perpetuates profound social problems.

This is neither alarmist, nor an overstatement. The white paper notes:

In an article published by Science magazine, researchers saw how machine learning technology reproduces human bias, for better or for worse. [AI systems] reflect the links that humans have made themselves.

These are real-world problems. Take the facial recognition system developed at MIT recently that was unable to identify an African American woman because it was created within a closed group of white males; male insularity is a big problem in IT. When Media Lab chief Joichi Ito shared this story at Davos earlier this year, he described his own students as "oddballs".*

The white paper adds its own example of human/societal bias entering AI systems:

When an AI program became a juror in a beauty contest in September 2016, it eliminated most black candidates as the data on which it had been trained to identify beauty did not contain enough black skinned people.

Now apply this model in, say, automated law enforcement

The point is that human bias infects AI systems at both linguistic and cultural levels. Code replicates belief systems, including their flaws, prejudices, and oversights, while coders themselves often prefer the binary world of computing to the messy world of humans. Again, MIT's Ito made this observation, while Microsoft's Tay chatbot disaster proved the point: a naive robot, programmed by binary thinkers in a closed community.

The report acknowledges the industrys problem and recognises that it strongly applies to AI today:

One limitation of AI is the lack of common sense: the ability to judge information beyond its acquired knowledge. [...] AI is also limited in terms of emotional intelligence.

Then the report makes a simple observation that businesses must take on board: true and complete AI does not exist, it says, adding that there is no evidence yet that it will exist before 2050.

So it's a sobering thought that AI software with no common sense and probable bias, and which can't understand human emotions, behaviour, or social contexts, is being tasked with trawling context-free communications data (and even body art) pulled from human society in order to expose criminals, as they are defined by career politicians.

And yet that's precisely what's happening in the US, in the UK, and elsewhere.

The white paper takes pains to set out both the opportunities and limitations of this transformative, trillion-dollar technology, the future of which extends into augmented intelligence and quantum computing. On the one hand, the authors note:

[AI] applications can replace costly human labour and create new potential applications and work along with/for humans to achieve better service standards.

It is certain that AI will play a major role in our future life. As the availability of information around us grows, humans will rely more and more on AI systems to live, to work, and to entertain.

[AI] can achieve impressive results in recognising images or translating speech.

But on the other hand, they add:

When the system has to deal with new situations for which limited training data is available, the model often fails. [...] Current AI systems are still missing [the human] level of abstraction and generalisability.

Most current AI systems can be easily fooled, which is a problem that affects almost all machine learning techniques.

Deep neural networks have millions of parameters, and to understand why the network provides good or bad results becomes impossible. [...] Trained models are often not interpretable. Consequently, most researchers use current AI approaches as a black box.

So organisations should be wary of the black box's potential to mislead, and to be misled.

The paper has been authored by four leading academics in the field: Dr Guang-Zhong Yang (chair of UK-RAS and a great advocate for the robotics industry) and three of his colleagues at Imperial College, London: Doctors Fani Deligianni, Daniele Ravi, and Javier Andreu Perez. These are clear-sighted idealists as well as world authorities on the subject. As a result, they perhaps under-estimate businesses' zeal to slash costs and seek out new, tactical solutions.

The digital business world is faddy and, as anyone who uses LinkedIn knows, just as full of surface noise as its consumer counterpart: claims that fail the Snopes test attract thousands of Likes, while rigorous analysis goes unread. As a result, businesses risk seeing the attractions of AI through the pinhole of short-term financial advantage, rather than locating it in a landscape of real social renewal, as academics and researchers do.

As our recent report on UK Robotics Week showed, productivity, rather than what this paper calls the amplification of human potential, is the main driver of tech policy in government today. Meanwhile, think tanks such as Reform are falling over themselves to praise robotics' and AI's shared potential to slash costs and cut humans out of the workforce.

But that's not what AI's designers intend for it at all.

So the problem for the many socially and ethically conscious academics working in the field is that business often leaps before it looks, or thinks. A recent global study by consultancy Avanade found that 70% of the C-level executives it questioned admitted to having given little thought to the ethical dimensions of smart technologies.

But what are the most pressing questions to answer? First, there's the one about human dignity:

Data is the fuel of AI and special attention needs to be paid to the information source and if privacy is breached. Protective and preventive technologies need to be developed against such threats.

It is the responsibility of AI operators to make sure that data privacy is protected. [...] Additionally, applications of AI, which may compromise the rights to privacy, should be treated with special legislation that protects the individual.

Then there is the one about human employment. Currently, eight percent of jobs are occupied by robots, claims the report, but in 2020 this percentage will rise to 26.

The authors add:

The accelerated process of technological development now allows labour to be replaced by capital (machinery). However, there is a negative correlation between the probability of automation of a profession and its average annual salary, suggesting a possible increase in short-term inequality.

I'd argue that the middle class will be seriously hit by AI and automation. Once-secure professional careers in banking, finance, law, journalism, medicine, and other fields are being automated far more quickly than, say, skilled manual trades, many of which will never fall to the machines. (If you want a long-term career, become a plumber.)

But the report continues:

To reduce the social impact of unemployment caused by robots and autonomous systems, the EU parliament proposed that they should pay social security contributions and taxes as if they were human.

(As did Bill Gates.)

Words to make Treasury officials worldwide jump for joy. But whatever the likelihood of such ideas ever being accepted by cost-focused businesses, it's clear that strong, national-level engagement is essential to ensure that everyone in society has a clear, factual view of both current and future developments in robotics and AI, says the report, not just enterprises and governments.

The report's authors have tried to do just that, and for that we should thank them.

*The two case studies referenced have also been quoted by Prof. Simon Rogerson in a July 2017 article on computer ethics, which Chris Middleton edited and to which he contributed these examples, with Simon's permission.


See the original post:

Navigating the AI ethical minefield without getting blown up - Diginomica

Posted in Artificial Intelligence | Comments Off on Navigating the AI ethical minefield without getting blown up – Diginomica

Explainable AI: The push to make sure machines don’t learn to be racist – CTV News

Posted: at 9:14 am

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their thinking.

"Computers are going to become increasingly important parts of our lives, if they aren't already, and the automation is just going to improve over time, so it's increasingly important to know why these complicated systems are making the decisions that they are," assistant professor of computer science at the University of California Irvine, Sameer Singh, told CTV's Your Morning on Tuesday.

Singh explained that, in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected.

"Sometimes it's a good thing, it's doing something much smarter than we realize," he said. "But sometimes it's picking up on things that it shouldn't."

Such was the case with the Microsoft AI chatbot, Tay, which became racist in less than a day. Another high-profile incident occurred in 2015, when Google's photo app mistakenly labelled a black couple as gorillas.

Singh says incidents like that can happen because the data AI learns from is based on humans: either decisions humans made in the past or basic socio-economic structures that appear in the data.

"When machine learning models use that data they tend to inherit those biases," said Singh.

"In fact, it can get much worse, where if the AI agents are part of a loop where they're making decisions, even the future data, the biases get reinforced," he added.
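
To make that mechanism concrete, here is a small, entirely synthetic illustration: when historical approval decisions were biased against one group, a model fitted to those decisions reproduces the bias. The data below is fabricated for the example and does not come from any real system.

# Entirely synthetic illustration of a model inheriting bias from historical decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                 # 0 or 1: two demographic groups
score = rng.normal(600, 50, n)                # an otherwise legitimate credit-style score

# Fabricated "historical" decisions: the same score was treated differently by group.
approved = ((score > 580) & (group == 0)) | ((score > 630) & (group == 1))

model = LogisticRegression(max_iter=2000).fit(np.column_stack([score, group]), approved)

# The trained model now recommends different outcomes for identical scores.
same_score = np.array([[610, 0], [610, 1]])
print(model.predict_proba(same_score)[:, 1])  # higher approval probability for group 0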

Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn't pick up any gender or racial biases that humans have.

However, Google's research director Peter Norvig cast doubt on the concept of explainable AI.

"You can ask a human, but, you know, what cognitive psychologists have discovered is that when you ask a human you're not really getting at the decision process. They make a decision first, and then you ask, and then they generate an explanation and that may not be the true explanation," he said at an event in June in Sydney, Australia.

"So we might end up being in the same place with machine learning, where we train one system to get an answer and then we train another system to say, given the input of this first system, now it's your job to generate an explanation."

Norvig suggests looking for patterns in the decisions themselves, rather than the inner workings behind them.

But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions, like approving loan applications, for example.

"It's important to know what details they're using. Not just if they're using your race column or your gender column, but are they using proxy signals like your location, which we know could be an indicator of race or other problematic attributes," explained Singh.

Over the last year there have been multiple efforts to find out how to better explain the rationale of AI.

Currently, The Defense Advanced Research Projects Agency (DARPA) is funding 13 different research groups, which are pursuing a range of approaches to making AI more explainable.

The rest is here:

Explainable AI: The push to make sure machines don't learn to be racist - CTV News

Posted in Artificial Intelligence | Comments Off on Explainable AI: The push to make sure machines don’t learn to be racist – CTV News
