
Category Archives: Ai

AI Adds a New Layer to Cyber Risk – Harvard Business Review

Posted: April 13, 2017 at 11:49 pm


Cognitive computing and artificial intelligence (AI) are spawning what many are calling a new type of industrial revolution. While both technologies refer to the same process, there is a slight nuance to each. To be specific, cognitive uses a suite of many technologies that are designed to augment the cognitive capabilities of a human mind. A cognitive system can perceive and infer, reason and learn. We're defining AI here as a broad term that loosely refers to computers that can perform tasks that once required human intelligence. Because these systems can be trained to analyze and understand natural language, mimic human reasoning processes, and make decisions, businesses are increasingly deploying them to automate routine activities. From self-driving cars to drones to automated business operations, this technology has the potential to enhance productivity, direct human talent on critical issues, accelerate innovation, and lower operating costs.

Yet, like any technology that is not properly managed and protected, cognitive systems that use humanoid robots and avatars and less human labor can also pose immense cybersecurity vulnerabilities for businesses, compromising their operations. The criminal underground has been leveraging this capability for years through botnets: tiny pieces of code distributed across thousands of computers and programmed to execute tasks that mimic the actions of hundreds of thousands of users, resulting in mass cyberattacks, spam email and texts, and even denial-of-service attacks that make major websites unavailable for long periods of time.

How it will impact business, industry, and society.

In a digital world where there is greater reliance on business data analytics and electronic consumer interactions, the C-suite cannot afford to ignore these existing security risks. In addition, there are unique and new cyber risks associated with cognitive and AI technology. Businesses must be thoughtful about adopting new information technologies, employing multiple layers of cyber defense, and planning security to reduce the growing threat. As with any innovative new technology, there are positive and negative implications. Businesses must recognize that a technology powerful enough to benefit them is equally capable of hurting them.

First of all, there's no guarantee of reliability with cognitive technology. It is only as good as the information fed into the system, and the training and context that a human expert provides. In an ideal state, systems are designed to simulate and scale the reasoning, judgment, and decision-making capabilities of the most competent and expertly trained human minds. But bad human actors, say a disgruntled employee or rogue outsiders, could hijack the system, enter misleading or inaccurate data, and hold it hostage by withholding mission-critical information or by teaching the computer to process data inappropriately.

Second, cognitive and artificial intelligence systems are trained to mimic the analytical processes of the human brain, not always through clear, step-by-step programming instructions like a traditional system, but through example, repetition, observation and inference.

But, if the system is sabotaged or purposely fed inaccurate information, it could infer an incorrect correlation as correct or learn a bad behavior. Since most cognitive systems are designed to have freedom, as humans do, they often use non-expiring and hard-coded passwords. A malicious hacker can use the same login credentials as the bot to gain access to much more data than a single individual is allowed. Security monitoring systems are sometimes configured to ignore bot or machine access logs to reduce the large volume of systemic access. But this can allow a malicious intruder, masquerading as a bot, to gain access to systems for long periods of time and go largely undetected.
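To make that blind spot concrete, here is a minimal sketch (with hypothetical account names and a made-up log format, not any vendor's product) of a monitoring rule that whitelists bot traffic to cut log volume; an intruder who authenticates with the bot's shared, non-expiring credentials never reaches the alerting check.

```python
# Minimal sketch (hypothetical account names and log format): a monitoring rule
# that drops all events from known "bot" service accounts to reduce log volume.
# Any intruder who logs in with the bot's shared credentials inherits the same
# blind spot and never reaches the alerting logic.

WHITELISTED_BOT_ACCOUNTS = {"svc-claims-bot", "svc-report-bot"}  # assumed names

def should_alert(event: dict) -> bool:
    """Return True if this access event should be escalated for review."""
    if event["account"] in WHITELISTED_BOT_ACCOUNTS:
        return False          # bot traffic is ignored to cut volume...
    return event["records_accessed"] > 1_000   # ...so this check never runs for it

# An attacker masquerading as the bot pulls 50,000 records without an alert:
print(should_alert({"account": "svc-claims-bot", "records_accessed": 50_000}))  # False
```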

In some cases, attempts to leverage new technology can have unintended consequences, and an entire organization can become a victim. In a now-classic example, Microsoft's Twitter bot, Tay, which was designed to learn how to communicate naturally with young people on social media, was compromised shortly after going live when internet trolls figured out the vulnerabilities of its learning algorithms and began feeding it racist, sexist, and homophobic content. The result was that Tay began to spew hateful and inappropriate answers and commentary on social media to millions of followers.

Finally, contrary to popular thinking, cognitive systems are not protected from hacks just because a process is automated. Chatbots are increasingly becoming commonplace in every type of setting, including enterprise and customer call centers. By collecting personal information about users and responding to their inquiries, some bots are designed to keep learning over time how to do their jobs better. This plays a critical role in ensuring accuracy, particularly in regulated industries like healthcare and finance that possess a high volume of confidential membership and customer information.

But like any technology, these automated chatbots can also be used by malicious hackers to scale up fraudulent transactions, mislead people, steal personally-identifiable information, and penetrate systems. We have already seen evidence of advanced AI tools being used to penetrate websites to steal compromising and embarrassing information on individuals, with high-profile examples such as Ashley Madison, Yahoo and the DNC. As bad actors continue to develop advanced AI for malicious purposes, it will require organizations to deploy equally advanced AI to prevent, detect and counter these attacks.

But, risks aside, there is tremendous upside for cybersecurity professionals to leverage AI and cognitive techniques. Routine tasks such as analyzing large volumes of security event logs can be automated by using digital labor and machine learning to increase accuracy. As systems become more effective at identifying malicious and unauthorized access, cybersecurity systems can become self-healing, actually updating controls and patching systems in real time as a direct result of learning and understanding how hackers exploit new approaches.
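As an illustration of the kind of routine log triage that could be automated, the sketch below uses an off-the-shelf anomaly detector to flag unusual security events. The features and thresholds are invented for the example rather than drawn from any real deployment; a production system would engineer them from its own log schema.

```python
# Illustrative sketch: use unsupervised machine learning to flag unusual entries
# in a large volume of security event logs, instead of having analysts read them
# by hand. Feature values here are synthetic and only demonstrate the idea.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [failed_logins_last_hour, bytes_downloaded_MB, distinct_hosts_touched]
normal = np.random.default_rng(0).normal(loc=[2, 50, 3], scale=[1, 10, 1], size=(1000, 3))
suspicious = np.array([[40, 900, 60]])          # e.g. credential stuffing plus bulk export
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)                   # -1 marks events to escalate
print("events flagged for review:", int((flags == -1).sum()))
```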

Cognitive and AI technologies are a certainty of our future. While they have the power to bring immense potential to our productivity and quality of life, we must also be mindful of potential vulnerabilities on an equally large scale. With humans, a security breach can often be localized back to the source and sealed. With cognitive and AI breaches, the damage can become massive in seconds. Balancing the demands between automation and information security should be about making cybersecurity integral, not an afterthought, to an organization's information infrastructure.

Follow this link:

AI Adds a New Layer to Cyber Risk - Harvard Business Review

Posted in Ai | Comments Off on AI Adds a New Layer to Cyber Risk – Harvard Business Review

Google’s AI has learned how to draw by looking at your doodles – The Verge

Posted: at 11:49 pm

Remember last year when Google released an AI-powered web tool that played Pictionary with your doodles? Well, surprise! Those doodles you drew have now been used to teach Google's AI how to draw. The resulting program is called Sketch-RNN and, frankly, it draws about as well as a toddler. But like any new parents, Google's AI scientists are proud as punch.

To create Sketch-RNN, Google Brain researchers David Ha and Douglas Eck collected more than half a million user-drawn sketches from the Google tool Quick, Draw! Each time a user drew something on the app, it recorded not only the final image, but also the order and direction of every pen stroke used to make it. The resulting data gives a more complete picture (ho, ho, ho) of how we really draw.
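For readers curious what that stroke data looks like, here is a rough sketch of the kind of sequence representation Sketch-RNN trains on, loosely following the stroke format described in Ha and Eck's paper; the specific values below are invented for illustration.

```python
# A rough sketch of how a doodle can be stored as a sequence of pen movements
# rather than a flat bitmap, which is the kind of data Sketch-RNN learns from.
# Each step: (dx, dy, pen_down, pen_up, end_of_sketch). Values are made up.
doodle = [
    ( 10,   0,  1, 0, 0),   # move right while drawing
    (  0,  10,  1, 0, 0),   # move down while drawing
    (-10,   0,  0, 1, 0),   # lift the pen and reposition
    (  0, -10,  1, 0, 0),   # draw again
    (  0,   0,  0, 0, 1),   # end of sketch
]

def replay(strokes):
    """Reconstruct absolute pen positions from the relative offsets."""
    x = y = 0
    for dx, dy, down, up, end in strokes:
        x, y = x + dx, y + dy
        state = "draw" if down else ("move" if up else "end")
        print(f"pen at ({x:4d}, {y:4d}) [{state}]")

replay(doodle)
```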

All in all, Ha and Eck gathered 70,000 training doodles for 75 different categories, including cat, firetruck, garden, owl, pig, face, and mermaid. Their goal? To create a machine that can draw and generalize abstract concepts in a manner similar to humans. And it can! After studying this data, it learned first to draw based on human input, as seen below:

Notice, as seen most clearly in the penultimate row, that the AI is not just copying the human doodle line for line. The input on the left-hand side shows a cat with three eyes, but the AI copies the concept, not the sketch itself, and it knows enough to know that three eyes is one too many.

Next, Sketch-RNN learned to draw the objects without copying a starting sketch. (For more on how deep neural networks process and imitate data, check out our AI explainer.)

But what's the benefit of getting neural networks to sketch things in the first place, when they're already pretty good at making photo-realistic images? Well, as Ha and Eck explain, although doodles look childish to us, they're also masterpieces of abstraction and data compression. Doodles, they say, tell us something about how people represent and reconstruct images of the world around them. In other words, they're more human. And once you've taught an AI to sketch, you can deploy it in all sorts of fun ways. Sketch-RNN can complete doodles started by someone else:

And it can combine different doodles together. So, in the picture below, the neural network has been asked to draw some combination of the category cat and chair. The result? Weird cat-chair chimeras:

It can also create what are called latent space interpolations: looking at any number of doodle subjects and combining them together in different ratios to create new sketches with multiple characteristics. In the group of drawings on the left, below, the AI has combined four different doodles: the pig, rabbit, crab, and face.
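A minimal sketch of what interpolating in latent space means in code, assuming a trained encoder and decoder exist (neither is shown); the latent vectors here are made up for the example.

```python
# Minimal sketch of "latent space interpolation" with invented vectors: each doodle
# is encoded as a point (latent vector) and new sketches are decoded from weighted
# blends of those points. A real Sketch-RNN encoder/decoder is assumed, not shown.
import numpy as np

latent_pig    = np.array([ 0.9, -0.2, 0.4, 0.1])   # hypothetical encodings
latent_rabbit = np.array([-0.3,  0.8, 0.1, 0.5])

def interpolate(z_a, z_b, steps=5):
    """Return evenly spaced blends between two latent vectors."""
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

for z in interpolate(latent_pig, latent_rabbit):
    print(np.round(z, 2))   # each blend would be fed to the decoder to draw a new sketch
```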

These drawings are obviously quite basic, but the methods used to create them are so interesting, and so potentially useful. In the future, AI programs like Sketch-RNN could be used as creative aids for designers, architects, and artists. If someone is struggling with a certain picture or design, they could get an AI to absorb their work and spit out a few more suggested variations. The images the computer produces might not be useful in themselves, but they could spark something in the human. Is this AI creativity? It's difficult to know what else to call it.

See the rest here:

Google's AI has learned how to draw by looking at your doodles - The Verge

Posted in Ai | Comments Off on Google’s AI has learned how to draw by looking at your doodles – The Verge

AI programs exhibit racial and gender biases, research reveals – The Guardian

Posted: at 11:49 pm

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author, said: "A lot of people are saying this is showing that AI is prejudiced. No. This is showing we're prejudiced and that AI is learning it."

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. "A danger would be if you had an AI system that didn't have an explicit part that was driven by moral ideas, that would be bad," she said.

The research, published in the journal Science, focuses on a machine learning tool known as word embedding, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

"A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language," said Arvind Narayanan, a computer scientist at Princeton University and the paper's senior author.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in the way that a dictionary definition would be incapable of.

For instance, in the mathematical language space, words for flowers are clustered closer to words linked to pleasantness, while words for insects are closer to words linked to unpleasantness, reflecting common views on the relative merits of insects versus flowers.
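A toy illustration of how such an association can be measured: each word is represented as a vector, and cosine similarity scores how close two words sit in the embedding space. The vectors below are invented for the example; the actual study used embeddings trained on billions of words of online text.

```python
# Toy example (vectors invented): measure whether "flower" sits closer to
# "pleasant" than "insect" does, the same style of association the study reports.
import numpy as np

vectors = {
    "flower":     np.array([0.9, 0.1, 0.8]),
    "insect":     np.array([0.1, 0.9, 0.2]),
    "pleasant":   np.array([0.8, 0.2, 0.7]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for word in ("flower", "insect"):
    bias = cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])
    print(f"{word}: association with pleasantness = {bias:+.2f}")
```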

The latest paper shows that some more troubling implicit biases seen in human psychology experiments are also readily acquired by algorithms. The words "female" and "woman" were more closely associated with arts and humanities occupations and with the home, while "male" and "man" were closer to maths and engineering professions.

And the AI system was more likely to associate European American names with pleasant words such as "gift" or "happy", while African American names were more commonly associated with unpleasant words.

The findings suggest that algorithms have acquired the same biases that lead people (in the UK and US, at least) to match pleasant words and white faces in implicit association tests.

These biases can have a profound impact on human behaviour. One previous study showed that an identical CV is 50% more likely to result in an interview invitation if the candidates name is European American than if it is African American. The latest results suggest that algorithms, unless explicitly programmed to address this, will be riddled with the same social prejudices.

"If you didn't believe that there was racism associated with people's names, this shows it's there," said Bryson.

The machine learning tool used in the study was trained on a dataset known as the "common crawl" corpus: a list of 840bn words that have been taken as they appear from material published online. Similar results were found when the same tools were trained on data from Google News.

Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: "The world is biased, the historical data is biased, hence it is not surprising that we receive biased results."

Rather than algorithms representing a threat, they could present an opportunity to address bias and counteract it where appropriate, she added.

"At least with algorithms, we can potentially know when the algorithm is biased," she said. "Humans, for example, could lie about the reasons they did not hire someone. In contrast, we do not expect algorithms to lie or deceive us."

However, Wachter said the question of how to eliminate inappropriate bias from algorithms designed to understand language, without stripping away their powers of interpretation, would be challenging.

"We can, in principle, build systems that detect biased decision-making, and then act on it," said Wachter, who along with others has called for an AI watchdog to be established. "This is a very complicated task, but it is a responsibility that we as society should not shy away from."

See the article here:

AI programs exhibit racial and gender biases, research reveals - The Guardian

Posted in Ai | Comments Off on AI programs exhibit racial and gender biases, research reveals – The Guardian

AI robots learning racism, sexism and other prejudices from humans, study finds – The Independent

Posted: at 11:49 pm

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how closely different terms were to each other in the text, in the same way that automatic translators use machine learning to establish what language means.

Some of the results were stunning.

The researchers found male names were more closely associated with career-related terms than female ones, which were more closely associated with words related to the family.

This link was stronger than the non-controversial findings that musical instruments and flowers were pleasant and weapons and insects were unpleasant.

Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones.

There were strong associations, known as word embeddings, between European or American names and pleasant terms, and African-American names and unpleasant terms.

The effects of such biases on AI can be profound.

For example, Google Translate, which learns what words mean by the way people use them, translates the Turkish sentence "O bir doktor" into "he is a doctor" in English, even though Turkish pronouns are not gender specific. So, it can actually mean "he is a doctor" or "she is a doctor".

But change "doktor" to "hemşire", meaning nurse, in the same sentence and this is translated as "she is a nurse".

Last year, a Microsoft chatbot called Tay was given its own Twitter account and allowed to interact with the public.

It turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories in just 24 hours. "[George W] Bush did 9/11 and Hitler would have done a better job than the monkey we have now," it wrote. "Donald Trump is the only hope we've got."

In a paper about the new study in the journal Science, the researchers wrote: "Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes.

"Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable.

"Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society.

"If machine-learning technologies used for, say, résumé screening were to imbibe cultural stereotypes, it may result in prejudiced outcomes."

The researchers said the AI was not to blame for such problematic effects.

"Notice that the word embeddings know these properties of flowers, insects, musical instruments, and weapons with no direct experience of the world and no representation of semantics other than the implicit metrics of words' co-occurrence statistics with other nearby words."
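The co-occurrence statistics the researchers mention can be sketched in a few lines: count which words appear within a small window of one another in running text. Real embeddings are learned from billions of words; the toy corpus below only shows the mechanism.

```python
# Minimal sketch of word co-occurrence counting, the raw signal that word
# embeddings are distilled from. The corpus and window size are invented.
from collections import Counter, defaultdict

corpus = "the nurse helped the patient while the engineer fixed the machine".split()
window = 2
cooccur = defaultdict(Counter)

for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooccur[word][corpus[j]] += 1

print(dict(cooccur["nurse"]))     # words seen near "nurse" in this tiny corpus
print(dict(cooccur["engineer"]))  # words seen near "engineer"
```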

But changing the way AI learns would risk missing out on unobjectionable meanings and associations of words.

"We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations," the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

"Our work suggests that behaviour can be driven by cultural history embedded in a term's historic use. Such histories can evidently vary between languages," the paper said.

"Before providing an explicit or institutional explanation for why individuals make prejudiced decisions, one must show that it was not a simple outcome of unthinking reproduction of statistical regularities absorbed with language.

"Similarly, before positing complex models for how stereotyped attitudes perpetuate from one generation to the next or from one group to another, we must check whether simply learning language is sufficient to explain (some of) the observed transmission of prejudice."

One of the researchers, Professor Joanna Bryson, of Princeton University, told The Independent that instead of changing the way AI learns, the way it expresses itself should be altered.

So the AI would still hear racism and sexism, but would have a moral code that would prevent it from expressing these same sentiments.

Such filters can be controversial. The European Union has passed laws to ensure the terms of AI filters are made public.

For Professor Bryson, the key finding of the research was not so much about AI but humans.

"I think the most important thing here is we have understood more about how we are transmitting information, where words come from and one of the ways in which implicit biases are affecting us all," she said.

Link:

AI robots learning racism, sexism and other prejudices from humans, study finds - The Independent

Posted in Ai | Comments Off on AI robots learning racism, sexism and other prejudices from humans, study finds – The Independent

Accelerator program Zeroth wants to find Asia’s top AI and machine learning startups – TechCrunch

Posted: at 11:49 pm

Zeroth is an accelerator program that is out to fix the lack of talent, and investment options, for artificial intelligence (AI) in Asia, and it has just opened applications for its second program, which takes place in Hong Kong in late July.

"There's almost nothing that won't be touched by AI," high-profile investor and former Google China head Kaifu Lee said at our most recent China event. And yet, Asia's biggest firms still lag their U.S. peers on AI.

Things are moving in the right direction, but seemingly for a select few. Didi Chuxing recently set up a U.S.-based lab, but it appears some way behind Google and Uber. Even then, retaining talent is tough. The recent departure of Andrew Ng, who led Baidu's research lab in the U.S., has highlighted the struggle that China's (Asia's) biggest firms have in hiring and retaining top talent in the field of artificial intelligence (AI) and deep learning.

Tak Lo, founding partner of Zeroth, left his role at early-stage venture firm Mind Fund to start the project in 2015, which is Asia's first dedicated AI and machine learning accelerator program.

"It wasn't that there wasn't enough talent, there just aren't many investors [in the Asia region] focused on artificial intelligence," said Lo, whose past projects have included Tech City in London, the SPH Plug and Play program in Singapore and a stint in the U.S. armed forces.

The idea is to take 20 companies per batch, with Zeroth offering up to $120,000 in optional funding. That's a slightly different approach from its inaugural batch, which took in 10 companies and gave each $20,000 in exchange for six percent equity.

The program is pretty industry agnostic, and Lo said his preference is to work with early stage companies because thats where he feels the program can have the most influence.

"Our mission isn't that all companies should have AI, but for everyone to be able to develop in this tech," he said. "We want it to be in the hands of many, not just a few."

Particular areas of focus include edge computing, natural language, autonomous vehicles, agritech, human-machine interface technology and ethical computing. Its first cohort includes a diverse range of startups, from chatbots (Clare and Botimize), to deep-learning-powered recognition (DT42), social media marketing (Rocco) and crop disease diagnosis (Sero).

Beyond a standard demo day, the event was live-streamed to selected investors in Seoul, Beijing, Tokyo and Singapore.

"We figured it was the best way to build community and trust in these companies. I think it worked pretty well as an overall experiment," Lo said. "The companies are fundraising, we view investor day as the kick off."

The first Zeroth program was located in Hong Kong, but it has temporarily relocated to Japan before the next program opens in Hong Kong. In Tokyo, Lo is keen to recruit venture firms, corporate partners and, of course, founders. Already, Mind Fund invested directly in the program, while others, such as 500 Startups, are closely involved.

"There's an amount of patient capital here and the liking of deeper tech is a big pro," Lo said. "Where else we go will be dictated on market conditions and where we are after this cohort."

A number of regionally-focused accelerator programs struggled to make the grade as a business and shut up shop over the past year. Lo is candid that, for now, there's no business model set in stone for Zeroth.

"We're still trying to figure it out, but the AI focus is one because AI is hot but also potentially acquisitive," he said. "Companies realize this is the next thing." One hypothesis is that M&A deals could provide short term cash flow, while another option is working with funds that are a little more long term.

Zeroth partner Tak Lo

On the mentoring side, Lo said Zeroth's advisors have created more than $1.7 billion in AI company value. Among them is Antoine Blondeau, who worked on the precursor to what became the now Apple-owned Siri and whose Sentient.ai startup is the world's most funded AI company.

The others are Hajime Hotta, who sold mobile ad firm Citrius Technologies to Yahoo Japan, Skype and Kazaa co-founder Jaan Tallinn, Bangalore-based Sachin Unni, Alexandre Winter (who sold Placemeter to Netgear), early-stage investor Takahiro Shoji, Techstars' Eamonn Carey, and investor Nathan Benaich.

Lo said the focus on AI has been validated by similar strategies from Y Combinator, which recently announced a dedicated AI track, and others who have been increasing their interest in the space.

"We're proud of the fact we are first, we took a bet and realized this would be good," he said. "We may be ahead of these guys and they are more established."

Zeroth's second batch is due to start in late July. Applications are open now until June 15; more details and the form can be found on the website.

Update: The original version of this article has been updated to correct that Zeroth is temporarily relocating to Tokyo but the next program will be held in Hong Kong.

Read more here:

Accelerator program Zeroth wants to find Asia's top AI and machine learning startups - TechCrunch

Posted in Ai | Comments Off on Accelerator program Zeroth wants to find Asia’s top AI and machine learning startups – TechCrunch

The AI revolution: Is the future finally now? – Techworld Australia

Posted: at 11:49 pm

Over the last several decades, the evolution of artificial intelligence has followed an uncertain path, reaching incredible highs and new levels of innovation, often followed by years of stagnation and disillusionment as the technology fails to deliver on its promises.

Today we are once again experiencing growing interest in the future possibilities for AI. From voice-powered personal assistants like Google Home and Alexa, to Netflix's predictive recommendations, Nest learning thermostats and chatbots used by banks and retailers, there are countless examples of AI seeping into everyday life, and the potential of future applications seems limitless . . . again.

Despite the mounting interest and the proliferation of new technologies, is this current wave that much different than what we have seen in the past? Do the techniques of the modern AI movement (machine learning, data mining, deep learning, natural language processing and neural nets) deserve to be captured under the AI moniker, or is it just more of the same?

In the earlier peaks of interest, the broad set of activities that were typically bunched together under the term AI were reserved for the labs and, if they ever saw the light of day, they were severely constrained by what the technology of the day could deliver and were limited by cost constraints. Many of the algorithms and structures central to AI have been known for some time; rather, previous surges of AI had unrealistic expectations of immediate consumer applications that could never be accomplished given limitations of the data and techniques available at the time.

Within the last five years, however, the combination of enormous amounts of data and improvements to database technology to effectively utilize it, along with increases in computer horsepower to process it, has enabled AI to move from mainly scientific and academic usage to enterprise software usage, becoming viable as a mainstream business solution.

This time around, the AI movement seems to have tailwinds in the form of a few critical enabling and supporting factors:

As the tide is turning for AI, innovation- and technology-driven corporations and their leaders (think IBM, Yahoo, Salesforce, Uber and Apple) have become believers in the power of AI and are willing to commit long term funds to this pursuit. The desire to inject new technology into their operations to drive corporate efficiency or improve workflows (both customer and back-office) has convinced many large corporations that this new iteration of AI is worth investigating and worth investment, through acquisitions and investments in startups that innovate independently.

In addition, tech heavyweights Google, Facebook, Amazon, IBM and Microsoft recently joined forces to create a new AI partnership dedicated to advancing public understanding of the sector, as well as coming up with standards for future researchers to abide by.

With so much support from these titans of industry, it's no wonder that the latest burst of AI interest seems to be gaining momentum rather than losing it. But are the techniques used today truly what is meant by AI?

As is typically the case in questions of technology and business, the answer is yes and no. Just like there are varying levels of complexity in other areas of technology (consider the range of databases from simple to complex, from SQL to NoSQL, or the range of programming languages: LOGO, BASIC, C, Perl, Swift, R), there are many technologies and techniques that naturally fall under the moniker AI.

AI as a technology is nebulous. Would machine learning be possible without access to large amounts of data from a traditional SQL or a cutting-edge NoSQL environment? Could an AI package be effectively used without modern concepts of APIs and REST services?

In my opinion, all of the tools commonly covered and discussed today are a part of the larger AI family of technologies that are going to drive the next generation of consumer, corporate and government solutions.

On the other hand, you have to remember that true artificial intelligence won't happen anytime soon, at least not in the form of systems that can act independently of human intervention. A true AI system has the ability to learn on its own, making connections and improving on past scenarios without relying on programmed algorithms to improve capabilities. This is, thankfully, still the realm of science fiction.

What is called AI even today is, in fact, the leveraging of machines with minimal (though not zero) human intelligence to solve specific, narrow problems. Humans still have the upper hand, as machines cannot think on their own and rely on human intervention (through code) and past data to be able to work. They can be better at finding patterns that humans can miss and find similarities between objects, but this is possible only through sheer horsepower. With today's state-of-the-art they will never be able to invent something totally new or independently address a problem that they have never come across before.

Most of what passes for AI today is the sophisticated application of statistical techniques to data, invented in the past four-to-five decades, not real intelligence. Please note, however, that this designation does not detract from the immense capabilities afforded by the newfound resurgence of AI. It may not be fundamentally intelligent, but is no less useful and impressive.

While the core technologies of AI are similar to those of prior years and the term AI has become somewhat of a catchall for a variety of different techniques, the biggest difference, and perhaps what will spur future cycles of interest, is the thirst for and commitment to more from both corporations and consumers. With continued funding, research and interest in AI, and with advances in the tools and techniques needed to capitalize on them, perhaps one day we will finally witness the emergence of true, independent AI.


Read more:

The AI revolution: Is the future finally now? - Techworld Australia

Posted in Ai | Comments Off on The AI revolution: Is the future finally now? – Techworld Australia

Magic AI: these are the optical illusions that trick, fool, and flummox … – The Verge

Posted: at 11:49 pm

There's a scene in William Gibson's 2010 novel Zero History, in which a character embarking on a high-stakes raid dons what the narrator refers to as the ugliest T-shirt in existence, a garment which renders him invisible to CCTV. In Neal Stephenson's Snow Crash, a bitmap image is used to transmit a virus that scrambles the brains of hackers, leaping through computer-augmented optic nerves to rot the target's mind. These stories, and many others, tap into a recurring sci-fi trope: that a simple image has the power to crash computers.

But the concept isn't fiction, not completely, anyway. Last year, researchers were able to fool a commercial facial recognition system into thinking they were someone else just by wearing a pair of patterned glasses. A sticker overlay with a hallucinogenic print was stuck onto the frames of the specs. The twists and curves of the pattern look random to humans, but to a computer designed to pick out noses, mouths, eyes, and ears, they resembled the contours of someone's face, any face the researchers chose, in fact. These glasses won't delete your presence from CCTV like Gibson's ugly T-shirt, but they can trick an AI into thinking you're the Pope. Or anyone you like.

These types of attacks are bracketed within a broad category of AI cybersecurity known as adversarial machine learning, so called because it presupposes the existence of an adversary of some sort, in this case a hacker. Within this field, the sci-fi tropes of ugly T-shirts and brain-rotting bitmaps manifest as "adversarial images" or "fooling images", but adversarial attacks can take other forms, including audio and perhaps even text. The existence of these phenomena was discovered independently by a number of teams in the early 2010s. They usually target a type of machine learning system known as a classifier, something that sorts data into different categories, like the algorithms in Google Photos that tag pictures on your phone as food, holiday, and pets.

To a human, a fooling image might look like a random tie-dye pattern or a burst of TV static, but show it to an AI image classifier and it'll say with confidence: "Yep, that's a gibbon," or "My, what a shiny red motorbike." Just as with the facial recognition system that was fooled by the psychedelic glasses, the classifier picks up visual features of the image that are so distorted a human would never recognize them.

These patterns can be used in all sorts of ways to bypass AI systems, and have substantial implications for future security systems, factory robots, and self-driving cars, all places where AI's ability to identify objects is crucial. "Imagine you're in the military and you're using a system that autonomously decides what to target," Jeff Clune, co-author of a 2015 paper on fooling images, tells The Verge. "What you don't want is your enemy putting an adversarial image on top of a hospital so that you strike that hospital. Or if you are using the same system to track your enemies; you don't want to be easily fooled [and] start following the wrong car with your drone."

These scenarios are hypothetical, but perfectly viable if we continue down our current path of AI development. "It's a big problem, yes," Clune says, "and I think it's a problem the research community needs to solve."

The challenge of defending from adversarial attacks is twofold: not only are we unsure how to effectively counter existing attacks, but we keep discovering more effective attack variations. The fooling images described by Clune and his co-authors, Jason Yosinski and Anh Nguyen, are easily spotted by humans. They look like optical illusions or early web art, all blocky color and overlapping patterns, but there are far more subtle approaches to be used.


One type of adversarial image, referred to by researchers as a "perturbation", is all but invisible to the human eye. It exists as a ripple of pixels on the surface of a photo, and can be applied to an image as easily as an Instagram filter. These perturbations were first described in 2013, and in a 2014 paper titled "Explaining and Harnessing Adversarial Examples", researchers demonstrated how flexible they were. That pixely shimmer is capable of fooling a whole range of different classifiers, even ones it hasn't been trained to counter. A recently revised study named "Universal Adversarial Perturbations" made this feature explicit by successfully testing the perturbations against a number of different neural nets, exciting a lot of researchers last month.
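A simplified sketch of the "fast gradient sign" idea from that 2014 paper, applied to a tiny hand-written logistic-regression classifier rather than a deep network; the weights and input are invented, and the point is only that a small, structured nudge along the loss gradient can flip a model's decision.

```python
# Toy "fast gradient sign" perturbation on a hand-written logistic regression.
# All numbers are invented; this is not any production model.
import numpy as np

w = np.array([1.0, -2.0, 0.5])          # hypothetical classifier weights
b = 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([0.4, 0.1, 0.3])           # original input, scored as class 1
y = 1.0                                  # its true label
p = sigmoid(w @ x + b)

grad_x = (p - y) * w                     # gradient of cross-entropy loss w.r.t. x
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)    # small, structured nudge per pixel/feature

print("original score:", round(float(p), 3))                       # above 0.5
print("adversarial score:", round(float(sigmoid(w @ x_adv + b)), 3))  # drops below 0.5
```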

Using fooling images to hack AI systems does have its limitations: first, it takes more time to craft scrambled images in such a way that an AI system thinks it's seeing a specific image, rather than making a random mistake. Second, you often, but not always, need access to the internal code of the system you're trying to manipulate in order to generate the perturbation in the first place. And third, attacks aren't consistently effective. As shown in "Universal Adversarial Perturbations", what fools one neural network 90 percent of the time may only have a success rate of 50 or 60 percent on a different network. (That said, even a 50 percent error rate could be catastrophic if the classifier in question is guiding a self-driving semi truck.)

To better defend AI against fooling images, engineers subject them to adversarial training. This involves feeding a classifier adversarial images so it can identify and ignore them, like a bouncer learning the mugshots of people banned from a bar. Unfortunately, as Nicolas Papernot, a graduate student at Pennsylvania State University who's written a number of papers on adversarial attacks, explains, even this sort of training is weak against computationally intensive strategies (i.e., throw enough images at the system and it'll eventually fail).
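A rough, self-contained sketch of that training loop on the same kind of toy classifier: at each step the current model is attacked, and the crafted examples are folded back into training. This only illustrates the idea under invented data; it is not how any production system hardens its networks.

```python
# Toy adversarial training: attack the model each step, then train on both the
# clean batch and the crafted fooling batch. Data and hyperparameters are invented.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)          # true label is just the sign of feature 0

w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.2

def fgsm(X, y, w, b, eps):
    """Craft perturbed copies of X that push the loss uphill."""
    p = sigmoid(X @ w + b)
    grad_X = (p - y)[:, None] * w        # loss gradient w.r.t. each input
    return X + eps * np.sign(grad_X)

for epoch in range(200):
    X_adv = fgsm(X, y, w, b, eps)        # attack the current model...
    for batch in (X, X_adv):             # ...then train on clean AND adversarial copies
        p = sigmoid(batch @ w + b)
        w -= lr * (batch.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))

robust_acc = np.mean((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y)
print("accuracy on freshly crafted adversarial inputs:", round(float(robust_acc), 2))
```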

To add to the difficulty, it's not always clear why certain attacks work or fail. One explanation is that adversarial images take advantage of a feature found in many AI systems known as decision boundaries. These boundaries are the invisible rules that dictate how a system can tell the difference between, say, a lion and a leopard. A very simple AI program that spends all its time identifying just these two animals would eventually create a mental map. Think of it as an X-Y plane: in the top right it puts all the leopards it's ever seen, and in the bottom left, the lions. The line dividing these two sectors, the border at which lion becomes leopard or leopard becomes lion, is known as the decision boundary.

The problem with the decision boundary approach to classification, says Clune, is that it's too absolute, too arbitrary. "All you're doing with these networks is training them to draw lines between clusters of data rather than deeply modeling what it is to be leopard or a lion." Systems like these can be manipulated in all sorts of ways by a determined adversary. To fool the lion-leopard analyzer, you could take an image of a lion and push its features to grotesque extremes, but still have it register as a normal lion: give it claws like digging equipment, paws the size of school buses, and a mane that burns like the Sun. To a human it's unrecognizable, but to an AI checking its decision boundary, it's just an extremely liony lion.
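A toy numeric illustration (all numbers invented) of why a bare decision boundary is "too absolute": for a linear lion-versus-leopard classifier, exaggerating a lion's features just pushes the point further from the dividing line, so the grotesque version is scored as an even more confident lion.

```python
# Toy linear classifier: positive side of the line = lion, negative = leopard.
# Features and weights are made up purely to show the geometry.
import numpy as np

w = np.array([2.0, -1.5])                   # hypothetical boundary normal

def classify(features):
    score = float(w @ features)
    return ("lion" if score > 0 else "leopard"), round(score, 1)

normal_lion    = np.array([1.0, 0.4])       # mane size, spot density (invented features)
grotesque_lion = normal_lion * 100          # claws like digging equipment, sun-bright mane

print(classify(normal_lion))     # ('lion', 1.4)
print(classify(grotesque_lion))  # ('lion', 140.0): same side of the line, more confident
```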


As far as we know, adversarial images have never been used to cause real-world harm. But Ian Goodfellow, a research scientist at Google Brain who co-authored "Explaining and Harnessing Adversarial Examples", says they're not being ignored. "The research community in general, and especially Google, take this issue seriously," says Goodfellow. "And we're working hard to develop better defenses." A number of groups, like the Elon Musk-funded OpenAI, are currently conducting or soliciting research on adversarial attacks. The conclusion so far is that there is no silver bullet, but researchers disagree on how much of a threat these attacks are in the real world. There are already plenty of ways to hack self-driving cars, for example, that don't rely on calculating complex perturbations.

Papernot says such a widespread weakness in our AI systems isn't a big surprise: classifiers are trained to have good average performance, but not necessarily worst-case performance, which is typically what is sought after from a security perspective. That is to say, researchers are less worried about the times the system fails catastrophically than how well it performs on average. One way of dealing with dodgy decision boundaries, suggests Clune, is simply to make image classifiers that more readily suggest they don't know what something is, as opposed to always trying to fit data into one category or another.

Meanwhile, adversarial attacks also invite deeper, more conceptual speculation. The fact that the same fooling images can scramble the minds of AI systems developed independently by Google, Mobileye, or Facebook, reveals weaknesses that are apparently endemic to contemporary AI as a whole.

"It's like all these different networks are sitting around saying why don't these silly humans recognize that this static is actually a starfish," says Clune. "That is profoundly interesting and mysterious; that all these networks are agreeing that these crazy and non-natural images are actually of the same type. That level of convergence is really surprising people."


For Clune's colleague, Jason Yosinski, the research on fooling images points to an unlikely similarity between artificial intelligence and intelligence developed by nature. He noted that the same sort of category errors made by AI and their decision boundaries also exist in the world of zoology, where animals are tricked by what scientists call supernormal stimuli.

These stimuli are artificial, exaggerated versions of qualities found in nature that are so enticing to animals that they override their natural instincts. This behavior was first observed around the 1950s, when researchers used it to make birds ignore their own eggs in favor of fakes with brighter colors, or to get red-bellied stickleback fish to fight pieces of trash as if they were rival males. The fish would fight trash, so long as it had a big red belly painted on it. Some people have suggested human addictions, like fast food and pornography, are also examples of supernormal stimuli. In that light, one could say that the mistakes AIs are making are only natural. Unfortunately, we need them to be better than that.

Read more from the original source:

Magic AI: these are the optical illusions that trick, fool, and flummox ... - The Verge

Posted in Ai | Comments Off on Magic AI: these are the optical illusions that trick, fool, and flummox … – The Verge

Why Facebook Is Doubling Down on Its AI — The Motley Fool – Motley Fool

Posted: April 12, 2017 at 8:41 am

Imagine the computing power necessary to handle the activity of 1.23 billion users playing 100 million hours of video and uploading 95 million posts of photos and videos. Facebook, Inc. (NASDAQ:FB) processes that and more every day. Now imagine sifting through all that data to perform facial recognition, describe the contents of photos and video, and populate your news feed with relevant content. Those tasks are all handled by Facebook's servers that have been infused with its homegrown brand of artificial intelligence (AI).

The training that takes place behind the scenes has been the job of Facebook's AI brain named Big Sur, which has been handling the task since 2015. Now, the system that has been at the heart of the company's AI activity for the last two years is being replaced by a newer, faster model. Facebook revealed in a blog post that its successor is dubbed Big Basin, and this new platform boasts some impressive credentials. The upgraded AI server can train machine-learning models that are 30% larger in about half the time.

Facebook data center in Prineville, Oregon. Image source: Facebook.

Machine learning is a combination of software models and algorithms that sort through data at lightning speeds to make deductions based on what it finds. This might sound somewhat futuristic, but this AI technology is being used today. An example would be running facial recognition and comparing a newly uploaded photo with all the photos in its database to suggest a name when tagging a friend in the picture. "If you've logged into Facebook, it's very likely you've used some type of AI system we've been developing," according to Kevin Lee, a technical program manager at Facebook and author of the blog.

Lee detailed that the new system was built using graphics processing units (GPUs) from NVIDIA Corporation (NASDAQ:NVDA) to achieve this new level of AI computing. The platform hosts eight Tesla P100 GPU accelerators, which NVIDIA describes as "the most advanced data center GPU ever built." The system also features NVIDIA NVLink, which connects the GPUs in what Facebook describes as a "hybrid mesh cube." The speed of these systems can bog down because of a bottleneck at the connections, which the NVLink seeks to prevent. Lee indicated that the architecture was similar to NVIDIA's DGX-1 -- its AI supercomputer in a box. Lee quipped that "Big Basin behaves like a JBOD [just a bunch of disks] -- or as we call it, JBOG, just a bunch of GPUs."

Facebook's recent emphasis on live video streaming provided challenges because of how quickly it was adopted by users. Larger and more complex data sets required additional computing power at higher speeds. The company uses AI to not only classify its real-time video but in other applications, such as speech and text translations. This new platform provides the data capacity and speed to train Facebook's next generation of AI models.

Facebook plans to release the design specifications of its AI server via the Open Compute Project in the near future. Once that happens, any ambitious computer engineer with sufficient time and money could conceivably build one of these systems in his or her basement.

Big Basin AI Server can train 30% larger data sets than its predecessor in half the time! Image source: Facebook.

Facebook isn't the only company creating innovative AI applications. Alphabet's (NASDAQ:GOOGL) (NASDAQ:GOOG) Google revealed the details behind its Tensor Processing Unit (TPU), a specialized chip it designed and has been using to power its AI systems -- called deep neural networks -- for the last two years. These machine-learning systems are created using algorithms and software modeled on the human brain, and they learn by analyzing vast quantities of data.

In a Google blog post, hardware engineer Norm Jouppi described the capabilities of the TPU chip, saying that it processed AI workloads 15 to 30 times faster than conventional CPUs and GPUs while achieving a 30 to 80 times reduction in energy consumption. Google realized six years ago that the AI speech-recognition technology that it was deploying to users required the storage of vast amounts of data, and worried that it would have to double the number of data centers just to keep up. Google credits the development of the TPU for improvements in its Translate, Image Search, and Photos programs, and for its victory over the world's best Go players.

It would be difficult to quantify the value these innovations bring to the companies that develop them, and even more so since they are being shared with the world. Very few of the companies involved in AI research reveal any direct financial benefit, though the advances make their products more useful to consumers, as the resulting technologies affect the things we do and the products we use every day.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Danny Vena owns shares of Alphabet (A shares) and Facebook. Danny Vena has the following options: long January 2018 $640 calls on Alphabet (C shares) and short January 2018 $650 calls on Alphabet (C shares). The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Facebook, and Nvidia. The Motley Fool has a disclosure policy.

Continue reading here:

Why Facebook Is Doubling Down on Its AI -- The Motley Fool - Motley Fool

Posted in Ai | Comments Off on Why Facebook Is Doubling Down on Its AI — The Motley Fool – Motley Fool

Does Google’s AutoDraw AI grasp modern art better than you? – CNET

Posted: at 8:41 am

Turns out Google is a better artist than you.

The company has unveiled a new web-based drawing tool called AutoDraw, which will turn your terrible Microsoft Paint skills into actual pictures.

Just scrawl something on your phone screen with your finger (or on your desktop with a mouse) and Google's machine learning will detect what you're trying to draw, and fix it up for you.

The tech company has been making great strides in artificial intelligence, scooping squads of AI enthusiasts into its fold, tackling the toxic mess that is online comments, firing up special-purpose AI chips and once again pitting its machines against human players at the board game Go. For AutoDraw, the company used the same technology behind its "Quick, Draw!" AI Experiment, which tried to teach a neural network to recognize doodles. Now Google has upped the ante, partnering with artists to create some of the suggested sketches in AutoDraw.

But we're forgetting the big question: Does a neural network really understand modern art?

From left, "The Birth of Day" by Miro, the writer's impression and Google's rendition.

Apparently Miró is a no-go.

But what if we feed it something even simpler? Say, the ultra minimalist animal sketches of sometime doodler, Pablo Picasso?

Google understands Picasso's animal sketches better than all of us.

Google knows a parrot when it sees one.

But apparently Google doesn't like camels.

Apparently even the smartest machines have their limits. A single-line drawing on a smartphone, copying one of the greatest artists of the 20th century -- excuse the pun, but that's where Google draws the line.


The rest is here:

Does Google's AutoDraw AI grasp modern art better than you? - CNET

Posted in Ai | Comments Off on Does Google’s AutoDraw AI grasp modern art better than you? – CNET

AI wins $290000 in Chinese poker competition – BBC News

Posted: at 8:41 am


An artificial intelligence program has beaten a team of six poker players at a series of exhibition matches in China. The AI system, called Lengpudashi, won a landslide victory and $290,000 (£230,000) in the five-day competition. It is the second time ...

Read the original post:

AI wins $290000 in Chinese poker competition - BBC News

Posted in Ai | Comments Off on AI wins $290000 in Chinese poker competition – BBC News
