Daily Archives: April 13, 2017

Google’s AI has learned how to draw by looking at your doodles – The Verge

Posted: April 13, 2017 at 11:49 pm

Remember last year when Google released an AI-powered web tool that played Pictionary with your doodles? Well, surprise! Those doodles you drew have now been used to teach Google's AI how to draw. The resulting program is called Sketch-RNN and, frankly, it draws about as well as a toddler. But like any new parents, Google's AI scientists are proud as punch.

To create Sketch-RNN, Google Brain researchers David Ha and Douglas Eck collected more than half a million user-drawn sketches from the Google tool Quick, Draw! Each time a user drew something on the app, it recorded not only the final image, but also the order and direction of every pen stroke used to make it. The resulting data gives a more complete picture (ho, ho, ho) of how we really draw.
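That stroke-order data can be pictured as a short list of pen offsets. Below is a minimal, hypothetical sketch of the idea in Python; the field layout is assumed for illustration (Sketch-RNN's actual format also distinguishes a lifted pen from the end of a drawing):

```python
def strokes_to_points(strokes, start=(0, 0)):
    """Replay (dx, dy, pen_down) offsets into absolute pen positions."""
    x, y = start
    points = []
    for dx, dy, pen_down in strokes:
        x, y = x + dx, y + dy
        points.append((x, y, pen_down))
    return points

# A tiny doodle: three moves with the pen down, then the pen lifts.
doodle = [(5, 0, 1), (0, 5, 1), (-5, 0, 1), (0, -5, 0)]
print(strokes_to_points(doodle))
```

Storing offsets rather than pixels is what lets a model learn how a doodle is drawn, not just what the finished picture looks like.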

All in all, Ha and Eck gathered 70,000 training doodles for 75 different categories, including "cat," "firetruck," "garden," "owl," "pig," "face," and "mermaid." Their goal? To create a machine that can draw and generalize abstract concepts in a manner similar to humans. And it can! After studying this data, it learned first to draw based on human input, as seen below:

Notice, as seen most clearly in the penultimate row, that the AI is not just copying the human doodle line for line. The input on the left-hand side shows a cat with three eyes, but the AI reproduces the concept, not the sketch itself, and it knows enough to know that three eyes is one too many.

Next, Sketch-RNN learned to draw the objects without copying a starting sketch. (For more on how deep neural networks process and imitate data, check out our AI explainer.)

But what's the benefit of getting neural networks to sketch things in the first place, when they're already pretty good at making photo-realistic images? Well, as Ha and Eck explain, although doodles look childish to us, they're also masterpieces of abstraction and data compression. Doodles, they say, tell us something about how people represent and reconstruct images of the world around them. In other words, they're more human. And once you've taught an AI to sketch, you can deploy it in all sorts of fun ways. Sketch-RNN can complete doodles started by someone else:

And it can combine different doodles. So, in the picture below, the neural network has been asked to draw some combination of the categories "cat" and "chair." The result? Weird cat-chair chimeras:

It can also create what are called "latent space interpolations": looking at any number of doodle subjects and combining them in different ratios to create new sketches with multiple characteristics. In the group of drawings on the left, below, the AI has combined four different doodles: the pig, rabbit, crab, and face.
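Mechanically, a latent space interpolation is just a weighted blend of the latent vectors the model assigns to each doodle. A hedged sketch, with tiny made-up vectors standing in for the model's real, much larger ones:

```python
def interpolate(z_a, z_b, t):
    """Blend two latent vectors: t=0 returns z_a, t=1 returns z_b."""
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

# Invented 3-d "latent codes" for two doodle categories.
z_cat = [0.2, -1.0, 0.5]
z_pig = [1.0, 0.0, -0.5]

halfway = interpolate(z_cat, z_pig, 0.5)  # an even cat-pig blend
print(halfway)
```

Sweeping `t` from 0 to 1 produces the rows of gradually morphing sketches shown in the article, once each blended vector is decoded back into strokes.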

These drawings are obviously quite basic, but the methods used to create them are so interesting, and so potentially useful. In the future, AI programs like Sketch-RNN could be used as creative aids for designers, architects, and artists. If someone is struggling with a certain picture or design, they could get an AI to absorb their work and spit out a few more suggested variations. The images the computer produces might not be useful in themselves, but they could spark something in the human. Is this AI creativity? It's difficult to know what else to call it.


Accelerator program Zeroth wants to find Asia’s top AI and machine learning startups – TechCrunch

Posted: at 11:49 pm

Zeroth is an accelerator program that is out to fix the lack of talent and investment options for artificial intelligence (AI) in Asia, and it has just opened applications for its second program, which takes place in Hong Kong in late July.

"There's almost nothing that won't be touched by AI," high-profile investor and former Google China head Kai-Fu Lee said at our most recent China event. And yet, Asia's biggest firms still lag their U.S. peers on AI.

Things are moving in the right direction, but seemingly for a select few. Didi Chuxing recently set up a U.S.-based lab, but it appears to be some way behind Google and Uber. Even then, retaining talent is tough. The recent departure of Andrew Ng, who led Baidu's research lab in the U.S., has highlighted the struggle that China's (and Asia's) biggest firms have in hiring and retaining top talent in the field of artificial intelligence (AI) and deep learning.

Tak Lo, founding partner of Zeroth, left his role at early-stage venture firm Mind Fund to start the project in 2015. It is Asia's first dedicated AI and machine learning accelerator program.

"It wasn't that there wasn't enough talent, there just aren't many investors [in the Asia region] focused on artificial intelligence," said Lo, whose past projects have included Tech City in London, the SPH Plug and Play program in Singapore and a stint in the U.S. armed forces.

The idea is to take 20 companies per batch, with Zeroth offering up to $120,000 in optional funding. That's a slightly different approach from its inaugural batch, which took in 10 companies, each giving up six percent equity in exchange for $20,000.

The program is fairly industry-agnostic, and Lo said his preference is to work with early-stage companies because that's where he feels the program can have the most influence.

"Our mission isn't that all companies should have AI, but everyone to be able to develop in this tech," he said. "We want it to be in the hands of many, not just a few."

Particular areas of focus include edge computing, natural language, autonomous vehicles, agritech, human-machine interface technology and ethical computing. Its first cohort includes a diverse range of startups, from chatbots (Clare and Botimize) to deep-learning-powered recognition (DT42), social media marketing (Rocco) and crop disease diagnosis (Sero).

Beyond a standard demo day, the event was live-streamed to selected investors in Seoul, Beijing, Tokyo and Singapore.

"We figured it was the best way to build community and trust in these companies. I think it worked pretty well as an overall experiment," Lo said. "The companies are fundraising, we view investor day as the kick-off."

The first Zeroth program was located in Hong Kong, but it has temporarily relocated to Tokyo before the next program opens in Hong Kong. In Tokyo, Lo is keen to recruit venture firms, corporate partners and, of course, founders. Already, Mind Fund has invested directly in the program, while others, such as 500 Startups, are closely involved.

"There's an amount of patient capital here and the liking of deeper tech is a big pro," Lo said. "Where else we go will be dictated by market conditions and where we are after this cohort."

A number of regionally-focused accelerator programs struggled to make the grade as a business and shut up shop over the past year. Lo is candid that, for now, theres no business model set in stone for Zeroth.

"We're still trying to figure it out, but the AI focus is one because AI is hot but also potentially acquisitive," he said. "Companies realize this is the next thing." One hypothesis is that M&A deals could provide short-term cash flow, while another option is working with funds that are a little more long term.

Zeroth partner Tak Lo

On the mentoring side, Lo said Zeroth's advisors have created more than $1.7 billion in AI company value. Among them is Antoine Blondeau, who worked on the precursor to what became the now Apple-owned Siri and whose Sentient.ai startup is the world's most funded AI company.

The others are Hajime Hotta, who sold mobile ad firm Citrius Technologies to Yahoo Japan; Skype and Kazaa co-founder Jaan Tallinn; Bangalore-based Sachin Unni; Alexandre Winter, who sold Placemeter to Netgear; early-stage investor Takahiro Shoji; Techstars' Eamonn Carey; and investor Nathan Benaich.

Lo said the focus on AI has been validated by similar strategies from Y Combinator, which recently announced a dedicated AI track, and others who have been increasing their interest in the space.

"We're proud of the fact we are first, we took a bet and realized this would be good," he said. "We may be ahead of these guys and they are more established."

Zeroth's second batch is due to start in late July. Applications are open now until June 15; more details and the application form can be found on the website.

Update: The original version of this article has been updated to correct that Zeroth is temporarily relocating to Tokyo but the next program will be held in Hong Kong.


AI robots learning racism, sexism and other prejudices from humans, study finds – The Independent

Posted: at 11:49 pm

Artificially intelligent robots and devices are being taught to be racist, sexist and otherwise prejudiced by learning from humans, according to new research.

A massive study of millions of words online looked at how close different terms were to each other in text, in the same way that automatic translators use machine learning to establish what language means.

Some of the results were stunning.

The researchers found male names were more closely associated with career-related terms than female ones, which were more closely associated with words related to the family.

This link was stronger than the non-controversial findings that musical instruments and flowers were pleasant and weapons and insects were unpleasant.

Female names were also strongly associated with artistic terms, while male names were found to be closer to maths and science ones.

There were strong associations, known as word embeddings, between European or American names and pleasant terms, and African-American names and unpleasant terms.

The effects of such biases on AI can be profound.

For example, Google Translate, which learns what words mean from the way people use them, translates the Turkish sentence "O bir doktor" into "he is a doctor" in English, even though Turkish pronouns are not gender-specific. So it can actually mean "he is a doctor" or "she is a doctor."

But change "doktor" to "hemşire," meaning nurse, in the same sentence and this is translated as "she is a nurse."

Last year, a Microsoft chatbot called Tay was given its own Twitter account and allowed to interact with the public.

It turned into a racist, pro-Hitler troll with a penchant for bizarre conspiracy theories in just 24 hours. "[George W] Bush did 9/11 and Hitler would have done a better job than the monkey we have now," it wrote. "Donald Trump is the only hope we've got."

In a paper about the new study in the journal Science, the researchers wrote: "Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes."

"Our findings suggest that if we build an intelligent system that learns enough about the properties of language to be able to understand and produce it, in the process it will also acquire historical cultural associations, some of which can be objectionable."

"Already, popular online translation systems incorporate some of the biases we study. Further concerns may arise as AI is given agency in our society."

"If machine-learning technologies used for, say, résumé screening were to imbibe cultural stereotypes, it may result in prejudiced outcomes."

The researchers said the AI was not to blame for such problematic effects.

"Notice that the word embeddings know these properties of flowers, insects, musical instruments, and weapons with no direct experience of the world and no representation of semantics other than the implicit metrics of words' co-occurrence statistics with other nearby words."

But changing the way AI learns would risk missing out on unobjectionable meanings and associations of words.

"We have demonstrated that word embeddings encode not only stereotyped biases but also other knowledge, such as the visceral pleasantness of flowers or the gender distribution of occupations," the researchers wrote.

The study also implies that humans may develop prejudices partly because of the language they speak.

"Our work suggests that behaviour can be driven by cultural history embedded in a term's historic use. Such histories can evidently vary between languages," the paper said.

"Before providing an explicit or institutional explanation for why individuals make prejudiced decisions, one must show that it was not a simple outcome of unthinking reproduction of statistical regularities absorbed with language."

"Similarly, before positing complex models for how stereotyped attitudes perpetuate from one generation to the next or from one group to another, we must check whether simply learning language is sufficient to explain (some of) the observed transmission of prejudice."

One of the researchers, Professor Joanna Bryson, of Princeton University, told The Independent that instead of changing the way AI learns, the way it expresses itself should be altered.

So the AI would still "hear" racism and sexism, but would have a moral code that would prevent it from expressing these same sentiments.

Such filters can be controversial. The European Union has passed laws to ensure the terms of AI filters are made public.

For Professor Bryson, the key finding of the research was not so much about AI but humans.

"I think the most important thing here is we have understood more about how we are transmitting information, where words come from and one of the ways in which implicit biases are affecting us all," she said.


Magic AI: these are the optical illusions that trick, fool, and flummox … – The Verge

Posted: at 11:49 pm

There's a scene in William Gibson's 2010 novel Zero History in which a character embarking on a high-stakes raid dons what the narrator refers to as the ugliest T-shirt in existence: a garment which renders him invisible to CCTV. In Neal Stephenson's Snow Crash, a bitmap image is used to transmit a virus that scrambles the brains of hackers, leaping through computer-augmented optic nerves to rot the target's mind. These stories, and many others, tap into a recurring sci-fi trope: that a simple image has the power to crash computers.

But the concept isn't fiction, not completely, anyway. Last year, researchers were able to fool a commercial facial recognition system into thinking they were someone else just by wearing a pair of patterned glasses. A sticker overlay with a hallucinogenic print was stuck onto the frames of the specs. The twists and curves of the pattern look random to humans, but to a computer designed to pick out noses, mouths, eyes, and ears, they resembled the contours of someone's face, any face the researchers chose, in fact. These glasses won't delete your presence from CCTV like Gibson's ugly T-shirt, but they can trick an AI into thinking you're the Pope. Or anyone you like.

These types of attacks are bracketed within a broad category of AI cybersecurity known as "adversarial machine learning," so called because it presupposes the existence of an adversary of some sort, in this case, a hacker. Within this field, the sci-fi tropes of ugly T-shirts and brain-rotting bitmaps manifest as "adversarial images" or "fooling images," but adversarial attacks can take many forms, including audio and perhaps even text. The existence of these phenomena was discovered independently by a number of teams in the early 2010s. They usually target a type of machine learning system known as a classifier, something that sorts data into different categories, like the algorithms in Google Photos that tag pictures on your phone as "food," "holiday," and "pets."

To a human, a fooling image might look like a random tie-dye pattern or a burst of TV static, but show it to an AI image classifier and it'll say with confidence: "Yep, that's a gibbon," or "My, what a shiny red motorbike."

These patterns can be used in all sorts of ways to bypass AI systems, and have substantial implications for future security systems, factory robots, and self-driving cars, all places where AI's ability to identify objects is crucial. "Imagine you're in the military and you're using a system that autonomously decides what to target," Jeff Clune, co-author of a 2015 paper on fooling images, tells The Verge. "What you don't want is your enemy putting an adversarial image on top of a hospital so that you strike that hospital. Or if you are using the same system to track your enemies; you don't want to be easily fooled [and] start following the wrong car with your drone."

These scenarios are hypothetical, but perfectly viable if we continue down our current path of AI development. "It's a big problem, yes," Clune says, "and I think it's a problem the research community needs to solve."

The challenge of defending from adversarial attacks is twofold: not only are we unsure how to effectively counter existing attacks, but we keep discovering more effective attack variations. The fooling images described by Clune and his co-authors, Jason Yosinski and Anh Nguyen, are easily spotted by humans. They look like optical illusions or early web art, all blocky color and overlapping patterns, but there are far more subtle approaches to be used.


One type of adversarial image, referred to by researchers as a "perturbation," is all but invisible to the human eye. It exists as a ripple of pixels on the surface of a photo, and can be applied to an image as easily as an Instagram filter. These perturbations were first described in 2013, and in a 2014 paper titled "Explaining and Harnessing Adversarial Examples," researchers demonstrated how flexible they were. That pixely shimmer is capable of fooling a whole range of different classifiers, even ones it hasn't been trained to counter. A recently revised study named "Universal Adversarial Perturbations" made this feature explicit by successfully testing the perturbations against a number of different neural nets, exciting a lot of researchers last month.
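The "fast gradient sign" method introduced in "Explaining and Harnessing Adversarial Examples" can be sketched in a few lines: every pixel is nudged a small step epsilon in the direction that increases the classifier's loss. The image and gradient values below are invented for illustration; in practice the gradient comes from backpropagation through the target network:

```python
import numpy as np

def fgsm_perturb(image, loss_gradient, epsilon=0.007):
    """Add epsilon * sign(gradient) to each pixel, keeping values in [0, 1]."""
    adversarial = image + epsilon * np.sign(loss_gradient)
    return np.clip(adversarial, 0.0, 1.0)

image = np.array([0.2, 0.8, 0.5])      # a made-up 3-pixel "image"
gradient = np.array([1.0, -2.0, 0.0])  # stand-in loss gradient
print(fgsm_perturb(image, gradient))
```

Because each pixel moves by at most epsilon, the perturbed image is visually indistinguishable from the original even though the classifier's output can change completely.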

Using fooling images to hack AI systems does have its limitations. First, it takes more time to craft scrambled images in such a way that an AI system thinks it's seeing a specific image, rather than making a random mistake. Second, you often, but not always, need access to the internal code of the system you're trying to manipulate in order to generate the perturbation in the first place. And third, attacks aren't consistently effective. As shown in "Universal Adversarial Perturbations," what fools one neural network 90 percent of the time may only have a success rate of 50 or 60 percent on a different network. (That said, even a 50 percent error rate could be catastrophic if the classifier in question is guiding a self-driving semi truck.)

To better defend AI against fooling images, engineers subject them to "adversarial training." This involves feeding a classifier adversarial images so it can identify and ignore them, like a bouncer learning the mugshots of people banned from a bar. Unfortunately, as Nicolas Papernot, a graduate student at Pennsylvania State University who's written a number of papers on adversarial attacks, explains, even this sort of training is weak against computationally intensive strategies (i.e., throw enough images at the system and it'll eventually fail).

To add to the difficulty, it's not always clear why certain attacks work or fail. One explanation is that adversarial images take advantage of a feature found in many AI systems known as "decision boundaries." These boundaries are the invisible rules that dictate how a system can tell the difference between, say, a lion and a leopard. A very simple AI program that spends all its time identifying just these two animals would eventually create a mental map. Think of it as an X-Y plane: in the top right it puts all the leopards it's ever seen, and in the bottom left, the lions. The line dividing these two sectors, the border at which a lion becomes a leopard or a leopard a lion, is known as the decision boundary.

The problem with the decision boundary approach to classification, says Clune, is that it's too absolute, too arbitrary. "All you're doing with these networks is training them to draw lines between clusters of data rather than deeply modeling what it is to be a leopard or a lion." Systems like these can be manipulated in all sorts of ways by a determined adversary. To fool the lion-leopard analyzer, you could take an image of a lion and push its features to grotesque extremes, but still have it register as a normal lion: give it claws like digging equipment, paws the size of school buses, and a mane that burns like the Sun. To a human it's unrecognizable, but to an AI checking its decision boundary, it's just an extremely liony lion.
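The lion-leopard example can be reduced to a toy decision boundary. Everything here is invented for illustration: a single line on the X-Y plane, with anything on one side labeled a lion no matter how extreme its features get:

```python
def classify(x, y):
    """Label a point by which side of the invented boundary y = x it falls on."""
    return "leopard" if y > x else "lion"

print(classify(1.0, 3.0))    # a typical leopard
print(classify(100.0, 5.0))  # grotesquely exaggerated, but still a "lion"
```

The rule never asks what a lion is, only which side of the line a point falls on, which is exactly the absoluteness Clune is criticizing.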


As far as we know, adversarial images have never been used to cause real-world harm. But Ian Goodfellow, a research scientist at Google Brain who co-authored "Explaining and Harnessing Adversarial Examples," says they're not being ignored. "The research community in general, and especially Google, take this issue seriously," says Goodfellow. "And we're working hard to develop better defenses." A number of groups, like the Elon Musk-funded OpenAI, are currently conducting or soliciting research on adversarial attacks. The conclusion so far is that there is no silver bullet, but researchers disagree on how much of a threat these attacks are in the real world. There are already plenty of ways to hack self-driving cars, for example, that don't rely on calculating complex perturbations.

Papernot says such a widespread weakness in our AI systems isn't a big surprise, as classifiers are trained to have "good average performance, but not necessarily worst-case performance, which is typically what is sought after from a security perspective." That is to say, researchers are less worried about the times the system fails catastrophically than about how well it performs on average. One way of dealing with dodgy decision boundaries, suggests Clune, is simply to make image classifiers that more readily admit they don't know what something is, as opposed to always trying to fit data into one category or another.

Meanwhile, adversarial attacks also invite deeper, more conceptual speculation. The fact that the same fooling images can scramble the minds of AI systems developed independently by Google, Mobileye, or Facebook, reveals weaknesses that are apparently endemic to contemporary AI as a whole.

"It's like all these different networks are sitting around saying, why don't these silly humans recognize that this static is actually a starfish," says Clune. "That is profoundly interesting and mysterious; that all these networks are agreeing that these crazy and non-natural images are actually of the same type. That level of convergence is really surprising people."


For Clune's colleague Jason Yosinski, the research on fooling images points to an unlikely similarity between artificial intelligence and intelligence developed by nature. He noted that the same category errors made by AI and their decision boundaries also exist in the world of zoology, where animals are tricked by what scientists call "supernormal stimuli."

These stimuli are artificial, exaggerated versions of qualities found in nature that are so enticing to animals that they override their natural instincts. This behavior was first observed around the 1950s, when researchers used it to make birds ignore their own eggs in favor of fakes with brighter colors, or to get red-bellied stickleback fish to fight pieces of trash as if they were rival males. The fish would fight trash, so long as it had a big red belly painted on it. Some people have suggested human addictions, like fast food and pornography, are also examples of supernormal stimuli. In that light, one could say that the mistakes AIs are making are only natural. Unfortunately, we need them to be better than that.


The AI revolution: Is the future finally now? – Techworld Australia

Posted: at 11:49 pm

Over the last several decades, the evolution of artificial intelligence has followed an uncertain path reaching incredible highs and new levels of innovation, often followed by years of stagnation and disillusionment as the technology fails to deliver on its promises.

Today we are once again experiencing growing interest in the future possibilities for AI. From voice-powered personal assistants like Google Home and Alexa, to Netflix's predictive recommendations, Nest learning thermostats and chatbots used by banks and retailers, there are countless examples of AI seeping into everyday life, and the potential of future applications seems limitless... again.

Despite the mounting interest and the proliferation of new technologies, is this current wave that much different from what we have seen in the past? Do the techniques of the modern AI movement (machine learning, data mining, deep learning, natural language processing and neural nets) deserve to be captured under the AI moniker, or is it just more of the same?

In the earlier peaks of interest, the broad set of activities typically bunched together under the term "AI" were reserved for the labs and, if they ever saw the light of day, were severely constrained by what the technology of the day could deliver and were limited by cost. Many of the algorithms and structures central to AI have been known for some time; rather, previous surges of AI had unrealistic expectations of immediate consumer applications that could never be accomplished given the limitations of the data and techniques available at the time.

Within the last five years, however, the combination of enormous amounts of data, improvements to database technology to effectively utilize it, and increases in computer horsepower to process it have enabled AI to move from mainly scientific and academic usage to enterprise software usage, becoming viable as a mainstream business solution.

This time around, the AI movement seems to have tailwinds in the form of a few critical enabling and supporting factors.

As the tide is turning for AI, innovation- and technology-driven corporations and their leaders (think IBM, Yahoo, Salesforce, Uber and Apple) have become believers in the power of AI and are willing to commit long-term funds to this pursuit. The desire to inject new technology into their operations to drive corporate efficiency or improve workflows (both customer and back-office) has convinced many large corporations that this new iteration of AI is worth investigating and worth investment through acquisitions and investments in startups that innovate independently.

In addition, tech heavyweights Google, Facebook, Amazon, IBM and Microsoft recently joined forces to create a new AI partnership dedicated to advancing public understanding of the sector, as well as coming up with standards for future researchers to abide by.

With so much support from these titans of industry, it's no wonder that the latest burst of AI interest seems to be gaining momentum rather than losing it. But are the techniques used today truly what is meant by AI?

As is typically the case in questions of technology and business, the answer is yes and no. Just as there are varying levels of complexity in other areas of technology (consider the range of databases from simple to complex, from SQL to NoSQL, or the range of programming languages: LOGO, BASIC, C, Perl, Swift, R), there are many technologies and techniques that naturally fall under the moniker AI.

AI as a technology is nebulous. Would machine learning be possible without access to large amounts of data from a traditional SQL or a cutting-edge NoSQL environment? Could an AI package be effectively used without modern concepts of APIs and REST services?

In my opinion, all of the tools commonly covered and discussed today are a part of the larger AI family of technologies that are going to drive the next generation of consumer, corporate and government solutions.

On the other hand, you have to remember that true artificial intelligence won't happen anytime soon, at least not in the form of systems that can act independently of human intervention. A true AI system has the ability to learn on its own, making connections and improving on past scenarios without relying on programmed algorithms to improve its capabilities. This is, thankfully, still the realm of science fiction.

What is called AI even today is, in fact, the leveraging of machines with minimal, though not zero, human intelligence to solve specific, narrow problems. Humans still have the upper hand, as machines cannot think on their own and rely on human intervention (through code) and past data to be able to work. They can be better at finding patterns that humans miss and at finding similarities between objects, but this is possible only through sheer horsepower. With today's state of the art, they will never be able to invent something totally new or independently address a problem they have never come across before.

Most of what passes for AI today is the sophisticated application to data of statistical techniques invented over the past four to five decades, not real intelligence. Note, however, that this designation does not detract from the immense capabilities afforded by the newfound resurgence of AI. It may not be fundamentally intelligent, but it is no less useful and impressive.

While the core technologies of AI are similar to those of prior years, and the term AI has become somewhat of a catchall for a variety of different techniques, the biggest difference, and perhaps what will spur future cycles of interest, is the thirst for and commitment to more from both corporations and consumers. With continued funding, research and interest in AI, and with advances in the tools and techniques needed to capitalize on them, perhaps one day we will finally witness the emergence of true, independent AI.



Bad News: Artificial Intelligence Is Racist, Too – Live Science

Posted: at 11:49 pm

When Microsoft released an artificially intelligent chatbot named Tay on Twitter last March, things took a predictably disastrous turn. Within 24 hours, the bot was spewing racist, neo-Nazi rants, much of which it picked up by incorporating the language of Twitter users who interacted with it.

Unfortunately, new research finds that Twitter trolls aren't the only way that AI devices can learn racist language. In fact, any artificial intelligence that learns from human language is likely to come away biased in the same ways that humans are, according to the scientists.

The researchers experimented with a widely used machine-learning system called the Global Vectors for Word Representation (GloVe) and found that every sort of human bias they tested showed up in the artificial system. [Super-Intelligent Machines: 7 Robotic Futures]

"It was astonishing to see all the results that were embedded in these models," said Aylin Caliskan, a postdoctoral researcher in computer science at Princeton University. Even AI devices that are "trained" on supposedly neutral texts like Wikipedia or news articles came to reflect common human biases, she told Live Science.

GloVe is a tool used to extract associations from texts, in this case a standard corpus of language pulled from the World Wide Web.

Psychologists have long known that the human brain makes associations between words based on their underlying meanings. A tool called the Implicit Association Test uses reaction times to demonstrate these associations: People see a word like "daffodil" alongside pleasant or unpleasant concepts like "pain" or "beauty" and have to quickly associate the terms using a key press. Unsurprisingly, flowers are more quickly associated with positive concepts, while weapons, for example, are more quickly associated with negative concepts.

The IAT can be used to reveal unconscious associations people make about social or demographic groups, as well. For example, some IATs that are available on the Project Implicit website find that people are more likely to automatically associate weapons with black Americans and harmless objects with white Americans.

There are debates about what these results mean, researchers have said. Do people make these associations because they hold personal, deep-seated social biases they aren't aware of, or do they absorb them from language that is statistically more likely to put negative words in close conjunction with ethnic minorities, the elderly and other marginalized groups?

Caliskan and her colleagues developed an IAT for computers, which they dubbed the WEAT, for Word-Embedding Association Test. This test measured the strength of associations between words as represented by GloVe, much as the IAT measures the strength of word associations in the human brain.
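The WEAT computation itself is compact: score each target word by how much closer it sits, in cosine similarity, to one attribute set than the other, then compare the two target groups. The sketch below is a minimal illustration with made-up 2-D vectors, not the study's real GloVe embeddings; every vector and name here is invented for the example.

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: how much more strongly target words X
    associate with attribute set A vs. B, compared with target words Y.
    Each argument is a list of embedding vectors."""
    def s(w):
        # differential association of one word with the two attribute sets
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

# Toy 2-D "embeddings" (illustrative only, not real GloVe vectors):
flowers    = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
insects    = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]
pleasant   = [np.array([1.0, 0.0])]
unpleasant = [np.array([0.0, 1.0])]

# A positive effect size means flowers lean toward "pleasant"
print(weat_effect_size(flowers, insects, pleasant, unpleasant))
```

With real 300-dimensional GloVe vectors the same function applies unchanged; only the inputs grow.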

For every association and stereotype tested, the WEAT returned the same results as the IAT. The machine-learning tool reproduced human associations between flowers and pleasant words; insects and unpleasant words; musical instruments and pleasant words; and weapons and unpleasant words. In a more troubling finding, it saw European-American names as more pleasant than African-American names. It also associated male names more readily with career words, and female names more readily with family words. Men were more closely associated with math and science, and women with the arts. Names associated with old people were more unpleasant than names associated with young people.

"We were quite surprised that we were able to replicate every single IAT that was performed in the past by millions," Caliskan said.

Using a similar second method, the researchers also found that the machine-learning tool was able to accurately represent facts about the world from its semantic associations. Comparing the GloVe word-embedding results with real U.S. Bureau of Labor Statistics data on the percentage of women in occupations, Caliskan found a 90 percent correlation between professions that GloVe saw as "female" and the actual percentage of women in those professions.
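That comparison is, at bottom, a correlation between two lists of numbers: one "femaleness" score per occupation from the embedding, and one percentage per occupation from the labor statistics. A minimal sketch, with invented numbers rather than the study's actual data:

```python
import numpy as np

# Hypothetical values for illustration (not the paper's data):
# how strongly each occupation's vector leans "female" in the embedding
embedding_score = np.array([0.62, -0.41, 0.55, -0.30, 0.10])
# BLS-style share of women in each occupation, in percent
pct_women       = np.array([90.0, 13.0, 80.0, 25.0, 47.0])

# Pearson correlation between the two lists
r = np.corrcoef(embedding_score, pct_women)[0, 1]
print(f"correlation: {r:.2f}")
```

A high r, as reported in the study, means the embedding's gender associations track real-world occupational demographics closely.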

In other words, programs that learn from human language do get "a very accurate representation of the world and culture," Caliskan said, even if parts of that culture, like stereotypes and prejudice, are problematic. The AI is also bad at understanding context that humans grasp easily. For example, an article about Martin Luther King Jr. being jailed for civil rights protests in Birmingham, Alabama, in 1963 would likely associate a lot of negative words with African-Americans. A human would reasonably interpret the story as one of righteous protest by an American hero; a computer would add another tally to its "black = jail" category.

Retaining accuracy while getting AI tools to understand fairness is a big challenge, Caliskan said. [A Brief History of Artificial Intelligence]

"We don't think that removing bias would necessarily solve these problems, because it's probably going to break the accurate representation of the world," she said.

The new study, published online today (April 12) in the journal Science, is not surprising, said Sorelle Friedler, a computer scientist at Haverford College who was not involved in the research. It is, however, important, she said.

"This is using a standard underlying method that many systems are then built off of," Friedler told Live Science. In other words, biases are likely to infiltrate any AI that uses GloVe, or that learns from human language in general.

Friedler is involved in an emerging field of research called Fairness, Accountability and Transparency in Machine Learning. There are no easy ways to solve these problems, she said. In some cases, programmers might be able to explicitly tell the system to automatically disregard specific stereotypes, she said. In any case involving nuance, humans may need to be looped in to make sure the machine doesn't run amok. The solutions will likely vary, depending on what the AI is designed to do, Caliskan said: Are they for search applications, for decision-making, or for something else?

In humans, implicit attitudes actually don't correlate very strongly with explicit attitudes about social groups. Psychologists have argued about why this is: Are people just keeping mum about their prejudices to avoid stigma? Does the IAT not actually measure prejudice that well? But it appears that people at least have the ability to reason about right and wrong even with their biased associations, Caliskan said. She and her colleagues think humans will need to be involved, and programming code will need to be transparent, so that people can make value judgments about the fairness of machines.

"In a biased situation, we know how to make the right decision," Caliskan said, "but unfortunately, machines are not self-aware."

Original article on Live Science.

See the original post:

Bad News: Artificial Intelligence Is Racist, Too - Live Science

Posted in Artificial Intelligence | Comments Off on Bad News: Artificial Intelligence Is Racist, Too – Live Science

Artificial intelligence: How to avoid racist algorithms – BBC News

Posted: at 11:49 pm


BBC News
Artificial intelligence: How to avoid racist algorithms
There is growing concern that many of the algorithms that make decisions about our lives - from what we see on the internet to how likely we are to become victims or instigators of crime - are trained on data sets that do not include a diverse range of ...

Continued here:

Artificial intelligence: How to avoid racist algorithms - BBC News

Posted in Artificial Intelligence | Comments Off on Artificial intelligence: How to avoid racist algorithms – BBC News

Artificial intelligence: coming soon to a hospital near you – STAT

Posted: at 11:49 pm

Human intelligence has long powered hospitals and health care. We rely on doctors, nurses, and a variety of other clinicians to solve problems and create new solutions. Advances in artificial intelligence are now making it possible to apply this form of computer-based thinking to health care.

As the chief technology officer for a new state-of-the-art advanced medical learning facility, I have been closely watching developments in artificial intelligence. Here are three areas in which I believe it will begin making a difference sooner rather than later: training, surgical robots, and data mining.

Inside their operating rooms, surgeons are the captains of the ship. They possess extensive medical training and the skills to apply it. But they rely on the cooperation and contributions of the entire team to make the most of those skills. Unfortunately, few surgeons get training in how to effectively lead people with different educational and skill backgrounds.


Creating environments in which all members of an operating room team can come together to learn and practice communication skills is a significant challenge. There are, of course, standard communication protocols for teamwork in health care. They have been gathered into a national program known as TeamSTEPPS. However, the opportunity to really learn to communicate in the operating room seldom exists, because different players on the team get their education and training via separate professional organizations and events.

New robot no substitute for humans in the operating room

To overcome this problem, my colleagues and I at the Florida Hospital Nicholson Center worked with a game development company called ARA/Virtual Heroes to create a virtual world in which a surgeon can practice team communication and leadership. This game runs on the same type of avatar intelligence underpinning teammates in the Call of Duty games. The automated avatars give audio feedback and guidance to help the surgeon make the right choices. A collection of rules, conditions, and scripts guides the surgeon through a scenario in the operating room and teaches him or her which actions and decisions are correct and which ones aren't. Game scenarios have decision branches that lead to favorable and unfavorable outcomes. As with most such games, there is just a single path through the scenario that leads to a successful conclusion, with a corresponding score derived from making correct and incorrect decisions.
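A branching scenario of this kind can be represented as a small graph of decision nodes, each choice carrying a score delta, which is roughly how such rule-and-script systems are structured. The sketch below is a generic, invented illustration; the prompts, node names, and scores are hypothetical, not the Nicholson Center's actual scenario content.

```python
# Each node maps to (prompt, {choice: (next_node, score_delta)}).
# Terminal nodes have no prompt and no outgoing choices.
SCENARIO = {
    "timeout": ("Team requests a pre-incision timeout. Do you:", {
        "lead the checklist": ("confirm", +10),
        "skip it to save time": ("complication", -10),
    }),
    "confirm": ("Nurse flags an instrument count discrepancy. Do you:", {
        "stop and recount": ("success", +10),
        "proceed anyway": ("complication", -10),
    }),
    "complication": (None, {}),   # unfavorable terminal node
    "success": (None, {}),        # favorable terminal node
}

def run(choices, start="timeout"):
    """Walk the scenario graph for a sequence of choices; return the
    terminal node reached and the accumulated score."""
    node, score = start, 0
    for pick in choices:
        _prompt, branches = SCENARIO[node]
        node, delta = branches[pick]
        score += delta
    return node, score

print(run(["lead the checklist", "stop and recount"]))  # → ('success', 20)
print(run(["skip it to save time"]))                    # → ('complication', -10)
```

Only one path through the graph reaches the favorable terminal node, matching the single-success-path design described above.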

Major advances in robotic surgery let doctors perform many types of complex procedures with more precision, flexibility, and control than is possible with other conventional techniques. Robots like the da Vinci Surgical System provide a platform for translating a surgeon's movements into precise actions with advanced instruments. Current robots, however, are not aware of the anatomy they show the surgeon, the procedures they are being used to perform, or what the surgeon intends to do. They are fantastic tools, but they aren't yet intelligent assistants.

Future generations of robotic surgery platforms will be more aware of the procedure being performed and use that knowledge and perception to give the surgeon intelligent assistance. Companies like Verb Surgical, a collaboration between Google and Ethicon Endo-Surgery, have indicated that their robot will include machine learning and awareness. That would let it identify potential issues during a procedure. They also plan to link the robot to a cloud supercomputer service like IBM's Watson, so knowledge of thousands of similar procedures will be accessible to both the surgeon and the robot to improve the performance of each operation.

Watson goes to Asia: Hospitals use supercomputer for cancer treatment

Capabilities like those should greatly enhance the level of expertise brought into the operating room of the future, combining the skills and knowledge of the surgeon with the experience of thousands of his or her colleagues and the artificial intelligence of the world's leading computer scientists.

Hospital systems collect data on thousands of patients each year. But each record slides into multiple disparate and disconnected databases. Hospitals know a great deal about individual patients, but very little about the aggregate health of their populations. Data mining and artificial intelligence have the potential to bring that information together into an integrated whole that can be analyzed to create a valuable picture of the health of any defined population while maintaining the anonymity of the individuals involved.

In 1854, Dr. John Snow proved that cholera was being spread through the water system of London by creating his now-famous death map, which showed the houses of those dying from the disease and which system was delivering their water. This big data analysis of an important health problem, carried out by a single human intelligence, saved countless lives.

The databases in modern hospital systems contain information that may identify the causes of disease for thousands of different issues on command. They can address questions like: Which medical services are best suited to which communities in the city? Where are new disease outbreaks originating? Which communities would benefit from which health education programs? Government is increasingly holding local health care providers responsible for the health of the populations they serve. They expect these providers to deliver clinics, vaccinations, and screenings. However, you can't know what to provide if you don't know the demographic and health makeup of your population.

We know so little about the aggregate health of communities because there aren't enough minds and hours to collect and analyze massive datasets. Big data and artificial intelligence will bring computer minds to these problems and significantly improve our ability to offer effective health care to individuals and communities.

Roger Smith, PhD, is chief technology officer for the Florida Hospital Nicholson Center and a graduate faculty member at the University of Central Florida.

Roger Smith can be reached at FHNC.Info@flhosp.org. Follow Roger on Twitter @NCGlobal.

View post:

Artificial intelligence: coming soon to a hospital near you - STAT

Posted in Artificial Intelligence | Comments Off on Artificial intelligence: coming soon to a hospital near you – STAT

Will Artificial Intelligence Make Email Marketers Obsolete? – MediaPost Communications

Posted: at 11:49 pm

Forrester Research says intelligent agents (cognitive marketing, artificial intelligence, machine learning, chatbots, etc.) will eliminate 6% of U.S. jobs by 2021. But machine learning and artificial intelligence (AI) are already making inroads in marketing areas usually reserved for humans.

Is it time to push the panic button? Will marketer jobs as we know them today become obsolete in five, 10 or 15 years?

No on the panic button, but some marketing roles will clearly be at risk in the future.

Artificial intelligence and machine learning will bring significant changes to marketing, but those changes will most likely come through automating repetitive tasks that machines can do more efficiently.

This will free up marketers to work on activities where the human touch is more reliable, such as planning, strategy, context and correction.

What AI means for marketers

You've seen all the hype over self-driving cars, right? Marketing AI is similar. Both autonomous-driving technology for cars and marketing AI will handle the most routine, repeatable tasks -- but humans, at least for the near future, will set and correct the destination and keep their hands on or near the wheel.

A couple of definitions:

Machine learning is moving into the email technology mainstream, as you might have read in industry publications like MediaPost, or heard at conferences where early-adopter companies and technology vendors have been talking about it for a couple of years.

Subject lines are an excellent proving ground for AI and machine learning, because computers can figure out which combination of words, phrases and images in a subject line works best to meet the goals you set for a specific kind of email.
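One common way to frame automated subject-line testing is as a multi-armed bandit: send each candidate line to small batches, track open rates, and route more volume toward the current winner while still occasionally exploring the others. The sketch below is a generic epsilon-greedy illustration with simulated open rates, not any vendor's actual algorithm; the subject lines and rates are invented.

```python
import random

def epsilon_greedy_subject_test(subjects, true_open_rates,
                                rounds=5000, eps=0.1, seed=42):
    """Pick subject lines round by round, learning which gets the most opens.
    `true_open_rates` simulates audience behavior; in production you would
    observe real opens instead of drawing random ones."""
    rng = random.Random(seed)
    opens = [0] * len(subjects)
    sends = [0] * len(subjects)
    for _ in range(rounds):
        if rng.random() < eps or not any(sends):
            i = rng.randrange(len(subjects))  # explore a random line
        else:
            # exploit the line with the best observed open rate so far
            i = max(range(len(subjects)),
                    key=lambda j: opens[j] / sends[j] if sends[j] else 0.0)
        sends[i] += 1
        if rng.random() < true_open_rates[i]:
            opens[i] += 1
    best = max(range(len(subjects)),
               key=lambda j: opens[j] / sends[j] if sends[j] else 0.0)
    return subjects[best], sends

subjects = ["Last chance: 20% off", "Your April picks", "We miss you"]
winner, sends = epsilon_greedy_subject_test(subjects, [0.22, 0.15, 0.08])
print(winner, sends)
```

With enough rounds, the send counts concentrate on the line with the highest simulated open rate, which is exactly the repetitive optimization loop a machine can run faster than a marketer.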

Machine learning can select content assets in emails for hyper-personalization, such as choosing which offers to send to different segments based on behavior and other data. More than dynamic content, these assets represent what works best for the brand.

Time to start sending out resumes?

Again, not yet.

AI/machine learning will likely take over jobs with these characteristics:

See why subject-line writing is a natural fit? Now, the jobs most at risk:

Even if your job is on this list, you don't need to panic unless it's the only thing you do. For most marketers, these tasks take up much of their time yet contribute the least to job satisfaction.

How much more could you get done if AI took over some or all of those tasks? Maybe you could finally have more time to develop new strategic plans or programs, integrate email with other channels, and more.

Embrace it now; profit from it later

One of my favorite quotes is from German economist Rudi Dornbusch: "In economics, things take longer to happen than you think they will, and then they happen faster than you thought they could."

In other words, these vast shifts will probably take longer than we think to change the marketing landscape, but they are coming, and when they take hold -- as with the transformation to mobile-first digital marketing -- it will happen quickly.

Take time to understand the possibilities and get ahead of the changes, and look at your own work to see where you might need to upgrade your own skills so you don't get replaced by a robot.

Until next time, take it up a notch!


See more here:

Will Artificial Intelligence Make Email Marketers Obsolete? - MediaPost Communications

Posted in Artificial Intelligence | Comments Off on Will Artificial Intelligence Make Email Marketers Obsolete? – MediaPost Communications

Opinion: How artificial intelligence could transform police body cameras – MarketWatch

Posted: at 11:49 pm

The first generation of police body cameras was introduced in 2005 in Great Britain. Their primary task was, and still is, to record police interactions with the public as well as to gather evidence at crime scenes. During a typical day, every police officer captures hours and hours of footage. To separate the irrelevant data from incriminating evidence, every recording needs to be reviewed and edited.

Now imagine sifting through hundreds of hours of police footage, looking for a specific piece of information, such as a verbal exchange between an officer and a suspect, a clear shot of license plates, or a suspect entering a particular building or meeting a particular person. It's a rather time-consuming task, so a lot of this footage ends up archived to be reviewed later, forfeiting potentially valuable or time-sensitive evidence.

The solution could be the use of artificial intelligence. Machine-learning algorithms within an AI system can be trained to differentiate between people and objects, recognize various events, such as car chases and shootings, and even identify people caught on tape through its facial-recognition abilities. It also can do some of the officers' paperwork.

Once AI recognizes patterns, it tags them appropriately, enabling police officers to browse for a particular scene within a video using keywords, just like via online search engines.
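That kind of keyword browsing maps naturally onto an inverted index: each tag the model emits points to the (video, timestamp) pairs where it fired, so a keyword query becomes a set intersection. A toy sketch, with hypothetical camera IDs and tags:

```python
from collections import defaultdict

class FootageIndex:
    """Toy keyword index over AI-generated tags: each detected event is
    stored as a (video_id, timestamp) pair under every tag assigned to it."""
    def __init__(self):
        self.index = defaultdict(list)

    def tag(self, video_id, timestamp, tags):
        for t in tags:
            self.index[t.lower()].append((video_id, timestamp))

    def search(self, *keywords):
        # return scenes matching every keyword (a simple AND query)
        sets = [set(self.index[k.lower()]) for k in keywords]
        return sorted(set.intersection(*sets)) if sets else []

idx = FootageIndex()
idx.tag("cam07", 132.5, ["license plate", "vehicle"])
idx.tag("cam07", 415.0, ["vehicle", "pursuit"])
idx.tag("cam12", 88.0, ["pursuit"])
print(idx.search("vehicle", "pursuit"))  # → [('cam07', 415.0)]
```

Real systems layer ranking, time ranges, and access controls on top, but the core lookup is this simple reverse mapping from tags to moments in the footage.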

It also can do some of the paperwork that consumes a good chunk of police officers' time. AI can transcribe verbal interactions as well as handle images, and then enter both into reports. With AI working hard as an officer's personal secretary, officers will have more time to concentrate on more important matters.

Axon Enterprise Inc. (AAXN), formerly Taser International and best known for its Taser electroshock weapon, has since diversified into technology products for police officers. Its Axon AI division is now working to create a system that will use enhanced image/video/audio processing and integrate AI and deep-learning capabilities.

As you can see, there are many reasons why using AI and body cameras in police work is a good idea.

However, there's another side to this story.

AI's facial-recognition ability raises privacy concerns, because it allows the police to track down any individual captured on the footage simply by running the video through the algorithm.


This footage could be abused by police departments as well as other officials. In addition, hackers, criminals and others could get their hands on it, since the data are stored in the cloud. Although faces can be redacted from a video as it's being uploaded, it's only a matter of time before systems evolve enough to enable dynamic facial recognition, making these videos even more powerful tools in the hands of individuals or organizations that seek to establish mass surveillance, a dream come true for any authoritarian regime.

Finally, some may ask what guarantee there is to prevent for-profit companies from acquiring and reselling police footage on the black market or on the dark web. Criminals can use that information to learn how police operate, for example.

Still, I believe these cameras should exist, since their benefits outweigh their disadvantages. Could the same be said in five, 10 or 15 years? Only time will tell. In the meantime, you could start wearing these while running errands to trick facial-recognition systems.

See the original post here:

Opinion: How artificial intelligence could transform police body cameras - MarketWatch

Posted in Artificial Intelligence | Comments Off on Opinion: How artificial intelligence could transform police body cameras – MarketWatch