
Category Archives: Ai

Google’s DeepMind pits AI against AI to see if they fight or cooperate – The Verge

Posted: February 10, 2017 at 3:14 am

In the future, it's likely that many aspects of human society will be controlled either partly or wholly by artificial intelligence. AI computer agents could manage systems from the quotidian (e.g., traffic lights) to the complex (e.g., a nation's whole economy), but leaving aside the problem of whether or not they can do their jobs well, there is another challenge: will these agents be able to play nice with one another? What happens if one AI's aims conflict with another's? Will they fight, or work together?

Google's AI subsidiary DeepMind has been exploring this problem in a new study published today. The company's researchers decided to test how AI agents interacted with one another in a series of "social dilemmas." This is a rather generic term for situations in which individuals can profit from being selfish but where everyone loses if everyone is selfish. The most famous example of this is the prisoner's dilemma, where two individuals can choose to betray one another for a prize, but lose out if both choose this option.
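
For readers who want that structure made concrete, here is a minimal sketch of a prisoner's dilemma payoff table in Python. The numeric payoffs are illustrative assumptions, not values from the DeepMind study; the point is only that mutual defection leaves both players worse off than mutual cooperation.

```python
# Illustrative prisoner's dilemma payoffs (higher is better).
# These numbers are hypothetical, chosen only to satisfy the dilemma's
# ordering: temptation > mutual cooperation > mutual defection > sucker.
PAYOFFS = {
    # (action_a, action_b): (payoff_a, payoff_b)
    ("cooperate", "cooperate"): (3, 3),  # both do reasonably well
    ("cooperate", "defect"):    (0, 5),  # the lone defector profits most
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # everyone loses when everyone is selfish
}

for actions, payoffs in PAYOFFS.items():
    print(actions, "->", payoffs)
```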

As explained in a blog post from DeepMind, the company's researchers tested how AI agents would perform in these sorts of situations by dropping them into a pair of very basic video games.

In the first game, Gathering, two players have to collect apples from a central pile. They have the option of tagging the other player with a laser beam, temporarily removing them from the game and giving the first player a chance to collect more apples.

In the second game, Wolfpack, two players have to hunt a third in an environment filled with obstacles. Points are claimed not just by the player that captures the prey, but by all players near the prey when it's captured.
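
To make the two reward structures concrete, here is a toy Python sketch based only on the rules described above. DeepMind's actual environments are gridworlds with richer dynamics, and the point values here are invented for illustration.

```python
def gathering_reward(collected_apple: bool) -> int:
    """Gathering: an agent scores only by collecting apples. Zapping a rival
    earns nothing directly; it just frees up apples by sidelining the rival."""
    return 1 if collected_apple else 0

def wolfpack_reward(prey_captured: bool, wolves_near_prey: int) -> int:
    """Wolfpack: every wolf near the prey at capture time claims points,
    so cooperative herding pays better than hunting alone."""
    CAPTURE_POINTS = 10  # hypothetical value, not from the paper
    return CAPTURE_POINTS * wolves_near_prey if prey_captured else 0

print(gathering_reward(True))    # 1
print(wolfpack_reward(True, 2))  # 20: two nearby wolves do better than one
```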

What the researchers found was interesting, but perhaps not surprising: the AI agents altered their behavior, becoming more cooperative or antagonistic, depending on the context.

For example, in the Gathering game, when apples were in plentiful supply, the agents didn't really bother zapping one another with the laser beam. But when stocks dwindled, the amount of zapping increased. Perhaps most interestingly, when a more computationally powerful agent was introduced into the mix, it tended to zap the other player regardless of how many apples there were. That is to say, the cleverer AI decided it was better to be aggressive in all situations.

AI agents varied their strategy based on the rules of the game

Does that mean the AI agent thinks being combative is the best strategy? Not necessarily. The researchers hypothesize that the increase in zapping behavior by the more advanced AI was simply because the act of zapping itself is computationally challenging. The agent has to aim its weapon at the other player and track their movement, activities which require more computing power and which take up valuable apple-gathering time. Unless the agent knows these strategies will pay off, it's easier just to cooperate.

Conversely, in the Wolfpack game, the cleverer the AI agent, the more likely it was to cooperate with other players. As the researchers explain, this is because learning to work with the other player to track and herd the prey requires more computational power.

The results of the study, then, show that the behavior of AI agents changes based on the rules they're faced with. If those rules reward aggressive behavior ("Zap that player to get more apples"), the AI will be more aggressive; if they reward cooperative behavior ("Work together and you both get points!"), it will be more cooperative.

That means part of the challenge in controlling AI agents in the future will be making sure the right rules are in place. As the researchers conclude in their blog post: "As a consequence [of this research], we may be able to better understand and control complex multi-agent systems such as the economy, traffic systems, or the ecological health of our planet - all of which depend on our continued cooperation."

See the original post:

Google's DeepMind pits AI against AI to see if they fight or cooperate - The Verge


Can AI make Facebook more inclusive? – Christian Science Monitor

Posted: at 3:14 am

February 9, 2017: When faced with a challenge, what's a tech company to do? Turn to technology, Facebook suggests.

Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. In addition to updating its policies about how advertisers can use data to target users, the social media giant plans to implement a high-tech solution: machine learning.

In recent years, artificial intelligence has climbed off the pages of science fiction novels and into myriad aspects of everyday life, from internet searches to health care decisions to traffic recommendations. But Facebook's new ad-approval algorithms wade into greener territory as the company attempts to utilize machine learning to address, or at least not contribute to, social discrimination.

"Machine learning has been around for half a century at least, but we're only now starting to use it to make a social difference," Geoffrey Gordon, an associate professor in the Machine Learning Department at Carnegie Mellon University in Pittsburgh, Penn., tells The Christian Science Monitor in a phone interview. "It's going to become increasingly important."

Though analysts caution that machine learning has its limits, such an approach also carries tremendous potential for addressing these types of challenges. With that in mind, more companies, particularly in the tech sector, are likely to deploy similar techniques.

Facebook's change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. In October, the nonprofit investigative news site ProPublica tested the company's ad-approval process with an ad for a renter event that explicitly excluded African-Americans. The Fair Housing Act of 1968 prohibits discrimination or showing preference to anyone on the basis of race, making that ad illegal, but it was nevertheless approved within 15 minutes, ProPublica reported.

Why? Because while Facebook doesn't ask users to identify their race and bars advertisers from directing their content at specific races, it has a host of information about users on file: pages they like, what languages they use, and so on. This kind of information is important to advertisers, since it means they can improve their chances of making a sale by targeting their ads toward people who are more likely to buy their product.

But by creating a demographic picture of a user, this data may make it possible to determine an individual's race, and then improperly exclude or target individuals. The company's updated policies emphasize that advertisers cannot discriminate against users on the basis of personal attributes, which Facebook says include "race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition."

There's a fine line between appropriate use of such information and discrimination, as Facebook's head of US multicultural sales, Christian Martinez, explained following the ProPublica investigation: a merchant selling hair care products that are designed for black women will need to reach that constituency, while an apartment building that won't rent to black people or an employer that only hires men [could use the information for] negative exclusion.

For Facebook, the challenge is maintaining that advertising advantage while preventing discrimination, particularly where it's illegal. That's where machine learning comes in.

"We're beginning to test new technology that leverages machine learning to help us identify ads that offer housing, employment or credit opportunities - the types of advertising stakeholders told us they were concerned about," the company said in a statement on Wednesday.

"The computer is just looking for patterns in data that you supply to it," explains Professor Gordon.

That means Facebook can decide which areas it wants to focus on, namely ads that offer housing, employment or credit opportunities, according to the company, and then supply hundreds of examples of these types of ads to a computer.

If a human teaches the computer by initially labeling each ad as discriminatory or nondiscriminatory, a computer can learn to go from the text of the advertising to a prediction of whether it's discriminatory or not, Gordon says.

This kind of machine learning, known as supervised learning, already has dozens of applications, from determining which emails are spam to recognizing faces in a photo.
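
As a rough illustration of the supervised-learning workflow Gordon describes, the sketch below trains a tiny text classifier with scikit-learn. The example ads, labels, and model choice are all invented for illustration; Facebook has not published its actual model or training data.

```python
# A minimal supervised-learning sketch: human-labeled ads in, predictions out.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

ads = [
    "Apartment for rent, all applicants welcome",
    "Open house this weekend, families encouraged to apply",
    "Housing available, no children allowed",
    "Apartment for rent, certain groups need not apply",
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = potentially discriminatory (human-assigned)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(ads, labels)  # the "teaching" step: learn from labeled examples

# Predict on an unseen ad; with four training examples this is only a toy.
print(model.predict(["House for rent, some applicants not welcome"]))
```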

But there are certainly limits to its effectiveness, Gordon adds.

"You're not going to do better than your source of information," he explains. Teaching the machine to recognize discriminatory ads requires lots of examples of similar ads.

"If the distribution of ads that you see changes, the machine learning might stop working," Gordon explains, noting that such changing strategies on the part of content producers can often get them past AI filters, like your email spam filter. Insufficient understanding of details on the part of machines can also lead to high-profile problems, as with Google Photos, which in 2015 mistakenly labeled black people as gorillas.

Teaching the machine also means having a person take the time to go through hundreds of ads and label them, as well as continue to check and correct the machine's work. That makes the system vulnerable to human biases.

"That process of refinement involves sorting, labeling and tagging, which is difficult to do without using assumptions about ethnicity, gender, race, religion and the like," explains Amy Webb, founder and CEO of the Future Today Institute, in an email to the Monitor. "The system learns through a process of real-time experimenting and testing, so once bias creeps in, it can be difficult to remove it."

More overt bias issues have already been observed with AI bots, like Tay, Microsoft's chatbot, which repeated the Nazi slogans fed to it by Twitter users. While this bias may be more subtle, since it is presumably unintentional, it could conceivably create persistent problems.

Unbiased machine learning is the subject of a lot of current research, says Gordon. One answer, he suggests, is having a lot of teachers, since it offers a consensus view of discrimination that may be less vulnerable to individual biases.
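
One simple way to picture the "lot of teachers" idea is majority voting over annotators, sketched below. This is a generic aggregation technique offered for illustration, not a method attributed to Facebook or to Gordon's research.

```python
from collections import Counter

def consensus_label(annotator_labels: list[str]) -> str:
    """Take the majority vote across many human teachers, so that no single
    annotator's bias decides how an ad is labeled."""
    return Counter(annotator_labels).most_common(1)[0][0]

# Five hypothetical annotators disagree; the consensus view wins.
print(consensus_label(["ok", "discriminatory", "ok", "ok", "discriminatory"]))  # ok
```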

Since October, the company has been working with civil rights groups and government organizations to strengthen its nondiscrimination policies. Despite potential obstacles, those groups seem pleased with the progress that the AI system and associated steps represent.

"We like Facebook for following up on its commitment to combatting discriminatory targeting in online advertisements," Wade Henderson, president and chief executive officer of the Leadership Conference on Civil and Human Rights, said in a statement on Wednesday.

And machine learning is likely to become a component in other companies' efforts to combat discrimination, as well as to perform a host of other functions. Though he notes that tech companies are typically fairly secretive about their plans, Gordon suggests that such projects are probably already underway at many of them.

"Facebook isn't the only company doing this as far as I know, all of the tech companies are considering a similar ... question," he concludes.

But is the ability to target advertising on social media platforms really worth the trouble? Professor Webb, who also teaches at the NYU School of Business, sounds a note of caution.

"My behavior in Facebook is not an accurate representation for who I really am, how I think, and how I act, and that's true of most people," she writes. "We sometimes like, comment and post authentically, but more often we're revealing just the aspirational versions of ourselves." That may ultimately not be useful for would-be advertisers.

Read more:

Can AI make Facebook more inclusive? - Christian Science Monitor


How a poker-playing AI could help prevent your next bout of the flu – ExtremeTech

Posted: at 3:14 am

You'd be forgiven for finding little exceptional about the latest defeat of an arsenal of poker champions by the computer algorithm Libratus in Pittsburgh last week. After all, in the last decade or two, computers have made a habit of crushing board game heroes. And at first blush, this appears to be just another iteration in that all-too-familiar story. Peel back a layer, though, and the most recent AI victory is as disturbing as it is compelling. Let's explore the compelling side of the equation before digging into the disturbing implications of the Libratus victory.

By now, many of us are familiar with the idea of AI helping out in healthcare. For the last year or so, IBM has been bludgeoning us with TV commercials about its Jeopardy-winning Watson platform, now being put to use to help oncologists diagnose and treat cancer. And while I wish to take nothing away from that achievement, Watson is a question-answering system with no capacity for strategic thinking. The latter topic belongs to a class of situations more germane to the field of game theory. Game theory is usually tucked under the sub-genre of economics, for it deals with how entities make strategic decisions in the pursuit of self-interest. It's also the discipline from which the AI poker-playing algorithm Libratus gets its smarts.

What does this have to do with health care and the flu? Think of disease as a game between strategic entities. Picture a virus as one player, a player with a certain set of attack and defense strategies. When the virus encounters your body, a game ensues, in which your body defends with its own strategies and hopefully prevails. This game has been going on a long time, with humans having only a marginal ability to control the outcome. Our body's natural defenses have been developed in evolutionary time, and thus have a limited ability to make on-the-fly adaptations.

But what if we could recruit computers to be our allies in this game against viruses? And what if the same reasoning ability that allowed Libratus to prevail over the best poker minds in the world could tackle how to defeat a virus or a bacterial infection? This is in fact the subject of a compelling research paper by Tuomas Sandholm, the designer of the Libratus algorithm. In it, he explains at length how an AI algorithm could be used for drug design and disease prevention.

With only the health of the entire human race at stake, it's hard to imagine a rationale that would discourage us from making use of such a strategic superpower. Now for the disturbing part of the story, and the so-called fable of the sparrows recounted by Nick Bostrom in his singular work Superintelligence: Paths, Dangers, Strategies. In the preface to the book, he tells of a group of sparrows who recruit a baby owl to help defend them against other predators, not realizing the owl might one day grow up and devour them all. In Libratus, an algorithm that is in essence a universal strategic game-playing machine, one likely capable of besting humankind in any number of real-world strategic games, we may have finally met our owl. And while the end of the story between ourselves and Libratus has yet to be determined, prudence would surely advise we tread carefully.

See original here:

How a poker-playing AI could help prevent your next bout of the flu - ExtremeTech


Dynatrace Drives Digital Innovation With AI Virtual Assistant – Forbes

Posted: at 3:14 am


And then there's davis, the artificial intelligence (AI)-driven interface to Dynatrace's deep, real-time knowledge about application and infrastructure performance. Interact with davis via Amazon Alexa's soothing, vaguely British female voice, asking ...


Read more here:

Dynatrace Drives Digital Innovation With AI Virtual Assistant - Forbes


Legaltech 2017: Announcements, AI, And The Future Of Law – Above the Law

Posted: at 3:14 am

I spent most of last week in the Midtown Hilton in New York City attending Legaltech 2017, or Legalweek: The Experience, or some sort of variation of the two. For the most part, it pretty much had the same feel as every other Legaltech I've attended. But I agree with my fellow Above the Law tech columnist, Bob Ambrogi, that ALM deserves kudos for trying to change the focus a bit. It may take a year or two of experimentation to get it right, but at least they're trying.

This year, one of the topics that popped up over and over throughout the conference was artificial intelligence and its potential impact on the practice of law. In part, the AI focus was attributable to the keynote speaker on the opening day of the conference, Andrew McAfee, author of The Second Machine Age (affiliate link). His talk focused on ways that AI would disrupt business as usual in the years to come. His predictions were premised in part on his assertion that key technologies had improved greatly in recent years, and that as a result we're in the midst of a convergence of these technologies such that AI is finally coming of age.

I was particularly excited about this keynote since I'd started reading McAfee's book in mid-December after Klaus Schauser, the CTO of AppFolio, MyCase's parent company, recommended it to me. As McAfee explains in his book, it's abundantly clear that AI is already having an incredible impact on other industries.

But what about the legal industry? I started mulling over this issue last September after attending ILTA in D.C. and writing about a few different legal software platforms grounded in AI concepts. Because I find this topic to be so interesting, I decided to home in on it during my interviews at Legaltech as well, which I livestreamed via Periscope.

First I met with Mark Noel, managing director of professional services at Catalyst Repository Systems. After he shared the news of Catalyst's latest release, Insight Enterprise, a platform for corporate general counsel designed to centralize and streamline discovery processes, we turned to AI and his thoughts on how it will affect the legal industry over the next year. He believes that AI will eventually manage the more tedious parts of practicing law, thus allowing lawyers to focus on the analytical aspects that tend to be more interesting: "Some of the types of tasks lawyers are best at, I don't see AI taking over anytime soon. A lot of what lawyers work with is justice, fairness, and equity, which are more abstract. The ultimate goal of legal practice the human practitioner is going to have to do, but the grunt work and repeatable stuff like discovery, which is becoming more onerous because of growing data volumes, those are the kinds of things these tools can take over for us." You can watch the full interview here.

Next I spoke with AJ Shankar, the founder of Everlaw, an ediscovery platform that recently rolled out an integrated litigation case management tool as well, which I wrote about here. According to AJ, AI is undergoing a renaissance across many different industries. But when it comes to the legal space, it's a different story: "AI is not ready to make the tough judgments that lawyers make, but it is ready to augment human processes. AI will become a very important assistant for you. It will work hand in hand with humans, who will then provide the valuable context." You can watch the full interview here.

I also met with Jack Grow, the president of LawToolBox, which provides calendaring and docketing software, and he talked to me about their latest integration with DocuSign. Then we moved on to AI, and Jack suggested that in the short term, the focus would be on aggregating the data needed to build useful AI platforms for the legal industry: "Over the next year software vendors will figure out how to collect better data that can be consumed for analysis later on, so it can be put into an algorithm to make better use of it. They'll be building the foundation and infrastructure so that they can later take advantage of artificial intelligence." You can watch the full interview here.

And last but certainly not least, I spoke with Jeremiah Kelman, the president of Everchron, a company that I've covered previously, which provides a collaborative case management platform for litigators. Jeremiah predicts that AI will provide very targeted and specific improvements for lawyers: "Replacement of lawyers sounds interesting, but it's more about leveraging the information you have and the data that is out there, and using it to provide insights and give direction to lawyers as they do their tasks and speed up what they do. From research, ediscovery, case management, and things across the spectrum, we'll see it in targeted areas, and you'll get the most impact from leveraging and improving within the existing framework." You can watch the full interview here.

Nicole Black is a Rochester, New York attorney and the Legal Technology Evangelist at MyCase, web-based law practice management software. She's been blogging since 2005, has written a weekly column for the Daily Record since 2007, is the author of Cloud Computing for Lawyers, co-authors Social Media for Lawyers: The Next Frontier, and co-authors Criminal Law in New York. She's easily distracted by the potential of bright and shiny tech gadgets, along with good food and wine. You can follow her on Twitter @nikiblack and she can be reached at niki.black@mycase.com.

Read more:

Legaltech 2017: Announcements, AI, And The Future Of Law - Above the Law


What’s Still Missing From The AI Revolution – Co.Design (blog)

Posted: February 9, 2017 at 6:13 am

Artificial intelligence is a young field full of nearly unlimited potential that remains largely misunderstood by most people. We've come a long way since Watson won Jeopardy in 2011 and IBM formed the business unit with over $1 billion in investments. AI is no longer a one-trick pony. AI technology from IBM Watson and multiple companies such as WayBlazer and SparkCognition has moved firmly into the real world, and is now being used for a variety of daily applications.

We have no doubt come a good distance on what is indeed a very long road. My colleagues at Intel believe that AI will be bigger than the Internet. Software that can understand context and learn about users as individuals is an entirely new paradigm for computing. But many dangers and problems lie ahead if we don't look past the hype and focus on five key areas:

1. Applying AI. It all starts with what you are trying to achieve. Companies are struggling to generate business value with AI. Data scientists are overwhelmed by the complexity and quantity of data, and line-of-business executives for their part are underwhelmed by the tangible output of those data scientists. (See the recently published Harvard Business Review article, "Why You're Not Getting Value from Your Data Science.") Machine learning teams are struggling with which business problems to solve with clear outcomes. What is needed is a clear set of high-value use cases by industry and process domains where AI can create demonstrable business value.

2. Building AI. We have a global talent shortage, and the demand for data scientists continues to grow rapidly, far outpacing the anemic growth in supply. A McKinsey study predicts that by 2018 the number of data science jobs in the United States alone will exceed 490,000, but there will be fewer than 200,000 available data scientists to fill these positions. Globally, demand for data scientists is projected to exceed supply by more than 50 percent by 2018.
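
For concreteness, the US figures quoted above imply a shortfall of roughly 290,000 data scientists, with demand running at well over twice the available supply:

```python
# Quick check of the McKinsey projections cited above (US, by 2018).
projected_jobs = 490_000
available_scientists = 200_000  # "fewer than" this, so the gap is a lower bound

shortfall = projected_jobs - available_scientists
print(f"shortfall: {shortfall:,}")                                        # 290,000
print(f"demand vs supply: {projected_jobs / available_scientists:.2f}x")  # 2.45x
```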

In addition, the training offered at universities is too focused on the mathematical and research aspects of AI and machine learning. Largely missing are strategy, design, insights, and change management. This oversight may have serious consequences for graduating students and their future employers: without a multi-disciplinary approach, we will be graduating data scientists capable of designing an algorithm that is mathematically elegant but doesn't make strategic sense for the business.

3. Testing AI. Quality assurance is one of the most important parts of software development. Products must pass a number of tests before they reach the real world; these include unit testing, boundary testing, stress testing, and other practices. In addition, we need systems that deliver the required training data for machine learning. AI is not deterministic, meaning you can receive different results from the same input data when training it. The software learns in different, nuanced ways each time it is trained. So we need new types of software testing that start with an initial "ground truth" and then verify whether the AI system is doing its job.
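
Here is one way such a "ground truth" test might look in practice: retrain the same model several times and require every run to clear an accuracy bar on held-out data, rather than asserting identical outputs. The dataset, model, and 0.8 threshold below are illustrative assumptions, not a prescribed standard.

```python
# Sketch: testing a non-deterministic learner against a fixed ground truth.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)  # synthetic ground truth
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each seed yields a differently-trained model; the quality bar must hold anyway.
for seed in range(3):
    model = MLPClassifier(max_iter=500, random_state=seed).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    assert accuracy > 0.8, f"run {seed} fell below the bar: {accuracy:.2f}"
    print(f"run {seed}: accuracy {accuracy:.2f}")
```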

4. Governing AI. Every transformative tool that people have created, from the steam engine to the microprocessor, augments human capabilities. Successful use of these tools requires proper governance, and AI is no different; we need governance to ensure that AI is developed the right way and for the right reasons. As the UX designer Mark Rolston wrote last year on Co.Design, "The coming tidal wave of [AI-based decision support software] threatens to give very few people a phenomenal amount of suggestive power over a great many people: the kind of power that is hard to trace and almost impossible to stop."

AI systems should be manageable and able to clearly explain their actions. Algorithm development has so far been driven by the goal of improving performance, at the expense of credibility and traceability, which means we end up with opaque "black boxes." We are already seeing such black boxes rejected by users, regulators, and companies, as they fail the regulatory, compliance and risk requirements of corporations dealing with sensitive personal health and financial information. This issue will only get bigger as AI leads to new processes and longer chains of responsibility.

Last year's White House report, "Preparing for the Future of Artificial Intelligence," outlined key areas of governance.

5. Experiencing AI. One of the biggest stories at the 2017 Consumer Electronics Show in Las Vegas was the exponential growth of Amazon's Alexa ecosystem. It foretold a future of endless smart home and office products accessible via voice, gesture, and other means through Amazon Echo. Another tech giant, chipmaker Nvidia, presented an expansive vision for homes, offices, and cars controlled by AI assistants. Meanwhile, holographic projection, VR headsets, and "merged" reality technologies like Intel's Project Alloy showed that the fundamental way we experience computers is evolving.

When it comes to experiencing AI, researchers tend to focus on creating better algorithms. But there's really much more to be done here. The quality of the user experience determines both the usefulness of the product and its rate of adoption, and this is why I believe design is the next frontier of AI. At the machine intelligence firm CognitiveScale, where I'm chairman, we are facing this challenge with cognitive computing, the type of AI software we create for multinational banks, retailers, healthcare providers, and others. Like a lot of enterprise systems today, our software is cloud-based. So how do you make something as nebulous-sounding as a "cognitive cloud" something that a user would be thrilled to welcome into her daily life?

"Cognitive design" is the subject of a longer article, but here I will hint that a key strategy is to focus on the micro-interactions between man and machine: the fleeting moments that add up to make engagement with an AI system delightful. Just as designers use tools like journey maps to develop a human-centered experience around a particular product or service, companies must practice "cognitive design thinking": creating an experience between man and machine that builds efficacy, trust, and an emotional bond. In the end, outcomes are determined as much by the human element as by the software element.

All of this only touches the surface of the issues and difficulties that lie ahead. AI isn't just software, and it isn't just about making things easier. Its potential for radical social and economic change is enormous, and it will touch every aspect of our personal and public lives, which is why we need to think carefully and ethically about how we apply, build, test, govern, and experience machine intelligence.

Link:

What's Still Missing From The AI Revolution - Co.Design (blog)


Amazon Is Humiliating Google & Apple In The AI Wars – Forbes

Posted: at 6:13 am


Amazon's strategy to make Alexa available absolutely everywhere on every device ever created will give it the advantage it needs in the upcoming AI wars. The news that Amazon is making its AI technology available in the UK to third party developers (it ...


The rest is here:

Amazon Is Humiliating Google & Apple In The AI Wars - Forbes


Real life CSI: Google’s new AI system unscrambles pixelated faces – The Guardian

Posted: at 6:13 am

On the left, 8x8 images; in the middle, the images generated by Google; and on the right, the original 32x32 faces. Photograph: Google

Google's neural networks have achieved the dream of CSI viewers everywhere: the company has revealed a new AI system capable of enhancing an eight-pixel square image, increasing the resolution 16-fold and effectively restoring lost data.

The neural network could be used to increase the resolution of blurred or pixelated faces, in a way previously thought impossible; a similar system was demonstrated for enhancing images of bedrooms, again creating a 32x32 pixel image from an 8x8 one.

Google's researchers describe the neural network as "hallucinating" the extra information. The system was trained by being shown innumerable images of faces, so that it learns typical facial features. A second portion of the system, meanwhile, focuses on comparing 8x8 pixel images with all the possible 32x32 pixel images they could be shrunken versions of.

The two networks working in harmony effectively redraw their best guess of what the original facial image would be. The system allows for a huge improvement over old-fashioned methods of up-sampling: where an older system might simply look at a block of red in the middle of a face, make it 16 times bigger and blur the edges, Google's system is capable of recognising it is likely to be a pair of lips, and draw the image accordingly.
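
For a rough sense of the shapes involved, the untrained PyTorch stand-in below maps an 8x8 RGB crop to a 32x32 estimate. Google's published system combines two networks (a learned prior over faces plus a conditioning network); this single-network toy illustrates only the 16-fold upsampling, not the actual architecture.

```python
import torch
import torch.nn as nn

# Each transposed convolution doubles spatial resolution: 8x8 -> 16x16 -> 32x32.
upscaler = nn.Sequential(
    nn.ConvTranspose2d(3, 32, kernel_size=4, stride=2, padding=1),
    nn.ReLU(),
    nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),
)

low_res = torch.rand(1, 3, 8, 8)  # one random "face" standing in for real data
high_res = upscaler(low_res)
print(high_res.shape)             # torch.Size([1, 3, 32, 32])
```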

Of course, the system isn't capable of magic. While it can make educated guesses based on knowledge of what faces generally look like, it sometimes won't have enough information to redraw a face that is recognisably the same person as the original image. And sometimes it just plain screws up, creating inhuman monstrosities. Nonetheless, the system works well enough to fool people around 10% of the time for images of faces.

Running the same system on pictures of bedrooms is even better: test subjects were unable to correctly pick the original image almost 30% of the time. A score of 50% would indicate the system was creating images indistinguishable from reality.

Although this system exists at the extreme end of image manipulation, neural networks have also presented promising results for more conventional compression purposes. In January, Google announced it would use a machine learning-based approach to compress images on Google+ four-fold, saving users bandwidth by limiting the amount of information that needs to be sent. The system then makes the same sort of educated guesses about what information lies between the pixels to increase the resolution of the final picture.

See the original post:

Real life CSI: Google's new AI system unscrambles pixelated faces - The Guardian


AI could transform the way governments deliver public services – The Guardian

Posted: at 6:13 am

Japan and Singapore are at the forefront of marrying intention and action to harness the power of AI. Photograph: Alamy Stock Photo

Lauding the transformative powers of artificial intelligence (AI) has almost become a cliché, and with good reason. It permeates our everyday lives. AI manifests itself through film or music recommendations, speech recognition on our phones, or face recognition in our digital photo albums. And AI has the potential to transform the way governments design and deliver public services.

Our report, published on 6 February, predicts that almost 250,000 public sector workers could lose their jobs to robots over the next 15 years.

Governments around the world have recognised the potential of AI, but in practice actual application varies widely. Japan and Singapore are at the forefront of marrying intention and action to harness the power of AI.

Japan's prime minister sees it as a vital tool to enhance the country's sluggish economy, and Singapore views it as an essential part of its plan to become a smart nation. This has translated into greater government investment in R&D and, crucially, the creation of partnerships with the private sector and universities around the world. Singapore has partnered with Microsoft to create chatbots to deliver certain public services. Japan has partnered with universities in the US to complement its comparative lack of expertise in machine learning. Across the Atlantic, the Obama administration developed a national plan for artificial intelligence, though it is difficult to assess whether Trump's government will act on it.

National capability is a key factor in progress, as demonstrated by the different specialisms of countries. Japan, for example, is mostly known for its robotics, largely driven by the government's need to care for an increasingly ageing population. Robots are being used to assist the elderly in walking and bathing. The US retains most of the expertise in machine learning, driven by pioneering universities such as MIT and by Silicon Valley.

Like the US, the UK is well placed to harness AI through its universities and private sector, but the government's AI strategy is less clear. This has meant piecemeal application, largely driven by the initiative of individual service providers: the use of chatbots in the London Borough of Enfield, for example, or Moorfields Eye Hospital, which partnered with Google DeepMind to use the powers of AI to increase early diagnosis of degenerative eye conditions.

One weak point for many governments is establishing a clear ethical framework for AI use. Many initiatives around the world, such as the Leverhulme Centre for the Future of Intelligence in the UK, are working on solutions and plans. But partnerships with the private sector are happening right now, and current legislative frameworks are not adapting fast enough. Data protection laws in the UK favour data minimisation and purpose specification, which run contrary to the basic principles underpinning machine learning algorithms, which need big data to draw valuable insights.

Governments around the world are at different stages in the global race to harness AI. Those at the front have clear strategies, strong cross-sector partnerships and political will driving them. The UK is well placed to make the most of this ever-evolving technology, but success requires a comprehensive strategy and an open conversation with the public.

Eleonora Harwich is a researcher at thinktank Reform.


Read more from the original source:

AI could transform the way governments deliver public services - The Guardian


AI Systems Are Learning to Communicate With Humans – Futurism

Posted: at 6:13 am

In the future, service robots equipped with artificial intelligence (AI) are bound to be a common sight. These bots will help people navigate crowded airports, serve meals, or even schedule meetings.

As these AI systems become more integrated into daily life, it is vital to find an efficient way to communicate with them. It is obviously more natural for a human to speak in plain language rather than a string of code. Further, as the relationship between humans and robots grows, it will be necessary to engage in conversations, rather than just give orders.

This human-robot interaction is what Manuela M. Veloso's research is all about. Veloso, a professor at Carnegie Mellon University, has focused her research on CoBots, autonomous indoor mobile service robots that transport items, guide visitors to building locations, and traverse the halls and elevators. The CoBot robots have been navigating autonomously with success for several years now, and have traveled more than 1,000 km. These accomplishments have enabled the research team to pursue a new direction, focusing now on novel human-robot interaction.

"If you really want these autonomous robots to be in the presence of humans and interacting with humans, and being capable of benefiting humans, they need to be able to talk with humans," Veloso says.

Veloso's CoBots are capable of autonomous localization and navigation in the Gates-Hillman Center using WiFi, LIDAR, and/or a Kinect sensor (yes, the same type used for video games).

The robots navigate by detecting walls as planes, which they match to the known maps of the building. Other objects, including people, are detected as obstacles, so navigation is safe and robust. Overall, the CoBots are good navigators and are quite consistent in their motion. In fact, the team noticed the robots could wear down the carpet as they traveled the same path numerous times.

Because the robots are autonomous, and therefore capable of making their own decisions, they are out of sight for large amounts of time while they navigate the multi-floor buildings.

The research team began to wonder about this unaccounted-for time. How were the robots perceiving the environment and reaching their goals? How was the trip? What did they plan to do next?

"In the future, I think that incrementally we may want to query these systems on why they made some choices or why they are making some recommendations," explains Veloso.

The research team is currently working on the question of why the CoBots took the route they did while autonomous. The team wanted to give the robots the ability to record their experiences and then transform the data about their routes into natural language. In this way, the bots could communicate with humans and reveal their choices and hopefully the rationale behind their decisions.

The internals underlying the functions of any autonomous robot are based entirely on numerical computations, not natural language. The CoBot robots, for example, compute the distance to walls and assign velocities to their motors to drive motion to specific map coordinates.

"Asking an autonomous robot for a non-numerical explanation is complex," says Veloso. "Furthermore, the answer can be provided in many potential levels of detail."

"We define what we call the verbalization space, in which this translation into language can happen with different levels of detail, with different levels of locality, with different levels of specificity."

For example, if a developer asks a robot to detail its journey, they might expect a lengthy retelling, with details that include battery levels. But a random visitor might just want to know how long it takes to get from one office to another.

Therefore, the research is not just about the translation from data to language, but also about the acknowledgment that the robots need to explain things with more or less detail. If a human asks for more detail, the request triggers CoBot to move to a more detailed point in the verbalization space.
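
Below is a minimal sketch of what route narration at two points in that verbalization space might look like. The route data, fields, and detail levels are invented for illustration and are not CoBot's actual interface.

```python
def narrate_route(waypoints: list[str], seconds: float, battery_pct: float,
                  detail: str) -> str:
    """Render one trip at a chosen level of detail in the verbalization space."""
    if detail == "brief":  # roughly what a visitor might want
        return (f"I went from {waypoints[0]} to {waypoints[-1]} "
                f"in about {seconds:.0f} seconds.")
    if detail == "full":   # roughly what a developer might want
        return (f"Route: {' -> '.join(waypoints)}. "
                f"Travel time {seconds:.1f}s, battery used {battery_pct:.1f}%.")
    raise ValueError(f"unknown detail level: {detail}")

trip = ["office 7002", "elevator bank", "office 7412"]
print(narrate_route(trip, 95.0, 1.4, "brief"))
print(narrate_route(trip, 95.0, 1.4, "full"))
```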

"We are trying to understand how to empower the robots to be more trustable through these explanations, as they attend to what the humans want to know," says Veloso. The ability to generate explanations, particularly at multiple levels of detail, will be especially important in the future, as AI systems work with more complex decisions. Humans could have a more difficult time inferring the AI's reasoning, so the bot will need to be more transparent.

For example, if you go to a doctor's office and the AI there makes a recommendation about your health, you may want to know why it came to this decision, or why it recommended one medication over another.

Currently, Veloso's research focuses on getting the robots to generate these explanations in plain language. The next step will be to have the robots incorporate natural language when humans provide them with feedback. "[The CoBot] could say, 'I came from that way,' and you could say, 'Well, next time, please come through the other way,'" explains Veloso.

These sorts of corrections could be programmed into the code, but Veloso believes that trustability in AI systems will benefit from our ability to dialogue with, query, and correct their autonomy. She and her team aim to contribute to a multi-robot, multi-human symbiotic relationship, in which robots and humans coordinate and cooperate as a function of their limitations and strengths.

"What we're working on is to really empower people, a random person who meets a robot, to still be able to ask things about the robot in natural language," she says.

In the future, when we have more and more AI systems that are able to perceive the world, make decisions, and support human decision-making, the ability to engage in these types of conversations will be essential.

Read the original here:

AI Systems Are Learning to Communicate With Humans - Futurism

