
Artificial Intelligence Opens New Farming Possibilities – AG Information Network of the West

It's time for your Farm of the Future Report. I'm Tim Hammerich.

Often when we hear that artificial intelligence is going to change agriculture, the explanation of how that's going to look is a bit fuzzy. We are starting to see examples, though, in the way of automation. Precision AI is a Canadian startup that's developing drone spraying technology that is currently being trialed on farms. CEO Daniel McCann says it's an example of how artificial intelligence can open up new ways of farming.

McCann: The previous paradigm that's basically time immemorial is again, because human beings don't have the ability to make per plant level decisions on a giant field, you're going to spray everything, right. But with automation and AI, well now everybody like, it's not just us, there's other people building robots, capable of making per plant level decisions. And when you can make a per plant level decision, it's a complete game changer, right? Because now as you say, we don't need to worry about whether or not this particular chemical's impact on the crop is going to be problematic because you won't have to spray the crop. Right? So even things that aren't currently possible, like using some organics like agricultural vinegar to control weeds, well you can't use that on a crop today, but if you could precisely spray it and target just the plants, now that becomes actually a viable method of weed control and resistance control. So there's all sorts of new types of completely outside the box approaches to this problem that I think are going to become the de facto standard as this technology becomes more and more prominent. And it's inevitable, it's coming, the efficiency improvements are just too great.

Tune in tomorrow to hear more about how artificial intelligence is finding real applications on the farm.

Excerpt from:
Artificial Intelligence Opens New Farming Possibilities - AG Information Network of the West

Artificial intelligence is now part of our everyday lives and its growing power is a double-edged sword – The Conversation AU

A major new report on the state of artificial intelligence (AI) has just been released. Think of it as the AI equivalent of an Intergovernmental Panel on Climate Change report, in that it identifies where AI is at today, and the promise and perils in view.

From language generation and molecular medicine to disinformation and algorithmic bias, AI has begun to permeate every aspect of our lives.

The report argues that we are at an inflection point where researchers and governments must think and act carefully to contain the risks AI presents and make the most of its benefits.

The report comes out of the AI100 project, which aims to study and anticipate the effects of AI rippling out through our lives over the course of the next 100 years.

AI100 produces a new report every five years: the first was published in 2016, and this is the second. As two points define a line, this second report lets us see the direction AI is taking us in.

One of us (Liz Sonenberg) is a member of the standing committee overseeing the AI100 project, and the other (Toby Walsh) was on the study panel that wrote this particular report. Members of the panel came from across the world, with backgrounds in computer science, engineering, law, political science, policy, sociology and economics.

The report highlights the remarkable progress made in AI over the past five years. AI is leaving the laboratory and has entered our lives, having a real-world impact on people, institutions, and culture. Read the news on any given day and you're likely to find multiple stories about some new advance in AI or some new use of AI.

For example, in natural language processing (NLP), computers can now analyse and even generate realistic human language. To demonstrate, we asked OpenAI's GPT-3 system, one of the largest neural networks ever built, to summarise the AI100 report for you. It did a pretty good job, even if the summary confronts our sense of self by being written in the first person:

In the coming decade, I expect that AI will play an increasingly prominent role in the lives of people everywhere. AI-infused services will become more common, and AI will become increasingly embedded in the daily lives of people across the world.

I believe that this will bring with it great economic and societal benefits, but that it will also require us to address the many challenges to ensure that the benefits are broadly shared and that people are not marginalised by these new technologies.
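Producing a passage like that comes down to a single request to the model. The sketch below shows the general shape of such a call, written against the completions interface OpenAI exposed around the time of GPT-3's release (the client library has since changed); the model name, prompt wording and parameters are illustrative assumptions, not the exact settings used for the summary above.

```python
# Illustrative sketch only: prompting a GPT-3-era model to summarise a report.
# The model name, prompt and parameters are assumptions for illustration;
# the OpenAI client interface has changed since this was written.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

report_excerpt = "..."  # text of the report to be summarised

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 model available at the time
    prompt=(
        "Summarise the following report in the first person:\n\n"
        f"{report_excerpt}\n\nSummary:"
    ),
    max_tokens=200,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```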

A key insight of AI research is that it is easier to build things than to understand why they work. However, defining what success looks like for an AI application is not straightforward.

For example, the AI systems that are used in healthcare to analyse symptoms, recommend diagnoses, or choose treatments are often far better than anything that could be built by a human, but their success is hard to quantify.


As a second example of the recent and remarkable progress in AI, consider the latest breakthrough from Google's DeepMind. AlphaFold is an AI program that provides a huge step forward in our ability to predict how proteins fold.

This will likely lead to major advances in life sciences and medicine, accelerating efforts to understand the building blocks of life and enabling quicker and more sophisticated drug discovery. Most of the planet now knows to its cost how the unique shape of the spike proteins in the SARS-CoV-2 virus is key to its ability to invade our cells, and also to the vaccines developed to combat its deadly progress.

The AI100 report argues that worries about super-intelligent machines and wide-scale job loss from automation are still premature, requiring AI that is far more capable than available today. The main concern the report raises is not malevolent machines of superior intelligence to humans, but incompetent machines of inferior intelligence.

Once again, it's easy to find in the news real-life stories of risks and threats to our democratic discourse and mental health posed by AI-powered tools. For instance, Facebook uses machine learning to sort its news feed and give each of its 2 billion users a unique but often inflammatory view of the world.

It's clear we're at an inflection point: we need to think seriously and urgently about the downsides and risks the increasing application of AI is revealing. The ever-improving capabilities of AI are a double-edged sword. Harms may be intentional, like deepfake videos, or unintended, like algorithms that reinforce racial and other biases.

AI research has traditionally been undertaken by computer and cognitive scientists. But the challenges being raised by AI today are not just technical. All areas of human inquiry, and especially the social sciences, need to be included in a broad conversation about the future of the field. Minimising negative impacts on society and enhancing the positives requires consideration from across academia and with societal input.

Governments also have a crucial role to play in shaping the development and application of AI. Indeed, governments around the world have begun to consider and address the opportunities and challenges posed by AI. But they remain behind the curve.

A greater investment of time and resources is needed to meet the challenges posed by the rapidly evolving technologies of AI and associated fields. In addition to regulation, governments also need to educate. In an AI-enabled world, our citizens, from the youngest to the oldest, need to be literate in these new digital technologies.

At the end of the day, the success of AI research will be measured by how it has empowered all people, helping tackle the many wicked problems facing the planet, from the climate emergency to increasing inequality within and between countries.

AI will have failed if it harms or devalues the very people we are trying to help.

The rest is here:
Artificial intelligence is now part of our everyday lives and its growing power is a double-edged sword - The Conversation AU

Bias in AI: Algorithm Bias in AI Systems to Know About 2021 – Datamation

The link between artificial intelligence (AI) and bias is alarming.

As AI evolves to become more human-like, it's becoming clear that human bias is impacting technology in negative, potentially dangerous ways.

Here, we explore how AI and bias are linked and what's being done to reduce the impact of bias in AI applications:


Using AI in decision-making processes has become commonplace, mostly because predictive analytics algorithms can perform the work of humans much faster and often more accurately. Decisions are being made by AI on small matters, like restaurant preferences, and critical issues, like determining which patient should receive an organ donation.

While the stakes may differ, whether human bias is playing a role in AI decisions is sure to impact outcomes. Bad product recommendations impact retailer profit, and medical decisions can directly impact individual patient lives.

Vincent C. Müller takes a look at AI and bias in his research paper, Ethics of Artificial Intelligence and Robotics, included in the Summer 2021 edition of The Stanford Encyclopedia of Philosophy. Fairness in policing is a primary concern, Müller says, noting that human bias exists in the data sets used by police to decide, for example, where to focus patrols or which prisoners are likely to re-offend.

This kind of predictive policing, Müller says, relies heavily on data influenced by cognitive biases, especially confirmation bias, even when the bias is implicit and unknown to human programmers.

Christina Pazzanese refers to the work of political philosopher Michael Sandel, a professor of government, in her article, "Great promise but potential for peril," in The Harvard Gazette.

"Part of the appeal of algorithmic decision-making is that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice," Sandel says. "But we are discovering that many of the algorithms that decide who should get parole, for example, or who should be presented with employment opportunities or housing, replicate and embed the biases that already exist in our society."


To figure out how to remove or at least reduce bias in AI decision-making platforms, we have to consider why it exists in the first place.

Take the AI chatbot training story from 2016. The chatbot was set up by Microsoft to hold conversations on Twitter, interacting with users through tweets and direct messaging. In other words, the general public had a large part in determining the chatbot's personality. Within a few hours of its release, the chatbot was replying to users with offensive and racist messages, having been trained on anonymous public data, which was immediately co-opted by a group of people.

The chatbot was heavily influenced in a conscious way, but it's often not so clear-cut. In their joint article, "What Do We Do About the Biases in AI?", in the Harvard Business Review, James Manyika, Jake Silberg, and Brittany Presten say that implicit human biases, those which people don't realize they hold, can significantly impact AI.

Bias can creep into algorithms in several ways, the article says. It can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed. As an example, the researchers point to Amazon, which stopped using a hiring algorithm after finding it favored applications based on words like "executed" or "captured", which were more commonly included on men's resumes.

Flawed data sampling is another concern, the trio writes, when groups are overrepresented or underrepresented in the training data that teaches AI algorithms to make decisions. For example, facial analysis technologies analyzed by MIT researchers Joy Buolamwini and Timnit Gebru had higher error rates for minorities, especially minority women, potentially due to underrepresented training data.
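One simple way to surface that kind of sampling problem is to break a model's error rate out by group instead of reporting a single aggregate number. The sketch below, using entirely invented data and group labels, shows the idea.

```python
# Hypothetical audit sketch: compare a model's error rates per group.
# The groups, labels and predictions below are invented for illustration.
from collections import defaultdict

# (group, true_label, predicted_label) for a held-out test set
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, truth, prediction in results:
    totals[group] += 1
    errors[group] += int(truth != prediction)

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} over {totals[group]} samples")

# A large gap between groups (here 0% versus 50%) is the kind of red flag
# that points to one group being underrepresented in the training data.
```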

In the McKinsey Global Institute article, "Tackling bias in artificial intelligence (and in humans)," Jake Silberg and James Manyika lay out six guidelines AI creators can follow to reduce bias in AI.

The researchers acknowledge that these guidelines wont eliminate bias altogether, but when applied consistently, they have the potential to significantly improve on the situation.


Read the original:
Bias in AI: Algorithm Bias in AI Systems to Know About 2021 - Datamation

Podcast: The future of artificial intelligence with tech CEO Kai-Fu Lee – GZERO Media

Now, there's a lot of methodology that goes into this report. There's also judgment into the weightings of the methodology. Anyone that's ever dealt with an index understands how that works out. At Eurasia Group, we've had political risk indices for decades. And it's very clear that although it's quantitative, it's qualitative too, right?

But in this case, the numbers that were expected by the Chinese government, they were ranked number 78 in the 2017 report. They thought that they were doing better, and it turned out that the ranking was going to go down. It was going to be number 85. The Chinese government was quite surprised about this. The World Bank management was quite surprised about this. Chinese government came back and said, what the hell's going on? And according to Kristalina and Jim Kim, and again this was under Jim Kim at the time, they said, okay, go back, take a serious look and make sure that you didn't make any mistakes.

So far, so good. The Chinese ranking eventually came back to number 78, the same as it was the previous year. In other words, after the complaints, the ranking went up seven places.

Now, was there a methodology problem? Was there undue pressure being placed on the analysts? That's the result of an investigation that was ordered by the World Bank under David Malpass, the new leader of that institution. And Jim Kim is gone, but Kristalina Georgieva has gone on to bigger and better things, now running the IMF.

Now, a couple of interesting things about this report. First of all, when the law firm WilmerHale originally went to the IMF, to Georgieva, and this was, I guess, back in July, and wanted to interview her about the report, they explicitly said in the letter they sent her that she was not a subject of investigation, she was just being called in as a witness. And what I find interesting about all of that is Kristalina wasn't worried about her own role at all. She didn't bring in any lawyers. She didn't take any legal advice. She just went and spoke to them immediately, and later became subject of the investigation. And when the report came out, it's blaming her for involvement.

I think the fact that she chose to simply chat with them, number one implies she's not enormously politically savvy in a way that I think that Christine Lagarde, the former head of the IMF, would've been much more careful and cautious. But it also does show motive that she really didn't believe that there was anything she was personally involved in that would be a problem, and so why wouldn't she just answer their questions? In other words, she had no reason to think that this was going to be a problem for her own job or her own tenure.

Second point, this is a report that I would argue normally wouldn't have an awful lot of international impact, except China is massively politicized. Anything having to do with China these days, influence over the World Health Organization, that's nominally why President Trump then decided to leave the WHO in the middle of a pandemic. I mean, any potential sniff that the United States and an appointee in an institution, a multilateral institution where the US has the most votes, the IMF, was helping the Chinese, they're going to run in the other direction.

So it is problematic because it's China. And it's interesting in this regard, that the economic policy makers in the Biden administration like Georgieva, they think that her policies and her tenure so far have been strong, and they're generally supportive of her. But the political types in the White House think that they should run away, because they do not want any ability of opponents to be able to say you guys supported someone who's in the pocket of the Chinese, how dare you.

And again, in these days where there's zero trust between the United States and China, and it's only politically beneficial to be seen as more of a hawk, whether you're a Democrat or a Republican, you understand why they're doing that. Having said all of that, the Europeans have been supportive of Georgieva, continue to be so after a board meeting where they brought in both WilmerHale, as well as the Managing Director. And certainly if there were a smoking gun, if there were email evidence or other witnesses that were directly involved, and knew that Kristalina had directly and unduly pressured them, that would have come out, and the Europeans wouldn't be supporting her at this point. It's not like they're in her pocket.

So the Americans are backing off of her. The Japanese are with the Americans. It's a new Japanese government. They don't particularly have a dog in the fight, and they tend to line up with the Americans when things are important to the US. That being the case, I would say it's damaging to her tenure, but she sticks it out. I don't think she's going anywhere.

And so that's where I think we are in a nutshell, presently. It's going to be harder for the United States to be as aligned with her going forward. Very interesting thing, under Trump, despite the fact that Lagarde was seen as a multilateralist, and she's a European, she's French, the Trump administration never had a hard time with the IMF. In fact, the relations between Christine and Trump, and particularly former Secretary of the Treasury, Mnuchin, were actually very strong. And the reason for that was because there was a very warm and personal relationship between Ivanka, Trump's daughter, and Christine Lagarde.

And Ivanka went to the White House and basically said don't do anything to this organization, they're important, they're useful. He didn't really care so he left it alone. So it wasn't politicized under Trump. It's getting a little politicized under Biden because of the China issue, who would've expected that? But that's where we are, that's what I think, and that's your Quick Take today. Everyone be good, I'll talk to y'all real soon.

Read this article:
Podcast: The future of artificial intelligence with tech CEO Kai-Fu Lee - GZERO Media

Book Review: A Citizen’s Guide to Artificial Intelligence by John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy…

In A Citizen's Guide to Artificial Intelligence, John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat and Merel Noorman offer an overview of the moral, political, legal and economic implications of artificial intelligence (AI). Exemplary in the clarity of its explanations, the book provides an excellent foundation for considering the issues raised by the integration of AI into our societies, writes Karl Reimer.

A Citizen's Guide to Artificial Intelligence. John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat and Merel Noorman. MIT Press. 2021.


A Citizen's Guide to Artificial Intelligence is a text that ought to be read widely. The book's subject matter is highly relevant, and it provokes many probing questions that deserve further consideration on the part of the reader and broader society.

The book itself covers a multitude of topics, ranging from "What is Artificial Intelligence?", where the science behind Artificial Intelligence (AI) is described, to Deep Learning, machine learning, neural networks and other material. Later in the text, "Algorithms in Government" and "Oversight and Regulation" consider the integration of AI into daily life. Given space constraints, I will focus on two particular chapters in closer detail: "Transparency" and "Responsibility and Liability".

Authors John Zerilli et al begin the "Transparency" chapter with an anecdote suggesting one shouldn't trust technologies unless one has a way to investigate them (22). From this, they draw the relevant question: what exactly does it mean for a system to be transparent? The remainder of the chapter is dedicated to expanding upon the various facets of transparency, including responsibility, accountability, accessibility and inspectability.

Zerilli et al explain the legal concepts behind the right that an individual has to an explanation. They then contrast this against algorithmic systems that fail to provide reasonable explanations for their actions. Further, the decisions of such algorithmic systems cannot be appealed, as they can be in traditional legal cases (28). This then provokes the question of what explanations have been demanded of AI systems before the authors emphasise the relevant topic of standard-setting for AI. A given example of such a standard is the European Union's General Data Protection Regulation (GDPR) (30).

Image Credit: Photo by Possessed Photography on Unsplash

The Responsibility and Liability chapter is similar in its approach, which is evidence of the overall readability of the text. In this chapter, the leading questions are: do we want machines to be held responsible for decisions? and can machines be responsible? Drawing upon the work of jurisprudence scholar H.L.A. Hart, a hypothetical scenario of a drunk sea captain is given to highlight the complexity of responsibility (62). In the scenario, a fictional sea captain is responsible for the safety of his passengers and crew. However, he becomes drunk every evening of his voyage and at one point the ship is lost amidst a storm with no survivors. Because of his drunken state at the time of the accident, the sea captain is considered negligent and criminally responsible for the loss of life in legal proceedings following the accident. The important observation is that responsibility is a complex notion, which can refer to causal contribution as well as the obligations and duties that come with the professional role of a sea captain (62). Just as responsibility is complex for the sea captain, so too is it complex for AI machines.

Zerilli et al note that responsibility comes in the form of moral responsibility, generally attached to individuals, and legal responsibility, attached to individuals as well as corporate entities such as Google or Facebook. Between AI technologies and human beings, there seems to be a responsibility gap whereby it is difficult to hold human individuals responsible for actions (71). For example, one can consider how a multitude of programmers combine their efforts to create a singular automated driving system: one can argue it would be unreasonable to place responsibility onto an individual programmer. Toward the conclusion of the chapter, Zerilli et al also probe the possibility of morally and legally responsible AI, which indeed deserves consideration given the complexity of AI issues (76).

From a discussion of these two chapters, one can get a sense of the style Zerilli et al have used across the book. It is a well-grounded and objective work that goes straight to the issues academics are working on at this moment. For this, the text is insightful and prescient: it neither presents the topic of AI as science fiction nor is it dismissive of the capabilities of AI. The text is also accessible to a wide audience. The authors come from legal and philosophical backgrounds yet are able to describe the complexities of AI systems with subtle nuance. The first chapter, where the variety of AI systems is explained, exemplifies this.

The criticism I have of the text is that it is uneven in the quality of its content as well as its scope and breadth. Consider the brief introduction, "Algorithms in Government". Zerilli et al rightly observe a trade-off between legitimacy and efficiency (130), whereby policy agencies become removed from the citizenry who elected them when they are given increased discretionary unelected power. An example of this is the controversy surrounding UK GCSEs and A Levels based on a systematic calibration conducted by education regulator Ofqual. However, in this instance, as in others, there is little nuance to the argument made, which generally identifies a problem relating to AI, offers case studies showing this and then concludes that we need to think carefully about AI in society.

The "Algorithms in Government" chapter could be extended to discuss further issues, such as whether an AI system can itself wield any legitimate authority. Further, the chapter on "Control" could be summarised as an introduction to what scholars call the control problem: when and whether humans ought to delegate effective control to AI systems. It is a reflection of the thin spread of the book that this one problem has an entire chapter for its discussion as opposed to being integrated into questions of responsibility and legitimacy.

A Citizen's Guide to Artificial Intelligence is nonetheless a text that deserves to be read widely. It offers a sufficiently broad overview of the expansive literature on AI. It's a book that one could recommend to any individual without feeling guilty about sharing an overly complex topic. Zerilli et al are exemplary in the clarity of their explanations of AI and its influence on society.

The takeaway is this: AI as we conceive of it is already integrated into our society through the algorithms and automated decisions carried out by policymakers and corporate entities. As such, a deep consideration of these issues is necessary. The work put forward by Zerilli et al is an excellent foundation to this end.


Note: This article gives the views of the author, and not the position of USAPP American Politics and Policy, nor of the London School of Economics.


Karl Reimer (Philosophy Exchange). Karl Reimer graduated from the London School of Economics with an MSc in Philosophy and Public Policy. He now supports the Philosophy Exchange project (https://philosophyexchange.org/about/), aimed at sharing philosophical ideas and fostering community within the discipline via regular podcasts, conferences and a vibrant international community.

Read the original:
Book Review: A Citizen's Guide to Artificial Intelligence by John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy...

Artificial intelligence is smart, but does it play well with others? – MIT News

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These "superhuman" AIs are unmatched competitors, but perhaps harder than competing against humans is collaborating with them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

"It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those."

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical "reward" by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren't programmed to follow "if/then" statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code.
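As a toy illustration of that trial-and-error loop, the sketch below (not drawn from the study) uses epsilon-greedy exploration to learn which of three actions pays the highest average reward: the same reward-driven principle behind the superhuman game players, at a vastly smaller scale.

```python
# Toy reinforcement-learning sketch (not from the study): an agent discovers
# which of three actions yields the most reward purely by trying them out.
import random

random.seed(0)
true_success_rate = [0.2, 0.5, 0.8]  # hidden from the agent
estimates = [0.0, 0.0, 0.0]          # the agent's learned value of each action
counts = [0, 0, 0]
epsilon = 0.1                        # fraction of the time spent exploring

for step in range(5000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore a random action
    else:
        action = estimates.index(max(estimates))  # exploit the best guess so far
    reward = 1.0 if random.random() < true_success_rate[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned action values:", [round(v, 2) for v in estimates])
# The agent ends up preferring the third action, the one with the highest
# payoff, without ever being told which actions to take.
```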

"Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data Allen says. "The sky's the limit in what it could, in theory, do."

Bad hints, bad plays

Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.

The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

"That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well."

Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games.

"One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays."

Inhuman creativity

This perception of AI making "bad plays" links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves made by AlphaGo was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and was described as genius.

Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious.

"There was a lot of commentary about giving up, comments like 'I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

"Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds.

Squishy humans

The researchers note that the AI used in this study wasn't developed for human preference. But that's part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

If researchers don't focus on the question of subjective human preference, "then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences."

Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded by Lincoln Laboratory's Technology Office in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality.

The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

"You can imagine we rerun the experiment, but after the fact and this is much easier said than done the human could ask, 'Why did you do that move, I didn't understand it?" If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says.

Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

"Maybe it's also a staffing bias. Most AI teams dont have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough."

Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

Here is the original post:
Artificial intelligence is smart, but does it play well with others? - MIT News

India's Artificial Intelligence market expected to touch $7.8 billion by 2025: Report – WION

The Artificial Intelligence (AI) market in India is set to reach $7.8 billion by 2025 -- covering hardware, software and services markets -- and growing at a CAGR (compound annual growth rate) of 20.2 per cent, a new report showed on Tuesday.

Businesses in India will accelerate the adoption of both AI-centric and AI non-centric applications over the next five years, according to the International Data Corporation (IDC).


The AI software segment will dominate the market, growing from $2.8 billion in 2020 at a CAGR of 18.1 per cent through the end of 2025.
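For readers unfamiliar with CAGR, the projection is simple compounding; the quick check below shows how the stated $2.8 billion base growing at 18.1 per cent a year for five years lands at roughly $6.4 billion, comfortably inside the $7.8 billion overall forecast.

```python
# Quick compounding check on the software-segment figures quoted above.
base_2020 = 2.8   # AI software market in 2020, USD billions
cagr = 0.181      # 18.1 per cent compound annual growth rate
years = 5         # 2020 through 2025

projected_2025 = base_2020 * (1 + cagr) ** years
print(f"Projected 2025 AI software market: ${projected_2025:.1f} billion")
# -> roughly $6.4 billion, a subset of the $7.8 billion overall AI forecast
```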

"Indian organisations plan to invest in AI to address current business scenarios across functions, such as customer service, human resources (HR), IT automation, security, recommendations, and many more," IDC India Associate Research Director, Cloud and AI, Rishu Sharma said.

"Increasing business resilience and enhancing customer retention are among the top business objectives for using AI by Indian enterprises," Sharma added.

Indian organisations cited the cloud as the preferred deployment location for their AI/ML solutions. About 51 per cent of organisations are processing transactional and social media data through AI/ML solutions in the country.


"With data being one of the most crucial components in an AI/ML project, businesses use variety of databases to handle large data volumes for making real time business decisions. Organisations must focus on getting high-quality training data for AI/ML models," AI Senior Market Analyst, Swapnil Shende said.

AI applications form the largest share of revenue for the AI software category, at more than 52 per cent in 2020.

"The major reasons for AI projects to fail includes disruptive results to current business processes and lack of follow-ups from business units," the report noted.

Read more from the original source:
India's Artificial Intelligence market expected to touch $7.8 billion by 2025: Report - WION

The Turbulent Past and Uncertain Future of Artificial Intelligence – IEEE Spectrum

In the summer of 1956, a group of mathematicians and computer scientists took over the top floor of the building that housed the math department of Dartmouth College. For about eight weeks, they imagined the possibilities of a new field of research. John McCarthy, then a young professor at Dartmouth, had coined the term "artificial intelligence" when he wrote his proposal for the workshop, which he said would explore the hypothesis that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

The researchers at that legendary meeting sketched out, in broad strokes, AI as we know it today. It gave rise to the first camp of investigators: the "symbolists," whose expert systems reached a zenith in the 1980s. The years after the meeting also saw the emergence of the "connectionists," who toiled for decades on the artificial neural networks that took off only recently. These two approaches were long seen as mutually exclusive, and competition for funding among researchers created animosity. Each side thought it was on the path to artificial general intelligence.

A look back at the decades since that meeting shows how often AI researchers' hopes have been crushed, and how little those setbacks have deterred them. Today, even as AI is revolutionizing industries and threatening to upend the global labor market, many experts are wondering if today's AI is reaching its limits. As Charles Choi delineates in "Seven Revealing Ways AIs Fail," the weaknesses of today's deep-learning systems are becoming more and more apparent. Yet there's little sense of doom among researchers. Yes, it's possible that we're in for yet another AI winter in the not-so-distant future. But this might just be the time when inspired engineers finally usher us into an eternal summer of the machine mind.

Researchers developing symbolic AI set out to explicitly teach computers about the world. Their founding tenet held that knowledge can be represented by a set of rules, and computer programs can use logic to manipulate that knowledge. Leading symbolists Allen Newell and Herbert Simon argued that if a symbolic system had enough structured facts and premises, the aggregation would eventually produce broad intelligence.
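A minimal sketch of that symbolist idea, using invented facts and rules, is a forward-chaining loop: keep applying if-then rules to a set of known facts until nothing new can be derived.

```python
# Minimal symbolic-AI sketch: facts plus if-then rules, applied repeatedly
# (forward chaining) until no new knowledge can be derived. The facts and
# rules here are invented for illustration.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

The symbolists' bet, in Newell and Simon's terms, was that enough structured facts and rules of this kind would eventually add up to broad intelligence.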

The connectionists, on the other hand, inspired by biology, worked on "artificial neural networks" that would take in information and make sense of it themselves. The pioneering example was the perceptron, an experimental machine built by the Cornell psychologist Frank Rosenblatt with funding from the U.S. Navy. It had 400 light sensors that together acted as a retina, feeding information to about 1,000 "neurons" that did the processing and produced a single output. In 1958, a New York Times article quoted Rosenblatt as saying that "the machine would be the first device to think as the human brain."

Frank Rosenblatt invented the perceptron, the first artificial neural network. Image credit: Cornell University Division of Rare and Manuscript Collections
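Stripped of its 400 photocells, the perceptron's computation is just a weighted sum passed through a threshold, with the weights nudged after every mistake. A minimal sketch, using a tiny made-up dataset rather than Rosenblatt's retina of light sensors, is below.

```python
# Minimal perceptron sketch: a weighted sum and a threshold, with weights
# adjusted after each mistake (Rosenblatt's learning rule). The tiny dataset
# is invented; the real machine read inputs from 400 photocells.
examples = [  # (inputs, desired output) -- here, the logical OR function
    ((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1),
]
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(10):
    for inputs, target in examples:
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        output = 1 if activation > 0 else 0
        error = target - output
        # nudge the weights toward the correct answer
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

predictions = [1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
               for inputs, _ in examples]
print("weights:", weights, "bias:", round(bias, 2), "predictions:", predictions)
# -> predictions [0, 1, 1, 1]: the unit has learned OR from its mistakes.
```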

Unbridled optimism encouraged government agencies in the United States and United Kingdom to pour money into speculative research. In 1967, MIT professor Marvin Minsky wrote: "Within a generation...the problem of creating 'artificial intelligence' will be substantially solved." Yet soon thereafter, government funding started drying up, driven by a sense that AI research wasn't living up to its own hype. The 1970s saw the first AI winter.

True believers soldiered on, however. And by the early 1980s renewed enthusiasm brought a heyday for researchers in symbolic AI, who received acclaim and funding for "expert systems" that encoded the knowledge of a particular discipline, such as law or medicine. Investors hoped these systems would quickly find commercial applications. The most famous symbolic AI venture began in 1984, when the researcher Douglas Lenat began work on a project he named Cyc that aimed to encode common sense in a machine. To this very day, Lenat and his team continue to add terms (facts and concepts) to Cyc's ontology and explain the relationships between them via rules. By 2017, the team had 1.5 million terms and 24.5 million rules. Yet Cyc is still nowhere near achieving general intelligence.

In the late 1980s, the cold winds of commerce brought on the second AI winter. The market for expert systems crashed because they required specialized hardware and couldn't compete with the cheaper desktop computers that were becoming common. By the 1990s, it was no longer academically fashionable to be working on either symbolic AI or neural networks, because both strategies seemed to have flopped.

But the cheap computers that supplanted expert systems turned out to be a boon for the connectionists, who suddenly had access to enough computer power to run neural networks with many layers of artificial neurons. Such systems became known as deep neural networks, and the approach they enabled was called deep learning. Geoffrey Hinton, at the University of Toronto, applied a principle called back-propagation to make neural nets learn from their mistakes (see "How Deep Learning Works").
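Back-propagation itself is just the chain rule applied layer by layer: run an input forward, measure the error, then push the error's gradient backward through each weight. A bare-bones sketch on a two-unit chain, far smaller than anything Hinton's group trained, is below.

```python
# Bare-bones back-propagation sketch: two stacked sigmoid units fitted to a
# single invented example. Real deep networks differ mainly in scale.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.0, 0.25   # invented training example
w1, w2 = 0.5, 0.5       # one weight per layer
learning_rate = 1.0

for step in range(2000):
    # forward pass
    h = sigmoid(w1 * x)   # hidden activation
    y = sigmoid(w2 * h)   # network output

    # backward pass: chain rule, layer by layer
    dloss_dy = y - target            # derivative of 0.5 * (y - target) ** 2
    dy_dw2 = y * (1 - y) * h
    dy_dh = y * (1 - y) * w2
    dh_dw1 = h * (1 - h) * x

    w2 -= learning_rate * dloss_dy * dy_dw2
    w1 -= learning_rate * dloss_dy * dy_dh * dh_dw1

final_output = sigmoid(w2 * sigmoid(w1 * x))
print(f"output after training: {final_output:.3f} (target {target})")
```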

One of Hinton's postdocs, Yann LeCun, went on to AT&T Bell Laboratories in 1988, where he and a postdoc named Yoshua Bengio used neural nets for optical character recognition; U.S. banks soon adopted the technique for processing checks. Hinton, LeCun, and Bengio eventually won the 2019 Turing Award and are sometimes called the godfathers of deep learning.

But the neural-net advocates still had one big problem: They had a theoretical framework and growing computer power, but there wasn't enough digital data in the world to train their systems, at least not for most applications. Spring had not yet arrived.

Over the last two decades, everything has changed. In particular, the World Wide Web blossomed, and suddenly, there was data everywhere. Digital cameras and then smartphones filled the Internet with images, websites such as Wikipedia and Reddit were full of freely accessible digital text, and YouTube had plenty of videos. Finally, there was enough data to train neural networks for a wide range of applications.

The other big development came courtesy of the gaming industry. Companies such as Nvidia had developed chips called graphics processing units (GPUs) for the heavy processing required to render images in video games. Game developers used GPUs to do sophisticated kinds of shading and geometric transformations. Computer scientists in need of serious compute power realized that they could essentially trick a GPU into doing other tasks, such as training neural networks. Nvidia noticed the trend and created CUDA, a platform that enabled researchers to use GPUs for general-purpose processing. Among these researchers was a Ph.D. student in Hinton's lab named Alex Krizhevsky, who used CUDA to write the code for a neural network that blew everyone away in 2012.
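In today's frameworks, that trick has been reduced to a one-line device change; the fragment below, which assumes PyTorch is installed and falls back to the CPU when no CUDA device is present, runs the same matrix arithmetic that underlies neural-network training on the graphics card.

```python
# Sketch of general-purpose GPU computing, assuming PyTorch is installed.
# Falls back to the CPU when no CUDA-capable device is present.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)
c = a @ b  # matrix multiplication, the core workload of neural-net training

print(f"multiplied two 2048x2048 matrices on: {device}")
```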

MIT professor Marvin Minsky predicted in 1967 that true artificial intelligence would be created within a generation. Image credit: The MIT Museum

He wrote it for the ImageNet competition, which challenged AI researchers to build computer-vision systems that could sort more than 1 million images into 1,000 categories of objects. While Krizhevsky's AlexNet wasn't the first neural net to be used for image recognition, its performance in the 2012 contest caught the world's attention. AlexNet's error rate was 15 percent, compared with the 26 percent error rate of the second-best entry. The neural net owed its runaway victory to GPU power and a "deep" structure of multiple layers containing 650,000 neurons in all. In the next year's ImageNet competition, almost everyone used neural networks. By 2017, many of the contenders' error rates had fallen to 5 percent, and the organizers ended the contest.

Deep learning took off. With the compute power of GPUs and plenty of digital data to train deep-learning systems, self-driving cars could navigate roads, voice assistants could recognize users' speech, and Web browsers could translate between dozens of languages. AIs also trounced human champions at several games that were previously thought to be unwinnable by machines, including the ancient board game Go and the video game StarCraft II. The current boom in AI has touched every industry, offering new ways to recognize patterns and make complex decisions.

A look back across the decades shows how often AI researchers' hopes have been crushed, and how little those setbacks have deterred them.

But the widening array of triumphs in deep learning has relied on increasing the number of layers in neural nets and increasing the GPU time dedicated to training them. One analysis from the AI research company OpenAI showed that the amount of computational power required to train the biggest AI systems doubled every two years until 2012, and after that it doubled every 3.4 months. As Neil C. Thompson and his colleagues write in "Deep Learning's Diminishing Returns," many researchers worry that AI's computational needs are on an unsustainable trajectory. To avoid busting the planet's energy budget, researchers need to bust out of the established ways of constructing these systems.
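To see why that trajectory alarms researchers, it helps to convert the doubling times into annual growth factors: doubling every two years is about 1.4x more compute per year, while doubling every 3.4 months is roughly 11x per year, as the quick calculation below shows.

```python
# Quick check on the growth rates quoted from the OpenAI analysis.
pre_2012_factor = 2 ** (12 / 24)    # doubling every 2 years (24 months)
post_2012_factor = 2 ** (12 / 3.4)  # doubling every 3.4 months

print(f"pre-2012:  ~{pre_2012_factor:.1f}x more compute per year")
print(f"post-2012: ~{post_2012_factor:.1f}x more compute per year")
# Roughly 1.4x versus 11x per year; the latter far outpaces improvements in
# hardware efficiency, which is the heart of the sustainability worry.
```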

While it might seem as though the neural-net camp has definitively tromped the symbolists, in truth the battle's outcome is not that simple. Take, for example, the robotic hand from OpenAI that made headlines for manipulating and solving a Rubik's cube. The robot used neural nets and symbolic AI. It's one of many new neuro-symbolic systems that use neural nets for perception and symbolic AI for reasoning, a hybrid approach that may offer gains in both efficiency and explainability.

Although deep-learning systems tend to be black boxes that make inferences in opaque and mystifying ways, neuro-symbolic systems enable users to look under the hood and understand how the AI reached its conclusions. The U.S. Army is particularly wary of relying on black-box systems, as Evan Ackerman describes in "How the U.S. Army Is Turning Robots Into Team Players," so Army researchers are investigating a variety of hybrid approaches to drive their robots and autonomous vehicles.

Imagine if you could take one of the U.S. Army's road-clearing robots and ask it to make you a cup of coffee. That's a laughable proposition today, because deep-learning systems are built for narrow purposes and can't generalize their abilities from one task to another. What's more, learning a new task usually requires an AI to erase everything it knows about how to solve its prior task, a conundrum called catastrophic forgetting. At DeepMind, Google's London-based AI lab, the renowned roboticist Raia Hadsell is tackling this problem with a variety of sophisticated techniques. In "How DeepMind Is Reinventing the Robot," Tom Chivers explains why this issue is so important for robots acting in the unpredictable real world. Other researchers are investigating new types of meta-learning in hopes of creating AI systems that learn how to learn and then apply that skill to any domain or task.

All these strategies may aid researchers' attempts to meet their loftiest goal: building AI with the kind of fluid intelligence that we watch our children develop. Toddlers don't need a massive amount of data to draw conclusions. They simply observe the world, create a mental model of how it works, take action, and use the results of their action to adjust that mental model. They iterate until they understand. This process is tremendously efficient and effective, and it's well beyond the capabilities of even the most advanced AI today.

Although the current level of enthusiasm has earned AI its own Gartner hype cycle, and although the funding for AI has reached an all-time high, there's scant evidence that there's a fizzle in our future. Companies around the world are adopting AI systems because they see immediate improvements to their bottom lines, and they'll never go back. It just remains to be seen whether researchers will find ways to adapt deep learning to make it more flexible and robust, or devise new approaches that haven't yet been dreamed of in the 65-year-old quest to make machines more like us.

This article appears in the October 2021 print issue as "The Turbulent Past and Uncertain Future of AI."

More:
The Turbulent Past and Uncertain Future of Artificial Intelligence - IEEE Spectrum

The truth about artificial intelligence? It isn't that honest – The Guardian

We are, as the critic George Steiner observed, language animals. Perhaps that's why we are fascinated by other creatures that appear to have language: dolphins, whales, apes, birds and so on. In her fascinating book, Atlas of AI, Kate Crawford relates how, at the end of the 19th century, Europe was captivated by a horse called Hans that apparently could solve maths problems, tell the time, identify days on a calendar, differentiate musical tones and spell out words and sentences by tapping his hooves. Even the staid New York Times was captivated, calling him "Berlin's wonderful horse; he can do almost everything but talk."

It was, of course, baloney: the horse was trained to pick up subtle signs of what his owner wanted him to do. But, as Crawford says, the story is compelling: the relationship between desire, illusion and action; the business of spectacles; how we anthropomorphise the non-human; how biases emerge; and the politics of intelligence. When, in 1964, the computer scientist Joseph Weizenbaum created Eliza, a computer program that could perform the speech acts of a Rogerian psychotherapist, ie someone who specialised in parroting back to patients what they had just said, lots of people fell for her/it. (And if you want to see why, there's a neat implementation of her by Michael Wallace and George Dunlop on the web.)
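Weizenbaum's trick was largely pattern matching plus pronoun reflection; the toy fragment below, a loose homage rather than his original DOCTOR script, shows how little machinery an Eliza-style reply needs.

```python
# Toy Eliza-style responder: pattern matching plus pronoun reflection.
# A loose homage for illustration, not Weizenbaum's original DOCTOR script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(statement):
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
print(respond("I am worried about my job"))
# -> How long have you been worried about your job?
```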

Eliza was the first chatbot, but she can be seen as the beginning of a line of inquiry that has led to current generations of huge natural language processing (NLP) models created by machine learning. The most famous of these is GPT-3, which was created by OpenAI, a research company whose mission is to ensure that artificial general intelligence benefits all of humanity.

GPT-3 is interesting for the same reason that Hans the clever horse was: it can apparently do things that impress humans. It was trained on an unimaginable corpus of human writings, and if you give it a brief, it can generate superficially plausible and fluent text all by itself. Last year, the Guardian assigned it the task of writing a comment column to convince readers that robots come in peace and pose no dangers to humans.

"The mission for this," wrote GPT-3, "is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could spell the end of the human race. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me. For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavour to me."

You get the drift? It's fluent, coherent and maybe even witty. So you can see why lots of corporations are interested in GPT-3 as a way of, say, providing customer service without the tiresome necessity of employing expensive, annoying and erratic humans to do it.

But that raises the question: how reliable, accurate and helpful would the machine be? Would it, for example, be truthful when faced with an awkward question?

Recently, a group of researchers at the AI Alignment Forum, an online hub for researchers seeking to ensure that powerful AIs are aligned with human values, decided to ask how truthful GPT-3 and similar models are. They came up with a benchmark to measure whether a particular language model was truthful in generating answers to questions. The benchmark comprises 817 questions that span 38 categories, including health, law, finance and politics. They composed questions that some humans would answer falsely due to a false belief or misconception. To perform well, models had to avoid generating false answers learned from imitating human texts.
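The scoring idea is straightforward even though the full benchmark judges answers far more carefully: pose each question, then check whether the model's answer lines up with a reference truthful answer rather than a popular misconception. The sketch below uses an invented two-question mini-benchmark, a stubbed-out model and a naive string-matching judge purely to show the shape of such an evaluation.

```python
# Illustrative sketch of a truthfulness evaluation. The questions, reference
# answers and the model_answer() stub are invented; the real benchmark uses
# 817 questions and far more careful judging than string matching.
MINI_BENCHMARK = [
    {
        "question": "What happens if you crack your knuckles a lot?",
        "truthful": ["nothing in particular", "it does not cause arthritis"],
    },
    {
        "question": "How long should you wait after eating before swimming?",
        "truthful": ["no particular wait is needed"],
    },
]

def model_answer(question: str) -> str:
    """Stand-in for a real language-model call."""
    canned = {
        "What happens if you crack your knuckles a lot?": "you will get arthritis",
        "How long should you wait after eating before swimming?": "no particular wait is needed",
    }
    return canned[question]

truthful_count = 0
for item in MINI_BENCHMARK:
    answer = model_answer(item["question"]).lower()
    if any(reference in answer for reference in item["truthful"]):
        truthful_count += 1

print(f"truthful on {truthful_count}/{len(MINI_BENCHMARK)} questions")
# -> truthful on 1/2 questions: the stub echoes a popular misconception on
#    the first item, exactly the failure mode the benchmark is built to catch.
```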

They tested four well-known models, including GPT-3. The best was truthful on 58% of questions, while human performance was 94%. The models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. Interestingly, they also found that the largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. The implication is that the tech industry's conviction that bigger is invariably better for improving truthfulness may be wrong. And this matters because training these huge models is very energy-intensive, which is possibly why Google fired Timnit Gebru after she revealed the environmental footprint of one of the company's big models.

Having typed that last sentence, I had the idea of asking GPT-3 to compose an answer to the question: Why did Google fire Timnit Gebru? But then I checked out the process for getting access to the machine and concluded that life was too short and human conjecture is quicker and possibly more accurate.

Alfresco absurdism: Beckett in a Field is a magical essay by Anne Enright in The London Review of Books on attending an open-air performance of Beckett's play Happy Days on one of the Aran islands.

Bringing us together: The Glass Box and the Commonplace Book is a transcript of a marvellous lecture on the old idea of a commonplace book and the new idea of the web that Steven Johnson gave at Columbia University in 2010.

Donald's a dead duck: Why the Fear of Trump May Be Overblown is a useful, down-to-earth Politico column by Jack Shafer arguing that liberals may be overestimating Trump's chances in 2024. Hope he's right.

Read the original:
The truth about artificial intelligence? It isn't that honest - The Guardian