Why Our Conversations on Artificial Intelligence Are Incomplete – The Wire

Posted: February 19, 2017 at 11:15 am

Conversations about artificial intelligence must focus not only on jobs but also on questions about its purpose, values, accountability and governance.

There is an urgent need to expand the AI epistemic community beyond the specific geographies in which it is currently clustered.

Artificial Intelligence (AI) is no longer the subject of science fiction; it is profoundly transforming our daily lives. While computers have been mimicking human intelligence for some decades now using logic and if-then rules, massive increases in computational power are now facilitating the creation of deep learning machines, i.e. algorithms that permit software to train itself to recognise patterns and perform tasks, like speech and image recognition, through exposure to vast amounts of data.
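To make the contrast between hand-coded rules and learning from data concrete, here is a minimal, hypothetical Python sketch using the open-source scikit-learn library; the toy features, labels and thresholds are invented for illustration and are not drawn from any system mentioned in this article:

```python
# Hypothetical sketch: a hand-coded if-then rule versus a model that learns
# a pattern from labelled examples. The data is synthetic and illustrative.
from sklearn.linear_model import LogisticRegression

# Hand-coded, if-then style rule: its behaviour is fixed in advance by a programmer.
def rule_based_label(brightness, size):
    return 1 if brightness > 0.5 and size > 0.5 else 0

# Learned model: its behaviour comes from exposure to labelled training examples.
X_train = [[0.9, 0.8], [0.7, 0.9], [0.2, 0.1], [0.1, 0.3], [0.8, 0.6], [0.3, 0.2]]
y_train = [1, 1, 0, 0, 1, 0]           # labels supplied with the training data

model = LogisticRegression()
model.fit(X_train, y_train)            # the "training itself on data" step

print(rule_based_label(0.6, 0.4))      # the rule returns whatever the programmer wrote
print(model.predict([[0.6, 0.4]]))     # the model returns whatever it inferred from data
```

The rule-based function behaves exactly as written, whereas the model's behaviour depends entirely on the examples it was shown; that shift is what the deep learning era represents.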

These deep learning algorithms are everywhere, shaping our preferences and behaviour. Facebook uses a set of algorithms to tailor what news stories an individual user sees and in what order. Bot activity on Twitter suppressed a protest against Mexico's now-president by overloading the hashtag used to organise the event. The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making. Wealth management firms are increasingly using algorithms to decide where to invest money. The practice of traders shouting and using hand signals to buy and sell commodities has become outdated on Wall Street as traders have been replaced by machines. And bots are now being used to analyse legal documents to point out potential risks and areas of improvement.

Much of the discussion on AI in popular media has been through the prism of job displacement. Analysts, however, differ widely on the projected impact: a 2016 study by the Organisation for Economic Co-operation and Development estimates that 9% of jobs will be displaced in the next two years, whereas a 2013 study by Oxford University puts job displacement at 47%. The staggering difference illustrates how much the impact of AI remains speculative.

Responding to the threat that automation poses to jobs will undoubtedly require revising existing education and skilling paradigms, but at present we also need to consider more fundamental questions about the purposes, values and accountability of AI machines. Interrogating these first-order concerns will eventually allow for a more systematic and systemic response to the job displacement challenge as well.

First, what purpose do we want to direct AI technologies towards? AI technologies can undoubtedly create tremendous productivity and efficiency gains. AI might also allow us to solve some of the most complex problems of our time. But we need to make political and social choices about the parts of human life in which we want to introduce these technologies, at what cost and to what end.

Technological advancement has resulted in a growth in national incomes and GDP, yet the share of national income that goes to labour has dropped in developing countries. Productivity and efficiency gains are thus not in themselves conclusive indicators of where to deploy AI; rather, we need to consider the distribution of these gains. Productivity gains are also not equally beneficial to all: incumbents with data and computational power will be able to use AI to gain insight and market advantage.

Moreover, a bot might be able to make more accurate judgments about worker performance and future employability, but we need to have a more precise handle on the problem that is being addressed by such improved accuracy. AI might be able to harness the power of big data to address complex social problems. Arguably, however, our inability to address these problems has not been a result of incomplete data; for a number of decades now we have had enough data to make reasonable estimates about the appropriate course of action. It is the lack of political will and social and cultural behavioural patterns that have posed obstacles to action, not the lack of data. The purpose of AI in human life must not be merely assumed as obvious, or subsumed under the banner of innovation, but be seen as involving complex social choices that must be steered through political deliberation.

This then leads to a second question about the governance of AI: who should decide where AI is deployed, how should these decisions be made, and on what principles and priorities? Technology companies, particularly those that have the capital to invest in AI capacities, are predominantly leading current discussions. Eric Horvitz, managing director of the Microsoft Research Lab, launched the One Hundred Year Study on Artificial Intelligence based out of Stanford University. The Stanford report makes the case for industry self-regulation, arguing that attempts to regulate AI in general would be misguided, as there is no clear definition of AI and the risks and considerations differ widely across domains.

The White House Office of Science and Technology Policy recently released a report, Preparing for the Future of Artificial Intelligence, but accorded a minimal role to the government as regulator. Rather, the question of governance is left to the supposed ideal of innovation: AI will fuel innovation, which will fuel economic growth, and this will eventually benefit society as well. The trouble with such innovation-fuelled self-regulation is that the development of AI will be concentrated in those areas in which there is a market opportunity, not necessarily the areas that are the most socially beneficial. Technology companies are not required to consider issues of long-term planning and the sharing of social benefits, nor can they be held politically and socially accountable.

Earlier this year, a set of principles for Beneficial AI was articulated at the Asilomar Conference; the star speakers and panelists were predominantly from large technology companies like Google, Facebook and Tesla, alongside a few notable scientists, economists and philosophers. Notably missing from the list of speakers were governments, journalists and the public and their concerns. The principles make all the right points, clustering around the ideas of beneficial intelligence, alignment with human values and the common good, but they rest on fundamentally tenuous value questions about what constitutes human benefit, a question that demands much wider and more inclusive deliberation, and one that must be led by government for reasons of democratic accountability and representativeness.

What is noteworthy about the White House report in this regard is the attempt to craft a public deliberative process: the report followed five public workshops and an official Request for Information on AI.

The trouble is not only that most of these conversations about the ethics of AI are being led by the technology companies themselves, but also that governments and citizens in the developing world are yet to start such deliberations; they are in some sense the passive recipients of technologies that are being developed in specific geographies but deployed globally. The Stanford report, for example, attempts to define the issues that citizens of a typical North American city will face as computers and robotic systems come to mimic human capabilities. Surely these concerns will look very different across much of the globe. The conversation in India has mostly been clustered around issues of jobs and the need for spurring AI-based innovation to accelerate growth and safeguard strategic interests, with almost no public deliberation around broader societal choices.

The concentration of an AI epistemic community in certain geographies and demographics leads to a third key question about how artificially intelligent machines learn and make decisions. As AI becomes involved in high-stakes decision-making, we need to understand the processes by which such decision making takes place. AI consists of a set of complex algorithms built on data sets. These algorithms will tend to reflect the characteristics of the data that they are fed. This then means that inaccurate or incomplete data sets can also result in biased decision making. Such data bias can occur in two ways.

First, if the data set is flawed or inaccurately reflects the reality it is supposed to represent. If, for example, a system is trained on photos of people that are predominantly white, it will have a harder time recognising non-white people. This kind of data bias is what led a Google application to tag black people as gorillas, or the Nikon camera software to misread Asian people as blinking. Second, if the process being measured through data collection itself reflects long-standing structural inequality. ProPublica found, for example, that software being used to assess the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.
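As a rough, hypothetical illustration of the first kind of bias, the following Python sketch trains a simple classifier on a sample in which one synthetic group is heavily under-represented, then measures accuracy on each group separately. All data, group labels and numbers here are invented for illustration, not drawn from the systems described above.

```python
# Hypothetical sketch: a model trained on a skewed sample makes more errors
# on the under-represented group. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two synthetic "groups" whose feature distributions (and decision
    # boundaries) differ slightly.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training set: 950 examples from group A, only 50 from group B (skewed sample).
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced, unseen data from each group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

On such skewed training data, accuracy on the under-represented group will typically be noticeably lower; this is the mechanism behind the misrecognition examples above.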

What these examples suggest is that AI systems can end up reproducing existing social bias and inequities, contributing towards the further systematic marginalisation of certain sections of society. Moreover, these biases can be amplified as they are coded into seemingly technical and neutral systems that penetrate across a diversity of daily social practices. It is, of course, an epistemic fallacy to assume that we can ever have complete data on any social or political phenomena or peoples. Yet there is an urgent need to improve the quality and breadth of our data sets, as well as to investigate any structural biases that might exist in these data; how we would do this is hard enough to imagine, let alone implement.

The danger that AI will reflect and even exacerbate existing social inequities leads finally to the question of the agency and accountability of AI systems. Algorithms represent much more than code, as they exercise authority on behalf of organisations across various domains and have real and serious consequences in the analog world. However, the difficult question is whether this authority can be considered a form of agency that can be held accountable and culpable.

Recent studies suggest, for example, that algorithmic trading between banks was at least partly responsible for the financial crisis of 2008; the crash of sterling in 2016 has similarly been linked to a panicky bot-spiral. Recently, both Google's and Tesla's self-driving cars caused crashes; in the Tesla case, the crash was fatal, with a man dying while using Tesla's Autopilot function. Legal systems across the world are not yet equipped to respond to the issue of culpability in such cases, and the many more that we are yet to imagine. Neither is it clear how AI systems will respond to ethical conundrums like the famous trolley problem, nor how human-AI interaction on ethical questions will be influenced by cultural differences across societies or over time. The question comes down to the legal liability of AI, and whether it should be considered a subject or an object.

The trouble with speaking about accountability also stems from the fact that AI is intended to be a learning machine. It is this capacity to learn that marks the newness of the current technological era, and this capacity to learn that makes it possible to even speak of AI agency. Yet machine learning is not a hard science; rather, its outcomes are unpredictable and can only be fully known after the fact. Until Google's app labels a black person as a gorilla, Google may not even know what the machine has learnt; this leads to an incompleteness problem for political and legal systems that are charged with the governance of AI.

The question of accountability also comes down to one of visibility. Any inherent bias in the data on which an AI machine is trained is invisible and incomprehensible to most end users. This inability to review the data reduces the agency and capacity of individuals to resist, or even recognise, the discriminatory practices that might result from AI. AI technologies thus exercise a form of invisible but pervasive power, which also obscures the possible points or avenues for resistance. The challenge is to make this power visible and accessible. Companies responsible for these algorithms keep their formulas secret as proprietary information. However, the far-ranging impact of AI technologies necessitates algorithmic transparency, even if it reduces the competitive advantage of the companies developing these systems. A profit motive cannot be blindly prioritised if it comes at the expense of social justice and accountability.

When we talk about AI, we need to talk about jobs: both the jobs that will be lost and the opportunities that will arise from innovation. But we must also tether these conversations to questions about the purpose, values, accountability and governance of AI. We need to think about the distribution of productivity and efficiency gains and about broader questions of social benefit and well-being. Given the various ways in which AI systems exercise power in social contexts, that power needs to be made visible to facilitate conversations about accountability. And responses have to be calibrated through public engagement and democratic deliberation; the ethics and governance questions around AI cannot be left to market forces alone, even in the name of innovation.

Finally, there is a need to move beyond the universalising discourse around technology: technologies will be deployed globally and with global impact, but the nature of that impact will be mediated through local political, legal, cultural and economic systems. There is an urgent need to expand the AI epistemic community beyond the specific geographies in which it is currently clustered, and to provide resources and opportunities for broader and more diverse public engagement.

Urvashi Aneja is Founding Director of Tandem Research, a multidisciplinary think tank based in Socorro, Goa that produces policy insights around issues of technology, sustainability and governance. She is Associate Professor at the Jindal School of International Affairs and Research Fellow at the Observer Research Foundation.
