Why AI systems should disclose that they’re not human – Fast Company

Posted: January 31, 2020 at 9:45 am

By Alex C. Engler | 4 minute read

We are nearing a new age of ubiquitous AI. Between your smartphone, computer, car, smart home, and social media, you might interact with some sort of automated, intelligent system dozens of times every day. For most of your interactions with AI, it will be obviously and intentionally clear that the text you read, the voice you hear, or the face you see is not a real person. However, other times it will not be so obvious. As automated technologies quickly and methodically climb out of the uncanny valley, customer service calls, website chatbots, and interactions on social media may become progressively less evidently artificial.

This is already happening. In 2018, Google demoed a technology called Duplex, which calls restaurants and hair salons to make appointments on your behalf. At the time, Google faced a backlash for using an automated voice that sounds eerily human, even employing vocal tics like "um," without disclosing its robotic nature. Perversely, today's Duplex has the opposite problem. The automated system does disclose itself, but at least 40% of its calls have humans on the phone, and it's very easy for call recipients to confuse those real people with AI.

As I argue in a new Brookings Institution paper, there is clear and immediate value to a broad requirement of AI disclosure in this case and many others. Mandating that companies explicitly note when users are interacting with an automated system can help reduce fraud, improve political discourse, and educate the public.

The believability of these systems is driven by AI models of human language, which are rapidly improving. This is a boon for applications that benefit society, such as automated closed-captioning and language translation. Unfortunately, corporations and political actors are going to find many reasons to use this technology to duplicitously present their software as real people. And companies have an incentive to deceive: A recent randomized experiment showed that when chatbots did not disclose their automated nature, they outperformed inexperienced salespeople. When the chatbot revealed itself as artificial, its sales performance dropped by 80%.

Harmless chatbots can help us order pizza or choose a seasonal flannel, but others are starting to offer financial advice and become the first point of contact in telemedicine. While there are benefits to these systems, it is naïve to think they are exclusively designed to inform customers; they may also be intended to change behavior, especially toward spending more money. We can also expect to eventually find AI systems behind celebrity chatbots on platforms such as Instagram and WhatsApp. They will be pitched as a way to bring stars and their fans closer together, but in reality their goals may be to sell pseudoscientific health supplements or expensive athleisure brands. As the technology improves and the datasets expand, AI will only get more effective at driving sales, and customers should have a right to know this influence is coming from automated systems.

Undisclosed algorithms are a problem in political discourse, too. BuzzFeed News has reported that the industry group Broadband for America was behind the 2017 effort to submit millions of fake comments supporting the repeal of net neutrality to the Federal Communications Commission, sometimes using the names of the deceased. The coalition of companies, which includes AT&T, Cox, and Comcast, has faced no consequences for its deceptive use of automation, and the proliferation of AI technologies will only make this kind of political campaign easier in the future.

Bots operating on social media should also be clearly labeled as automated. During the 2016 election, 12.6% of politically engaged Twitter accounts were bots, accounting for 20% of all political tweets. Twitter deserves credit for actively fighting organized disinformation from bots and for making its data available for research. But the scale of the problem is less well understood on other digital platforms. The numerous political bot campaigns on WhatsApp and the recent discovery of hundreds of AI-generated Facebook profiles suggest that the influence of automated systems on social media is an extensive problem. Although claims that bots are responsible for swinging major elections are likely overblown, research shows that they can further polarization and reduce dissenting opinions. Bots have also been observed spreading dangerous pseudoscientific messages, for instance against the MMR vaccine. While enforcing bot disclosure is difficult for social media companies, I argue in the Brookings paper that it's worth holding the digital platforms accountable to some standards.

Beyond fighting commercial fraud and deceptive politics, there are other advantages to an expansive AI disclosure requirement. If people know when they are interacting with AI systems, they will learn algorithms' strengths and limitations through repeated exposure. This is important, since understanding AI is complicated, and most people are misled by the portrayals of incredibly intelligent AI in our popular culture; think Westworld, Battlestar Galactica, and Ex Machina. In reality, today's AI systems are narrow AI, meaning they may perform remarkably well at some tasks while being utterly infantile in others.

Since it would reduce deceptive political and commercial applications, requiring AI systems to disclose themselves turns out to be low-hanging fruit in the typically complex bramble of technology policy. We can't foresee the ways in which AI will be used in the future, but we are only in the first decade of modern AI. Now is the time to set a standard for transparency.

Alex Engler is a David M. Rubenstein Fellow at the Brookings Institution, where he studies the governance of artificial intelligence and emerging technology. He is also an adjunct professor and affiliated scholar at Georgetown University's McCourt School of Public Policy, where he teaches courses on data science for policy analysis.
