The AI community needs to take responsibility for its technology and its actions – MIT Technology Review

Posted: December 13, 2019 at 2:22 pm

On Monday, at the opening of one of the world's largest gatherings of AI researchers, Celeste Kidd addressed thousands of attendees in a room nearly twice the size of a football field. She was not pulling her punches.

"There's no such thing as a neutral platform," the influential scientist and prominent #metoo figurehead told those gathered at the NeurIPS conference in Vancouver. "The algorithms pushing content online have profound impacts on what we believe."

Kidd, a professor of psychology at the University of California, Berkeley, is known within her field for making important contributions to our understanding of theory of mind: how we acquire knowledge and how we form beliefs. Two years ago, she also became known to the wider world when Time named her a Person of the Year, among others who spoke out against sexual abuse and harassment.

On stage, Kidd shared five lessons from her research and demonstrated how the tech industry's decisions could influence people to develop false beliefs (denying climate change, for example). Near the end of her talk, she also shared her experience with sexual harassment as a graduate student and directly addressed some of the misunderstandings she'd heard about the #metoo movement from men.

"It may seem like a scary time to be a man in tech right now," she said to the conference-goers, roughly 80% of whom are men this year. "There's a sense that a career could be destroyed over awkward passes or misunderstandings."

"What I want to say today to all of the men in the room is that you have been misled," she said.

Her talk received a standing ovation, a rare moment in the conference's history.

Kidd's remarks come at a time when the AI community, and the tech industry more broadly, has been forced to reckon with the unintentional harms of its technologies. In the past year alone, a series of high-profile cases have exposed how deepfakes can be used to abuse women, how algorithms can make discriminatory decisions in health care and credit lending, and how developing AI models can be immensely costly for the environment. At the same time, the community has been rocked by several sexual abuse and harassment scandals, including some over incidents at previous years of the conference itself. It has also continued to suffer from appalling diversity numbers.

But Kidd's talk highlighted an important shift that has begun to happen, one that was felt palpably in the room that night. After her talk, dozens of people lined up at the microphones scattered around the room to thank her for speaking out about these issues. Dozens more gathered around her after the session, some just to shake her hand in gratitude. To attendees who remember the annual gathering even two years ago, there is a new openness to acknowledging these challenges and a renewed focus on doing better.

The day after her talk, I sat down with Kidd to talk more about the two messages she delivered, how they are related, and her hopes for the future.

This interview has been edited and condensed for clarity.

In the research portion of your talk, you ended with your message: "There's no such thing as a neutral platform." How did you arrive at this conclusion from your research?

Something I've only realized in the past few years, because of my interactions with my two graduate students, is that there's not really a distinction between knowledge and beliefs. Those are the same thing, basically.

Now we're moving toward understanding how these dynamics that we've observed in lab experiments extend to the real world. When somebody goes to the internet not sure of what they should believe, what do they tend to walk away with from these "neutral" searches? Can we use those same kinds of ideas to try to explain why people believe the earth is flat, and why those misconceptions don't get corrected? That's not an area that I have seen a lot of attention on, but it's one that I think is very important.

Why was it important for you to share your message at this conference?

So much of what we believe now comes from online sources. Especially kids: they are forming the building blocks of knowledge that will later shape what they believe in, what they're interested in learning about downstream. For young kids, there's also reason to expect that they are consuming more autoplay and suggested content than adults. So initially they're more at risk of being influenced by the algorithm pushing content, because that's their only choice.

My talk was intended as a message to people working on the systems to be considerate about how those back-end decisions influence an individual person's beliefs, but also society as a whole. I don't think there's enough sensitivity in tech to how the decisions that you make behind the scenes about how to push content impact people's lives.

There's a common battle cry when questions come up about how content is offered: the claim that platforms are neutral. And I think that's dishonest. The back-end decisions that you make directly influence what people believe, and people know this. So to pretend like that's not a thing is dishonest.

When we change people's behavior, what we are doing is changing their beliefs. And those changes have real, concrete consequences. When a parent searches for information about whether or not they should vaccinate their child, if they walk up to their laptop undecided and they walk away decided, it really matters what content was offered, what views were represented.

I don't think it's reasonable to say you don't have any responsibility for what a mother does to her child (whether she decides to vaccinate them or not) because that was not something you considered when you built the system. I think you have a responsibility to consider what the repercussions of the back-end decisions are.

You mentioned in the private Q&A after your talk that you've never presented both your research and your experiences of sexual harassment in a public forum. Why do you usually keep the two separate, and why did you decide to combine them this time?

I'll start with the second one: I made an exception to the rule in this case because I thought it was very important for this community to hear that message. Computer science is a field where women have had a really difficult time for a long time getting traction and breaking in. There's a high degree of interest early on, and then there's a leaky pipe. And I know that one of the things that makes it very hard to do well as a woman in this field is having fewer mentorship opportunities.

I know that it's very common that men in computer science with good intentions are worried about offending women. The downstream implication of that is that women are losing out on training opportunities, but also that the men are losing out on the ideas and innovation that the women would bring. Empirical studies show that diversity leads to higher rates of innovation. And with the opportunity to talk to a large portion of these men in one room all at once, I felt like it was important, and I had to do that.

The reason why I usually don't mix them: I didn't choose what happened to me my first year of grad school at Rochester. I didn't choose what the university's response would be. I wanted a career in science and I want to protect that, so I don't want to do less talking about science because I've spoken out on this issue. But I'm also aware that most people don't get the opportunity; they don't get a platform to speak out. Usually, what happens to people who were sexually harassed early in their careers and had their institution retaliate against them is that they disappear. I wouldn't feel okay doing nothing. People who have privilege need to use it where they can. And this was an opportunity to use the privilege of giving a talk at NeurIPS to help the more junior women who deserve equal treatment.

Were you worried about the way these comments would land?

Of course. But being afraid of the response is not a reason not to speak. I talked a little bit about privilege. I'm also in a relatively privileged position at this particular conference, because there are so many people in industry, and I think the pressures to keep people silent are greater at companies than they are in academia, at least in tech right now. So if I were worried about being fired, that would be an extra thing keeping me quiet. UC Berkeley was aware of my speaking out on these issues before they hired me, and they've shown me nothing but support and encouragement in fighting for equity. By being in a place that supports me like that, I can say things without fear of losing my job and not being able to pay for food for my child. And that's the other reason I felt like I should speak.

I was fully expecting some people to be angry. It's 13,000 people; of course some people may misunderstand me. I literally talked about how, when we use words, they're ambiguous and people activate different concepts. It's not possible to convey to all of those people exactly what you have in mind.

Even though you usually keep your talks about your research and your activism separate, and you separated them into two sections at NeurIPS, to me they really address the same thing: how to develop responsible technology by taking more responsibility for your decisions and actions.

Right. You mentioned to me that there's more talk in the AI community about the ethical implications, that there's more appreciation for the connection between the technology and society. And I think part of that comes from the community becoming more diverse and involving more people. They come from a less privileged place and are more acutely aware of things like bias and injustice, and how technologies that were designed for a certain demographic may actually do harm to disadvantaged populations.

I hope that continues. We have to be talking to each other in order to make progress. And we all deserve to feel safe when interacting with each other.
