We need to design distrust into AI systems to make them safer – MIT Technology Review


It's interesting that you're talking about how, in these kinds of scenarios, you have to actively design distrust into the system to make it safer.

Yes, that's what you have to do. We're actually trying an experiment right now around the idea of denial of service. We don't have results yet, and we're wrestling with some ethical concerns. Because once we talk about it and publish the results, we'll have to explain why sometimes you may not want to give AI the ability to deny a service either. How do you remove service if someone really needs it?

But here's an example with the Tesla distrust thing. Denial of service would be: I create a profile of your trust, which I can do based on how many times you deactivated or disengaged from holding the wheel. Given those profiles of disengagement, I can then model at what point you are fully in this trust state. We have done this, not with Tesla data, but with our own data. And at a certain point, the next time you come into the car, you'd get a denial of service. You do not have access to the system for X time period.
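As a rough illustration of the idea, here is a minimal sketch of a disengagement-based trust profile with a denial-of-service lockout. The window size, threshold, and lockout period are invented for illustration; they are not the values or the model from the study described above.

```python
from dataclasses import dataclass, field

@dataclass
class TrustProfile:
    """Hypothetical per-driver trust profile built from wheel-disengagement counts."""
    sessions: list = field(default_factory=list)  # disengagements per drive

    def record_session(self, disengagements: int) -> None:
        self.sessions.append(disengagements)

    def overtrusting(self, window: int = 5, threshold: float = 0.5) -> bool:
        # Treat a near-zero disengagement rate over recent drives as a proxy
        # for being fully in the trust state (an assumed heuristic).
        recent = self.sessions[-window:]
        if len(recent) < window:
            return False  # not enough history to judge yet
        return sum(recent) / window < threshold

def check_access(profile: TrustProfile, lockout_hours: int = 24) -> str:
    # Denial of service: lock the driver out for a fixed period
    # once the profile says they have stopped supervising the system.
    if profile.overtrusting():
        return f"Denied: no access to the system for {lockout_hours} hours."
    return "Granted."

profile = TrustProfile()
for n in [0, 0, 1, 0, 0]:       # disengagements have dropped to near zero
    profile.record_session(n)
print(check_access(profile))    # Denied: no access to the system for 24 hours.
```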

It's almost like when you punish a teenager by taking away their phone. You know that teenagers won't do whatever it is that you didn't want them to do if you link it to their communication modality.

The other methodology we've explored is roughly called explainable AI, where the system provides an explanation with respect to some of its risks or uncertainties. Because all of these systems have uncertainty; none of them are 100% accurate. And a system knows when it's uncertain. So it could provide that as information in a way a human can understand, so people will change their behavior.

As an example, say I'm a self-driving car, and I have all my map information, and I know certain intersections are more accident-prone than others. As we get close to one of them, I would say, "We're approaching an intersection where 10 people died last year." You explain it in a way that makes someone go, "Oh, wait, maybe I should be more aware."
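A minimal sketch of what surfacing that information might look like. The accident-history lookup, the confidence threshold, and the message wording are all hypothetical, not part of any real autopilot API.

```python
def explain_risk(route_segment: str, accident_history: dict, confidence: float) -> str:
    """Turn known risk data and model uncertainty into a plain-language alert.

    `route_segment`, `accident_history`, and the 0.9 threshold are
    illustrative placeholders invented for this sketch.
    """
    messages = []
    fatalities = accident_history.get(route_segment, 0)
    if fatalities > 0:
        messages.append(
            f"We're approaching an intersection where {fatalities} people died last year."
        )
    if confidence < 0.9:
        # Expose the system's own uncertainty in terms a human can act on.
        messages.append(
            f"I'm only {confidence:.0%} confident here; please watch the road."
        )
    return " ".join(messages) or "No elevated risk detected."

# Example: a known accident-prone intersection plus moderate model confidence
history = {"5th & Main": 10}
print(explain_risk("5th & Main", history, confidence=0.72))
```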

The negatives are really linked to bias. That's why I always talk about bias and trust interchangeably. Because if I'm overtrusting these systems, and these systems are making decisions that have different outcomes for different groups of individuals (say, a medical diagnosis system that performs differently for women than for men), we're now creating systems that augment the inequities we currently have. That's a problem. And when you link it to things that are tied to health or transportation, both of which can lead to life-or-death situations, a bad decision can actually lead to something you can't recover from. So we really have to fix it.
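The kind of disparity described here is measurable. Below is a minimal sketch of a per-group audit; the data, group labels, and the choice of false-negative rate as the metric are invented for illustration.

```python
from collections import defaultdict

def per_group_false_negative_rates(records: list) -> dict:
    """Compute the false-negative rate for each demographic group.

    `records` is a list of (group, predicted, actual) tuples with
    boolean predicted/actual diagnosis labels; purely illustrative.
    """
    misses = defaultdict(int)
    positives = defaultdict(int)
    for group, predicted, actual in records:
        if actual:
            positives[group] += 1
            if not predicted:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# A system that misses true cases far more often for one group is
# augmenting existing inequities, even if overall accuracy looks fine.
data = [
    ("women", False, True), ("women", True, True), ("women", False, True),
    ("men", True, True), ("men", True, True), ("men", False, True),
]
print(per_group_false_negative_rates(data))  # women ≈ 0.67, men ≈ 0.33
```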

The positives are that automated systems are, in general, better than people. I think they can be even better, but I personally would rather interact with an AI system in some situations than with certain humans in others. Like, I know it has some issues, but give me the AI. Give me the robot. They have more data; they are more accurate, especially compared with a novice person. It's a better outcome. It just might be that the outcome isn't equal.

It's important to me because I can identify times in my life where someone basically provided me access to engineering and computer science. I didn't even know it was a thing. And that's really why, later on, I never had a problem with knowing that I could do it. And so I always felt that it was just my responsibility to do the same thing for others that someone had done for me. As I got older, I also noticed that there were a lot of people who didn't look like me in the room. So I realized: wait, there's definitely a problem here, because people just don't have the role models, they don't have access, they don't even know this is a thing.

And why it's important to the field is because everyone has a different experience. Just like I'd been thinking about human-robot interaction before it was even a thing. It wasn't because I was brilliant. It was because I looked at the problem in a different way. And when I'm talking to someone who has a different viewpoint, it's like, "Oh, let's try to combine and figure out the best of both worlds."

Airbags kill more women and kids. Why is that? Well, I'm going to say that it's because someone wasn't in the room to say, "Hey, why don't we test this on women in the front seat?" There are a bunch of problems that have killed or been hazardous to certain groups of people. And I would claim that if you go back, it's because you didn't have enough people who could say, "Hey, have you thought about this?" because they're talking from their own experience and from their environment and their community.

If you think about coding and programming, pretty much everyone can do it. There are so many organizations now, like Code.org. The resources and tools are there. I would love to have a conversation with a student one day where I ask, "Do you know about AI and machine learning?" and they say, "Dr. H, I've been doing that since the third grade!" I want to be shocked like that, because that would be wonderful. Of course, then I'd have to think about what my next job is, but that's a whole other story.

But I think when you have the tools of coding and AI and machine learning, you can create your own jobs, you can create your own future, you can create your own solution. That would be my dream.
