Diverse AI teams are key to reducing bias – VentureBeat

Posted: July 23, 2021 at 4:14 am


An Amazon-built resume-rating algorithm, when trained on men's resumes, taught itself to prefer male candidates and penalize resumes that included the word "women."

A major hospital's algorithm, when asked to assign risk scores to patients, gave white patients similar scores to Black patients who were significantly sicker.

"If a movie recommendation is flawed, that's not the end of the world. But if you are on the receiving end of a decision [that] is being used by AI, that can be disastrous," Huma Abidi, senior director of AI software products and engineering at Intel, said during a session on bias and diversity in AI at VentureBeat's Transform 2021 virtual conference. Abidi was joined by Yakaira Núñez, senior director of research and insights at Salesforce, and Fahmida Y. Rashid, executive editor of VentureBeat.

In order to produce fair algorithms, the data used to train AI needs to be free of bias. For every dataset, you have to ask where the data came from, whether it is inclusive, and whether it has been updated, among other questions. And you need to use model cards, checklists, and risk management strategies at every step of the development process.
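As an illustration only, the kind of provenance questions described above can be encoded as a lightweight, model-card-style record that is audited before training. The field names, the `audit` helper, and the example values below are assumptions for this sketch, not a tool mentioned in the article:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetCard:
    """Minimal record of a training dataset's provenance (illustrative only)."""
    name: str
    source: str                # where the data was collected
    last_updated: str          # when the data was last refreshed
    groups_represented: list = field(default_factory=list)

def audit(card: DatasetCard, required_groups: list) -> list:
    """Return warnings for missing provenance or representation."""
    warnings = []
    if not card.source:
        warnings.append("no recorded data source")
    if not card.last_updated:
        warnings.append("no record of when the data was updated")
    missing = [g for g in required_groups if g not in card.groups_represented]
    if missing:
        warnings.append(f"underrepresented groups: {missing}")
    return warnings

# A card mirroring the resume-screening example from the article:
card = DatasetCard(name="resume-screening",
                   source="historical applicant pool",
                   last_updated="2021-06",
                   groups_represented=["men"])
print(audit(card, required_groups=["men", "women"]))
# → ["underrepresented groups: ['women']"]
```

A check like this does not make a dataset fair on its own, but it forces the "where did this data come from" questions to be answered explicitly and early, which is the point of model cards and checklists.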

"The best possible framework is that we were actually able to manage that risk from the outset: we had all of the actors in place to be able to ensure that the process was inclusive, bringing the right people in the room at the right time that were representative of the level of diversity that we wanted to see in the content. So risk management strategies are my favorite. I do believe, in order for us to really mitigate bias, that it's going to be about risk mitigation and risk management," Núñez said.

Make sure that diversity is more than just a buzzword and that your leadership teams and speaker panels are reflective of the people you want to attract to your company, Núñez said.

When thinking about diversity, equity, and inclusion work, or bias and racism, the most impact tends to be in areas in which individuals are most at risk, Núñez said. Health care, finance, and legal situations, including anything involving police and child welfare, are all sectors where bias causes the most harm when it shows up. So when people are working on AI initiatives in these spaces to increase productivity or efficiency, it is even more critical that they think deliberately about bias and the potential for harm. Each person is accountable and responsible for managing that bias.

Núñez discussed how the responsibility of a research and insights leader is to curate data so executives can make informed decisions about product direction. Núñez is not just thinking about the people pulling the data together, but also the people who may not be in the target market, to give insight into people Salesforce would not have known anything about otherwise.

Núñez regularly asks the team to think about bias and whether it is present in the data, such as asking whether the panel of individuals for a project is diverse. If the feedback does not come from an environment representative of the target ecosystem, that feedback is less useful.

"Those questions are the small little things that I can do at the day-to-day level to try to move the needle a bit at Salesforce," Núñez said.

Research has shown that minorities often have to "whiten" their résumés in order to get callbacks and interviews. Companies and organizations can weave diversity and inclusion into their stated values to address this issue.

"If it's already not part of your core mission statement, it's really important to add those things: diversity, inclusion, equity. Just doing that, by itself, will help a lot," Abidi said.

It's important to integrate these values into corporate culture because of the interdisciplinary nature of AI: "It's not just engineers; we work with ethicists, we have lawyers, we have policymakers. And all of us come together in order to fix this problem," Abidi said.

Additionally, corporate commitments to fix gender and minority imbalances give recruitment teams a concrete end goal: Intel wants women in 40% of technical roles by 2030, and Salesforce is aiming to have 50% of its U.S. workforce made up of underrepresented groups, including women, people of color, LGBTQ+ employees, people with disabilities, and veterans.
