Does technology increase the problem of racism and discrimination? – TechTarget

Technology was designed to perpetuate racism. That is the argument of a recent article in the MIT Technology Review by Charlton McIlwain, professor of media, culture and communication at New York University and author of Black Software: The Internet & Racial Justice, From the AfroNet to Black Lives Matter.

The article explains how the Black and Latino communities in the United States are harmed by the way technological tools are configured. Facial recognition, for example, is programmed to analyze people's physical features and, in many cases, generates alerts of possible risk when it detects individuals whose facial features identify them as Black or Latino.

"We've designedfacial recognition technologiesthat target criminal suspects on the basis of skin color. We'vetrained automated risk profiling systemsthat disproportionately identify Latinx people as illegal immigrants. We'vedevised credit scoring algorithmsthat disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs," McIlwain wrote.

In the article, the author traces the origins of the use of algorithms in politics: to win elections, understand the social climate and prepare psychological campaigns to modify the social mood, which in the late 1960s was tense in the United States. These efforts, however, paved the way for large-scale surveillance of the areas where there was the most unrest, which at the time meant the Black community.

According to McIlwain, "this kind of information had helped create what came to be known as 'criminal justice information systems.' They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated."

Contact tracing and threat-mapping technologies designed to monitor and contain the COVID-19 pandemic did not help improve the racial climate. On the contrary, these applications showed a high rate of contagion among Black people, Latinos and the Indigenous population.

Although this statistic could be interpreted as evidence of a lack of quality, timely medical services for members of those communities, the information was presented as if Black, Latino and Indigenous people were a national problem and a contagion threat. Donald Trump himself made comments to this effect, asking that the southern border be reinforced to prevent Mexicans and Latinos from entering the country and adding to an already high number of COVID-19 patients.

McIlwain's fear -- and that of other members of the Black community in the United States -- is that the new applications created as a result of the pandemic will be used to recognize protesters and later "quell the threat." He presumably refers to persecution and arrests, which may well end in jail or in disappearances.

"If we dont want our technology to be used to perpetuate racism, then we must make sure that we dont conflate social problems like crime or violence or disease with black and brown people. When we do that, we risk turning those people into the problems that we deploy our technology to solve, the threat we design it to eradicate," concludes the author.

Although artificial intelligence and machine learning feed applications and enrich them, the original programming is still done by a human (or several). The people who created the program or application are the ones who initially define the parameters for its algorithms. A lack of well-defined criteria can result in generalizations, and those generalizations can lead to discriminatory or racist outcomes.
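To make that concrete, here is a minimal, purely hypothetical sketch in Python (it does not model any vendor's real system; the group names, threshold and noise levels are invented for illustration): a programmer hard-codes a single decision threshold, and because the simulated match scores are noisier for one group, the same "neutral" parameter produces very different false-match rates.

# Hypothetical illustration, not any real system: one developer-chosen
# threshold applied uniformly can still yield unequal error rates when
# the underlying scores are less reliable for one group.
import random

random.seed(42)

THRESHOLD = 0.5  # parameter fixed by the programmer, not learned from data


def match_score(is_same_person: bool, group: str) -> float:
    """Simulate a face-match score; scores for 'group_b' are noisier."""
    noise = 0.10 if group == "group_a" else 0.30
    base = 0.8 if is_same_person else 0.2
    return min(1.0, max(0.0, random.gauss(base, noise)))


def false_match_rate(group: str, trials: int = 10_000) -> float:
    """How often two different people are wrongly flagged as a match."""
    errors = sum(match_score(False, group) >= THRESHOLD for _ in range(trials))
    return errors / trials


for group in ("group_a", "group_b"):
    print(group, f"false match rate: {false_match_rate(group):.1%}")

Running it shows the noisier group being falsely matched far more often, even though the code contains no explicit reference to that group in its decision rule.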

The British newspaper The Guardian reported, a few years ago, that one of Google's algorithms auto-tagged images of Black people as gorillas. Other companies, such as IBM and Amazon, avoid using facial recognition technology because of its discriminatory tendencies toward Black people, especially women.

"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," IBM executive director Arvind Krishna wrote in a letter sent to Congress in June."[T]he fight against racism is as urgent as ever," said Krishna, while announcing that IBM has ended its "general" facial recognition products and will not endorse the use of any technology for "mass surveillance, racial discrimination and human rights violations."

If we consider that the difference in error rate between identifying a white man and a Black woman is 34% in the case of IBM's software, according to a study by the MIT Media Lab, IBM's decision not only seems fair from a racial standpoint; it is also an acknowledgment of the work that lies ahead in programming increasingly accurate algorithms.

The 2018 MIT Media Lab study concluded that, although the average accuracy of these products ranges between 87.9% and 93.7%, the differences based on skin color and gender are notable: 93% of the errors made by Microsoft's product affected people with dark skin, and 95% of the errors made by Face++, a Chinese alternative, concerned women.
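A per-group audit of this kind is simple to reproduce in spirit. The sketch below uses made-up records (the group names echo the study's skin-type and gender breakdown, but the data is purely illustrative, not the study's) to show how overall accuracy can look acceptable while errors concentrate in particular groups.

# Minimal sketch of a per-group error audit, in the spirit of the MIT
# Media Lab methodology. The records are invented placeholders; a real
# audit would use a benchmark labeled by skin type and gender.
from collections import defaultdict

# (group, predicted_label, true_label) -- illustrative data only
records = [
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_female", "female", "female"),
    ("darker_male", "male", "male"),
    ("darker_male", "female", "male"),      # misclassification
    ("darker_female", "male", "female"),    # misclassification
    ("darker_female", "female", "female"),
    ("darker_female", "male", "female"),    # misclassification
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    errors[group] += predicted != actual

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group:15s} error rate: {rate:.0%}  ({errors[group]}/{totals[group]})")

The aggregate error rate here is 3 out of 8, but every one of those errors falls on the darker-skinned groups, which is exactly the pattern a headline accuracy figure can hide.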

Joy Buolamwini, co-author of the MIT study and founder of the Algorithmic Justice League, sees IBM's initiative as a first step in holding companies accountable and promoting fair and accountable artificial intelligence. "This is a welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other People of Color," she said.

Another issue related to discrimination in the IT industry has to do with the language used to describe certain components of a network or systems architecture. Concepts like master/slave are being reformulated in favor of less objectionable terminology, and the same is happening with blacklist/whitelist. Developers will instead use terms like leader/follower and allowlist/blocklist, as in the sketch below.
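As a hypothetical before-and-after (the configuration keys and helper below are illustrative and not taken from any specific project), the renaming is usually a mechanical substitution:

# Hypothetical example of the terminology shift; keys and helper are
# illustrative, not from any real project or tool.

# Legacy naming convention
legacy_config = {
    "master_host": "10.0.0.1",
    "slave_hosts": ["10.0.0.2", "10.0.0.3"],
    "whitelist": ["10.0.0.0/24"],
    "blacklist": ["203.0.113.0/24"],
}

# Preferred, more descriptive naming
RENAMES = {
    "master_host": "leader_host",
    "slave_hosts": "follower_hosts",
    "whitelist": "allowlist",
    "blacklist": "blocklist",
}


def migrate(config: dict) -> dict:
    """Return a copy of the config with legacy keys renamed."""
    return {RENAMES.get(key, key): value for key, value in config.items()}


print(migrate(legacy_config))

The newer terms also tend to be clearer on their own: leader/follower describes the replication relationship directly, and allowlist/blocklist says what the list does rather than relying on a color metaphor.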

The Linux open source operating system will include new, inclusive terminology in its code and documentation. Linux kernel maintainer Linus Torvalds approved the new terminology on July 10, according to ZDNet.

GitHub, the Microsoft-owned code hosting platform, also announced a few weeks ago that it is working to remove such terms from its code.

These actions demonstrate the technology industry's commitment to creating tools that support the growth of society: inclusive systems, applications and technologies that help combat discrimination rather than foment racism.
