Meta Developed A New AI That Has A Propensity Towards Racist Language – Digital Information World

Posted: May 11, 2022 at 11:36 am

Meta recently revealed a new tool built to develop AI programs quickly and efficiently. Just one catch, though: the tool apparently has racist tendencies baked into it.

It's almost expected that AI, or even AI development systems, built by humans with inherent biases would ultimately come to reflect some of them. It's the ultimate fallacy of machines: no system in the world can be truly free of error and bias, especially since the quote-unquote unnatural ones, such as technological devices, are ultimately made by imperfect, natural beings. And yes, this is as philosophical as I intend on getting with the subject matter of technology; now, back to our regularly scheduled programming. It is interesting to note, however, that this is the second time in the past few years that we've come across racist AI being employed by social media platforms, which in and of itself feels like a phenomenon that either shouldn't have happened twice or should have happened many, many more times than that.

The example that I have in mind is Twitter's image-resizing AI. The short-form text platform (that's an indie band name if I've ever heard one) decided to employ an algorithm that would automatically crop and resize photos that don't fit Twitter's basic display, sparing users the effort of editing photos ahead of time. However, users quickly figured out that if a photo of a larger group of individuals was posted, minorities such as Black people kept getting cropped out of the frame. Some users even ran tests with this, concluding that the AI straight up started ignoring users that weren't white. So, let's be real: there's little chance that developers were actively attempting to make their technology racist, personal beliefs notwithstanding. However, this does display just how effectively racial bias seeps into every social crevice; the AI was probably trained on a database of reference photos, and those photos probably just had a ton of white people in them, since media channels aren't super hip on showing minorities except for scoring diversity points every now and then. This is, of course, speculative, and I'm willing to be educated on the actual reason. My point, however, still stands.

Meta's new system, named OPT-175B, was funnily enough outed for its less-than-scrupulous tendencies by the company's own researchers. In a report accompanying the system's test release, it was elaborated that OPT-175B had a tendency to generate toxic language reinforcing harmful stereotypes about individuals and races. I guess Meta wanted to stay ahead of the curve on this, and the researchers are still at work ironing out the new AI generator's kinks.
