Facebook, YouTube, and Twitter warn that AI systems could make mistakes

A day after Facebook announced it would rely more heavily on artificial-intelligence-powered content moderation, some users are complaining that the platform is making mistakes: it is blocking a slew of legitimate posts and links and flagging them as spam, including posts containing news articles about the coronavirus pandemic.

While trying to post, users appear to be getting a message that their content, sometimes just a link to an article, violates Facebook's community standards. "We work hard to limit the spread of spam because we do not want to allow content that is designed to deceive, or that attempts to mislead users to increase viewership," read the platform's rules.

The problem also comes as social media platforms continue to combat Covid-19-related misinformation. Some on social media are now floating the idea that Facebook's decision to send its contracted content moderators home might be the cause of the problem.

Facebook is pushing back against that notion. The company's vice president for integrity, Guy Rosen, tweeted that this is "a bug in an anti-spam system, unrelated to any changes in our content moderator workforce." Rosen said the platform is working on restoring the posts.
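Facebook hasn't said what the bug actually was, but the general failure mode of anti-spam systems is easy to sketch. The toy Python example below is purely illustrative, with made-up signals and thresholds rather than anything from Facebook's systems; it shows how a heuristic tuned to catch link farms can produce the same verdict for a widely shared news article.

```python
# Hypothetical illustration of an over-aggressive anti-spam rule.
# None of these signals or thresholds come from Facebook; they are
# assumptions chosen to show how legitimate news links get caught.

SPAM_KEYWORDS = {"free", "click", "winner", "prize"}

def looks_like_spam(post_text: str, recent_posts_with_same_link: int) -> bool:
    """Flag posts that share a widely shared link with little original text.

    A rule like this catches link farms, but during a big news event many
    people share the same article with a short comment, producing the
    same signal: one link, few words, posted at very high volume.
    """
    words = post_text.lower().split()
    keyword_hits = sum(1 for word in words if word in SPAM_KEYWORDS)
    return recent_posts_with_same_link > 1000 and (len(words) < 10 or keyword_hits >= 2)

# A legitimate coronavirus article shared by tens of thousands of people
# trips the same rule that an actual spam campaign would:
print(looks_like_spam("Read this coronavirus update", 50_000))  # True
```

The point of the sketch is that volume-based signals are symmetric: a scam link and a breaking-news link can look identical to a system that never reads the article behind them.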

Recode contacted Facebook for comment, and we'll update this post if we hear back.

The issue at Facebook serves as a reminder that any type of automated system can still screw up, and that fact might become more apparent as more companies, including Twitter and YouTube, depend on automated content moderation during the coronavirus pandemic. The companies say they're doing so to comply with social distancing, as many of their employees are forced to work from home. This week, they also warned users that, because of the increase in automated moderation, more posts could be taken down in error.

In a blog post on Monday, YouTube told its creators that the platform will turn to machine learning to help with some of the work normally done by reviewers. The company warned that the transition will mean some content will be taken down without human review, and that both users and contributors might see videos removed from the site that don't actually violate any of YouTube's policies.

The company also warned that unreviewed content may not be available via search, on the homepage, or in recommendations.
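YouTube hasn't described how that gating works internally, but the behavior it outlines, keeping unreviewed uploads watchable while withholding them from discovery surfaces, amounts to a simple visibility check. Here is a minimal sketch; the state names and fields are hypothetical, not YouTube's actual API.

```python
# Hypothetical sketch of the visibility gating YouTube describes:
# unreviewed uploads may stay watchable via direct link but are held
# out of search, the homepage, and recommendations until a human
# reviews them. States and names are assumptions, not YouTube's API.

from dataclasses import dataclass
from enum import Enum, auto

class ReviewState(Enum):
    UNREVIEWED = auto()      # machine-screened only, no human review yet
    HUMAN_APPROVED = auto()  # a reviewer confirmed it follows policy
    REMOVED = auto()         # taken down, by the model or a reviewer

@dataclass
class Video:
    video_id: str
    review_state: ReviewState

def eligible_for_discovery(video: Video) -> bool:
    """Only human-approved videos surface in search, home, or recommendations."""
    return video.review_state is ReviewState.HUMAN_APPROVED

def watchable(video: Video) -> bool:
    """Direct links keep working for anything that hasn't been removed."""
    return video.review_state is not ReviewState.REMOVED

video = Video("abc123", ReviewState.UNREVIEWED)
print(watchable(video), eligible_for_discovery(video))  # True False
```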

Similarly, Twitter has told users that the platform will increasingly rely on automation and machine learning to remove abusive and manipulated content. Still, the company acknowledged that artificial intelligence would be no replacement for human moderators.

"We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes," the company said in a blog post.

To compensate for potential errors, Twitter said it won't permanently suspend any accounts "based solely on our automated enforcement systems." YouTube, too, is making adjustments. "We won't issue strikes on this content except in cases where we have high confidence that it's violative," the company said, adding that creators would have the chance to appeal these decisions.
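Neither company publishes its thresholds, but both statements describe the same kind of gate: automation can take provisional, appealable action, while the harshest penalties require high confidence or a human in the loop. A minimal sketch, with made-up confidence values and action names:

```python
# Hypothetical enforcement gate reflecting the stated policies:
#  - Twitter: no permanent suspension from automation alone
#  - YouTube: no strike unless the system is highly confident
# The threshold and action names are assumptions for illustration.

HIGH_CONFIDENCE = 0.97  # assumed cutoff; neither company publishes one

def decide_action(violation_confidence: float, human_reviewed: bool) -> str:
    if human_reviewed:
        # Human reviewers can apply the full range of penalties.
        return "strike_or_suspend"
    if violation_confidence >= HIGH_CONFIDENCE:
        # Automation removes the content, but the decision is appealable.
        return "remove_with_appeal"
    # Lower confidence: act softly and queue for human review later.
    return "limit_reach_and_queue_for_review"

print(decide_action(0.99, human_reviewed=False))  # remove_with_appeal
print(decide_action(0.80, human_reviewed=False))  # limit_reach_and_queue_for_review
```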

Facebook, meanwhile, says it's working with its partners to send its content moderators home and to ensure that they're paid. The company is also exploring remote content review for some of its moderators on a temporary basis.

"We don't expect this to impact people using our platform in any noticeable way," the company said in a statement on Monday. "That said, there may be some limitations to this approach and we may see some longer response times and make more mistakes as a result."

The move toward AI moderators isn't a surprise. For years, tech companies have pushed automated tools as a way to supplement their efforts to fight the offensive and dangerous content that can fester on their platforms. Although AI can help content moderation move faster, the technology can also struggle to understand the social context for posts or videos and, as a result, make inaccurate judgments about their meaning. In fact, research has shown that algorithms designed to detect hate speech can be biased against black people, and the technology has been widely criticized for being vulnerable to discriminatory decision-making.
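One concrete way to see the context problem: a classifier keyed on surface features can't distinguish a word used abusively from the same word in reporting or commentary. The example below is a deliberately crude caricature, far simpler than any production model, but it shows the shape of the failure.

```python
# A deliberately crude keyword classifier, to illustrate context blindness.
# Real moderation models are far more sophisticated, but they can fail in
# the same direction when surface features outweigh context.

BLOCKED_TERMS = {"attack"}

def keyword_flag(text: str) -> bool:
    """Flag any text containing a blocked term, regardless of context."""
    return any(term in text.lower().split() for term in BLOCKED_TERMS)

# Same word, opposite intents; the rule cannot tell them apart.
print(keyword_flag("I am going to attack you"))                # True: abuse
print(keyword_flag("How cities plan to attack the outbreak"))  # True: news
```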

Normally, the shortcomings of AI have led platforms to rely on human moderators, who can better understand nuance. Human content reviewers, however, are by no means a perfect solution either, especially since they can be required to work long hours analyzing traumatic, violent, and offensive words and imagery. Their working conditions have recently come under scrutiny.

But in the age of the coronavirus pandemic, having reviewers work side by side in an office could be dangerous not only for them but also for the general public, by risking further spread of the virus. Keep in mind that these companies might be hesitant to allow content reviewers to work from home because those reviewers have access to lots of private user information, not to mention highly sensitive content.

Amid the novel coronavirus pandemic, content review is just another way we're turning to AI for help. As people stay indoors and move their in-person interactions online, we're bound to get a rare look at how well this technology fares when it's given more control over what we see on the world's most popular social platforms. Without the influence of the human reviewers we've come to expect, this could be a heyday for the robots.

Update, March 17, 2020, 9:45 pm ET: This post has been updated to include new information about Facebook posts being flagged as spam and removed.

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.
