News Site Says It’s Using AI to Crank Out Articles Bylined by Fake Racially Diverse Writers in a Very Responsible Way

A news network is attributing AI-spun articles to fake authors with racially diverse names. Its publisher claims the demographic slant of those names was unintentional.

A national network of local news sites called Hoodline is using fake authors with pointedly racially diverse names to byline AI-generated articles.

The outlet's publisher claims it's doing so in an extremely normal, human-mediated way. But unsurprisingly, a Nieman Lab analysis of the content and its authors suggests otherwise.

Per Nieman, Hoodline websites were once a refuge for hyperlocal, boots-on-the-ground human reporting. These days, though, when you log onto a Hoodline site, you'll find articles written by a slew of entirely fabricated writers.

Hoodline is owned by a company called Impress3, which in turn is helmed by a CEO named Zack Chen. In April, Chen published an article on Hoodline's San Francisco site explaining that the news network was using "pen names" to publish AI-generated content — a euphemism that others have deployed when caught publishing fake writers in reputable outlets.

In that hard-to-parse post, Chen declared that these pen names "are not associated with any individual live journalist or editor." Instead, "the independent variants of the AI model that we're using are tied to specific pen names, but are still being edited by humans." (We must note: that's not the definition of a pen name, but whatever.)

Unlike the fake authors that Futurism investigations discovered at Sports Illustrated, The Miami Herald, The LA Times, and many other publications, Hoodline's made-up authors do have little "AI" badges next to their names. But in a way, as Nieman notes, that disclosure makes its writers even stranger — not to mention more ethically fraught. After all, if you're going to be up-front about AI use, why not just publish under a generalized byline, like "Hoodline Bot"?

The only reason to include a byline is to add some kind of identity, even if a fabricated one, to the content — and as Chen recently told Nieman, that's exactly the goal.

"These inherently lend themselves to having a persona," Chen told the Harvard journalism lab, so "it would not make sense for an AI news anchor to be named 'Hoodline San Francisco.'"

Which brings us to the details of the bylines themselves. Each city's Hoodline site has a bespoke lineup of fake writers, each with their own fake name. In May, Chen told Bloomberg that the writers' fake names were chosen at random. But as Nieman found, the fake author lineups at various Hoodline websites appear to reflect a given region's demographics, a reality that feels hardly coincidental. Hoodline's San Francisco-focused site, for example, published content under fake bylines like "Nina Singh-Hudson," "Leticia Ruiz," and "Eric Tanaka." But as Nieman's Neel Dhanesha writes, on the "Hoodline site for Boston, where 13.9 percent of residents reported being of Irish ancestry in the 2022 census, 'Leticia Ruiz' and 'Eric Tanaka' give way to 'Will O'Brien' and 'Sam Cavanaugh.'"

In other words, it strongly seems as though Hoodline's bylines were designed to appeal to the people who might live in certain cities — and in doing so, Hoodline's sites didn't just manufacture the appearance of a human writing staff, but a racially varied one to boot. (In reality, the journalism industry in the United States is starkly lacking in racial diversity.)

And as it turns out? Hoodline's authors weren't quite as randomized as Chen had previously suggested.

"We instructed [the tool generating names] to be randomized, though we did add details to the AI tool generating the personas of the purpose of the generation," Chen admitted to Nieman, adding that his AI was "prompted to randomly select a name and persona for an individual who would be reporting on — in this case — Boston."

"If there is a bias," he added, "it is the opposite of what we intended."

Chen further claimed that Hoodline has a "team of dozens of (human) journalist researchers who are involved with information gathering, fact checking, source identification, and background research, among other things," though Nieman's research unsurprisingly found a number of publishing oddities and errors suggesting there might be less human involvement than Chen was letting on. Hoodline also doesn't have any kind of masthead, so it's unclear whether its alleged team of "dozens" reflects the same kind of diversity it's awarded its fake authors.

It's worth noting that a similar problem existed in the content we discovered at Sports Illustrated and other publishers. Like at Hoodline, many of those fake writers were given racially diverse names; many of the made-up writer profiles even went a step further and were outfitted with AI-generated headshots depicting fake, diverse faces.

Attributing AI-generated articles to fake writers at all, regardless of whether they have an "AI" badge next to their names, raises red flags from the jump. But fabricating desperately needed diversity in journalism by whipping up a fake writing staff — as opposed to, you know, hiring real humans from different minority groups — is a racing-to-the-bottom kind of low.

It seems that Hoodline's definition of good journalism, however, would differ.

"Our articles are crafted with a blend of technology and editorial expertise," reads the publisher's AI policy, "that respects and upholds the values of journalism."

More on fake writers: Meet AdVon, the AI-Powered Content Monster Infecting the Media Industry
