The New York Times, with a little help from automation, is aiming to … – Nieman Journalism Lab at Harvard

Posted: June 14, 2017 at 4:08 am

The New York Times' strategy for taming reader comments has for many years been laborious hand curation. Its community desk of moderators examines around 11,000 individual comments each day, across the 10 percent of published articles that are open to commenting.

"The bottom line on this is that the strategy on our end of moderating just about every comment by hand, and then using that process to show readers what kinds of content we're looking for, has run its course," Bassey Etim, the Times' community editor, told me. "From our end, we've seen that it's working to scale comments to the point where you can have a good, large comments section that you're also moderating very quickly, things that are widely regarded as impossible. But we've got a lot left to go."

Nudging readers toward the comments the Times is looking for is no easy task. Its own guidelines, laid out in an internal document outlining various rules around comments and how to take action on them, have evolved over time. (I took the Times' moderation quiz, getting only one correct, and at my pace it would've taken more than 24 hours to finish tagging 11,000 comments.)

Jigsaw's tool, called Perspective, has been fed a corpus of Times comments already tagged by human editors. Human editors then trained the algorithm over the testing phase, flagging the moderation mistakes it made. In the new system, a moderator can evaluate comments based on their likelihood of rejection and check that the algorithm has properly labeled comments that fall into a grayer zone (comments with a 17 to 20 percent likelihood of rejection, for instance). The community desk team can then set a rule to let all comments that fall between 0 and 20 percent, for instance, go through.
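The routing logic described above could be sketched roughly as follows. This is an illustrative sketch, not the Times' actual system: the thresholds, the `route_comment` function, and the decision labels are all hypothetical, standing in for a rule set keyed to a model's estimated likelihood of rejection (such as a score from a tool like Perspective).

```python
def route_comment(rejection_likelihood, auto_approve_below=0.20,
                  review_band=(0.17, 0.20)):
    """Return a moderation decision for one comment.

    rejection_likelihood: model score in [0, 1], the estimated chance
    a human moderator would reject the comment. Comments in the gray
    review_band are spot-checked by a human even though they sit under
    the auto-approve cutoff, mirroring the grayer-zone checks described
    in the article. All values here are hypothetical.
    """
    lo, hi = review_band
    if lo <= rejection_likelihood <= hi:
        return "human-review"      # gray zone: verify the model's label
    if rejection_likelihood < auto_approve_below:
        return "auto-approve"      # rule: allow low-risk comments through
    return "moderator-queue"       # higher risk: full hand moderation

print(route_comment(0.05))  # auto-approve
print(route_comment(0.18))  # human-review
print(route_comment(0.65))  # moderator-queue
```

Raising or lowering `auto_approve_below` per section would correspond to the per-section moderation choices Etim describes below.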

"We're looking at an extract of all the mistakes it's made, evaluating what the impact of each of those moderating mistakes might be on the community and on the perceptions of our product. Then, based on that, we can choose different forms of moderation for each individual section at the Times," Etim said. Some sections could remain entirely human-moderated; sections that tend to have a low rate of rejection for comments could be automated.

Etim's team will be working closely with Ingber's Reader Center, helping out in terms of staffing projects, with advice, and all kinds of things, though the relationship and roles are not currently codified.

"It used to be when something bubbled up in the comments, maybe we'd hear repeated comments or concerns about coverage. You'd send that off to a desk editor, and they would say, 'That's a good point; let's deal with this.' But the reporter is out reporting something else, then time expires, and it passes," Etim said. "Now it's at the point where, when things bubble up, [Ingber] can help us take care of it at the highest levels in the newsroom."

"The Coral Project is just working on a different problem set at the moment, and the Coral Project was never meant to be creating the New York Times commenting system," he said. "They are focusing on helping most publishers on the web. Our business priority was: how do we do moderation at scale? And for moderation at our kind of scale, we needed the automation."

"The Coral stuff became a bit secondary, but we're going to circle back and look at what it has in the open source world, looking to them as a model for how to deal with things like user reputation," he added.

