Twitter Inc. is turning to greater automation in its battle against abuse on its platform, saying its software will start automatically demoting response posts that it determines are likely to disrupt users’ conversations.
The change, which will roll out over the coming week, isn’t designed to deal with accounts or messages that violate Twitter’s content policies, which the company says it already acts against. Rather, the new approach targets accounts that Twitter says exhibit signs of “troll-like behavior” and that “distort and detract from the public conversation on Twitter.”
Instead of deleting those accounts’ messages, Twitter will push them down in the list of replies people see to their tweets and in search results, particularly for popular hashtags.
Executives at Twitter said the move is among the more important it has made to address longstanding criticism about bad behavior on its service. “It is shaping up to be one of the biggest impact changes we have made,” said Twitter Chief Executive Jack Dorsey.
Twitter and other social networks have sometimes struggled to balance demands to filter out abusive content with criticism that doing so risks imposing the values of their employees on their users. Some right-wing activists, for example, have complained that past efforts by Twitter to curb harassment disadvantaged conservative commentators.
Del Harvey, vice president of trust and safety at Twitter, said its latest change focuses more on the conduct of users than on the content of what they are posting. Such behavior might include accounts using lots of unrelated hashtags or repeatedly mentioning accounts that don’t follow them back.
The new measure further advances the algorithmic reordering of content that Twitter rolled out in 2016 after years of showing content only in the order it was posted. Twitter said it aims to improve users’ experience by reducing the burden of reporting content.
David Gasca, a Twitter director of product management, said algorithmic changes are a major tool for tackling abuse because they don’t require adding hundreds of people to moderate conversations and can be rolled out globally, regardless of language.
The new change targets accounts that show numerous signs of what Twitter views as bad conduct, including failure to confirm the account-holder’s email address, repeatedly having tweets blocked or muted by others, and being one of many accounts that one person signed up for simultaneously.
Ms. Harvey said Twitter is aware that networks of people sometimes coordinate to block or report certain users, so it will consider those signals only in combination with other factors.
The move will affect mainly replies to posts, which is the primary way most Twitter users see messages from people they don’t follow. It will also push posts from targeted accounts lower in Twitter’s search results. But the change won’t affect how posts from those accounts show up in the feeds of people who do follow them, Twitter said.
Ms. Harvey said the number of accounts affected by the change is likely to be less than 1% of all accounts, but they are important because they trigger an outsize share of the complaints that Twitter receives. The company reported a total of 336 million active users as of March.
Twitter has rolled out other steps to fight abuse over the past two years. But Mr. Dorsey in March said he wasn’t proud of how Twitter had handled malicious activity on its platform, and that the company was looking to hire outside experts to help it measure the quality of conversations on the social network. Twitter received 230 proposals in response to this request, Mr. Dorsey said.
In tests of the new tool, Twitter found that reordering replies resulted in an 8% drop in abuse reports, Ms. Harvey said. Tests on tweets in the app’s search section resulted in a 4% drop in abuse reports, she said.