Facebook begins rating users on how trustworthy they are at flagging fake news
By Jacob Kastrenakes (@jake_k) | Illustration by Alex Castro / The Verge
Facebook has started rating its users’ trustworthiness in order to help the social network know how much to value user reports that a certain news story might be fake. The Washington Post has details on the system and confirmation from Facebook that it’s been put in place. The system certainly sounds a touch dystopian, but Facebook sees it as a valuable tool for weeding out disinformation.
The trust ratings were rolled out over the past year, according to the Post, and were developed as part of Facebook’s fight against fake and malicious stories. Facebook relies, in part, on reports from users to help catch these stories. If enough people report a story as false, someone on a fact-checking team will look into it. But checking every story that racks up “fake news” reports would be overwhelming, so Facebook uses other signals to decide which reported stories are actually worth a fact-checker’s time.
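To make that triage concrete, here is a minimal, hypothetical sketch of how reported stories might be weighted and queued for human review. Facebook has not published its method; the names, threshold, and weights below are assumptions chosen purely to illustrate the general idea the Post describes.

```python
from collections import defaultdict

# Hypothetical illustration only: weight each "false news" report by the
# reporter's trust score, and only queue a story for human fact-checking
# once the weighted total clears a threshold.
REVIEW_THRESHOLD = 5.0  # made-up value

def stories_to_review(reports, trust_scores):
    """reports: list of (story_id, reporter_id) pairs.
    trust_scores: dict mapping reporter_id -> score in [0, 1]."""
    weighted = defaultdict(float)
    for story_id, reporter_id in reports:
        # A report from a historically accurate reporter counts for more
        # than one from someone who flags everything they dislike.
        weighted[story_id] += trust_scores.get(reporter_id, 0.5)
    # Hand fact-checkers the highest-weighted stories first.
    queue = [s for s, w in weighted.items() if w >= REVIEW_THRESHOLD]
    return sorted(queue, key=lambda s: weighted[s], reverse=True)

if __name__ == "__main__":
    reports = [("story-a", "u1"), ("story-a", "u2"), ("story-a", "u3"),
               ("story-a", "u4"), ("story-a", "u5"), ("story-a", "u6"),
               ("story-b", "u7")]
    trust = {"u1": 0.9, "u2": 0.9, "u3": 0.8, "u4": 0.9, "u5": 0.8,
             "u6": 0.9, "u7": 0.2}
    print(stories_to_review(reports, trust))  # story-a clears the bar; story-b does not
```

The point of a scheme like this is that a flood of low-credibility reports never reaches a human, while a handful of reports from reliable flaggers can.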
One of those signals is the trust rating. Facebook didn’t tell the Post everything that goes into the score, but it’s partly based on a user’s track record of reporting stories as false. If someone regularly reports stories that a fact-checking team later confirms are false, their trust score goes up; if they regularly report stories that turn out to be true, it goes down.
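As a rough illustration of that feedback loop (not Facebook’s actual formula, which hasn’t been disclosed), a reporter’s score could simply move up when a fact-checker confirms their report and down when the story checks out. The 0 to 1 range, starting value, and step size here are all assumptions.

```python
# Hypothetical sketch of the feedback loop the article describes: nudge a
# reporter's score up when a fact-checker agrees a flagged story was false,
# and down when the story turns out to be true.
def update_trust(score, report_was_correct, step=0.1):
    if report_was_correct:
        return min(1.0, score + step)
    return max(0.0, score - step)

score = 0.5  # neutral starting point for a new reporter (assumption)
for verdict in [True, True, False, True]:  # fact-checker outcomes for their reports
    score = update_trust(score, verdict)
print(round(score, 2))  # 0.7
```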
“People often report things that they just disagree with,” Tessa Lyons, Facebook’s product manager for fighting misinformation, told the Post.
In that sense, this may be less of a “trust” score and more of a “fact-check” score, and the name isn’t likely to do it any favors. Algorithms are often flawed and can have larger, deleterious effects that aren’t immediately visible, so Facebook will have to be careful about what other information it factors in and how else this score is used, lest it accidentally discount reports from a specific community of people.
Facebook pushed back on the score’s eeriness factor in a statement to Gizmodo, saying that the company doesn’t maintain a “centralized ‘reputation’ score.” Instead, the system is just part of “a process to protect against people indiscriminately flagging news as fake and attempting to game the system ... to make sure that our fight against misinformation is as effective as possible.”
Right now, it isn’t clear whether the trust score is used for anything beyond reports on news stories and reports that another Facebook user has posted something inappropriate or otherwise in need of the company’s attention.
FACT CHECKERS WANT MORE TRANSPARENCY FROM FACEBOOK
If it’s used as advertised, the scores could help Facebook home in more quickly on disinformation that’s spreading around the network. While bad reports can come from all over, President Donald Trump and other Republican leaders have made a habit out of calling any story they dislike “fake news,” which could influence others to abuse the term. That could lead to fact-checkers wasting time on stories that are obviously correct.
The real backstop here is the fact-checkers. Facebook largely seems to rely on third-party fact-checking services like Snopes and PolitiFact to determine what is and isn’t real. That means the final determinations ought to be trustworthy, but there’s still a layer of Facebook’s algorithm in the way.
The Columbia Journalism Review published a report back in April that looked at Facebook’s fact-checking efforts. It found that many fact-checkers were frustrated with Facebook’s lack of transparency. Fact-checkers weren’t clear on how Facebook was determining which stories to show or hide from them and in which order. That means that even though widely accepted fact-checkers have a shot at monitoring these stories — and therefore a direct impact on users’ trust scores — it still comes down to Facebook to pick out the right stories to show them in the first place.