NYT Exposes Facebook’s ‘Biased,’ ‘Error-Prone’ Hate Speech Guidelines
Facebook’s sprawling hate speech policies were blasted by The New York Times on Thursday. The Times described Facebook’s current hate-speech controls as essentially “a network of workers using a maze of PowerPoint slides spelling out what’s forbidden.”
If that sounds like a shoddy and ineffective way of monitoring free speech, that was NYT’s point. After being given access to “1,400 pages from the [hate speech] rulebook by an employee who said he feared that the company was exercising too much power,” the Times found that the rules contained “numerous gaps, biases, and outright errors.”
As 2018 ends, it’s starting to sound like the tech giant can’t do anything right.
The “speech policies” give conservatives plenty to worry about. They are curated by “several dozen Facebook employees,” who meet to set the rules for what the site’s 2 billion users should be allowed to say.
The Times described the rules enforced by the moderators as “dozens of unorganized PowerPoint presentations and Excel spreadsheets with bureaucratic titles like ‘Western Balkans Hate Orgs and Figures’ and ‘Credible Violence: Implementation standards.’” It amounts to a heavy dose of rules that moderators, many of whom neither speak the relevant languages nor understand the geopolitics of the regions they police, have to consult while dealing with thousands of posts a day.
The Times wrote how Facebook is overwhelmed by the weight of it all. “The company’s goal is ambitious: to reduce context-heavy questions that even legal experts might struggle with — when is an idea hateful, when is a rumor dangerous — to one-size-fits-all rules. By telling moderators to follow the rules blindly, Facebook hopes to guard against bias and to enforce consistency.”
However, the paper wasn’t convinced that any of this was working as effectively as Facebook would like it to. Beyond the limitations of the 7,500-plus moderators, some of the “one-size-fits-all” rules simply don’t fit every situation. The Times wrote that they may even have exacerbated the killing of Muslims in Myanmar, a problem the U.N. had already identified.
“Guidelines for policing hate speech in Myanmar instruct moderators not to remove posts supporting Ma Ba Tha … an extremist group that has been using the platform to incite violence against Muslims since 2014.” Not only did the group slip through the cracks, but Facebook also banned several “senior military officials” without notifying the country’s government at all. The ensuing confusion “deepened distrust” between Myanmar’s civilian leader and the military.
All of that effort leads back to Facebook’s hate list. For any group named on it, moderators are instructed to remove posts “praising, supporting, or representing” that group.
Anton Shekhovtsov, an expert on the European far right, told the Times that he was “confused” about the list’s methodology, saying that “it bans an impressive array of American and British groups… but relatively few in countries where the far right can be more violent, particularly Russia or Ukraine.”
For Jonas Kaiser, a Harvard University expert on online extremism, however, the main issue was that the “hate list” exists in the first place. He told the Times, “It puts social networks in the position to make judgment calls that are traditionally the job of the courts.”
from NewsBusters