Facebook users ‘can praise mass murderers’, leaked moderation guidelines show

A trove of leaked Facebook moderator memos has revealed how users are allowed to call for the death of public figures and can praise mass murderers, based on the company’s own vague definitions of what constitutes a ‘crime’.

More than 300 pages of internal Facebook moderation guidelines in effect as of last December were obtained by the Guardian and outlined in a pair of articles published Tuesday. 

The documents exposed Facebook’s high tolerance for abuse targeting public figures – a loosely defined category that includes people who have received scant coverage in local news outlets or who have a large following on social media.

The company’s bullying and harassment policy states that public figures are permissible targets for types of abuse that are not tolerated against members of the general public ‘because we want to allow discussion, which often includes critical commentary of people who are featured in the news’.   

The leaked documents also revealed that Facebook maintains its own list of ‘recognized crimes’ that moderators are instructed to use when evaluating content around the world instead of referring to national laws. 

They showed that users are allowed to praise mass murderers and ‘violent non-state actors’ in certain situations, particularly in parts of the world where national laws are set by repressive regimes and are deemed by Facebook to be in violation of human rights.

The revelations in the documents drew outrage from experts who said they underscored how Facebook has been able to make its own rules for regulating content with little to no oversight from the people its policies affect.  

Facebook’s bullying and harassment policy offers specific instructions for moderators to differentiate attacks on public figures versus private citizens, explaining that the latter group is afforded a higher degree of protection.  

‘For public figures, we remove attacks that are severe as well as certain attacks where the public figure is directly tagged in the post or comment,’ the policy states. 

‘For private individuals, our protection goes further: we remove content that’s meant to degrade or shame, including, for example, claims about someone’s sexual activity.’ 

The policy casts a wide net for who is considered a public figure – including politicians at all levels of government, journalists and users with more than 100,000 followers on any of their social media accounts. 

A person is also considered a public figure if they’ve been mentioned in the title, subtitle or preview of five or more news articles within the last two years – unless they are under the age of 13.   

While ‘calls for death’ targeting private individuals are not permitted under any circumstance, they are acceptable for public figures so long as that person is not ‘purposefully exposed’ to the call by being tagged in a post that includes it.   

Both private individuals and public figures are protected under the policy from direct threats of severe physical harm and threats to release personal information.

In an interview with the Guardian, Imran Ahmed, founder of the Center for Countering Digital Hate, called the policy ‘flabbergasting’ and said it could put public figures’ lives at risk.

‘Highly visible abuse of public figures and celebrities acts as a warning – a proverbial head on a pike – to others,’ Ahmed said. 

‘It is used by identity-based hate actors who target women and minorities to dissuade participation by the very groups that campaigners for tolerance and inclusion have worked so hard to bring into public life. 

‘Just because someone isn’t tagged doesn’t mean that the message isn’t heard loud and clear.’ 

Facebook defended its policy when approached by the Guardian, saying that it was aimed at promoting freedom of discussion and still offered adequate protections for public figures.  

‘We think it’s important to allow critical discussion of politicians and other people in the public eye. But that doesn’t mean we allow people to abuse or harass them on our apps,’ a Facebook spokesperson told the newspaper. 

‘We remove hate speech and threats of serious harm no matter who the target is, and we’re exploring more ways to protect public figures from harassment. 

‘We regularly consult with safety experts, human rights defenders, journalists and activists to get feedback on our policies and make sure they’re in the right place.’ 

A second Guardian article based on the same documents examined how Facebook has curated its own internal rule of law for moderators to follow worldwide.

The newspaper said Facebook’s list of ‘recognized crimes’ underscored how the company handles operations in countries with repressive regimes whose laws it deems to be incompatible with human rights. 

‘We only recognize crimes that cause physical, financial or mental injury to individual(s),’ the guidelines state, listing theft, robbery and fraud, murder, vandalism and non-consensual sexual touching among the crimes it does consider legitimate. 

Among those not recognized – even though they might be in certain countries – are ‘peaceful protests against governments’, ‘claims about sexuality’ and ‘discussing historical events/controversial subjects such as religion’.  

The guidelines reference specific regions of the world where exceptions can be made for content due to the political and cultural climate. 

For example, in Myanmar, the Middle East and North Africa, moderators are told: ‘Allow content that praises violent non-state actors, unless it contains an explicit reference to violence.’

Facebook defines ‘violent non-state actors’ as designated militant groups engaged in civil wars that do not target civilians, according to the Guardian.

In other cases, Facebook’s definition of what constitutes a crime is more stringent than local law. For example, the company does not allow sales of marijuana through its Marketplace feature, even in places where such sales are legal in the eyes of the government.

Critics say that by adopting its own one-size-fits-all policy for what constitutes a crime, Facebook has shown a lack of understanding of the complexity of the places it reaches.  

‘One of the biggest problems is that Facebook has gone into every single country on the planet with no idea of the impact,’ Wendy Via, the co-founder and president of the US-based Global Project Against Hate and Extremism, told the Guardian.

Via described Facebook as having ‘zero cultural competency’ and said: ‘You can’t build secret rules if you can’t understand the situation.’

The leaked documents also revealed details about Facebook’s rules for posts affiliated with the QAnon conspiracy movement.  

Facebook, which banned accounts associated with QAnon last fall, describes the group as a ‘violence-inducing conspiracy network’. 

The group is listed among those that are not permitted because they meet the following criteria: ‘Organized under a name, sign, mission statement or symbol; AND promote theories that attribute violent or dehumanizing behavior to people or organizations that have been debunked by credible sources; AND have advocated for incidents of real-world violence to draw attention to or redress the supposed harms promoted in those debunked theories’.

As it did with the harassment and bullying policy, Facebook defended its ‘recognized crimes’ list when approached by the Guardian.   

‘We maintain a list of crimes that we apply under these policies, but rather than breaking them down by country or region they are crimes that are recognized globally,’ a spokesperson said. 

‘Since we’re a global platform, we have a single set of policies about what we allow and apply them to every country and region. While we’ve made progress in our enforcement, we know there is always more to do.’

The spokesperson added: ‘We don’t allow anyone to praise violent actions and we remove content that represents or supports the organisations we ban under our policies. 

‘We recognize that in conflict zones some violent non-state actors provide key services and negotiate with governments – so we enable praise around those non-violent activities but do not allow praise for violence by these groups.’