‘Humour overrules hate speech’


On the ‘Take back Australia’ Facebook Page, there is an image that has been ripped from an album cover by metal band Cainomen. In it, a beast that bears a resemblance to the Xenomorph from the Alien franchise grasps a Caucasian sniper’s shoulder. Blood dribbles from the alien’s mouth, trickling onto the soldier’s helmet. ‘Those with the towels on their heads!’ the superimposed text reads. ‘I want them first! I’ve been waiting for them … ’

This image – which seems designed explicitly to incite violence against an identifiable religious group – has been reported many times, as has the ‘Take back Australia’ Page itself. The image has been flagged as ‘annoying and distasteful humor’ and as ‘harassment’, but more than a week later, the picture remains on the social network. Every user who has filed a report now has a message in their ‘Support Dashboard’ informing them that their complaint has been rejected because the image ‘doesn’t violate [Facebook’s] Community Standards’.

‘Keep reporting. They remove it eventually,’ one user suggested, in response to another who had expressed frustration at their failed removal requests. ‘Basically the first few times it reports it goes to bots. Then if you bitch enough it gets taken down.’

The idea that repeatedly reporting offensive content to Facebook constitutes an effective form of political action, however, may be a flawed one. After all, if a piece of content is reported once and that report is rejected, why would repeatedly flagging the same content lead Facebook to take action?

Facebook’s moderation policy is difficult to unpack, in large part because Facebook likes to pretend the social network is not a moderated space at all. But it seems as though the number of removal requests a piece of content receives does not significantly influence the likelihood it will be scrubbed from the site. Rather, with the exception of material that very clearly breaches Facebook’s guidelines (nudity, spam, graphic violence, threats directed at individuals), content tends to be removed only as a result of advertiser discomfort, which is also the main factor behind shifts in Facebook’s guidelines themselves.

For example, last year, Facebook took a stand against gender-based hate speech, but only after carmaker Nissan announced they would withdraw advertising from the social network.

In the wake of the Nissan boycott and Facebook’s subsequent commitment to improving its moderation criteria, it is certainly true that Pages with titles like ‘Violently Raping Your Friend Just for Laughs’ no longer have much chance of long-term survival on Facebook. Nonetheless, hate speech on the social network is still rife. Facebook’s shifting guidelines simply change the rules of the game, with players adopting increasingly nuanced strategies to ensure their content resists removal in the face of repeated complaints. The resilience of the ‘Take back Australia’ Page, for example, which was created in 2012 (but, apparently, ‘Founded on January 26, 1788’), suggests that its anonymous administrators recognise how to use Facebook to promulgate content that, while often offensive and deliberately provocative, cannot justifiably be removed by the social network’s moderation teams.

For several years, for example, shrewd hate group Page administrators have had access to a ‘cheat sheet’ in the form of a leaked internal document intended as an operational manual for Facebook’s outsourced content moderators. Made public by moderators frustrated at being underpaid for their work, this document details how Facebook Page administrators can craft content that moderators cannot remove. One particular page lists which forms of ‘hate content’ should be ‘confirmed’ (removed from the site in response to a complaint), with the important addendum that ‘humor overrules hate speech’. Another page lists all forms of offensive content that moderators must leave untouched on the site (including certain kinds of ‘attacks against protected categories’ and uncaptioned images of animal and human abuse).

The ‘humour provision’ appears to be the one most keenly exploited by hate group administrators. Because Facebook’s moderators (at least at the lowest level) can do nothing more than follow the instructions outlined in their operational manual, if a piece of offensive content can be seen to have any humour value whatsoever, a complaint related to it will almost always be rejected. An image explicitly inciting Facebook users to murder Muslims will, for instance, be rapidly removed, but if the same sentiment is rephrased using a simple, easily identifiable joke structure, Facebook’s moderators – most of whom must dispatch reports as quickly as possible to meet their quotas – will have no choice but to reject any user complaints relating to it.

Above all else, Facebook moderators are trained to swiftly identify genitalia and jokes: the former being cause to remove content, the latter being cause to retain it. This neatly explains why an image with the text ‘“Why did the Muslim cross the road?” I thought to myself as my foot hit the accelerator’ has been hosted on Facebook for almost two years. It also explains, to some degree, why the image of the alien and the soldier apparently does not breach Facebook’s Community Standards – it looks like something somebody, somewhere just might find hilarious.

‘A lot of the trauma in bullying victims is the mocking they endure. It’s like [‘Take back Australia’ are] really going out of their way to provoke some kind of reaction, you know?’ one Facebook user tells me. While this is true, what seems to differentiate ‘Take back Australia’ from less durable hate group Pages is the precise balance struck between poor-taste ‘humour’ and apparent deep sincerity. When the ‘Aboriginal memes’ Page was removed in January, it was difficult for anybody to muster the enthusiasm to level serious charges of bias or censorship at Facebook, because the content being scrubbed was simply vulgar.

In the case of a hate group Page like ‘Take back Australia’, however, the content oscillates between tasteless anti-Islamic jokes, heartfelt paeans to deceased diggers, and indignant reports of Facebook censoring ‘patriotic’ content elsewhere on the social network. The combination appears to make the Page resilient, with individual pieces of offensive content protected by the ‘humour provision’, and the Page itself protected by the presence of images of fallen soldiers, which no Facebook moderator would dare remove.

Does Facebook care about regulating hate speech? It’s hard to say for sure, but at Facebook’s end it must seem less a free speech issue than a technical one. A rogue nipple or some pubic hair is easy to spot (even for an algorithm), but what exactly can be said to constitute ‘fighting words’ or ‘race baiting’ is less immediately obvious, especially when you may have less than ten seconds to make the call. Moderating complex content requires real workers, and anything that requires real workers simply doesn’t scale. Perhaps hate speech isn’t a joke at Facebook – it’s just logistically easier to treat it that way.

Connor Tomas O'Brien

Connor Tomas O'Brien runs Studio Sometimes, a design studio for arts organisations and non-profits, and writes Change is Hard, a newsletter for lapsed and lazy environmentalists: connortomas.com
