Social media platforms fail to tackle hate speech

Web giants across the globe are failing to censor online hate speech. They might start focusing more on the matter soon, however, in light of recent legal proceedings

Facebook, along with rivals Twitter and YouTube, has been lax in dealing with hate speech

Censoring hate speech on platforms with millions of users is no straightforward job. With hashtags, videos and pictures going viral in a matter of seconds, social media giants have to stay on top of offensive content 24 hours a day. Ranging from antisemitic comments to Islamophobic slurs and racist abuse, hate speech covers a wide range of offensive language that is illegal in a number of countries, and it can carry huge financial consequences for even the wealthiest organisations.

Breaking the law
Three French organisations – the Union of Jewish French Students, SOS Racisme and SOS Homophobie – recently brought a lawsuit against web giants Facebook, Twitter and YouTube on the grounds of hate speech.

Together, the three organisations reviewed 586 offensive posts over one month, and found that Facebook deleted 34 percent of the flagged posts, while YouTube deleted seven percent and Twitter only four percent.

The Union of Jewish French Students sued Twitter for $50m regarding antisemitic tweets

The chairman of SOS Racisme, Dominique Sopo, said in a statement: “Given the profits made by YouTube, Twitter and Facebook in France, and the few taxes they pay, their refusal to invest in the fight against hatred is unacceptable. The mystery surrounding the functioning of the moderation teams of social networks prevents any serious progress in reducing racist and antisemitic messages. Since the major platforms respect neither French law nor even their own terms and conditions, they must face justice.”

As in France, hate speech is a criminal offence in countries such as the UK and South Africa, where courts can impose prison sentences for abusive behaviour online, and social media users and platforms are no exception.

Rep reducer
Although the exact figure sought in the latest lawsuit has not been revealed, a similar case was brought against Twitter by the Union of Jewish French Students in 2013, when the organisation sued the platform for $50m over antisemitic tweets. The most striking part of that lawsuit was that, even though Twitter deleted the tweets, it was still sued for the enormous sum because it protected the identity of its abusive users.

Notably, this means simply deleting hate speech is not enough to avoid financial repercussions. Web giants and social media platforms need to tackle the issue actively by establishing a working process through which abusive users can be identified quickly and reprimanded appropriately.

It is not only an organisation’s bottom line that will take a beating if it is slow to act on hate speech. The reputational consequences are grave: providing users with a platform that encourages serious offences such as racism, religious hatred and support for terrorism can damage a company’s standing in the eyes of users and shareholders alike.

Freedom or censorship?
The main defence offered for web giants that fail to censor content is the notion of free speech. Article 10 of the Human Rights Act 1998 states: “Everyone has the right to freedom of expression. This right includes the freedom to hold opinions, and to receive and impart information and ideas without interference by public authority, and regardless of frontiers.”

However, the act does not directly address the limits of freedom of expression with regard to online hate speech. Although there are clearly laws in favour of free speech and laws in favour of censoring hate speech, there is no clearly defined line that a social media platform cannot cross.

A spokesperson for Twitter said the company was not able to comment on the lawsuit, but its policy clearly states: “You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.”

YouTube’s community policy states that the platform does not support content that “promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics”.

Facebook’s guidelines are similar to those of YouTube, adding that it does allow “clear attempts at humour or satire that might otherwise be considered a possible threat or attack”.

The problem is that, although all three sets of guidelines are reasonably similar, there is no clear, unified policy setting out the procedures for tackling hate speech, including the analytics and algorithms used to detect offending content. A more universal approach, establishing guidelines for all online platforms to follow, would make the rules and regulations far more straightforward, and websites would not be forced to create their own independent policies that do not necessarily align with the law.

Regulated guidelines could help online companies avoid financial, legal and reputational consequences, as well as reducing the risk of offending users.