Turning the page on hate in the digital sphere: why and how?

“London mayor Sadiq Khan warns big tech on hate speech”. “UN: Facebook has turned into a beast in Myanmar”. These were two headlines on the BBC’s website within less than 24 hours: the first was posted yesterday, on 12 March 2018, and the second only a few hours ago. Reading both articles made me realize that online hate speech is in the news on an almost daily basis. In the field of international development, these recent developments highlight the necessity to ensure that ICTs are platforms for positive change, dialogue and development, and not tools that further divide communities or create tensions among them.


While social media have undoubtedly offered new opportunities for communication, there have also been growing concerns that online platforms are helping to promote hate and incitement to violence. Big tech companies, including Twitter and Facebook, have been called out publicly for directly contributing to these trends, given their expanding responsibilities as publishers of information.

The debate has mainly focused on two arguments: on the one hand, critics contended that since tech companies had created these platforms, they should also bear responsibility for the content disseminated through them. On the other hand, social media companies argued that they were merely platforms for people to use and could not control or start regulating content.

However, things are slowly progressing, and tech companies are showing more and more commitment to tackling the issue at its roots, starting with themselves. In May 2016, Facebook, YouTube, Microsoft and Twitter signed a “Code of Conduct on countering illegal online hate speech” with the European Union. The agreement commits these four of the world’s biggest internet companies to reviewing most complaints from users concerning hate speech or incitement to violence in less than a day. So far, the companies have managed to do so in 81% of cases, compared to 51% in May 2017.
© xkcd


First, we need to learn to identify hate speech. According to the Ethical Journalism Network’s “5-point test for hate speech”, one needs to look at the content of the speech, its tone and its targets, as well as its impact:

  1. What’s the status of the speaker?  (should they even be listened to or just ignored?)
  2. What’s the reach of the speech? (is there a pattern of behaviour?)
  3. What’s the goal of the speech? (is there a deliberate attempt to harm others?)
  4. Is the speech dangerous, inciting violence against others?
  5. What’s the surrounding climate of that speech?
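As a rough illustration only (this structure is my own, not part of the EJN’s methodology), the five questions can be thought of as a checklist, where each “yes” answer is a warning sign:

```python
from dataclasses import dataclass

@dataclass
class SpeechContext:
    """Answers to the five EJN-style questions (hypothetical structure)."""
    influential_speaker: bool  # 1. does the speaker's status give them real influence?
    wide_reach: bool           # 2. is the speech widely disseminated / part of a pattern?
    intent_to_harm: bool       # 3. is there a deliberate attempt to harm others?
    incites_violence: bool     # 4. does it incite violence against others?
    tense_climate: bool        # 5. is the surrounding climate already volatile?

def warning_signs(ctx: SpeechContext) -> int:
    """Count how many of the five warning signs are present."""
    return sum([ctx.influential_speaker, ctx.wide_reach,
                ctx.intent_to_harm, ctx.incites_violence, ctx.tense_climate])

# A prominent speaker, widely shared, with intent to harm, in a tense climate:
example = SpeechContext(True, True, True, False, True)
print(warning_signs(example))  # prints 4
```

The point of the test, of course, is that the answers require human judgement about context – no simple tally can replace that.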

To understand the scale of the problem within its own borders, the European Union conducted a study on the issue. The report shows that, of the hate speech flagged to the tech companies, almost half was found on Facebook, 24% on YouTube and 26% on Twitter. The grounds targeted most often were ethnic origin, followed by anti-Muslim hatred and xenophobia, including against migrants and refugees. According to HateBase, a web-based application collecting messages of hate speech worldwide, the majority of cases target individuals based on ethnicity and nationality, but incitements focusing on religion and class have also been on the rise.

Most common hate speech worldwide © HateBase

Today, a UN fact-finding mission investigating the situation in Myanmar accused Facebook of “playing a determining role in stirring up hatred against Rohingya Muslims in Myanmar”. According to the interim findings, social media, and Facebook in particular, contributed considerably to the level of violence and discrimination now directed by the public in Myanmar against Rohingya Muslims. “I’m afraid that Facebook has now turned into a beast, and not what it originally intended,” said Yanghee Lee, Special Rapporteur on the situation of human rights in Myanmar.

Responding to these challenges, several governments have turned to legislation to enforce regulation. In Germany, the government passed a controversial law – the Netzwerkdurchsetzungsgesetz – in June 2017, requiring social media platforms to remove hate speech, fake news and illegal material as fast as possible. In the UK, while the Mayor of London wants the city to remain a center of disruptive technology and startups and a hospitable host for key tech companies, he is increasingly vocal about the need for regulation. As he explained:

“We have evolving economies, which means we should have evolving regulations. For too long politicians and policy makers have allowed this revolution to take place around us and we’ve had our heads in the sand.”

Monitoring and mapping hate speech

Academics and civil society, however, are also loudly sounding the alarm: when it comes to regulating speech online, things can get tricky. Where is the line between ensuring freedom of expression and countering online hate speech? As a UNESCO report puts it, the answers each society has developed to balance freedom of expression against respect for equality and dignity have created unique rifts and alliances at the international level. It is even more interesting to analyse how non-governmental bodies have attempted to tackle the issue. Two examples of initiatives against incitement to violence online are presented below:

  • PeaceTechLab: South Sudan

In South Sudan, hate speech online is said to be fueling “tribal genocide”. To understand the link between narratives of hate online and violence on the ground, PeaceTechLab developed a lexicon of hate speech terms and used artificial intelligence to generate data from social media on where these messages were mainly coming from, and from whom.

After identifying the terms most likely to incite violence, the lexicon also suggests alternative language that social media users, journalists and other members of the community can use instead.
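PeaceTechLab has not published its pipeline, but the core idea – match posts against a curated lexicon and surface the suggested alternatives – can be sketched minimally like this. The terms and structure here are placeholders of my own, not actual lexicon entries:

```python
import re

# Hypothetical lexicon: flagged term -> suggested neutral alternative
LEXICON = {
    "termA": "neutral phrase A",
    "termB": "neutral phrase B",
}

def flag_terms(post: str) -> list[str]:
    """Return the lexicon terms found in a post (whole-word, case-insensitive)."""
    found = []
    for term in LEXICON:
        if re.search(rf"\b{re.escape(term)}\b", post, re.IGNORECASE):
            found.append(term)
    return found

def suggest_alternatives(post: str) -> dict[str, str]:
    """Map each flagged term in the post to the lexicon's suggested alternative."""
    return {term: LEXICON[term] for term in flag_terms(post)}

print(suggest_alternatives("An angry post using termA and termB."))
```

In practice a project like this would pair such matching with machine-learning classifiers and human review, since context determines whether a term is actually being used to incite violence.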

The initiative is complemented by a series of social media campaigns, training workshops for youth and peacebuilding activities led by the #defyhatenow project.


  • Facebook: Myanmar

In Myanmar, Facebook is so popular that, for many people, it is the internet itself, the New York Times explains. Before the UN pointed to Facebook’s role in the country’s conflict, the social media giant had already started taking measures against hate speech. This happened after Ashin Wirathu, an ultranationalist Buddhist monk, was banned from preaching by the government and turned to Facebook instead to continue spreading his messages.


To demonstrate its willingness to tackle the problem, Facebook worked with local partners to translate its universal community standards into Burmese, with locally adapted illustrations. Commenting on these efforts, a Facebook spokesperson said:

“We take this incredibly seriously and have worked with experts in Myanmar for several years to develop safety resources and counter-speech campaigns”.

In 2015, Facebook introduced stickers designed by the Panzagar “flower speech” campaign against online bullying, initially launched by a group of Myanmar activists, including former political prisoners, who encouraged others to “watch what we say so that hate between mankind does not proliferate.” Last year it restricted the use of the word “kalar”, which is considered inflammatory against Muslims. And in January this year, Facebook removed Wirathu’s page.


The issue of online hate speech isn’t likely to fade away anytime soon. As access to information increases and digital inequalities shrink, more and more people will turn to social media to communicate, read and share information. This is particularly relevant for developing countries, where further investment in ICT infrastructure and increased access to ICTs will eventually make social media platforms a primary means of communication – especially in places where the technology sector is still in its infancy.

This was the case in Myanmar, where mobile penetration exploded within three years: Facebook users in the country went from 2 million in 2014 to 30 million today. As international organizations and governments strive to make ICTs more accessible, however, it will be important to also ensure the public is trained and sensitized on how to use these technologies and platforms safely, critically and constructively.

This is why media and information literacy is key in development and ICT4D interventions. It’s not only about providing the technology: if we want to turn the page on online hate speech, equal effort will need to be invested in educating the new digital natives, in both urban and rural areas.




  1. Natacha

    Interesting points Marion! I’m often intrigued to read the comments on news articles shared on Facebook, but I usually quickly regret it and end up losing faith in humanity. Migration, gun violence, gender issues, or even small local debates… It usually starts with contrasting (and expected) positions, mostly without any dialogue and largely non-constructive. Then it often drifts into insults, losing any kind of argument along the way. Hiding behind a screen makes it easier to post inappropriate comments. Ironically, those comments, once online – and even when or if they are later deleted – will leave a trace. We increasingly see cases of public figures who have to explain past comments they made on social media. I believe this is an important point for holding people accountable. And it seems that today we are starting to understand that online privacy doesn’t really exist. Anyway, I wanted to thank you for the interesting read. I particularly enjoyed both examples you offered from the projects in South Sudan and Myanmar!

    1. Marion

      Thanks a lot Natacha for your very insightful comment. You’re right… the digital sphere can be quite a confusing and sometimes disappointing space when we see how people use it. I totally agree that accountability is important. I’ve come across a couple of online campaigns on Twitter encouraging people to use their real names instead of hiding behind fake profiles. This is very important but difficult to enforce. As for comments on online news, it’s a very interesting topic. Recently, for example, Al Jazeera decided to deactivate the comments section on its English website on the grounds that it had become a space for hate speech rather than constructive dialogue. I’ve also heard of a Scandinavian news website that forces readers to answer a couple of questions about the article they’re commenting on, to make sure that whoever writes a comment has actually read the article! It’s great to see media outlets and other organisations taking this problem more and more seriously.
