“Just as dangerous as the virus” – how the infodemic threatens India’s Muslim minority

For years, Hindu nationalists have used social networks to stoke sectarian tensions against India’s Muslims. The Covid-19 infodemic is making this worse. As long as platform algorithms incentivize disinformation, no amount of fact-checking will solve the problem.

False claims about the new coronavirus spread around the world via social networks as virally as the disease itself. As early as mid-February, the World Health Organization (WHO) warned of an “infodemic”: “an over-abundance of information – some accurate and some not – that makes it hard for people to find trustworthy sources and reliable guidance when they need it” (WHO 2020). This is especially true for India, which has more social media users than any other country in the world and where information disorder had already caused mob violence and displacement before the pandemic (Patil 2019).

With this brief essay, I conclude my work on our blog “Nudged – Big Data, small people”, in which we explored harmful consequences of Information and Communication Technologies, or “ICT4Bad” (as opposed to ICT4D, ICT for Development). We differentiate between three dimensions of ICT4Bad: (I) algorithmic bias and discrimination, (II) surveillance, and (III) information disorder (dis- and malinformation). In my four journalistic blog posts, I have written about (I) why algorithms are biased (Felschen 2020a) and what critical techies and regulators are doing about it (Felschen 2020b), (II) how Clearview’s facial recognition app helps the Trump administration target undocumented immigrants (Felschen 2020c), and (III) why Facebook ignored a hate speech campaign that ultimately led to the genocide against the Rohingya (Felschen 2020d).

With the start of the pandemic at the end of the seminar, all three dimensions of ICT4Bad have intensified, as the need for quick fixes and remote work led to a “turbo-digitalization” (Mühlhoff 2020) without proper oversight: (I) When algorithms are used for medical ends, e.g. triage decisions in underequipped hospitals, their well-documented biases can take on life-and-death consequences (Loi 2020, Obermeyer et al. 2019, O’Neil 2016). Furthermore, (II) hastily implemented contact tracing and quarantine surveillance by governments worldwide raise concerns about potential data misuse, especially since emergency measures often outlast crises once citizens have become habituated to them and no longer remember their original rights (Zuboff 2019, Johnson 2020). Finally, (III) ICT is being exploited for the spread of dis- and malinformation to an unprecedented extent.

Hence I selected a case study that explores this changing reality, even though events are evolving fast and there is little relevant academic literature yet: In this essay, I focus on the infodemic in India as a form of (III) information disorder. I discuss what causes the infodemic, how social media algorithms amplify it, how hate speech against Muslims in India manifests itself, what real-life consequences it has, what Facebook does (or neglects to do) to verify and correct false claims on its platform, and why the South is hit harder. I conclude with reflections on how the seminar contributed to my personal and professional learning.


“Economics of emotion” – how social media platforms incentivize false content

Information disorder is a centuries-old phenomenon, from canards to propaganda, but it has intensified in the internet age, which allows everybody to disseminate information for free and on a massive scale, without journalistic guidelines or identity checks.

By following an “economics of emotion”, social media platforms incentivize information disorder: Their algorithms prioritize posts that trigger a high number of emotional interactions over posts by trusted news organizations in order to increase the time users spend on the platform, which converts into advertising revenue. Furthermore, they target users with personalized content that confirms their biases and encloses them in filter bubbles with like-minded users – this makes it harder to identify information as false (Bakir et al. 2018).
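The ranking logic described above can be illustrated with a deliberately simple toy model. The weights, field names, and the trusted-source bonus are all invented for illustration; real feed-ranking systems are proprietary and vastly more complex. The point is structural: when emotional engagement dominates the score, virality beats trustworthiness.

```python
from dataclasses import dataclass

@dataclass
class Post:
    from_trusted_news_org: bool
    reactions: int  # emotional interactions: likes, "angry", "love", etc.
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    """Toy ranking score that heavily weights emotional interactions.

    All weights are hypothetical, chosen only to show the mechanism.
    """
    score = 1.0 * post.reactions + 2.0 * post.comments + 3.0 * post.shares
    # Even a fixed bonus for trusted sources cannot outweigh virality:
    if post.from_trusted_news_org:
        score += 10.0
    return score

# An inflammatory viral post outranks a sober report by a news organization:
viral = Post(from_trusted_news_org=False, reactions=5000, comments=800, shares=1200)
news = Post(from_trusted_news_org=True, reactions=300, comments=40, shares=60)
assert engagement_score(viral) > engagement_score(news)
```

Under such a scoring rule, a fact-checked article and a fabricated rumor compete on the same metric – attention – which is precisely the incentive problem Bakir et al. describe.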

Fig. i: Information disorder (Wardle et al. 2017)

Information disorder is often referred to as “fake news”. Claire Wardle warns that this term is “woefully inadequate” and advises against using it: It cannot capture the broad range of misleading, manipulated and fabricated content, and it has been weaponized by politicians around the world to restrict journalistic coverage they disagree with, thereby endangering press freedom (Wardle et al. 2017, Downie Jr. 2020).

Fig. ii: Disinformation vs. malinformation in Wardle’s terms (Nielsen et al. 2020)

Instead, she introduces a model that identifies three types of information disorder along the dimensions of falseness and harm: If false content is shared deliberately to cause harm, it is disinformation; if no harm was intended, misinformation. When content that is based on some facts is spread strategically to cause harm, it is malinformation (Wardle et al. 2017, fig. i). Malinformation about the pandemic is more prevalent than disinformation, as a study of the infodemic in six countries (not including India) found: Most false claims were not completely fabricated but involved reconfigurations in which “existing and often true information is spun, twisted, recontextualized, or reworked” (Nielsen et al. 2020, fig. ii).
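Wardle’s two-axis model boils down to a small decision table, which can be sketched as follows (the function and the residual label for harmless, factual content are my own shorthand; the three disorder labels follow fig. i):

```python
def classify(content_is_false: bool, harm_intended: bool) -> str:
    """Map Wardle's two dimensions, falseness and harm, to a label.

    - disinformation: false content shared deliberately to cause harm
    - misinformation: false content shared without intent to harm
    - malinformation: fact-based content weaponized to cause harm
    """
    if content_is_false:
        return "disinformation" if harm_intended else "misinformation"
    return "malinformation" if harm_intended else "information"

assert classify(content_is_false=True, harm_intended=True) == "disinformation"
assert classify(content_is_false=True, harm_intended=False) == "misinformation"
assert classify(content_is_false=False, harm_intended=True) == "malinformation"
```

The decontextualized mosque video in fig. iii, for instance, would fall into the malinformation cell: the footage is authentic, but its framing is designed to harm.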


Fig. iii: Mumbai mosque video taken out of context (AFP India 2020a)

This is not surprising. Once we validate some elements of a story, we tend to lower our guard. The post in fig. iii has been viewed thousands of times on Facebook and Twitter: The video shows a heated conversation between a policeman and a Muslim politician that is in itself authentic; this can lead users to overlook that it has been taken out of context: The post claims that the Muslim politician brazenly defends his congregation’s violation of the national lockdown – while he in fact simply reacts to a noise complaint unrelated to the pandemic (AFP India 2020a).




The Covid-19 infodemic: a “mixture of panic and lack of good data”

The term “infodemic”, a blend of “information” and “epidemic”, was coined by journalist and political scientist David Rothkopf in reaction to the 2002-2004 SARS outbreak. The SARS infodemic, according to Rothkopf, had “implications that are far greater than the disease itself”; it transformed a “regional health crisis into a global economic and social debacle” (2003).

But what is causing the Covid-19 infodemic? Cristina Tardáguila, Associate Director of the International Fact-Checking Network (IFCN), attributes it to “the mixture of panic and lack of good data”, which can weaken “our capacity to sort fact from fiction” (Suárez 2020). Covid-19 sends up to 7.8 billion people into an emotional state of exception as it threatens our bodies and our communal lives. Meanwhile, there is no ultimate epistemic authority; even epidemiologists grapple with learning about the new virus as it spreads. Hence, just as the internet made everybody a publisher, this crisis seems to have made everybody an “expert”. Furthermore, some governments are exploiting it to spread hatred against opponents or to present minorities as biohazards, most prominently China (DiResta 2020) and Russia (Beaumont et al. 2020).

The Global South and marginalized populations in the Global North – whom I will subsume under the term “South”, as proposed by Milan & Treré (2019:321) – are more exposed to the infodemic for several reasons. First, they are already (Van Dorn et al. 2020) or are predicted to be (Moore et al. 2016) disproportionately affected by the virus; the North has discontinued many humanitarian projects due to travel restrictions and the fear of importing the virus, leaving the South to fight and make sense of the pandemic on its own. Second, the South tends to have less access to reliable information that would help reject false claims, due to a higher prevalence of news deserts, government censorship, and Facebook’s practice of offering very limited versions of the internet to marginalized regions via its Free Basics app (Felschen 2020d). Third, there are no fact-checking organizations for many smaller languages. Fourth, governments and civil societies in the South tend to react to ICT with less scrutiny and regulation (Birhane 2019) than e.g. EU countries, which have threatened to fine Facebook over information disorder (Murphy 2016).

Fig. iv: Infodemic trends in India (Chowdhury 2020)

In India, a group that already faced fierce discrimination is being scapegoated for the virus: Muslims, the country’s largest religious minority. Until March 30th, India’s infodemic followed trends similar to other countries, as an analysis by the independent Indian fact-checking organization Boom shows (Chowdhury 2020, fig. iv). Then news broke of Corona-related deaths traced back to a Delhi gathering of two thousand devotees of the Muslim Tablighi Jamaat movement, held in violation of social distancing rules. Authorities quickly labeled it a “super-spreader” event and correlated it with the sudden increase in Covid-19 cases; although widespread testing is not available, the Health Ministry links about 30 percent of India’s total cases to attendees of the event (Sharma 2020). In comparison, no mention was made of Hindu events that violated the national lockdown. A journalist who reported critically about one such event, attended by a Modi ally, was charged with “spreading rumors with intent to cause a riot” (Varadarajan 2020) – an example of how the concept of “fake news” can be exploited for censorship.

Senior leaders of the ruling Hindu fundamentalist Bharatiya Janata Party (BJP) quickly blamed the Jamaat for spreading the virus on purpose: A cabinet member called it a terror plot and a “Taliban crime”. A sector of the mainstream media followed that narrative, and platform users picked it up; #CoronaJihad, #CoronaTerrorism, and #CoronaBombsTablighi started trending online. In the archive of the fact-checking platform AFP India, most of the April and May stories are Islamophobic.

I have identified three broad categories:


Fig. v: Malinformation (AFP India 2020b)
Fig. vi: Malinformation (AFP India 2020c)

1. Muslims are accused of spreading the virus, e.g. here and here and in a story about a plate-licking Muslim (the 2018 video in fact shows pre-Corona dining etiquette, AFP India 2020b, fig. v).

2. Muslims are claimed to be granted exceptions from containment measures by the authorities, e.g. in fig. iii and in a post about a nurse who was allegedly forced by a local politician to bow before a Muslim man after accusing him of spreading the virus (she was in fact treating him for a leg injury, AFP India 2020c, fig. vi).

Fig. vii: Malinformation (AFP India 2020d)

3. Muslims are represented as militant Islamists, e.g. here and in a false report about an attack on a female pharmacist (there was no attack, and the woman died of other causes according to AFP India 2020d).

Several users who posted and commented on the above videos and photos used Islamophobic language (“these type of mullahs”, “crazy behavior”, “gullible”, “Islam never accept (sic) truth”), compared Muslims to pigs and goats (fig. iii), and even incited violence (“shoot them”, fig. vi).

This digital hate speech campaign translated into real-life discrimination and attacks: The leader of the nationalist Hindu Mahasabha party called for Jamaatis to be shot “at sight” (N.N.a 2020). A BJP member of the Legislative Assembly called for a boycott of Muslim vegetable vendors; vigilante groups followed the appeal and assaulted vendors (Pandey 2020). A baby allegedly died after its Muslim mother was refused entry to a hospital and assaulted (Jain 2020). Hate speech also harms public health: Many Muslims have become so suspicious of the government that they refuse to cooperate with quarantine rules (N.N.b 2020). Prime Minister Modi waited for weeks before making any effort to soothe this sectarian climate (Modi 2020).


“The biggest challenge fact-checkers have ever faced”

What is being done to verify and correct false claims? Tardáguila calls Covid-19 “the biggest challenge fact-checkers have ever faced” (Suárez 2020). Fact-checking has gained traction globally since a series of scandals, especially the Russian disinformation campaign on Facebook in favor of Donald Trump during the 2016 election. During the pandemic, many independent fact-checking organizations (such as the IFCN), media organizations, IOs (such as the WHO), and scientists (such as the Indian Scientists’ Response to Covid-19) have been working overtime to respond to the flood of questionable claims. I will focus on Facebook’s role in checking and possibly correcting, deleting, and archiving the claims published on its platform.

Fig. viii: Facebook took down content with the hashtag #CoronaJihad (22-05-2020)
Fig. ix: Facebook has labeled this content as contested but it can still be accessed, cp. fig. vi (22-05-2020)

The 2016 scandal forced the network to abandon its proudly hands-off attitude toward the content published on its platform (Swisher et al. 2017). Today it relies on a combination of human fact-checkers and, increasingly, machine learning to review content that has been flagged by users. False content is either taken down (fig. viii) or labeled as contested by fact-checkers and corrected (fig. ix).
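The flag-review-act flow described here can be sketched as a toy decision function. Everything below is an assumption for illustration – the threshold, the parameter names, and the exact division of labor between the classifier and human fact-checkers; Facebook’s real pipeline is not public.

```python
from typing import Optional

def moderate(flagged_by_users: bool,
             ml_false_score: float,
             human_verdict: Optional[str]) -> str:
    """Hypothetical moderation flow: user flags queue content for review,
    an ML score triages the queue, and a human fact-check verdict decides.

    ml_false_score: probability (from an assumed classifier) that the
    content is false; human_verdict: "false", "true", or None if no
    fact-checker has reviewed the item yet.
    """
    if human_verdict == "false":
        # cp. figs. viii and ix: removal or a "contested" label
        return "take down or label as contested"
    if flagged_by_users and human_verdict is None and ml_false_score > 0.95:
        return "queue for human review"  # ML triages, humans decide
    return "leave up"

assert moderate(True, 0.99, "false") == "take down or label as contested"
assert moderate(True, 0.99, None) == "queue for human review"
assert moderate(False, 0.10, None) == "leave up"
```

Even in this idealized sketch, the structural weakness discussed below is visible: nothing happens until content has been flagged and reviewed, i.e. after it has already circulated.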

Facebook partners with third-party fact-checking organizations in 79 countries, covering “more than 50 languages”. For its large Indian market, the platform has accredited eight independent organizations (Facebook 2020). Despite this progress, many of the 191 countries in which Facebook is available (all countries except China, Iran, Syria, and North Korea) and many of the 7,117 recognized languages (Ethnologue 2020) remain uncovered – often with serious consequences: Although civil society organizations had warned Facebook of hate speech against the Rohingya since 2013, the company did not stop Myanmar’s military from riling up millions of users against the Muslim minority; the accounts were only shut down in 2018, after the Rohingya genocide and refugee crisis.

There are other loopholes: Even when an accredited organization identifies a false claim, there is no guarantee that Facebook shows a warning. For example, the video in fig. iii is still online (as of May 20th, 2020) without being marked as false. Furthermore, government accounts that promote hate speech (e.g. that of Sudan’s militia leader and government official Hemeti) are still allowed on the platform, because Facebook refuses to take action against state actors (Shahani 2019; Felschen 2020d).


Can we still be at home in the digital world? An outlook

Most recently, Facebook has appointed an independent Oversight Board, a kind of “Supreme Court for Facebook” in the words of the Myanmar-based human rights activist Matthew Smith, who contributed to its development (ibid.). Its mandate is to decide “what content to take down or leave up”, and those decisions will be binding for Facebook (Botero-Marino et al. 2020).

However, this will not solve the problem of disinformation and hate speech. False claims are generated faster than they can be checked, especially when bots or troll farms are involved. Content moderators only intervene after some harm has already been done. As long as the board does not have the mandate to challenge the incentives that Facebook’s algorithms create for widely shared, emotionalizing content of dubious origin, the problem will persist – especially in a pandemic, and especially in the Global South.

As a journalist, I have focused on human rights for the last ten years, but I had a blind spot: I never looked into how tech companies endanger human rights when they fail to provide safeguards against hate speech and A.I. bias, or when they monetize surveillance data. This ignorance stemmed from my own positionality as a white person living in a stable democratic welfare state, Germany, whose leaders and privacy laws I tend to trust. My skepticism emerged when I moved to the US, with its deep-rooted discrimination against black and immigrant communities and its weak privacy laws. This seminar has encouraged me to pay more attention to this aspect of human rights.

By coincidence, the German newspaper FAZ assigned me to interview Shoshana Zuboff afterward. I asked her how we can reconcile our dependency on ICT tools and platforms with the urge to stop feeding surveillance and discrimination ecosystems with more data. In her view, the solution cannot be to withdraw individually; instead, she urges us to fight together for regulation and epistemic rights. Throughout her work, Zuboff emphasizes that technology is not evil per se – everything depends on what we decide to make of it. She is convinced that once politics catches up with technology, we can finally “be at home in the digital world” – a hopeful utopia against today’s bleak backdrop.

Word count: 2490



AFP India (2020a) This video has circulated since 2016 in reports about a noise complaint at a Mumbai mosque. https://factcheck.afp.com/video-has-circulated-2016-reports-about-noise-complaint-mumbai-mosque (accessed 10/05/2020).

AFP India (2020b) This 2018 video shows Bohra Muslims practicing dining etiquette in Mumbai. https://factcheck.afp.com/2018-video-shows-bohra-muslims-practicing-dining-etiquette-mumbai (accessed 11/05/2020).

AFP India (2020c) This photo shows a nurse in India treating a man who sustained a leg injury. https://factcheck.afp.com/photo-shows-nurse-india-treating-man-who-sustained-leg-injury (accessed 11/05/2020).

AFP India (2020d) This late pharmacist’s husband and local Indian authorities said her death was not the result of pandemic-related violence. https://factcheck.afp.com/late-pharmacists-husband-and-local-indian-authorities-said-her-death-was-not-result-pandemic-related (accessed 10/05/2020).

Vian Bakir & Andrew McStay (2018) “Fake News and The Economy of Emotions”, Digital Journalism 6(2), pp. 154-175.

Peter Beaumont, Julian Borger & Daniel Boffey (2020) Malicious forces creating ‘perfect storm’ of coronavirus disinformation. https://www.theguardian.com/world/2020/apr/24/coronavirus-sparks-perfect-storm-of-state-led-disinformation (accessed 18/05/2020).

Birhane, Abeba (2019) The Algorithmic Colonization of Africa. https://reallifemag.com/the-algorithmic-colonization-of-africa/ (accessed 15/05/2020).

Catalina Botero-Marino, Jamal Greene, Michael W. McConnell & Helle Thorning-Schmidt (2020) We Are a New Board Overseeing Facebook. Here’s What We’ll Decide. https://www.nytimes.com/2020/05/06/opinion/facebook-oversight-board.html (accessed 20/05/2020).

Chowdhury, Archis (2020) Fake News In The Time Of Coronavirus: A BOOM Study. https://www.boomlive.in/fact-file/fake-news-in-the-time-of-coronavirus-a-boom-study-8008/page-2?infinitescroll=1 (accessed 10/05/2020).

DiResta, Renée (2020) For China, the ‘USA Virus’ Is a Geopolitical Ploy. https://www.theatlantic.com/ideas/archive/2020/04/chinas-covid-19-conspiracy-theories/609772/ (accessed 05/05/2020).

Downie Jr., Leonard (2020) The Trump Administration and the Media. Attacks on press credibility endanger US democracy and global press freedom. https://cpj.org/cpj_usa_2020.pdf (accessed 15/05/2020).

Ethnologue (2020) About. https://www.ethnologue.com/about (accessed 24/05/2020).

Facebook (2020) Fact-Checking on Facebook: What Publishers Should Know. https://www.facebook.com/business/help/182222309230722 (accessed 21/05/2020).

Felschen, Christina (2020a) Algorithms are opinions embedded in code. https://wpmu.mah.se/nmict201group5/2020/03/01/algorithm-discrimination-inequality-usa-credit-score/ (accessed 03/05/2020).

Felschen, Christina (2020b) The techie resistance. http://wpmu.mah.se/nmict201group5/2020/03/06/algorithms-discrimination-diversity-data-resistance-regulation-audit/ (accessed 03/05/2020).

Felschen, Christina (2020c) How Clearview AI targets undocumented immigrants.  http://wpmu.mah.se/nmict201group5/2020/03/14/clearview-ai-facial-recognition-undocumented-immigrants-privacy (accessed 03/05/2020).

Felschen, Christina (2020d) “It was the perfect storm” – Facebook’s role in the Rohingya genocide. http://wpmu.mah.se/nmict201group5/2020/03/18/hate-speech-propaganda-facebook-rohingya-genocide-myanmar-interviews/ (accessed 03/05/2020).

Jain, Shruti (2020) Baby Dies After Doctor Allegedly Refused to Treat Pregnant Muslim Woman. https://thewire.in/communalism/rajasthan-muslim-woman-baby-dies-doctor (accessed 20/05/2020).

Johnson, Stephen (2020) Edward Snowden warns ‘bio-surveillance’ may outlast coronavirus. https://bigthink.com/politics-current-affairs/coronavirus-tracking (accessed 07/05/2020).

Loi, Michele (2020) We must save privacy from privacy itself. https://algorithmwatch.org/en/we-must-save-privacy-from-privacy-itself (accessed 07/05/2020).

Modi, Narendra (2020) COVID-19 does not see race. https://twitter.com/PMOIndia/status/1251839308085915649 (accessed 22/05/2020).

Melinda Moore, Bill Gelfeld, Adeyemi Theophilus Okunogbe & Christopher Paul (2016) Identifying Future Disease Hot Spots: Infectious Disease Vulnerability Index. Santa Monica, CA: RAND Corporation.

Stefania Milan & Emiliano Treré (2019) “Big Data from the South(s): Beyond Data Universalism”, Television & New Media 20(4), pp. 319 –335.

Mühlhoff, Rainer (2020) “We Need to Think Data Protection Beyond Privacy: Turbo-Digitalization after COVID-19 and the Biopolitical Shift of Digital Capitalism”, Medium 2020.

Murphy, Mike (2016) Germany threatens to fine Facebook €500,000 for each fake news post. https://qz.com/865964/facebook-fb-could-face-e500000-fines-for-each-fake-news-post-in-germany (accessed 19/05/2020).

N.N.a (2020) Hindu Mahasabha leader held in UP for vile remark. https://www.nationalheraldindia.com/national/hindu-mahasabha-leader-held-in-up-for-vile-remark (accessed 21/05/2020).

N.N.b (2020) Would-be autocrats are using covid-19 as an excuse to grab more power. https://www.economist.com/international/2020/04/23/would-be-autocrats-are-using-covid-19-as-an-excuse-to-grab-more-power (accessed 17/05/2020).

Ziad Obermeyer, Brian Powers, Christine Vogeli & Sendhil Mullainathan (2019) “Dissecting racial bias in an algorithm used to manage the health of populations”, Science Vol. 366, pp. 447-453.

O’Neil, Cathy (2016) Weapons of Math Destruction. New York: Crown Books.

Pandey, Alok (2020) Abused, Stopped From Selling Vegetables, Allege Muslim Vendors In UP. https://www.ndtv.com/india-news/coronavirus-uttar-pradesh-abused-stopped-from-selling-vegetables-allege-muslim-vendors-in-up-2210963 (accessed 20/05/2020).

Patil, Samir (2019) India Has a Public Health Crisis. It’s Called Fake News. https://www.nytimes.com/2019/04/29/opinion/india-elections-disinformation.html (accessed 03/05/2020).

Rasmus Kleis Nielsen, Richard Fletcher, Nic Newman, J. Scott Brennen & Philip N. Howard (2020) Navigating the ‘Infodemic’: How People in Six Countries Access and Rate News and Information about Coronavirus. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2020-04/Navigating%20the%20Coronavirus%20Infodemic%20FINAL.pdf (accessed 08/05/2020).

Rothkopf, David J. (2003) When the Buzz Bites Back. https://www.washingtonpost.com/archive/opinions/2003/05/11/when-the-buzz-bites-back/bc8cd84f-cab6-4648-bf58-0277261af6cd/ (accessed 07/05/2020).

Shahani, Aarti (2019) Why Facebook Won’t Kick Off A Warlord. https://www.npr.org/2019/06/26/735883899/why-facebook-wont-kick-off-a-warlord?t=1584314399937&t=1590338827725 (accessed 18/05/2020).

Sharma, Neetu Chandra (2020) 30% covid-19 cases in India linked to Tablighi Jamaat event: Govt. https://www.livemint.com/news/india/30-covid-19-cases-in-india-linked-to-tablighi-jamaat-event-govt-11587218560611.html (accessed 16/05/2020).

Suárez, Eduardo (2020) How fact-checkers are fighting coronavirus misinformation worldwide. https://reutersinstitute.politics.ox.ac.uk/risj-review/how-fact-checkers-are-fighting-coronavirus-misinformation-worldwide (accessed 08/05/2020).

Kara Swisher & Kurt Wagner (2017) Facebook’s Mark Zuckerberg wrote a 6,000-word letter addressing fake news and saving the world. https://www.vox.com/2017/2/16/14632726/mark-zuckerberg-facebook-manifesto-fake-news-terrorism (accessed 22/05/2020).

Aaron van Dorn, Rebecca E Cooney & Miriam L Sabin (2020) “COVID-19 exacerbating inequalities in the US”, The Lancet Vol. 395, pp. 1243-1244.

Varadarajan, Siddharth (2020) In India, a Pandemic of Prejudice and Repression. https://www.nytimes.com/2020/04/21/opinion/coronavirus-india.html?fbclid=IwAR1JAvL3vv-nbUcGsclUIb0DIugkuc72WWaoTtabsyO01lC5HQqJXie_cWU (accessed 06/05/2020).

Claire Wardle & Hossein Derakhshan (2017) Information disorder. Toward an interdisciplinary framework for research and policy making. https://firstdraftnews.org/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-désinformation-1.pdf (accessed 15/05/2020).

WHO (2020) Novel Coronavirus (2019-nCoV) Situation Report-13.  https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf?sfvrsn=195f4010_6 (accessed 15/05/2020).

Zuboff, Shoshana (2019) The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.


Header photo: A Muslim woman covers her nose and mouth while waiting for a bus in Assam, India, to return home during the national Covid-19 lockdown in April 2020. (c) Talukdar David / Shutterstock.com
