Fighting Fake News in Kerala

Fake news can pose severe challenges for local authorities and humanitarian organisations when delivering aid. This blog reflects on the (mis)use of social media during the recent Kerala floods in August 2018. 

How many social media accounts do you have? People increasingly share updates, photos and other personal information on new media. ‘Social media’ is a term for platforms that are ‘social’ in that they enable interaction between people using different forms of ‘media’. Facebook, YouTube and WhatsApp are the platforms with the most users worldwide, but there are many more, and their popularity differs between countries. With over 200 million users, WhatsApp is one of the most popular platforms in India. Recent figures from the Centre for the Study of Developing Societies in Delhi show that the number of users is growing, especially among India’s rural population. But with increasing popularity also comes a greater risk of misuse…

Fake News During Kerala Floods

Kerala is one of the most densely populated states in India and a popular tourist destination. In 2018, exceptional monsoon rains caused major flooding that forced over a million people to leave their homes. During the peak of the rains in August, landslides and floods damaged at least 10,000 km of roads and destroyed more than 20,000 buildings. The aftermath of the floods is immense: many rural areas have been heavily damaged or abandoned and can no longer provide the local population with basic needs such as food, shelter and water.

Kerala is situated in south-western India and always has monsoon rains between June and November. This year they were extremely severe. Source: Reuters.

Almost immediately, national and international media started to cover the event using images and reports from social media sources. Digital social networks were used to contact friends and family, to seek help in case of an emergency, or to organize volunteer groups. (Local) authorities, in turn, can use social media channels to issue warnings, advice and guidance on how to cope with or prevent emergencies or disasters. On Twitter #KeralaFloods and #KeralaFloodRelief were trending. Unfortunately, not all content was real.

Two Examples of How It Can Go Wrong

Example 1: The owner of a car park noticed that an old photo of his property was being circulated on WhatsApp. In the photo the property is flooded, while in reality this was not the case: the photo had been taken years earlier. This example shows how easily misinformation spreads via popular social media platforms, especially in a country where over 13 billion messages are sent each day, often in village WhatsApp groups.

This photo was circulated in WhatsApp groups and shared via Twitter in August 2018. However, the photo was taken during a flood a few years earlier.

Example 2: Sometimes people try to take advantage of a situation by spreading false information or manipulating images to suit their purpose. The man in the image below pretended to be an army official and shared videos in which he spread false information about rescue operations and aid delivery. Others make news satire or parodies indistinguishable from real news. Either way, it represents a significant challenge for local authorities and aid organisations, as it creates confusion and is difficult to fight. Official replies to fake posts most often do not get as much attention as the fakes themselves, and fact-checking is a time-consuming task that requires extra staff.

The man in this image pretended to be an army official and shared videos in which he spread false information about rescue operations and aid delivery.

Citizens as Information Brokers

How can we fight fake news? Citizens can help by being critical and sharing news only when it comes from a source that they know and trust. This may be a good rule of thumb; in times of crisis, however, people’s state of mind or the wish to help others may make them less selective. Digital volunteers increasingly take on important tasks. Locals acting as volunteer journalists collect, synthesise and report on the situation in their home town. Others operate entirely digitally, helping to establish supportive platforms and mobilise resources from outside the disaster area. Such remote operators can do simple things like retweeting or translating tweets, as well as more complex tasks such as verifying or routing information. Fact-checking has become a critical task of volunteer networks. As described in Zeynep Tufekci’s captivating book Twitter and Tear Gas, members of the network 140journos in Turkey verify citizen reporting using only a wifi connection.

Science Can Help Too

Scientists are developing automated techniques to distinguish real images from fake images posted on Twitter. Data mining of citizen-generated content (also called crisis informatics) is emerging as a multidisciplinary research field combining computing and social-science knowledge of disasters. While such scientific breakthroughs are important, we should always remain aware of the limits of crisis data. A drawback of relying on these new digital big data is the risk of a ‘second-order disaster’, in which richer people with greater access to social media use it to recover at a rapid pace, while others with less access fall behind, deepening social inequalities.
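One simple building block that such automated image-checking systems can use is perceptual hashing: a recycled photo, like the old car-park image from Example 1, can be flagged by comparing its hash against an archive of earlier flood pictures. Below is a minimal sketch of “average hashing” in Python. The 8×8 grid size and the synthetic pixel arrays are illustrative assumptions; a real pipeline would decode actual images with an imaging library first, and production systems use more robust hashes.

```python
def average_hash(pixels, size=8):
    """Downscale a 2D grid of grayscale values (0-255) to size x size
    by block averaging, then emit one bit per cell: 1 if the cell is
    brighter than the overall mean. Near-duplicate images produce
    near-identical bit patterns."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            r0, r1 = r * h // size, (r + 1) * h // size
            c0, c1 = c * w // size, (c + 1) * w // size
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]


def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same image,
    e.g. an archived photo reposted with minor edits."""
    return sum(a != b for a, b in zip(h1, h2))


# Synthetic stand-ins for images: a gradient, a brightened copy of it,
# and a visually different checkerboard pattern.
original = [[4 * i + 4 * j for j in range(32)] for i in range(32)]
brightened_repost = [[v + 10 for v in row] for row in original]
different_photo = [[255 if ((i // 4 + j // 4) % 2) else 0
                    for j in range(32)] for i in range(32)]

print(hamming(average_hash(original), average_hash(brightened_repost)))  # 0
print(hamming(average_hash(original), average_hash(different_photo)))    # 36
```

The brightened copy hashes identically (distance 0) because a uniform brightness shift moves every pixel and the mean by the same amount, while the unrelated pattern lands far away in Hamming distance, so a checker could flag the repost as recycled rather than new.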

2 Comments

  1. Malin

    Thank you for an interesting read!

    I find your comment that relying too much on new digital data might increase social injustice interesting. I read an article the other day about how the focus on technological solutions (such as networks, phones, etc.) in humanitarian aid risks shutting out those who cannot access them, or those who do not know how to interpret or use them. But it also mentions that it is impossible to always reach every single person, and that not all projects can include everyone. Do you agree?

    Do you see a solution to the problem of new digital data and its potential connection to social injustice?

  2. Malin Pettersson

    Thank you for an interesting read!
    I tried posting before, but I’ll try again:

    I thought your comment on the risk of further social inequality as a consequence was interesting, and it reminded me of an article I read a while ago. It argued that there is a risk of failing to inform or include a lot of people when focusing aid and response on digital data and technology, as it excludes those who cannot reach these channels or use the technology, or who do not know how to interpret the information provided via them (and if you don’t know how to interpret the information, false information is even more dangerous). The development is great for those who can use it, but what happens to those who can’t? Are they on their own? However, the article also mentions that it is impossible to always reach everyone. Do you agree?

    Do you see a better way of managing this issue of adding to social inequality by the use of new digital data in situations such as the one you mention?

Comments are closed.