In my last post I wrote about the online presence of right-wing populism and the rise of “alternative news”. I also wrote about the use of bots to spread misinformation. All of this together puts more responsibility on readers to distinguish whether what they are reading is actual news or just junk.
Rumours, defamation and false reputations have always existed in human society – the difference today, with the internet, is the pace at which they are produced and shared. Social media enables people to spread information rapidly without any confirmation of its truth. At the same time, we turn to the internet and search engines for information to an ever-increasing extent. In the US, seeking information has become the second most common online activity (Schroeder 2017).
What happens online can have devastating impacts in real life. Several studies have shown that online misinformation is becoming more and more difficult to identify and that it can mislead even readers with strong (digital) literacy skills. When misinformation delegitimizes the message of public institutions and experts, this can have widespread consequences not only for our lives (when it comes to health issues, for example) but also for our democracies. Not to mention the climate and the future of our planet.
What measures do we need to take to battle this trend? How can we provide citizens, journalists, and policymakers with tools to spot ‘fake news’ online, understand how it spreads, and obtain access to verified information?
Co-inform is a European project that aims to recognize and tackle misinformation online, with the explicit mission of fostering critical thinking and digital literacy. Based on the theory that informed and engaged communities constitute the foundation of a healthy democracy and can find solutions to tackle misinformation, the project targets three main stakeholder groups that could help turn this problem around: policymakers, journalists, and citizens.
Historically, the fight against online misinformation has been led by tech companies, state legislation and fact-checkers. But when citizens increasingly distrust profit-driven companies, when legislation is often criticized as censorship, and when fact-checkers are overwhelmed by the sheer volume of information, it also becomes harder for policymakers to act effectively.
What Co-inform suggests is creating socio-technical tools for misinformation detection, effectively combining technology and social science. As already mentioned, tech companies face growing suspicion when acting as fact-checkers. Co-inform therefore proposes a method called co-creation, which combines technology with citizen engagement.
One part of the suggested solution is a browser plugin to “raise citizens’ awareness of fully or partially misinforming content, of related fact-checking articles and corrective information, of average citizens’ perceptions towards this content, and of key pro and against comment from fellow citizens”. To complement this, Co-inform is also working on a dashboard for fact-checking journalists and policymakers, “showing what misinformation is detected, where it originates from, how and where it spreads and will spread in the near future, what’s the current and predictable public perception, and what are the key comments about it from the public”.
I believe that the ambitious solution Co-inform suggests can be of great importance, but it will not solve the whole problem. Relying on just one tool to tackle misinformation can also diminish our individual responsibility and critical thought. We need to educate people about the consequences that misinformation can have on our societies and focus on enhancing digital literacy for everyone.
Following the three steps recommended by Co-inform is a great start: 1. Think, 2. Check, 3. Share. In the end, we all have a responsibility to counteract misinformation.