The Internet has connected people all over the world as never before. It has also deepened commercial, corporate, and political connections, creating more freedom in many spheres. This is a kind of universal democracy hosted by the Internet through social connections and social media. But do we have true democracy on social platforms?
In my previous article I already mentioned the well-known case surrounding the 2016 US elections. This time, my curiosity left me wondering how social media companies view the misuse of data and data leakage, and what actions they take to keep up the level of democracy on their platforms.
A Social Media Giant's View
Samidh Chakrabarti, the product manager in charge of civic engagement at Facebook, noted in his article "Hard Questions: What Effect Does Social Media Have on Democracy?" that unprecedented numbers of people channel their political energy through social media, and that it has been used in unforeseen ways, with societal repercussions that were never anticipated.
Based on that, Facebook has decided to take certain actions to foster a democratic approach on its platform. These include requiring organizations that run election-related ads to confirm their identities, and archiving electoral ads and making them searchable to enhance accountability.
At the same time, Facebook faces challenges: according to Mr. Chakrabarti, many human rights organizations use the platform to spread educational messages around the world. The wrong kind of transparency could put these activists in real danger in many countries. But without transparency, it can be hard to hold politicians accountable for their own words.
I agree that if there is one fundamental truth about social media's impact on democracy, it is that it amplifies human intent, both good and bad. At its best, it allows us to express ourselves and take action. At its worst, it lets people spread misinformation and corrode democracy. Still, this sounds like a standard summary from a press release.
Do algorithms serve democracy?
Facebook's algorithms work to reinforce inequality: they identify and disseminate popular messages, and they hide messages from users who publish too much or whose posts, by the algorithm's calculations, other users do not like. Messages that quickly attract reactions are shown more widely, while inconspicuous posts go unnoticed.
Since inequality is, in principle, inevitable in large networks, the question is not how to overcome it but what criteria should be applied. Public relevance is clearly not Facebook's criterion. Its goal as a corporation is to extend the time users spend on the network. Moreover, Facebook-sponsored posts rarely relate to what a person would really like to know.
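The dynamic described above can be illustrated with a toy ranking function. Everything here, from the signal weights to the post names, is hypothetical; Facebook's real ranking system is far more complex and not public.

```python
# Toy sketch of engagement-based feed ranking.
# The weights and field names below are invented for illustration;
# they are NOT Facebook's actual algorithm.

def engagement_score(post):
    """Score a post purely by early engagement signals."""
    return (2.0 * post["shares"]
            + 1.5 * post["comments"]
            + 1.0 * post["likes"])

posts = [
    {"id": "outrage-bait", "likes": 40, "comments": 90, "shares": 60},
    {"id": "local-news",   "likes": 25, "comments": 5,  "shares": 2},
    {"id": "civic-info",   "likes": 10, "comments": 3,  "shares": 1},
]

# Ranking solely by engagement pushes the most provocative post to
# the top of every feed, regardless of its public relevance.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])
```

Notice that "public relevance" appears nowhere in the scoring function; a post only wins by provoking reactions, which is exactly the inequality the paragraph above describes.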
Eli Pariser, in his book "The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think", writes that Facebook plays on a person's weaknesses by providing simplified content. Between the tactical tasks (for example, watching funny videos) and the strategic ones (for example, social activity) that a person sets for himself, Facebook serves the former. Knowledge, for Facebook, is like food: it encourages the consumption of only those dishes that one likes or feels emotional about. For the individual, this may make sense, but what about collective life and democracy?
It is positive that Facebook provides free access to expert knowledge, even as it changes the very meaning of expertise. Where in traditional media the gatekeepers who decide which information enters the public space are editors, in social networks algorithms do that job. Formally, everyone can be an expert; in reality, only a few of all expert opinions become widespread. To promote your position, message quality matters less than adjusting to Facebook's algorithms.
And this is only one example. There are plenty of others where algorithmic imperfection can work against democratic principles. In face recognition, for instance, algorithms have misidentified people based on skin colour.
Possible Ways Forward
We have an overwhelming amount of numbers and data available to us today. In her book "Weapons of Math Destruction", Cathy O'Neil explored how big data increases inequality and threatens democracy.
The futurist Jacob Morgan, in his article, proposes involving humans in testing algorithms. A simple review of results can make a big difference and lets humans essentially double-check the work of the data.
Mr. Morgan advises that the key to successfully combining human nature and data is not to lean too heavily on either side. While data and algorithms can be too stark and miss the whole picture, humans can be too emotional and ignore the numbers behind a decision. Using algorithms, then reviewing the results, or even running the data past several people, can eliminate error on either side and help ensure a rational and accurate decision.
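Morgan's idea of pairing an algorithm with a human review step can be sketched as follows. The rules, thresholds, and field names are all illustrative assumptions, not any platform's real moderation pipeline.

```python
import random

# Hypothetical human-in-the-loop moderation sketch: an algorithm makes
# the first call, then every removal (plus a random sample of the rest)
# is routed past a human reviewer who may overturn it.

def algorithmic_decision(item):
    """A stark rule: flag anything above a fixed risk score for removal."""
    return "remove" if item["risk_score"] > 0.8 else "keep"

def human_review(item, decision):
    """Humans catch context the algorithm cannot see."""
    if item.get("context") == "satire" and decision == "remove":
        return "keep"  # the algorithm missed the bigger picture
    return decision

def moderate(items, sample_rate=0.3, rng=random.Random(42)):
    results = {}
    for item in items:
        decision = algorithmic_decision(item)
        # All removals, and a random sample of everything else,
        # get a second pair of (human) eyes.
        if decision == "remove" or rng.random() < sample_rate:
            decision = human_review(item, decision)
        results[item["id"]] = decision
    return results

items = [
    {"id": "post-1", "risk_score": 0.95, "context": "satire"},
    {"id": "post-2", "risk_score": 0.90},
    {"id": "post-3", "risk_score": 0.20},
]
print(moderate(items))  # post-1 is kept on human appeal; post-2 is removed
```

The design choice mirrors Morgan's advice: neither side decides alone. The algorithm handles volume, while humans review a sample and every high-stakes decision.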
The Washington Post adds that, since the health of democracy is at stake, scientists should be able to obtain figures from social media companies independently, in order to study the effects of misinformation.
Humans should keep a hand on the pulse so as not to lose control. Since democracy is the form of interaction we gravitate toward, some form of checks and balances should be maintained online as well.