To start my series of blog posts on Datafication, Social Media and Development, I’d like to paraphrase Dr. Martin Luther King’s famous words on justice:
Data injustice anywhere is a threat to data justice everywhere.
MLK Day was celebrated on January 20th in the United States, and this year it coincided with a data privacy and Artificial Intelligence (AI) scandal in the same country: news broke about the American startup Clearview AI.
Clearview AI collected billions of pictures of social media users (a method called “scraping”) from Twitter, Facebook, LinkedIn and other major platforms. It sells access to these photos for facial recognition to various public and private law enforcement entities for “security” purposes – or plain surveillance, as some would say. The technology can identify individuals simply by matching their photos in search results, entirely undermining the fundamental human right to privacy.
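To make “scraping” a bit more concrete, here is a minimal, hypothetical Python sketch that pulls image URLs out of a page’s HTML. This is not Clearview’s actual code, and a real scraper would first fetch the page over HTTP at massive scale; the HTML snippet and URLs below are invented purely for illustration.

```python
# Toy illustration of "scraping": extracting image URLs from a page's HTML.
# The sample HTML and URLs are made up; a real scraper would download
# profile pages over HTTP and repeat this across millions of accounts.
from html.parser import HTMLParser


class ImageCollector(HTMLParser):
    """Collect the src attribute of every <img> tag encountered."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)


# Stand-in for HTML fetched from a (fictional) public profile page.
sample_profile_html = """
<html><body>
  <img src="https://example.com/photos/user123/portrait.jpg">
  <p>Some profile text</p>
  <img src="https://example.com/photos/user123/vacation.jpg">
</body></html>
"""

collector = ImageCollector()
collector.feed(sample_profile_html)
print(collector.image_urls)
```

The unsettling point is how little effort the basic technique takes: a few dozen lines of standard-library code suffice to harvest every photo a page exposes.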
While citizens in Europe are protected by the General Data Protection Regulation (GDPR), this is not the case in other regions. India and China, for instance, aim to track and monitor citizens on a regular basis, and digital and biometric registration is on the rise in the global South. As the Clearview case shows, the United States, where data privacy protections are lacking, is not exempt from such developments either. Thus, as global citizens we should be concerned, even in our supposed GDPR safe haven.
Could this be the beginning of the end of privacy as we know it? Or will AI and biometric data actually help us solve crimes?
So how does Clearview actually work?
The problem is explained in the New York Times article that broke the story:
“Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”
However, not only did the company violate the rights of private citizens, it also violated the terms of service of the above-mentioned social media giants. Consequently, Twitter, Google, YouTube, LinkedIn and Venmo (with perhaps more to follow?) sent cease-and-desist letters to Clearview, demanding that it stop taking pictures from their platforms. But what will happen to the pictures already stored in the startup’s database?
In spite of lawsuits, the CEO of the startup does not seem willing to negotiate. Take a look at what he has to say, justifying his venture by a “first amendment right to public information”. But how right is it to sell that information?
Nowadays data is an “object whose production interests those who exercise power” – and this could not be clearer than in the case of Clearview (no pun intended).
And this is where we encounter the everlasting struggle of class and power – now in the digital realm. Who is taking advantage of the data of ordinary citizens? Who possesses the technical acumen? Who is responsible for the behavioural change in society needed to prevent human rights violations and advance data justice?
But are we actually safe in Europe, where Clearview is aiming to expand? Its practices would violate the EU’s General Data Protection Regulation, whose Article 4(14) defines biometric data and whose Article 9 restricts its processing. The European Commission is reportedly considering a ban on facial recognition technologies, and is now in close contact with EU data protection authorities in order to prevent further data violations on the continent.
If you have any thoughts to share, please comment below.