AI and Datafication: Good Practices

Technology should serve all of us and not only the privileged few

The Algorithmic Justice League (AJL) was founded by Joy Buolamwini after she identified the racial and gender blind spots of widely used face-recognition software. Joy was developing her own tool for applying filters and add-ons to people's selfies, but throughout the development phase she had to wear a mask, because the software she relied on could not recognize her face as a face.

 

Notably, AI is a reflection of the data we feed it: it can only operate within the spectrum of information and training examples its creators provide. The risks of AI are therefore considerable and have been heavily discussed in recent years, for example around AI used in face recognition and law enforcement, or in financial services and hiring.

 

Joy created the AJL to promote inclusivity in code and AI development. Together with her team, she combines research into how AI develops biases with raising awareness of people's experiences of gender and race discrimination by AI. The AJL promotes equitable and accountable AI through four principles:

 

  • Affirmative consent
  • Meaningful transparency
  • Continuous oversight and accountability
  • Actionable critique

 

This can be seen through their many initiatives and research, such as their publication Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, or advocacy actions such as their Letter to Congress calling for a suspension of the use of face-recognition technology. The Algorithmic Justice League wants the world to remember that:

 

 “WHO codes matters, HOW we code matters and that we can code a better future.” 


Learn more about their initiative at
https://www.ajlunited.org
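The core method behind Gender Shades was simple but revealing: instead of reporting one overall accuracy figure, classifier performance was measured separately for each intersectional subgroup. A minimal sketch of that kind of disaggregated evaluation is below; the subgroup labels, records, and function name here are hypothetical illustrations, not the study's actual code or data.

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute classification accuracy disaggregated by subgroup.

    records: iterable of (subgroup, true_label, predicted_label) tuples.
    Returns a dict mapping each subgroup to its accuracy.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Toy data: a classifier that performs well on one subgroup
# and poorly on another -- invisible in an aggregate score.
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
]
print(subgroup_accuracy(records))
# -> {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```

A single aggregate accuracy over these four examples would be 75%, masking the 50-point gap between subgroups; reporting per-group results is what makes the disparity visible.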

 

We can also see this reflected in the development sector.

John Shawe-Taylor, UNESCO’s Chair for Artificial Intelligence, explains in an interview with Synced how he foresees supporting the work of UNESCO and the mission of the SDGs by empowering communities with knowledge of AI. He reflects on their practices in Africa and how UNESCO is trying to empower people on the ground to find solutions to some of the most pressing problems through AI. One of the core challenges he identified was cultural bias. When some of the datasets were analysed against the cultural contexts in which the solutions were to be applied, serious questions of usefulness and accuracy arose. A Western dataset analysed through a Western model might not be relevant in, for example, rural Zimbabwe. Having data from various cultures and environments is crucial to learning from data that is relevant to the task.

 

...we don’t realise how biased we are until we see an AI reproduce the same bias, and we see that it’s biased. – John Shawe-Taylor, UNESCO’s Chair for Artificial Intelligence

 

Photo by Peter G

 

Today we can see growing initiatives in machine learning in Africa that are taking matters into their own hands to create systems that respond to the different realities and cultures of the continent. One example is the Indaba conference, a South African event created for sharing best practices in machine learning and artificial intelligence. Its mission, as stated on their website…

 

“… it is not for Africa to be observers and receivers /…/ but active shapers and owners of the technological advancements of Africa.”

 

Do you have any examples of successful AI practices in Asia, Africa, or South America? Let us know by sharing them in the comments.

 

 

(Feature image credit: Christina Woc in TechChat)
