
The great paperclip maximiser: could AI kill development too?

Jobs are a misdirection in the human experiment. Or so believes Nobel Peace Prize recipient Professor Muhammad Yunus, the “father of microcredit and social business,” and founder of Grameen Bank.

Yunus thinks it is silly to believe that the destiny of a human being is to work for someone else. As he puts it, the first humans weren’t “calling from Cave 5 to Cave 10 to see if there was a job.”

“People are natural entrepreneurs…Jobs make us sacrifice our creativity; they put us into slots and force us to grow only in that slot. Real human beings are continuously creative,” said Yunus on a recent Overseas Development Institute (ODI) podcast.

Everybody is an entrepreneur

Yunus believes that Artificial Intelligence (AI) needs to take on the role of the worker, because people are entrepreneurs—job creators. And, ultimately, humans are not driven by entirely selfish interests; a real human being is “a mixture of selfishness and selflessness,” and a selflessness-driven economy can address global problems (unlike selfish capitalism which exacerbates them).

Science fiction writer Charlie Stross agrees that AI is inherently selfish, perhaps scarily so.

Charlie Stross: “AI-based systems that concretise existing prejudices and social outlooks make it harder for activists to achieve social change.” What then for AI4D?

In his keynote address to the Chaos Communication Congress in Leipzig in December 2017, he said that Elon Musk’s obsessive fear of the ‘paperclip maximiser’—a hypothetical AI built to maximise the number of paperclips in the universe—is already a reality. It’s called capitalism: “Corporations…have a common implicit paperclip-maximizer goal: to generate revenue.”

AI for good

Now what if we apply the paperclip maximiser to development? Ostensibly this could mean automated learning systems to course-correct us towards achieving Sustainable Development Goals (SDGs); if development is the corporation, then the SDGs are surely the ‘revenue targets’ of development.

These “AI for good” systems would give early warnings of natural disasters or regulate individual energy consumption to keep us on track for a maximum of +1.5ºC of global warming.

That is exactly the kind of work that participants in the ICT4D Conference and the NetHope webinar on AI and Machine Learning for Aid and Development are already doing.

Dr Pablo Suarez puts it succinctly: “AI can see what is there, and to do what the human brain would do in terms of recognising and providing information to make a decision later.”

The FUNES forecast-based financing for early action system.
Source: Red Cross Red Crescent Climate Centre presentation

With his Red Cross Red Crescent Climate Centre team, Dr Suarez worked on a machine learning algorithm for forecast-based financing for early action. In essence, it is a tool that integrates available data to learn about different environmental scenarios upstream of, downstream of and at the Nangbeto dam in Togo, and how those scenarios could affect communities living below the dam through flooding.

The system learns these scenarios, develops alert protocols for them and triggers a warning well before flooding occurs. That means that financing, the money to address the flooding, can be allocated ahead of a dam overspill.
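FUNES itself is not public code, but the general trigger logic described here is simple: release pre-agreed financing when a sufficiently confident forecast predicts the danger level being crossed with enough lead time to act. A minimal sketch, with thresholds, lead times and data that are purely illustrative assumptions rather than values from the actual system:

```python
# Minimal sketch of a forecast-based financing trigger.
# All thresholds, lead times and data below are illustrative assumptions,
# not values from the actual FUNES system.

from dataclasses import dataclass

DANGER_LEVEL_M = 6.5        # assumed river level at which downstream flooding starts
MIN_LEAD_TIME_HOURS = 72    # assumed lead time needed to evacuate and move assets


@dataclass
class Forecast:
    lead_time_hours: int    # how far ahead this forecast looks
    water_level_m: float    # predicted water level downstream of the dam
    confidence: float       # model confidence, 0..1


def should_trigger_early_action(forecasts: list[Forecast],
                                min_confidence: float = 0.7) -> bool:
    """Return True if any sufficiently confident forecast predicts the
    danger level being crossed with enough lead time to act."""
    return any(
        f.water_level_m >= DANGER_LEVEL_M
        and f.lead_time_hours >= MIN_LEAD_TIME_HOURS
        and f.confidence >= min_confidence
        for f in forecasts
    )


if __name__ == "__main__":
    forecasts = [
        Forecast(lead_time_hours=96, water_level_m=6.8, confidence=0.8),
        Forecast(lead_time_hours=24, water_level_m=7.1, confidence=0.9),
    ]
    if should_trigger_early_action(forecasts):
        print("Trigger: release pre-allocated funds and start the alert protocol.")
    else:
        print("No action: keep monitoring forecasts.")
```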

What is fascinating is that machine learning provides the fundamental connection between what is observed and what is expected, all within the digital platform, yet the Red Cross Red Crescent team then layered human interaction onto that system to provide a holistic response.

They engaged local radio stations, using the traditional warning gong of the local area in radio messages. They worked with village groups to plan evacuation routes, delegate responsibility for specific objects of value such as chickens and livelihood tools, and identify shelter options on high ground.

The result? Village communities were alerted ahead of overspills, were able to save their lives and salvage their valuables, and the hydropower company started working with the Red Cross Red Crescent to plan early releases with the community.

The FUNES forecast-based financing for early action system community awareness programmes.
Source: Red Cross Red Crescent Climate Centre presentation

Is everything prime for automation?

Dr Patrick Meier, co-founder of WeRobotics, would argue that there are certain demands that AI answers more effectively and efficiently than humans can.

A Pathways Commission background paper classifies these roles as manual or cognitive. Roles that require some form of inter-personal interaction or communication are cognitive and are harder to automate; roles built around routine manual tasks are prime for automation, particularly when large data sets are involved.

A WeRobotics project in Vanuatu after Cyclone Pam set out to do a needs assessment of post-cyclone damage to houses. It used drones to take aerial images of affected communities across the islands and analysed those images to create maps of the damage. One 20-minute flight can produce 800 images, which take six human hours to analyse. The damage then needs to be traced onto an area map to translate the data into action.

Pictures of damage on Vanuatu after Cyclone Pam and the crowd-sourced image analysis output on the needs assessment mapping. Source: blog.WeRobotics.org

When you consider that humanitarian drone missions can run to hundreds of flights to gather the required information, at six hours of analysis per flight that adds up to thousands of hours of human work. Yet in a post-disaster setting, humanitarian workers do not have the luxury of time. They need specific answers to specific questions, for example the number of seriously damaged houses in an area.

A machine learning model, once taught how to analyse aerial imagery, can do that work for disaster relief workers in a fraction of the time it would take a human. Meier still hopes that there will be an app for that one day. And it seems that day may be closer with the recent WeRobotics and Picterra partnership.
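The actual WeRobotics and Picterra tooling is of course far more sophisticated, but the core idea is simple: train a classifier on image tiles that analysts have already labelled, then let it label new tiles automatically. A minimal sketch, using synthetic data in place of real drone imagery and purely illustrative parameters:

```python
# Toy sketch: classifying aerial image tiles as "damaged" vs "intact".
# This is NOT the WeRobotics/Picterra pipeline; it only illustrates the
# general idea, with synthetic data standing in for real drone imagery.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each 32x32 RGB tile has already been flattened into a feature vector.
# In a real pipeline these would come from drone image tiles labelled by analysts.
n_tiles, n_features = 2000, 32 * 32 * 3
X = rng.random((n_tiles, n_features))
y = rng.integers(0, 2, size=n_tiles)   # 1 = damaged roof, 0 = intact (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a simple classifier on the labelled tiles.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Once trained, the model can label thousands of new tiles in seconds,
# which is the step that otherwise costs analysts hours per flight.
print(classification_report(y_test, clf.predict(X_test)))
```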

The dark side of AI

But are we ignoring the dark side in lauding the transformative potential of AI in certain applications for development?

Yunus draws a comparison between the highly regulated pharma sector and the unregulated technology sector: both now have life-changing potential, but only pharma is deemed dangerous when unregulated. Stross considers this “reflexive indignation at any criticism of technology and progress” counter-productive to the very progress we are trying to achieve. Quite simply, without recognising the limits of AI’s potential for good, we are letting it slip quickly towards the bad.

USAID’s Aubra Anthony warns that we need to keep our eyes open to the pitfalls of AI: data privacy, bias inherent in design and build, and more. Technologies have the potential to “hard code” inequality into their applications; used in a development context, they can become dangerous tools that reinforce bias, which runs counter to the mission.

What then for the future of development and development workers?

Are we all destined to be turned into paperclips by an AI brain-in-a-box?

There is no short answer. But what is clear is that we need to focus on the challenges—as Stross says, “we need to work out a general strategy for getting on top of this sort of AI before they get on top of us.”

We know that these technologies can do, and are doing, good; but we need regulation to ensure that they are developed for inclusion, equality and fairness.

They have the potential to provide systems that help development workers perform their roles better, faster and with more accuracy. They can amplify impact on the ground, both in short-term disaster settings and in longer-term behaviour change programmes. But we need to plan now for the way AI will develop over the next decade, to ensure no harm is done, now or to future generations.

In the words of Yunus—they need to understand that they work for us. Who is going to teach them that?