AI Is Closer than Ever Before – What Humanity Should Keep in Mind

Evolution from the brain to artificial intelligence will be more radical than from monkey to human.

Nick Bostrom
philosopher and guru in the field of artificial intelligence

The nudged.info blog you are reading has touched upon and highlighted many aspects of life that are affected by Big Data and shaped by the growing power of artificial intelligence (AI) technologies. Being part of this blogging family, I found that much of the material that came onto the agenda raised even more questions than existed before this journey began.

The intention of this post is to collect the relevant evidence on where humanity stands today with Big Data and artificial intelligence, to understand whether those innovations serve humanity equally, and to identify the potential dangers to keep in mind and try to prevent. It also aims to leave a basis for further discussion and research for anyone interested in the same theme.

Big Data makes Humanity excited

Definitions of ‘Big Data’ vary across industries such as information technology (IT), computer science, marketing, social media, communication, data storage, analytics, and statistics. For Kolb and Kolb this is ‘because people use the term to mean a wide variety of things to suit their purposes’ (Spratt and Baker, 2016, p. 9). The term ‘Big Data’ has its origins in the late 1990s (Mashey, 1998), but only became widespread around 2009. Western epistemic centres – especially those traditionally considered the interpreters and storytellers of technological development – have produced hyperbolic narratives of the ‘Big Data revolution’ (Mayer-Schönberger and Cukier 2013).

Mayer-Schönberger and Cukier (2013) note that Big Data refers to things one can do at a large scale that cannot be done at a smaller one, to extract new insights or create new forms of value, in ways that change markets, organizations, the relationship between citizens and governments, and more.

Spratt and Baker, in their report on Big Data and International Development, note that it is important to distinguish Big Data from two related concepts: information and communications technology (ICT) and ‘open data’. For Osterwalder (2002), ‘ICT encompasses all the technology that facilitates the processing, transfer and exchange of information and communication services’. ICT for Development (ICT4D), therefore, is about how ICTs ‘can be used to help poor and marginalized people and communities make a difference to their lives’ (Unwin 2009). While Big Data is built upon ICT infrastructure, it is less concerned with the data being exchanged than with the value that the data bring (Spratt and Baker, 2016, p. 8).

Datafication – the production of data – has been compared to electrification, suggesting that many businesses today are coming to rely on ‘data’ as they do on electricity (Bertolucci 2013). This relatively new term has entered all levels of modern life and already reaches into its most important aspects. Big Data, as Spratt and Baker continue, will be the fuel that drives the next industrial revolution, radically reshaping economic structures and employment patterns and reaching into every aspect of economic and social life. Together, Big Data and AI are two remarkable modern technologies that power machine learning, continuously iterate on and update data banks, and draw on human intervention and recursive experiments along the way.

Current Status

AI is a fairly young field of research, founded in 1956, but today it is used in many gadgets, such as smartphones and smart thermostats. Moreover, AI is increasingly used to solve social problems. AI is the ability of a machine to learn, think, act and adapt to the real world, expanding human capabilities and automating energy-intensive or dangerous tasks. According to some experts, artificial intelligence has the potential to radically change the life of society as a whole – and it is already doing so.

The USAID 2018 report recaps that computers that make data-derived predictions and automate decisions have become part of daily life for billions of people (USAID 2018, p. 4).

Ubiquitous digital services such as interactive maps, tailored advertisements, and voice-activated personal assistants are likely only the beginning. Some AI advocates even claim that AI’s impact will be as profound as “electricity or fire”, and that it will revolutionize nearly every field of human activity. This enthusiasm has reached international development as well. Emerging AI applications promise to reshape healthcare, agriculture, and democracy in the developing world. AI shows tremendous potential for helping to achieve sustainable development objectives globally: it can improve efficiency by automating labour-intensive tasks, or offer new insights by finding patterns in large, complex datasets. A recent report suggests that AI advances could double economic growth rates and increase labour productivity by 40% by 2035.

Already today, AI actively supports human beings in areas such as medical diagnostics, home safety through smart alarm systems, safer cars, virtual assistants, automation of manual labour, cheaper production, and more. Unfortunately, as our AI capabilities expand, we will also see them used for dangerous or malicious purposes – and many influential and involved people are already reflecting on this.

Threat to Survival

Moral and ethical questions are knocking at the door. Suppose scientists create an advanced AI that replaces humans in literally all areas. But would such a program have the morality customary in human society? A ‘pure artificial intelligence’ understands and solves issues like a real person, but with a competitive advantage: it is devoid of human emotions.

Big Data could also have negative effects: the new and desirable jobs and skills it creates could keep less educated members of a workforce from upward mobility, as Spratt and Baker continue (Spratt and Baker, 2016, p. 11).

In the case of developing countries, this could contribute to the reinforcement of a ‘digital divide’ in which the poor are disadvantaged because of their lack of access to technology. For those who are able to gain employment in this space, Big Data could prove to be a source of ‘brain drain’ in low-income countries, as has already been the case in certain other fields.

Experts’ Reflections

A ‘weak AI’ can, of course, deprive a person of a job, but it is unlikely to threaten the survival of Homo sapiens as a species. The main threat, experts say, is an advanced, full-fledged artificial intelligence that will simply have no need for humans. A rather frightening forecast was made in 2015 by the famous British theoretical physicist Stephen Hawking, who said that machines would prevail over humans within a hundred years.

Within the next 100 years, artificial intelligence will surpass humans. And before this happens, we must do everything so that the goals of the machines coincide with ours.

said Hawking

The researcher was concerned about the short-sightedness of AI developers. As one of the nudged.info authors has reflected already, Cathy O’Neil, the author of ‘Weapons of Math Destruction’, criticizes that data scientists tend to be disconnected from the people affected by their code. “So many of the data scientists that are in work right now think of themselves as technicians and think that they can blithely follow textbook definitions of optimization, without considering the wider consequences of their work”, she said in a 2018 interview with Wired. Hawking, for his part, named the reason for AI’s superiority over Homo sapiens: in his opinion, humans evolve very slowly, while machines can self-improve incredibly quickly.

“So many of the data scientists that are in work right now think of themselves…”

Hawking’s fears are shared by Microsoft founder Bill Gates. “I am in the camp that is concerned about the prospect of developing superintelligence. At first, the machines will do most of the work for us, but they will not have superintelligence. It’s good if we manage it properly. In a few decades, artificial intelligence will be sufficiently developed to become a cause for concern,” he said in 2015.

Surprisingly, Elon Musk, the leading innovator of our time who, it would seem, should see only advantages in new technologies, speaks about the threat of AI as well. Two years ago he asked us to mark his words: “AI is far more dangerous than nukes. Artificial intelligence is progressing very rapidly. You have no idea how much”.

“AI is far more dangerous than nukes.”

And, ironically, in autumn 2017 the social robot Sophia was granted citizenship of Saudi Arabia, after participating in the Global Summit ‘AI for Good’ some months earlier. In March 2016, Sophia’s creator, David Hanson of Hanson Robotics, asked Sophia during a live demonstration at the SXSW festival, “Do you want to destroy humans? …Please say ‘no.’” With a blank expression, Sophia responded, “OK. I will destroy humans.” Hopefully, Sophia was given a tune-up after that, especially since she went on to receive citizenship and recognition.

“OK. I will destroy humans.”

Failure Areas

Besides endless quotes from futurists, technologists and innovators, terabytes have been written on the potential failures the technology might bring upon humanity. The USAID 2018 report identifies the most common failure points of AI and machine learning that researchers are working to better understand and mitigate. Those points are (USAID 2018, pp. 37-38):

  • Fair, but inaccurate: Some prediction tasks are just really difficult, and models may not end up being very accurate. Such models can still be useful, especially if the previous decision method wasn’t any better. It’s also possible for them to be fair in the sense that they are equally inaccurate for everyone.
  • Less accurate for minority groups: Sometimes the relationships that are used to make predictions will be different for minority groups than for the majority population. Models that do not account for this may have impressive performance for the population as a whole, but exhibit high error rates for the minority group. For example, winning entries in a recent competition to detect buildings from satellite images achieved 89% accuracy for images of Las Vegas, but only 42% for images of Khartoum, Sudan.
  • Uneven error balance: “Accuracy” can be broken down into different types of errors – for example, false positives or false negatives. If a model predicts loan repayment, false positives are cases where a borrower was predicted to repay, but then defaulted. If the model predicted non-payment but the loan was repaid, then the error is a false negative. It is possible for a model to have similar accuracy across two sub-populations, but for the balance of false positives and false negatives to change between different groups. A model that grants more false positives to one population and more false negatives to another creates an uneven playing field and systematically disadvantages one group (a short numeric sketch after this list makes this concrete).
  • Reproducing existing inequities: Training data used in machine learning are always data about the past. If we aim to change an unjust status quo, predictions based on what happened in the past might be unhelpful, even if they are highly accurate. For example, if women have traditionally faced discrimination in hiring, then an algorithm that scores resumes based on past hiring records will discriminate against women.
  • Doubling down on bias: In many cases, the quantity we’d like to model isn’t available and we must settle for a related value, known as a proxy. Maybe we’re interested in actual levels of crime committed but only have data about arrests. Or we’d like to predict disease rates but only have data about hospitalizations. If the alignment between the “real” outcome of interest and the proxy isn’t perfect, then models can develop blind spots.
  • Model drift: Another potential problem with modelling based on the past is that the real world changes. Models that infer human behaviour from mobile call detail records can be upended by changes in billing plans or service improvements. A model to predict flu cases based on Google searches eventually lost its accuracy, in part due to improvements in the search interface.
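To make the “uneven error balance” point concrete, here is a minimal Python sketch with purely made-up numbers – the groups, predictions and figures below are hypothetical illustrations, not data from the USAID report. It shows how two groups can share the same headline accuracy while one group absorbs all the false positives and the other all the false negatives.

# Minimal illustrative sketch (hypothetical data): identical accuracy,
# opposite error types. Label 1 = loan repaid, 0 = defaulted.

def error_profile(y_true, y_pred):
    """Return accuracy, false positive rate and false negative rate."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted repay, actually defaulted
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted default, actually repaid
    accuracy = (tp + tn) / len(y_true)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # defaulters wrongly approved
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # repayers wrongly rejected
    return accuracy, fpr, fnr

# Two hypothetical sub-populations with identical true outcomes.
group_a_true = [1, 1, 1, 1, 0, 0, 0, 0]
group_a_pred = [1, 1, 1, 1, 1, 1, 0, 0]  # both errors are false positives
group_b_true = [1, 1, 1, 1, 0, 0, 0, 0]
group_b_pred = [1, 1, 0, 0, 0, 0, 0, 0]  # both errors are false negatives

for name, t, p in [("A", group_a_true, group_a_pred), ("B", group_b_true, group_b_pred)]:
    accuracy, fpr, fnr = error_profile(t, p)
    print(f"group {name}: accuracy={accuracy:.2f}, FPR={fpr:.2f}, FNR={fnr:.2f}")
# group A: accuracy=0.75, FPR=0.50, FNR=0.00
# group B: accuracy=0.75, FPR=0.00, FNR=0.50

Both groups come out at 75 per cent accuracy, yet group A is systematically over-approved and group B systematically over-rejected – exactly the uneven playing field described above.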

Forbes offers further food for thought, describing how AI might bring the biggest dangers when it comes to autonomous weapons, manipulation through social media, social grading, misalignment between human goals and the machines’, and discrimination.

AI does Discriminate

“When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution … Every technology carries its own negativity, which is invented at the same time as technical progress”

Paul Virilio (quoted in Stowe, 2018)

A 2015 study showed that in a Google images search for “CEO”, just 11 per cent of the people it displayed were women, even though 27 per cent of the chief executives in the US are female. A few months later, a separate study led by Anupam Datta at Carnegie Mellon University in Pittsburgh found that Google’s online advertising system showed high-income jobs to men much more often than to women.

PredPol, used in several US states, is an algorithm designed to predict when and where crimes will take place, with the aim of helping to reduce human bias in policing. But in 2016, the Human Rights Data Analysis Group found that the software could lead police to unfairly target certain neighbourhoods. When researchers applied a simulation of PredPol’s algorithm to drug offences in Oakland, California, it repeatedly sent officers to neighbourhoods with a high proportion of people from racial minorities, regardless of the true crime rate in those areas.

COMPAS is an algorithm widely used in the US to guide sentencing by predicting the likelihood of a criminal reoffending. In perhaps the most notorious case of AI prejudice, in May 2016 the US news organisation ProPublica reported that COMPAS is racially biased. According to the analysis, the system predicts that black defendants pose a higher risk of recidivism than they do, and the reverse for white defendants.

In October 2017, police in Israel arrested a Palestinian worker who had posted a picture of himself on Facebook posing by a bulldozer with the caption “attack them” in Hebrew. Only he hadn’t: the Arabic for “good morning” and “attack them” are very similar, and Facebook’s automatic translation software chose the wrong one. The man was questioned for several hours before someone spotted the mistake.

In February 2018, Joy Buolamwini at the Massachusetts Institute of Technology found that three of the latest gender-recognition AIs, from IBM, Microsoft and the Chinese company Megvii, could correctly identify a person’s gender from a photograph 99 per cent of the time – but only for white men. For dark-skinned women, accuracy dropped to just 35 per cent.

One difference between natural (human) intelligence and artificial intelligence is that humans absorb and process data (especially visual data) in the context of the surrounding environment. If the interpretation of the data (for example, identifying an image) doesn’t fit the context of the situation, a human can recognize that something is not quite right. Presently, ML and AI systems lack the ability to recognize whether the “answer” the machine arrives at agrees with the context (Spratt and Baker, 2016, p. 11).

Lund University research on machine bias (‘Artificial Intelligence and Discrimination’) preliminarily recommends that policymakers cautiously regulate artificial intelligence in order to prevent AI discrimination. The findings lead the author to hypothesize that some of the most important AI applications tend to discriminate against the most discriminated groups; that confusion around the legal liability for AI’s actions and the non-transparent use of AI encumber one’s ability to bring discrimination claims before courts; and that, since AI is used in diverse domains, many of them remain largely or entirely unregulated (lacking transparency, accountability, standards, audits, etc.), which multiplies the problems.

Ethics at your Service

Earlier in our discussion there was a reference to Cathy O’Neil, the author of “Weapons of Math Destruction”, who shared in an interview with Wired that many of the data scientists at work right now “think of themselves as technicians and think that they can blithely follow textbook definitions of optimization, without considering the wider consequences of their work”.

Complex innovation is an effort of many knowledgeable experts, each contributing their small slice of the pizza to the whole. It is therefore crucial to ensure that mindfulness is in place. Hopefully, proper structures will arrive in time to create the needed regulations and frameworks, so that uses of technological developments to the detriment of humanity remain marginal and rare.

The European Economic and Social Committee (EESC) published in 2017 the study “The ethics of Big Data: Balancing economic benefits and ethical questions of Big Data in the EU policy context”, which explored the ethical dimensions of Big Data in an attempt to balance them with the need for economic growth within the EU. The study highlights ethical issues connected with Big Data, proposes actions as tools to strike that balance, and presents results from interviews with a number of key stakeholders. It is interesting to note that the respondents interviewed during the study expressed more worry than optimism about the current Big Data scenario. The elements gathered at the end of the study provide a framework of the policies that have the best chance of being implemented in the short and medium term and that can have the most relevant impact on society. This is exactly the scenario needed in all modern societies that actively implement these technologies in daily operations.

Last year the EESC published a position paper on “Artificial Intelligence for Europe”. Besides highlighting the main points and positions around AI in Europe today, the paper argues that promoting an informed and balanced public debate on AI, involving all relevant stakeholders, is crucial. A human-in-command approach to AI should be guaranteed, in which the development of AI is responsible, safe and useful, machines remain machines, and people retain control over those machines at all times. The EU should take the global lead in establishing clear policy frameworks for AI, in line with European values and fundamental rights. The paper concludes that the ethical guidelines on AI to be prepared by the Commission’s High-Level Expert Group on AI should include principles of transparency in the use of AI systems to hire employees and to assess or control their performance. They must also safeguard rights and freedoms with regard to the processing of workers’ data, in accordance with the principle of non-discrimination.

In light of the above, I would like to recall the case of academician Andrei Sakharov: an extreme technologist and the creator of the deadliest weapons, who came to realize the inhumanity of his work and advocated a different path for the development of mankind, based on discussion and compromise. If today’s weapons developers do not receive deep enough humanitarian and ethical training, they will not be able to soberly evaluate the final outcomes of their work (how many people could be killed, or what damage it might cause). The same, of course, applies to artificial intelligence.

Conclusion

This post has collected for you the current status of Big Data and artificial intelligence usage. You have become familiar with the main experts’ predictions and with the main potential threats that the abuse of new technologies might bring to humanity. The cases collected also show that artificial intelligence already discriminates today and confirm the improvements needed. Last but not least, there was a discussion of how ethics could serve humanity in regulating these issues and preventing the potential threats the technology could bring.

Personally, I would add that anything can be misused, whether it is Big Data, artificial intelligence, another powerful technology or something else. Today, Big Data and artificial intelligence in particular are used for many good causes, and at this very moment there is no reason to think that “the robots have conceived something bad against humans”. However, it should be kept in mind that as our AI capabilities expand, we will also see them used for dangerous or malicious purposes. Since AI technology is advancing so rapidly, it is vital that we start debating how best to develop AI positively while minimizing its destructive potential.

Finally, the main enemy of Homo sapiens is not a machine, not a natural disaster, and not even aliens from other worlds. The main danger to a human is another human, pursuing his or her own ends with neither ethical nor moral barriers. And the benefit or harm of artificial intelligence will depend entirely on how people themselves use the new achievements of science and technology.

p.s. This course and these exercises in the form of posts gave a great opportunity to pause for a while, dig into the actually existing facts and bring them into the discussion. Even though there was hardly enough time to build proper cooperation with the audience, those short discussions and comments prompted many different reflections. I enjoyed the course and hope that my contribution can be useful to anyone visiting our blog once in a while.

*Pictures retrieved from https://unsplash.com – free photos for everyone

References

* Bertolucci, J. (2013). Big Data’s New Buzzword: Datafication. Information Week. Retrieved from www.informationweek.com/big-data/big-data-analytics/big-datas-new-buzzworddatafication/d/d-id/1108797 on 20 March 2020

* Cossins, D. (2018). Discriminating algorithms: 5 times AI showed prejudice. New Scientist, April 2018. Retrieved from https://www.newscientist.com/article/2166207-discriminating-algorithms-5-times-ai-showed-prejudice/ on 20 March 2020

* Mashey, J. (1998). Big Data … and the Next Wave of InfraStress. Usenix. Retrieved 20 March 2020

* Mayer-Schönberger, V., Cukier, K. (2013). Big Data: A Revolution That Will Transform How We Live, Work, and Think. London: John Murray Publishers.

* Milan, S., Trere, E. (2019). Big Data from the South(s): Beyond Data Universalism. Television & New Media.

* Spratt, S., Baker, J. (2016). Big Data and International Development: Impacts, Scenarios and Policy Options. Brighton: IDS.

* Taylor, L., Schroeder R. (2015). Is bigger better? The emergence of Big Data as tool for international development policy. GeoJournal 80, 503-528.

* USAID (2018). Reflecting the Past, Shaping the Future: Making AI Work for International Development. Washington, DC: USAID.

 
