
Artificial intelligence (AI) has engendered concerns and calls for regulation. This article briefly surveys several such concerns, suggestions for regulation, and a few recommendations that commentators have offered for dealing with the rapid advances in AI.
To begin, what is AI? Answers to this question are debatable, but we will adopt the general view that AI is non-human intelligence measured by its ability to replicate human mental skills and performance (1). As simple as this definition might seem, AI itself has raised several pressing issues.
Some immediate concerns include AI turning machines against humanity, the production of devastating weapons and “killing machines”, and the loss of jobs as machines increasingly replace people, all of which could cause “dramatic and unpredictable changes for humanity” (2). The Oxford philosopher Nick Bostrom believes that just as humans out-competed and almost completely eliminated gorillas, AI will outpace human development and ultimately dominate. A group at Oxford University warned that,
“[E]xtreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime)… the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts” (3).
Already some machines operate with a measure of autonomy, especially in weapon systems. Although some weapons, like Israel’s Iron Dome, require a level of human command, certain devices can function and complete tasks without human assistance. South Korea, for example, maintains a sentry robot near the demilitarized zone with North Korea that is capable of this latter level of autonomy.
AI can also be unpredictable and manipulated. Tay, an AI chatbot on Twitter, for example, had to be pulled by Microsoft after it started spewing racist, sexist, and anti-Semitic remarks, such as “HITLER DID NOTHING WRONG” (4). There was debate over whether Tay’s offensive responses were the result of its own internal design or of the influence of the Twitter users, or ‘trolls’, it was programmed to imitate.
What if an AI machine malfunctioned and turned on its creators? Although this might look like a deliberate act by a machine with a vendetta against its makers, it would not be, because the machine would not even know what it was doing. A recent document from UNESCO notes how high-functioning “AI such as AlphaGo or Watson can perform impressively without recognizing what it is doing” and that “AlphaGo defeated a number of Go masters without even knowing that it was playing a human game called Go” (5).
Concerns over AI have led robotics and AI professionals to sign an open letter, presented at the 2015 International Joint Conference on Artificial Intelligence, calling for the United Nations to halt the development of weaponized AI that could operate “beyond meaningful human control.” The letter was signed by several high-profile figures, including Stephen Hawking, Elon Musk, and Noam Chomsky, as well as many leading researchers in the field of AI itself.
There is also the threat of AI falling into the hands of enemies, such as terrorists, who could turn advanced machines of war on innocent citizens. The motivations of developers will be programmed into their AI systems, which will reveal much about the developers themselves (6). As Gilli and Kelly write, “Machines do what coders write on the basis of the data they have access to. Such codes and data implicitly or explicitly reflect broader, and potentially contentious, philosophical, ethical, moral, cultural and legal stances” of their developers (7). Also concerning to some is that the development of AI often happens in secret, within large companies, militaries and defense forces, and governments.
These developments and concerns lie behind suggestions for regulation. Regulation is meant to offset the singularity, a concept popularized by the inventor and futurist Ray Kurzweil, referring to a point at which progress in machine intelligence approaches runaway growth. Regulation is extremely difficult, however, because AI is incorporated into an enormous number of machines that humans use and that make life easier (everything from planes, computers, GPS devices, and weapons of war to machine pets, social media algorithms, security cameras, medical operating machines, and much more), and because the industry’s work is carried out by business people, government employees, and academics across many countries.
Nonetheless, recommendations have been offered, especially concerning fully autonomous AI machines. One recommendation is to build into AI a hierarchy of control, so that a machine’s intelligence does not stem from only one source,
“We suggest that what is needed, in addition, is a whole new AI development that is applicable to many if not all so-called smart technologies. What is required is the introduction into the world of AI the same basic structure that exists in practically all non-digital systems: a tiered decision-making system. On one level are the operational systems, the worker bees that carry out the various missions. Above that are a great variety of oversight systems that ensure that the work is carried out within specified parameters. Thus, factory workers and office staff have supervisors, businesses have auditors, and teachers have principals. Oversight AI systems – we call them AI Guardians – can ensure that the decisions made by autonomous weapons will stay within a predetermined set of parameters. For instance, they would not be permitted to target the scores of targets banned by the US military, including mosques, schools, and dams. Also, these weapons should not be permitted to rely on intelligence from only one source” (8).
A related recommendation is that both the first-line and oversight systems in AI machines remain under the control of humans, who would have the capacity to shut down operational and oversight AI systems alike, as well as to operate them manually: for example, “shutting down all killing machines when the enemy surrenders, or enabling a driverless car to speed if the passenger is seriously ill” (9).
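To make the tiered structure concrete, the following is a minimal sketch of how an oversight layer and a human override might wrap an operational system. Every class name, parameter, and banned-target category here is a hypothetical illustration based on the examples quoted above, not an existing system or API.

```python
# A minimal sketch of a tiered "AI Guardian" architecture: an operational
# system proposes actions, an oversight layer vets them against fixed
# parameters, and a human tier can shut everything down. All names and
# parameters are hypothetical illustrations, not an existing API.

from dataclasses import dataclass

# Target categories the oversight layer refuses to approve (illustrative).
BANNED_TARGETS = {"mosque", "school", "dam"}

@dataclass
class Action:
    target: str
    sources: int  # number of independent intelligence sources

class OperationalSystem:
    """First-line 'worker bee' that proposes actions."""
    def propose(self, target: str, sources: int) -> Action:
        return Action(target=target, sources=sources)

class AIGuardian:
    """Oversight layer: checks proposals against predetermined parameters."""
    def approve(self, action: Action) -> bool:
        if action.target in BANNED_TARGETS:
            return False  # parameter: banned target categories
        if action.sources < 2:
            return False  # parameter: never rely on a single source
        return True

class HumanController:
    """Top tier: a human who can halt both lower tiers at any time."""
    def __init__(self) -> None:
        self.shutdown = False

    def execute(self, op: OperationalSystem, guardian: AIGuardian,
                target: str, sources: int) -> str:
        if self.shutdown:
            return "halted by human override"
        action = op.propose(target, sources)
        if not guardian.approve(action):
            return f"blocked by guardian: {action.target}"
        return f"executed: {action.target}"

controller = HumanController()
print(controller.execute(OperationalSystem(), AIGuardian(), "school", 3))
# -> blocked by guardian: school
controller.shutdown = True  # e.g. the enemy surrenders
print(controller.execute(OperationalSystem(), AIGuardian(), "depot", 3))
# -> halted by human override
```

The design point is that no single layer, and no single intelligence source, is trusted on its own, and the human tier can disable everything beneath it.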
There is also the pressing issue of job loss, where AI is likely to have a major, worldwide, transformative effect. AI is steadily replacing people everywhere, from low-skilled jobs to the more high-end professions,
“There is strong evidence that the cyber revolution, beginning with the large-scale use of computers and now accelerated by the introduction of stronger AI, is destroying many jobs: first blue-collar jobs (robots on the assembly line), then white-collar ones (banks reducing their back office staff), and now professional ones (legal research). From 2000 to 2010, 1.1 million secretarial jobs disappeared, as did 500,000 jobs for accounting and auditing clerks. Other job types, such as travel agents and data entry workers, have also seen steep declines due to technological advances. The legal field has been the latest victim, as e-discovery technologies have reduced the need for large teams of lawyers and paralegals to examine millions of documents” (10).
Thomas K. Grose cites a study from Ball State University that found 85% of manufacturing job losses were caused by technology (11). Further, Britain’s Oxford Martin School predicted in a 2013 study that 47% of all U.S. employment is at risk from automation within the next two decades. In the future, some 60% of occupations could see a third or more of their activities automated. Top-end workers (engineers, CEOs, scientists) are likely to be fine, although pilots and lawyers could lose out, but middle-income workers (paralegals, telemarketers, cashiers, etc.) much less so. Self-driving vehicles will eventually replace many taxi, bus, and truck drivers.
The hope here is that as people are replaced by machines, new jobs will emerge and be created. Naren Ramakrishnan, a Professor of Engineering in the Department of Computer Science at Virginia Tech, is somewhat optimistic, saying that “Automation creates new opportunities, often in ways that are difficult to predict. There will also be jobs that don’t exist” (12). For instance, people will likely be needed to manage traffic command and control systems once fleets of autonomous cars hit the roads.
But others are more pessimistic. Daniel Bliss, an associate professor of electrical engineering at Arizona State University, thinks “We are not ready as a society to cope with automation at the rate at which it’s changing” (13). Bliss, who is involved in research to develop automated vehicles, suggests that “we are not doing enough to help the losers… humans cannot keep up with the transition.”
Humanity increasingly lives in an age in which a piece of software, written by a handful of programmers, can perform work previously carried out by several hundred thousand people. This is the problem of “economic Armageddon” as machines increasingly take over. One recommendation for dealing with it is for nonpartisan organizations to study how AI is contributing to job loss and how best to cope with an accelerating cyber revolution. There are also untested options that,
“[I]nclude guaranteeing everyone a basic income (in effect, a major extension of the existing Earned Income Tax Credit); shorter work weeks (as France did but is now regretting); a six-hour workday (which many workplaces in Sweden have introduced to much acclaim); and taxes on overtime – to spread around whatever work is left” (14).
A final issue concerns privacy. AI collects and stores data, which raises several pertinent questions: Whose data is being collected and stored? Why is the data being collected in the first place? And how will the data be used? AI thus becomes a human rights issue, especially since data gathering for AI systems is becoming ever easier. Taking this into account, protective regulations increasingly require explicit consent before an organization can gather and store data about individuals (15).
“The other known challenge posed by new information technologies is privacy. The fact that Big Data is not protecting the privacy of individuals is of serious concern. This is why cybersecurity is such an important issue today. Cybersecurity needs to protect the individual, as well as organizations, as Big Data increases in size by ensuring that there are experts thinking about how to build firewalls around privacy. Being that every American adult already has more than 5,000 (!) data points of information stored about them is even more reason to be concerned (Cambridge Analytica, 2016)” (16).
There is also the threat of hacking and the exposure of personal information, banking details, and more,
“Moreover, as people become more and more dependent on technology, there is no way of protecting oneself from being compromised. This is especially so since people are not well-informed just how much of their biometric information is being shared with other entities.”
One way to mitigate this is through “anonymisation of data to prevent any malevolent actor retrieving the actual identity of a certain population” (17). Also crucial is cybersecurity, which must serve as a central safeguard of personal information against cyber-attacks. As cyber threats become more sophisticated, politics and technology must intersect to address both known and unknown threats.
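As a simplified illustration of the anonymisation idea, the sketch below pseudonymizes records by replacing direct identifiers with salted one-way hashes. The record fields are hypothetical, and pseudonymization alone is weaker than true anonymization, which must also guard against re-identification through quasi-identifiers such as age or postcode.

```python
# A minimal sketch of pseudonymization, one simple step toward data
# anonymisation: direct identifiers are replaced with salted one-way
# hashes so records can be linked without revealing who they describe.
# Field names are hypothetical illustrations.

import hashlib
import secrets

SALT = secrets.token_bytes(16)  # kept secret; discard it to break linkage

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "diagnosis": "flu"}
safe_record = {
    "subject_id": pseudonymize(record["email"]),  # stable pseudonym
    "diagnosis": record["diagnosis"],             # retained attribute
}
print(safe_record)
```

Discarding the salt permanently severs the link between pseudonyms and real identities, one simple way to make it harder for a malevolent actor to retrieve whom a record describes.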
There is now a strong possibility that humanity will soon need to adapt to a life in which robots become the main working class, and people spend more time with their children and families, friends and neighbors, in community activities, and in spiritual and cultural pursuits.
References
1. De Spiegeleire, Stephan, Maas, Matthijs, and Sweijs, Tim. 2017. “What is Artificial Intelligence?” In Artificial Intelligence and the Future of Defense: Strategic Implications for Small- and Medium-Sized Force Providers, edited by Matthijs Maas, Stephan De Spiegeleire, and Tim Sweijs, 25-42. The Hague Centre for Strategic Studies. p. 28.
2. Etzioni, Amitai, and Etzioni, Oren. 2017. “Should Artificial Intelligence Be Regulated?” Issues in Science and Technology 33(4):32-36. p. 32.
3. Etzioni, Amitai, and Etzioni, Oren. 2017. Ibid. p. 32.
4. Anon. 2016. “Artificial Intelligence: Hit and Amiss.” ASEE Prism 25(8). p. 15.
5. Gilli, Andrea, Pellegrino, Massimo, and Kelly, Richard. 2019. “Intelligent machines and the growing importance of ethics.” In The Brain and the Processor: Unpacking the Challenges of Human-Machine Interaction, edited by Andrea Gilli, 45-54. p. 49.
6. Berke, Allison. 2016. “The Future of Artificial Intelligence Reviewed.” Strategic Studies Quarterly 10(3):114-118. p. 117.
7. Gilli, Andrea, Pellegrino, Massimo, and Kelly, Richard. 2019. Ibid. p. 45.
8. Etzioni, Amitai, and Etzioni, Oren. 2017. Ibid. p. 34.
9. Etzioni, Amitai, and Etzioni, Oren. 2017. Ibid. p. 35.
10. Etzioni, Amitai, and Etzioni, Oren. 2017. Ibid. p. 35.
11. Grose, Thomas K. 2017. “Replaced by Machines.” ASEE Prism 26(7):30-33. p. 31.
12. Grose, Thomas K. 2017. Ibid. p. 32.
13. Grose, Thomas K. 2017. Ibid. p. 32.
14. Etzioni, Amitai, and Etzioni, Oren. 2017. Ibid. p. 36.
15. Gilli, Andrea, Pellegrino, Massimo, and Kelly, Richard. 2019. Ibid. p. 51.
16. Rendsburg, Melissa. 2019. “The Impact of Artificial Intelligence on Religion: Reconciling a New Relationship with God.” Cyber Security and Artificial Intelligence, 1-27. p. 13.
17. Gilli, Andrea, Pellegrino, Massimo, and Kelly, Richard. 2019. Ibid. p. 51.