16 Retrospect and Prospect

16.2 Social and Ethical Consequences

As the science and technology of AI mature, smart artifacts are being deployed at an accelerating rate. Their widespread deployment is having, and will have, profound ethical, psychological, social, economic, and legal consequences for human society and for our planet. Here we can only raise, and skim the surface of, some of these issues. Artificial autonomous agents are, in one sense, simply the next stage in the development of technology. In that sense, the normal concerns about the impact of technological development apply; in another sense, the new technologies represent a profound discontinuity.

Autonomous agents perceive, decide, and act on their own. This is a radical, qualitative change in technology and in our image of technology. This development raises the possibility that these agents could take unanticipated actions beyond human control. As with any disruptive technology, there will be substantial positive and negative consequences – many that will be difficult to judge and many that humans simply will not, or cannot, foresee.

As an example, autonomous vehicles are being developed and deployed. Thrun [2006] presents an optimistic view of autonomous vehicles. The positive impact of having intelligent cars and trucks could be enormous. There is the safety aspect of reducing the annual carnage on the roads: an estimated 1.2 million people are killed, and more than 50 million are injured, in traffic accidents each year worldwide. Vehicles could communicate and negotiate at intersections. Besides the consequent reduction in accidents, there could be up to three times the traffic throughput. The improvements in road usage efficiency come both from smarter intersection management and from platooning effects, whereby automated, communicating vehicles can safely follow each other very closely, because they can communicate their intentions before acting and they react much more quickly than people do. This increase in road utilization has potential positive side effects. It not only decreases the capital and maintenance cost of highways, but also offers ecological savings: using existing highways more efficiently avoids paving over farmland. Elderly and disabled people would be able to get around on their own. People could dispatch their vehicles to a parking warehouse autonomously and recall them later. Individual car ownership could become mostly obsolete: people could simply order up the most suitable vehicle for each trip. Automated warehouses could store vehicles more efficiently than surface parking does. Much of the current paved space in urban areas could be used for playgrounds, housing, or even urban farms. The rigid distinction between private vehicles and public transit could dissolve.

On the other hand, experimental autonomous vehicles are seen by many as precursors to robot tanks, military cargo movers, and automated warfare. Although there may be, in some sense, significant benefits to robotic warfare, there are also very real dangers. In the past, these were only the nightmares of science fiction. Now, as automated warfare becomes a reality, we have to confront those dangers.

So there are two radically different, but not inconsistent, scenarios – one optimistic, one pessimistic – for the outcomes of the development of autonomous vehicles. This suggests the need for wise ethical consideration of their use. The stuff of science fiction is rapidly becoming science fact.

AI is now mature, both as a science and, in its technologies and applications, as an engineering discipline. Many opportunities exist for AI to have a positive impact upon our planet’s environment. Computational sustainability is an emerging discipline studying how computational techniques, including AI, can be used to improve planetary sustainability in the ecological, economic, and social realms. AI researchers and development engineers have some of the skills required to address aspects of global warming, poverty, food production, arms control, health, education, the aging population, and demographic issues. They will have to work with domain experts, and be able to convince domain experts that the AI solutions are not just new snake oil. We can, as a simple example, provide access to tools for learning about AI, such as AIspace, so that people are empowered to understand and try AI techniques on their own problems, rather than relying upon opaque black-box commercial systems. Games and competitions based upon AI systems can be very effective learning, teaching, and research environments, as shown by the success of RoboCup for robot soccer.

We have already considered some of the environmental impacts of intelligent cars and smart traffic control. A combinatorial auction is an auction in which agents bid on packages, consisting of combinations of discrete items. Bidding is difficult because preferences are typically not additive: items can be complements (worth more together) or substitutes (worth less together). Work on combinatorial auctions, already applied to spectrum allocation (allocation of radio frequencies to companies for television or cell phones) and logistics (planning for transporting goods), could further be applied to support carbon markets, to optimize energy supply and demand, and to mitigate climate change. There is much work on smart energy controllers that use distributed sensors and actuators to improve energy use in buildings. We could use qualitative modeling techniques for climate scenario modeling. The ideas behind constraint-based systems can be applied to analyze sustainable systems. A sustainable system is in balance with its environment, satisfying short-term and long-term constraints on the resources it consumes and the outputs it produces.
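
To make the difficulty concrete, consider winner determination in a combinatorial auction: choosing a set of non-overlapping package bids that maximizes total revenue. The general problem is NP-hard; the brute-force sketch below (the bidders, items, and prices are invented for illustration, not drawn from the applications cited above) conveys the idea.

    from itertools import combinations

    # Hypothetical package bids: (bidder, package of items, price offered).
    # Bidder A treats the two licenses as complements: the package bid
    # exceeds what anyone offers for the items separately.
    bids = [
        ("A", frozenset({"license1", "license2"}), 120),
        ("B", frozenset({"license1"}), 70),
        ("C", frozenset({"license2"}), 40),
        ("D", frozenset({"license2", "license3"}), 90),
    ]

    def winner_determination(bids):
        """Brute-force search for the revenue-maximizing set of
        non-overlapping bids (exponential; fine only for tiny examples)."""
        best_value, best_bids = 0, []
        for r in range(1, len(bids) + 1):
            for subset in combinations(bids, r):
                packages = [pkg for _, pkg, _ in subset]
                # Feasible only if no item appears in two winning bids.
                if sum(map(len, packages)) == len(frozenset().union(*packages)):
                    value = sum(price for _, _, price in subset)
                    if value > best_value:
                        best_value, best_bids = value, list(subset)
        return best_value, best_bids

    value, winners = winner_determination(bids)
    print(value, [bidder for bidder, _, _ in winners])
    # Prints: 160 ['B', 'D'] – together B and D beat A's complement bid,
    # illustrating why non-additive preferences make allocation hard.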

Assistive technology for disabled and aging populations is being pioneered by many researchers. Assisted cognition is one application; others include assisted perception and assisted action, in the form of, for example, smart wheelchairs, companions for older people, and nurses’ assistants in long-term care facilities. However, Sharkey [2008] warns of some of the dangers of relying on robotic assistants as companions for the elderly and the very young. As with autonomous vehicles, researchers must ask cogent questions about the use of their creations.

This reliance on autonomous intelligent agents raises the question: can we trust robots? There are some real reasons why we cannot yet rely upon robots to do the right thing: they are not fully trustworthy and reliable, given the way they are built now. So, can they do the right thing? Will they do the right thing? What is the right thing? As evidenced by popular movies and books, in our collective subconscious there exists the fear that robots may eventually become completely autonomous, with free will, intelligence, and consciousness, and rebel against us as Frankenstein-like monsters.

This raises questions about ethics. What are the ethics of the robot–human interface? Should there be ethical codes, for humans and for robots? It is clear that there should. There are already robot liability and insurance issues. There will have to be legislation that targets robot issues; many countries and states are now developing robot regulations and laws. There will have to be professional codes of ethics for robot designers and engineers, just as there are for engineers in all other disciplines. We will also have to consider what robots themselves should do as they develop more autonomy, and what issues arise for us as we interact with them. Should we give them any rights? There are human rights codes; will there be robot rights codes as well?

To factor these issues, let us break them down into three fundamental questions that must be addressed:

  • What should we humans do ethically in designing, building, and deploying robots?

  • How should robots ethically decide, as they develop autonomy and free will, what to do?

  • What ethical issues arise for us as we interact with robots?

In addressing these questions, we shall consider some interesting, if perhaps naive, proposals put forward by the science fiction novelist Isaac Asimov [1950], one of the earliest thinkers about these issues. Asimov’s Laws of Robotics are a good basis from which to start because, at first glance, they seem logical and succinct. His original three Laws are:

  I. A robot may not harm a human being, or, through inaction, allow a human being to come to harm.

  II. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

  III. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.

Asimov’s answers to the three questions posed above are as follows. First, those laws must be built into every robot, and manufacturers would be legally required to do so. Second, robots should always follow the prioritized laws. He did not say much about the third question. Asimov’s plots arise mainly from the conflict between what the humans intend the robot to do and what it actually does, or between literal and sensible interpretations of the laws, since the laws are not codified in any formal language. Asimov’s fiction explored many contradictions implicit in the laws and their consequences.
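
To see how the prioritization might operate mechanically, here is a minimal sketch (not from Asimov or this book) that treats the laws as a lexicographic filter over candidate actions. The predicates are hypothetical stubs; note how everything difficult is hidden inside them, which is precisely the codification problem just noted.

    # A lexicographic filter over Asimov's prioritized laws. Deciding
    # the predicates (what counts as harm, obedience, self-preservation)
    # is exactly the hard, uncodified part that Asimov's plots exploit.

    def permitted(actions, harms_human, disobeys_order, endangers_self):
        # Law I: rule out actions that harm a human (a fuller model
        # would also cover harm through inaction).
        survivors = [a for a in actions if not harms_human(a)]
        # Law II: among the survivors, prefer actions obeying human orders.
        obedient = [a for a in survivors if not disobeys_order(a)]
        if obedient:
            survivors = obedient
        # Law III: among those, prefer actions preserving the robot.
        safe = [a for a in survivors if not endangers_self(a)]
        return safe if safe else survivors

    # Example with trivial stub predicates:
    acts = ["fetch coffee", "unplug self", "push human"]
    print(permitted(acts,
                    harms_human=lambda a: a == "push human",
                    disobeys_order=lambda a: False,
                    endangers_self=lambda a: a == "unplug self"))
    # Prints: ['fetch coffee']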

There are ongoing discussions of robot ethics, but the discussions usually presuppose technical abilities that we just do not yet have. Joy [2000] was so concerned about our inability to control the dangers of new technologies that he called, unsuccessfully, for a moratorium on the development of robotics (and AI), nanotechnology, and genetic engineering. In this book, we have presented a coherent view of the agent design space and clarified the design principles for intelligent agents, including robots. This could provide a more technically informed framework for the development of social, ethical and legal codes for intelligent agents.

Many of the concerns about AI safety come down to issues of trust. Can one trust a deep learning system that has been trained on a vast array of images to classify human faces reliably? What if the (secret) training images have implicit bias? That bias would then be reflected in the classification process. The internal weights of the deep learning network may not be open to user inspection, and even if they are inspectable, they are opaque: they do not tell us about the bias or how to rectify it. Moreover, in systems that are continually learning, the weights keep changing.
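
One partial response is to audit behavior from the outside: even with opaque or changing weights, one can compare error rates across groups on held-out data. The sketch below is illustrative only; the black-box predict function and the group-labeled test set are assumptions, not part of any particular system.

    from collections import defaultdict

    def error_rates_by_group(predict, examples):
        """examples: iterable of (features, true_label, group) triples;
        predict: the opaque classifier under audit."""
        errors, counts = defaultdict(int), defaultdict(int)
        for x, y, group in examples:
            counts[group] += 1
            if predict(x) != y:
                errors[group] += 1
        return {g: errors[g] / counts[g] for g in counts}

    # A large gap between groups' error rates is evidence of bias,
    # even when the training data and weights cannot be inspected.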

What are the factors that lead us to trust agents? Formal verifiability, transparency, explanatory capacity, and reliable performance are some such factors. One approach is to make systems only semi-autonomous initially, until experience shows they can be trusted. Another is to use inverse reinforcement learning to learn a human user’s values and then align the agent with those values. In inverse reinforcement learning, an agent learns the dynamics of the world and a reward function from traces of other agents’ observed behavior. The development of techniques for designing and building safe, trustworthy, and transparent agents is now being urgently pursued in the AI research community.
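
A toy sketch makes the idea concrete. The chain world, the fixed "expert", and the naive brute-force search below are all assumptions for illustration; practical inverse reinforcement learning algorithms are far more sophisticated. Given known dynamics and observed behavior, we look for reward functions under which that behavior is optimal.

    import itertools

    # A 5-state chain with deterministic dynamics; the observed expert
    # always moves right. Search over binary reward vectors and keep
    # those under which the expert's behavior is optimal.
    N, GAMMA = 5, 0.9

    def step(s, a):                     # action 0 = left, 1 = right
        return max(0, s - 1) if a == 0 else min(N - 1, s + 1)

    expert = [1] * N                    # observed behavior: always right

    def value_iteration(R, iters=200):
        V = [0.0] * N
        for _ in range(iters):
            V = [max(R[s] + GAMMA * V[step(s, a)] for a in (0, 1))
                 for s in range(N)]
        return V

    consistent = []
    for R in itertools.product((0, 1), repeat=N):
        V = value_iteration(R)
        q = lambda s, a: R[s] + GAMMA * V[step(s, a)]
        # Keep R if the expert's action is optimal in every state.
        if all(q(s, expert[s]) >= q(s, 1 - expert[s]) - 1e-9 for s in range(N)):
            consistent.append(R)

    print(consistent)
    # Includes (0, 0, 0, 0, 1), but also the all-zero reward: observed
    # behavior alone underdetermines the reward function, a central
    # difficulty of inverse reinforcement learning.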

Many of the issues require attention beyond the AI community. Economic and regulatory concerns will require policy decisions at all levels of governance, from municipal to global. Issues of social and economic equity will, almost certainly, require some regulation of corporate activity, given the winner-take-all dynamics and the network effects of many markets for intelligent agents and their services. Regulatory capture, whereby the regulated companies exert influence on the regulators and the regulations, will be a key concern.

Some of these concerns are addressed in a report from the One Hundred Year Study on Artificial Intelligence [Stone et al., 2016, p. 10]:

A vigorous and informed debate about how to best steer AI in ways that enrich our lives and our society, while encouraging creativity in the field, is an urgent and vital need. AI technologies could widen existing inequalities of opportunity if access to them – along with the high-powered computation and large-scale data that fuel many of them – is unfairly distributed across society. These technologies will improve the abilities and efficiency of people who have access to them. Policies should be evaluated as to whether they foster democratic values and equitable sharing of AI’s benefits, or concentrate power and benefits in the hands of a fortunate few.
As this report documents, significant AI-related advances have already had an impact on North American cities over the past fifteen years, and even more substantial developments will occur over the next fifteen. Recent advances are largely due to the growth and analysis of large data sets enabled by the internet, advances in sensory technologies and, more recently, applications of “deep learning”. In the coming years, as the public encounters new AI applications in domains such as transportation and healthcare, they must be introduced in ways that build trust and understanding, and respect human and civil rights. While encouraging innovation, policies and processes should address ethical, privacy, and security implications, and should work to ensure that the benefits of AI technologies will be spread broadly and fairly.

It is possible that computers and robots will become so intelligent that they can autonomously create even more powerful computers and robots, in a bootstrapping spiral. The point at which computers no longer need people in order to create even more powerful computers has been called the singularity. One fear is that, after the singularity, computers may not need humans, or may even harm us, accidentally or deliberately. These concerns are prompting research programs that promote beneficial AI and AI safety. The singularity is not implausible: there are already factories where machines are manufactured by robots, employing few people. As argued earlier, organizations can be more intelligent than their individual members. It is clear that a corporation with computers is more intelligent than any individual computer, so the singularity may arise with corporations before individual computers, with corporations acting with no effective human oversight. Computers are already replacing humans at tasks that involve intelligence, and this trend is expected to continue.

By automating intellectual tasks as well as manual tasks, AI promises (or threatens) to trigger a fourth industrial revolution [Brynjolfsson and McAfee, 2014; Schwab, 2016], in which it is not only manual tasks that are automated, but also jobs requiring intelligence and perhaps even creativity. Whereas previous industrial revolutions created new jobs that kept most of the population employed, the result of the next revolution may be that far fewer people are required to work for money in order to fulfill the needs of people and the environment. This raises the related questions of how to share the wealth that will be created, and what the people who are not required to keep the paid economy functioning should do. One mechanism that has been suggested is a universal basic income or negative income tax, under which everyone receives an income, so that anyone has the option to do unpaid work, such as child-rearing or caregiving, to pursue more creative endeavors, to become more entrepreneurial, to get more education, or to do nothing at all. This would leave the paid jobs to those who really want them and the additional income. The basic income could increase as fewer people are required in the paid economy. It is also possible that the cumulative effect of automation will be to further concentrate wealth in a small elite stratum of society, favoring capital over labor. Mitigating this inequality may also require a redistributive wealth tax of some form [Piketty, 2014]. Previous episodes of great change created social upheaval, scapegoating of minorities, and even wars. It is important to consider the global effects of these technologies and ways to mitigate such undesirable consequences.

Robotics may not be the AI technology with the greatest impact. Consider the embedded, ubiquitous, distributed intelligence in the World Wide Web and other global computational networks. This amalgam of human and artificial intelligence can be seen as evolving to become a World Wide Mind. The impact of this global net on the way we discover, and communicate, new knowledge is already comparable to the effects of the development of the printing press. As Marshall McLuhan [1964] argued, “We first shape the tools and thereafter our tools shape us”. Although he was thinking more of books, advertising, and television, this concept applies even more to the global net and autonomous agents. The kinds of agents we build, and the kinds of agents we decide to build, will change us as much as they will change our society; we should make sure it is for the better. Margaret Somerville [2006] is an ethicist who argues that the species Homo sapiens is evolving into Techno sapiens as we project our abilities out into our technology at an accelerating rate. Many of our old social and ethical codes are broken; they do not work in this new world. As co-creators of the new science and technology of AI, it is our joint responsibility to pay attention and to act.