Where AI and ethics meet

Given a swell of dire warnings about the future of artificial intelligence over the last few years, the field of AI ethics has become a hive of activity.

These warnings come not only from experts such as Oxford University’s Nick Bostrom, but also from more public figures such as Elon Musk and the late Stephen Hawking. The picture they paint is bleak.

In response, many have dreamed up sets of principles to guide AI researchers and help them negotiate the maze of human morality and ethics. A 2019 paper in Nature Machine Intelligence throws a spanner in the works by claiming that such high principles, while laudable, will not give us the ethical AI society we need.

The field of AI ethics is generally broken into two areas: one concerning the ethics guiding humans who develop AIs, and the other machine ethics, guiding the moral behaviour of the AIs or robots themselves. However, the two areas are not so easily separated.

Machine ethics has a long history. In 1950 the great science fiction writer Isaac Asimov articulated his now famous “three laws of robotics” in his work I, Robot, proposing them as follows:

  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov articulated his “three laws of robotics” in 1950. Credit: Alex Gotfryd/CORBIS/Corbis via Getty Images

Later a “zeroth” law was added: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

These laws together were Asimov’s (and editor John W Campbell’s) musing on how to ensure an artificially intelligent system would not turn on its creators: a safety feature designed to produce friendly and benevolent robots.

In 2004, the film adaptation of I, Robot was released, featuring an AI whose interpretation of the three laws led to a plan to dominate human beings in order to save us from ourselves.

To highlight the flaws in the ethical principles of the three laws, an organisation called the Singularity Institute for Artificial Intelligence (now the Machine Intelligence Research Institute), headed up by the American AI researcher Eliezer Yudkowsky, started an online project called Three Laws Unsafe. 

Yudkowsky, an early theorist of the dangers of super-intelligent AI and proponent of the idea of Friendly AI, argued that such principles would be hopelessly simplistic if AI ever developed to the stage depicted in Asimov’s fictions.

Despite widespread recognition of the drawbacks of the three laws, many organisations, from private companies to governments, nonetheless persisted with projects to develop principle-based systems of AI ethics, with one paper listing “84 documents containing ethical principles or guidelines for AI” that have been published to date. 

This continued focus on ethical principles is partly because, while the three laws were designed to govern AI behaviour alone, principles of AI ethics apply to AI researchers as well as the intelligences that they develop. The ethical behaviour of AI is, in part, a reflection of the ethical behaviour of those who design and implement it, and because of this the two areas of AI ethics are inextricably bound to one another. 

AI development needs strong moral guidance if we are to avoid some of the more catastrophic scenarios envisaged by AI critics.

A review published in 2018 by AI4People, an initiative of the international non-profit organisation Atomium-European Institute for Science, Media and Democracy, reports that many of these projects have developed sets of principles that closely resemble those in medical ethics: beneficence (do only good), non-maleficence (do no harm), autonomy (the power of humans to make individual decisions), and justice.

For some, this convergence lends a great deal of credibility to these principles as possible guides for the development of AI in the future.

However, Brent Mittelstadt of the Oxford Internet Institute and the British Government’s Alan Turing Institute – an ethicist whose research concerns primarily digital ethics in relation to algorithms, machine learning, artificial intelligence, predictive analytics, Big Data and medical expert systems – argues that such an approach, called “principlism”, is not as promising as it might look.

Mittelstadt suggests there are significant differences between the fields of medicine and AI research that may well undermine the efficacy of the former’s ethical principles in the context of the latter.

His first argument concerns common aims and fiduciary duties, the duties that require trusted professionals, such as doctors, to place others’ interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients, and Mittelstadt argues that it is a “defining quality of a profession for its practitioners to be part of a ‘moral community’ with common aims, values and training”.

For the field of AI research, however, the same cannot be said. “AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts,” Mittelstadt writes. “The fundamental aims of developers, users and affected parties do not necessarily align.”

Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.

“AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests,” he writes. In AI research, “public interests are not granted primacy over commercial interests”.

In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, “AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks”.

Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism, which provides fuller and more satisfactory ethical guidance.

AI research is obviously a far younger field, devoid of these rich historical opportunities to learn. Further complicating the issue is that the context of application for medicine is comparatively narrow, whereas “AI can in principle be deployed in any context involving human expertise”, leading it to be radically multi- and interdisciplinary, with researchers coming from “varied disciplines and professional backgrounds, which have incongruous histories, cultures, incentive structures and moral obligations”.

This makes it extraordinarily difficult to develop anything other than “broadly acceptable principles to guide the people and processes responsible for the development, deployment and governance of AI across radically different contexts of use”. The problem, says Mittelstadt, is translating these into actual good practice. “At this level of abstraction,” he warns, “meaningful guidance may be impossible.”

Finally, the author points to “the relative lack of legal and professional accountability mechanisms” within AI research. Where medicine has numerous layers of legal and professional protections to uphold professional standards, such things are largely absent in AI development. Mittelstadt draws on research showing that codes of ethics do not in themselves produce ethical behaviour unless they are “embedded in organisational culture and actively enforced”.

“This is a problem,” he writes. “Serious, long-term commitment to self-regulatory frameworks cannot be taken for granted.”

All of this together leads Mittelstadt to conclude: “We must therefore hesitate to celebrate consensus around high-level principles that hide deep political and normative disagreement.”

Instead he argues that AI research needs to develop “binding and highly visible accountability structures” at the organisational level, as well as encouraging actual ethical practice in the field to inform higher level principles, rather than relying solely on top-down principlism. Similarly, he advocates a focus on organisational ethics rather than professional ethics, while simultaneously calling for the professionalisation of AI development, partly through the licensing of developers of high-risk AI.

His final suggestion for the future of AI ethics is to exhort AI researchers not to treat ethical issues as design problems to be “solved”. “It is foolish to assume,” he writes, “that very old and complex normative questions can be solved with technical fixes or good design alone.”

Instead, he writes that “intractable principled disagreements should be expected and welcomed, as they reflect both serious ethical consideration and diversity of thought. They do not represent failure, and do not need to be ‘solved’. Ethics is a process, not a destination. The real work of AI ethics begins now: to translate and implement our lofty principles, and in doing so to begin to understand the real ethical challenges of AI.”


How can we make “good” artificial intelligence? What does it mean for a machine to be ethical, and how can we use AI ethically? Good in the Machine – 2019’s SCINEMA International Science Film Festival entry – delves into these questions, the origins of our morality, and the interplay between artificial agency and our own moral compass.

