CURATOR
A pinboard by
Tim Squirrell

I'm a PhD researcher at the University of Edinburgh, working on an ethnographic research project on a community centred around the Paleolithic diet and lifestyle. I also teach undergraduate Sociology and Social Policy.

In my spare time I spend a lot of my evenings and weekends at debating competitions, arguing in various seminar rooms around the continent. I also enjoy picking up heavy things and putting them down again.

PINBOARD SUMMARY

When a computer makes an error, who is morally responsible?

As computers become ever more advanced and encroach ever further into every aspect of our lives, we're confronted with a question: what happens when they go wrong?

If a self-driving car is going to crash and has to choose between harming its occupants and harming bystanders, whose safety should it prioritise?

If IBM's Watson starts making mistakes in diagnosing cancer patients, who should be held responsible?

When a lethal autonomous robot decides to kill someone on a battlefield, who is ultimately responsible?

When nobody can be held responsible, we face a "responsibility gap", a problem that is both legal and ethical. If we can't regulate the behaviour of machines, it's difficult to make a good case for allowing them to operate at all, especially as we hand them more and more responsibility. And if nobody can be sued for wrongdoing, what happens to the rule of law?

One way of resolving this is to attribute rights, responsibility and agency to robots, computers and machines. Given the current state of AI, advocacy for this view is not widespread, but it may well become more popular as AIs grow more capable and harder to distinguish from human intelligence.

Another solution is to hold a machine's programmers responsible, but as machine-learning algorithms become more advanced, their outputs become harder to predict. Neural networks and evolutionary or genetic algorithms are, by their very nature, difficult to predict in the results they yield, while decision-tree-based algorithms, though more predictable, are unlikely ever to be as capable as the other kinds. The sketch below illustrates the contrast.
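To make that predictability contrast concrete, here is a minimal, illustrative sketch of my own (not drawn from any of the pinned papers), using Python and scikit-learn with an arbitrary toy dataset: a shallow decision tree's learned rules can be printed and audited line by line, while two neural networks trained on the same data from different random initialisations can disagree on new cases, with no human-readable rule explaining why.

```python
# Illustrative sketch only: contrasts an auditable decision tree with
# neural networks whose behaviour depends on random initialisation.
# Assumes scikit-learn is installed; the dataset and parameters are arbitrary.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

# Toy, noisy dataset standing in for any automated decision problem.
X, y = make_classification(n_samples=600, n_features=4, flip_y=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow decision tree: every prediction follows explicit, inspectable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=[f"f{i}" for i in range(4)]))

# Two neural networks, identical except for their random starting weights.
nets = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=seed)
    .fit(X_train, y_train)
    for seed in (1, 2)
]

# On unseen cases the two networks can disagree, and neither offers a
# human-readable rule explaining why: the behaviour was not written by
# a programmer, it emerged from training.
disagree = (nets[0].predict(X_test) != nets[1].predict(X_test)).sum()
print(f"Test inputs where the two networks disagree: {disagree} of {len(X_test)}")
```

The point is not that decision trees close the responsibility gap, only that the more capable learning methods make it harder to trace any particular decision back to something a human explicitly specified.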

These are the kinds of questions we have to confront, and this board links to a number of papers that explore them. No definitive answers are available, but reading the attached articles should help you start to develop your own thoughts on one of the central debates in machine ethics.

9 ITEMS PINNED

Artificial agents among us: Should we recognize them as agents proper?

Abstract: In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. I then explore the implications of granting recognition in this manner. The thesis I will be defending is that artificial agents that do meet the conditions of agency in light of which we ascribe rights to group agents should thereby be recognized as having similar rights. The reason for bringing group agents into the picture is that, like artificial agents, they are not self-evidently agents of the sort to which we would naturally ascribe rights, or at least that is what the historical record suggests if we look, for example, at what it took for corporations to gain legal status in the law as group agents entitled to rights and, consequently, as entities subject to responsibilities. This is an example of agency ascribed to a nonhuman agent, and just as a group agent can be described as nonhuman, so can an artificial agent. Therefore, if these two kinds of nonhuman agents can be shown to be sufficiently similar in relevant ways, the agency ascribed to one can also be ascribed to the other—this despite the fact that neither is human, a major impediment when it comes to recognizing an entity as an agent proper, and hence as a bearer of rights.

Pub.: 26 Sep '16, Pinned: 30 Aug '17

Integrating robot ethics and machine morality: the study and design of moral competence in robots

Abstract: Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.

Pub.: 01 Dec '16, Pinned: 30 Aug '17

Machines and the face of ethics

Abstract: In this article I try to show in what sense Emmanuel Levinas’ ‘ethics as first philosophy’ moves our ethical thinking away from what has been called ‘centrist ethics’. Proceeding via depictions of the structure of Levinasian ethics and including references to examples as well as to some empirical research, I try to argue that human beings always already find themselves within an ethical universe, a space of meaning. Critically engaging with the writings of David Gunkel and Lucas Introna, I try to argue that these thinkers, rather than clarifying, distort our ethical understanding of how we stand in relation to artefacts. Drawing a distinction between how pervasive our ethical relationship to other human beings, and living animals, is and how the nature of artefacts is tied to us, I conclude by indicating that the aspiration to give artefacts an ethical face suggests a fantasy to avoid ethical responsibility and generates what I call a ‘compensatory logic’.

Pub.: 01 Dec '16, Pinned: 30 Aug '17