25 November 2012 @ 04:53 pm
Legal, Ethical, and Social Autonomous Systems  
We (meaning my research group) have recently become interested in "ethical autonomy". From our point of view our interest is quite narrowly prescribed. Having formalised the "rules of the air" and created autonomous pilot programs that can be shown to obey them, we were then faced with the question of when you would want a pilot to deliberately break the rules of the air because there is some compelling ethical reason to do so. One of the examples we look at is another aircraft failing to obey the rules of the air, potentially maliciously, by moving left to avoid a collision instead of moving right. If the autonomous pilot continues to move right then eventually the two planes will collide, whereas a human pilot would have deduced that the other aircraft was breaking the rules and would eventually have moved left instead, thus breaking the rules of the air but nevertheless taking the ethical course of action.
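To make the example concrete, here is a toy sketch of the decision involved. It is purely illustrative: the Python, the Observation structure and the function name are invented for this post and bear no relation to our actual agent code.

from dataclasses import dataclass

@dataclass
class Observation:
    still_closing: bool   # are the two aircraft still converging?
    intruder_turn: str    # "left", "right" or "none"

def choose_avoidance(obs: Observation) -> str:
    # The rule-compliant response to a head-on encounter is to turn right.
    # If the intruder is seen turning left (breaking the rule) and we are
    # still closing, continuing to turn right will not resolve the conflict,
    # so the ethically preferable action is to break the rule and turn left.
    if obs.still_closing and obs.intruder_turn == "left":
        return "left"    # deliberate rule violation, ethically justified
    return "right"       # the normal "rules of the air" response

print(choose_avoidance(Observation(still_closing=True, intruder_turn="left")))  # -> left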

Since ethical autonomy is obviously part of a much wider set of concerns, my boss got involved in organising a seminar on Legal, Ethical, and Social Autonomous Systems as part of a cross-disciplinary venture with the departments of Law, Psychology and Philosophy.

It was an interesting day. From my point of view the most useful part was meeting Kerstin Eder from Bristol. I knew quite a bit about her but we'd not met before. She's primarily a verification person; her talk looked at potential certification processes for autonomous systems and pointed me in the direction of Runtime Verification, which I suspect I shall have to tangle with at some point in the next few years.

There was a moment when one of the philosophers asserted that sex-bots were obviously unethical, and I had to bite my tongue. I took the spur-of-the-moment decision that arguing about the ethics of what would, presumably, be glorified vibrators with a philosopher, while my boss was in the room, was possibly not something I wanted to get involved in.

The most interesting ethical problem raised was that of anthropomorphic or otherwise lifelike robots. EPSRC have, it transpires, a set of principles for roboticists which include: "Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users." The problem here is that there is genuine therapeutic interest in using robots that mimic pets as companions for the elderly, especially those with Alzheimer's. While this is partly to compensate for the lack of money and people to provide genuine companionship, it is not at all clear-cut that the alternative should be rejected out of hand. Alan Winfield, who raised the issue and helped write EPSRC's principles, confessed that he was genuinely conflicted about the ethics here. In the later discussion we also strayed into whether the language of beliefs, desires and intentions used to describe cognitive agent programs carries with it the risk that people will over-anthropomorphise the program.

This entry was originally posted at http://purplecat.dreamwidth.org/82777.html.
 
 
 
daniel_saunders on November 25th, 2012 06:01 pm (UTC)
I thought this was really interesting, although I don't really know enough about AI or ethics to contribute meaningfully. That said, my uneducated opinion for some time has been to wonder whether a free-standing code of ethics that ignores context can ever be sufficient. For example, you probably know that it's pretty easy to formulate cases where a simple (perhaps simplistic) implementation of Kant's Categorical Imperative gets you doing all kinds of terrible things, or at least letting other people do them when you could easily stop them.
louisedennis on November 25th, 2012 06:40 pm (UTC)
The philosopher (with the sexbot example) classified ethical systems into three groups, IIRC. One group involved an appeal to a hypothetical "virtuous man"; the sexbot example was in this section and, I think, rather clearly demonstrated that such an approach can easily lead you to confuse normative behaviour with ethical behaviour.

Then there was a class of systems into which utilitarianism falls, which provides you with a way to, sort of, score actions; the most ethical action is then the one with the highest score - greatest good for the greatest number and so forth. (I've put a crude sketch of this scoring idea at the end of this comment.) Though it was clear there were philosophical systems with different measures for the outcomes of actions.

Then there were the systems into which the Categorical Imperative falls (as I understand it) which suggests there is an underlying set of laws which are ethically justified irrespective of the actual outcomes of applying them in some given situation.

It must be said, if there is an absolute ethical system I'm inclined to believe it will fall in the second group somewhere, but I suspect that is the group of most attraction to atheists since we tend not to believe in absolute principles which are divorced from actual outcomes. Though I'm also inclined to think the second group are a special case of the third (in which the over-riding principle is outcome based). There is also, obviously, a massive amount of question-begging in the phrase "the greatest good"...
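To illustrate what I mean by "scoring": this is a deliberately crude, made-up sketch, not anything we have actually implemented, and the actions and utility numbers are invented.

def most_ethical(action_utilities):
    # Pick the action whose summed utility over everyone affected is
    # highest - "greatest good for the greatest number" in the crudest
    # possible sense.
    return max(action_utilities, key=lambda a: sum(action_utilities[a].values()))

options = {
    "carry_on":          {"pilot": -100.0, "passengers": -100.0},
    "swerve_into_field": {"pilot": -5.0,   "farmer": -1.0},
}
print(most_ethical(options))  # -> swerve_into_field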
kargicq on November 26th, 2012 06:52 am (UTC)
Interesting! I'm teaching Ethics at A-level at the moment, an area of philosophy almost completely new to me. AFAICS, there are three main ways of slicing the ethical pie:

Distinction 1: (i) Theories which consider the good of the community first (e.g. utilitarianism, social-contract theory); (ii) Theories which consider the good of the individual first (e.g. Virtue Ethics, where morality arises from the urge to be a Good Person -- the Ancient Greek idea); (iii) Theories which are all about an abstract Duty (e.g. Kant).

(Of course, in class (i) the Social Contract can originate from a collection of self-interested motives. But once it's in place, the community trumps the individual.)

Distinction 2: (i) Theories in which the consequences of an individual act outweigh the general rule ("Consequentialist"), and (ii) Theories in which they don't ("Deontological").

Distinction 3: (i) Theories in which moral propositions have, at least in principle, a well-defined truth-value ("Cognitivist"), and (ii) Theories in which they don't ("Non-Cognitivist," e.g. full-blown relativism, emotivism, prescriptivism).

Sounds like the philosopher you mention was concentrating on the first of these distinctions. Perhaps the very fact you're trying to code this up means that the question raised by the third distinction has been well and truly begged. Distinction 2 is useful to get the fine shading of the big classes in Distinction 1 (e.g. "act" vs "rule" versions of Utilitarianism: do we look over every act with a utilitarian eye, or go for the set of rules which, on the whole, works best?).

I've found the study of these classifications and their consequences to be fascinating, even if every single actual ethical theory seems to be fatally flawed. Heigh ho, that's philosophy for you. Maybe the best it can do is provide a technical vocabulary to facilitate debate..?

louisedennis on November 26th, 2012 12:07 pm (UTC)
Yes, I think we were getting Distinction 1, though I don't think he precisely described (i) and (ii) in terms of good to the community and good to the individual, and he attached "Deontological" to type (iii)...

My colleague who is doing most of the theoretical lifting here (let's call her Matryoshka) tells me that we are implementing Distinction 1, type (iii), though I rather thought we were implementing Distinction 2, type (i), without necessarily distinguishing between individuals and communities - we're at the level of "if you have to crash into something living, pick the cow instead of the human" (there's a tiny sketch of what I mean by that at the end of this comment).

He was talking specifically about coding but with, I felt, a fairly naive view of the nuances of AI type programming. That said, he probably felt we had a rather naive view of the nuances of ethics. Hence, I suppose, the value of the workshop.
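The sketch I promised above - purely illustrative, with a made-up ordering and made-up categories, not our actual code:

# Lower number = more acceptable thing to crash into.
CRASH_PREFERENCE = {"empty_field": 0, "cow": 1, "human": 2}

def least_bad_target(available):
    # Of the crash options actually available, pick the least bad one.
    return min(available, key=lambda t: CRASH_PREFERENCE[t])

print(least_bad_target(["human", "cow"]))  # -> cow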
a_cubed on November 26th, 2012 12:29 am (UTC)
Ethics, Robots, Unmanned vehicles.
This is an interest of mine as well. On a different project (privacy and social media) last week, Lilian Edwards (Prof of IT Law at Strathclyde, and another of the members of the panel that created those EPSRC principles - she mentioned Winfield as the primary instigator of this stuff) briefly mentioned this kind of issue, including the question of the emotional attachment we're seeking to generate in emotionally vulnerable groups (children with learning difficulties, or some shade of Aspergers/Autism, the elderly) in developing care robots, which are one of the main foci for domestic robots at the moment.
We've just submitted a grant proposal to JSPS (Japan's EPSRC-equivalent) on a new theory of information ethics involving embodiment, where we argue that a radical revision of information ethics is needed because of both embodiment (robots, control of vehicles, mobile data, location data, DNA information...) and disembodiment (data moving to the cloud, ubiquitous networking), which will include (if it gets funded) some work on care robots and "caring surveillance".
If it gets funded we're hoping that Ishiguro-sensei (of the geminoid and telenoid robots) will be involved.
The sex-bots question is a very interesting one, which, as a prof of information ethics, I'm able to discuss, but I can understand your reluctance given your position at present.
louisedennis on November 26th, 2012 12:12 pm (UTC)
Re: Ethics, Robots, Unmanned vehicles.
We also had Kerstin Dautenhahn speaking about her work with Autistic children (in the social strand), but she actually made a point of saying that their robot is made to be clearly a robot and that is, in fact, part of the point - at least for children with Autism - that they know it isn't a human and isn't going to do anything strange and incomprehensible at them.

I really didn't want to be remembered as "that woman who kept going on about sex-bots", especially since I was working from a gut feeling that calling them "clearly unethical" was a rather sweeping statement to be making, given the range of possibilities, rather than from a specific, thought-out standpoint on the issue.

Good luck with the grant application.