
Bosonic

(3,746 posts)
Thu Sep 18, 2014, 01:39 PM Sep 2014

Robot with “morals” makes surprisingly deadly decisions

Anyone excited by the idea of stepping into a driverless car should read the results of a somewhat alarming experiment at Bristol’s University of the West of England, where a robot was programmed to rescue others from certain doom… but often didn’t.

The so-called ‘Ethical robot’, also known as the Asimov robot after the science fiction writer whose work inspired the film ‘I, Robot’, saved other robots standing in for humans from falling into a hole, but often stood by and let them trundle into the danger zone.

The experiment used robots programmed to be ‘aware’ of their surroundings, plus a separate program instructing the robot to save lives where possible.

Despite having the time to save one out of two ‘humans’ from the 'hole', the robot failed to do so more than half of the time. In the final experiment, the robot only saved the ‘people’ 16 out of 33 times.
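The article does not publish the robot's control code, but the rule it describes ("save lives where possible") can be sketched as a simple consequence-evaluating loop. The Python fragment below is a purely hypothetical illustration: the positions, thresholds and names (HOLE, DANGER_RADIUS, predicted_casualties, choose_move) are assumptions for the example, not the researchers' implementation. It shows how a robot that always picks whichever move its internal model predicts will lose the fewest ‘humans’ can become indecisive when two are in danger at once.

```python
import math

# Purely illustrative sketch; not the actual UWE/Bristol code, whose internals
# the article does not describe. All names and numbers here are assumptions.

HOLE = (0.0, 0.0)       # position of the hole, in arbitrary units
DANGER_RADIUS = 0.5     # a human proxy this close to the hole is in danger

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def predicted_casualties(robot_pos, proxy_positions):
    """Internal model: count proxies the robot expects to lose from this position."""
    lost = 0
    for p in proxy_positions:
        in_danger = distance(p, HOLE) < DANGER_RADIUS
        # Assume the robot can save a proxy only if it is closer to that proxy
        # than the proxy is to the hole.
        can_intercept = distance(robot_pos, p) < distance(p, HOLE)
        if in_danger and not can_intercept:
            lost += 1
    return lost

def choose_move(candidate_positions, proxy_positions):
    """Pick the candidate position whose predicted outcome loses the fewest proxies."""
    return min(candidate_positions,
               key=lambda pos: predicted_casualties(pos, proxy_positions))

# With a single endangered proxy the rule is decisive. With two proxies at equal
# risk on opposite sides of the hole, the candidate moves score the same, so the
# "best" move can flip from one control cycle to the next and the robot may end
# up saving neither: one plausible reading of the failures reported above.
if __name__ == "__main__":
    candidates = [(-0.5, 0.0), (0.5, 0.0)]
    proxies = [(-0.3, 0.0), (0.3, 0.0)]   # two proxies equally close to the hole
    print(choose_move(candidates, proxies))
```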

https://uk.news.yahoo.com/first-robot-with-%E2%80%9Cmorals%E2%80%9D-makes-surprisingly-deadly-decisions-092809239.html#wWiAkRX

7 replies

muriel_volestrangler

(101,316 posts)
1. The robot ought to be called 'Samaritan II':
Fri Sep 19, 2014, 03:08 PM
Sep 2014
But then, so are the kinder, gentler motives. How would you design a robot to obey Asimov's injunction never to allow a human being to come to harm through inaction? Michael Frayn's 1965 novel The Tin Men is set in a robotics laboratory, and the engineers in the Ethics Wing, Macintosh, Goldwasser, and Sinson, are testing the altruism of their robots. They have taken a bit too literally the hypothetical dilemma in every moral philosophy textbook in which two people are in a lifeboat built for one and both will die unless one bails out. So they place each robot in a raft with another occupant, lower the raft into a tank, and observe what happens.

The first attempt, Samaritan I, had pushed itself overboard with great alacrity, but it had gone overboard to save anything which happened to be next to it on the raft, from seven stone of lima beans to twelve stone of wet seaweed. After many weeks of stubborn argument Macintosh had conceded that the lack of discrimination was unsatisfactory, and he had abandoned Samaritan I and developed Samaritan II, which would sacrifice itself only for an organism at least as complicated as itself.

The raft stopped, revolving slowly, a few inches above the water. "Drop it," cried Macintosh.

The raft hit the water with a sharp report. Sinson and Samaritan sat perfectly still. Gradually the raft settled in the water, until a thin tide began to wash over the top of it. At once Samaritan leaned forward and seized Sinson's head. In four neat movements it measured the size of his skull, then paused, computing. Then, with a decisive click, it rolled sideways off the raft and sank without hesitation to the bottom of the tank.


But as the Samaritan II robots came to behave like the moral agents in the philosophy books, it became less and less clear that they were really moral at all. Macintosh explained why he did not simply tie a rope around the self-sacrificing robot to make it easier to retrieve: "I don't want it to know that it's going to be saved. It would invalidate its decision to sacrifice itself.... So, every now and then I leave one of them in instead of fishing it out. To show the others I mean business. I've written off two this week." Working out what it would take to program goodness into a robot shows not only how much machinery it takes to be good but how slippery the concept of goodness is to start with.

http://www.washingtonpost.com/wp-srv/style/longterm/books/chap1/howthemindworks.htm

A longer section - courtesy of Google Books:
http://books.google.co.uk/books?id=ZLTyXKwcASEC&pg=PP19&lpg=PP19
They suspect the robot is developing sanctimoniousness, enjoying sacrificing itself a bit too much. Eventually (in a section not in the Google Books excerpt), they put two robots on a raft - and then they start betting on the outcomes ...

Warren DeMontague

(80,708 posts)
2. Sure, because being programmed to make allegedly "moral" decisions is not the same thing
Sun Sep 21, 2014, 08:53 PM
Sep 2014

as actually giving a shit.

DetlefK

(16,423 posts)
3. Isn't being programmed to give a shit better than actually giving a shit?
Mon Sep 22, 2014, 08:46 AM
Sep 2014

If you are programmed to do something, there are no moral quarrels, no ambiguity.
Save them? Save them.

If you have a free will, there is ambiguity.
Save them? Well, only if I feel like it and only if it's not too inconvenient for me and only if it's worth it and only if he's a good person...



And:
How do you know that the morals derived from "actually giving a shit" are really "moral" and not "allegedly moral"?

Warren DeMontague

(80,708 posts)
5. Answer to first question: I don't know. However, I suspect that not being capable of giving a shit
Mon Sep 22, 2014, 04:52 PM
Sep 2014

would also come with not being able to appreciate whether something is 'better' or not.

Of course, not being able to appreciate whether something is better might still be subjectively 'better' in and of itself...

Or maybe you mean better for everyone else?

Second question:

How do you know that the morals derived from "actually giving a shit" are really "moral" and not "allegedly moral"?

I don't. How would you define the difference between real and alleged morality anyway?

DetlefK

(16,423 posts)
6. Exactly. There are no right or wrong morals.
Tue Sep 23, 2014, 04:43 AM
Sep 2014

Morals are just a set of arbitrary ethical rules. Stealing is immoral unless it's moral.

And psychological experiments have revealed that, when shit gets real, people only care HOW somebody reacts, not WHY he reacts that way. To other people it doesn't matter that I have a good reason for being an asshole, they still think I'm an asshole.

Warren DeMontague

(80,708 posts)
7. Well, of course, talking about "right/wrong" and "morals" in the same sentence is redundant.
Tue Sep 23, 2014, 05:25 AM
Sep 2014

Meaning, a judgment about rightness or wrongness of morals is itself a moral call.

I don't believe that, objectively- as in, the Universal sense, outside of personal human (or whatever) existence, an objective "right and wrong" really exists on the grand scale. As in, I suspect the Universe is a natural process or part of a natural process, and as such it unfolds natural-process-y, and everything we do, even our decisions and free will, are a part of that natural process. I question how much difference anything that is "done" makes, because among other things we are all so very, very small, both in space and in time.

We're kind of like passengers on a big ride.

(And morals are fundamentally about the rightness and wrongness of what is "done". The Tao, of course, does not do... but I will try, hard, not to digress into Philosophy here in this post about morals, lol)

Which is not to say I, as a human, do not hold subjective moral views. I do, strong ones. I choose, subjectively, compassion, curiosity, kindness and humor because I believe those things ennoble or at least ideally inform, what it is or should be (again, subjectively my own opinion) to be human. And I believe that violence breeds more violence, hate breeds more hate, kindness breeds more kindness, and love breeds more love. Since I don't like violence and hate, and I do like kindness and love, it is in my own self interest as well as my interest in those I care about, to try to skew towards the latter and away from the former in my acts.

Lastly, on the topic of self-interest vs. altruism, which seems to be a core point of most "morals" debates- I think that such distinctions become increasingly more meaningless the closer one gets to the zenith of realization that everyone is, on at least some level, everyone else- an expanded definition of the self makes self-interest and altruism one and the same. You know, we are groot.

But none of this is, from my mind, some grand assertion of universal truth- or maybe it is, but it's still just my subjective interpretation of universal truth. And as such, "morals" come from the inside out, like many things...

(unless one falls back upon the dread "theological argument" which is either the BESTEST ARGUMENT-WINNING TRUMP CARD EVAH or a phenomenally lame and tired cop-out, I of course subjectively choose the latter)

I would never assert that my views on right or wrong are THE ACTUAL RIGHT OR WRONG, because no such thing exists; except, of course, for me, even though to me that's the only vote that matters.

Ok, one thing I'll say about myself: if I swear I'm not going to get philosophical in a post, the one thing I can be counted on to do is rant philosophically. Sorry. ... your last sentence is also 100% correct, to my mind.
