Weird News
Robot with "morals" makes surprisingly deadly decisions
Anyone excited by the idea of stepping into a driverless car should read the results of a somewhat alarming experiment at Bristol's University of the West of England, where a robot was programmed to rescue others from certain doom but often didn't.
The so-called "Ethical robot", also known as the "Asimov robot" after the science-fiction writer whose work inspired the film I, Robot, saved other robots, acting the part of humans, from falling into a hole, but often stood by and let them trundle into the danger zone.
The experiment used robots programmed to be aware of their surroundings, together with a separate program that instructed them to save lives where possible.
Despite having time to save one of the two 'humans' from the hole, the robot failed to do so more than half of the time. In the final experiment, the robot saved the people only 16 out of 33 times.
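The failure mode described above can be illustrated with a toy simulation. Everything here is invented for illustration: the step counts, the two policies, and the names `run_trial`, `commit`, and `dither` are assumptions for the sketch, not the actual experiment's code. The point it demonstrates is that a rescuer which keeps re-deciding between two endangered targets discards its travel progress each time it switches, and so saves neither:

```python
STEPS_TO_HOLE = 10   # ticks before an unrescued proxy human falls in (assumed)
STEPS_TO_SAVE = 6    # consecutive ticks of travel needed to intercept a target (assumed)

def run_trial(policy):
    """Run one trial and return how many of the two proxy humans get saved.

    `policy(tick, current_target)` returns 0 or 1: which human the rescuer
    heads for on this tick. Switching targets discards travel progress.
    """
    target = policy(0, None)
    progress = 0                      # consecutive ticks spent moving toward `target`
    saved = [False, False]
    for tick in range(STEPS_TO_HOLE):
        new_target = policy(tick, target)
        if new_target != target:      # changing course throws away progress
            target, progress = new_target, 0
        progress += 1
        if progress >= STEPS_TO_SAVE and not saved[target]:
            saved[target] = True               # intercepted this one in time
            target, progress = 1 - target, 0   # head for the other human
    return sum(saved)

def commit(tick, current):
    """Pick a target once and stick with it."""
    return 0 if current is None else current

def dither(tick, current):
    """Re-decide every tick, flip-flopping between the two humans."""
    return tick % 2

print(run_trial(commit))   # 1: commits and saves the first human in time
print(run_trial(dither))   # 0: oscillates and saves no one
```

Under these made-up numbers, committing saves exactly one human while dithering saves none, which is the same kind of indecision the article reports.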
https://uk.news.yahoo.com/first-robot-with-%E2%80%9Cmorals%E2%80%9D-makes-surprisingly-deadly-decisions-092809239.html#wWiAkRX
muriel_volestrangler
(101,316 posts)
The first attempt, Samaritan I, had pushed itself overboard with great alacrity, but it had gone overboard to save anything which happened to be next to it on the raft, from seven stone of lima beans to twelve stone of wet seaweed. After many weeks of stubborn argument Macintosh had conceded that the lack of discrimination was unsatisfactory, and he had abandoned Samaritan I and developed Samaritan II, which would sacrifice itself only for an organism at least as complicated as itself.
The raft stopped, revolving slowly, a few inches above the water. "Drop it," cried Macintosh.
The raft hit the water with a sharp report. Sinson and Samaritan sat perfectly still. Gradually the raft settled in the water, until a thin tide began to wash over the top of it. At once Samaritan leaned forward and seized Sinson's head. In four neat movements it measured the size of his skull, then paused, computing. Then, with a decisive click, it rolled sideways off the raft and sank without hesitation to the bottom of the tank.
But as the Samaritan II robots came to behave like the moral agents in the philosophy books, it became less and less clear that they were really moral at all. Macintosh explained why he did not simply tie a rope around the self-sacrificing robot to make it easier to retrieve: "I don't want it to know that it's going to be saved. It would invalidate its decision to sacrifice itself.... So, every now and then I leave one of them in instead of fishing it out. To show the others I mean business. I've written off two this week." Working out what it would take to program goodness into a robot shows not only how much machinery it takes to be good but how slippery the concept of goodness is to start with.
http://www.washingtonpost.com/wp-srv/style/longterm/books/chap1/howthemindworks.htm
A longer section - courtesy of Google Books:
http://books.google.co.uk/books?id=ZLTyXKwcASEC&pg=PP19&lpg=PP19
They suspect the robot is developing sanctimoniousness, and thus enjoying sacrificing itself a bit too much. Eventually (in a section not in the Google Books excerpt), they put two robots on a raft - and then they start betting on the outcomes ...
Warren DeMontague
(80,708 posts)
as actually giving a shit.
DetlefK
(16,423 posts)If you are programmed to do something, there are no moral quarrels, no ambiguity.
Save them? Save them.
If you have free will, there is ambiguity.
Save them? Well, only if I feel like it, and only if it's not too inconvenient for me, and only if it's worth it, and only if he's a good person...
And:
How do you know that the morals derived from "actually giving a shit" are really "moral" and not "allegedly moral"?
Warren DeMontague
(80,708 posts)
would also come with not being able to appreciate whether something is 'better' or not.
Of course, not being able to appreciate whether something is better might still be subjectively 'better' in and of itself...
Or maybe you mean better for everyone else?
Second question:
How do you know that the morals derived from "actually giving a shit" are really "moral" and not "allegedly moral"?
I don't. How would you define the difference between real and alleged morality anyway?
DetlefK
(16,423 posts)
Morals are just a set of arbitrary ethical rules. Stealing is immoral unless it's moral.
And psychological experiments have revealed that, when shit gets real, people only care HOW somebody reacts, not WHY he reacts that way. To other people it doesn't matter that I have a good reason for being an asshole, they still think I'm an asshole.
Warren DeMontague
(80,708 posts)
Meaning, a judgment about the rightness or wrongness of morals is itself a moral call.
I don't believe that, objectively, in the Universal sense, outside of personal human (or whatever) existence, an objective "right and wrong" really exists on the grand scale. As in, I suspect the Universe is a natural process or part of a natural process, and as such it unfolds natural-process-y, and everything we do, even our decisions and free will, is a part of that natural process. I question how much difference anything that is "done" makes, because among other things we are all so very, very small, both in space and in time.
We're kind of like passengers on a big ride.
(And morals are fundamentally about the rightness and wrongness of what is "done". The Tao, of course, does not do... but I will try, hard, not to digress into Philosophy here in this post about morals, lol)
Which is not to say I, as a human, do not hold subjective moral views. I do, strong ones. I choose, subjectively, compassion, curiosity, kindness and humor because I believe those things ennoble or at least ideally inform, what it is or should be (again, subjectively my own opinion) to be human. And I believe that violence breeds more violence, hate breeds more hate, kindness breeds more kindness, and love breeds more love. Since I don't like violence and hate, and I do like kindness and love, it is in my own self interest as well as my interest in those I care about, to try to skew towards the latter and away from the former in my acts.
Lastly, on the topic of self-interest vs. altruism, which seems to be a core point of most "morals" debates- I think that such distinctions become increasingly more meaningless the closer one gets to the zenith of realization that everyone is, on at least some level, everyone else- an expanded definition of the self makes self-interest and altruism one and the same. You know, we are groot.
But none of this is, from my mind, some grand assertion of universal truth- or maybe it is, but it's still just my subjective interpretation of universal truth. And as such, "morals" come from the inside out, like many things...
(unless one falls back upon the dread "theological argument" which is either the BESTEST ARGUMENT-WINNING TRUMP CARD EVAH or a phenomenally lame and tired cop-out, I of course subjectively choose the latter)
I would never assert that my views on right or wrong are THE ACTUAL RIGHT OR WRONG, because such a thing isn't there; except, of course, for me, even though to me that's the only vote that matters.
OK, one thing I'll say about myself: if I swear I'm not going to get philosophical in a post, the one thing I can be counted on to absolutely do is rant philosophically. Sorry. ... Your last sentence is also 100% correct, to my mind.