
Renew Deal

(81,846 posts)
Wed Apr 15, 2015, 02:46 PM

Should we let robots kill on their own?



Next week, from April 13th to April 17th, the second multilateral meeting on lethal autonomous weapons systems is taking place at the United Nations in Geneva. At the meeting, AJung Moon, an executive member and co-founder of the Open Roboethics initiative (ORi), a think tank that aims to foster active discussion of the ethical, legal, and societal issues of robotics, will report on the preliminary results of a survey created by her team examining public attitudes toward autonomous weapon robots.

She’ll be joining a number of organizations supporting the Campaign to Stop Killer Robots, whose primary objective is the pre-emptive ban on fully autonomous weapons. There are really only two outcomes on this issue — either the creation and spread of lethal autonomous weapons is banned or it isn’t. And by not banning them we’ll set a precedent that condones the moral and legal sovereignty of machines over our lives.

Once we approve the use of autonomous machines to kill without humans in the loop, we're ceding what's known as "meaningful human control" over these systems. While the moral ramifications of war are certainly different from something like the manufacture of self-driving cars, once we've justified autonomous systems making life-and-death decisions, it's easy to imagine our reliance on them for everything else.

“What we’re trying to do is demonstrate how the public feels about these issues,” noted Moon in our interview about her survey — which anyone can take — and upcoming meeting at the United Nations. “It’s something the UN has to consider when discussing the future of weapons systems at an international level.”
<snip>

Much more: http://mashable.com/2015/04/12/meaningful-human-control

Take the survey on Remote Operated Weapon Systems (ROWS) and Lethal Autonomous Weapons Systems (LAWS) here: https://survey.ubc.ca/s/militaryrobots2015/
Should we let robots kill on their own? (Original Post) Renew Deal Apr 2015 OP
Oh noes! I skeered shenmue Apr 2015 #1
No, of course not. nt bananas Apr 2015 #2
We're a weird, inconsistent, rationalizing species. LanternWaste Apr 2015 #3
What makes this different is that the robots would make their own decisions about who to kill. Renew Deal Apr 2015 #4
Classical definition, rather than the sci-fi-inspired definition. LanternWaste Apr 2015 #9
Can't be worse than what we have. Downwinder Apr 2015 #5
Not really a question of should The2ndWheel Apr 2015 #6
Why not? We let cops do that all the time. MindPilot Apr 2015 #7
Which will win in the fight between them? One_Life_To_Give Apr 2015 #8
Probably not yet. Donald Ian Rankin Apr 2015 #10
LanternWaste

(37,748 posts)
3. We're a weird, inconsistent, rationalizing species.
Wed Apr 15, 2015, 03:02 PM

Seems there are some tools we deem ethical to use to kill others, while other tools are considered unethical for the same purpose. We're a weird, inconsistent, rationalizing species (although I'm the first to admit I'm using the classical definition of Autonomous Machine rather than the creative, speculative one).

Renew Deal

(81,846 posts)
4. What makes this different is that the robots would make their own decisions about who to kill.
Wed Apr 15, 2015, 03:34 PM

Although they are programmed by humans, the weapons themselves decide whom to engage. That raises all kinds of ethical and moral questions.

LanternWaste

(37,748 posts)
9. Classical definition, rather than the sci-fi-inspired definition.
Wed Apr 15, 2015, 04:23 PM

Again, I'm limiting my response to the classical definition rather than the sci-fi-inspired one, the latter being too speculative and vague to support any foundational discussion beyond additional speculation.

The classical definition* is limited to: gathering information about (analyzing and adapting to) its environment, working without human intervention, moving without human assistance, and avoiding potentially harmful situations. As it stands, my robot vacuum cleaner fits well within that definition.

* as paraphrased by George A. Bekey in Autonomous Robots (2005)

The2ndWheel

(7,947 posts)
6. Not really a question of should
Wed Apr 15, 2015, 03:49 PM

Will we or won't we? If it's cheaper to do it this way, we will.

Then of course there's the question of who "we" is. Most people aren't going to have an actual say in whether it happens or not.

One_Life_To_Give

(6,036 posts)
8. Which will win in the fight between them?
Wed Apr 15, 2015, 04:14 PM

If the fully autonomous machine is the more effective killer, then once backed into a corner, the losing side is likely to deploy them as a last resort. It's nice to say we would never field them, but I suspect many world powers will maintain clandestine capabilities should they become necessary.

Donald Ian Rankin

(13,598 posts)
10. Probably not yet.
Wed Apr 15, 2015, 05:14 PM

I think it entirely probable that at some point in the future robot soldiers will be better at avoiding civilian casualties than humans are, but I suspect we're some way off from that point.
