General Discussion

The Navy's new autonomous armed robot - yikes
The Navy's new drone being tested near Chesapeake Bay stretches the boundaries of technology: It's designed to land on the deck of an aircraft carrier, one of aviation's most difficult maneuvers.
What's even more remarkable is that it will do that not only without a pilot in the cockpit, but without a pilot at all.
The X-47B marks a paradigm shift in warfare, one that is likely to have far-reaching consequences. With the drone's ability to be flown autonomously by onboard computers, it could usher in an era when death and destruction can be dealt by machines operating semi-independently.
Although humans would program an autonomous drone's flight plan and could override its decisions, the prospect of heavily armed aircraft screaming through the skies without direct human control is unnerving to many.
"Lethal actions should have a clear chain of accountability," said Noel Sharkey, a computer scientist and robotics expert. "This is difficult with a robot weapon. The robot cannot be held accountable. So is it the commander who used it? The politician who authorized it? The military's acquisition process? The manufacturer, for faulty equipment?"
Sharkey and others believe that autonomous armed robots should force the kind of dialogue that followed the introduction of mustard gas in World War I and the development of atomic weapons in World War II. The International Committee of the Red Cross, the group tasked by the Geneva Conventions to protect victims in armed conflict, is already examining the issue.
http://www.latimes.com/business/la-fi-auto-drone-20120126,0,740306.story
d_r
(6,907 posts)

HopeHoops
(47,675 posts)

You're right! I bet something along the lines of a self-replicating robot is in the works as we speak. Why not? Build a robot that will build another just like itself. Then watch the exponential growth as each new robot replicates itself, even if just once instead of indefinitely.
It's a very scary thought, mostly because it's probably really being done.
Hugabear
(10,340 posts)

Javaman
(62,510 posts)

drone pilot: Sir, I have lost control of the drone.
General: what are you talking about?
drone pilot: I just issued an abort command on the strike and it didn't respond.
General: what do you mean it didn't respond?
drone pilot: well, actually it did respond.
General: well son, which is it? either it did or it didn't?
drone pilot: it gave me the finger.
hootinholler
(26,449 posts)

Assuming they are familiar with his ideas.
Ellipsis
(9,124 posts)

ProgressiveProfessor
(22,144 posts)

The X-47B has a ground control station: you give the UAV commands rather than fly it with a joystick. It is the successor to the DARPA J-UCAS program. A fair amount of more accurate material about it is available online.
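For what it's worth, "commands rather than a joystick" is essentially supervisory control: the operator queues discrete tasking and keeps an abort channel, while the vehicle sequences and flies the tasks itself. A minimal sketch of the idea in Python; all the class, command, and state names here are hypothetical illustrations, not the X-47B's actual interface.

```python
from enum import Enum, auto

class Command(Enum):
    """Hypothetical high-level tasking a ground station might issue.
    A supervisory interface sends discrete commands like these instead
    of continuous stick-and-rudder inputs."""
    TAXI = auto()
    LAUNCH = auto()
    FLY_ROUTE = auto()
    RETURN_TO_SHIP = auto()
    ABORT = auto()

class SupervisoryUAV:
    """Toy model: the vehicle works through queued tasking on its own;
    the operator only adds tasking or issues an abort."""
    def __init__(self):
        self.queue = []
        self.state = "parked"

    def issue(self, cmd):
        if cmd is Command.ABORT:
            # Abort preempts all pending tasking.
            self.queue.clear()
            self.state = "returning"
        else:
            self.queue.append(cmd)

    def step(self):
        # Onboard logic advances through tasking without operator input.
        if self.queue:
            cmd = self.queue.pop(0)
            self.state = {
                Command.TAXI: "taxiing",
                Command.LAUNCH: "airborne",
                Command.FLY_ROUTE: "on route",
                Command.RETURN_TO_SHIP: "returning",
            }[cmd]
        return self.state

uav = SupervisoryUAV()
for c in (Command.TAXI, Command.LAUNCH, Command.FLY_ROUTE):
    uav.issue(c)
uav.step()
uav.step()
print(uav.step())  # → on route
```

The point of the pattern is that the operator's job reduces to tasking plus the abort channel; everything between those commands is flown by the vehicle.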
oldhippie
(3,249 posts)

A few years ago, before I retired, I was on a DoD committee working on a roadmap for the development of Joint Unmanned and Autonomous Weapons Systems. It was a mid-level working group, consisting not of four-star flag officers but of O-6-level military officers, senior civilians, and Senior Executive Service civilians. (I wasn't a member of the committee, just an advisor.) One of the things we discussed was how much autonomy should be given to systems that could employ lethal force.
I noted an interesting difference among the services. The Army and Marines were always pretty adamant that there be a "man in the loop" before a weapon was allowed to use lethal force against a target. I attributed that to the fact that ground forces doing close-up fighting are the most familiar with the effects of friendly fire and lapses of judgment, even among humans, and were not comfortable with the idea of a robot having the final call on firing a lethal missile at a target. The Army and Marine reps consistently fought any such suggestion. They really didn't trust a computer to make the call.
The Air Force was more neutral about it. They were generally more comfortable with an autonomous system programmed to go out, find targets, identify them as friend or foe, and engage with deadly force. A lot of the AF dudes had more confidence in the decision-making processes of robotic systems. I figured that was because they were generally a little further away when a mistake, or "anomaly," occurred. In the end, though, the AF hedged and said that there needed to be a "cut-out" in any system that could override a system decision. That raised some real issues for real-time communications and command and control in hostile environments.
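The "man in the loop" the Army and Marine reps insisted on and the "cut-out" the Air Force wanted can be pictured as the same software pattern: the autonomous system may nominate targets, but weapon release is gated on a standing human authorization that a human can also revoke at any time. A toy sketch, with entirely hypothetical names:

```python
class EngagementGate:
    """Toy human-in-the-loop release gate: the autonomous system may
    nominate targets, but weapon release requires standing human
    authorization, and a human 'cut-out' can revoke it at any time."""
    def __init__(self):
        self.authorized = False

    def human_authorize(self):
        self.authorized = True

    def human_cutout(self):
        # The override discussed above: revoke release authority even
        # after the system has committed to an engagement.
        self.authorized = False

    def request_release(self, target_is_hostile: bool) -> bool:
        # Both conditions must hold: the machine's identification AND
        # current human consent. Either alone is insufficient.
        return target_is_hostile and self.authorized

gate = EngagementGate()
print(gate.request_release(True))   # False: no human authorization yet
gate.human_authorize()
print(gate.request_release(True))   # True: machine and human agree
gate.human_cutout()
print(gate.request_release(True))   # False: the cut-out revoked authority
```

The command-and-control issue oldhippie mentions falls out of this design: the `human_cutout` call has to reach the vehicle in real time over a hostile-environment link, or the gate's last-known authorization stands.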
The Navy reps had no problem with lethal autonomous systems at all. They were quite happy to develop a system programmed to "go out, find the bad guys, kill 'em all, and come home when you need more fuel or ammo." A lot of us were pretty shocked at that. The Navy guys said they had been doing that for centuries: the mine, they pointed out, is essentially an autonomous lethal system, and mines have been spread around for a long time. Modern mines are smarter and allow more control, but that is a relatively recent development. Even modern torpedoes have a lot of autonomy once the wire is cut, and the Navy is comfortable with them.
Anyway, just an observation on some of the different thinking about the advisability of having robots/computers/intelligent systems/autonomous systems, or whatever you want to call them, make decisions about whether or not to unleash lethal force against humans. I remember thinking, as I sat there in the back of the room, that it was like being in a science fiction novel. But there I was, for real. It was mind-boggling.
What do you think?