Judgement Day...Here we come

May 2, 2001


MSNBC.com

Robot warriors will get ethics guide

When and what to fire will be part of hardware and software 'package'

By Eric Bland

Discovery Channel

updated 2:06 p.m. ET, Mon., May 18, 2009

Smart missiles, rolling robots, and flying drones currently controlled by humans are being used on the battlefield more every day. But what happens when humans are taken out of the loop, and robots are left to make decisions on their own, such as whom to kill or what to bomb?

Ronald Arkin, a professor of computer science at Georgia Tech, is in the first stages of developing an "ethical governor," a package of software and hardware that tells robots when and what to fire. His book on the subject, "Governing Lethal Behavior in Autonomous Robots," comes out this month.

He argues not only can robots be programmed to behave more ethically on the battlefield, they may actually be able to respond better than human soldiers.

"Ultimately these systems could have more information to make wiser decisions than a human could make," said Arkin. "Some robots are already stronger, faster and smarter than humans. We want to do better than people, to ultimately save more lives."

Lethal military robots are currently deployed in Iraq, Afghanistan and Pakistan. Ground-based robots like iRobot's SWORDS or QinetiQ's MAARS are armed with weapons to shoot insurgents, appendages to disarm bombs, and surveillance equipment to search buildings. Flying drones can fire at insurgents on the ground. Patriot missile batteries can detect incoming missiles and send up other missiles to intercept and destroy them.

No matter where the robots are deployed, however, there is always a human involved in the decision-making, directing where a robot should fly and what munitions it should use if it encounters resistance.

Humans aren't expected to be removed any time soon. Arkin's ethical governor is designed for a more traditional war where civilians have evacuated the war zone and anyone pointing a weapon at U.S. troops can be considered a target.

Arkin's challenge is to translate more than 150 years of codified, written military law into terms that robots can understand and interpret on their own. In many ways, creating an independent war robot is easier than many other artificial intelligence problems, because the laws of war have existed for so long and are clearly stated in numerous treaties.

"We tell soldiers what is right and wrong," said Arkin. "We don't allow soldiers to develop ethics on their own."

One possible scenario for Arkin's ethical governor is an enemy sniper posted in a building next to an important cultural site, like a mosque or cemetery. A wheeled military robot emerges from cover and the sniper fires on it. The robot finds the sniper and has a choice: does it use a grenade launcher or its own sniper rifle to bring down the fighter?

Using geographical data on the surrounding buildings, the robot would decide to use the sniper rifle to minimize any potential damage to the surrounding buildings.

For a human safely removed from combat, the choice of a rifle seems obvious. But a soldier under fire might take extreme action, possibly blowing up the building and damaging the nearby cultural site.

"Robots don't have an inherent right to self-defense and don't get scared," said Arkin. "The robots can take greater risk and respond more appropriately."

Fear might influence human decision-making, but math rules for robots. Simplified, each possible action is classified as ethical or unethical and assigned a numerical value; if subtracting the penalties for ethical violations from the value of a lethal action leaves a negative result, the response is judged unethical. Similar equations govern the other possible actions.
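The scoring idea described above can be sketched in a few lines of code. This is a hypothetical illustration only: the names, weights, and threshold below are invented for the example, and the article describes Arkin's actual system only at a high level.

```python
# Hypothetical sketch of an "ethical governor" scoring step.
# All names and numeric values here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    military_value: float   # benefit of neutralizing the target
    collateral_risk: float  # estimated damage to protected sites or civilians

def is_permitted(action: Action, threshold: float = 0.0) -> bool:
    # Simplified rule from the article: subtract ethical penalties from
    # the value of a lethal action; a negative result is unethical.
    return action.military_value - action.collateral_risk >= threshold

def choose(actions):
    permitted = [a for a in actions if is_permitted(a)]
    if not permitted:
        return None  # hold fire: no ethical option is available
    # Among permitted actions, prefer the one with the least collateral risk.
    return min(permitted, key=lambda a: a.collateral_risk)

options = [
    Action("grenade launcher", military_value=1.0, collateral_risk=2.5),
    Action("sniper rifle", military_value=1.0, collateral_risk=0.2),
]
print(choose(options).name)  # sniper rifle
```

In the sniper-near-the-mosque scenario, the grenade launcher's collateral penalty outweighs its military value, so only the rifle survives the filter.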

The difficult part is determining what types of actions go into those equations; humans will be necessary for that, and will ultimately be responsible for the result.

Robots, freed of human masters and capable of lethality "are going to happen," said Arkin. "It's just a question of how much autonomy will be put on them and how fast that happens."

Giving robots specific rules and equations will work in an ideal, civilian-free war, but critics point out such a thing is virtually impossible to find on today's battlefield.

"I challenge you to find a war with no civilians," said Colin Allen, a professor at Indiana University who also coauthored a book on the ethics of military robots.

An approach like Arkin's is easier to program and will appear sooner, but a bottom-up approach, in which the robot learns the rules of war itself and makes its own judgments, is a far better scenario, according to Allen.

The problem with a bottom-up approach is that the technology doesn't yet exist, and likely won't for another 50 years, says Allen.

Whenever autonomous robots are deployed, humans will still be in the loop, at least legally. If a robot does do something ethically wrong, despite its programming, the software engineer or the builder of the robot will likely be held accountable, says Michael Anderson at Franklin and Marshall University.

© 2009 Discovery Channel

URL: http://www.msnbc.msn.com/id/30810070/

© 2009 MSNBC.com

kinda crazy.....

I saw a documentary that mentioned this kind of stuff. Basically, the art of warfare is changing and developing through artificial intelligence. It is very interesting when you study the evolution of warfare and see where it might lead in the future with the advancement of technology. Some believe the future of modern warfare will be fought by automated weapons systems and robots, which will be quite detrimental to humankind.
Yikes...it's pretty scary knowing this stuff is actually possible.
cool, and then the robots will start realizing that humans are basically idiots, and are quite un-ethical across the board.

Hopefully I will be dead by the time that happens though.

A robot kills a civilian, how do you punish it? Recycle?
The whole concept seems pointless to me.

I mean wouldn't it eventually just be our robots fighting other countries' robots?

I'm at work and didn't see the video btw.
Not as bad as human-evolutis, but on par. Edit: BTW, that is a cue to look up human-evolutis, folks... It's a whole 'nother movie and possibility.
I'll be studying in the field of cognitive science, artificial intelligence, and industrial design soon.
I just came from an orientation and they referred to the many movies about artificial intelligence and advanced robotics.
Is it JUST me or does it seem like nearly everything we've seen in movies has, to an extent, come to fruition? Total Recall comes to mind as one example. Implanting memories into one's mind... a few months ago, it was reported that scientists have found a way to "cut & paste" memories. Eternal Sunshine of the Spotless Mind, anyone?
Originally Posted by LittlePeteWrigley

Is it JUST me or does it seem like nearly everything we've seen in movies has, to an extent, come to fruition? Total Recall comes to mind as one example. Implanting memories into one's mind... a few months ago, it was reported that scientists have found a way to "cut & paste" memories. Eternal Sunshine of the Spotless Mind, anyone?

Most of this stuff is not as far-fetched as people would like to think it is.
They're preparing for an international genocide....

*paging john connor*
Originally Posted by Dirtylicious

this phrase kills me..

"We tell soldiers what is right and wrong," said Arkin. "We don't allow soldiers to develop ethics on their own."

so what happens to a soldier with a pre-developed sense of ethics? do they destroy it?
Top Bottom