Pentagon hires British scientist to help build robot soldiers that 'won't commit war crimes'
By Tim Shipman in Washington
Last Updated: 7:36AM GMT 01 Dec 2008
The US Army and Navy have both hired experts in the ethics of building machines to prevent the creation of an amoral Terminator-style killing machine that murders indiscriminately.
By 2010 the US will have invested $4 billion in a research programme into "autonomous systems", the military jargon for robots, on the basis that they would not succumb to fear or the desire for vengeance that afflicts frontline soldiers.
A British robotics expert has been recruited by the US Navy to advise them on building robots that do not violate the Geneva Conventions.
Colin Allen, a scientific philosopher at Indiana University, has just published a book summarising his views, entitled Moral Machines: Teaching Robots Right From Wrong.
He told The Daily Telegraph: "The question they want answered is whether we can build automated weapons that would conform to the laws of war. Can we use ethical theory to help design these machines?"
Pentagon chiefs are concerned by studies of combat stress in Iraq that show high proportions of frontline troops supporting torture and retribution against enemy combatants.
Ronald Arkin, a computer scientist at Georgia Tech who is working on software for the US Army, has written a report concluding that robots, while not "perfectly ethical in the battlefield", can "perform more ethically than human soldiers".
He says that robots "do not need to protect themselves" and "they can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events".
Airborne drones are already used in Iraq and Afghanistan to launch air strikes against militant targets, and robotic vehicles are used to disable roadside bombs and other improvised explosive devices.
Last month the US Army took delivery of a new robot built by an American subsidiary of the British defence company QinetiQ, which can fire everything from bean bags and pepper spray to high-explosive grenades and a 7.62mm machine gun.
But this generation of robots is all remotely operated by humans. Researchers are now working on "soldier bots" that would be able to identify targets and weapons, and to distinguish between enemy forces, such as tanks or armed men, and soft targets, such as ambulances or civilians.
Their software would be embedded with rules of engagement conforming to the Geneva Conventions to tell the robot when to open fire.
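To make that idea concrete, one way rules of engagement could be "embedded" in software is as a set of explicit checks that must all pass before a weapon is released. The sketch below is a purely hypothetical illustration, not a description of the Pentagon programme; the categories, confidence threshold, and function names are assumptions made for the example.

```python
# Hypothetical illustration of rules of engagement expressed as software checks.
# None of these categories, thresholds, or names come from any real system.
from dataclasses import dataclass

PROTECTED_CATEGORIES = {"civilian", "ambulance", "medic", "surrendering combatant"}

@dataclass
class Contact:
    category: str       # e.g. "tank", "armed man", "ambulance", "civilian"
    confidence: float   # classifier confidence in the category, 0.0 to 1.0
    is_attacking: bool  # whether the contact is currently firing on friendly forces

def engagement_permitted(contact: Contact, human_authorised: bool) -> bool:
    """Return True only if every rule-of-engagement check passes."""
    if contact.category in PROTECTED_CATEGORIES:
        return False    # protected targets are never engaged
    if contact.confidence < 0.95:
        return False    # identification must be near-certain
    if not contact.is_attacking and not human_authorised:
        return False    # outside immediate self-defence, a human must authorise
    return True

# Example: an ambulance is refused even when it is attacking and a human has authorised.
print(engagement_permitted(Contact("ambulance", 0.99, True), human_authorised=True))  # False
```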
Dr Allen applauded the decision to tackle the ethical dilemmas at an early stage. "It's time we started thinking about the issues of how to take ethical theory and build it into the software that will ensure robots act correctly rather than wait until it's too late," he said.
"We already have computers out there that are making decisions that affect people's lives but they do it in an ethically blind way. Computers decide on credit card approvals without any human involvement and we're seeing it in some situations regarding medical care for the elderly," a reference to hospitals in the US that use computer programmes to help decide which patients should not be resuscitated if they fall unconscious.
Dr Allen said the US military wants fully autonomous robots because they currently use highly trained manpower to operate them. "The really expensive robots are under the most human control because they can't afford to lose them," he said.
"It takes six people to operate a Predator drone round the clock. I know the Air Force has developed software, which they claim is to train Predator operators. But if the computer can train the human it could also ultimately fly the drone itself."
Some are concerned that it will be impossible to devise robots that avoid mistakes, conjuring up visions of machines killing indiscriminately when they malfunction, like the robot in the film RoboCop.
Noel Sharkey, a computer scientist at Sheffield University, best known for his involvement with the cult television show Robot Wars, is the leading critic of the US plans.
He says: "It sends a cold shiver down my spine. I have worked in artificial intelligence for decades, and the idea of a robot making decisions about human termination is terrifying."
Originally posted by freedomclub: Pentagon hires British scientist to help build robot soldiers that 'won't commit war crimes'
Why don't they just build suicide machines to put themselves inside?
If the machine fails and starts mass killing, who do we blame? The creators for professional negligence, or the Pentagon for being too stupid?
I don't see any rationality in using a device to make these judgements: running an algorithm over possibilities doesn't make it adequate by conventional standards, since they probably can't provide enough trial inputs to confirm that the machine is incapable of errors.
What happens if the code has a bug? How the phark do these morons perform their testing? On live specimens?
Has to be live specimens. And the algorithms have to be self-adapting.
That's the thing. The machine takes responsibility, so blowback from incidents like the September 2007 Blackwater killings won't be repeated. It'll only bring a whole new inhumane meaning to collateral damage. Who cares if the machine kills 50 innocent civilians, who can be classified as terrorists anyway, and throws off all blame? What are you going to do? Short-circuit the robot? Perfect deniability.
Well, anyway, what does it matter, when president-elect "change" isn't going to support war-crimes charges against Bush administration war criminals like Rumsfeld and Gonzales?
Originally posted by Herzog_Zwei: Has to be live specimens. And the algorithms have to be self-adapting.
Then we have to subject the creators and the believers to the live testing, since in user acceptance testing (UAT) we need live specimens to ensure the sustainability and effectiveness of the algorithm.
Implementation should first be targeted at the country of origin: if the people can accept their military developing such advanced instruments of destruction, they should capitalise on its effectiveness by being the first samples in the beta testing stages.
I hope you're being sarcastic, because the American people are being subjected to non-lethal weaponry which is then deployed in Iraq.
In the end, it is still humans that feed the algorithm that the CPU depends on to identify whether someone needs to be killed or not, yes?
So even if it's the machines that make the actual decision, the reasoning those decisions are based on is already flawed.
Let's have a look at Tim Shipman's previous articles and see if his words hold much value or not...?
Originally posted by freedomclub: I hope you're being sarcastic, because the American people are being subjected to non-lethal weaponry which is then deployed in Iraq.
I hope I won't see the day when a freaking machine is used to judge whether I should live or not.
Life shouldn't be wasted on a yes-or-no binary operation.
Originally posted by Nelstar: Then we have to subject the creators and the believers to the live testing, since in user acceptance testing (UAT) we need live specimens to ensure the sustainability and effectiveness of the algorithm.
Implementation should first be targeted at the country of origin: if the people can accept their military developing such advanced instruments of destruction, they should capitalise on its effectiveness by being the first samples in the beta testing stages.
If the protocol of the Geneva Conventions is adhered to, why not?
Originally posted by Nelstar:
I hope I won't see the day when a freaking machine is used to judge whether I should live or not. Life shouldn't be wasted on a yes-or-no binary operation.
Try telling that to a religious fanatic who believes that any infidel's life is not worth anything at all.
Originally posted by Herzog_Zwei:
Try telling that to a religious fanatic who believes that any infidel's life is not worth anything at all.
Life is full of spices, except that some spices are taken with caution.
I am not in the festive mood to let some fanatic make a decision about my life.
Originally posted by Nelstar:
Life is full of spices, except that some spices are taken with caution. I am not in the festive mood to let some fanatic make a decision about my life.
Probably you want to end that fanatic's life before he ends yours.
Originally posted by Herzog_Zwei: Probably you want to end that fanatic's life before he ends yours.
Should I transform into an aeroplane and shoot him or transform into a robot to shoot him?
Or should I transport Scud missile vehicles and plant them around his home base?
Originally posted by Nelstar: Should I transform into an aeroplane and shoot him or transform into a robot to shoot him?
Or should I transport Scud missile vehicles and plant them around his home base?
Anything that works, as long as you blow up his home base first.
Remember to send supply trucks to resupply your Scud missile vehicles.
Maybe the crux lies in the rules of engagement and the definition of "enemy"; this algorithm is very tricky. How is the computer going to define the enemy, the level of threat, and the response? Looks like humans are taking the first step towards creating a killing machine like in the movie "Terminator". Some may find it okay to trust an artificial sentience; others trust a human. Really hard to decide. One may argue an artificial sentience won't be clouded by emotions and so on, but humans, while emotional, do possess a sense of mercy. And as in the movie, having a robot control a system of ICBMs is definitely a no-no. One thing has to be noted: plausible deniability.
This is absolute BULLSHIT. Robot soldiers that won't commit war crimes? This is preposterous!!!! Laughable!!!! Even Saddam will rise from the grave laughing his ass off.
This is just an excuse to kill with DENIABILITY.
What a screwed up idea....idiotic
Originally posted by Yautja:
Maybe the crux lies in the rules of engagement and the definition of "enemy"; this algorithm is very tricky. How is the computer going to define the enemy, the level of threat, and the response? Looks like humans are taking the first step towards creating a killing machine like in the movie "Terminator". Some may find it okay to trust an artificial sentience; others trust a human. Really hard to decide. One may argue an artificial sentience won't be clouded by emotions and so on, but humans, while emotional, do possess a sense of mercy. And as in the movie, having a robot control a system of ICBMs is definitely a no-no. One thing has to be noted: plausible deniability.
Humans and robots will act under orders when firing a WMD.
Eh, so when the robot kills innocent villagers who are just defending their homes, who takes responsibility?
The robot, the officer who gave the order in the first place or the designer?
Finally, Terminator is true.
This US really buay tahan them.
They should think about how to create value for humankind, not about how to create more killing machines. Creating a soldier bot that doesn't torture people or violate the Geneva Conventions is just an excuse. They want to create a more powerful and intelligent weapon.
Anyway, the battleground is a very dynamic environment. The so-called soldier bot is most likely just a machine gun with a built-in computer that fires bullets or missiles at approaching objects, that's all.
If they want to create a robot that can walk swiftly across various obstacles, identify enemy faces, and talk to people, that cannot be done unless they imbue a spirit into the system.
Originally posted by Display Name: This US really buay tahan them.
They should think about how to create value for humankind, not about how to create more killing machines. Creating a soldier bot that doesn't torture people or violate the Geneva Conventions is just an excuse. They want to create a more powerful and intelligent weapon.
Anyway, the battleground is a very dynamic environment. The so-called soldier bot is most likely just a machine gun with a built-in computer that fires bullets or missiles at approaching objects, that's all.
If they want to create a robot that can walk swiftly across various obstacles, identify enemy faces, and talk to people, that cannot be done unless they imbue a spirit into the system.
Violence is the answer we need.
Blood for the blood God, Skull for the Skull Throne!
Originally posted by Display Name: This US really buay tahan them.
They should think about how to create value for humankind, not about how to create more killing machines. Creating a soldier bot that doesn't torture people or violate the Geneva Conventions is just an excuse. They want to create a more powerful and intelligent weapon.
Anyway, the battleground is a very dynamic environment. The so-called soldier bot is most likely just a machine gun with a built-in computer that fires bullets or missiles at approaching objects, that's all.
If they want to create a robot that can walk swiftly across various obstacles, identify enemy faces, and talk to people, that cannot be done unless they imbue a spirit into the system.
Then who will defend them? Creating killer machines is just another defensive tactic. Just like mutually assured destruction.
Come to think of it, we should create a maid robot.
Imagine every year we pay so much for a maid to take care of our housework and stuff.
If we can create this electronic maid, it will really improve our living standard a lot.
Originally posted by fatone: Violence is the answer we need.
Blood for the blood God, Skull for the Skull Throne!
Blood for Khorne!!!