When we think of killer robots, images of the Terminator, RoboCop, and other dystopian movies often spring to mind. These movies usually don't end well (for the humans, at least). So it seems crazy that we would even consider building machines programmed to kill. On the other hand, some argue that autonomous weapons could save lives on the battlefield. We are not yet living in a world of killer robots, but we might be getting close. What goes into the decision to kill? How can we possibly program robots to make the right decisions, given the moral stakes?