Mountain Runner has provocatively asked how robots fit into 21st century warfare, and what impact they have on perception management, counterinsurgency, and reconstruction.
In my previous career as a science advisor for the Department of the Navy, I saw unmanned systems offer a compelling promise of "security without risk." After all, one of the greatest constraints in modern systems engineering is making a platform hospitable for its human crew. Remove the human from the fighter aircraft, and the performance envelope described by John Boyd's Energy-Maneuverability Theory expands dramatically -- you get a platform that can pull 25-G turns while evading even the most advanced surface-to-air missiles.
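For readers unfamiliar with Boyd's framework, its central quantity is specific excess power -- the rate at which an aircraft can gain or shed energy. A minimal statement (standard form, not drawn from this post):

```latex
% Specific excess power: the heart of Energy-Maneuverability Theory.
% T = thrust, D = drag, W = weight, V = true airspeed.
P_s = \frac{(T - D)\,V}{W}
```

The human is the binding constraint: an unmanned airframe can trade cockpit weight and G-limits for thrust-to-weight and turn rate, raising achievable P_s across the envelope.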
Today, MQ-9 REAPER unmanned aerial vehicles, armed with HELLFIRE missiles, roam the skies over Iraq and Afghanistan while their pilots sit comfortably at an Air Force base in the U.S. homeland. Minimal risk, minimal U.S. casualties, and all is well, right?
Except that any power that chooses to trade its hardware for the adversary's lives is no longer conducting a "Just War." In particular, the notion of "proportionality" in just-war theory is defeated when one side risks only machines while the other risks people -- and, more worrisome, the insurgency gains a recruiting narrative and is incentivized to grow.
And what of advances in artificial intelligence, or A.I.? What if we develop sensor grids that can pass the Turing Test and demonstrate the capacity for independent thought and action? (The U.S. Navy already does this to a lesser degree aboard its AEGIS cruisers and destroyers: the combat system built around the SPY radar applies rigid "rule sets" to detect and engage threats, like anti-ship cruise missiles.)
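To make "rigid rule sets" concrete, here is a deliberately simplified sketch of what rule-based engagement logic looks like in principle. Every name and threshold below is invented for illustration -- this is not the actual AEGIS doctrine, only the shape of it: hard-coded conditions, no judgment, no context.

```python
# Hypothetical sketch of a rigid engagement "rule set." All field names
# and thresholds are invented for illustration; they do not reflect any
# real combat system's doctrine.

from dataclasses import dataclass

@dataclass
class Track:
    speed_mps: float      # closing speed, meters/second
    altitude_m: float     # altitude above the sea surface, meters
    range_km: float       # distance from own ship, kilometers
    iff_friendly: bool    # identification-friend-or-foe response received

def engage(track: Track) -> bool:
    """Return True only if the track matches a sea-skimming
    anti-ship cruise missile profile. The rules either fire
    or they don't -- there is no middle ground."""
    return (
        not track.iff_friendly
        and track.speed_mps > 250   # faster than a typical aircraft approach
        and track.altitude_m < 100  # sea-skimming flight profile
        and track.range_km < 40     # inside the notional engagement envelope
    )

# A fast, low, close, non-responding track is engaged...
print(engage(Track(speed_mps=300, altitude_m=15, range_km=20, iff_friendly=False)))  # True
# ...but the same profile with a friendly IFF response is not.
print(engage(Track(speed_mps=300, altitude_m=15, range_km=20, iff_friendly=True)))   # False
```

The brittleness is the point: a system like this acts with machine speed and machine consistency, but everything it "decides" was decided in advance by the humans who wrote the rules.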
The technology is emerging to allow the U.S. to project power without endangering its citizen soldiers -- akin to Rome's outsourcing of risk and security in the latter days of Empire. Mountain Runner's question is provocative because it identifies the core issue: not technology, but rather the perception of that technology and its moral implications.