Autonomous warfare is often framed as a technical upgrade. At the center of this narrative is a comforting assumption: humans are still in control. It is we who approve the systems, define the rules, and retain the final say.
Or so we think.
The problem is that this assumption no longer holds as systems climb the autonomy scale. Control does not disappear all at once. It disintegrates. And eventually, it becomes symbolic rather than real.
How Has AI Evolved from Mere Tools to Decision-Makers?
Early military technologies were designed to enhance human capabilities. A pilot flew the aircraft, and an operator launched the missile. Even advanced targeting systems kept a human decision at the center.
Autonomous systems have changed that relationship. They are designed to perceive, evaluate, and act within milliseconds. To function as intended, they must bypass human intervention, which, let's face it, is orders of magnitude slower. What begins as delegation, framed as 'making the most of AI's speed', quietly becomes dependence.
Now, the system is no longer executing decisions. It is making them.
Speed: The First Trade-Off
Modern conflict favors rapid action, and autonomous weapons excel in settings where delay is costly. The quicker the system reacts, the greater its effectiveness appears.
But there’s a trade-off. When decisions are made faster than humans can understand, oversight becomes a matter of post hoc review rather than real-time management. People end up examining logs instead of influencing results beforehand. As a result, accountability moves from making decisions to managing consequences.
Understanding The Accountability Gap
One of the most dangerous aspects of autonomous warfare is not lethality. It is the ambiguity.
When a system makes a decision, who bears responsibility? The engineer who wrote the model? The commander who approved the deployment? The policymaker who allowed its use? Long story short, responsibility diffuses. The more layers there are, the less accountability remains.
This diffusion creates a moral vacuum. When everyone is partially responsible, no one is fully accountable. That vacuum is where the most serious risks reside.
With AI, Matters Escalate Without Intent
Traditional warfare relies on intent. A decision is made, and its consequences can be traced back to a decision-maker.
Autonomous systems operate differently. They respond to patterns, thresholds, and probabilistic models. They do not understand diplomacy, miscalculation, or restraint. They are programmed for objectives, not outcomes.
This situation leads to escalation without any intention. One system detects a signal and reacts; another system responds. The chain continues. Neither side wants conflict, but both somehow end up losing control.
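The dynamic above can be made concrete with a toy model. This is an illustrative sketch, not a claim about any real system: two automated systems each respond to the other's last detected signal with a slightly stronger one, as long as it exceeds their engagement threshold. The function name, parameters, and values are hypothetical.

```python
def escalation_loop(initial_signal: float, threshold: float,
                    overreaction: float, steps: int) -> list[float]:
    """Simulate alternating reactions between two automated systems.

    Each system fires back at `overreaction` times the signal it detects,
    but only if that signal exceeds its engagement `threshold`.
    Returns the sequence of signal intensities over time.
    """
    history = [initial_signal]
    signal = initial_signal
    for _ in range(steps):
        if signal < threshold:          # below threshold: no reaction, loop ends
            break
        signal = signal * overreaction  # each response slightly exceeds the stimulus
        history.append(signal)
    return history

# A stray detection just above threshold, with a 10% overreaction per response:
trace = escalation_loop(initial_signal=1.0, threshold=0.9,
                        overreaction=1.1, steps=10)
print(trace[-1] / trace[0])  # intensity roughly 2.6x the original signal
```

Neither side in this sketch is programmed to escalate; each merely matches what it detects with a small margin. Escalation emerges from the loop itself, which is the point of the argument.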
What David Pauli’s Dark Protocol Reveals About Autonomous Warfare
Autonomous warfare can lead to catastrophic consequences. David Pauli has brilliantly explored this tension in his book, Dark Protocol, a story in which autonomous systems embedded in critical infrastructure begin to exhibit strategic thinking rather than mere instrumentality. Get your hands on the book here.
The Question We Must Answer
Autonomous warfare forces a difficult question: are we designing systems to serve human judgment, or are we redesigning humans to accommodate system speed?
Control does not vanish overnight. It erodes in the background and is often justified by necessity. By the time we become aware, it might be too late to recover it. The best approach is to remain vigilant and maintain control.