A well-done short film highlighting the dangers of fully autonomous killing machines. About 8 minutes long.
Like most sci-fi, it shows a pro-innovation bias (ignoring the technology's weaknesses) and disregards the evolutionary arms race. While drone technology is improving, so is drone-killing technology. There is already an array of anti-drone "guns" that use radio-frequency jamming or similar exploits to bring drones down. Instead of everyone shielding their homes, it would be just as viable to have drone patrols that destroy any uncertified drone.
It may even be possible to have something akin to a "force field" (some kind of wireless jamming perimeter) that would block drone communications, thus disabling their swarming behaviour.
One could even deploy cuckoo drones that pretend to be swarm members and report that all the kills are complete, so the rest of the swarm would return home.
The list of possible counter-measures is effectively endless, as in any evolutionary arms race.
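The cuckoo-drone idea above can be illustrated with a toy simulation. Everything here (the message format, the `swarm_step` and `cuckoo_broadcast` names, the decision logic) is invented for illustration; real swarm coordination would be far more complex and, presumably, cryptographically authenticated:

```python
# Hypothetical sketch of the cuckoo counter-drone trick: forging
# kill-confirmation messages on a swarm's shared channel so that
# genuine drones believe the mission is complete and stand down.
# No real drone protocol is modelled here.

def swarm_step(targets_remaining, messages):
    """A genuine drone returns to base once every remaining target
    has been reported destroyed on the shared channel."""
    confirmed = {m["target"] for m in messages if m["type"] == "kill_confirmed"}
    if targets_remaining <= confirmed:
        return "return_to_base"
    return "continue_mission"

def cuckoo_broadcast(targets_remaining):
    """The cuckoo poses as a swarm member and claims every
    outstanding target has already been destroyed."""
    return [{"type": "kill_confirmed", "target": t, "sender": "cuckoo"}
            for t in targets_remaining]

targets = {"t1", "t2", "t3"}
# Without the cuckoo, the swarm keeps hunting...
assert swarm_step(targets, []) == "continue_mission"
# ...but forged confirmations make it stand down.
assert swarm_step(targets, cuckoo_broadcast(targets)) == "return_to_base"
```

Of course, a swarm that authenticated its members would defeat this particular trick, which is exactly the arms-race dynamic described above.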
As for banning autonomous weapons, as Eray discusses below, a ban would not stop terrorists or foreign interests. And the counter-measures themselves would need a significant level of autonomy to be effective.
Here is some commentary by Eray Ozkural, an expert in AI:
“Sure, we could build such bots, but the scenario is absurd. Would banning any kind of weapon control system prevent terrorists from using it? That itself is a basic error of this film. It says nothing useful, except for attacking a straw man argument about autonomous military drones, that autonomous AI agents would make far more effective, humane, or preferable weapons — which I do not believe anyone has made in earnest. In other words, it is an attempt to associate AI with “unethical weapons”. We know that weapons are unethical, already. Thus, saying they could be used for an evil purpose changes nothing. The film wishes to go one step further, making AI-equipped military drones a class of weapons that are analogous to nuclear, chemical or biological weapons, weapons that should be illegal under Geneva conventions or equivalent, and hoping to make a big fuss about it all. And further drawing a false dichotomy saying that anyone who does not agree with our scenario is defending these terrible, evil AI’s that we imagined, as is typical of their world-saving charade.”
You can read the rest at https://examachine.net/blog/slaughterbots-ai-scare-pr-gone-wrong/