I admire that the AI developers have stuck to their “we won’t let our AI kill people” values to this point, but it seems highly unlikely to me that those safeguards will stay in place for long when they’ve already chosen to get in bed with the military-industrial complex.
Honestly, at least one military that is very interested in speeding up the "kill chain" with AI explicitly tells you that a critical requirement is that the ultimate decision (and responsibility) has to fall on a human. Partly because a human can take responsibility, as in the old IBM slide about a computer never being allowed to make a management decision because it can't be held accountable.
The same discussion included a suggestion that last year's articles about AI targeting might have been deliberately nudged by said AI's creators to gauge attitudes toward fully automated killing, while specifying that, at least currently, that particular military and most of its allies keep a human-in-the-loop stance.
As an AI developer who does work with the Pentagon, I don't know anyone among my colleagues who doesn't roll their eyes at the notion that extracting infinite wealth for the world's richest men is anything other than killing people with extra steps. Just because there is more distance between working at Facebook and bombs doesn't mean they aren't still inextricably linked. The idea that anyone can honestly believe "our AI doesn't kill people" is a fantasy. Even serious scientists are already using commercial AI to generate new fentanyl analogues:
https://link.springer.com/article/10.1007/s12539-024-00623-0
AI was always going to be used to kill people. Some of us are just more honest about that reality.
That is certainly a novel form of whataboutism. Spooky murder at a distance through nonspecific actions too.
The US health insurance industry has explicitly delivered harm through "AI", though.
The ethical route of giving the AI only two options for an insurance claim, "accept" and "refer to human", probably offered less ROI :V
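For what it's worth, a minimal sketch of what that constraint looks like, with a made-up scoring function standing in for whatever model an insurer would actually use (all names and thresholds here are hypothetical):

    from enum import Enum

    class Decision(Enum):
        ACCEPT = "accept"
        REFER_TO_HUMAN = "refer_to_human"
        # Deliberately no DENY: the system can never reject a claim on its own.

    def score_claim(claim: dict) -> float:
        """Hypothetical stand-in for a real model; returns approval confidence in [0, 1]."""
        return 0.95 if claim.get("amount", 0) < 1000 else 0.40

    def decide(claim: dict, threshold: float = 0.9) -> Decision:
        """Auto-accept only high-confidence claims; everything else goes to a person."""
        if score_claim(claim) >= threshold:
            return Decision.ACCEPT
        return Decision.REFER_TO_HUMAN

    print(decide({"amount": 250}))    # Decision.ACCEPT
    print(decide({"amount": 50000}))  # Decision.REFER_TO_HUMAN

The point is that "deny" simply isn't in the action space, so the worst the automation can do to a claimant is slow them down.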