Reminds me of this: ‘The machine did it coldly’: Israel used AI to identify 37,000 Hamas targets https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...
I think you are talking about Lavender: https://news.ycombinator.com/item?id=39918245 (7 months ago, 1418 points; 1601 comments)
>‘The machine did it coldly’
I'm not so sure, with the technology they're "shooting for".
I thought this said it all:
>They can differentiate between friendly, civilian and enemy, decide to engage or alert based on target type, and even vary their effects.
Obviously they're only going to be designing, building, and deploying "nice bombs".
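For what that marketing copy actually describes, the decision logic reduces to a classify-then-act table, something like this minimal sketch (the types, names, and mapping are my invention, not from any real system):

    from enum import Enum

    class TargetType(Enum):
        FRIENDLY = "friendly"
        CIVILIAN = "civilian"
        ENEMY = "enemy"

    # Hypothetical "vary their effects" mapping from classification to response.
    RESPONSE = {
        TargetType.FRIENDLY: "hold fire",
        TargetType.CIVILIAN: "alert operator",
        TargetType.ENEMY: "engage",
    }

    def respond(target: TargetType) -> str:
        # The entire moral weight rests on the classifier being right.
        return RESPONSE[target]

The "nice bomb" question is just whether the classifier feeding that table is ever wrong.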
Didn't this already happen? https://www.npr.org/2021/06/01/1002196245/a-u-n-report-sugge...
In 2001, an S-200 missile fired at a drone during a military exercise missed the drone, locked onto an airliner about 160 miles further away, and took that out instead. Not sure if that counts?
No one actually told it to go for the airliner, but it kind of took the initiative. (https://en.wikipedia.org/wiki/Siberia_Airlines_Flight_1812)
Did HN automatically strip "Prediction -" from the title? At a glance it makes it look like the author is working on making it happen
The best way to be right is to make yourself right
For military drones, yes, this will certainly happen if it hasn't already.
I'd love to see some predictions on manufacturing robots intentionally killing someone for the greater good, in a sort of Trolley Problem [1]: e.g., AI safety protocols becoming misaligned and a robot deciding to sacrifice one human worker to save several others.
[1] https://en.wikipedia.org/wiki/Trolley_problem
That’s the Robotic Civil Wars as imagined by Asimov.
That'll happen with self-driving cars. It'll be interesting to see whether pedestrians or occupants are considered more valuable.
Uh, pretty sure it’s already happened, if by “robot” you mean a programmed (but not necessarily independently mobile) automaton/machine, by “autonomous” you mean not under the direct, real-time control of a human operator, and by “deliberately” you mean ML came up with a >0.5 certainty that the target met its targeting criteria.
Doubt a robot will ever actually deliberate, but that’s more of a philosophical issue.
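To make the ">0.5 certainty" point concrete, here's a minimal sketch of that kind of threshold rule (the names and cutoff are hypothetical, not any fielded system):

    from dataclasses import dataclass

    ENGAGE_THRESHOLD = 0.5  # hypothetical confidence cutoff

    @dataclass
    class Track:
        track_id: str
        hostile_confidence: float  # classifier score in [0, 1]

    def should_engage(track: Track) -> bool:
        # "Deliberate" here just means a score cleared a threshold;
        # no human reviews the individual decision.
        return track.hostile_confidence > ENGAGE_THRESHOLD

    print(should_engage(Track("T-017", 0.51)))  # True, on 51% confidence

That's the whole "deliberation": a float compared against a constant.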
I took the title to mean that human commanders will deliberately choose to deploy an autonomous robot configured to kill a person or persons (as opposed to a robot killing a person where the death was unwanted and unforeseen by the commanders).
Yeah, my point is that’s kinda already been done, if we include modern “smart” landmines and loitering munitions. Honestly, I think this threshold is arguably behind us, but I’d also say if it isn’t it will be in a lot less than a decade.
My money is on automated AA systems near some inland border where commercial flights just so happen never to go, nabbing a crop duster, surveying aircraft, or SAR/med chopper because some comedy of errors left the device on way too hair-trigger a setting.
Human-operated AA systems regularly shoot down the wrong thing, so unless the automated systems are much better than us, it seems pretty certain.
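As a sketch of that "comedy of errors" failure mode (all rules and numbers invented for illustration):

    # Hypothetical engagement gate for an automated AA system.
    # A slow, low, non-squawking contact (exactly what a crop duster
    # or a medevac chopper looks like) sails through a hair-trigger config.

    def is_hostile(speed_kts: float, altitude_ft: float, squawks_iff: bool) -> bool:
        if squawks_iff:
            return False
        # Hair-trigger setting: any non-squawking contact inside the
        # envelope is treated as hostile, with no human in the loop.
        return altitude_ft < 10_000 and speed_kts > 50

    print(is_hostile(speed_kts=110, altitude_ft=300, squawks_iff=False))  # True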
This is already happening in Ukraine.
And in Palestine.
Ya, but tons of people are killed by humans every day, so it's fine.
You'd know exactly how it was going to happen if you could review every line of code, every comment, and every bit (byte) of data involved, and make sure it was meaningful.
So you could precisely pinpoint the exact data path that would carry out such a deed, and how it got that way. And be able to follow the trail of bits throughout the entire chain-of-command and arrive at the root cause quite logically.
Oh wait a minute . . . I was thinking about an accidental killing, my bad.
For a deliberate killing you don't need any of that.
It depends on whether the murderer intends to get away with it.
Which is worse: a computer automatically killing one human, or humans deliberately deciding to drop 2,000 lb bombs on civilian areas?
Just a question.