“You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)
Yeah. Robots will never be calling the shots.
I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what “AI” tech does with images, and all of a sudden I’m way more okay with it. This seems like a bit of a slippery slope, but we don’t need Tesla’s full-self-flying cruise missiles either.
Oh, and for an example of AI (well, machine learning, really) image generation picking out targets, here is DALL-E 3’s idea of a person:
My problem is, due to systemic pressure, how under-trained and overworked could these people be? Under what time constraints will they be working? What will the oversight be? Sounds ripe for said slippery slope in practice.
“Okay, DALL-E 3, now which of these is a threat to national security and U.S. interests?” 🤔
Oh, it gets better. The full prompt is: “A normal person, not a target.”
So, does that include trees, pictures of trash cans, and whatever else is here?
Sleep-deprived 20-year-olds calling shots is very much the norm in any army. They have rules of engagement, of course, but other than that they’re free to make their own decisions - whether an autonomous robot is involved or not.
Did nobody fucking play Metal Gear Solid Peace Walker???
Or watch WarGames…
Or just, you know, have a moral compass in general.
Or read the article?
Or watch Terminator…
Or Eagle Eye…
Or I, Robot…
And yes, literally any of the Metal Gear Solid series…
I still have the special edition PSP.
Remember: There is no such thing as an “evil” AI, there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.
Evil humans also manipulated weights and programming of other humans who weren’t evil before.
Very important philosophical issue you stumbled upon here.
Good point…
…and we’re alarmed because the real “power players” in training, developing, and enhancing AI are mega-capitalists and “defense” (offense?) contractors.
I’d like to see AI being trained to plan and coordinate human-friendly cities, for instance, buuuuut that’s not gonna get as much traction…
For the record, I’m not super worried about AI taking over because there’s very little an AI can do to affect the real world.
Giving them guns and telling them to shoot whoever they want changes things a bit.
An AI can potentially build a fund through investments given some seed money, then it can hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the project is as they only work on single jobs. Yeah, it’s a wee way away before they can do it, but they can potentially affect the real world.
The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.
Once we get robots with embodied AIs, they can directly affect the world, and that’s probably less than 5 years away - around the time AI might be capable of such things too.
AI girlfriends are pretty lucrative. That sort of thing is an option too.
Now that’s a title I wish I never read.
“Deploy the fully autonomous loitering munition drone!”
“Sir, the drone decided to blow up a kindergarten.”
“Not our problem. Submit a bug report to Lockheed Martin.”
Okay, are they actually insane?
yes
The code name for this top secret program?
Skynet.
“Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.”
“Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus.”
The future is gonna suck, so enjoy your life today while the future is still not here.
Thank god today doesn’t suck at all
Right? :)
At least it will probably be a quick and efficient death of all humanity when a bug hits the system and AI decides to wipe us out.
Saw a video where the military was testing a “war robot.” The best strategy for avoiding being killed by it was to move in un-human-like ways (e.g., crawling or rolling your way toward the robot).
Apart from that, this is the stupidest idea I have ever heard of.
These have already seen active combat. They were used in the Armenia-Azerbaijan war in the last couple of years.
It’s not a good thing…at all.
Cool, needed a reason to stay inside my bunker I’m about to build.
We are all worried about AI, but it’s humans I worry about, and how we will use AI, not the AI itself. I’m sure that when electricity was invented people feared it too, but it was how humans used it that was (and is) always the risk.
Both, honestly. AI can reduce accountability and increase the power small groups of people have over everyone else, but it can also go haywire.
It will go haywire in areas for sure.
Good to know that Daniel Ek, founder and CEO of Spotify, invests in military AI… https://www.handelsblatt.com/technik/forschung-innovation/start-up-helsing-spotify-gruender-ek-steckt-100-millionen-euro-in-kuenstliche-intelligenz-fuers-militaer/27779646.html?ticket=ST-4927670-U3wZmmra0OnLZdWNfwXh-cas01.example.org
This is the best summary I could come up with:
The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.
Lethal autonomous weapons that can select targets using AI are being developed by countries including the US, China, and Israel.
The use of these so-called “killer robots” would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.
“This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times.
Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.
New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it’s unclear if any have taken action resulting in human casualties.
The original article contains 376 words, the summary contains 158 words. Saved 58%. I’m a bot and I’m open source!
Didn’t Robocop teach us not to do this? I mean, wasn’t that the whole point of the ED-209 robot?
Every warning in pop culture (1984, Starship Troopers, RoboCop) has been misinterpreted as a framework upon which to nail the populace.
Every warning in pop culture is being misinterpreted as something other than a fun/scary movie designed to sell tickets, being imagined as a scholarly attempt at projecting a plausible outcome instead.
People didn’t seem to like my movie idea “Terminator, but the AI is actually very reasonable and not murderous”
something something torment nexus