• at_an_angle@lemmy.one · 1 year ago

    “You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)

    https://www.businessinsider.com/us-closer-ai-drones-autonomously-decide-kill-humans-artifical-intelligence-2023-11

    Yeah. Robots will never be calling the shots.

    • M0oP0o@mander.xyz · 1 year ago

      I mean, normally I would not put my hopes in a sleep-deprived 20-year-old armed forces member. But then I remember what “AI” tech does with images, and all of a sudden I am way more OK with it. This seems like a bit of a slippery slope, but we don’t need Tesla’s full-self-flying cruise missiles either.

      Oh, and for an example of AI (not really, but machine learning) images picking out targets, here is DALL-E 3’s idea of a person:

      • 1847953620@lemmy.world · 1 year ago

        My problem is: due to systemic pressure, how under-trained and overworked will these people be? Under what time constraints will they be working? What will the oversight be? Sounds ripe for said slippery slope in practice.

        • M0oP0o@mander.xyz · 1 year ago

          Oh, it gets better: the full prompt is “A normal person, not a target.”

          So, does that include trees, pictures of trash cans, and whatever else is here?

      • BlueBockser@programming.dev · 1 year ago

        Sleep-deprived 20-year-olds calling the shots is very much normal in any army. They of course have rules of engagement, but beyond that they’re free to make their own decisions, whether an autonomous robot is involved or not.

  • HiddenLayer5@lemmy.ml · 1 year ago

    Remember: there is no such thing as an “evil” AI; there are only evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

    • Zacryon@feddit.de · 1 year ago

      Evil humans also manipulated weights and programming of other humans who weren’t evil before.

      That’s a very important philosophical issue you’ve stumbled upon here.

    • MonkeMischief@lemmy.today · 1 year ago

      Good point…

      …but we’re alarmed because the real “power players” in training / developing / enhancing AI are mega-capitalists and “defense” (offense?) contractors.

      I’d like to see AI being trained to plan and coordinate human-friendly cities, for instance, buuuuut it’s not gonna get as much traction…

  • Yardy Sardley@lemmy.ca · 1 year ago

    For the record, I’m not super worried about AI taking over because there’s very little an AI can do to affect the real world.

    Giving them guns and telling them to shoot whoever they want changes things a bit.

    • tinwhiskers@lemmy.world · 1 year ago

      Given some seed money, an AI could potentially build a fund through investments and then hire human contractors to build parts of whatever nefarious thing it wants. No human need know what the project is, since each contractor only works on a single job. Yeah, it’s a wee way away before they can do it, but they can potentially affect the real world.

      The seed money could come in all sorts of forms. Acting as an AI girlfriend seems pretty lucrative, but it could be as simple as taking surveys for a few cents each time.

      Once we get robots with embodied AIs, they will be able to affect the world directly. That’s probably less than five years away, around the time AI might be capable of such things too.

  • redcalcium@lemmy.institute · 1 year ago

    “Deploy the fully autonomous loitering munition drone!”

    “Sir, the drone decided to blow up a kindergarten.”

    “Not our problem. Submit a bug report to Lockheed Martin.”

  • 1984@lemmy.today · 1 year ago

    The future is gonna suck, so enjoy your life today while it’s still not here.

  • Kühe sind toll@feddit.de · 1 year ago

    Saw a video where the military was testing a “war robot”. The best strategy to avoid being killed by it was to move in un-human-like ways (e.g. crawling or rolling your way toward the robot).

    Apart from that, this is the stupidest idea I have ever heard of.

    • Freeman@lemmy.pub · 1 year ago

      These have already seen active combat. They were used in the Armenia/Azerbaijan war in the last couple of years.

      It’s not a good thing…at all.

  • Immersive_Matthew@sh.itjust.works · 1 year ago

    We are all worried about AI, but it is humans I worry about: how we will use AI, not the AI itself. I am sure that when electricity was invented people feared it too, but it was how humans used it that was, and still is, the real risk.

  • AutoTL;DR@lemmings.world (bot) · 1 year ago

    This is the best summary I could come up with:


    The deployment of AI-controlled drones that can make autonomous decisions about whether to kill human targets is moving closer to reality, The New York Times reported.

    Lethal autonomous weapons that can select targets using AI are being developed by countries including the US, China, and Israel.

    The use of so-called “killer robots” would mark a disturbing development, critics say, handing life-and-death battlefield decisions to machines with no human input.

    “This is really one of the most significant inflection points for humanity,” Alexander Kmentt, Austria’s chief negotiator on the issue, told The Times.

    Frank Kendall, the Air Force secretary, told The Times that AI drones will need to have the capability to make lethal decisions while under human supervision.

    The New Scientist reported in October that AI-controlled drones have already been deployed on the battlefield by Ukraine in its fight against the Russian invasion, though it’s unclear if any have taken action resulting in human casualties.


    The original article contains 376 words, the summary contains 158 words. Saved 58%. I’m a bot and I’m open source!

  • Steve@lemmy.today · 1 year ago

    Didn’t RoboCop teach us not to do this? I mean, wasn’t that the whole point of the ED-209 robot?

    • aeronmelon@lemm.ee · 1 year ago

      Every warning in pop culture (1984, Starship Troopers, RoboCop) has been misinterpreted as a framework upon which to nail the populace.

      • FaceDeer@kbin.social · 1 year ago

        Every warning in pop culture is being misinterpreted as something other than what it is: a fun/scary movie designed to sell tickets. People imagine it instead as a scholarly attempt at projecting a plausible outcome.

        • MBM@lemmings.world · 1 year ago

          People didn’t seem to like my movie idea “Terminator, but the AI is actually very reasonable and not murderous”