• Nalivai@lemmy.world · 1 day ago

    I quit my previous job in part because I couldn’t deal with the influx of terrible, unreliable, dangerous, bloated, nonsensical, not-even-working code that was suddenly pushed into one of the projects I was working on. That project is now completely dead; they froze it at some arbitrary version.
    When a junior dev makes a mistake, you can explain it to them and they won’t make it again. When they use an LLM to make a mistake, there is nothing to explain to anyone.
    I compare this shake-up more to an earthquake than to anything positive you can associate with shaking.

    • AA5B@lemmy.world · 4 hours ago

      More business for me. As a DevOps guy, my job is to create automation to flag “terrible, unreliable, dangerous, bloated, nonsensical, not even working code”.

    • InnerScientist@lemmy.world · 23 hours ago

      And so, the problem wasn’t the AI/LLM, it was the person who said “looks good” without even looking at the generated code, and then the person who read that pull request and said, again without reading the code, “lgtm”.

      If you have good policies, then it doesn’t matter how many bad practices are used; the code still won’t be merged.

      The only overhead is that you have to read all the pull requests, but if it’s an internal project, telling everyone to read and understand their code shouldn’t be an issue.

      • Nalivai@lemmy.world · 6 hours ago

        The problem here is that a lot of the time, looking for hidden problems is harder than writing good code from scratch. And you are always in danger that the LLM snuck some sneaky undefined behaviour past you. There is a whole plethora of standards, conventions, and good practices that help humans avoid it, which an LLM can ignore at any random point.
        So you’re either not spending enough time on review or missing a whole lot of bullshit. In my experience, in my field, right now, this review time is more time-consuming and more painful than avoiding it in the first place.
        Don’t underestimate how degrading and energy-sucking it is for a professional to spend most of their working time sitting through autogenerated garbage, and how inefficient it is.
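        To make that concrete, here is a minimal sketch of my own (not code from that project, just an illustration): a C function that looks obviously correct in a quick review yet invokes undefined behaviour on large inputs, exactly the kind of thing the conventions exist to catch and an LLM will happily ignore.

        ```c
        /* Hypothetical illustration of review-resistant undefined behaviour. */
        #include <limits.h>
        #include <stdio.h>

        /* Midpoint of two indices. Looks fine, but (lo + hi) can overflow int,
         * and signed integer overflow is undefined behaviour in C. */
        int midpoint(int lo, int hi) {
            return (lo + hi) / 2;      /* UB once lo + hi exceeds INT_MAX */
        }

        /* The conventional fix a reviewer is trained to look for: */
        int midpoint_safe(int lo, int hi) {
            return lo + (hi - lo) / 2; /* no overflow as long as lo <= hi */
        }

        int main(void) {
            printf("%d\n", midpoint_safe(0, INT_MAX)); /* prints 1073741823 */
            /* midpoint(0, INT_MAX) may return garbage, trap, or appear to
             * work, depending on the compiler; a test can still pass. */
            return 0;
        }
        ```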

    • locuester@lemmy.zip · 20 hours ago

      This is a problem with your team/project. It’s not a problem with the technology.

      • Nalivai@lemmy.world · 6 hours ago

        A technology that makes people push bad code is a problematic technology. That your team/project has managed to overcome its problems so far doesn’t mean it is good or overall helpful. People not seeing the problem is actually the worst part.

        • locuester@lemmy.zip · 5 hours ago

          Sir, I use it to assist me in programming. I don’t use it to write entire files or functions. It’s a pattern recognizer.

          Your team had people who didn’t review code. That’s the problem.