“In short, the OpenAI paper inadvertently highlights an uncomfortable truth,” Xing concluded. “The business incentives driving consumer AI development remain fundamentally misaligned with reducing hallucinations.”

  • Cimbazarov [none/use name]@hexbear.net
    13 days ago

    I do feel like this is impossible, since I don't think an LLM could ever give a 100% correct response, and how to determine whether something is correct also seems wishy-washy. I've seen an LLM that gives multiple possibilities, and tbh I didn't like using it, since each possibility was a ton of slop to read through; I felt like I could just do the thing myself faster than read through each possibility and verify its correctness. So Xing is kind of right in his article when he says that when LLMs admit uncertainty, people tend to use them less (which maybe would be a good thing!).