On Monday, the FDA publicly announced the agency-wide rollout of a large language model (LLM) called Elsa, which is intended to help FDA employees—“from scientific reviewers to investigators.” The FDA said the generative AI is already being used to “accelerate clinical protocol reviews, shorten the time needed for scientific evaluations, and identify high-priority inspection targets.”

However, according to a report from NBC News, Elsa could have used some more time in development. FDA staff tested Elsa on Monday with questions about FDA-approved products or other public information, only to find that it provided summaries that were either completely or partially wrong.

According to Stat, Elsa is based on Anthropic’s Claude LLM and is being developed by consulting firm Deloitte. Since 2020, Deloitte has been paid $13.8 million to develop the original database of FDA documents that Elsa’s training data is derived from. In April, the firm was awarded a $14.7 million contract to scale the tech across the agency. The FDA said that Elsa was built within a high-security GovCloud environment and offers a “secure platform for FDA employees to access internal documents while ensuring all information remains within the agency.”
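Neither the FDA nor Deloitte has published implementation details beyond that description, but a deployment like the one described (a Claude model served from inside an AWS GovCloud partition, queried over internal documents) might look roughly like the sketch below. The use of Amazon Bedrock, the region, the model ID, and the prompt are all assumptions for illustration, not confirmed details of Elsa.

```python
import json

import boto3  # AWS SDK for Python; Amazon Bedrock hosts Anthropic's Claude models

# Illustrative guesswork only: the FDA has not said whether Elsa uses
# Bedrock, which Claude model it runs, or how prompts are built.
# The GovCloud region and model ID below are assumptions.
bedrock = boto3.client("bedrock-runtime", region_name="us-gov-west-1")


def summarize_internal_document(document_text: str) -> str:
    """Send an internal document to Claude and return its summary.

    Because the model endpoint lives inside the GovCloud partition,
    the document never leaves the agency's cloud boundary, which is
    the property the FDA's announcement emphasizes.
    """
    request = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [{
            "role": "user",
            "content": "Summarize the following clinical protocol:\n\n"
                       + document_text,
        }],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed model
        body=json.dumps(request),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```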

  • ExtantHuman@lemm.ee · 28 days ago

    This is not what LLMs are designed to do. There are other AI-adjacent technologies that are way better at this kind of data analysis and pattern recognition than the glorified autocorrect that is an LLM.

  • piccolo@sh.itjust.works · 28 days ago

    Hollywood lied to us. AI isn't going to end humanity in a glorious nuclear war. It's just going to blindly instruct us to poison ourselves.

  • pelespirit@sh.itjust.works (OP, mod) · 28 days ago

    Is this why people were going after Ars Technica yesterday? I knew that something was in the pipeline, but I’m not positive this is the one.