Current and former FDA employees told CNN about problems with Elsa, the generative AI tool the federal agency unveiled last month. Three employees said that in practice, Elsa has hallucinated nonexistent studies or misrepresented real research. "Anything that you don't have time to double-check is unreliable," one source told the publication. "It hallucinates confidently." That's not exactly ideal for a tool that's supposed to speed up the clinical review process and help the agency make efficient, informed decisions that benefit patients.
Leadership at the FDA appeared unfazed by the potential problems posed by Elsa. "I have not heard those specific concerns," FDA Commissioner Marty Makary told CNN. He also emphasized that using Elsa and participating in the training to use it are currently voluntary at the agency.
The CNN investigation highlighting these flaws in the FDA's artificial intelligence arrived the same day the White House introduced its "AI Action Plan." The program framed AI development as a technological arms race that the US should win at all costs, and it laid out plans to remove "red tape and onerous regulation" in the sector. It also demanded that AI be free of "ideological bias," or in other words, that it follow only the biases of the current administration by stripping mentions of climate change, misinformation, and diversity, equity and inclusion efforts. Considering that each of those three topics has a documented impact on public health, the ability of tools like Elsa to deliver genuine benefits to the FDA and to US patients looks increasingly doubtful.