    Doctors Are Using AI to Transcribe Conversations With Patients. But Researchers Say the Tool Is Hallucinating 'Entire' Sentences.

    By Sherin Shibu

    1 day ago

    ChatGPT-maker OpenAI introduced Whisper two years ago as an AI tool that transcribes speech to text. Now the tool is used by AI healthcare company Nabla to help its 45,000 clinicians across more than 85 organizations, such as the University of Iowa Health Care, transcribe medical conversations.
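
    For context, this is roughly how the open-source Whisper model is called from Python. The snippet below is a minimal sketch, assuming the openai-whisper package is installed and a local audio file named "visit_recording.mp3" exists; the file name is illustrative, not something from the article.

        # pip install openai-whisper
        import whisper

        # Load a pretrained Whisper checkpoint ("base" is small and fast; larger checkpoints are more accurate)
        model = whisper.load_model("base")

        # Transcribe a recorded conversation; transcribe() returns a dict whose "text" field is the transcript
        result = model.transcribe("visit_recording.mp3")
        print(result["text"])

    Because Whisper generates the transcript with a text decoder rather than aligning each output word to the audio, nothing in this step guarantees that every sentence in the result was actually spoken, which is how entire phrases can be hallucinated.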

    However, new research shows that Whisper has been "hallucinating," or inserting statements that no one actually said into transcripts of conversations, raising questions about how quickly medical facilities should adopt an AI tool that produces such errors.

    According to the Associated Press, a University of Michigan researcher found hallucinations in 80% of Whisper transcriptions. An unnamed developer found hallucinations in half of more than 100 hours of transcriptions. Another engineer found inaccuracies in almost all of the 26,000 transcripts they generated with Whisper.

    Faulty transcriptions of conversations between doctors and patients could have "really grave consequences," Alondra Nelson, a professor at the Institute for Advanced Study in Princeton, New Jersey, told the AP.

    "Nobody wants a misdiagnosis," Nelson stated.

    Earlier this year, researchers at Cornell University, New York University, the University of Washington, and the University of Virginia published a study that tracked how many times OpenAI's Whisper speech-to-text service hallucinated as it transcribed 13,140 audio segments with an average length of 10 seconds. The audio was sourced from TalkBank's AphasiaBank, a database featuring the voices of people with aphasia, a language disorder that makes it difficult to communicate.

    The researchers found 312 instances of "entire hallucinated phrases or sentences, which did not exist in any form in the underlying audio" when they ran the experiment in the spring of 2023.
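
    As a rough, back-of-the-envelope reading of those figures (not a rate reported by the study itself), 312 hallucinated segments out of 13,140 works out to a little over 2% of the clips tested:

        # Approximate hallucination rate implied by the figures quoted above
        hallucinated = 312
        total_segments = 13_140
        print(f"{hallucinated / total_segments:.1%}")  # ~2.4%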

    Among the hallucinations, 38% contained harmful language, such as references to violence or stereotypes, that did not match the context of the conversation.

    "Our work demonstrates that there are serious concerns regarding Whisper's inaccuracy due to unpredictable hallucinations," the researchers wrote.

    The researchers say the findings could also point to a hallucination bias in Whisper, a tendency for it to insert inaccuracies more often for a particular group, and not just for people with aphasia.

    "Based on our findings, we suggest that this kind of hallucination bias could also arise for any demographic group with speech impairments yielding more disfluencies (such as speakers with other speech impairments like dysphonia [disorders of the voice], the very elderly, or non-native language speakers)," the researchers stated.

    Whisper has transcribed seven million medical conversations through Nabla, per The Verge.
