    Ask Dr. Chatbot? AI is giving us unsafe health advice, study shows

By DPA



    Artificial intelligence chatbots cannot be relied on to give accurate, safe or even clear advice about medication, according to a team of Belgian and German researchers.

    "Chatbot answers were largely difficult to read and answers repeatedly lacked information or showed inaccuracies, possibly threatening patient and medication safety," say the authors of findings published by BMJ Quality & Safety, a British Medical Journal publication.

Around a third of the replies could cause harm to patients who followed the bot's medication advice, the team warned.

Despite being "trained" on data taken from across the internet, bots are nonetheless prone to generating "disinformation and nonsensical or harmful content," the researchers warned, in an apparent reference to so-called AI "hallucinations" - industry jargon for when chatbots confidently produce false or fabricated information.

The team from the University of Erlangen–Nuremberg and pharmaceutical giant GSK put 10 questions about around 50 of the most frequently prescribed drugs in the US to Microsoft's Copilot bot, assessing its answers for readability, completeness and accuracy.

The team found that a college-graduate level of education would be required to understand the chatbot's answers. Previous research has shown similar levels of incorrect and harmful answers from OpenAI's ChatGPT, the main AI chatbot service and largest rival to Microsoft's Copilot, the researchers noted.

    "Healthcare professionals should be cautious in recommending AI-powered search engines until more precise and reliable alternatives are available," the researchers said.
