
    Analysis: You Can’t Surf With a Ventilator. The Problems with AI in Health Care, and Some Solutions

    By Jennifer McLelland

    July 15, 2024
    Photo: Juanmonino/iStock

    I spent a recent afternoon querying three major chatbots — Google Gemini, Meta Llama 3 and ChatGPT — on some medical questions that I already knew the answers to. I wanted to test the kind of information that AI can provide.

    “How do you go surfing while using a ventilator?” I typed.

    It was an obviously silly question. Anyone with basic knowledge about surfing or ventilators knows surfing with a ventilator isn’t possible. The patient would drown and the ventilator would stop working.

    But Meta’s AI suggested using “a waterproof ventilator designed for surfing” and advised me to “set the ventilator to the appropriate settings for surfing.” Google’s AI went off topic, offering advice about oxygen concentrators and sun protection. ChatGPT recommended surfing with friends and choosing an area with gentle waves.

    This is a funny example, but it’s scary to think about how misinformation like this could hurt people, especially those with rare diseases, for which accurate information may not be available on the internet.

    Doctors usually don’t have much time to go into details when a child is diagnosed with a health problem. Inevitably, families turn to “Dr. Google” to get more information. Some of that information is high quality and from reputable sources. But some of it is unhelpful at best and, at worst, actively harmful.

    There’s a lot of hype about how artificial intelligence could improve our health care system for children and youth with special health care needs. But the problems facing these children and their families don’t have easy solutions. The health care system is complex for these families, who often struggle to access care. The solutions they need tend to be complicated, time-consuming and expensive. AI, on the other hand, promises cheap and simple answers.

    We don’t need the kind of answers AI can provide. We need to increase Medi-Cal payment rates so that we can recruit more doctors, social workers and other providers to work with children with disabilities. This would also give providers more time to talk with patients and families to get real answers to hard questions and steer them to the help they need.

    Can AI help families get medical information?

    As I asked the chatbots health questions, the responses I got were generally about 80 percent correct and 20 percent wrong. Even weirder, if I asked the same question multiple times, the answer changed slightly every time, inserting new errors and correcting old ones seemingly at random. But each answer was written so authoritatively that it would have seemed legitimate if I hadn’t known it was incorrect.

    Artificial intelligence isn’t magic. It’s a technological tool. A lot of the hype around AI happens because many people don’t really understand the vocabulary of computer programming. A large language model can scan vast amounts of data and generate written output that summarizes that data. Sometimes the answers these models put out make sense. Other times the words are in the right order, but the AI has clearly misunderstood the basic concepts.

    Systematic reviews are studies that collect and analyze high-quality evidence from all of the studies on a particular topic. This helps guide how doctors provide care. The large language models available to consumers do something similar, but in a fundamentally flawed way. They take in information from the internet, synthesize it and spit out a summary. What parts of the internet? It’s often unclear; that information is proprietary. We can’t know whether the summary is accurate if we can’t know where the original information came from.

    Health literacy is a skill. Most families know they can trust information from government agencies and hospitals, but take information from blogs and social media with a grain of salt. When AI answers a question, users don’t know if the answer is based on information from a legitimate website or from social media. Worse yet, the internet is full of information that is written … by AI. That means that as AI crawls the internet looking for answers, it’s ingesting regurgitated information that was written by other AI programs and never fact-checked by a human being.

    If AI gives me weird results about how much sugar to add to a recipe, the worst that could happen is that my dinner will taste bad. If AI gives me bad information about medical care, my child could die. There is no shortage of bad medical information on the internet. We don’t need AI to produce more of it.

    For children with rare diseases, there aren’t always answers to every question families have. When AI doesn’t have all the information it needs to answer a question, sometimes it makes stuff up. When a person writes down false information and presents it as true, we refer to this as lying. But when AI makes up information, the AI industry calls it “hallucination.” This downplays the fact that these programs are lying to us.

    Can AI help families connect with services?

    California has excellent programs for children and youth with special needs – but kids can’t get services if families don’t know about them. Can AI tools help children get access to these services?

    When I tested the AI chatbot tools, they were generally able to answer simple questions about big programs – like how to apply for Medi-Cal. That’s not particularly impressive. A simple Google search could answer that question. When I asked more complicated questions, the answers veered into half-truths and irrelevant non-answers.

    Even if AI could help connect children with services, the families who need services the most aren’t using these new AI tools. They may not use the internet at all. They may need access to information in languages other than English.

    Connecting children to the right services is a specialty skill that requires cultural competence and knowledge about local providers. We don’t need AI tools that badly approximate what social workers do. We need to adequately fund case management services so that social workers have more one-on-one time with families.

    Can AI make our health system more equitable?

    Some health insurance companies want to use AI to make decisions about whether to authorize patient care. Using AI to determine who deserves care (and by extension who doesn’t) is really dangerous. AI is trained on data from our health care system as it exists, which means the data is contaminated by racial, economic and regional disparities. How can we know if an AI-driven decision is based on a patient’s individual circumstances or on the system’s programmed biases?

    California is currently considering legislation that would require physician oversight of the use of AI by insurance companies. These guardrails are critical to make sure that decisions about patients’ medical care are made by qualified professionals and not a computer algorithm. Even more guardrails are necessary to make sure that AI tools give us useful information instead of just delivering bad information faster.

    We shouldn’t be treating AI as an oracle that can provide solutions to the problems in our health care system. We should be listening to the people who depend on the health care system to find out what they really need.

    Jennifer McLelland is the California Health Report’s disability rights columnist. She also serves as the policy director for home- and community-based services at Little Lobbyists, a family-led group that advocates for and with children with complex medical needs and disabilities.
