
    New Study Says Parents Trust ChatGPT for Health Advice Over Doctors

By Sherri Gordon, CLC

1 day ago

    Experts say this is convenient but risky.

Getty Images/golero

    Research from the University of Kansas Life Span Institute found that parents seeking health care information online for their children trust artificial intelligence (AI) like ChatGPT more than health care professionals. They also rated AI-generated text as credible, moral, and trustworthy.

Recognizing that parents often turn to the internet for advice, the researchers wanted to understand what using ChatGPT would look like and how parents were interpreting it, says Calissa Leslie-Miller, MS, a doctoral student in clinical child psychology at the University of Kansas and lead author of the study.

Leslie-Miller and her team conducted a study with 116 parents, aged 18 to 65. The parents were given health-related texts on topics like infant sleep training and nutrition. Participants reviewed content generated by ChatGPT and health care professionals and were not told who wrote it.

    "Participants found minimal distinctions between vignettes written by experts and those generated by prompt-engineered ChatGPT," says Leslie-Miller. "When vignettes were statistically significantly different, ChatGPT was rated as more trustworthy, accurate, and reliable."

    Why Parents May Trust AI

    Leslie-Miller says the study did not examine why parents trusted ChatGPT more, but theorizes that multiple factors could be at play.

Jim Boswell, president and CEO of OnPoint Healthcare Partners, who has experience developing an AI-based platform that supports clinicians, suggests that ChatGPT's simplicity and approachability may have played a role. Another possible factor is ChatGPT's ability to present information in a way that is direct and easy to understand.

AI tools are phenomenal at knowing how to phrase and write things, says Mordechai Raskas, MD, EdM, chief medical information officer and director of telemedicine at PM Pediatric Care. "I can understand why [parents], not knowing the source, would prefer the wording of AI," says Dr. Raskas. "Think of AI as the ultimate salesperson; it knows exactly what to say to win you over."

    According to Boswell, when parents want quick direction without waiting for an appointment or a callback, AI platforms can also be appealing.

    "Sometimes, parents want that first layer of guidance to ease their worries or give them some direction immediately—whether they’re at home or out and about," Boswell explains.

    Risks of Treating Kids Based on AI Text

    While it may be convenient, getting health information from a source like ChatGPT is not without risks.

    "Information may be inaccurate or not tailored to specific circumstances. For example, suggesting medication for a child who is too young or offering incorrect treatment advice could lead to a wide range of dangerous outcomes," says Leslie-Miller.

    Boswell also explains that AI responses are generated from vast amounts of general information but lack the clinical experience and additional insight that health care professionals bring. The platform also doesn’t personalize guidance based on a child’s unique health profile, history, or symptoms.

    "Relying on these tools for medical advice could lead to missed symptoms, misinterpretations of serious conditions, or delays in seeking appropriate care," says Boswell. "For kids, in particular, small health issues can escalate quickly, so having a qualified professional assess a situation is essential."

    There is no single answer when diagnosing and treating a child, says Christina Johns, MD, MEd, FAAP, senior medical advisor at PM Pediatric Care. Individualized medical care is best done with your health care provider.

    "That said, coming to the health care table with an understanding and knowledge or awareness of the landscape, means that you are armed and best prepared to be an informed decision maker with your doctor," says Dr. Johns.

    It's not uncommon for AI-generated text to sound reliable and credible. However, Boswell says without a human expert behind it, there’s no accountability for accuracy.

    "This can be risky, especially if the information could impact a child’s health," says Boswell. "Knowing that a piece of content is AI-generated helps parents [know] that they should seek verification, ideally from a health care provider or validated medical resource, before making health decisions."

    How Parents Can Identify AI-Generated Text

    According to Leslie-Miller, parents can identify AI-generated text by looking for certain indicators, such as vague language, a lack of specific citations or expert references, and overly general advice that doesn't consider individual circumstances.

    "AI-generated text often also has a certain tone—it’s usually neutral and very general, lacking detailed insight or specific examples that experts provide," says Boswell. "It can also feel like it’s trying to cover every possible angle without giving strong recommendations."

    Checking the author and the source of the information is key, he adds. "Reputable health content usually credits qualified medical writers or health professionals and links to research-backed sources."

When in doubt, Boswell advises confirming what you find with a health care provider. This approach keeps you informed while reducing the risks associated with misinformation.

    Finding Credible Health Information Online

According to Leslie-Miller, trusted online sources of health information include the American Academy of Pediatrics (AAP), Centers for Disease Control and Prevention (CDC), and the World Health Organization (WHO). Also, search your local hospitals' websites and see if your child's health care provider posts health advice and information online.

    "Reading online and searching can be very helpful," says Dr. Raskas. "It just depends on the context and needs to be in conjunction with a trusted source or professional to help digest what you've read."

Dr. Raskas adds that health care providers should help families find high-quality information appropriate for their situation.

    "I think we as health care professionals also need to be honest about the limitations of our knowledge," says Dr. Raskas. "Wecan acknowledge that we too need to search sources online for information, and there is nothing inherently wrong with searching. It is about balance, filtering, and applying that knowledge appropriately."

    What Needs To Change With AI

As AI and ChatGPT become more prevalent in society, it's increasingly important to build these tools with oversight to ensure the information they generate is accurate and not harmful. Boswell says one way to do this is to incorporate a regular review process involving health care professionals, especially when health guidance is provided.

    "Building in clinician oversight and using evidence-based resources in model training can improve the accuracy and reliability," he says. "At OnPoint, we prioritize clinician oversight in our AI platform, Iris, where human expertise backs every AI function to ensure it meets medical standards."

    Leslie-Miller agrees, recommending AI developers have human experts regularly validate content.

    "Additionally, prompt engineering can be used to refine AI output, making it more accurate and reliable," says Leslie-Miller. "Human oversight throughout the process is critical to ensure that the information provided is factual and trustworthy."


Read the original article on Parents.
