    Groundbreaking AI and brain implant allows ALS patient to speak to family 'for first time in years'

    By Gina Martinez

    12 hours ago

    Researchers have developed an AI-powered device that translates thoughts into words for people who are unable to speak.

    For years, brain-computer interfaces have helped paralyzed people regain lost function in different parts of their bodies. Now researchers are taking the technology to the next level, developing speech brain-computer interfaces to restore communication for people who cannot speak.

    As a person attempts to speak, the brain-computer interface records the unique brain signals associated with the attempted muscle movements of speech and translates them into words. Those words are then displayed as text on a screen or spoken aloud using text-to-speech software, livescience.com reported.
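
    The overall loop the article describes (record signals, decode them, then display or speak the result) can be pictured with a short sketch. This is a hypothetical illustration in Python, not the BrainGate2 software; every name in it is invented, and the decoder is a stand-in for the trained neural network the real system uses.

        import random

        NUM_ELECTRODES = 256  # electrode count reported later in the article

        def read_neural_frame() -> list[float]:
            # Stand-in for the implanted recording hardware: one sample
            # per electrode for a single time-slice.
            return [random.gauss(0.0, 1.0) for _ in range(NUM_ELECTRODES)]

        def decode_to_text(frames: list[list[float]]) -> str:
            # Stand-in for the trained decoder (signals -> phonemes -> words).
            return "hello world"

        def speak(text: str) -> None:
            # Stand-in for text-to-speech; the real system can also show
            # the decoded text on a screen.
            print(f"[TTS] {text}")

        if __name__ == "__main__":
            frames = [read_neural_frame() for _ in range(100)]  # a stretch of attempted speech
            speak(decode_to_text(frames))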

    Leading the BrainGate2 clinical trial is Nicholas Card, a researcher in the Neuroprosthetics Lab at the University of California, Davis. He and his colleagues recently demonstrated a speech brain-computer interface that deciphered the attempted speech of a man with ALS, also known as Lou Gehrig's disease.

    The interface converted neural signals into text with over 97% accuracy. "Key to our system is a set of artificial intelligence language models — artificial neural networks that help interpret natural ones," he wrote.

    Card said that the first step in their speech brain-computer interface is recording brain signals with surgically implanted recording devices. The researchers were able to record neural activity from 256 electrodes as participant Casey Harrell attempted to speak, Card said.

    To work out what the user is trying to say from these complex brain signals, researchers could map neural activity patterns directly to spoken words. That is difficult because the human vocabulary is so large, so the team used an alternative strategy: mapping brain signals to phonemes, the basic units of sound that make up words, according to Card.

    In English, there are 39 phonemes, including ch, er, oo, pl and sh, that can be combined to form any word.

    "We can measure the neural activity associated with every phoneme multiple times just by asking the participant to read a few sentences aloud," Card said. "By accurately mapping neural activity to phonemes, we can assemble them into any English word, even ones the system wasn't explicitly trained with."

    Using those models, the researchers were able to decipher phoneme sequences during attempted speech with over 90% accuracy, according to Card. They then converted the phoneme sequences into words and sentences using two complementary types of machine learning language models that make highly educated guesses about what the brain-computer interface user is trying to say, Card said.
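
    The article does not spell out which two model types were paired, but a common design in speech decoding is a fast n-gram model that scores short word sequences plus a second model that rescores whole sentences. The toy below assumes that design; the probability table and scores are invented for illustration.

        BIGRAM_LOGPROB = {  # hypothetical bigram log-probabilities
            ("i", "want"): -0.5, ("want", "water"): -1.0,
            ("i", "wont"): -4.0, ("wont", "water"): -5.0,
        }
        UNSEEN = -10.0  # penalty for word pairs the model has never seen

        def ngram_score(words: list[str]) -> float:
            # First model: fast n-gram scoring of adjacent word pairs.
            return sum(BIGRAM_LOGPROB.get(p, UNSEEN) for p in zip(words, words[1:]))

        def sentence_rescore(words: list[str]) -> float:
            # Second, complementary model: a whole-sentence pass. Here a
            # trivial length penalty stands in for a neural language model.
            return -0.1 * len(words)

        def best_guess(candidates: list[list[str]]) -> list[str]:
            # Combine both scores to pick the "highly educated guess".
            return max(candidates, key=lambda w: ngram_score(w) + sentence_rescore(w))

        # Two candidates a phoneme decoder might confuse:
        print(best_guess([["i", "want", "water"], ["i", "wont", "water"]]))
        # -> ['i', 'want', 'water']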

    So far, this speech decoding strategy has been incredibly successful, allowing Harrell to "speak" with over 97% accuracy using just his thoughts. The technology has changed his life, allowing him to communicate with his family and friends for the first time in years, in the comfort of his own home, according to Card.

    The goal is now to refine the device and find a way to make it more accessible, durable and portable, Card said.
