
    ChatGPT accused of 'gaslighting' TikToker over spelling in bizarre exchange

    By Becca Monaghan

    3 days ago


    TikTok users are riled up over a bizarre conversation in which ChatGPT ultimately 'gaslit' a man over an incorrect spelling.

    In a clip shared by the page Y'all Watch This, the user simply quizzed the AI bot on how many Rs were in the word strawberry. The answer is simple, right? Three Rs.

    Well, not for ChatGPT, which incorrectly answered two.

    When the user corrected the bot, it doubled down on its answer and was adamant there were two Rs.

    "Maybe you were counting the R in ‘straw’ and ‘berry’ separately," it hit back – which still makes three Rs.

    It then went on to "break it down," showing the correct spelling of strawberry.

    "'Strawberry - The R's are highlighted. There are two R's in 'strawberry,'" it said, before later adding: "There are indeed only two R's. If you're seeing three, there might be a mistake or a different context in your counting.

    The clip left millions of fellow TikTokers baffled, with one highlighting: "Not ChatGPT gaslighting you."

    "I cannot believe I watched this entire video and was getting heated," another added.

    Meanwhile, a third joked: "Everybody knows at least one person who behaves like ChatGPT."


    @yallwatchthis

    I asked ChatGPT how many Rs are in the word Strawberry #chatgpt #ai #strawberry


    It comes after Microsoft vice president Vik Singh said that AI models still lack the ability to ask for help when they don't know the answer or what to do.

    "Just to be really frank, the thing that’s really missing today is that a model doesn’t raise its hands and say 'Hey, I’m not sure, I need help,'" he told AFP.

    Despite major progress across AI applications including ChatGPT, the technology still reportedly makes up its own answers, or "hallucinates," causing concern among top tech leaders.

