    AI's Refusal to Ask Humans for Help Is Making It Hallucinate and Leading to Avoidable Gaffes

    By Staff Writer

    3 days ago

    Microsoft's Cure for AI's Hallucinations

    After ChatGPT took the world by storm, AI has been used for everything from writing research papers and code to creating images and videos. After OpenAI outpaced the tech giants with its revolutionary generative AI, others such as Google accelerated efforts to develop their own models, while Microsoft entered a partnership with the firm behind ChatGPT. However, Microsoft vice president Vik Singh recently pointed out that AI still needs fixing, because it continues to generate incorrect or made-up answers. Singh told AFP, "Just to be really frank, the thing that was really missing at the time was that a model didn’t raise its hands and say 'Hey, I’m not sure, I need help.'" This has led to customer frustration and a demand for more effective solutions.

    Is AI Hallucinating?

    Recently, Microsoft executives have been working to fix AI systems that 'hallucinate,' meaning the chatbots sometimes produce responses or information that isn't true or accurate. This often happens because AI models, especially large language models, give confident-sounding answers even when those answers aren't grounded in facts or reliable data. Marc Benioff, CEO of Salesforce, reported an increase in customers frustrated with Microsoft's Copilot because of this flaw, according to Indy100.

    Reasons for the Glitch

    Knewz.com noted that AI hallucinations and garbled responses occur for several reasons. One common example is facial recognition systems that were trained mainly on images of one ethnicity and then wrongly identify people from other ethnicities. Flaws in the chatbot model itself, or poor design, can also produce inaccurate, made-up results. Another issue is overfitting, where a model learns its training data too closely and then fails to handle new data and situations, according to Google Cloud. For instance, a stock prediction model might perform well on past data but fail on future trends because it mistakes random fluctuations for meaningful patterns, as the sketch below illustrates.
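    The overfitting failure described above is easy to reproduce. The snippet below is a minimal, hypothetical sketch using made-up numbers (it is not based on any Microsoft or Google system): a high-degree polynomial fitted to a short, noisy 'price' series scores well on the days it was trained on but predicts the following days badly, because it has memorised the noise rather than the trend.

```python
# Minimal overfitting demo with made-up data: a straight line generalises,
# while a degree-15 polynomial memorises the training noise and fails on new days.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

days = np.arange(30, dtype=float)
true_trend = 0.5 * days + 10                             # underlying pattern
prices = true_trend + rng.normal(0, 2, size=days.size)   # pattern plus noise

train_days, test_days = days[:20], days[20:]
train_prices, test_prices = prices[:20], prices[20:]

for degree in (1, 15):
    model = Polynomial.fit(train_days, train_prices, deg=degree)
    train_err = np.mean((model(train_days) - train_prices) ** 2)
    test_err = np.mean((model(test_days) - test_prices) ** 2)
    print(f"degree {degree:>2}: train error {train_err:10.2f}, "
          f"future error {test_err:14.2f}")

# Typical result: the degree-15 fit has the lower training error but a far
# larger error on the ten "future" days it never saw.
```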

    What did Microsoft Suggest?

    Singh revealed that skilled professionals at Microsoft were working on ways to make chatbots admit when they don’t know the answer and ask for help whenever necessary. He suggested that even if the chatbot only handled half of the incoming requests on its own and passed the rest to people, it would still save money and keep answers accurate. Singh told News.com.au, "Every time a new request comes in, they spend $8 for a customer service rep to handle it, so there are real savings to be had. It’s also a better experience for the customer because they get a faster response."
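    To make the quoted figure concrete, here is a rough back-of-the-envelope calculation. The $8-per-request cost comes from the article; the monthly request volume and the per-request cost of an automated answer are hypothetical placeholders, not Microsoft numbers.

```python
# Hypothetical cost comparison: every request handled by a human, versus a
# chatbot that resolves half of the requests and escalates the rest to a person.
requests_per_month = 10_000        # assumed volume (placeholder)
human_cost_per_request = 8.00      # figure quoted in the article
bot_cost_per_request = 0.50        # assumed cost of an automated answer
handled_by_bot = 0.5               # chatbot resolves half, escalates the rest

all_human = requests_per_month * human_cost_per_request
mixed = requests_per_month * (
    handled_by_bot * bot_cost_per_request
    + (1 - handled_by_bot) * human_cost_per_request
)

print(f"all-human cost:       ${all_human:,.0f}")           # $80,000
print(f"chatbot + escalation: ${mixed:,.0f}")                # $42,500
print(f"monthly savings:      ${all_human - mixed:,.0f}")    # $37,500
```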

    Ongoing Research and Development

    Regarding the issue, Google’s head of Search, Liz Reid, told The Verge, "There was a balance between creativity and factuality. We were really going to skew it toward the factuality side." A former Google researcher suggested the problem might be fixed within a year, although he was doubtful. Microsoft has developed a tool to help some users detect these errors. A study from the National University of Singapore suggested that such errors are inherent to large language models, just as people can't always be right. Companies often downplay the problem by reminding users to check responses for accuracy: their tools might make mistakes, so users have to verify important information themselves.
