Windows Central

    Microsoft unveils new Correction tool to address AI hallucinations but there are early concerns — "Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water"

By Kevin Okemwa

    25 days ago


    What you need to know

    • Microsoft debuts a new Correction tool to prevent AI hallucinations.
    • Experts claim the tool might address some of the issues impacting the technology, but it's not a silver bullet for accuracy and may create critical issues, too.
    • They further claim that the tool may create a false sense of security among avid AI users, who might perceive it as a safeguard against erroneous outputs and misinformation.

Microsoft recently unveiled several new artificial intelligence safety features to promote security, privacy, and reliability, including a new tool dubbed Correction. As the name suggests, the tool is designed to detect and correct factual errors in AI-generated responses to text-based prompts.

The new tool is part of Microsoft's Azure AI Content Safety API and works with several text-generation AI models, including Meta's Llama and OpenAI's GPT-4o. Speaking to TechCrunch, a Microsoft spokesman revealed that the tool uses both small and large language models to check a model's output against credible grounding sources.
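To make the grounding idea concrete, here is a minimal sketch of what a request to Azure AI Content Safety's groundedness-detection feature (which Correction builds on) might look like. The endpoint route, API version, and field names below follow the public preview documentation at the time of writing and should be treated as assumptions, not a definitive client; the example only constructs the request, it does not call the live service.

```python
import json

# Assumption: the preview API version and REST route for groundedness
# detection in Azure AI Content Safety; both may change between previews.
API_VERSION = "2024-09-15-preview"

def build_groundedness_request(text, grounding_sources, correction=True):
    """Build the JSON body asking the service to check `text` against
    trusted `grounding_sources` and, optionally, propose a correction
    for any ungrounded claims."""
    return {
        "domain": "Generic",                    # content domain being checked
        "task": "Summarization",                # scenario the text came from
        "text": text,                           # the model output to verify
        "groundingSources": grounding_sources,  # trusted reference documents
        "correction": correction,               # ask for a suggested rewrite
    }

def request_url(endpoint):
    # Assumption: REST route used by the preview API.
    return f"{endpoint}/contentsafety/text:detectGroundedness?api-version={API_VERSION}"

body = build_groundedness_request(
    "The Eiffel Tower is located in Berlin.",
    ["The Eiffel Tower is a wrought-iron lattice tower in Paris, France."],
)
print(json.dumps(body, indent=2))
```

In a real integration, this body would be POSTed to `request_url(...)` with an `Ocp-Apim-Subscription-Key` header, and the response would indicate whether ungrounded content was detected, along with a corrected version of the text when correction is enabled.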

However, experts warn that the tool might not be a silver bullet for AI hallucinations. As OS Keyes, a PhD candidate at the University of Washington, put it:

    "Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water. It's an essential component of how the technology works."

Hallucinations aren't AI's only issue

In the early days of Bing Chat (now Microsoft Copilot), users expressed frustration over the tool's user experience, citing erroneous outputs. Microsoft introduced character limits to cut off long chats, which seemingly confused the chatbot. While instances of hallucination have significantly declined, that doesn't necessarily mean we're in the clear.

For instance, Google's AI Overviews feature was recently spotted recommending eating rocks and glue, and even suicide. The company quickly patched the issue, blaming a "data void" and fabricated screenshots.

However, the Microsoft spokesman claims the new Correction tool will significantly improve the reliability and trustworthiness of AI-generated content by helping app developers reduce "user dissatisfaction and potential reputational risks." While the spokesman admits that the tool won't fix AI's underlying accuracy issues, it will align outputs with a set of grounding documents.

Keyes claims the Correction tool may address some of the issues impacting AI, but it will create new ones in equal measure. Some even argue the tool might create a false sense of security, prompting users to take factually incorrect AI-generated information as gospel truth.

    It'll be interesting to see how the new Correction tool resonates with avid AI users, and whether it will be instrumental in detecting AI hallucinations, ultimately making AI-generated responses more trustworthy.

Elsewhere, Microsoft recently unveiled Copilot Academy, a new program designed to help users leverage Copilot's capabilities to the fullest. According to a separate report, the top complaint at Microsoft's Copilot department is that the AI tool doesn't work as well as OpenAI's ChatGPT. Microsoft claimed this is because users aren't leveraging its capabilities as intended.


Comments / 1

Retired Vet, 24 days ago: "What happens when AI determines a better method of getting a project accomplished, and doesn't present any other options? AI is being trusted (utilized) to solve problems encountered by companies to get a project off paper and into production. It must be checked to ensure it offers all possible methods, not just the preferred one. In short, a human must do the job anyway."