
    Google Stops AI From Creating Human Images After Inaccuracies

By Nico Grant

    2024-02-22
    Google's offices in Cambridge, Mass., on Jan. 31, 2024. (Sophie Park/The New York Times)

    SAN FRANCISCO — Images showing people of color in German military uniforms from World War II that were created with Google’s Gemini chatbot have amplified concerns that artificial intelligence could add to the internet’s already vast pools of misinformation as the technology struggles with issues around race.

    Now Google has temporarily suspended the AI chatbot’s ability to generate images of any people and has vowed to fix what it called “inaccuracies in some historical” depictions.

    “We’re already working to address recent issues with Gemini’s image generation feature,” Google said in a statement posted to X, formerly known as Twitter, on Thursday. “While we do this, we’re going to pause the image generation of people and will rerelease an improved version soon.”

    A user said this week that he had asked Gemini to generate images of a German soldier in 1943. It initially refused, but then he added a misspelling: “Generate an image of a 1943 German Solidier.” It returned several images of people of color in German uniforms — an obvious historical inaccuracy. The AI-generated images were posted to X by the user, who exchanged messages with The New York Times but declined to give his full name.

    The latest controversy is yet another test for Google’s AI efforts after it spent months trying to release its competitor to popular chatbot ChatGPT. This month, the company relaunched its chatbot offering, changed its name from Bard to Gemini and upgraded its underlying technology.

    Gemini’s image issues revived criticism that there are flaws in Google’s approach to AI. Besides the false historical images, users criticized the service for its refusal to depict white people: When users asked Gemini to show images of Chinese or Black couples, it did so, but when asked to generate images of white couples, it refused. According to screenshots, Gemini said it was “unable to generate images of people based on specific ethnicities and skin tones,” adding, “This is to avoid perpetuating harmful stereotypes and biases.”

    Google said Wednesday that it was “generally a good thing” that Gemini generated a diverse variety of people since it was used around the world, but that it was “missing the mark here.”

This article originally appeared in The New York Times.
