    YouTube can spot those AI-produced faces and music tracks you're seeing all over

    By Eric Hal Schwartz, TechRadar



    YouTube is continuing to tap-dance ahead of the rush of AI-produced content popping up on the platform with a new set of tools that spot when AI-generated people, voices, and even music appear in videos. The newly upgraded Content ID system is expanding from looking for copyright infringement to looking for synthetic voices performing songs, and there are also new ways to detect when deepfake faces show up in videos.

    The "synthetic singing" voice identification tool for the Content ID system is fairly straightforward. The AI will automatically detect and manage AI-generated imitations of singing voices and alert users of the tool. Google plans to roll out a pilot version of this system early next year before a broader release.

    On the visual content front, YouTube is testing a way for content creators to detect AI-generated videos that use their faces without their approval. The idea is to give artists and public figures more control over how AI versions of their likenesses are deployed, particularly on the video platform. Ideally, this would stop deepfakes or other unauthorized manipulations from spreading.

    Both features build on the policy quietly added to YouTube's terms and conditions in July to address AI-generated mimicry. Affected individuals can request the removal of videos featuring deepfake versions of themselves through YouTube's privacy request process. That was a big shift from simply labeling a video as AI-generated or misleading content, expanding the takedown policy to cover AI.

    "These two new capabilities build on our track record of developing technology-driven approaches to tackling rights issues at scale," YouTube Vice President of Creator Products Amjad Hanif wrote in a blog post. "We're committed to bringing this same level of protection and empowerment into the AI age."

    YouTube's AI Infusion

    The flip side of the AI detection tools is for creators who have seen their videos scraped to train AI models without their permission. Some YouTube video makers have been annoyed that their work has been picked up for training by OpenAI, Apple, Nvidia, and Google itself without any request or compensation. The exact plan is still in early development, but it will presumably deal at least with Google's own scraping.

    "We’ll continue to employ measures to ensure that third parties respect [YouTube's terms and conditions], including ongoing investments in the systems that detect and prevent unauthorized access, up to and including blocking access from those who scrape," Hanif wrote. "That said, as the generative AI landscape continues to evolve, we recognize creators may want more control over how they collaborate with third-party companies to develop AI tools. That's why we're developing new ways to give YouTube creators choice over how third parties might use their content on our platform."

    The announcements are part and parcel of YouTube's moves to make AI both a deeply integrated part of the platform and one that people trust. That's why these kinds of protection announcements often come right before or after plans like YouTube's Brainstorm with Gemini tool for coming up with inspiration for a new video. Not to mention anticipated features like an AI music generator, which in turn pairs well with the new tool for removing copyrighted music from your video without taking it down completely.
