    M&E’s Embrace of AI Dampens Potential for Damage Mitigation

By Fred Dawson


    As the disruptive forces unleashed by artificial intelligence engulf the media and entertainment industry, the prospects for minimizing the potential for harm are growing less certain by the day.

    Not that there isn’t an immense amount of work going into mitigating the dangers on the part of station groups, networks, distribution affiliates and industry organizations. But those efforts parallel an exuberant industry-wide embrace of AI that’s driving cost-saving efficiencies, enhanced user experiences and monetization to new heights at an accelerating pace.


    NAB President/CEO Curtis LeGeyt (Image credit: NAB)

No one among the many industry leaders calling for corporate policies, industry standards and government regulations that would establish guardrails against misuse of AI questions the wisdom of maximizing the AI upside. “We certainly welcome AI innovations that enhance operational efficiencies and localism, enabling stations to better serve their communities,” says NAB president and CEO Curtis LeGeyt.

    Hype and Unproven Expectations
AI dominated the 2024 NAB Show, bringing to light benefits spanning creative, production, encoding, packaging, distribution, advertising and anything else where data analytics comes into play. But with the guardrails coming together slowly on a learning curve impeded by hype and unproven expectations, the situation calls to mind scenes from old westerns that left moviegoers on the edges of their seats watching good guys trying to stop runaway trains.


    Judah Libin (Image credit: Deloitte)

    As Deloitte managing director Judah Libin notes, AI adoption, including the use of large language model (LLM) generative AI tools that exploded into prominence two years ago, is proceeding at “a pace that far exceeds the speed at which our societies and regulatory frameworks can adapt, creating significant gaps in our understanding and governance.” Making matters worse, Libin adds, the technology poses “profound ethical and societal dilemmas, from the creation of deepfakes to the inherent biases in AI-generated outputs.”

    When it comes to formulating laws relating to AI, those dilemmas are especially daunting for a Congress locked in election-year combat. Unlike in Europe, where the EU Parliament has passed the “AI Act” with final approval expected by year’s end, Congress has made little headway beyond holding some hearings and taking a preliminary look at a handful of bills.

Nobody is more focused on the need for legislative action than LeGeyt, who has had a front-row seat at two Senate hearings on AI issues. During an appearance before a Senate subcommittee in January, the NAB president highlighted examples of AI-related abuses in three major areas of broadcast industry concern: copyright infringement, misuse of AI-generated likenesses of radio and TV personalities to spread false information, and other uses of deepfakes that make it hard to distinguish between truth and fiction.

“I have seen the harm inflicted on local broadcasters and our audiences by Big Tech giants whose market power allows them to monetize our local content while siphoning away local ad dollars,” LeGeyt tells TV Tech. “The sudden proliferation of generative AI tools risks exacerbating this harm. To address this, NAB is committed to protecting against the unauthorized use of broadcast content and preventing the misuse of our local personalities by generative AI technologies.”

    Along with pursuing Congressional action, NAB is deeply engaged in fostering self-governance.

    “Our technology team is working closely with these new innovations to equip local stations with the best tools to integrate into their operations,” LeGeyt says.

    Of course, with whatever help NAB and the various standards organizations can provide in developing tools and standards, it’s up to broadcasters to execute, notes Deloitte’s Libin. “The burden is on broadcasters to organize, monitor and regulate themselves, then focus on industry standardization,” he says. A “clear governance structure and ethical guidelines” are essential, as is “rigorous and continuous testing.”

    In the Newsrooms
    A big part of self-governance is centered in broadcast newsrooms. Fostering such efforts has been a top priority at the Radio Television Digital News Association, the first national journalistic association to issue guidelines on news outlets’ use of AI, according to RTDNA president and CEO Dan Shelley.


    Dan Shelley (Image credit: RTDNA)

Issued a year ago, the guidelines focus on how AI is used in newsgathering, editing and distribution, with attention to ensuring accuracy through contextual and source validation, avoiding violations of privacy and maintaining clarity when AI is used to modify content. The association says newsroom policies should also keep faith with audiences, informing them of AI usage and assuring them that everything is reviewed by journalists for adherence to journalistic principles.

    Things are moving in the right direction. “There are infrastructures in place with experts thinking hard and acting very carefully when it comes to testing and implementing AI technology in local newsrooms,” Shelley says.


But staffing up with AI specialists is just one part of the labor-intensive work of keeping AI on track. Everyone involved in news broadcasting has a role to play, underscoring the fact that, as Shelley stresses, “no matter how good AI becomes, it will never replace human intellect and the sensibilities to produce the best results obtainable.”

    Humans in the Loop
    Such views pervade discussions about AI in industry leadership circles, leading many to suggest rank-and-file fears of job losses are overblown. Ironically, the hype factor is contributing to job security.

    In this Wild West tableau, rampant hype is wreaking havoc with broadcasters’ and everyone else’s efforts to put AI to good use. “One thing that drives me nuts is the amount of hype we’ve seen in the past two years,” says Yves Bergquist, director of the AI & Neuroscience in Media Project at the Entertainment Technology Center (ETC). “Keeping humans in the loop is extremely important.”


    Yves Bergquist (Image credit: ETC)

Bergquist co-chairs, with AMD fellow Fred Walls, the task force on AI standards and media mounted by ETC and the Society of Motion Picture and Television Engineers (SMPTE). Earlier this year the task force produced what SMPTE President Renard Jenkins calls “the most comprehensive document looking at both the technical side as well as the impact and the ethical and responsibility areas of this particular technology.”

Participating in a recent webinar with Jenkins and Walls, Bergquist says research shows that the true capabilities of AI systems in the vast majority of cases “are a fraction of what they’re trying to advertise.” When it comes to getting to the truth of what can be done, he adds, “not enough people talk about how hard this is.”


    Renard Jenkins (Image credit: SMPTE)

Jenkins notes that AI-assisted facial recognition is one example of AI failing to live up to a widely accepted 85% performance standard. “People go into using AI thinking it’s going to save us a lot of money,” Jenkins says. “Most of the time, the reason it fails to deliver is there isn’t enough time put into figuring out what it can really do.”

    Societal and Ethical Considerations
One of the most daunting tasks involves identifying the biases that are inevitably built into LLMs. While part of the challenge involves eliminating the most egregious biases, such as by using reference material in facial recognition, it’s also important to be transparent about the unavoidable biases that arise from cultural disparities.

The blending of ethical and performance issues in bias assessment is just one example of how ethics and performance are really two sides of the same coin, Bergquist notes. “I have yet to see a requirement related to ethical AI that isn’t also a requirement of rigorous AI practice,” he says.

    This is reflected in the dozens of standards and policy framework initiatives identified in the AI task force report that are underway at ISO-EPS, IEEE, ITU, W3C and other organizations. It’s an impressive list, but there’s obviously a long way to go, especially when it comes to setting the ethical frameworks on which performance standards must be built.

    SMPTE’s report spells out the challenge: “While stakeholders in the development of this plan expressed broad agreement that societal and ethical considerations must factor into AI standards, it is not clear how that should be done and whether there is yet sufficient scientific and technical basis to develop those standards provisions. Moreover, legal, societal, and ethical considerations should be considered by specialists trained in law and ethics.”

How fast the industry gets to having real guardrails will greatly depend on how big the perceived risk becomes, which “will help to drive decision making about the need for specific AI standards and standards-related tools.” Whether there is much comfort in the realization that the scarier AI gets, the more likely we are to act is debatable.


    Fred Walls (Image credit: AMD)

But even that modicum of relief from anxiety is yet to be found when it comes to the alarms raised by “deepfakes,” in which AI is used to create alternative audio and video that doesn’t represent reality. When asked about progress toward creating tools capable of identifying professional-caliber deepfakes, Walls replies that while efforts to develop such tools abound, the pace of deepfake AI development is such that it “will be really hard to tell if something is a deep fake or not.”

Taking it all into account, we can only hope the old western analogy holds to reel’s end, when the sound of screeching brakes signals the good guys have saved the day.
