
    Full Senate to take up pre-election regulation of ‘deepfake’ misinformation

    By John Hult

    February 7, 2024

    Sen. Liz Larson, D-Sioux Falls, on the state Senate floor during the 2024 legislative session. (Makenzie Huber/South Dakota Searchlight)

    Opposition from media organizations wasn’t enough to stop a bill that would require the labeling of “deepfakes” within 90 days of an election.

    On a 5-3 vote Wednesday, lawmakers on the Senate State Affairs Committee chose to send Senate Bill 96 on to a full debate on the Senate floor.

    “I doubt it’s going to be on the governor’s desk this year, but this is an important discussion,” said Lee Schoenbeck, R-Watertown, a committee member who serves as Senate president pro tempore.

    The bill is one of two drafted by Sen. Liz Larson, D-Sioux Falls, and Sen. David Johnson, R-Rapid City. Johnson’s bill, which hasn’t yet had a hearing, targets deepfakes used to defame or harass; Larson’s SB 96 focuses narrowly on elections.

    “Deepfakes” include computer-generated images, videos and audio clips designed to mimic real people, typically produced using artificial intelligence technologies. Voice-to-text technology hinges on artificial intelligence, as do photo editing programs that can erase people from digital photos with the swipe of a finger.

    Concerns about the potential of deepfakes to spread misinformation have grown steadily over the past year or so, and they’ve already worked their way into the 2024 presidential campaign.

    South Dakota Attorney General Marty Jackley was among the state officials to send a letter to a company called Life Corporation, which allegedly used artificial intelligence to produce robocalls offering election day misinformation to New Hampshire primary voters in the deepfaked voice of President Joe Biden.

    Jackley called on Congress to act in a Tuesday press release.

    “This scam provides fake voting information that threatens our election system and deserves action,” he said.

    Trump voice: ‘I’m joining the Democrats’

    Larson offered several examples of deepfakes to the committee members, including a fake image of Pope Francis in a high-fashion “puffer” jacket and images of civil unrest altered by Amnesty International to shield the identities of Colombian anti-government protesters.

    At one point, she played an audio clip she said she’d asked a Republican friend to make Tuesday night, in which a voice all but indistinguishable from that of former President Donald Trump said, “I’m here to give a big tremendous shout out to my good friend, Liz Larson.”

    “But hold on to your hats,” the recording continued. “Because I’m about to drop a bombshell that will have you saying ‘you’re kidding.’ Brace yourselves. I’m crossing the aisle. That’s right. I’m joining the Democrats.”

    The clip was meant to showcase the simplicity of misinformation, Larson said.

    “These are words that are typed into a computer model and spit out in Donald Trump’s voice,” Larson said. “It could be done for anyone in this room, because we all have enough audio of us in the public domain.”

    AI-generated audio played during the hearing

    She described the bill as a “South Dakota-sized” approach to regulation. Unlike some other proposals at the state level, the bill wouldn’t ban deepfakes. Instead, it would require the deepfake’s producer or presenter to disclose the use of AI, directly on or in the deepfake itself, at any time within 90 days before an election. The requirements would apply to material that “a reasonable person would believe … depicts the speech or conduct of an actual individual who did not in fact engage in the speech or conduct.”

    Those who fail to disclose would be guilty of a Class 1 misdemeanor and open to civil damages.

    Satirical deepfakes would be exempt from penalties. There are also exceptions for news stories on deepfakes, and for broadcasters that make “a good faith effort” to verify that a paid ad does not contain deepfakes.

    ‘Arbiters of reasonableness’

    The wording of those exceptions drew the ire of two lobbyists for media organizations.

    Steve Willard of the South Dakota Broadcasters Association applauded Larson for her willingness to raise the issue and for her collaborative approach to writing the bill.

    Even so, Willard said he was concerned that broadcasters would be placed in the uncomfortable position of deciding whether paid political ads include phony images, video or audio in a world where deepfakes are evolving quickly enough that a “reasonable person” could easily be fooled.

    Broadcasters would need to become “arbiters of reasonableness,” Willard said.

    “First, we’d have to decide whether or not it’s a deepfake … Second, you have to determine whether that’s parody or satire,” he said.

    The bill also has an emergency clause, which would make it take effect immediately upon the governor’s signature. Larson’s goal is to have it in place before this year’s primaries, but Willard said that timeline would make it all but impossible for broadcasters to “staff up” and be ready to evaluate the veracity of ads.

    “We’re not sure we can accomplish what we’re setting out to accomplish,” Willard said.

    The bill as written would almost certainly result in lawsuits against newspapers, according to Justin Smith, a lobbyist for the South Dakota NewsMedia Association.

    Like Willard, Smith thanked Larson for bringing the bill. But he also joined Willard in his worries about publishers being asked to be more discerning about possible deepfakes than the general public in order to avoid liability.

    “How much manipulation crosses the line to become a deepfake?” Smith said.

    A former legislative candidate, Michael Boyle, agreed that he wouldn’t want a radio ad featuring his voice saying things he’d never said, but worried that the bill could sweep in speech protected by the First Amendment.

    “The definition of deepfake is way too broad,” Boyle said.

    Committee: Discussion necessary

    Larson countered that the bill aims to target the producers of deepfakes, and that the exemptions found within it are meant to shield well-meaning media operators.

    Newspapers shouldn’t be the arbiters of reasonableness, she said. If media organizations “want other language in there” to protect them from unjustified lawsuits, “I’d be happy to work with them.”

    Sen. Schoenbeck, in spite of his doubts about the legislation’s future in the 2024 session, argued that AI technology is advancing so rapidly that a discussion on its possible electoral impact ought to happen on the Senate floor.

    Schoenbeck talked about a recent conference at which presenters suggested that soon no one will trust the news because it could become impossible to tell human-generated truth from computer-aided fakery.

    “This is coming, big time,” Schoenbeck said.

    One senator, Winner Republican Erin Tobin, voted to advance the bill just after telling the committee that she had produced an argument for her position by asking the text generator ChatGPT to write it as she listened to testimony on SB 96.

    The discussion needs to start somewhere, she said, and the sooner the better.

    “We know that we will be behind the game, always,” Tobin said.
