    How Taylor Swift’s AI callout could bring attention to misinformation

By Miranda Nazzaro

    19 hours ago


    Megastar Taylor Swift’s endorsement of Vice President Harris shines a bright spotlight on artificial intelligence (AI) deepfakes, fears of which she said prompted her to take a public stance in the presidential race.

    Swift, 34, formally backed Harris moments after Tuesday night’s debate, citing concerns around the rapidly developing AI technology and its power to deceive. She specifically noted how former President Trump shared several fake images of her and her fans last month, claiming he had her support.

“It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth,” she wrote in the endorsement.

Experts say the admission by one of the globe’s best-known superstars underscores a wider concern among voters and public figures about AI and its impact on the 2024 election.

Trump’s sharing of the fake images wasn’t Swift’s first run-in with AI-generated content.

Earlier this year, fake sexually explicit images of Swift circulated across the internet, renewing calls from federal lawmakers for social media companies to enforce their rules against AI-generated materials. Social platform X temporarily blocked searches of the singer at the time to combat the fake images’ spread.

“Her having experienced two of the most pervasive permutations of that [AI], you know, intimate image deepfakes and now election-related deepfakes just helps make the point that no one is immune,” said Lisa Gilbert, co-president of Public Citizen, a progressive consumer rights watchdog nonprofit.

    Gilbert told The Hill the pop star is “correct in identifying the immensely damaging harms” that could come from the spread of AI misinformation, including in elections, and called for federal regulations to address the matter.

The singer’s massive following could present a unique opportunity to boost awareness about the spread of misinformation ahead of the November election, experts suggested.

Swift’s endorsement of Harris, and in turn her stance on AI, was broadcast to her nearly 284 million followers on Instagram, placing her among the 15 most-followed figures on the platform.

    “Taylor Swift does have a very large platform, and I do think that awareness can be powerful here, at least as a starting point,” Virginia Tech digital literacy expert Julia Feerrar told The Hill.

    “If it’s not really on your radar that generated content might be something that you’re seeing, then just knowing that that’s a possibility is huge and gives you a different reaction when you see something that’s like, ‘Hm, I don’t know about that.’ I think I would imagine that for a lot of people who saw her statement, they might think back to that the next time they see something.”

Feerrar noted the technology is still new to many users, and Swift’s endorsement serves as a prominent reminder to take precautions when encountering content that may already align with one’s own biases.

The prominent placement of AI in Swift’s endorsement may also draw more attention to the issue. Its mention in the second of five paragraphs “lends weight and value to the ethics of political advertising online” and allows cyber misconduct to get “much-deserved attention,” Laurel Cook, an associate professor of marketing and founder of the Social Technology and Research Lab at West Virginia University, told The Hill.

    “As the discussion grows, the potential benefit of incorrectly using digitally altered content wanes. In other words, it will soon no longer pay off to steal another person’s image or likeness,” Cook said.

    Civic Responsibility Project founder Ashley Spillane agreed there was purpose behind Swift’s messaging.

    “I do think that … she’s very thoughtful, and she is the expert on how to communicate with her community. And so I think she really did put a great amount of care into that communication so that it would resonate with her community,” said Spillane, who authored a recent Harvard study on the impact of celebrity endorsements.

Swift’s concerns join those of other Hollywood stars speaking out about what they see as a lack of safeguards around the rapidly developing technology. In June, actor Scarlett Johansson said she was “shocked” to learn OpenAI had rolled out a ChatGPT voice assistant that she claimed sounded “eerily similar” to her own voice.

Unlike most of the public, celebrities have the advantage of a platform they can mobilize to quickly debunk deepfake images or other AI-generated content, some experts said.

    “Taylor Swift has a very large megaphone and so she is able to protect herself, in part, by communicating directly, as she did with her followers and others, publicly in a way that not everyone would be able to do,” University of Pennsylvania law professor Jennifer Rothman said, pointing to a New Jersey teen victim of sexually explicit deepfakes who made headlines for her testimony before Congress.

    Various laws already exist on the unauthorized use of people’s names, voices, and likenesses, along with laws aimed directly at fake intimate images, according to Rothman, who specializes in intellectual property law. The laws can be “very difficult” to navigate, however, for those who do not have access to the right resources, she explained.

    “There are federal laws that are being considered, that have been floated that will specifically target these sorts of digital replicas, and there are different approaches being considered,” Rothman said. “I do think these uses are covered under current law, but it can be useful to have a federal law that addresses something for a variety of sort of legal reasons.”

    She said it is still “early days” in terms of congressional legislation regarding AI-generated content and emphasized lawmakers need to ensure they do not enact measures that “make things worse.”

    “Federal laws that make it more complicated or that suggest that under federal law, someone other than the person whose voice or likeness it is can own their voice or likeness is deeply troubling. And those are some of the ideas that have been floated. So, if those are the laws, we don’t want them, but more targeted laws could be useful to make it easier and streamlined.”

While fears appear to be growing over AI’s impact on elections, some experts do not think the technology is advanced enough yet to sway many voters ahead of November.

“The quality of some of this AI-generated content isn’t as convincing as it could be, at least for the next couple of months,” said Clara Langevin, an AI policy specialist with the Federation of American Scientists. “But we could get to the point, like in the next election cycle, where those things are indistinguishable.”

    Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

    For the latest news, weather, sports, and streaming video, head to The Hill.
