
    A New Front in the Meme Wars

    By Joan Donovan


    When the Department of Justice indicted two employees of Russia’s state-backed media outlet RT last week, it didn’t just reveal a covert influence operation—it also offered a clear picture of how the tactics used to spread propaganda are changing.

    This particular operation allegedly exploited popular U.S. right-wing influencers, who amplified pro-Russian positions on Ukraine and other divisive issues in exchange for large payments. The scheme was purportedly funded with nearly $10 million of Russian money funneled through a company that was left unnamed in the indictment but is almost certainly Tenet Media, founded by two Canadians and incorporated in Tennessee. Reportedly, only Tenet Media’s founders knew that the funding came from Russian benefactors—some of the influencers involved have cast themselves as victims of the scheme—though it’s unclear whether even the founders knew about their benefactors’ ties to RT.

    This recent manipulation campaign highlights how digital disinformation is a growing shadow industry. It thrives because of weak enforcement of content-moderation policies, the increasing influence of social-media figures as political intermediaries, and a regulatory environment that fails to hold tech companies accountable. The consequence is an intensification of the ever-present, low-grade information war playing out across social-media platforms.

    And although dark money is nothing new, the way it’s used has changed dramatically. According to a 2022 report from the U.S. State Department, Russia spent at least $300 million to influence politics and elections in more than two dozen countries from 2014 to 2022. What is different today—and what the Tenet Media case perfectly illustrates—is that Russia need not rely on troll farms or Facebook ads to reach its targets. American influencers steeped in the extreme rhetoric of the far right turned out to be natural mouthpieces for the Kremlin’s messaging. The Tenet situation reflects what national-security analysts call fourth-generation warfare, in which it is difficult to tell the difference between citizens and combatants. At times, even the participants are unaware. Social-media influencers behave like mercenaries, ready to broadcast outrageous and false claims or produce customized propaganda for the right price.

    The cyberwarfare we’ve experienced for years has evolved into something different. Today, we are in the midst of net war, a slow conflict fought on the terrain of the web and social media, where participants can take any form.


    Few industries are darker than the disinformation economy, where political operatives, PR firms, and influencers collaborate to flood social media with divisive content, rile up political factions, and stoke networked incitement. Corporations and celebrities have long used deceptive tactics, such as fake accounts and engineered engagement, but politicians were slower to adapt to the digital turn. Yet over the past decade, demand for political dirty tricks has risen, driven by growing profits from manufacturing misinformation and the relative ease of distributing it through sponsored content and online ads. The low cost and high yield of online-influence operations are rocking the core foundations of elections as voters seeking information are blasted with hyperbolic conspiracy theories and messages of distrust.

    The recent DOJ indictment highlights how Russia’s disinformation strategies have evolved, but they also resemble tactics used by former Philippine President Rodrigo Duterte’s team during and after his 2016 campaign. After that election, the University of Massachusetts at Amherst professor Jonathan Corpus Ong and the Manila-based media outlet Rappler exposed the disinformation industry that helped Duterte rise to power. Ong’s research identified PR firms and political consultants as key players in the disinformation-as-a-service business. Rappler’s series “Propaganda War: Weaponizing the Internet” revealed how Duterte’s campaign, lacking funds for traditional media ads, relied on social media—especially Facebook—to amplify its messages through funded deals with local celebrities and influencers, false narratives about crime and drug abuse, and patriotic troll armies.

    Once in office, Duterte’s administration further exploited online platforms to attack the press, particularly harassing (and then arresting) Maria Ressa, the Rappler CEO and Atlantic contributing writer who received the Nobel Peace Prize in 2021 for her efforts to expose corruption in the Philippines. Duterte combined the power of the state with the megaphone of social media, which allowed him to circumvent the press and deliver messages directly to citizens or through his network of political intermediaries. In the first six months of his presidency, more than 7,000 people were killed by police or unnamed attackers in his administration’s all-out war on drugs; the true cost of disinformation can be measured in lives lost.

    Duterte’s use of sponsored content for political gain faced minimal legal or platform restrictions at the time, though some Facebook posts were flagged with third-party fact-checks. It took four years and many hours of reporting and research across news organizations, universities, and civil society to persuade Facebook to remove Duterte’s private online army under the tech giant’s policies against “foreign or government interference” and “coordinated inauthentic behavior.”

    More recently, Meta’s content-moderation strategy shifted again. Although there are industry standards and tools for monitoring illegal content such as child-sexual-abuse material, no such rules or tools are in place for other kinds of content that break terms of service. Meta has sought to keep its brand reputation intact by downgrading the visibility of political content across its product suite, including limiting recommendations for political posts on Threads, its new X clone.

    But content moderation is a risky and unpleasant realm for tech companies, which are frequently criticized for being too heavy-handed. Mark Zuckerberg wrote in a letter to Representative Jim Jordan, the Republican chair of the House Judiciary Committee, that White House officials “repeatedly pressured” Facebook to take down “certain COVID-19 content including humor and satire” and that he regrets not having been “more outspoken about it” at the time. The cycle of admonishment taught tech companies that political-content moderation is ultimately a losing battle, both financially and culturally. With arguably little incentive to address domestic and foreign influence operations, platforms have relaxed enforcement of safety rules, as shown by recent layoffs, and made it more difficult to objectively study their products’ harms by raising the price of data access and adding barriers to it, especially for journalists.


    Disinformation campaigns remain profitable and are made possible by technology companies that ignore the harms caused by their products. Of course, the use of influencers in campaigns is not happening only on the right. The Democratic National Convention’s christening of some 200 influencers with “press passes” codifies the emerging shadow economy for political sponcon. The Tenet Media scandal is hard evidence that disinformation operations continue to be an everyday aspect of life online. Regulators in the U.S. and Europe must plug the firehose of dark money at the center of this shadow industry. While they’re at it, they should treat social-media products as little more than broadcast advertising and swiftly apply existing regulations.

    If mainstream social-media companies took their role as stewards of news and information seriously, they would strictly enforce rules on sponsored content and clean house when influencers put community safety at risk. Hiring actual librarians to help curate content, rather than investing in reactive AI content moderation, would be a good first step toward ensuring that users have access to real TALK (timely, accurate, local knowledge). Continuing to ignore these problems, election after election, will only embolden would-be media manipulators and drive new advances in net war.

    As we learned from the atrocities in the Philippines, when social media is misused by the state, society loses. When disinformation takes hold, we lose trust in our media, government, schools, doctors, and more. Ultimately, disinformation destroys what unites nations—issue by issue, community by community. In the weeks ahead, we all should pay close attention to how influencers frame the issues in the upcoming election and be wary of any overblown, emotionally charged rhetoric claiming that this election spells the end of history. Histrionics like these can lead directly to violent escalations, and we do not need new reasons to say: “Remember, remember the fifth of November.”
