
    ChatGPT bans multiple accounts linked to Iranian operation creating false news reports

By Anthony Robledo, USA TODAY

    2 days ago
    A photo taken on November 23, 2023 shows the logo of the ChatGPT application developed by US artificial intelligence research organization OpenAI on a smartphone screen in Frankfurt am Main, western Germany. KIRILL KUDRYAVTSEV / AFP via Getty Images

OpenAI deactivated several ChatGPT accounts that used the artificial intelligence chatbot to spread disinformation as part of an Iranian influence operation, the company reported Friday.

The covert operation, called Storm-2035, generated content on a variety of topics, including the U.S. presidential election, the American AI company said. However, the accounts were banned before the content garnered a large audience.

    The operation also generated misleading content on "the conflict in Gaza, Israel’s presence at the Olympic Games" as well as "politics in Venezuela, the rights of Latinx communities in the U.S. (both in Spanish and English), and Scottish independence."

The scheme also included some fashion and beauty content, possibly in an attempt to appear authentic or build a following, OpenAI added.

    "We take seriously any efforts to use our services in foreign influence operations. Accordingly, as part of our work to support the wider community in disrupting this activity after removing the accounts from our services, we have shared threat intelligence with government, campaign, and industry stakeholders," the company said.

    No real people interacted with or widely shared disinformation

The company said it found no evidence that real people interacted with or widely shared the content generated by the operation.

Most of the identified social posts received few to no likes, shares or comments, the news release said. Company officials also found no evidence of the web articles being shared on social media. The disinformation campaign ranked on the low end of the Breakout Scale, which measures the impact of influence operations on a scale of 1 to 6. The Iranian operation scored a Category 2.

The company said it condemns attempts to "manipulate public opinion or influence political outcomes while hiding the true identity or intentions of the actors behind them." It added that it will use its AI technology to better detect and understand such abuse.

    "OpenAI remains dedicated to uncovering and mitigating this type of abuse at scale by partnering with industry, civil society, and government, and by harnessing the power of generative AI to be a force multiplier in our work. We will continue to publish findings like these to promote information-sharing and best practices," the company said.

Earlier this year, the company reported similar foreign influence efforts based in Russia, China, Iran and Israel that used its AI tools, but those attempts also failed to reach a significant audience.

    This article originally appeared on USA TODAY: ChatGPT bans multiple accounts linked to Iranian operation creating false news reports
