
OpenAI Shuts Down Influence Networks Using Its Tools in Russia, China

In Technology
May 30, 2024

(Bloomberg) — OpenAI said it has cut off five covert influence operations in the past three months, including networks in Russia, China, Iran and Israel that accessed the ChatGPT-maker’s artificial intelligence products to try to manipulate public opinion or shape political outcomes while obscuring their true identity.


The new report from the ChatGPT-maker comes at a time of widespread concern about the role of AI in global elections slated for this year. In its findings, OpenAI listed the ways in which influence networks have used its tools to more efficiently deceive people, including using AI to generate text and images in larger volume and with fewer language errors than human operators could have produced alone. But the company said that ultimately, in its assessment, these campaigns failed to significantly increase their reach as a result of using OpenAI’s services.

“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” said Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, in a press briefing Wednesday. “With this report, we really want to start filling in some of the blanks.”

The company said it defined its targets as covert “influence operations” that are “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them.” The groups are different from disinformation networks, Nimmo said, as they can often promote factually correct information, but in a deceptive manner.

While propaganda networks have long used social media platforms, their use of generative AI tools is relatively new. OpenAI said that in all of the operations it identified, AI-generated material was used alongside more traditional formats, such as manually written texts or memes on major social media sites. In addition to using AI for generating images, text and social media bios, some influence networks also used OpenAI’s products to increase their productivity by summarizing articles or debugging code for bots.

The five networks identified by OpenAI included groups such as the pro-Russian “Doppelganger,” the pro-Chinese network “Spamouflage” and an Iranian operation known as the International Union of Virtual Media, or IUVM. OpenAI also flagged previously unknown networks from Russia and Israel that the startup says it identified for the first time.

The new Russian group, which OpenAI dubbed “Bad Grammar,” used the startup’s AI models as well as the messaging app Telegram to set up a content-spamming pipeline, the company said. First, the covert group used OpenAI’s models to debug code that can automate posting on Telegram, then generated comments in Russian and English to reply to those Telegram posts using dozens of accounts. An account cited by OpenAI posted comments arguing that the United States should not support Ukraine. “I’m sick of and tired of these brain damaged fools playing games while Americans suffer,” it read. “Washington needs to get its priorities straight or they’ll feel the full force of Texas!”

OpenAI identified some of the AI-generated content by noting that the comments included common AI error messages like, “As an AI language model, I am here to assist.” The company also said it’s using its own AI tools to identify and defend against such influence operations.

In most cases, the networks’ messaging didn’t appear to get wide traction, or human users identified the posted content as generated by AI. Despite its limited reach, “this is not the time for complacency,” Nimmo said. “History shows that influence operations which spent years failing to get anywhere can suddenly break out if nobody’s looking for them.”

Nimmo also acknowledged that there were likely groups using AI tools that the company isn’t aware of. “I don’t know how many operations there are still out there,” Nimmo said. “But I know that there are a lot of people looking for them, including our team.”

Other companies such as Meta Platforms Inc. have regularly made similar disclosures about influence operations in the past. OpenAI said it’s sharing threat indicators with industry peers, and part of the purpose of its report is to help others do this kind of detection work. The company said it plans to share more reports in the future.

–With assistance from Jeff Stone.


©2024 Bloomberg L.P.
