Artificial intelligence company OpenAI has announced that it disrupted covert influence campaigns originating from Russia, China, Israel and Iran.
The ChatGPT maker said on Thursday that it identified five campaigns involving “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them”.
The campaigns used OpenAI’s models to generate text and images that were posted across social media platforms such as Telegram, X, and Instagram, in some cases exploiting the tools to produce content with “fewer language errors than would have been possible for human operators,” OpenAI said.
OpenAI said it terminated accounts associated with two Russian operations, dubbed Bad Grammar and Doppelganger; a Chinese campaign known as Spamouflage; an Iranian network called International Union of Virtual Media; and an Israeli operation dubbed Zero Zeno.
“We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use,” the California-based start-up said in a statement posted on its website.
“Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed. But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.”
Bad Grammar and Doppelganger largely generated content about the war in Ukraine, including narratives portraying Ukraine, the United States, NATO and the European Union in a negative light, according to OpenAI.
Spamouflage generated text in Chinese, English, Japanese and Korean that was critical of prominent critics of Beijing, including actor and Tibet activist Richard Gere and dissident Cai Xia, and highlighted abuses against Native Americans, according to the startup.
International Union of Virtual Media generated and translated articles that criticised the US and Israel, while Zero Zeno took aim at the United Nations agency for Palestinian refugees and “radical Islamists” in Canada, OpenAI said.
Despite the efforts to influence the public discourse, the operations “do not appear to have benefited from meaningfully increased audience engagement or reach as a result of our services,” the firm said.
The potential for AI to be used to spread disinformation has emerged as a major talking point as voters in more than 50 countries cast their ballots in what has been dubbed the biggest election year in history.
Last week, authorities in the US state of New Hampshire announced they had indicted a Democratic Party political consultant on more than two dozen charges for allegedly orchestrating robocalls that used an AI-created impersonation of US President Joe Biden to urge voters not to vote in the state’s presidential primary.
During the run-up to Pakistan’s parliamentary elections in February, jailed former Prime Minister Imran Khan used AI-generated speeches to rally supporters amid a government ban on public rallies.