OpenAI, led by Sam Altman, recently announced that it has detected and dismantled five covert influence operations that used its AI models to support deceptive activity online related to current politics and conflicts. The operations were run by threat actors from Russia, Iran, and Israel, who sought to manipulate public opinion and influence political outcomes.

Over the past three months, OpenAI says it intercepted five such operations in which threat actors from these countries attempted to sway public opinion using its AI models. The operations put the models to a range of uses: generating short comments and longer articles in multiple languages, inventing names and biographies for fake social media accounts, debugging code, and translating and proofreading text.

One of the intercepted operations, named Bad Grammar, originated in Russia and targeted users in Ukraine, Moldova, the Baltic States, and the United States. A second Russian operation, Doppelganger, generated comments in multiple languages for publication on social media platforms. OpenAI also detected and disrupted additional operations originating in Iran and Israel.

The company noted that malicious actors use AI to produce content with fewer language errors than they could manage on their own, and that they often mix AI-generated material with handwritten text or memes. OpenAI says it defends against covert influence operations by having its models refuse requests from malicious actors and by sharing intelligence with industry partners to improve detection and disruption. The company plans to continue working at scale to identify and mitigate abuse of generative AI technologies.

OpenAI’s commitment to detecting and disrupting covert influence operations underscores the need for greater transparency around AI-generated content. As AI plays an increasingly important role in shaping public opinion and political outcomes, it is crucial that companies like OpenAI take proactive measures to prevent abuse of the technology.
