The U.S. Department of Homeland Security has identified Russia, Iran, and China as countries attempting to influence the upcoming presidential elections using AI tools for spreading fake or divisive information.
OpenAI recently disclosed that it has thwarted more than 20 cybercriminal operations this year alone. These operations used its AI models for malicious activities such as creating malware and generating fake content, including articles and social media comments, aimed at influencing global elections.
In a report released just before the U.S. presidential election, OpenAI revealed its success in neutralizing deceptive networks worldwide that were trying to exploit its models. Although these operations targeted elections in the U.S., Rwanda, India, and the European Union, none managed to gain significant traction or build sustained audiences through ChatGPT or other OpenAI tools.
Concerns have been raised that the surge in AI-generated content is fueling misinformation during elections. According to data from Clarity, deepfakes have risen 900% year over year, making them a major issue.
Since its launch in 2022, ChatGPT has gained popularity globally but has also sparked worries about its impact on electoral processes, given its ability to handle tasks ranging from simple requests for information to generating complex content and analyzing social media posts.
Various incidents involving AI-generated election content have been reported across different countries. Iran, for example, used AI tools to create election-related articles that attracted little audience engagement. OpenAI also took action against accounts posting election-related comments about Rwanda and uncovered covert operations targeting the European Parliament elections, among others.
While some real people engaged with the AI-generated content these operations produced, their overall impact remained limited, according to OpenAI’s findings.
2024-11-01 23:15:01
Article from www.ibtimes.com