Recently, Taylor Swift has been in the spotlight for both good and bad reasons. On the positive side, her boyfriend Travis Kelce was part of the winning team at the Super Bowl, and her reactions during the game received a lot of attention. However, on the negative side, fake nude images of her created by generative AI have been circulating on the internet.
The images drew widespread condemnation, including from generative AI (genAI) companies and from Microsoft CEO Satya Nadella, who emphasized the need for stronger safeguards to prevent the creation and distribution of such content.
Microsoft has also addressed the issue of deepfakes in a recent blog post, expressing concern about the misuse of AI tools and the potential threats they pose. The company has pledged to take steps to limit the spread of deepfakes and develop innovative solutions to identify AI-generated or manipulated content.
However, reporting suggests that Microsoft's own AI tools may have been used to create the fake images of Taylor Swift. A 404 Media article claims that Microsoft Designer was recommended in online communities for generating the pornographic images, and that the safety guardrails of DALL-E, the OpenAI image model that powers Microsoft's image generation tools, could be bypassed to produce explicit and violent content.
These allegations are reinforced by a Microsoft AI engineer who says he warned the company about the potential misuse of its AI tools but was ignored.
Original from www.computerworld.com, 2024-02-15