
From fake nudes to fake quotes: AI deepfakes plagued Olympic athletes

Trolls on 4chan generated sexualized images of female athletes, and the White House shared an AI-manipulated video of a hockey player—welcome to the new normal.
In an aerial view, artist Gustavo Zermeño Jr. hangs from a ladder as he paints a mural of Olympic figure-skating gold medalist Alysa Liu on February 26, 2026 in Gardena, California. (Photo by Justin Sullivan/Getty Images)

While competing for medals and glory in Milan, Italy, U.S. Olympic athletes experienced something that is fast becoming a regular feature of modern public life: the widespread use of AI tools by politicians, trolls and sexual harassers to manipulate their images and voices.

Users on 4chan and other sites quickly generated and shared “nudified” or sexualized imagery of multiple female U.S. athletes, including figure skaters Alysa Liu, Amber Glenn and Isabeau Levito, as well as skiers Mikaela Shiffrin and Eileen Gu (who competed for China).

Multiple research firms, including Graphika and Open Measures, tracked the posts and images on 4chan, a platform that automatically deletes posts and topic-specific boards after a set period.

Cristina López G., a senior analyst at Graphika and author of a report released Monday, told CyberScoop that online communities dedicated to generating and sharing fake, nonconsensual nude images of celebrities, public figures and women they know existed before the generative AI era. But these groups have taken advantage of AI image models, particularly local, open-source versions that can be downloaded and fine-tuned, to improve image quality and make the technology accessible to less technical members.


“These communities have co-opted and adapted these technologies to optimize them for their end use case, which continues to be the production of [nonconsensual sexual imagery],” López G. said.

Users on these 4chan message boards follow a gamified pattern: one person posts a nonconsensual or sexualized image, then asks others to post their own in return. The availability of downloadable, open-source AI models, which lack safety guardrails and can be customized for “nudification,” has accelerated this activity.

These customized weights and settings, called Low-Rank Adaptations (LoRA), can be shared online and plugged into other users’ local models, similar to the way gamers create and share mods.

Deepfakes have been around – and steadily improving – for years, but generative AI technology has become drastically better at producing realistic photos and videos in the past 18 months.

Additionally, open-source models have spread throughout the internet, giving users the ability to customize, fine-tune and share ones that are optimized for nudification and nonconsensual image generation.


Even though 4chan’s posts auto-delete, they can still spread to the broader internet. In 2024, for example, deepfake nudes of Taylor Swift originated on the site but went viral on mainstream social media. López G. said platforms like Telegram and, increasingly, X become conduits for spreading the images further.

“The way in which this alters the game, I would say, is that you’re not only trading outputs anymore, you are trading the ability to generate infinite outputs,” she said. “So the harm compounds, because you are just enabling a lot of other people to be able to produce and uniquely and specifically target these women.”

AI, culture war politics and the public eye

The use of AI to mimic or harass U.S. Olympians during the games wasn’t limited to nonconsensual nudes on 4chan.

Brady Tkachuk of the U.S. men’s hockey team spoke out after the White House posted an AI-generated video that falsely depicted him mocking Canadians after Team USA’s gold medal win over Canada.


The video, shared through the White House’s TikTok account, depicted Tkachuk saying of Canada, “They booed our national anthem, so I had to come out and teach those maple-syrup-eating f—s a lesson.” Despite carrying a disclaimer noting it was AI-generated, the video has been viewed tens of millions of times.

Nevertheless, Tkachuk – an American citizen who plays professionally for the Ottawa Senators – took issue, telling the media “I don’t like that video” because “it’s not my voice, not my lips moving.”

It’s the latest example of the Trump White House using AI to alter or manipulate public imagery. The administration now regularly creates or shares AI-generated images as part of its political messaging, sometimes without disclosing it to the public. Earlier this year, the White House posted an AI-manipulated photo on X showing Minnesota protester Nekima Levy Armstrong crying as she was arrested and led away in handcuffs, an emotion not present in the original image. Other federal agencies’ social media accounts have also shared AI-manipulated images and videos.

White House officials have consistently defended their actions, describing them as little more than jokes. López G. said whether it’s nonconsensual nudes or political deepfakes, the problem “goes deeper than technological harm,” and reflects how pockets of online culture are essentially in denial about this content’s real-world impact.

“One thing that really jumps out is that many of the people producing [deepfakes] do not connect the harms that they are doing to the actual person,” she said. “In their minds it is ‘this is not real’ and so these people are not getting hurt. There is a disconnect there that has nothing to do with the technology, that has more to do with us as a culture.”

Written by Derek B. Johnson

Derek B. Johnson is a reporter at CyberScoop, where his beat includes cybersecurity, elections and the federal government. Prior to that, he has provided award-winning coverage of cybersecurity news across the public and private sectors for various publications since 2017. Derek has a bachelor’s degree in print journalism from Hofstra University in New York and a master’s degree in public policy from George Mason University in Virginia.
