‘Elon Musk is playing with fire:’ All the legal risks that apply to Grok’s deepfake disaster
As collective disgust has continued to build over the widespread generation and sharing of nonconsensual, sexualized deepfakes produced by X’s Grok AI tool, angry onlookers have expressed shock that the activity continues unabated and that company owner Elon Musk isn’t being compelled – by either U.S. regulators or law enforcement – to put a halt to the practice.
Legal experts say that at the federal level, several laws and regulations already on the books could expose Musk and X to significant fines, civil lawsuits and criminal prosecution.
Those tools include new laws like the Take It Down Act, legislation sponsored last year by Sens. Amy Klobuchar, D-Minn., and Ted Cruz, R-Texas, that criminalizes sharing sexualized AI-generated images and requires platforms to remove such images within 48 hours of being notified by a victim.
Klobuchar, posting on X, called the AI-generated material “outrageous” and said the law would be enforced.
“No one should find AI-created sexual images of themselves online—especially children,” wrote Klobuchar. “X must change this. If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.”
Because AI is still an emerging technology, it remains unclear how existing criminal statutes apply to it and how enforcement decisions will be made, leaving federal regulators, law enforcement and courts with limited guidance. It’s not immediately clear, for instance, how many of the images and victims could be subject to legal or regulatory action under the Take It Down Act.
“The definitions are not favorable to what we’re dealing with right now,” said Amy Mushahwar, a partner at national law firm Lowenstein Sandler who specializes in data privacy and security issues.
Take It Down… Later
The Take It Down Act can be enforced in two ways: through criminal prosecution of those who generate and share such images online and takedown notices submitted by victims to platforms, which must remove the image within two days. Neither is a perfect fit for what is happening on X.
The law’s takedown provision, which will be enforced through the Federal Trade Commission, does not take effect until May.
While the criminal penalties are currently active, they authorize the DOJ only to investigate and charge the individuals prompting Grok to generate the manipulated photos, not the company or Musk himself.
Further complicating matters, the law’s reliance on specific legal definitions can make it difficult to bring charges over some of the images generated by Grok. A victim’s age, or being depicted in even a small amount of clothing, can mean the difference between an image violating the law or not.
In conversations with CyberScoop, many lawyers and Hill staffers said the Take It Down Act would clearly cover the most egregious violations on Grok, like nudes and sexualized depictions of minors, but would be harder to apply in other instances. That’s because the Act criminalizes the sharing of deepfaked “intimate visual depictions,” a term U.S. law defines as an image showing an individual’s uncovered genitals, or showing them covered in bodily fluids.
“That has a specific meaning under the law so that a depiction of a nude person may be an intimate visual depiction, but someone in a bikini may not be,” said Samir Jain, vice president of policy at the Center for Democracy and Technology.
Victims whom Grok has “undressed” and placed in bikinis, lingerie or other suggestive clothing could, alternatively, seek legal relief under another section of the law that bans digital forgeries of both adults and minors.
The U.S. Sentencing Commission is currently grappling with how to set minimum and maximum fines and jail sentences under the law and determine how it would apply to different crimes and sections of U.S. criminal code.
Communications Indecency
Even with restrictive language and delayed enforcement timelines, Grok’s mass undressing of users likely runs afoul of other federal and state laws, legal experts tell CyberScoop.
Others questioned whether X’s conduct would truly be protected under Section 230 of the Communications Decency Act, which typically shields social media platforms from civil lawsuits over user content.
While Section 230 has traditionally been a legal bulwark for social media companies, X may bear direct culpability here because Grok is a company feature, not a third-party user.
Jain said that legal protections under Section 230 are predicated on the idea that platforms shouldn’t be held liable for content created by third parties and posted by users. But in this case, X’s own embedded AI tool is generating the images.
“There’s a good argument that [Grok] at least played a role in creating or developing the image, since Grok seems to have created it at the behest of the user, so it may not be user content insulated by section 230,” he said.
However, he also posited that Musk’s standing with the Republican Party and President Donald Trump could deter federal agencies from taking a hard line. At the FTC, for example, Trump has fired the two commissioners who were nominated by the Democratic Party, leaving the agency more partisan and White House-controlled than in previous administrations.
Laws “require enforcement by the federal government, the Justice Department in the case of criminal [law], but the FTC in the case of the takedown piece,” he said. “And so there might be questions also about the degree to which the administration would be committed to enforcing those laws against X and Musk.”
A lane for state AGs
As Riana Pfefferkorn, a non-resident fellow at Stanford University’s Center for Internet and Society, pointed out, Congress has signaled its broader stance on criminalizing AI-generated sexual deepfakes through legislation like the Take It Down Act. In addition, dozens of states have anti-CSAM laws on the books, including many that specifically target AI-generated child sexual abuse material.
Mushahwar agreed, predicting that even if Musk avoids federal scrutiny, state attorneys general will likely move aggressively to enforce existing CSAM and digital forgery laws. She said they will also look for places where “logical extensions” of those statutes can cover the AI images being generated and posted on X.
Given the widespread revulsion the scandal has provoked, many AGs will likely feel serious pressure from their constituents to use whatever legal tools are at hand to go after offenders.
“I do think Elon Musk is playing with fire, not just on a legal basis, but on a child safety basis,” Mushahwar said. “Like, if your platform is growing because you’re creating interest from pedophiles, that is creating a cesspool that might end up creating a trafficking haven.”