xAI fails to block California’s AI Transparency Act, which requires disclosure of training data


Elon Musk’s artificial intelligence company, xAI, has failed in its attempt to avoid California’s AI transparency law. The company tried to block AB 2013, which requires AI developers to publicly disclose the data sets used to train their generative models, arguing that the requirement violated its trade secrets and First Amendment rights.

The defeat marks a significant legal setback for one of the most well-funded AI companies on the planet, and it sends a clear signal to the broader industry: California will not back down in its push to force transparency on a sector that has historically operated with minimal disclosure.

What the law requires and why xAI fought it

AB 2013, which took effect on January 1, 2026, requires AI companies operating in California to disclose the training data behind their generative models. This means that companies like xAI, OpenAI, Google, and Anthropic must provide meaningful transparency about the text, images, code, and other data they use to build their systems.

For xAI, the implications are potentially huge. The company behind the Grok AI assistant has built its models on massive data sets, and revealing what went into the training pipeline could expose both competitive intelligence and uncomfortable questions about its data-sourcing practices.

xAI filed suit in federal court to challenge the law on December 29, 2025, just days before it was set to take effect. The company’s legal argument rests on two pillars: that mandatory disclosure of training data constitutes compelled speech in violation of the First Amendment, and that the statute effectively forces companies to hand over trade secrets to both competitors and the public.

The company moved for a preliminary injunction to block the law while the case is pending. A hearing on the motion was held on February 26, 2026, during which the presiding judge pressed the California Attorney General’s office on its plans for implementing the new law.

In a twist that may have actually hurt xAI’s position, the state apparently declined to commit to an enforcement timeline at the hearing. While that might seem like a win for Musk’s company, legal observers noted that the absence of a clear timetable makes the case for emergency relief unusually weak: courts are generally reluctant to issue injunctions against threats that appear hypothetical or remote.

Bottom line: the law stands, and xAI’s efforts to stop it have failed.

Rough week in court for Musk’s AI dreams

The timing of this defeat is particularly notable because it came just one day after another courtroom loss for xAI. On February 25, 2026, a federal judge dismissed xAI’s separate lawsuit against OpenAI, which alleged that Musk’s main competitor had stolen its trade secrets.

The case, which had attracted considerable attention given the personal history between Musk and OpenAI CEO Sam Altman, was dismissed without gaining the traction xAI was likely expecting. Taken together, the back-to-back losses paint a picture of a company finding the legal system far less accommodating than it might have hoped.

The twin defeats also underscore a broader irony. xAI has argued that its own training data is a closely guarded trade secret that no government should be able to force it to divulge, while simultaneously claiming that a competitor stole its trade secrets. The courts, it appears, were persuaded by neither argument.

What this means for the AI industry and investors

California’s success in defending AB 2013 could have ripple effects that extend beyond the Golden State’s borders. As the home base for most major AI companies and the largest state economy in the country, at nearly $4 trillion in GDP, California tends to set de facto national standards with its regulatory choices. Automakers learned this lesson decades ago with emissions regulations, and AI companies may be learning it now with transparency mandates.

For investors in AI companies, the decision introduces a new variable into the valuation equation. Training data has long been considered one of an AI company’s most valuable and closely guarded assets. If companies are forced to disclose that information, it could level the playing field in a way that benefits smaller, more transparent players at the expense of large incumbents who have relied on opacity as a strategic advantage.

There is also the issue of legal liability. Once training data sets become public, it becomes much easier for copyright owners, artists, journalists, and other content creators to determine whether their work was used without permission. That opens the door to a wave of potential litigation that could dwarf the copyright claims already working their way through the courts against companies like OpenAI and Stability AI.

The risk profile for xAI is particularly noteworthy. The company raised $6 billion at a valuation of $50 billion by the end of 2024, making it one of the world’s most valuable private companies. A forced disclosure regime that reveals the contents of Grok’s training data could invite scrutiny from regulators, plaintiffs, and competitors, and none of that risk is priced into the current valuation.

It is also worth considering the question of enforcement, which remained somewhat unresolved at the February hearing. While the law is now in effect, the California Attorney General’s office has not publicly detailed how aggressively it plans to pursue non-compliant companies. A soft-touch approach would give the industry breathing room; an aggressive one could compel disclosure within months.

Other states are watching closely. New York, Illinois, and Colorado have all introduced proposals to govern AI in recent legislative sessions, and California’s ability to withstand a well-funded legal challenge from a company backed by the world’s richest man will likely bolster those efforts.

For the broader market, it’s another data point in the ongoing tension between rapid AI development and regulatory scrutiny. The AI sector has enjoyed an extremely permissive regulatory environment compared to other industries: financial services, pharmaceuticals, and telecommunications all face far heavier oversight. That era of light-touch treatment appears to be coming to an end, at least in California.

Bottom line

xAI’s failure to block AB 2013 is more than one company losing a legal battle. It’s a signal that courts are willing to uphold transparency requirements even when powerful AI players push back. For developers, the message is simple: build your models with the assumption that the world will eventually see what went into them. For investors, the calculus just got a little more complicated. The black-box era of AI training is coming to an end, and the companies best positioned for what comes next are those that already have nothing to hide.

Disclosure: This article was edited by Estefano Gómez. For more information on how to create and review content, see our Editorial Policy.
