French prosecutors have opened an investigation into whether Elon Musk encouraged the creation and spread of deepfakes on X (formerly Twitter) to artificially inflate the platform's value, according to Euractiv reporting published Friday.
The probe examines whether fake engagement metrics—generated through AI-created accounts and synthetic content—misled investors and regulators about X's actual user base and influence. If proven, the case could establish precedent for how regulators worldwide address the use of synthetic media to manipulate financial markets.
The investigation was initiated by the Paris prosecutor's financial crimes unit following complaints from investor advocacy groups and former X shareholders. According to documents reviewed by Euractiv, prosecutors are specifically examining whether Musk or senior X executives knowingly allowed or encouraged the proliferation of AI-generated accounts to inflate metrics like daily active users, engagement rates, and advertising reach.
Those metrics directly affect X's valuation. When Musk acquired Twitter in 2022 for $44 billion, he took the company private, making detailed financial disclosures less frequent. But X has since sought additional financing and partnerships based on representations about its user base and advertising potential. If those representations were based on artificially inflated numbers, investors may have grounds for fraud claims.
The deepfake element adds a novel legal dimension. Traditional forms of securities fraud involve false statements about financial performance or business prospects. Using AI-generated content to create the appearance of genuine user engagement represents a new form of deception that existing fraud statutes may not adequately address.
"This is the first major investigation into whether deepfakes can constitute securities fraud," said Marie Dubois, a financial crimes attorney in Paris who is not involved in the case. "If prosecutors can prove that synthetic content was used to deceive investors about a company's value, it opens up entirely new categories of liability."
X has not responded to requests for comment. Musk addressed the investigation obliquely on the platform Friday, posting: "France inventing crimes that don't exist so they can attack free speech." He did not specifically deny the allegations.
The investigation comes as regulators worldwide grapple with the explosion of AI-generated content. The European Union's recently implemented AI Act includes provisions requiring disclosure when content is synthetically generated, but enforcement mechanisms remain unclear. France, as one of the EU's largest economies, has been at the forefront of efforts to regulate both AI and social media platforms.
The investigation's backdrop is Musk's acquisition of Twitter, which was controversial from the start: critics argued he overpaid for a platform whose user growth had stagnated. Subsequent changes to the platform, including mass layoffs of content moderation staff, a paid verification scheme that allowed impersonation to flourish, and inconsistent enforcement of platform rules, led many major advertisers to pull their spending.
Those departures created pressure to demonstrate that X remained a valuable advertising platform and influential communications channel. Whether that pressure led to tolerance or encouragement of artificially inflated metrics is the question at the heart of the French investigation.
Investigators are reportedly examining internal communications at X to determine what executives knew about bot activity and deepfake accounts. They are also analyzing platform data to identify patterns that might indicate coordinated use of synthetic media to inflate engagement numbers.
The legal and technical challenges are significant. Distinguishing between legitimate users, traditional bots, and AI-generated deepfake accounts requires sophisticated forensic analysis. Proving that executives knew about and encouraged such activity requires documentary evidence that companies are generally careful not to create.
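Purely as an illustration of what that kind of forensic analysis can involve (this is a hypothetical sketch, not the investigators' actual methodology or tooling), one basic signal analysts look for is coordinated posting schedules: scripted accounts tend to post at the same minutes of the hour, while human activity is irregular. A minimal version of that check might look like this:

```python
from collections import Counter
from itertools import combinations

def minute_histogram(timestamps):
    """Bucket posting times (in minutes) by minute-of-hour to expose scheduling."""
    return Counter(t % 60 for t in timestamps)

def overlap(h1, h2):
    """Shared mass between two normalized histograms (1.0 = identical pattern)."""
    total1, total2 = sum(h1.values()), sum(h2.values())
    return sum(min(h1[m] / total1, h2[m] / total2) for m in h1.keys() & h2.keys())

def flag_coordinated(accounts, threshold=0.9):
    """Return account pairs whose posting-minute patterns overlap suspiciously."""
    hists = {name: minute_histogram(ts) for name, ts in accounts.items()}
    return [(a, b) for a, b in combinations(hists, 2)
            if overlap(hists[a], hists[b]) >= threshold]

# Hypothetical data: two scripted accounts post at the same minute every hour.
accounts = {
    "bot_a": [5, 65, 125, 185],   # always minute 5 of the hour
    "bot_b": [5, 65, 125],        # same schedule
    "human": [3, 47, 118, 190],   # irregular minutes
}
print(flag_coordinated(accounts))  # → [('bot_a', 'bot_b')]
```

Real platform forensics combines many such signals (content similarity, account creation patterns, network structure) and, for deepfake accounts specifically, media provenance analysis; the timing heuristic above is only the simplest example of the genre.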
But if French prosecutors can build a case, the implications extend far beyond Musk and X. Every social media platform, advertising network, and digital business that reports user metrics could face scrutiny about whether those numbers accurately represent human engagement or include artificial inflation through synthetic content.
The investigation is in its early stages, and no charges have been filed. But it represents the opening salvo in what is likely to become a broader regulatory reckoning with the intersection of AI, social media, and financial markets.