Every AI company is racing to reassure users about privacy while monetizing their data. Perplexity allegedly just said the quiet part out loud.

A new lawsuit claims that Perplexity AI's "incognito mode" was a sham. Despite promising privacy, the company allegedly shared millions of supposedly private chats with Google and Meta to increase advertising revenue.

If the lawsuit's claims hold up, it's proof that "private mode" in AI tools might mean absolutely nothing.

The allegations are specific. The lawsuit accuses Perplexity of collecting user conversations marked as private, bundling that data, and selling access to advertising networks. Not just anonymized usage statistics. Actual chat content.

Perplexity has built its brand on being the "answer engine" that respects privacy. Unlike ChatGPT, which trains on user conversations by default, Perplexity advertised its incognito mode as genuinely private. No data collection. No training. No sharing.

That was the pitch. The lawsuit alleges the reality was different.

This matters because AI companies face a fundamental tension. Training better models requires massive data. Making money requires either subscriptions or advertising. Privacy limits both revenue streams.

The technology is impressive. The question is whether companies can resist the temptation to monetize private data. Apparently, according to this lawsuit, not all of them can.

The alleged mechanism is clever in a troubling way. The lawsuit claims Perplexity didn't directly sell chat logs to advertisers. Instead, it allegedly used private conversations to build detailed user profiles, then shared those profiles with Google and Meta's ad networks to enable targeted advertising.

That's different from traditional data selling, but functionally similar. Your private conversations about health concerns or financial troubles get fed into an algorithm that decides which ads you see.
Perplexity allegedly made money from showing you those ads or from selling access to you as a marketing target.

If true, this violates basic expectations about what "incognito" means. Users reasonably assume that mode prevents data collection. That's what incognito modes do in web browsers.

But AI chat tools aren't regulated like browsers. There's no standard for what "private" or "incognito" must mean. Companies can define those terms however they want in their privacy policies, which nobody reads.

Perplexity hasn't issued a detailed response to the lawsuit yet. The company will likely argue that its privacy policy disclosed all data practices, that users consented by checking a box, and that the lawsuit mischaracterizes how its advertising integration works.

That defense might work legally. It won't work with users who thought they were protecting their privacy.

The broader problem is trust. AI tools know what users tell them: medical questions, relationship problems, work frustrations, financial stress. That's far more intimate than social media posts or search queries.

If AI companies can't be trusted to actually protect data marked as private, users will start lying to their AI assistants or stop using them entirely. That breaks the core value proposition.

This lawsuit might be the wake-up call the AI industry needs. Privacy isn't just a marketing feature. It's fundamental to whether people will actually use these tools for anything sensitive.

Every AI company should be reviewing its privacy practices right now. Not because regulation requires it, though that's coming eventually, but because losing user trust is worse than losing advertising revenue.

Perplexity will either prove the allegations wrong or pay the price in reputation and possibly money. Either way, this case is a warning to the rest of the industry: "incognito" needs to actually mean something, or AI companies should stop offering it.