In what might be the most expensive ChatGPT consultation in history, the CEO of Krafton decided to bypass his legal team and ask an AI chatbot how to void a $250 million contract. Spoiler alert: it didn't go well.
The case involved a complex publishing agreement, and when Krafton's CEO wanted out, his lawyers apparently gave him advice he didn't want to hear. So he turned to ChatGPT instead, which presumably told him what he wanted to hear. The court documents reveal he explicitly chose the AI's guidance over his actual attorneys.
This is the perfect case study in what happens when you confuse a language model with a lawyer. ChatGPT is remarkably good at sounding authoritative and coherent. It can cite legal principles, structure arguments, even draft contracts. The problem is that none of that makes its output actual legal advice.
The technology works by predicting what text should come next based on patterns in its training data. It doesn't understand contract law, jurisdiction-specific precedents, or the difference between general principles and your specific situation. It's essentially very sophisticated autocomplete that happens, among many other things, to have been trained on legal documents.
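To make the "sophisticated autocomplete" point concrete, here's a deliberately tiny sketch of the same underlying idea: predict the next word purely from how often words follow each other in a sample text. This is a toy bigram model, orders of magnitude simpler than what ChatGPT actually runs, and the sample corpus is invented for illustration, but the core move is identical: no understanding, just pattern frequency.

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """Count how often each word follows each other word."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word: str):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Hypothetical contract-flavored corpus, made up for this example.
corpus = ("the party may terminate the agreement "
          "the party shall notify the other party")
model = train_bigram(corpus)

print(predict_next(model, "the"))    # "party" follows "the" most often here
print(predict_next(model, "lawyer")) # None: the model has never seen it
```

Notice that the model will confidently continue any prompt it has statistics for, whether or not the continuation is true, applicable, or legally sound. Scale that up by billions of parameters and you get fluent, authoritative-sounding text with the same fundamental limitation.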
What makes this particularly wild is that the CEO had actual lawyers on retainer. This wasn't someone who couldn't afford legal representation trying to navigate the system alone. This was someone actively choosing to override professional advice with AI output.
The court was not impressed. In its ruling, it basically said that asking ChatGPT for legal strategy instead of your lawyers is exactly as dumb as it sounds. The CEO lost the case, and Krafton is now on the hook for damages that could approach $250 million.
Large language models are powerful tools. They can help draft documents, suggest approaches, even spot issues human reviewers might miss. But they're tools for lawyers to use, not replacements for lawyers. This case is going to be cited in law schools for years as exhibit A in "Don't Let AI Make Your Legal Decisions."
