Deloitte has been caught using AI-generated content in a report for the Canadian government, the second major incident in which the consulting giant has submitted AI-generated work without disclosure. The pattern raises serious questions about quality control at Big Four firms.
The latest incident involves a government contract for which Deloitte submitted analysis bearing telltale signs of AI generation: generic phrasing and the confident-but-wrong assertions that characterize large language model output. When confronted, the firm acknowledged the "mistake" and blamed insufficient review processes.
This isn't Deloitte's first rodeo. The firm faced similar scrutiny earlier this year for another AI-generated report submitted to a different client. What's concerning isn't that they're using AI; plenty of firms are experimenting with these tools. What's concerning is the lack of disclosure and the apparent failure of quality control.
When a firm bills enterprise rates, clients expect human expertise. They're paying for judgment, context, and accountability, not ChatGPT with a consulting fee attached. This isn't about AI being bad; it's about disclosure and accountability when billion-dollar consulting firms cut corners.
The consulting industry has always relied on junior analysts doing grunt work, supervised by experienced partners who add strategic insight. AI can potentially handle some of that grunt work. But if firms are going to use these tools, they need to be transparent about it and maintain rigorous quality standards.
Deloitte says it's implementing new protocols to prevent similar incidents. The real question is whether clients will accept AI-assisted consulting work or whether they'll demand the human expertise they thought they were paying for all along.

