A KPMG Australia partner has been fined after using artificial intelligence tools to pass a mandatory internal test on the use of artificial intelligence — a recursive act of rule-breaking that has embarrassed one of the country's most prominent professional services firms.
The incident, reported by The Guardian, involved a senior partner completing a compulsory AI governance and compliance assessment with help from the very tools whose responsible use the test was meant to assess. The partner received a financial penalty from the firm.
Take a moment to appreciate the geometry of this. KPMG — a firm that advises Australian government agencies, ASX-listed corporations, and major financial institutions on technology governance and AI risk management — required its partners to complete a test demonstrating they understood the responsible use of AI. A partner then used AI to get through the test. It is the professional services equivalent of a driving instructor getting someone else to sit their road rules quiz for them.
KPMG Australia employs approximately 10,000 people and is one of the four dominant firms shaping how large organisations in this country approach governance, compliance, and emerging technology. The firm has been actively marketing its AI advisory services, positioning itself as a guide for clients navigating the risks of automated systems.
The case raises questions that go well beyond one individual's poor judgment. If a senior professional at a Big Four firm treats AI ethics training as a compliance box to tick rather than a substantive learning exercise, it casts doubt on whether these internal governance frameworks are built for genuine capability-building or merely for liability management.
KPMG confirmed the incident and the penalty to Guardian Australia but declined to identify the partner involved or disclose the amount of the fine. The firm said it was "committed to the responsible use of AI" and that the individual had been dealt with through its internal disciplinary process.
In Australia, AI governance frameworks across both the private and public sectors remain at an early stage of development. The federal government published voluntary AI ethics principles in 2019 and has since moved toward a more structured approach, but binding regulation of AI use in professional services remains limited.
The irony deepens when you consider the wider context: Australia's professional services industry has been among the most vocal advocates for AI adoption across the economy, and among the most resistant to binding regulation of how those tools are used. Here, at one of the Big Four firms at the centre of that advocacy, a senior partner demonstrated precisely the casual disregard for AI governance that regulators and critics have warned about.
The partner's name has not been publicly released, the amount of the fine remains undisclosed, and KPMG's internal disciplinary processes are not subject to public accountability.
