New Zealand government ministers are using artificial intelligence to draft speeches, press releases, and policy documents, an investigation reveals, raising transparency questions about who's actually writing government communications.
The disclosure, reported by Newsroom, shows ministers asking AI tools to make them "sound smart and inspirational" - a window into how governments are quietly adopting AI without clear rules or disclosure requirements.
Mate, if ministers are outsourcing their thinking to AI, voters deserve to know. This is about democratic accountability. When you vote for someone, you're voting for their judgment and ideas, not an algorithm's.
The investigation found ministers using AI for various communications tasks. Some claimed they only used it for editing or idea generation, but the boundary between "assistance" and "authorship" is murky when AI writes the first draft.
The practice raises several concerns. First, transparency - if AI writes a ministerial statement, should that be disclosed? Second, accountability - if AI makes factual errors or produces misleading content, who's responsible? Third, authenticity - do voters have a right to hear ministers' actual views rather than AI-generated content?
One minister reportedly asked AI to help them "sound smart and inspirational." That instruction is telling: the minister judged their own communication insufficiently smart or inspirational and turned to AI to fix it. That's either honest self-awareness or alarming dependency.
The New Zealand government doesn't have clear policies on ministerial AI use. There are no disclosure requirements, no guidelines on appropriate use cases, and no transparency about which communications are AI-assisted.
That regulatory gap is common globally. Governments are adopting AI faster than they're creating rules for its use. The result is ad-hoc experimentation with significant democratic implications.