A New Zealand government social media post attacking opposition parties appears to have been created using artificial intelligence, sparking calls for transparency and an official information request to reveal what prompts were used.
The post, which began with the attention-grabbing phrase "LEAKED!" and went on to criticize the Labour Party, Greens, and Te Pāti Māori, has all the hallmarks of AI-generated content—formulaic structure, generic talking points, and an oddly consistent tone that doesn't match typical government communications.
The controversy centers on whether the government is using generative AI tools to create partisan political messaging, and if so, what instructions were fed to the AI to produce such content. An official information request has been filed seeking disclosure of the prompts used to create the post, setting up a test case for transparency in the AI era.
This is about political accountability when governments deploy AI for partisan purposes. The issue isn't whether AI can be used for government communications—it can and is, globally. The question is whether the public has a right to know when they're reading AI-generated political attacks, and what instructions shaped that content.
New Zealand's Official Information Act is one of the world's strongest transparency laws, giving citizens broad rights to access government documents and communications. But it was written decades before generative AI, and it's unclear whether AI prompts qualify as disclosable "official information."
The post itself is remarkably on-brand for AI-generated political content: it lists criticisms in bullet points, uses broad generalizations, and lacks the specific policy references that typically characterize human-written political attacks. The "LEAKED!" framing is particularly telling—a sensationalist hook designed to grab attention on social media platforms.
Opposition parties and digital rights advocates are demanding answers. If the government is using AI to generate political messaging, they argue, voters deserve to know what biases might be baked into the prompts, who wrote those prompts, and whether there's any oversight of AI-generated content before it's published.
The Official Information Act request will test whether New Zealand's transparency framework can adapt to AI-powered government. If the request succeeds, it could force disclosure of the exact prompts used, revealing whether government staffers instructed an AI to "write a critical post about opposition parties" or gave it something more targeted and potentially problematic.
This isn't just a New Zealand issue. Governments worldwide are experimenting with AI for communications, from drafting speeches to generating social media content. But few have established clear guidelines about disclosure, oversight, or the ethical use of AI in political messaging.
Australia has seen similar controversies, with political parties using AI-generated images and videos during elections without clear labeling. Singapore recently mandated disclosure when government communications use AI. New Zealand has no such requirement.
The political calculation is obvious: AI can generate content quickly, test multiple messages simultaneously, and optimize for engagement metrics. But when that content is political attack messaging funded by taxpayers, the lack of transparency becomes a democratic accountability issue.
Voters have a right to know when their government is using AI to take political potshots. This isn't about banning AI in government communications; it's about ensuring the same transparency standards apply whether a human or an algorithm wrote the attack.
The Official Information Act request will likely take weeks to resolve, and the government could refuse on various grounds, including that the information doesn't exist or doesn't qualify as official information. But the controversy has already achieved one thing: it has put AI-generated government content under scrutiny.
For New Zealand, which prides itself on transparent governance and pioneering digital policy, this is a moment to set standards. Will AI-generated political content be clearly labeled? Will prompts be subject to information requests? Will there be oversight before AI-written attacks are published?
The answers will shape how democracies navigate the intersection of artificial intelligence and political communication. And right now, New Zealand is writing the playbook—whether it wants to or not.

