A Wellington teenager applying for a supermarket job says artificial intelligence screening software made inaccurate and "pretty stupid" assessments about his personality, raising serious questions about algorithmic hiring practices spreading across New Zealand's retail sector.
The applicant, who sought a position at Woolworths, completed an AI-powered personality assessment as part of the application process. According to the New Zealand Herald, the system generated personality feedback that the teenager says bears no resemblance to who he actually is.
AI hiring tools are everywhere - but what happens when they get it wrong?
Mate, this is the future of entry-level employment - and it's already broken.
AI hiring tools have proliferated across New Zealand retail, hospitality, and service sectors over the past two years. Companies love them because they're cheap, fast, and promise to reduce hiring bias. You don't need an HR manager spending hours reviewing applications when an algorithm can sort candidates in seconds.
But here's the problem: these systems are trained on data that may or may not reflect what actually makes someone good at the job. They look for patterns, correlations, statistical associations. They don't understand context. They can't recognise when someone's nervous because it's their first job interview, or when cultural background shapes how someone answers personality questions.
And when they get it wrong - as this teenager says happened to him - there's often no recourse. You can't argue with an algorithm. You can't explain that the assessment doesn't capture who you are. You just get rejected, without ever having spoken to an actual human being.
The black box problem
The teenager in this case was bewildered by the personality claims the AI made. But here's the thing: even Woolworths probably can't explain exactly why the algorithm said what it said.
