An algorithm-based assessment tool is being used to determine home support funding packages for elderly Australians — and a 16-year veteran of the aged care sector says he disagrees with its assessments eight times out of ten.
Mark Aitken, who spent more than a decade and a half working in aged care, told the Guardian that the integrated assessment tool being used to allocate home support packages is "cruel and inhumane". Coming from someone with that much sector experience, that language should command serious policy attention.
The integrated assessment tool (IAT) is the mechanism by which older Australians seeking government-funded home support (services such as personal care, domestic assistance, and allied health) are assessed for the level of support they qualify for. The tool produces a score that determines the funding package. And if an experienced practitioner inside the system disagrees with its outputs four-fifths of the time, something is structurally wrong.
Australia's aged care system has been in crisis for years. The Royal Commission into Aged Care Quality and Safety, which reported in 2021, documented systemic failures across residential and home care — inadequate staffing, poor quality outcomes, a funding model that did not reflect the actual needs of the people receiving care. The Albanese government has invested billions in the post-Royal Commission reform agenda, including significant increases to home care funding.
But investment in a system cannot improve outcomes if the allocation mechanism is broken. That is what the workers are saying. The algorithm doesn't know the person. It takes a set of inputs (a questionnaire, a clinical assessment, proxy variables for need) and produces an output. The human expertise of an assessor who has worked in the sector for sixteen years is explicitly not part of that calculation.
The echoes of Robodebt are not rhetorical. The structural parallel is direct: a government using an automated system to make high-stakes decisions about vulnerable people's entitlements, at scale, without adequate human oversight, with limited recourse for individuals who receive wrong decisions. The Robodebt royal commission found that the automated debt system caused profound harm to hundreds of thousands of Australians who had no meaningful way to challenge the machine's conclusions.
The aged care population is, if anything, even less well placed than Robodebt's victims to mount an effective challenge. Many are in their 70s, 80s, or 90s. They may have cognitive impairment. They may be socially isolated. They may not have family members with the capacity or knowledge to advocate on their behalf. The power asymmetry between an elderly person with complex health needs and a government department defending an algorithmic decision is extreme.
The questions that need to be answered are straightforward: Who built the integrated assessment tool? What were the procurement and approval processes? Has there been independent evaluation of its accuracy? What is the appeals mechanism for decisions an elderly person believes are wrong? And, most importantly, has the Department of Health examined whether errors in the tool's assessments are leaving elderly Australians without the support they actually need?
The government has not publicly responded to the Guardian's reporting. The aged care minister's office has not commented on the specific claims made by sector workers.
There are about 4.2 million Australians aged 65 and over. Within a generation, that number will exceed six million. The question of how this country funds, allocates, and delivers care for its elderly citizens is one of the defining domestic policy challenges of the next thirty years. Starting that task with an algorithm that an experienced practitioner says is wrong most of the time is not a promising foundation.

