Australia's aged care funding system is being shaped by an algorithm that excludes human override without any clear legal basis, a royal commission has heard - raising concerns about automated decision-making affecting vulnerable elderly Australians' access to care.
The inquiry heard evidence of "letting the algorithm rip" on aged care funding decisions, according to The Guardian, with no clear legal authority for the absence of human oversight. It's algorithmic governance at its most troubling: automated systems making consequential decisions about elderly people's care on a questionable legal foundation.
The aged care funding assessment tool determines how much government support elderly Australians receive for care services. These aren't trivial decisions - they affect whether people can remain in their homes, what level of care they can access, and ultimately their quality of life in vulnerable years.
Testimony to the inquiry revealed that the algorithm operates with minimal human review. While automated decision-making tools are common across government, most have built-in mechanisms for human override, appeal processes, or regular audits. The aged care funding tool appears to lack these safeguards, or at least lacks clear legal authorization for their absence.
This is how government automates away accountability. When a human bureaucrat makes a funding decision, there's someone to appeal to, someone who can explain the reasoning, someone who takes responsibility. When an algorithm decides, the system becomes opaque. "Computer says no" isn't an adequate answer when elderly citizens' care is at stake.
Social media commenters expressed alarm at the implications. "My mum got her aged care funding cut and we couldn't get anyone to explain why," one user wrote. "Now we know - because an algorithm decided and nobody's checking it." Others noted the broader trend of algorithmic decision-making in government services, from welfare to healthcare.
The legal questions are significant. Can the government lawfully delegate funding decisions to an automated system? If so, what safeguards are required? Who's liable if the algorithm makes errors? How do citizens appeal algorithmic decisions? The inquiry's revelations suggest these questions weren't adequately addressed when the system was implemented.
Some defenders of automated systems argue they eliminate human bias and ensure consistency. But algorithms can encode bias in their design and training data, and consistency isn't valuable if the system is consistently wrong. The lack of human oversight means errors can systematically affect thousands of people before anyone notices.
The inquiry is examining whether the aged care funding tool should have greater transparency, regular independent audits, and clearer mechanisms for human review of algorithmic decisions. Some advocates are calling for a complete overhaul, arguing that decisions affecting vulnerable elderly Australians shouldn't be delegated to opaque automated systems.
Mate, when we're "letting the algorithm rip" on aged care funding without clear legal authority for that lack of oversight, we've crossed a line. These are vulnerable elderly Australians whose care is being decided by code, with nobody taking responsibility for checking whether the machine is getting it right. That's not efficient government - that's algorithmic abandonment of accountability.


