A quiet but profound transformation is sweeping across Australia's care systems, where decisions once made by health professionals are increasingly delegated to computers. Every day, hundreds of elderly individuals and people living with disabilities undergo assessments for essential supports like home care, mobility aids, and therapies. These supports are crucial for enabling them to live safely and with dignity in their own homes and communities.
The Shift to Algorithmic Decision-Making
Traditionally, care decisions relied on a blend of clinical expertise and the innate human ability to recognise and respond to others' needs. However, in an era dominated by AI hype and automation fetishisation, we are turning to computers for guidance on deeply human questions of care, vulnerability, and need. Computers lack both clinical insight and empathetic understanding, yet they are now central to determining care outcomes.
This shift is exemplified by the new Integrated Assessment Tool (IAT), introduced on 1 November 2025 under the Albanese government's Aged Care Act. As revealed by Guardian Australia in February, the IAT is a rules-based algorithm that categorises aged care applicants into one of eight funding levels. It dictates both how much home care applicants receive and where they sit in service queues.
How the IAT Operates
The tool was designed to facilitate a faster, fairer, and more consistent process for determining eligibility for subsidised aged care. It functions as a computerised questionnaire, using scored questions and predefined rules to place applicants into need-based categories. Assessments are conducted face-to-face, but the assessor's role is largely reduced to inputting data into the algorithm, minimising human discretion.
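To make the mechanics concrete, a rules-based scoring questionnaire of this kind can be sketched in a few lines. This is a toy illustration only: the IAT's actual questions, weights and thresholds are not public, so every name and number below is invented to show the general shape of such a tool, not its real contents.

```python
# Illustrative sketch of a rules-based scoring tool. All question
# names, weights and cutoffs are invented; the real IAT's internals
# are not public.

def classify_applicant(answers, weights, thresholds):
    """Score weighted questionnaire answers and map the total
    to a funding category (1 = lowest need ... 8 = highest)."""
    total = sum(weights[q] * answers[q] for q in weights)
    for category, cutoff in enumerate(thresholds, start=1):
        if total <= cutoff:
            return category
    return len(thresholds) + 1  # above every cutoff: top category

# Hypothetical applicant: each answer is a 0-4 severity rating.
answers = {"mobility": 3, "self_care": 2, "cognition": 1}
weights = {"mobility": 2.0, "self_care": 1.5, "cognition": 1.0}
thresholds = [2, 4, 6, 8, 10, 12, 14]  # seven cutoffs -> eight bands

print(classify_applicant(answers, weights, thresholds))
```

Note what is absent from the sketch: there is no step at which an assessor's judgment can adjust the result. Whatever the weighted total says, the category follows.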
This approach mirrors tools used in the National Disability Insurance Scheme (NDIS), where standardised, algorithmic assessments are replacing human judgment. Across both systems, a perverse role reversal occurs: machines handle the profound human questions of ageing and disability, while professional assessors are robotised, serving merely as ancillaries to the algorithm.
Consequences of Automation in Care
The promise of algorithmic efficiency has been overshadowed by stories of delay, frustration, and systemic neglect. Aged care clinicians and carers have described these tools as "cruel" and "inhumane," arguing they strip away clinical expertise and leave elderly individuals with inadequate support. For instance, a South Australian woman feared losing her independence after a government assessment slashed her funding.
From mid-year, NDIS changes will allow support needs to be algorithmically reclassified and cut, with no right of appeal. Aged care has gone further, eliminating any mechanism for human override entirely. When human judgment is removed, outcomes hinge solely on rules and scores, risking misreads if data is incomplete, if variables fail to capture key factors, or if weighting is incorrect.
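The last of those failure modes, incorrect weighting, is easy to demonstrate in miniature. The toy example below (all figures invented) shows how understating the weight on a single question can quietly move the same applicant into a lower-need band, and with no human override, nothing in the process would catch it.

```python
# Toy demonstration, with invented numbers throughout, of how a
# weighting error in a rules-based tool shifts an applicant into a
# lower-need band with no human check to catch it.

def score(answers, weights):
    """Weighted total of questionnaire answers."""
    return sum(weights[q] * answers[q] for q in weights)

def band(total, cutoffs):
    """Map a total score to a need band (1 = lowest need)."""
    for b, cutoff in enumerate(cutoffs, start=1):
        if total <= cutoff:
            return b
    return len(cutoffs) + 1

answers = {"mobility": 3, "self_care": 2}
cutoffs = [4, 8, 12]  # four hypothetical bands

intended = {"mobility": 2.0, "self_care": 2.0}  # total 10 -> band 3
mistaken = {"mobility": 1.0, "self_care": 2.0}  # total 7  -> band 2

print(band(score(answers, intended), cutoffs),
      band(score(answers, mistaken), cutoffs))
```

A one-point error in one weight drops the applicant a full band, and with it, the level of funded support.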
Systemic Disadvantages and Risks
Algorithmically generated decisions may appear fair but systematically disadvantage those with complex, fluctuating, or atypical support needs. Cultural or language barriers, limited capacity, resource shortages, or poor assessment practices can exacerbate these issues, creating a distorted view of individual circumstances. The system's captured data is often taken as truth, even when it fails to reflect lived reality.
International examples, such as an algorithm in Arkansas, USA, used to ration care for people with severe impairments, highlight the dangers. Prioritising savings over care led to drastic cuts in support hours, resulting in recipients lying in their own waste, going without food, and lacking community contact, as evidenced in Senate committee hearings.
The Need for Human-Centred Systems
All public resource systems grapple with balancing consistent process and fair outcomes. By nearly eliminating human discretion, we risk sacrificing nuance for consistency, and consistency does not guarantee fairness. Excessive standardisation fosters impersonal processes that overlook individual needs and complexities, a phenomenon termed algorithmic mis-recognition: a moral injury in which a person's lived experience is ignored by the very systems meant to support them.
Social services demand a different approach: systems attentive to lived experience and outcome fairness, providing tailored support for unique life circumstances. Well-governed systems should support human judgment and accountability, using technology in limited, safe ways to inform rather than replace decision-making.
A Warning from the IAT
The IAT serves as a stark warning: care cannot be reduced to rules and scores alone. Ageing and disability are inherently human experiences, and care decisions carry profound, life-altering consequences. When we relinquish human judgment and the capacity to override automated decisions, we place lives at the mercy of a flawed system.
Georgia van Toorn, a senior lecturer in the School of Social Sciences at the University of New South Wales and an associate investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, emphasises the urgency of reevaluating this trend to protect vulnerable Australians.