The Dangerous Push of AI into Healthcare for Vulnerable Communities
Across southern California, where homelessness rates rank among the highest in the nation, a concerning healthcare experiment is unfolding. Private company Akido Labs operates clinics serving unhoused patients and those with low incomes, but with a significant caveat: medical assistants conduct visits while artificial intelligence listens to conversations, generating potential diagnoses and treatment plans for later doctor review. The company's chief technology officer openly stated the goal is to "pull the doctor out of the visit" – a development that experts warn could have dangerous consequences for already marginalised communities.
The Expanding Footprint of AI in Medical Settings
This specific case is part of a broader trend in which generative AI is being aggressively integrated into healthcare systems. According to a 2025 American Medical Association survey, approximately two-thirds of physicians now use AI to assist with daily work, including patient diagnosis. The momentum behind this shift is substantial, with one AI startup securing $200 million in funding to develop what's been dubbed a "ChatGPT for doctors" application. Meanwhile, US lawmakers are considering legislation that would formally recognise AI's capability to prescribe medication, potentially accelerating its adoption across medical practice.
While AI integration affects nearly all patients, its impact is particularly profound for those with low incomes who already confront substantial barriers to quality care and experience higher rates of mistreatment within healthcare settings. The fundamental question arises: should unhoused and low-income individuals serve as testing grounds for emerging AI healthcare technologies? Many advocates argue emphatically against this approach, insisting that patient voices and priorities should determine if, how, and when AI is implemented in their care.
Systemic Pressures and Algorithmic Biases
The rise of AI in healthcare didn't occur in isolation. Overcrowded hospitals, overworked clinicians, and relentless pressure for medical offices to operate efficiently within large for-profit healthcare systems created fertile ground for technological solutions. These pressures intensify in economically disadvantaged communities where healthcare settings are frequently under-resourced, patients often lack insurance, and chronic health conditions proliferate due to systemic racism and poverty.
Some might ask whether AI-assisted care represents "something better than nothing" in resource-scarce environments. However, mounting evidence suggests otherwise. Multiple studies demonstrate that AI-enabled tools frequently generate inaccurate diagnoses with troubling patterns of bias. A 2021 study published in Nature Medicine examined AI algorithms trained on large chest X-ray datasets and found they systematically under-diagnosed Black and Latinx patients, patients recorded as female, and those with Medicaid insurance. This systematic bias threatens to deepen existing health inequities for populations already facing significant care barriers.
Further research published in 2024 found that an AI tool used to read breast cancer screenings produced higher odds of false positives for Black patients than for white patients. Because of such inherent algorithmic biases, numerous clinical AI tools have performed notably worse for Black patients and other people of colour. The fundamental issue lies in how these systems work: they don't independently "think" but instead predict from statistical patterns in historical data, so any bias baked into that data, including past under-diagnosis of marginalised patients, tends to be reproduced in their outputs.
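To make that mechanism concrete, the sketch below is a purely hypothetical illustration: it is not drawn from any of the studies cited here, and the data, group labels, and rates in it are invented. It trains an ordinary pattern-matching classifier on synthetic records in which one group's disease has historically been under-recorded, and the model then under-diagnoses that group even though the true prevalence is identical in both.

```python
# Hypothetical illustration only: synthetic data, invented group labels and rates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two patient groups with the SAME true disease prevalence (20%).
group = rng.integers(0, 2, n)                      # 0 = group A, 1 = group B
disease = rng.random(n) < 0.20                     # ground-truth disease status
symptom = disease * 2.0 + rng.normal(0.0, 1.0, n)  # one noisy "symptom" feature

# Historical labels: group B's disease was missed 40% of the time,
# mimicking past under-diagnosis of that group.
missed = (group == 1) & (rng.random(n) < 0.40)
label = disease & ~missed

# Train a plain pattern-matching classifier on the biased historical labels.
X = np.column_stack([symptom, group])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# Evaluate against the TRUE disease status: the model misses more genuinely
# sick patients in group B, because its training labels did.
for g, name in [(0, "group A"), (1, "group B")]:
    sick = (group == g) & disease
    false_negative_rate = 1.0 - pred[sick].mean()
    print(f"{name}: false-negative rate {false_negative_rate:.2f}")
```

The point of this toy example is narrow: a model that only learns patterns from historical labels inherits whatever gaps those labels contain, which is one way the disparities documented in the chest X-ray and breast-screening studies can arise in real clinical data.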
Informed Consent and Historical Parallels
Compounding these concerns is the frequent lack of transparency surrounding AI implementation. One medical assistant revealed to MIT Technology Review that while patients know an AI system is listening during consultations, they aren't informed that it generates diagnostic recommendations. This opacity evokes disturbing historical parallels with exploitative medical racism, where Black individuals were subjected to experimentation without informed consent and often against their will.
Proponents argue that AI could help health providers by quickly surfacing information that lets them move between patients more efficiently. Any such efficiency gain, however, carries significant risks: it may come at the expense of diagnostic accuracy and worsen existing health inequities. The trade-off between speed and precision becomes particularly problematic when applied to vulnerable populations with complex health needs.
Beyond Diagnosis: AI's Broader Impact on Access
The potential consequences extend far beyond diagnostic accuracy alone. Advocacy group TechTonic Justice published a groundbreaking report estimating that 92 million Americans with low incomes "have some basic aspect of their lives decided by AI." These algorithmic decisions range from Medicaid benefit calculations to eligibility determinations for Social Security Administration disability insurance – profoundly impacting life outcomes.
Real-world examples of these impacts are currently playing out in federal courts. In 2023, a group of Medicare Advantage customers sued UnitedHealthcare in Minnesota, alleging that coverage denials resulted from the company's AI system, nH Predict, mistakenly deeming them ineligible. Some plaintiffs represent estates of Medicare Advantage customers who allegedly died following denial of medically necessary care. Although UnitedHealth sought dismissal, a 2025 ruling allowed the plaintiffs to proceed with certain claims. A parallel case filed in Kentucky alleges that Humana's nH Predict system "spits out generic recommendations based on incomplete and inadequate medical records." That case also continues forward after surviving the insurer's dismissal motion.
While final decisions remain pending, these cases signal a growing trend of AI determining health coverage for low-income individuals, and they expose its significant pitfalls. The emerging reality creates a troubling dichotomy: those with financial resources can access quality healthcare, while unhoused or low-income individuals may find themselves barred from healthcare entirely by algorithmic decisions. This constitutes what experts describe as medical classism in digital form.
Prioritising Patient-Centred Care Over Technological Experimentation
Given the substantial barriers confronting unhoused and low-income individuals, experts emphasise the crucial importance of patient-centred care delivered by human healthcare providers who actively listen to patients' health-related needs and priorities. We cannot establish a standard of care in which practitioners take a backseat while AI, frequently operated by private companies, assumes the leading role in patient care.
An AI system that "listens in" without rigorous community evaluation fundamentally disempowers patients by removing their decision-making authority regarding which technologies, including AI, are implemented in their healthcare. The documented harms currently outweigh the potential, unproven benefits promised by startups and tech ventures. Rather than experimenting on society's most vulnerable patients during AI rollouts, the healthcare community must prioritise approaches that centre human connection, informed consent, and equitable access to quality care for all individuals, regardless of economic circumstance or housing status.