In a small clinic in Southern California, patients receive thorough 30-minute appointments where they can discuss their symptoms in detail and leave with a diagnosis and treatment plan. The catch? They may have spent little to no time actually speaking with a doctor. Instead, they interacted with a medical assistant while an AI system called ScopeAI transcribed their conversation and generated medical recommendations for later physician approval.
This scenario, brought to life by startup Akido Labs, represents a significant shift in how healthcare could be delivered. By using large language models to handle the cognitive work traditionally performed by physicians during medical visits, the company claims doctors can see four to five times as many patients. However, this approach raises critical questions about medical quality, healthcare equity, and the risks of automation bias in life-or-death decisions.
ScopeAI operates as a sophisticated assembly of large language models, each designed to perform specific medical tasks. Built primarily on fine-tuned versions of Meta’s Llama models with some integration of Anthropic’s Claude, the system can elicit patient medical history, generate appropriate follow-up questions, and propose diagnoses with treatment recommendations.
During appointments, medical assistants follow prompts generated by ScopeAI, which analyzes patient responses in real time to formulate new questions. The system produces comprehensive notes for physicians that include visit summaries, primary diagnoses, alternative possibilities, and recommended next steps, complete with medical justifications.
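Taken together, these descriptions amount to a pipeline: task-specific models drive the interview question by question, then compile a structured note for asynchronous physician review. The sketch below is a minimal illustration of that shape, with stubbed model calls and invented names (`PhysicianNote`, `query_model`, `run_visit`); Akido has not published its actual prompts, schemas, or APIs.

```python
from dataclasses import dataclass


@dataclass
class PhysicianNote:
    """The artifact handed to the physician for asynchronous review (illustrative fields)."""
    visit_summary: str
    primary_diagnosis: str
    alternative_diagnoses: list[str]
    next_steps: list[str]
    justification: str


def query_model(task: str, transcript: list[str]) -> str:
    """Placeholder for a call to a task-specific fine-tuned language model."""
    return f"<{task} drafted from {len(transcript)} conversation turns>"


def run_visit(patient_turns: list[str]) -> PhysicianNote:
    """Drive one appointment: suggest follow-up questions for the medical
    assistant, then assemble the structured note for the physician."""
    transcript: list[str] = []
    for reply in patient_turns:
        transcript.append(f"Patient: {reply}")
        # Analyze the conversation so far and propose the next question
        # for the medical assistant to ask aloud.
        transcript.append(f"Assistant: {query_model('follow_up_question', transcript)}")
    return PhysicianNote(
        visit_summary=query_model("visit_summary", transcript),
        primary_diagnosis=query_model("primary_diagnosis", transcript),
        alternative_diagnoses=[query_model("alternative_diagnoses", transcript)],
        next_steps=[query_model("next_steps", transcript)],
        justification=query_model("justification", transcript),
    )


# Example: a short visit with two patient statements.
note = run_visit(["I've been unusually tired for a month.", "No, no chest pain."])
```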
What distinguishes ScopeAI from existing medical AI tools is its ability to independently complete the full spectrum of cognitive tasks that constitute a medical visit. While other healthcare AI systems support doctors by identifying cancers in scans or taking appointment notes, ScopeAI aims to replace much of the physician’s direct involvement in the diagnostic process.
The system currently operates in cardiology, endocrinology, and primary care settings, as well as with Akido’s street medicine team, which serves homeless populations in Los Angeles. For addiction medicine specialist Steven Hochman, who leads the street medicine program, ScopeAI enables caseworkers to conduct patient interviews independently, with Hochman reviewing and approving recommendations later. This has cut the time to access substance abuse treatment medications from weeks to 24 hours.
ScopeAI operates in a complex regulatory landscape not designed for AI systems that direct medical appointments. The California Medical Practice Act prohibits AI from replacing a doctor’s responsibility to diagnose and treat patients, but allows physicians to use AI tools without requiring in-person or real-time patient interaction.
Legal experts note that any AI system acting as a “doctor in a box” would likely need FDA approval and could violate medical licensure laws. However, Akido CEO Prashant Samant argues that since ScopeAI requires human physician review and approval of all recommendations, it falls short of independent medical practice and doesn’t require FDA approval.
This regulatory ambiguity extends to insurance coverage. While Medicaid allows doctors to approve ScopeAI recommendations asynchronously, many private insurance providers still require direct physician-patient interaction before treatment approval. The result is a two-tiered system in which it is primarily low-income patients who receive AI-mediated care.
One of the most significant risks identified by healthcare experts involves automation bias, the well-documented tendency for humans to over-rely on algorithmic recommendations. This phenomenon becomes particularly concerning when doctors aren’t physically present during patient interactions.
Emma Pierson, a computer scientist at UC Berkeley, warns that remote physician review might predispose doctors to “sort of nodding along in a way that you might not if you were actually in the room watching this happen.” The physical distance from the patient interaction could reduce the critical evaluation that ensures appropriate medical care.
Akido says it addresses automation bias through dedicated physician training and through system design intended to counter the blind spots that come with relying on physician intuition. The company monitors how often doctors correct ScopeAI’s recommendations and uses those corrections to further train the underlying models. Before deploying the system in a new medical specialty, Akido verifies that it lists the correct diagnosis among its top three recommendations at least 92% of the time when tested on historical case data.
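That 92% threshold is, in effect, a top-three accuracy requirement over labeled historical cases. A minimal sketch of how such a deployment gate could be computed, assuming a simple case format of my own invention (`confirmed_diagnosis`, `ranked_recommendations`), is shown below; the threshold comes from Akido’s stated figure, everything else is illustrative.

```python
def top3_accuracy(cases: list[dict]) -> float:
    """Fraction of historical cases whose confirmed diagnosis appears
    among the system's top three ranked recommendations."""
    hits = sum(
        case["confirmed_diagnosis"] in case["ranked_recommendations"][:3]
        for case in cases
    )
    return hits / len(cases)


# Illustrative cases only; a real evaluation would draw on records
# from the target specialty.
cases = [
    {"confirmed_diagnosis": "hypothyroidism",
     "ranked_recommendations": ["hypothyroidism", "anemia", "depression"]},
    {"confirmed_diagnosis": "type 2 diabetes",
     "ranked_recommendations": ["prediabetes", "type 2 diabetes", "metabolic syndrome"]},
]
ready_for_specialty = top3_accuracy(cases) >= 0.92
```

The physician correction rate the company tracks could be computed the same way, as the share of drafted recommendations that doctors override, and fed back as a training signal.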
However, the company hasn’t conducted rigorous comparative studies between ScopeAI appointments and traditional care to determine whether patient outcomes are maintained or improved. Such research could help evaluate whether automation bias represents a meaningful risk in practice.
The current deployment of ScopeAI highlights stark disparities in healthcare access. Primarily serving Medicaid patients and homeless populations, the system creates what critics describe as a two-tiered medical system where socioeconomic status determines the type of care received.
Samant acknowledges the apparent inequity but argues it is not intentional; rather, it is a function of current insurance structures. He contends that rapid access to AI-enhanced care may be superior to the long wait times and limited provider availability that typically characterize Medicaid patients’ experiences. All Akido patients can opt for traditional physician appointments if they are willing to wait, he notes.
Nevertheless, the disparity raises fundamental questions about healthcare equity. If AI-mediated care proves inferior to traditional physician interaction, the system could systematically disadvantage already vulnerable populations. Conversely, if it proves superior, it highlights how insurance structures prevent broader access to potentially better care.
Patients receiving ScopeAI-mediated care may not fully understand the extent of AI involvement in their medical decisions. Medical assistants inform patients that an AI system will listen to appointments to gather information for doctors, but they don’t explain that the AI generates diagnostic recommendations.
Medical ethicists worry this arrangement obscures the role of algorithmic decision-making in patient care. Zeke Emanuel, a professor of medical ethics at the University of Pennsylvania, suggests that the comfort of talking to a human intermediary could hide from patients how significantly algorithms influence their treatment.
The lack of full disclosure challenges traditional notions of informed consent and the “human touch” in medicine. While patients might feel more comfortable speaking with a human medical assistant rather than directly with an AI system, this comfort may come at the cost of understanding how their medical decisions are actually made.
ScopeAI illustrates both the promise and perils of deploying advanced AI in healthcare delivery. By enabling medical assistants to conduct thorough patient interviews while AI handles complex diagnostic reasoning, the system could dramatically expand access to medical care and reduce physician workloads. The 24-hour turnaround for addiction treatment medications demonstrates real-world benefits for underserved populations.
However, the approach also highlights critical challenges that healthcare systems must navigate as AI becomes more sophisticated. Questions about automation bias, healthcare equity, regulatory oversight, and patient consent will only become more pressing as similar systems proliferate. The success or failure of ScopeAI may well determine whether AI-mediated healthcare becomes a tool for expanding quality care or a mechanism that inadvertently creates new forms of medical inequality.
As healthcare systems grapple with physician shortages and rising patient volumes, ScopeAI offers a preview of a future where AI doesn’t just assist medical professionals but fundamentally reshapes how healthcare is delivered. Whether this transformation ultimately serves patients’ best interests will depend on how well we address the complex ethical, regulatory, and practical challenges it presents.