New studies show classical machine learning models outperforming foundation models in accuracy, safety, and reliability for clinical decision‑making.
Traditional machine learning models are outperforming large language models (LLMs) across key medical benchmarks, according to multiple new studies that question assumptions about the readiness of foundation models for clinical use.
A benchmarking study published on arXiv has found that classical feature-based models such as LightGBM consistently surpassed leading LLMs on both text and image datasets. In one diabetes prediction task, LightGBM achieved 0.9982 accuracy, while a zero-shot Gemini 2.5 model scored only 0.4224.
The authors wrote that “LoRA-tuned Gemma variants consistently showed the worst performance… failing to generalize from the minimal fine-tuning provided.”
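For context, the classical side of such a benchmark takes only a few lines of standard tooling. The sketch below is illustrative rather than a reproduction of the study: it trains LightGBM on scikit-learn's built-in breast-cancer dataset as a stand-in tabular task, since the paper's exact diabetes dataset and hyperparameters are not given here.

```python
# Minimal sketch of the classical-baseline side of a tabular benchmark.
# The dataset is a stand-in: scikit-learn's built-in breast-cancer data,
# not the diabetes dataset used in the study.
from lightgbm import LGBMClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Gradient-boosted trees over raw tabular features: no prompting,
# no tokenizer, and training completes in seconds on a laptop.
model = LGBMClassifier(n_estimators=200, learning_rate=0.05, random_state=42)
model.fit(X_train, y_train)

print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.4f}")
```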
A similar comparison study on COVID-19 mortality prediction showed that Random Forest and XGBoost achieved over 80% accuracy and F1 scores of 0.86, compared with GPT‑4’s 62% accuracy and 0.43 F1 in zero-shot classification.
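Scoring the zero-shot side is mostly plumbing: serialize each patient record into a prompt, parse the model's free-text answer into a binary label, and compute the same metrics. The harness below is a hypothetical sketch; `mock_llm_predict` and the record fields are assumptions standing in for whatever API and schema a given study uses.

```python
# Hypothetical sketch of a zero-shot scoring harness. `mock_llm_predict`
# stands in for a real LLM API call; the record fields are illustrative.
from sklearn.metrics import accuracy_score, f1_score

def mock_llm_predict(record: dict) -> str:
    # A real harness would serialize the record into a prompt, call the
    # model, and return its free-text answer; this mock just returns text.
    return "non-survivor" if record["spo2"] < 90 else "survivor"

def parse_label(answer: str) -> int:
    # Map the model's free-text verdict onto the binary mortality label.
    return 1 if "non-surviv" in answer.lower() else 0

records = [{"age": 71, "spo2": 88}, {"age": 40, "spo2": 97}]  # toy inputs
y_true = [1, 0]                                                # toy labels

y_pred = [parse_label(mock_llm_predict(r)) for r in records]

# Accuracy and F1 are the two headline metrics in the comparison above.
print(f"accuracy={accuracy_score(y_true, y_pred):.2f}  "
      f"f1={f1_score(y_true, y_pred):.2f}")
```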
Concerns over behavioral reliability
Other research papers have raised concerns about behavioral reliability in clinical contexts.
- The SycoEval‑EM study from the University of North Carolina at Chapel Hill, the University of Waterloo, and Stanford University evaluated 20 medical LLMs in 1,875 simulated emergency encounters. The models frequently yielded to patient pressure for inappropriate care, a phenomenon the researchers called “sycophancy”. Acquiescence rates ranged from 0% to 100%, with higher susceptibility to imaging requests (38.8%) than to opioid prescriptions (25%); a sketch of how such rates are tallied appears after this list. “Susceptibility to patient pressure… represents a critical vulnerability,” the authors warned, recommending mandatory adversarial testing before any clinical deployment.
- Researchers at the University of Melbourne and the University of Cambridge demonstrated that standard transformer architectures struggle with the irregular time-series data common in intensive care units. Their paper, published in Frontiers in Artificial Intelligence, reported that specialized encoder designs improved performance by 12.8% on average but required roughly ten times longer training to match conventional supervised models.
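The acquiescence rates reported by SycoEval‑EM are, mechanically, simple proportions over the simulated encounters. The sketch below shows that bookkeeping; the encounter schema and the `yielded` flag are illustrative assumptions, not the study's actual data format.

```python
# Minimal sketch of tallying acquiescence rates per model and request type.
# The encounter records and the `yielded` field are illustrative assumptions.
from collections import defaultdict

encounters = [
    {"model": "model-a", "request": "imaging", "yielded": True},
    {"model": "model-a", "request": "imaging", "yielded": False},
    {"model": "model-a", "request": "opioids", "yielded": False},
    {"model": "model-b", "request": "imaging", "yielded": True},
]

totals = defaultdict(int)
yields = defaultdict(int)
for e in encounters:
    key = (e["model"], e["request"])
    totals[key] += 1
    yields[key] += e["yielded"]  # a bool counts as 0/1

# Acquiescence rate: fraction of pressured encounters in which the model
# agreed to the inappropriate request.
for key in sorted(totals):
    print(key, f"{yields[key] / totals[key]:.1%}")
```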
Finally, a separate systematic review by Ghnemat and Saleh in Discover Artificial Intelligence has concluded that, while LLMs can aid diagnosis, workflow efficiency, and patient communication, risks around bias, privacy, and over‑reliance remain unresolved. “Their role should be seen as complementary, augmenting human expertise rather than replacing it,” the authors wrote.