HeadlinesBriefing.com

Google DeepMind Expands MedGemma Healthcare AI Models

Google DeepMind Blog

Google DeepMind has expanded its MedGemma collection with two new open models designed for healthcare AI development. The MedGemma 27B Multimodal adds support for complex multimodal and longitudinal electronic health record interpretation, while MedSigLIP is a lightweight 400M parameter image encoder for classification, search, and retrieval tasks. Both models can run on a single GPU, with the smaller variants adaptable to mobile hardware.

The 27B text model scores 87.7% on the MedQA benchmark, placing it within 3 points of DeepSeek R1 at roughly one tenth the inference cost. MedGemma 4B achieves 64.4% on MedQA, ranking among the best open models under 8B parameters. In an unblinded study, a US board-certified radiologist judged 81% of chest X-ray reports generated by MedGemma 4B to be accurate enough to support patient management similar to that of the original reports.
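The cost-effectiveness claim can be made concrete with the article's own figures. A minimal sketch, assuming the "within 3 points" gap is measured as DeepSeek R1's score minus MedGemma's (the article does not give R1's exact MedQA number, so the bound below is an inference, not a reported result):

```python
# Figures taken from the article.
medgemma_27b_medqa = 87.7    # % accuracy on MedQA
max_gap_to_r1 = 3.0          # "within 3 points of DeepSeek R1"
relative_cost = 0.1          # ~one tenth of R1's inference cost

# Assumption: the gap means R1 scores at most 3 points higher,
# giving an implied upper bound on R1's MedQA accuracy.
r1_upper_bound = medgemma_27b_medqa + max_gap_to_r1

# Fraction of R1-level accuracy retained while paying ~10% of the
# cost (a rough derived ratio, not a figure from the article).
accuracy_retained = medgemma_27b_medqa / r1_upper_bound

print(f"Implied R1 upper bound: {r1_upper_bound:.1f}%")
print(f"Accuracy retained at {relative_cost:.0%} of the cost: "
      f"{accuracy_retained:.1%}")
```

In other words, under these assumptions MedGemma 27B keeps roughly 97% of the accuracy for about a tenth of the spend, which is the tradeoff the article is highlighting.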

Because the models are open, developers retain full control over privacy, infrastructure, and modifications, which is critical for healthcare applications. DeepHealth in Massachusetts is using MedSigLIP for chest X-ray triage, while Chang Gung Memorial Hospital in Taiwan found that MedGemma works well with traditional Chinese-language medical literature. Tap Health in Gurgaon, India, noted the model's reliability on tasks requiring sensitivity to clinical context.

Full technical details are available in the MedGemma technical report.