🇺🇦 Gemma-UA-Cardio-Q4KM (Specialized Ukrainian LLM)

🩺 High-Quality Cardiology Assistant (Ukrainian Language)

This model is a specialized, instruction-following version of google/gemma-4b-it (~2.4 GB as a Q4_K_M GGUF), fine-tuned to provide cardiology-related information and answer medical queries in Ukrainian.

The adaptation process involved a crucial two-stage LoRA fine-tuning approach:

  1. Linguistic Adaptation on a large general Ukrainian corpus.
  2. Domain Specialization on a dedicated corpus of cardiovascular health and clinical data (14,500 cardiology discharge summaries, i.e. epicrises).

💾 Model Details & Files

This repository contains the highly optimized GGUF file, ready for immediate, efficient inference on consumer hardware.

| Detail | Value |
| --- | --- |
| Base Model | google/gemma-4b-it |
| Language | Ukrainian (specialized) |
| Specialization | Cardiology (cardiovascular medicine, clinical terminology) |
| Quantization | GGUF Q4_K_M (highly efficient) |
| GGUF File Size | ~2.4 GB |
| Context Length | 4096 (recommended minimum) |
| Pipeline Tool | llama.cpp for GGUF conversion/quantization |

Downloadable Files

  • gemma_ua_med_final_q4km.gguf: The main file, ready for use with llama.cpp, Ollama, or LM Studio.

🚀 How to Run (via llama.cpp CLI)

The GGUF format is ideal for running on CPU-dominant systems or systems with smaller VRAM using the llama.cpp framework.


1. Prerequisites

Ensure you have llama.cpp compiled. You will use the llama-cli binary.


2. Command Line Interface (CLI)

Use the following command to run the model in interactive chat mode. The System Prompt (-sys) is essential for activating the cardiology persona.
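A minimal invocation sketch, assuming llama.cpp is compiled and the GGUF file sits in the current directory. The flag names follow the "Key Parameters" list in this README; `-sys` requires a recent llama.cpp build, and the system prompt shown here is a shortened, hypothetical variant ("You are a cardiologist. Answer in Ukrainian."):

```shell
# Interactive chat with the quantized model (flags per the Key Parameters list).
./llama-cli \
  -m ./gemma_ua_med_final_q4km.gguf \
  -sys "Ви — лікар-кардіолог. Відповідайте українською мовою." \
  -c 4096 \
  -t $(nproc) \
  -i
```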


3. Example Usage (Ollama Modelfile)

The same GGUF file can be run through Ollama. Save the following as `Modelfile`; the template and stop tokens follow the standard Gemma chat format with `<start_of_turn>`/`<end_of_turn>` markers.

```
FROM ./gemma_ua_med_final_q4km.gguf

# "You are a cardiologist. Based on the patient's data and the discharge
# epicrisis, formulate clinically justified recommendations."
SYSTEM """Ви — лікар-кардіолог. На основі даних пацієнта та виписного епікризу сформуйте клінічно обґрунтовані рекомендації."""

TEMPLATE """<start_of_turn>user
{{ .System }}

{{ .Prompt }}<end_of_turn>
<start_of_turn>model
"""

# --- IMPORTANT: Stop Tokens ---
PARAMETER stop "<end_of_turn>"
PARAMETER stop "<start_of_turn>user"
PARAMETER stop "<start_of_turn>model"
PARAMETER stop "<start_of_turn>system"

# --- Recommended generation parameters ---
PARAMETER temperature 0.6
PARAMETER top_p 0.9
PARAMETER num_ctx 4096
PARAMETER repeat_penalty 1.15
PARAMETER repeat_last_n 256
```
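With the Modelfile saved locally, the model can be registered and queried as sketched below; `gemma-ua-cardio` is an assumed local model name, and the sample question is hypothetical (Ukrainian for "What are the symptoms of atrial fibrillation?"):

```shell
# Register the GGUF + Modelfile under a local name (name is an assumption).
ollama create gemma-ua-cardio -f Modelfile

# Ask a sample cardiology question in Ukrainian.
ollama run gemma-ua-cardio "Які симптоми фібриляції передсердь?"
```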

Key Parameters (llama-cli):

  • -m: Specifies the path to the GGUF model file.
  • -sys: System prompt; sets the model's professional role and required language.
  • -t $(nproc): Utilizes all available CPU cores for maximum speed.
  • -i: Activates interactive chat mode.
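To make the chat template concrete, the sketch below mimics how Ollama substitutes `{{ .System }}` and `{{ .Prompt }}` into the prompt string. The `<start_of_turn>`/`<end_of_turn>` markers are assumptions based on the standard Gemma chat format, and `render` is an illustrative helper, not part of any API:

```python
# Sketch of the prompt string the Modelfile TEMPLATE produces.
# Turn markers follow the standard Gemma chat format (an assumption here);
# Ollama performs this substitution internally.
GEMMA_TEMPLATE = (
    "<start_of_turn>user\n"
    "{system}\n\n"
    "{prompt}<end_of_turn>\n"
    "<start_of_turn>model\n"
)

def render(system: str, prompt: str) -> str:
    """Substitute the system prompt and user prompt into the chat template."""
    return GEMMA_TEMPLATE.format(system=system, prompt=prompt)

text = render("You are a cardiologist.", "Explain atrial fibrillation briefly.")
print(text)
```

The stop tokens in the Modelfile exist precisely to cut generation when the model emits the next `<end_of_turn>` or tries to start a new turn itself.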


4. Limitations

  • Optimized for cardiology and cardiac surgery
  • Reduced accuracy outside these domains
  • No vision capabilities (text-only MedGemma IT)
  • May generate incomplete or generalized recommendations

5. Citing & Authors

If you use this model in your research, please cite:

```
@misc{Ostashko2025MedGemmaCardiology,
  title  = {MedGemma-4B-Cardiology: A Domain-Finetuned Clinical LLM for Cardiology},
  author = {Uaritm},
  year   = {2025},
  url    = {ai.esemi.org}
}
```

Project homepage: https://ai.esemi.org

License

The use of this model is subject to the terms of the original Gemma License. Please review and adhere to the licensing terms associated with the base model.
