MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives
Paper: arXiv:2501.04184 (https://arxiv.org/abs/2501.04184)
MedicalNarratives is a large-scale, multimodal medical vision–language corpus built from pedagogical videos and scientific articles. It is designed to unify semantic (classification, retrieval, captioning) and dense (detection, segmentation, grounding) tasks across clinical imaging.
Project page: https://medical-narratives.github.io
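The per-sample fields are listed below. First, a minimal loading sketch in Python, assuming the corpus is distributed through the Hugging Face `datasets` library; the repo id shown here is hypothetical, so check the project page for the published location:

```python
from datasets import load_dataset

# Hypothetical repo id -- see the project page for the actual dataset location.
ds = load_dataset("medical-narratives/MedicalNarratives", split="train")

sample = ds[0]
print(sample["uid"])      # "{VIDEO_ID}_{IDX}" for video samples, "{PMCID}" for article samples
print(sample["domains"])  # imaging domain list, stored as a string
print(sample["text"])     # denoised captions for this image
```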
Dataset fields (per sample):

Common fields:
- uid: str — Unique ID, {VIDEO_ID}_{IDX} for video samples or {PMCID} for article samples
- index: int — Global dataset index
- image: str — Image filepath
- text: List[str] — Denoised captions
- clip_name: str — Filename of the video clip corresponding to the image
- med_num: int — Number of captions
- video: bool — Video-derived sample
- article: bool — Article-derived sample
- sample_index: int — Index within the source video
- domains: str — Imaging domain list as a string
- sample_uids: Dict[str, int] — Mapping of sibling samples in the same video
- sample_count: int — Total samples from this video
- multidomain: bool — True if the sample spans more than one modality
- multisample: bool — True if sample_count > 1

Video-specific fields:
- video_id: str — YouTube ID
- video_link: str — YouTube URL
- video_channel: str — Channel name
- duration: float — Video length in seconds
- fps: float — Frames per second
- spw: float — Seconds per word (ASR alignment)
- pair_chunk_time: float — Target chunk duration
- chunk_times: List[List[float]] — Stable-scene intervals
- roi_chunk_times: List[List[float]] — ROI-activity intervals
- width: int, height: int — Frame dimensions
- chunk_id: int — Chunk index
- chunk_start_time: float, chunk_end_time: float — Chunk timestamps
- chunk_medical_text: List[str] — Chunk-level medical text
- chunk_roi_text: List[str] — ROI text
- chunk_med_umls: List — UMLS entities for the medical text
- chunk_roi_umls: List — UMLS entities for the ROI text
- noisy_timed_captions: List[Dict] — Raw ASR transcript
- asr_correction_dict: Dict[str, str] — ASR corrections
- umls: List[List[Dict]] — UMLS entities for each caption in text
- magnification — Video magnification (usually null)
- img_time — Internal processing field (ignore)
- roi — ROI spatial annotations (boxes/masks)
- traces — Cursor traces (x, y, t)
- subdomain: List[str] — Fine-grained subdomains

Article-specific fields:
- PMCID: str — PubMed Central article ID
- pubmed_link: str — Article URL (https://www.ncbi.nlm.nih.gov/pmc/articles/{PMCID}/)
- img_xref_id: str — Figure or subfigure cross-reference ID from the article
- pubmed_caption: str — Cleaned full figure caption
- pubmed_caption_raw: str — Raw figure caption before processing
- img_inline_sentences: List[str] — Article sentences referencing the figure
- subcaptions: List[Dict] — Subfigure caption entries (with image filenames normalized)
- subfigures: Dict[str, Any] — Mapping from subfigure image filename to metadata
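Video-derived and article-derived samples populate different subsets of these fields, keyed by the video/article flags. Below is a sketch of dispatching on those flags and collapsing a cursor trace into a bounding box; the exact nesting of traces (assumed here to be a list of traces, each a list of (x, y, t) points in pixel coordinates) is an assumption:

```python
from typing import Any, Dict, Sequence, Tuple

def trace_to_bbox(trace: Sequence[Tuple[float, float, float]]) -> Tuple[float, float, float, float]:
    """Collapse one cursor trace of (x, y, t) points into an (x_min, y_min, x_max, y_max) box."""
    xs = [p[0] for p in trace]
    ys = [p[1] for p in trace]
    return min(xs), min(ys), max(xs), max(ys)

def describe(sample: Dict[str, Any]) -> None:
    if sample["video"]:
        # Video-derived: captions are aligned to a clip interval.
        print(f"{sample['clip_name']} "
              f"[{sample['chunk_start_time']:.1f}s-{sample['chunk_end_time']:.1f}s]: "
              f"{sample['text']}")
        for trace in sample.get("traces") or []:  # traces may be null or empty
            print("  approx ROI:", trace_to_bbox(trace))
    elif sample["article"]:
        # Article-derived: caption text comes from the PMC figure.
        print(f"PMC {sample['PMCID']} ({sample['img_xref_id']}): {sample['pubmed_caption']}")
```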
If you use MedicalNarratives, please cite:

@misc{ikezogwo2025medicalnarrativesconnectingmedicalvision,
title = {MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives},
author = {Ikezogwo, Wisdom O. and Zhang, Kevin and Seyfioglu, Mehmet Saygin and Ghezloo, Fatemeh and Shapiro, Linda and Krishna, Ranjay},
year = {2025},
eprint = {2501.04184},
archivePrefix = {arXiv},
primaryClass = {cs.CV},
url = {https://arxiv.org/abs/2501.04184}
}