Synthesizing High-Quality Visual Question Answering from Medical Documents with Generator-Verifier LMMs
Xiaoke Huang* Ningsen Wang* Hui Liu Xianfeng Tang Yuyin Zhou
* indicates equal contribution
Large Multimodal Models (LMMs) are increasingly capable of answering medical questions that require joint reasoning over images and text, yet training general medical VQA systems is impeded by the lack of large, openly usable, high-quality corpora. We present MedVLSynther, a rubric-guided generator-verifier framework that synthesizes high-quality multiple-choice VQA items directly from open biomedical literature by conditioning on figures, captions, and in-text references. The generator produces self-contained stems and parallel, mutually exclusive options under a machine-checkable JSON schema; a multi-stage verifier enforces essential gates (self-containment, single correct answer, clinical validity, image-text consistency), awards fine-grained positive points, and penalizes common failure modes before acceptance. Applying this pipeline to PubMed Central yields MedSynVQA: 13,087 audited questions over 14,803 images spanning 13 imaging modalities and 28 anatomical regions. Training open-weight LMMs via reinforcement learning with verifiable rewards (RLVR) improves accuracy across six medical VQA benchmarks, achieving averages of 55.85 (3B) and 58.15 (7B), with up to 77.57 on VQA-RAD and 67.76 on PathVQA, outperforming strong medical LMMs. Ablations verify that both generation and verification are necessary and that more verified data consistently helps, and a targeted contamination analysis detects no leakage from evaluation suites. By operating entirely on open literature and open-weight models, MedVLSynther offers an auditable, reproducible, and privacy-preserving path to scalable medical VQA training data.
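
To make the verification stage concrete, the following is a minimal sketch of what such an accept/reject pass could look like. It is illustrative only: the JSON field names ("question", "options", "answer"), the gate heuristics, the point values, and the acceptance threshold are our assumptions, not the paper's actual rubric, which is applied by a rubric-guided, multi-stage LMM verifier.

import json

# Minimal illustration of the verifier's accept/reject logic, NOT the
# paper's actual rubric: the field names, gate heuristics, point values,
# and threshold below are assumptions.
REQUIRED_FIELDS = {"question", "options", "answer"}

def passes_gates(item: dict) -> bool:
    """Essential gates: any single failure rejects the item outright."""
    if not REQUIRED_FIELDS <= item.keys():
        return False
    question, options = item["question"], item["options"]
    if not isinstance(question, str) or not isinstance(options, list):
        return False
    # Single correct answer: the gold option appears exactly once.
    if options.count(item["answer"]) != 1:
        return False
    # Mutually exclusive options: no duplicated choices (crude proxy).
    if len(set(options)) != len(options):
        return False
    # Crude self-containment check: the stem must stand on its own,
    # without deferring to the source article or its caption.
    banned = ("see the caption", "as described in the text", "in this study")
    return not any(phrase in question.lower() for phrase in banned)

def quality_score(item: dict) -> float:
    """Fine-grained positive points minus penalties for failure modes."""
    score = 0.0
    if len(item["options"]) >= 4:
        score += 1.0  # enough parallel distractors
    if len(item["question"].split()) >= 15:
        score += 1.0  # clinically detailed stem
    if any(len(str(opt).split()) <= 1 for opt in item["options"]):
        score -= 0.5  # penalize terse, non-parallel options
    return score

def verify(raw: str, threshold: float = 1.0) -> dict | None:
    """Machine-checkable schema first, then gates, then scoring."""
    try:
        item = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(item, dict) or not passes_gates(item):
        return None
    return item if quality_score(item) >= threshold else None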
 
  The figure illustrates our data curation pipeline.
 
  
  The figure illustrates our dataset statistics and a comparison with other datasets.
 
  
  Both MedVLSynther 3B and 7B achieve the best average accuracy across benchmarks, demonstrating strong gains at small and medium model scales. We also ablate the data scale, the choice of generator and verifier LMMs, the training approach, and the data source; see the figure below.
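
Because every accepted item carries a single verified correct option, the RLVR training signal reduces to an automatic exact-match check against the gold option letter. The sketch below illustrates one such verifiable reward; the answer-extraction regex and the function name mcq_reward are our assumptions, not the paper's exact parser.

import re

def mcq_reward(response: str, gold: str) -> float:
    """Binary verifiable reward: 1.0 iff the option letter extracted from
    the model's response matches the verified gold answer, else 0.0."""
    # Take the last standalone A-E letter as the final committed choice, so
    # reasoning text before the answer does not confuse parsing. (Upper-
    # casing makes the article "a" match too; a real parser is stricter.)
    letters = re.findall(r"\b([A-E])\b", response.upper())
    return 1.0 if letters and letters[-1] == gold.strip().upper() else 0.0

# Usage: a reasoning trace that ends with a committed choice.
assert mcq_reward("The lesion enhances peripherally, so the answer is C.", "c") == 1.0
assert mcq_reward("Likely B, but the final answer is D.", "B") == 0.0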
 
  Qualitative case studies: 3B model outputs (top row) and 7B model outputs (bottom row).
The figure shows examples from our contamination analysis between MedSynVQA and the evaluation benchmarks.
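
As a concrete illustration of how image-level leakage can be screened, the sketch below uses average-hash near-duplicate detection between training and benchmark images, implemented with PIL only. This is a generic stand-in rather than the paper's actual detector, and the hash size and distance threshold are our assumptions.

from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Average hash: downscale to size x size grayscale, then threshold
    each pixel at the image mean to get a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_near_duplicate(train_path: str, test_path: str, max_dist: int = 5) -> bool:
    """Flag a training image within max_dist bits of a benchmark image;
    flagged pairs would then be inspected (and removed) by hand."""
    return hamming(average_hash(train_path), average_hash(test_path)) <= max_dist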
 
  
@article{MedVLSynther,
  title={MedVLSynther: Synthesizing High-Quality Visual Question Answering from Medical Documents with Generator-Verifier LMMs},
  author={Huang, Xiaoke and Wang, Ningsen and Liu, Hui and Tang, Xianfeng and Zhou, Yuyin},
  journal={arXiv preprint arXiv:2510.25867},
  year={2025}
}
@article{MedVLThinker,
  title={MedVLThinker: Simple Baselines for Multimodal Medical Reasoning},
  author={Huang, Xiaoke and Wu, Juncheng and Liu, Hui and Tang, Xianfeng and Zhou, Yuyin},
  journal={arXiv preprint arXiv:2508.02669},
  year={2025}
}