We systematically construct this dataset by leveraging the versatile capabilities of GPT-4o to unify and refine three popular image-editing datasets: OmniEdit, HQ-Edit, and UltraEdit. Specifically, our methodology involves (1) regenerating output images to enhance visual quality and instruction alignment, and (2) selectively rewriting prompts to improve semantic clarity. To validate the efficacy of our dataset, we fine-tune advanced open-source models on GPT-IMAGE-EDIT-1.5M. The empirical results are exciting: for example, the fine-tuned FluxKontext achieves highly competitive performance across a comprehensive suite of benchmarks, including 7.24@GEdit-EN, 3.80@ImgEdit-Full, and 8.78@Complex-Edit, showing stronger instruction following and higher perceptual quality while maintaining identity. These scores markedly exceed all previously published open-source methods and substantially narrow the gap to leading proprietary models. We hope the full release of GPT-IMAGE-EDIT-1.5M can help catalyze further open research in instruction-guided image editing. A minimal sketch of the two refinement steps is given below.
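For concreteness, the two refinement steps can be pictured as the following minimal Python sketch. It assumes the standard OpenAI API (`gpt-4o` via Chat Completions for prompt rewriting, and `gpt-image-1` via the Images edit endpoint as a stand-in for GPT-4o's image-generation tooling); the helper names, system prompt, and file paths are illustrative assumptions, not the released pipeline.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite_instruction(instruction: str) -> str:
    """Step 2: selectively rewrite an editing prompt for semantic clarity.

    The system prompt here is a hypothetical placeholder; the paper's
    exact rewriting criteria are not reproduced.
    """
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Rewrite the image-editing instruction so it is "
                        "unambiguous and self-contained. Return only the "
                        "rewritten instruction."},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content.strip()

def regenerate_output(source_path: str, instruction: str, out_path: str) -> None:
    """Step 1: regenerate the edited output image from the source image
    plus the (rewritten) instruction, improving visual quality and
    instruction alignment.
    """
    with open(source_path, "rb") as src:
        result = client.images.edit(
            model="gpt-image-1",  # assumed stand-in for the paper's tooling
            image=src,
            prompt=instruction,
        )
    # gpt-image-1 returns the edited image as base64-encoded bytes.
    with open(out_path, "wb") as dst:
        dst.write(base64.b64decode(result.data[0].b64_json))

if __name__ == "__main__":
    # Illustrative single-sample run; the actual pipeline batches millions
    # of (source image, instruction) pairs from OmniEdit, HQ-Edit, and
    # UltraEdit.
    prompt = rewrite_instruction("make sky nicer")
    regenerate_output("source.png", prompt, "edited.png")
```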
@misc{wang2025gptimageedit15mmillionscalegptgeneratedimage,
  title={GPT-IMAGE-EDIT-1.5M: A Million-Scale, GPT-Generated Image Dataset},
  author={Yuhan Wang and Siwei Yang and Bingchen Zhao and Letian Zhang and Qing Liu and Yuyin Zhou and Cihang Xie},
  year={2025},
  eprint={2507.21033},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.21033},
}