HumanEdit : A High-Quality Human-Rewarded Dataset for Instruction-based Image Editing

1Skywork AI, 2National University of Singapore, 3Peking University, 4Nanyang Technological University
*Equal contribution, Corresponding Author
jinbin.bai@u.nus.edu
Teaser Image

Abstract

We present HumanEdit, a high-quality, human-rewarded dataset specifically designed for instruction-guided image editing, enabling precise and diverse image manipulations through open-form language instructions. Previous large-scale editing datasets often incorporate minimal human feedback, leading to challenges in aligning datasets with human preferences. HumanEdit bridges this gap by employing human annotators to construct data pairs and administrators to provide feedback. Through meticulous curation, HumanEdit comprises 5,751 images and required more than 2,500 hours of human effort across four stages, ensuring both accuracy and reliability for a wide range of image editing tasks. The dataset includes six distinct types of editing instructions: Action, Add, Counting, Relation, Remove, and Replace, encompassing a broad spectrum of real-world scenarios. All images in the dataset are accompanied by masks, and for a subset of the data, we ensure that the instructions are sufficiently detailed to support mask-free editing. Furthermore, HumanEdit offers comprehensive diversity and high-resolution 1024 × 1024 content sourced from various domains, establishing a new and versatile benchmark for instruction-based image editing datasets. To advance future research and establish evaluation benchmarks in the field of image editing, we release HumanEdit at https://huggingface.co/datasets/BryanW/HumanEdit.

Distribution of our human-rewarded editing instructions

Add Remove Replace Action Counting Relation Sum
HumanEdit-full 801 1,813 1,370 659 698 410 5,751
HumanEdit-core 30 188 97 37 20 28 400
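The per-type counts above can be sanity-checked directly; a minimal sketch (the counts are taken from the table, and the variable names are illustrative):

```python
# Per-type instruction counts from the HumanEdit distribution table.
full = {"Add": 801, "Remove": 1813, "Replace": 1370,
        "Action": 659, "Counting": 698, "Relation": 410}
core = {"Add": 30, "Remove": 188, "Replace": 97,
        "Action": 37, "Counting": 20, "Relation": 28}

# The row sums match the reported dataset sizes.
assert sum(full.values()) == 5751  # HumanEdit-full
assert sum(core.values()) == 400   # HumanEdit-core
```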

Comparison

Comparison of existing image editing datasets. "Real Image for Edit" denotes whether real images, rather than model-generated images, are used for editing. "Real-world Scenario" indicates whether images edited by users in the real world are included. "Human" denotes whether human annotators are involved. "Ability Classification" refers to evaluating editing ability along different dimensions. "Mask" indicates whether masks for editing are provided. "Non-Mask Editing" denotes the ability to edit without a mask input.

Dataset Real Image for Edit Real-world Scenario Human Ability Classification Mask Non-Mask Editing
InstructPix2Pix
MagicBrush
GIER
MA5k-Req
TEdBench
HQ-Edit
SEED-Data-Edit
AnyEdit
HumanEdit

Dataset Statistics of HumanEdit-full

The river chart of HumanEdit-full. The first node of the river represents the type of edit, the second node corresponds to the verb extracted from the instruction, and the final node corresponds to the noun in the instruction. To maintain clarity, we selected only the top 50 most frequent nouns.

An Overview of Keywords in HumanEdit-full Edit Instructions: The inner circle represents the verb in the edit instruction, while the outer circle highlights the noun associated with the verb in each instruction.

Dataset Statistics of HumanEdit-core

The river chart of HumanEdit-core. The first node of the river represents the type of edit, the second node corresponds to the verb extracted from the instruction, and the final node corresponds to the noun in the instruction. To maintain clarity, we selected only the top 50 most frequent nouns.

An Overview of Keywords in HumanEdit-core Edit Instructions: The inner circle represents the verb in the edit instruction, while the outer circle highlights the noun associated with the verb in each instruction.

BibTeX

@article{bai2024humanedit,
  title={HumanEdit: A High-Quality Human-Rewarded Dataset for Instruction-based Image Editing},
  author={Bai, Jinbin and Chow, Wei and Yang, Ling and Li, Xiangtai and Li, Juncheng and Zhang, Hanwang and Yan, Shuicheng},
  journal={arXiv preprint arXiv:2412.04280},
  year={2024}
}