Modalities: Image, Text · Formats: arrow · Libraries: Datasets
nielsr (HF Staff) committed · verified
Commit cd52f20 · 1 parent: 23f49bd

Improve dataset card: Add metadata, abstract, and GitHub link


This PR enhances the `EditReward-Bench` dataset card by:
- Adding `task_categories: ['image-text-to-text']`, `language: ['en']`, and relevant `tags` to the metadata for improved discoverability.
- Including the paper's abstract in a dedicated section to provide comprehensive background information.
- Adding an explicit GitHub badge (`https://github.com/VectorSpaceLab/EditScore`) to the top section for easier access to the associated code.
- Ensuring the existing "Quick Start" section, which includes a sample usage code snippet, remains prominently displayed.
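The metadata this PR adds lives in the YAML front matter between the leading `---` fences of `README.md`. A minimal stdlib-only sketch of checking that the new fields are present — the `README` string below is a hypothetical excerpt, not the full card, and `parse_front_matter` is an illustrative helper, not a Hub API:

```python
# Hypothetical excerpt of a dataset card's front matter (not the full README).
README = """---
license: apache-2.0
language:
- en
task_categories:
- image-text-to-text
tags:
- image-editing
- reward-modeling
---
# EditReward-Bench
"""

def parse_front_matter(text):
    """Parse the simple key/list YAML front matter between the '---' fences."""
    lines = text.splitlines()
    assert lines[0] == "---", "card must start with a front-matter fence"
    end = lines.index("---", 1)  # closing fence
    meta, key = {}, None
    for line in lines[1:end]:
        if line.startswith("- ") and key is not None:
            meta[key].append(line[2:].strip())  # list item under the last key
        elif ":" in line:
            key, _, value = line.partition(":")
            key = key.strip()
            # scalar value, or an empty list awaiting "- item" lines
            meta[key] = value.strip() if value.strip() else []
    return meta

meta = parse_front_matter(README)
print(meta["task_categories"])  # → ['image-text-to-text']
```

A real card would be validated by the Hub itself on push; this sketch only shows where the discoverability fields sit.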

Files changed (1):
README.md (+17 -0)
@@ -1,5 +1,16 @@
 ---
 license: apache-2.0
+language:
+- en
+task_categories:
+- image-text-to-text
+tags:
+- image-editing
+- reward-modeling
+- reinforcement-learning
+- benchmark
+- evaluation
+- multimodal
 ---
 
 <p align="center">
@@ -9,6 +20,7 @@ license: apache-2.0
 <p align="center">
 <a href="https://vectorspacelab.github.io/EditScore"><img src="https://img.shields.io/badge/Project%20Page-EditScore-yellow" alt="project page"></a>
 <a href="https://arxiv.org/abs/2509.23909"><img src="https://img.shields.io/badge/arXiv%20paper-2509.23909-b31b1b.svg" alt="arxiv"></a>
+<a href="https://github.com/VectorSpaceLab/EditScore"><img src="https://img.shields.io/badge/GitHub-Code-blue" alt="github code"></a>
 <a href="https://huggingface.co/collections/EditScore/editscore-68d8e27ee676981221db3cfe"><img src="https://img.shields.io/badge/EditScore-🤗-yellow" alt="model"></a>
 <a href="https://huggingface.co/datasets/EditScore/EditReward-Bench"><img src="https://img.shields.io/badge/EditReward--Bench-🤗-yellow" alt="dataset"></a>
 </p>
@@ -23,6 +35,11 @@ license: apache-2.0
 </h4>
 
 **EditScore** is a series of state-of-the-art open-source reward models (7B–72B) designed to evaluate and enhance instruction-guided image editing.
+
+## Paper Abstract
+
+Instruction-guided image editing has achieved remarkable progress, yet current models still face challenges with complex instructions and often require multiple samples to produce a desired result. Reinforcement Learning (RL) offers a promising solution, but its adoption in image editing has been severely hindered by the lack of a high-fidelity, efficient reward signal. In this work, we present a comprehensive methodology to overcome this barrier, centered on the development of a state-of-the-art, specialized reward model. We first introduce EditReward-Bench, a comprehensive benchmark to systematically evaluate reward models on editing quality. Building on this benchmark, we develop EditScore, a series of reward models (7B–72B) for evaluating the quality of instruction-guided image editing. Through meticulous data curation and filtering, EditScore effectively matches the performance of leading proprietary VLMs. Furthermore, coupled with an effective self-ensemble strategy tailored for the generative nature of EditScore, our largest variant even surpasses GPT-5 on the benchmark. We then demonstrate that a high-fidelity reward model is the key to unlocking online RL for image editing. Our experiments show that, while even the largest open-source VLMs fail to provide an effective learning signal, EditScore enables efficient and robust policy optimization. Applying our framework to a strong base model, OmniGen2, results in a final model that shows a substantial and consistent performance uplift. Overall, this work provides the first systematic path from benchmarking to reward modeling to RL training in image editing, showing that a high-fidelity, domain-specialized reward model is the key to unlocking the full potential of RL in this domain.
+
 ## ✨ Highlights
 - **State-of-the-Art Performance**: Effectively matches the performance of leading proprietary VLMs. With a self-ensembling strategy, **our largest model surpasses even GPT-5** on our comprehensive benchmark, **EditReward-Bench**.
 - **A Reliable Evaluation Standard**: We introduce **EditReward-Bench**, the first public benchmark specifically designed for evaluating reward models in image editing, featuring 13 subtasks, 11 state-of-the-art editing models (*including proprietary models*), and expert human annotations.
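A benchmark for reward models like EditReward-Bench ultimately measures how often a model's scores agree with expert human preferences between candidate edits. A minimal sketch of that kind of pairwise agreement — all scores and labels below are invented for illustration, and this is not the benchmark's actual protocol:

```python
# Hypothetical pairwise-preference agreement: for each pair of candidate edits
# of the same source image, check whether the reward model ranks them the same
# way as the human annotator. All numbers here are made up for illustration.
pairs = [
    # (reward score for edit A, reward score for edit B, human-preferred edit)
    (0.82, 0.41, "A"),
    (0.30, 0.75, "B"),
    (0.55, 0.60, "A"),  # the model disagrees with the annotator here
]

def pairwise_accuracy(pairs):
    """Fraction of pairs where the higher-scored edit matches the human pick."""
    correct = 0
    for score_a, score_b, preferred in pairs:
        model_pick = "A" if score_a > score_b else "B"
        correct += model_pick == preferred
    return correct / len(pairs)

print(round(pairwise_accuracy(pairs), 2))  # → 0.67
```

Higher agreement means the reward signal is more trustworthy for downstream uses such as best-of-N sampling or RL fine-tuning.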