Chaos2629 committed on
Commit f709b06 · verified · 1 Parent(s): c9ea0fc

Update readme.md

Files changed (1)
  1. readme.md +43 -43
readme.md CHANGED
@@ -1,43 +1,43 @@
- ---
- license: apache-2.0
- ---
- # 🖼️ DiffSeg30k -- A multi-turn diffusion-editing dataset for localized AIGC detection
-
- A dataset for **segmenting diffusion-based edits**, ideal for training and evaluating models that localize edited regions and identify the underlying diffusion model.
-
- ## 📁 Dataset Usage
- - `xxxxxxxx.image.png`: Edited images. Each image may have undergone 1, 2, or 3 editing operations.
- - `xxxxxxxx.mask.png`: The corresponding mask indicating edited regions, where pixel values encode both the type of edit and the diffusion model used.
-
- Load images and masks as follows:
-
- ```python
- from datasets import load_dataset
- dataset = load_dataset("Chaos2629/Diffseg30k", split="train")
- image, mask = dataset[0]['image'], dataset[0]['mask']
- ```
-
- ## 🧠 Mask Annotation
-
- Each mask is a grayscale image (PNG format) in which each pixel value identifies the editing model applied to that region. The mapping is as follows:
-
- | Mask Value | Editing Model                                     |
- |------------|---------------------------------------------------|
- | 0          | background                                        |
- | 1          | stabilityai/stable-diffusion-2-inpainting         |
- | 2          | kolors                                            |
- | 3          | stabilityai/stable-diffusion-3.5-medium           |
- | 4          | flux                                              |
- | 5          | diffusers/stable-diffusion-xl-1.0-inpainting-0.1  |
- | 6          | glide                                             |
- | 7          | Tencent-Hunyuan/HunyuanDiT-Diffusers              |
- | 8          | kandinsky-community/kandinsky-2-2-decoder-inpaint |
-
- ## 📌 Notes
-
- - Each image may be edited over **multiple turns**, so its mask can contain several distinct **label values** in the range 0 to 8.
-
- ## 📄 License
-
- Apache-2.0
-
 
+ ---
+ license: apache-2.0
+ ---
+ # 🖼️ DiffSeg30k -- A multi-turn diffusion-editing dataset for localized AIGC detection
+
+ A dataset for **segmenting diffusion-based edits**, ideal for training and evaluating models that localize edited regions and identify the underlying diffusion model.
+
+ ## 📁 Dataset Usage
+ - `xxxxxxxx.image.png`: Edited images. Each image may have undergone 1, 2, or 3 editing operations.
+ - `xxxxxxxx.mask.png`: The corresponding mask indicating edited regions, where pixel values encode the diffusion model used.
+
+ Load images and masks as follows:
+
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("Chaos2629/Diffseg30k", split="train")
+ image, mask = dataset[0]['image'], dataset[0]['mask']
+ ```
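+
+ With the standard `datasets` image decoding, `image` and `mask` come back as PIL images. A minimal sketch (assuming `image` and `mask` from the snippet above) for checking a mask's shape and which label values it contains:
+
+ ```python
+ import numpy as np
+
+ # Convert the grayscale PIL mask to an array of raw label values.
+ mask_arr = np.array(mask)
+ print(mask_arr.shape, mask_arr.dtype)  # e.g. (H, W) uint8
+ print(np.unique(mask_arr))             # label values present, e.g. [0 1 4]
+ ```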
+
+ ## 🧠 Mask Annotation
+
+ Each mask is a grayscale image (PNG format) in which each pixel value identifies the editing model applied to that region. The mapping is as follows:
+
+ | Mask Value | Editing Model                                     |
+ |------------|---------------------------------------------------|
+ | 0          | background                                        |
+ | 1          | stabilityai/stable-diffusion-2-inpainting         |
+ | 2          | kolors                                            |
+ | 3          | stabilityai/stable-diffusion-3.5-medium           |
+ | 4          | flux                                              |
+ | 5          | diffusers/stable-diffusion-xl-1.0-inpainting-0.1  |
+ | 6          | glide                                             |
+ | 7          | Tencent-Hunyuan/HunyuanDiT-Diffusers              |
+ | 8          | kandinsky-community/kandinsky-2-2-decoder-inpaint |
+
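+ Downstream code usually needs the table above in machine-readable form. A minimal sketch (the `LABEL_TO_MODEL` name is ours, not part of the dataset) that splits a mask into one binary mask per editing model:
+
+ ```python
+ import numpy as np
+
+ # Value-to-model mapping, transcribed from the table above.
+ LABEL_TO_MODEL = {
+     0: "background",
+     1: "stabilityai/stable-diffusion-2-inpainting",
+     2: "kolors",
+     3: "stabilityai/stable-diffusion-3.5-medium",
+     4: "flux",
+     5: "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
+     6: "glide",
+     7: "Tencent-Hunyuan/HunyuanDiT-Diffusers",
+     8: "kandinsky-community/kandinsky-2-2-decoder-inpaint",
+ }
+
+ mask_arr = np.array(mask)
+ # One boolean mask per editing model that appears in this image (0 = background).
+ binary_masks = {
+     LABEL_TO_MODEL[int(v)]: mask_arr == v
+     for v in np.unique(mask_arr) if v != 0
+ }
+ ```
+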
+ ## 📌 Notes
+
+ - Each image may be edited over **multiple turns**, so its mask can contain several distinct **label values** in the range 0 to 8; a quick way to visualize them is sketched below.
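+
+ A quick sanity-check sketch (again assuming `image` and `mask` from the loading snippet) that paints every edited pixel red and saves the overlay:
+
+ ```python
+ import numpy as np
+ from PIL import Image
+
+ mask_arr = np.array(mask)
+ overlay = np.array(image.convert("RGB"))
+ overlay[mask_arr > 0] = (255, 0, 0)  # any non-zero label marks an edited region
+ Image.fromarray(overlay).save("edited_regions.png")
+ ```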
+
+ ## 📄 License
+
+ Apache-2.0
+