davanstrien (HF Staff) committed
Commit efecc17 · verified · 1 Parent(s): efd9bcd

Update README.md

Files changed (1)
  1. README.md +105 -210
README.md CHANGED
@@ -1,250 +1,145 @@
1
  ---
2
- annotations_creators:
3
- - expert-generated
4
- language_creators:
5
- - expert-generated
6
- license:
7
- - cc0-1.0
8
- multilinguality:
9
- - other-iconclass-metadata
10
- size_categories:
11
- - 10K<n<100K
12
- source_datasets: []
13
- task_categories:
14
- - image-classification
15
- - image-to-text
16
- - feature-extraction
17
- task_ids:
18
- - multi-class-image-classification
19
- - multi-label-image-classification
20
- - image-captioning
21
- pretty_name: 'Brill Iconclass AI Test Set'
22
  tags:
23
- - lam
24
- - art
25
- dataset_info:
26
- features:
27
- - name: image
28
- dtype: image
29
- - name: label
30
- list: string
31
- splits:
32
- - name: train
33
- num_bytes: 3281967920.848
34
- num_examples: 87744
35
- download_size: 3313602175
36
- dataset_size: 3281967920.848
37
- configs:
38
- - config_name: default
39
- data_files:
40
- - split: train
41
- path: data/train-*
42
  ---
43
 
44
- # Dataset Card for Brill Iconclass AI Test Set
45
 
46
- ## Table of Contents
47
- - [Table of Contents](#table-of-contents)
48
- - [Dataset Description](#dataset-description)
49
- - [Dataset Summary](#dataset-summary)
50
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
51
- - [Languages](#languages)
52
- - [Dataset Structure](#dataset-structure)
53
- - [Data Instances](#data-instances)
54
- - [Data Fields](#data-fields)
55
- - [Data Splits](#data-splits)
56
- - [Dataset Creation](#dataset-creation)
57
- - [Curation Rationale](#curation-rationale)
58
- - [Source Data](#source-data)
59
- - [Annotations](#annotations)
60
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
61
- - [Considerations for Using the Data](#considerations-for-using-the-data)
62
- - [Social Impact of Dataset](#social-impact-of-dataset)
63
- - [Discussion of Biases](#discussion-of-biases)
64
- - [Other Known Limitations](#other-known-limitations)
65
- - [Additional Information](#additional-information)
66
- - [Dataset Curators](#dataset-curators)
67
- - [Licensing Information](#licensing-information)
68
- - [Citation Information](#citation-information)
69
- - [Contributions](#contributions)
70
 
71
- ## Dataset Description
72
 
73
- - **Homepage:** [https://iconclass.org/testset/](https://iconclass.org/testset/)
74
- - **Repository:** [https://iconclass.org/testset/](https://iconclass.org/testset/)
75
- - **Paper:** [https://iconclass.org/testset/ICONCLASS_and_AI.pdf](https://iconclass.org/testset/ICONCLASS_and_AI.pdf)
76
- - **Leaderboard:**
77
- - **Point of Contact:** [info@iconclass.org](mailto:info@iconclass.org)
78
 
79
- ### Dataset Summary
80
 
81
- > A test dataset and challenge to apply machine learning to collections described with the Iconclass classification system.
82
 
83
- This dataset contains `87744` images with [Iconclass](https://iconclass.org/) metadata assigned to them. The [Iconclass](https://iconclass.org/) metadata system is intended to provide ['the comprehensive classification system for the content of images'](https://iconclass.org/).
 
84
 
85
- > Iconclass was developed in the Netherlands as a standard classification for recording collections, with the idea of assembling huge databases that will allow the retrieval of images featuring particular details, subjects or other common factors. It was developed in the 1970s and was loosely based on the Dewey Decimal System because it was meant to be used in art library card catalogs. [source](https://en.wikipedia.org/wiki/Iconclass)
86
 
87
- The [Iconclass](https://iconclass.org)
88
 
89
- > view of the world is subdivided in 10 main categories...An Iconclass concept consists of an alphanumeric class number (“notation”) and a corresponding content definition (“textual correlate”). An object can be tagged with as many concepts as the user sees fit. [source](https://iconclass.org/)
 
 
90
 
91
- These ten divisions are as follows:
 
92
 
93
- - 0 Abstract, Non-representational Art
94
- - 1 Religion and Magic
95
- - 2 Nature
96
- - 3 Human being, Man in general
97
- - 4 Society, Civilization, Culture
98
- - 5 Abstract Ideas and Concepts
99
- - 6 History
100
- - 7 Bible
101
- - 8 Literature
102
- - 9 Classical Mythology and Ancient History
103
 
104
- Within each of these divisions, further subdivisions are possible (9 or 10 per division). For example, under `4 Society, Civilization, Culture`, one can find:
105
 
106
- - 41 · material aspects of daily life
107
- - 42 · family, descendance
108
- - 43 · recreation, amusement
109
- - 44 · state; law; political life
110
- - ...
111
 
112
- See [https://iconclass.org/4](https://iconclass.org/4) for the full list.
113
 
 
114
 
115
- To illustrate, we can look at an example Iconclass classification.
 
 
116
 
117
- `41A12` represents `castle`. This classification is built up from the 'base' division `4` through the following hierarchy:
 
 
118
 
119
- - 4 · Society, Civilization, Culture
120
- - 41 · material aspects of daily life
121
- - 41A · housing
122
- - 41A1 · civic architecture; edifices; dwellings
123
 
124
- [source](https://iconclass.org/41A12)
125
 
126
- The construction of Iconclass labels from hierarchical parts makes the dataset particularly interesting (and challenging) to tackle via machine learning. Whilst one could treat this dataset as a (multi-label) image classification problem, that is only one approach. For example, for the label `castle` above, giving the model the 'freedom' to predict only a partial label could result in the prediction `41A`, i.e. housing. Since a castle is a very particular form of housing, this prediction is not 'wrong' so much as less precise than what a human cataloguer might provide.
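-
- To make this concrete, here is a minimal sketch of how a notation can be expanded into its coarser ancestors (simplified: real notations may carry qualifiers such as `(+1)` or `:` compounds that would need extra handling):
-
- ```python
- def iconclass_ancestors(notation: str) -> list[str]:
-     """Expand a simple Iconclass notation into its chain of prefixes."""
-     return [notation[: i + 1] for i in range(len(notation))]
-
- print(iconclass_ancestors("41A12"))
- # ['4', '41', '41A', '41A1', '41A12'] -- each prefix is a valid, coarser label
- ```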
127
-
128
- ### Supported Tasks and Leaderboards
129
-
130
- As discussed above, this dataset could be tackled in various ways:
131
-
132
- - as an image classification task
133
- - as a multi-label classification task (see the sketch below)
134
- - as an image to text task
135
- - as a task whereby a model predicts partial sequences of the label.
136
-
137
- This list is not exhaustive.
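-
- To illustrate the multi-label framing flagged above, each image's label set can be encoded as a multi-hot vector (a sketch assuming scikit-learn is available; full codes are treated as atomic classes here, ignoring their internal hierarchy):
-
- ```python
- from sklearn.preprocessing import MultiLabelBinarizer
-
- # Label sets for three hypothetical images, mimicking the dataset's `label` field
- labels = [
-     ["31A235", "61B(+54)"],
-     ["41A12"],
-     ["31A235", "41A12"],
- ]
-
- mlb = MultiLabelBinarizer()
- y = mlb.fit_transform(labels)  # shape: (n_images, n_distinct_codes)
- print(mlb.classes_)  # ['31A235' '41A12' '61B(+54)']
- ```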
138
-
139
- ### Languages
140
-
141
- This dataset doesn't contain natural language. The labels themselves can, however, be treated as a form of language, i.e. each label can be thought of as a sequence of tokens that constructs a 'sentence'.
142
-
143
-
144
- ## Dataset Structure
145
-
146
- The dataset contains a single configuration.
147
-
148
- ### Data Instances
149
-
150
- An example instance of the dataset is as follows:
151
-
152
- ``` python
153
- {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=390x500 at 0x7FC7FFBBD2D0>,
154
- 'label': ['31A235', '31A24(+1)', '61B(+54)', '61B:31A2212(+1)', '61B:31D14']}
155
  ```
156
 
157
- ### Data Fields
158
-
159
- The dataset is made up of
160
-
161
- - an image
162
- - a sequence of Iconclass labels
163
-
164
- ### Data Splits
165
-
166
- The dataset doesn't provide any predefined train, validation or test splits.
167
-
168
- ## Dataset Creation
169
-
170
- > To facilitate the creation of better models in the cultural heritage domain, and promote the research on tools and techniques using Iconclass, we are making this dataset freely available. All that we ask is that any use is acknowledged and results be shared so that we can all benefit. The content is sampled from the Arkyves database. [source](https://labs.brill.com/ictestset/)
171
-
172
- [More Information Needed]
173
-
174
- ### Curation Rationale
175
-
176
- [More Information Needed]
177
-
178
- ### Source Data
179
-
180
- #### Initial Data Collection and Normalization
181
-
182
- The images are sampled from the [Arkyves database](https://brill.com/view/db/arko?language=en). This collection includes images
183
-
184
- > from libraries and museums in many countries, including the Rijksmuseum in Amsterdam, the Netherlands Institute for Art History (RKD), the Herzog August Bibliothek in Wolfenbüttel, and the university libraries of Milan, Utrecht and Glasgow. [source](https://brill.com/view/db/arko?language=en)
185
 
186
- [More Information Needed]
187
 
188
- #### Who are the source language producers?
189
 
190
- [More Information Needed]
 
 
191
 
192
- ### Annotations
193
 
194
- #### Annotation process
195
-
196
- The annotations are derived from the source dataset (see above). Most annotations were likely created by staff experienced with the Iconclass metadata schema.
197
-
198
- #### Who are the annotators?
199
-
200
- [More Information Needed]
201
-
202
- ### Personal and Sensitive Information
203
-
204
- [More Information Needed]
205
-
206
- ## Considerations for Using the Data
207
-
208
- ### Social Impact of Dataset
209
-
210
- [More Information Needed]
211
-
212
- ### Discussion of Biases
213
-
214
- Iconclass as a metadata standard absorbs biases from the time and place of its creation (1940s Netherlands). In particular, `32B human races, peoples; nationalities` has been subject to criticism. `32B36 'primitive', 'pre-modern' peoples` is one example of a category we may not wish to adopt. In general, parts of the subdivisions of `32B` reflect a belief that race is a scientific category rather than a social construct.
215
-
216
- The Iconclass community is actively exploring these limitations; for example, see [Revising Iconclass section 32B human races, peoples; nationalities](https://web.archive.org/web/20210425131753/https://iconclass.org/Updating32B.pdf).
217
-
218
-
219
- One should be aware of these limitations of Iconclass, particularly before deploying a model trained on this data in any production setting.
220
-
221
- [More Information Needed]
222
-
223
- ### Other Known Limitations
224
-
225
- [More Information Needed]
226
-
227
- ## Additional Information
228
 
229
- ### Dataset Curators
230
 
231
- Etienne Posthumus
 
 
232
 
233
- ### Licensing Information
234
- [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)
235
 
236
- ### Citation Information
237
 
238
- ```
239
- @MISC{iconclass,
240
- title = {Brill Iconclass AI Test Set},
241
- author={Etienne Posthumus},
242
- year={2020}
243
  }
244
-
245
  ```
246
 
 
247
 
248
- ### Contributions
249
-
250
- Thanks to [@davanstrien](https://github.com/davanstrien) for adding this dataset.
1
  ---
2
+ base_model: Qwen/Qwen2.5-VL-3B-Instruct
3
+ datasets:
4
+ - davanstrien/iconclass-vlm-sft
5
+ - biglam/brill_iconclass
6
+ library_name: transformers
7
+ model_name: iconclass-vlm
8
  tags:
9
+ - generated_from_trainer
10
+ - hf_jobs
11
+ - sft
12
+ - trl
13
+ - vision-language
14
+ - iconclass
15
+ - cultural-heritage
16
+ - art-classification
17
+ license: apache-2.0
18
  ---
19
 
20
+ # Model Card for iconclass-vlm
21
 
22
+ This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct) on the [davanstrien/iconclass-vlm-sft](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft) dataset.
23
 
24
+ ## Model Description
25
 
26
+ This vision-language model has been fine-tuned to generate [Iconclass](https://iconclass.org/) classification codes from images. Iconclass is a comprehensive classification system for describing the content of images, particularly used in cultural heritage and art history contexts.
27
 
28
+ The model was trained using Supervised Fine-Tuning (SFT) with [TRL](https://github.com/huggingface/trl) on a reformatted version of the Brill Iconclass AI Test Set, which contains 87,744 images with expert-assigned Iconclass labels.
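+
+ For reference, such a run could be launched with TRL's `SFTTrainer` roughly as follows (a sketch: the hyperparameters shown are illustrative assumptions, not the exact training configuration):
+
+ ```python
+ from datasets import load_dataset
+ from trl import SFTConfig, SFTTrainer
+
+ # SFT dataset already stored in chat "messages" format (see Training Dataset below)
+ dataset = load_dataset("davanstrien/iconclass-vlm-sft", split="train")
+
+ trainer = SFTTrainer(
+     model="Qwen/Qwen2.5-VL-3B-Instruct",
+     train_dataset=dataset,
+     args=SFTConfig(output_dir="iconclass-vlm", per_device_train_batch_size=4),
+ )
+ trainer.train()
+ ```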
29
 
30
+ ## Intended Use
31
 
32
+ - **Primary use case**: Automatic classification of art and cultural heritage images using Iconclass notation
33
+ - **Users**: Digital humanities researchers, museum professionals, art historians, and developers working with cultural heritage collections
34
 
35
+ ## Quick Start
36
 
37
+ ### Simple Pipeline Approach
38
 
39
+ ```python
40
+ from transformers import pipeline
41
+ from PIL import Image
42
 
43
+ # Load pipeline
44
+ pipe = pipeline("image-text-to-text", model="davanstrien/iconclass-vlm")
45
 
46
+ # Load your image
47
+ image = Image.open("your_artwork.jpg")
48
 
49
+ # Prepare messages
50
+ messages = [
51
+     {
52
+         "role": "user",
53
+         "content": [
54
+             {"type": "image", "image": image},
55
+             {"type": "text", "text": "Generate Iconclass labels for this image"}
56
+         ]
57
+     }
58
+ ]
59
 
60
+ # Generate with beam search for better results
61
+ output = pipe(text=messages, max_new_tokens=800, num_beams=4)
62
+ print(output[0]["generated_text"])
+ ```
 
 
63
 
 
64
 
65
+ ### Alternative Approach with AutoModel
66
 
67
+ ```python
68
+ from transformers import AutoProcessor, AutoModelForVision2Seq
69
+ from PIL import Image
70
 
71
+ model_name = "davanstrien/iconclass-vlm"
72
+ processor = AutoProcessor.from_pretrained(model_name)
73
+ model = AutoModelForVision2Seq.from_pretrained(model_name)
74
 
75
+ # Load your image
76
+ image = Image.open("your_artwork.jpg")
 
 
77
 
78
+ # Prepare inputs
79
+ messages = [
80
+     {
81
+         "role": "user",
82
+         "content": [
83
+             {"type": "image"},
84
+             {"type": "text", "text": "Generate Iconclass labels for this image"}
85
+         ]
86
+     }
87
+ ]
88
 
89
+ # Process and generate
90
+ # Build the prompt with the chat template, then tokenize text and image together
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = processor(text=[text], images=[image], return_tensors="pt")
91
+ outputs = model.generate(**inputs, max_new_tokens=800, num_beams=4)
92
+ # Strip the prompt tokens so only the generated labels remain
+ generated = outputs[0][inputs["input_ids"].shape[1]:]
+ response = processor.decode(generated, skip_special_tokens=True)
93
+ print(response)
94
  ```
95
 
96
+ ### Training Dataset
97
+ The model was trained on a reformatted version of the [Brill Iconclass AI Test Set](https://huggingface.co/datasets/biglam/brill_iconclass).
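+
+ Each record in the source dataset pairs an image with a list of Iconclass codes and can be inspected directly:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("biglam/brill_iconclass", split="train")
+ example = ds[0]
+ print(example["label"])  # e.g. ['31A235', '31A24(+1)', '61B(+54)', ...]
+ example["image"]  # a PIL image
+ ```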
98
 
 
99
 
 
100
 
101
+ The dataset was reformatted into a messages format suitable for SFT training.
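+
+ Schematically, each SFT example looks something like the following (a sketch: the exact prompt wording and target formatting live in the SFT dataset, and joining the codes with spaces is an assumption):
+
+ ```python
+ from datasets import load_dataset
+
+ record = load_dataset("biglam/brill_iconclass", split="train")[0]
+
+ # Image plus user prompt as input; the Iconclass codes as the assistant target
+ example = {
+     "images": [record["image"]],
+     "messages": [
+         {"role": "user", "content": [
+             {"type": "image"},
+             {"type": "text", "text": "Generate Iconclass labels for this image"},
+         ]},
+         {"role": "assistant", "content": [
+             {"type": "text", "text": " ".join(record["label"])},
+         ]},
+     ],
+ }
+ ```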
102
+ ### Training Procedure
103
+ <img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>
104
 
105
+ This model was trained with SFT (Supervised Fine-Tuning).
106
 
107
+ ### Framework Versions
108
+ ```
109
+ TRL: 0.22.1
110
+ Transformers: 4.55.2
111
+ PyTorch: 2.8.0
112
+ Datasets: 4.0.0
113
+ Tokenizers: 0.21.4
114
+ ```
115
 
116
+ ## Limitations and Biases
117
 
118
+ - The Iconclass classification system reflects biases from its creation period (1940s Netherlands).
119
+ - Certain categories, particularly those related to human classification, may contain outdated or problematic terminology.
120
+ - Model performance may vary on images outside the Western art tradition due to dataset composition.
121
 
122
+ ## Citations
 
123
 
124
+ ### Model and Training
125
 
126
+ ```bibtex
127
+ @misc{vonwerra2022trl,
128
+ title = {{TRL: Transformer Reinforcement Learning}},
129
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
130
+ year = 2020,
131
+ journal = {GitHub repository},
132
+ publisher = {GitHub},
133
+ howpublished = {\url{https://github.com/huggingface/trl}}
134
  }
 
135
  ```
136
 
137
+ ### Dataset
138
 
139
+ ```bibtex
140
+ @misc{iconclass,
141
+ title = {Brill Iconclass AI Test Set},
142
+ author = {Etienne Posthumus},
143
+ year = {2020}
144
+ }
145
+ ```