Dataset Viewer
The dataset viewer is not available for this split.
Cannot load the dataset split (in streaming mode) to extract the first rows.
Error code:   StreamingRowsError
Exception:    CastError
Message:      Couldn't cast
json: struct<_commit_hash: null, _name_or_path: string, architectures: list<item: string>, auto_map: struct<AutoConfig: string, AutoModel: string, AutoModelForCausalLM: string>, downsample_ratio: double, drop_low_norm_tokens: bool, drop_method: string, drop_rate: double, drop_threshold: double, dynamic_image_size: bool, force_image_size: int64, hidden_size: int64, keep_thumbnail_token: bool, llm_config: struct<_name_or_path: string, add_cross_attention: bool, architectures: list<item: string>, attention_dropout: double, attn_implementation: string, bad_words_ids: null, begin_suppress_tokens: null, bos_token_id: int64, chunk_size_feed_forward: int64, cross_attention_hidden_size: null, decoder_start_token_id: null, diversity_penalty: double, do_sample: bool, early_stopping: bool, encoder_no_repeat_ngram_size: int64, eos_token_id: int64, exponential_decay_length_penalty: null, finetuning_task: null, forced_bos_token_id: null, forced_eos_token_id: null, hidden_act: string, hidden_size: int64, id2label: struct<0: string, 1: string>, initializer_range: double, intermediate_size: int64, is_decoder: bool, is_encoder_decoder: bool, label2id: struct<LABEL_0: int64, LABEL_1: int64>, length_penalty: double, max_length: int64, max_position_embeddings: int64, max_window_layers: int64, min_length: int64, model_type: string, no_repeat_ngram_size: int64, num_attention_heads: int64, num_beam_groups: int64, num_beams: int64, num_hidden_layers: int64, num_key_value_heads: int64, num_return_seque
...
er_tokens: int64
      child 44, num_return_sequences: int64
      child 45, output_attentions: bool
      child 46, output_hidden_states: bool
      child 47, output_scores: bool
      child 48, pad_token_id: null
      child 49, patch_size: int64
      child 50, prefix: null
      child 51, problem_type: null
      child 52, pruned_heads: struct<>
      child 53, qk_normalization: bool
      child 54, qkv_bias: bool
      child 55, remove_invalid_values: bool
      child 56, repetition_penalty: double
      child 57, return_dict: bool
      child 58, return_dict_in_generate: bool
      child 59, sep_token_id: null
      child 60, suppress_tokens: null
      child 61, task_specific_params: null
      child 62, temperature: double
      child 63, tf_legacy_loss: bool
      child 64, tie_encoder_decoder: bool
      child 65, tie_word_embeddings: bool
      child 66, tokenizer_class: null
      child 67, top_k: int64
      child 68, top_p: double
      child 69, torch_dtype: string
      child 70, torchscript: bool
      child 71, transformers_version: string
      child 72, typical_p: double
      child 73, use_bfloat16: bool
      child 74, use_flash_attn: bool
      child 75, use_modality_embedding: bool
      child 76, use_register_token: bool
      child 77, vit_condition_start_layer: int64
      child 78, vit_condition_start_layers_list: list<item: int64>
          child 0, item: int64
      child 79, vit_text_conditioning: bool
__key__: string
__url__: string
jsonl: null
to
{'jsonl': Value('binary'), '__key__': Value('string'), '__url__': Value('string')}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/utils.py", line 99, in get_rows_or_raise
                  return get_rows(
                File "/src/libs/libcommon/src/libcommon/utils.py", line 272, in decorator
                  return func(*args, **kwargs)
                File "/src/services/worker/src/worker/utils.py", line 77, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2361, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1882, in __iter__
                  for key, pa_table in self._iter_arrow():
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1914, in _iter_arrow
                  pa_table = cast_table_to_features(pa_table, self.features)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2192, in cast_table_to_features
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              json: struct<...>   (same schema as quoted in the message above)
              to
              {'jsonl': Value('binary'), '__key__': Value('string'), '__url__': Value('string')}
              because column names don't match
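What the failing cast means: at least one sample in the first WebDataset shard carries a `.json` sidecar file (the struct above is a Hugging Face model config), so streaming yields a `json` struct column in addition to the `jsonl`, `__key__`, and `__url__` columns the split declares, and `cast_table_to_features` rejects the table because the column names differ. A minimal workaround sketch for reading the data anyway, assuming the repository is a WebDataset of `.tar` shards; the repo id and shard filename below are hypothetical placeholders, not the actual names:

    import json
    import tarfile

    from huggingface_hub import hf_hub_download

    # Hypothetical repo id and shard name -- substitute the real ones.
    shard_path = hf_hub_download(
        repo_id="user/dataset",
        filename="shard-000000.tar",
        repo_type="dataset",
    )

    # Read the shard directly, keeping only the .jsonl members and skipping
    # the stray .json config files that trigger the CastError when streaming.
    with tarfile.open(shard_path) as tar:
        for member in tar.getmembers():
            if not member.name.endswith(".jsonl"):
                continue
            payload = tar.extractfile(member).read()
            for line in payload.splitlines():
                record = json.loads(line)
                # ... process one record ...

Reading the tar members directly sidesteps the features cast entirely; the datasets streaming path will keep failing until the shards no longer mix schemas.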

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
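For the viewer itself, the direct fix is to repack the shards so `.json` config files no longer form WebDataset samples (remove them or rename them to a non-key extension). If the repository also needs an explicit viewer configuration, dataset cards accept a `configs` block in their YAML front matter; a minimal sketch, assuming the shards sit at the repository root (the glob is an assumption):

    configs:
    - config_name: default
      data_files:
      - split: train
        path: "*.tar"

Note this only selects which files feed the viewer; the mismatched `.json` members inside the shards still have to go for the cast to succeed.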
