# BALANCED_FAKE_JOB_POSTINGS_VA Dataset

## Overview
This dataset is a manually translated and balanced Valencian version of the original Fake Job Postings dataset from Kaggle:
Real or Fake? Fake Job Posting Prediction.
It contains 1,730 job postings, equally divided between fraudulent (fake) and non-fraudulent (real) listings.
All text fields (e.g., job title, company profile, description, requirements) have been manually translated into Valencian, preserving the semantic meaning and structure of the original English dataset.
## Dataset Summary
| Feature | Description |
|---|---|
| Total Rows | 1,730 |
| Fraudulent (1) | 865 |
| Non-Fraudulent (0) | 865 |
| Language | Valencian (Catalan variety) |
| Columns | 18 |
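The 50/50 class balance above can be verified programmatically. A minimal sketch, using a toy stand-in DataFrame with the same `fraudulent` target column (the real CSV is not loaded here):

```python
import pandas as pd

# Toy stand-in mirroring the dataset's balance: 865 fake, 865 real postings.
df = pd.DataFrame({"fraudulent": [1] * 865 + [0] * 865})

counts = df["fraudulent"].value_counts()
print(len(df), counts[0], counts[1])  # prints: 1730 865 865
```

The same `value_counts()` check applies unchanged once the actual file is loaded with `pandas.read_csv`.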
## Dataset Details

### Source Dataset
- Original: Fake Job Posting Prediction (Kaggle)
- License: CC0: Public Domain (as per original Kaggle dataset)
- Modifications:
  - All textual fields manually translated into Valencian.
  - Dataset balanced to 50% fraudulent and 50% non-fraudulent samples.
  - All structural and semantic information preserved from the original dataset.
## Columns Description

| Column | Description |
|---|---|
| title | Job title of the posting |
| department | Department or division for the role |
| company_profile | Overview or background of the company |
| description | Full job description |
| requirements | Skills, qualifications, and experience required |
| benefits | Perks and benefits offered |
| employment_type | Type of employment (Full-time, Part-time, Contract, etc.) |
| required_experience | Level of experience required |
| required_education | Educational qualifications required |
| industry | Industry sector |
| function | Job function (e.g., Sales, Engineering, etc.) |
| job_id | Unique identifier for each job posting |
| location | Geographical location of the job |
| salary_range | Salary range offered (if available) |
| telecommuting | Boolean flag (1 if telecommuting is allowed, else 0) |
| has_company_logo | Boolean flag (1 if company logo is present, else 0) |
| has_questions | Boolean flag (1 if the job posting asks screening questions, else 0) |
| fraudulent | Target variable (1 = Fake job posting, 0 = Real job posting) |
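The 18 columns above can be captured as a simple column-to-type mapping, useful for validating a loaded file. A sketch under assumptions: the Python types below are inferred from the descriptions, not read from the actual CSV.

```python
# Assumed dtypes for the 18 columns, inferred from the column descriptions.
SCHEMA = {
    "job_id": int,                # unique identifier
    "title": str,
    "department": str,
    "company_profile": str,
    "description": str,
    "requirements": str,
    "benefits": str,
    "employment_type": str,
    "required_experience": str,
    "required_education": str,
    "industry": str,
    "function": str,
    "location": str,
    "salary_range": str,          # may be empty when not available
    "telecommuting": int,         # 0/1 flag
    "has_company_logo": int,      # 0/1 flag
    "has_questions": int,         # 0/1 flag
    "fraudulent": int,            # 0/1 target variable
}

# Sanity check against the column count stated in the summary table.
print(len(SCHEMA))  # prints: 18
```

A loaded DataFrame can then be checked with `set(df.columns) == set(SCHEMA)` before training.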
## License
This dataset is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
## Funding

This work is funded by the Ministerio para la Transformación Digital y de la Función Pública, co-financed by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.