US Disasters Mashup
54,575 disaster events pulled from four US government databases into one flat JSON file. Plane crashes, shipwrecks, tornadoes, earthquakes -- all geocoded and categorized.
What's In It
| Category | Records | Source | Date Range |
|---|---|---|---|
| Aviation Accidents | 32,410 | NTSB | 1974-2018 |
| Severe Storms | 14,770 | NOAA Storm Events | 1950-2025 |
| Earthquakes | 3,742 | USGS | Ongoing |
| Shipwrecks | 3,653 | NOAA AWOIS | Historical (1600s-1970s) |
Fields
Every record has core fields (category, latitude, longitude, name, date, subcategory). Additional fields depend on the category:
| Field | Coverage | Which Categories |
|---|---|---|
| `category` | 100% | All |
| `latitude` / `longitude` | 100% | All |
| `name` | 100% | All (event description or location) |
| `subcategory` | 100% | Tornado, Flash Flood, seismic, maritime, aviation, etc. |
| `date` | 94% | All except some historical shipwrecks |
| `aircraft_type` | 59% | Aviation only |
| `event_id` | 59% | Aviation only (NTSB event IDs) |
| `magnitude` | 20% | Storms (Fujita/EF scale) + earthquakes (Richter) |
| `fatalities` | 27% | Storms |
| `injuries` | 27% | Storms |
| `damage` | 26% | Storms (text format: "250K", "1.5M") |
| `state` | 27% | Storms |
| `vessel_type` | <1% | Shipwrecks (sparse) |
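Coverage figures like the ones above can be recomputed directly from the records. A minimal sketch (the `sample` list below is an illustrative stand-in; in practice you would pass the full list loaded from `disasters_mashup.json`):

```python
def field_coverage(records):
    """Return the percentage of records in which each field is present and non-null."""
    total = len(records)
    counts = {}
    for r in records:
        for key, value in r.items():
            if value is not None:
                counts[key] = counts.get(key, 0) + 1
    return {k: round(100 * c / total, 1) for k, c in sorted(counts.items())}

# Tiny illustrative sample, not real dataset rows
sample = [
    {"category": "storm", "state": "OKLAHOMA", "magnitude": "0"},
    {"category": "aviation_accident", "state": None, "event_id": "20121010X84549"},
]
print(field_coverage(sample))  # {'category': 100.0, 'event_id': 50.0, 'magnitude': 50.0, 'state': 50.0}
```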
Example Records
Storm:

```json
{
  "category": "storm",
  "latitude": 34.88,
  "longitude": -99.28,
  "name": "Tornado in OKLAHOMA, KIOWA",
  "date": "1950-04-28",
  "subcategory": "Tornado",
  "magnitude": "0",
  "fatalities": "1",
  "injuries": "1",
  "damage": "250K",
  "state": "OKLAHOMA"
}
```

Aviation:

```json
{
  "category": "aviation_accident",
  "latitude": 20.000833,
  "longitude": -155.6675,
  "name": "Aviation Accident - SCHLEICHER ASH25M",
  "date": "2012-01-01",
  "subcategory": "aviation",
  "aircraft_type": "SCHLEICHER ASH25M",
  "event_id": "20121010X84549"
}
```
Known Quirks
A few things worth knowing if you're working with this data:
- Aviation dates are year-only. All 32,410 aviation records show `YYYY-01-01`. The actual dates are embedded in the event IDs (e.g., `20121010X84549` = Oct 10, 2012), but the date field carries only the year.
- Earthquake dates are Unix timestamps, not ISO format. Convert with `datetime.fromtimestamp()`.
- ~5,983 duplicate aviation records from overlapping source files. Deduplicate on `event_id` if you need unique events.
- Coordinates extend beyond CONUS. About 3,200 records are in Hawaii, Alaska, territories, or international waters. This is expected for aviation and maritime data.
- `depth_km` is always null. The field exists in the schema but was never populated.
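The first two quirks can be worked around in a few lines. This sketch assumes the first 8 characters of an NTSB event ID are `YYYYMMDD` (consistent with the example above) and that earthquake timestamps are in seconds:

```python
from datetime import datetime, timezone

def aviation_date(event_id):
    """Recover the actual accident date from an NTSB event ID.
    Assumes the first 8 characters are YYYYMMDD (e.g. 20121010X84549 -> 2012-10-10)."""
    return datetime.strptime(event_id[:8], "%Y%m%d").date()

def quake_date(unix_ts):
    """Convert an earthquake Unix timestamp (assumed seconds, UTC) to an ISO date string."""
    return datetime.fromtimestamp(int(unix_ts), tz=timezone.utc).date().isoformat()

print(aviation_date("20121010X84549"))  # 2012-10-10
```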
Loading
```python
import json

with open("disasters_mashup.json") as f:
    disasters = json.load(f)

# Filter by category
storms = [d for d in disasters if d["category"] == "storm"]
aviation = [d for d in disasters if d["category"] == "aviation_accident"]

# Deduplicate aviation (optional)
seen = set()
unique_aviation = []
for d in aviation:
    if d.get("event_id") not in seen:
        seen.add(d.get("event_id"))
        unique_aviation.append(d)
```
Sources
All public domain, from US government agencies:
- NTSB Aviation Safety Data
- NOAA AWOIS Wrecks & Obstructions
- NOAA Storm Events Database
- USGS Earthquake Hazards
Where to Get It
- GitHub: lukeslp/us-disasters-mashup
- HuggingFace: lukeslp/us-disasters-mashup
- Kaggle: lucassteuber/us-disasters-mashup
- Demo Notebook: Jupyter on GitHub Gist
License
CC0 1.0 (Public Domain). All source data comes from US government agencies.
Author
Luke Steuber
- Website: lukesteuber.com
- Bluesky: @lukesteuber.com
Citation
```bibtex
@dataset{steuber2026disasters,
  title={US Disasters Mashup},
  author={Steuber, Luke},
  year={2026},
  publisher={GitHub/HuggingFace/Kaggle},
  url={https://github.com/lukeslp/us-disasters-mashup}
}
```
Structured Data (JSON-LD)
```json
{
  "@context": "https://schema.org",
  "@type": "Dataset",
  "name": "US Disasters Mashup",
  "description": "54,575 disaster events from four US government databases (NTSB aviation accidents, NOAA shipwrecks, NOAA severe storms, USGS earthquakes) unified into a single geocoded JSON file.",
  "url": "https://github.com/lukeslp/us-disasters-mashup",
  "sameAs": [
    "https://huggingface.co/datasets/lukeslp/us-disasters-mashup",
    "https://www.kaggle.com/datasets/lucassteuber/us-disasters-mashup"
  ],
  "license": "https://creativecommons.org/publicdomain/zero/1.0/",
  "creator": {
    "@type": "Person",
    "name": "Luke Steuber",
    "url": "https://lukesteuber.com"
  },
  "keywords": ["disasters", "aviation accidents", "shipwrecks", "storms", "earthquakes", "geospatial", "united states"],
  "temporalCoverage": "1600/2025",
  "spatialCoverage": {
    "@type": "Place",
    "name": "United States"
  },
  "distribution": [
    {
      "@type": "DataDownload",
      "encodingFormat": "application/json",
      "contentUrl": "https://github.com/lukeslp/us-disasters-mashup"
    }
  ]
}
```