GraphArch-Regression

A unified regression dataset collated from multiple graph/architecture search sources (FBNet, Hiaml, Inception, NB101, NB201, NDS, OfaMB, OfaPN, OfaRN, SNAS, Twopath) for training and evaluating models that map ONNX-readable graph strings to a target metric.

Schema

  • identifier (string): Source key for the example, e.g. FBNet_0, SNAS_42.
  • space (string): Logical dataset source (FBNet, Hiaml, Inception, NB101, NB201, NDS, OfaMB, OfaPN, OfaRN, SNAS, Twopath).
  • uid (string): Original UID, if provided by the source.
  • arch_str (string): Architecture identity; first non-empty among arch_str, hash, uid.
  • input (string): ONNX-readable graph string (onnx_readable).
  • target_metric (string): Always val_accuracy.
  • val_accuracy (number | null): Primary regression target (validation accuracy); missing for some rows (see the sketch after this list).
  • flops (number | null): FLOPs for the architecture (if available).
  • params (number | null): Parameter count (if available).
  • metadata (string): Python-dict-like string including only keys that start with zcp_ or lat_ (e.g., zero-cost proxies and latency measurements). Not populated for SNAS. These can be used for multi-objective regression.
  • metainformation (string): Only for SNAS; Python-dict-like string of selected fields {arch_str, macro, train_time_sec, steps_ran, precision, batch_size}.
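
As a minimal sketch of working with this schema (the helper name row_to_pair is ours, not part of the dataset), here is one way to turn a row into a (text, target) pair for a regressor; the space-prefixed prompt format matches the evaluation code further below:

def row_to_pair(row):
    """Minimal sketch: map one GraphArch-Regression row to (text, target).
    Field names follow the schema above; rows without a target are skipped."""
    target = row["val_accuracy"]
    if target is None:  # val_accuracy can be null for some sources
        return None
    # Prefix the ONNX-readable graph with its search space so one model
    # can be trained across spaces (same format as the evaluation code below).
    text = f"{row['space']}\n\n{row['input']}"
    return text, float(target)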

Dataset Size

With this dataset, we provide ONNX text for universal-NAS regression training over 611931 architectures:

  • Amoeba: 4983
  • DARTS: 5000
  • DARTS_fix-w-d: 5000
  • DARTS_lr-wd: 5000
  • ENAS: 4999
  • ENAS_fix-w-d: 5000
  • FBNet: 5000
  • Hiaml: 4629
  • Inception: 580
  • NASBench101 (NB101): 423624
  • NASBench201 (NB201): 15625
  • NASNet: 4846
  • OfaMB: 7491
  • OfaPN: 8206
  • OfaRN: 10000
  • PNAS: 4999
  • PNAS_fix-w-d: 4559
  • SNAS: 85500
  • TwoPath: 6890

Tip: turn metadata or metainformation back into a dict with:

from ast import literal_eval
meta = literal_eval(row["metadata"])
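
For multi-objective regression, a small sketch (the helper name proxy_features is ours) that pulls the zcp_* and lat_* entries out of the metadata string into a flat feature dict:

from ast import literal_eval

def proxy_features(row):
    """Sketch: extract zcp_*/lat_* entries from the metadata string.
    Returns an empty dict for rows without metadata (e.g. SNAS rows)."""
    raw = row.get("metadata") or ""
    if not raw:
        return {}
    meta = literal_eval(raw)
    return {k: v for k, v in meta.items() if k.startswith(("zcp_", "lat_"))}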

How to load with 🤗 Datasets

from datasets import load_dataset
ds = load_dataset("akhauriyash/GraphArch-Regression")
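
As a quick sanity check (assuming the single train split used in the evaluation code below; "FBNet" is one of the space values listed in the schema), you can count architectures per space and inspect one source:

from collections import Counter
from datasets import load_dataset

ds = load_dataset("akhauriyash/GraphArch-Regression", split="train")
print(Counter(ds["space"]))  # architectures per search space

fbnet = ds.filter(lambda r: r["space"] == "FBNet")
print(fbnet[0]["identifier"], fbnet[0]["val_accuracy"], fbnet[0]["flops"])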

Testing Graph Architecture Regression with a basic Gemma RLM model

Use the code below as a reference for evaluating a basic RegressLM model (better, more models to come! :) ).

Note that the best practice is to fine-tune this base model on more NAS ONNX graph data and then few-shot transfer it to the target search space (say, NASNet). If you want to fine-tune on 16 examples from, say, ENAS, the most effective strategy we found was to construct a small NAS dataset from e.g. DARTS, NASNet, Amoeba, and ENAS with roughly (1024, 1024, 1024, 16) samples respectively, and up-sample (repeat) the 16 ENAS samples 8 times. Randomly shuffle the mixture and fine-tune the RLM with a 1e-4 learning rate (cosine decay) to avoid catastrophic forgetting. The code below is only illustrative, to demonstrate non-trivial NAS performance: the model's training corpus was only 1% NAS data; the rest was code.
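
Purely as a sketch of that mixture strategy (this is not part of the evaluation code below, and the space labels and sample counts are assumed to match the names listed above), one way to build the few-shot fine-tuning corpus with 🤗 Datasets:

from datasets import load_dataset, concatenate_datasets

ds = load_dataset("akhauriyash/GraphArch-Regression", split="train")

def take(space, n, seed=0):
    # Grab up to n labelled examples from one search space.
    sub = ds.filter(lambda r: r["space"] == space and r["val_accuracy"] is not None)
    return sub.shuffle(seed=seed).select(range(min(n, len(sub))))

sources = [take("DARTS", 1024), take("NASNet", 1024), take("Amoeba", 1024)]
enas_16 = take("ENAS", 16)          # the 16 few-shot target examples
sources += [enas_16] * 8            # up-sample (repeat) them 8x

mixture = concatenate_datasets(sources).shuffle(seed=0)
# Fine-tune the RLM on `mixture` with a 1e-4 learning rate and cosine decay.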

import torch
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from scipy.stats import spearmanr
from tqdm import tqdm

REPO_ID = "akhauriyash/RLM-GemmaS-Code-v0"
DATASET = "akhauriyash/GraphArch-Regression"
dataset = load_dataset(DATASET, split="train")
tok = AutoTokenizer.from_pretrained(REPO_ID, trust_remote_code=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForSeq2SeqLM.from_pretrained(REPO_ID, trust_remote_code=True).to(device).eval()
MAX_ITEMS, BATCH_SIZE, spaces, results = 512, 4, ["NASBench101", "ENAS", "NASNet"], {}
# Total number of output tokens that encode the regression target(s).
n_out_tokens = getattr(model.config, "num_tokens_per_obj", 8) * getattr(model.config, "max_num_objs", 1)

for SPACE in spaces:
    inputs, targets = [], []
    for row in tqdm(dataset, desc=f"Processing {SPACE} till {MAX_ITEMS} items"):
        if row.get("space") == SPACE and "input" in row and "val_accuracy" in row:
            try:
                targets.append(float(row["val_accuracy"]))
                inputs.append(f"{SPACE}\n\n{row['input']}")
            except (TypeError, ValueError):
                continue  # skip rows without a numeric target
            if len(inputs) >= MAX_ITEMS: break
    preds = []
    for i in tqdm(range(0, len(inputs), BATCH_SIZE)):
        enc = tok(inputs[i:i+BATCH_SIZE], return_tensors="pt", truncation=True, padding=True, max_length=4096).to(device)
        batch_preds = []
        for _ in range(8):  # draw 8 sampled predictions per input; take the per-example median below
            out = model.generate(**enc, max_new_tokens=n_out_tokens, min_new_tokens=n_out_tokens, do_sample=True, top_p=0.95, temperature=1.0)
            decoded = [tok.token_ids_to_floats(seq.tolist()) for seq in out]
            decoded = [d[0] if isinstance(d, list) and d else float("nan") for d in decoded]
            batch_preds.append(decoded)
        preds.extend(torch.tensor(batch_preds).median(dim=0).values.tolist())
    spear, _ = spearmanr(np.array(targets), np.array(preds))
    results[SPACE] = spear; print(f"Spearman ρ for {SPACE}: {spear:.3f}")

print("Spearman ρ | NASBench101 | ENAS | NASNet")
print(f"{REPO_ID} | " + " | ".join(f"{results[s]:.3f}" for s in spaces))

We got the following results when testing on a random subset of the GraphArch-Regression dataset.

Model ID                                 | NASBench101 | ENAS  | NASNet
akhauriyash/RegressLM-gemma-s-RLM-table3 | 0.384       | 0.211 | 0.209

Credits

This dataset was collated from several graph/NAS sources, along with our own profiling where applicable. We export and generate the ONNX descriptions of all architectures in our dataset. Please credit and cite the original datasets accordingly.

Inception, Hiaml, Ofa-MB/PN/RN, Twopath: Mills, K. G., Han, F. X., Zhang, J., Chudak, F., Mamaghani, A. S., Salameh, M., Lu, W., Jui, S., & Niu, D. (2023). Gennape: Towards generalized neural architecture performance estimators. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9190–9199.

NDS: Radosavovic, Ilija, et al. "On network design spaces for visual recognition." Proceedings of the IEEE/CVF international conference on computer vision. 2019.

NB101: Ying, Chris, et al. "Nas-bench-101: Towards reproducible neural architecture search." International conference on machine learning. PMLR, 2019.

NB201: Dong, Xuanyi, and Yi Yang. "Nas-bench-201: Extending the scope of reproducible neural architecture search." International Conference on Learning Representations (ICLR), 2020.

FBNet: Wu, Bichen, et al. "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2019.

Further, the multi-objective latency measurements and zero-cost proxies were sourced from:

Krishnakumar, Arjun, et al. "Nas-bench-suite-zero: Accelerating research on zero cost proxies." Advances in Neural Information Processing Systems 35 (2022): 28037-28051.

Akhauri, Yash, and Mohamed S. Abdelfattah. "Encodings for prediction-based neural architecture search." arXiv preprint arXiv:2403.02484 (2024).

Akhauri, Yash, and Mohamed Abdelfattah. "On latency predictors for neural architecture search." Proceedings of Machine Learning and Systems 6 (2024): 512-523.

Lee, Hayeon, et al. "HELP: Hardware-adaptive efficient latency prediction for NAS via meta-learning." Advances in Neural Information Processing Systems 34 (2021).

Citations

If you found this dataset useful for your research, please cite the original sources above as well as:

@article{akhauri2025regressionlanguagemodelscode,
      title={Regression Language Models for Code}, 
      author={Yash Akhauri and Xingyou Song and Arissa Wongpanich and Bryan Lewandowski and Mohamed S. Abdelfattah},
      journal={arXiv preprint arXiv:2509.26476},
      year={2025}
}

@article{akhauri2025performance,
  title={Performance Prediction for Large Systems via Text-to-Text Regression},
  author={Akhauri, Yash and Lewandowski, Bryan and Lin, Cheng-Hsi and Reyes, Adrian N and Forbes, Grant C and Wongpanich, Arissa and Yang, Bangding and Abdelfattah, Mohamed S and Perel, Sagi and Song, Xingyou},
  journal={arXiv preprint arXiv:2506.21718},
  year={2025}
}