
archit11/hyperswitch-code-corpus-track-a

Repository-specific code corpus extracted from hyperswitch and split by file for training/evaluation.

What is in this dataset

  • Source corpus: data/code_corpus_hyperswitch
  • Total files: 300
  • Train files: 270
  • Validation files: 30
  • Test files: 0
  • File type filter: .rs
  • Split mode: file (file-level holdout)

Each row has:

  • file_name: flattened source file name
  • text: full file contents

Training context

This dataset was used for extended pretraining of:

  • Model repo: https://huggingface.co/archit11/qwen2.5-coder-3b-hyperswitch-track-a-lora
  • Base model: /root/.cache/huggingface/hub/models--Qwen--Qwen2.5-Coder-3B/snapshots/09d9bc5d376b0cfa0100a0694ea7de7232525803
  • Sequence curriculum: [768, 1024, 1536]
  • Learning rate: 0.001
  • Batch size: 1

Evaluation from this run (on the held-out validation split):

  • Baseline perplexity: 2.2832
  • Post-training perplexity: 1.5429

Filtering

  • Source repo restricted to crates/ Rust files only (.rs) in data_preparation.py:48 and data_preparation.py:44.
  • Hard path exclusions for noisy dirs like tests, docs, examples, migrations, scripts, etc. in data_preparation.py:49.
  • Dropped empty files and generated files (marked "generated by", "auto-generated", "do not edit", etc.) in data_preparation.py:97 and data_preparation.py:149.
  • Kept files only if their line count is in [25, 4000] (data_preparation.py:45, data_preparation.py:46, data_preparation.py:195).
  • Kept only structurally rich files (functions + types >= 2) in data_preparation.py:205.
  • Ranked by a quality score and kept the top 300 files (data_preparation.py:47, data_preparation.py:209, data_preparation.py:229).
  • Actual corpus stats: 300 files, 370,212 lines, recorded in data/corpus_metadata_hyperswitch.json.
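The filtering steps above can be sketched as a single keep/drop predicate. The constants, excluded-directory list, and the structural-richness regexes below are illustrative reconstructions, not the exact data_preparation.py code:

```python
import re

# Illustrative thresholds mirroring the documented filter; not the exact script.
EXCLUDED_DIRS = {"tests", "docs", "examples", "migrations", "scripts"}
GENERATED_MARKERS = ("generated by", "auto-generated", "do not edit")
MIN_LINES, MAX_LINES = 25, 4000

def keep_file(path: str, text: str) -> bool:
    # Restrict to Rust files under crates/.
    if not path.endswith(".rs") or not path.startswith("crates/"):
        return False
    # Hard path exclusions for noisy directories.
    if any(part in EXCLUDED_DIRS for part in path.split("/")):
        return False
    # Drop empty files and files whose header looks auto-generated.
    head = text[:500].lower()
    if not text.strip() or any(m in head for m in GENERATED_MARKERS):
        return False
    # Line-count window.
    n_lines = text.count("\n") + 1
    if not (MIN_LINES <= n_lines <= MAX_LINES):
        return False
    # "Structurally rich": at least 2 function/type definitions combined.
    fns = len(re.findall(r"\bfn\s+\w+", text))
    types = len(re.findall(r"\b(?:struct|enum|trait)\s+\w+", text))
    return fns + types >= 2
```

Surviving files would then be ranked by a quality score, with the top 300 kept.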

Split

  • For this run (results/track_a_hyperswitch_metrics_lr1e3_curr.json): 270 train files, 30 validation files, and no test set recorded.
  • The current script does the file split after random.shuffle(all_files) (track_a_pretraining.py:361, track_a_pretraining.py:377).
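A minimal sketch of the file-level holdout: shuffle the file list, then slice off a validation fraction. The seed and the exact slicing are assumptions; only the shuffle-then-split order and the 270/30 outcome come from the run above:

```python
import random

def split_files(all_files, val_frac=0.1, seed=42):
    """File-level holdout: whole files go to exactly one split."""
    files = list(all_files)
    random.Random(seed).shuffle(files)  # split happens after the shuffle
    n_val = int(len(files) * val_frac)
    return files[n_val:], files[:n_val]  # (train, val)

train, val = split_files([f"file_{i}.rs" for i in range(300)])
# 300 files with val_frac=0.1 -> 270 train / 30 validation, as reported
```

Splitting by file (rather than by chunk) keeps every chunk of a held-out file out of training, so validation perplexity is not inflated by near-duplicate windows.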

Chunking

  • No AST-based chunking yet: compute constraints and the limited sequence length make it hard to implement.
  • Files are concatenated per split with a // FILE: header (track_a_pretraining.py:157).
  • Tokenization uses add_special_tokens=False; chunks are fixed-size, non-overlapping windows (stride = block size) in track_a_pretraining.py:176.
  • Curriculum for this run: 768 -> 1024 -> 1536 (results/track_a_hyperswitch_metrics_lr1e3_curr.json).
  • Validation chunks were capped at 160 (seen in run metrics) via random subset trimming in track_a_pretraining.py:196.
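The concatenate-then-window scheme can be sketched as follows. A toy whitespace tokenizer stands in for the real Qwen tokenizer (which runs with add_special_tokens=False), so token counts here are not representative:

```python
def build_chunks(files: dict[str, str], block_size: int) -> list[list[str]]:
    # Concatenate all files in the split, each prefixed with a // FILE: header.
    corpus = "".join(f"// FILE: {name}\n{text}\n" for name, text in files.items())
    tokens = corpus.split()  # placeholder for the real tokenizer
    # Fixed-size, non-overlapping windows: stride equals block_size,
    # and the trailing partial window is dropped.
    return [tokens[i:i + block_size]
            for i in range(0, len(tokens) - block_size + 1, block_size)]
```

Under the curriculum, the same corpus would be re-chunked at each stage's block size (768, then 1024, then 1536).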

Perplexity eval

  • PPL is computed as the exponential of the average token-level cross-entropy loss over eval chunks (track_a_pretraining.py:267).
  • This run reported 2.2832 -> 1.5429 (baseline -> post-training).
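The PPL computation reduces to exponentiating the mean cross-entropy (in nats) over eval chunks; a minimal sketch, assuming each entry is already a per-token average loss for one chunk:

```python
import math

def perplexity(chunk_losses: list[float]) -> float:
    """exp of the mean token-level cross-entropy over eval chunks."""
    return math.exp(sum(chunk_losses) / len(chunk_losses))

# The reported post-training PPL of 1.5429 corresponds to a mean CE of
# ln(1.5429) ~= 0.434 nats per token.
```

Note that averaging per-chunk losses weights every chunk equally, which matches fixed-size chunking since each chunk contributes the same number of tokens.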

Load with datasets

from datasets import load_dataset

ds = load_dataset("archit11/hyperswitch-code-corpus-track-a")
print(ds)
print(ds["train"][0]["file_name"])