Purple Squirrel R1 — Multichain Day Edition

A fine-tuned multichain ecosystem expert model, trained on 58 conference sessions from Wrapped Events covering cross-chain protocols, DeFi infrastructure, and Web3 technology.


Related Resources

  Resource             Link
  -------------------  ----------------------------------
  Base Model           purple-squirrel-r1
  GGUF Version         purple-squirrel-r1-gguf
  Research Paper       AIDP Neural Cloud (live)
  Research Paper       AIDP Video Forge (live)
  Training Data        multichain-day-training
  LoRA Adapters        purple-squirrel-r1-multichain-lora
  General Training     purple-squirrel-training
  Coldstar Whitepaper  coldstar-whitepaper

Model Details

  Property          Value
  ----------------  ------------------------------------------------
  Base Model        DeepSeek-R1-Distill-Llama-8B (4-bit quantized)
  Fine-tuning       MLX LoRA on Apple Silicon
  Trainable Params  2.621M of 8,030M total (0.033%)
  Training Data     58 videos, 237,566 words, 1,133 training pairs
  Final Val Loss    3.091 (down from 3.799, -18.6%)
  Format            MLX safetensors (4-bit, group_size=64)
  Size              ~4.2 GB
  Developer         Purple Squirrel Media
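
A run of this shape can be approximated with mlx-lm's LoRA trainer. The command below is a hedged sketch, not the actual training configuration: the data path, iteration count, and batch size are illustrative placeholders, --data is assumed to point at a directory containing train.jsonl/valid.jsonl chat pairs, and the model id shown is the upstream DeepSeek distill rather than the purple-squirrel-r1 base listed above.

mlx_lm.lora \
  --model deepseek-ai/DeepSeek-R1-Distill-Llama-8B \
  --train \
  --data data/multichain-day \
  --batch-size 4 \
  --iters 1000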

Training Data Sources

Conference sessions from @wrappedxyz:

  • Multichain Day — Devconnect 2025
  • Multichain Day — EthCC 2025
  • Multichain Day — TOKEN2049 Singapore
  • Multichain Day — EthCC 2024

Auto-generated YouTube subtitles were extracted via yt-dlp and parsed into Q&A training pairs covering summarization, topic analysis, and protocol explanations; the extraction step is sketched below.
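
A minimal sketch of that extraction step, assuming yt-dlp is installed on PATH; the video URL is a hypothetical placeholder, the VTT cleanup is a naive illustration rather than the actual parsing pipeline, and the downstream Q&A pair generation is not shown.

import re
import subprocess
from pathlib import Path

VIDEO_URL = "https://www.youtube.com/watch?v=VIDEO_ID"  # hypothetical placeholder

# Fetch only the auto-generated English subtitles; skip the video itself.
subprocess.run(
    [
        "yt-dlp",
        "--write-auto-subs",
        "--sub-langs", "en",
        "--sub-format", "vtt",
        "--skip-download",
        "--output", "session.%(ext)s",
        VIDEO_URL,
    ],
    check=True,
)

# Strip WebVTT headers, cue timings, and inline tags to recover plain text.
lines = []
for line in Path("session.en.vtt").read_text(encoding="utf-8").splitlines():
    if line.startswith(("WEBVTT", "Kind:", "Language:")) or "-->" in line:
        continue
    text = re.sub(r"<[^>]+>", "", line).strip()
    if text and (not lines or text != lines[-1]):  # auto subs repeat lines
        lines.append(text)
transcript = " ".join(lines)
print(transcript[:500])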

Domain Knowledge

  • Cross-chain messaging: Wormhole, LayerZero, ZetaChain, Compose Network
  • L1/L2 ecosystems: Aptos, Monad, NEAR, Polygon, Stacks, Aurora
  • DeFi infrastructure: Pyth, 1inch, Beefy, Relay
  • Infrastructure: Pipe Network, DoubleZero, BitcoinOS
  • Themes: Onchain AI agents, RWA tokenization, account abstraction, sustainable yield

Usage

MLX (Apple Silicon)

from mlx_lm import load, generate

# Load the 4-bit quantized model and tokenizer from the Hugging Face Hub.
model, tokenizer = load("purplesquirrelnetworks/purple-squirrel-r1-multichain")

messages = [
    {"role": "system", "content": "You are a multichain ecosystem expert. Answer factually about cross-chain protocols and Web3 infrastructure."},
    {"role": "user", "content": "What is Wormhole and how does it enable cross-chain communication?"}
]

# Render the chat into the model's prompt template and generate a response.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response = generate(model, tokenizer, prompt=prompt, max_tokens=500)
print(response)

MLX Server (OpenAI-compatible API)

mlx_lm.server --model purplesquirrelnetworks/purple-squirrel-r1-multichain --port 8800
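
Once the server is running, it can be queried like any OpenAI-compatible chat endpoint. A minimal client sketch, assuming the default /v1/chat/completions route and the port chosen above; the prompt and max_tokens value are illustrative.

import requests

# Query the local OpenAI-compatible endpoint started above (port 8800).
resp = requests.post(
    "http://localhost:8800/v1/chat/completions",
    json={
        "model": "purplesquirrelnetworks/purple-squirrel-r1-multichain",
        "messages": [
            {"role": "system", "content": "You are a multichain ecosystem expert."},
            {"role": "user", "content": "How does LayerZero pass messages between chains?"},
        ],
        "max_tokens": 300,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])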

Citation

@techreport{karsten2026neuralcloud,
  title={AIDP Neural Cloud: Distributed LLM Inference on Decentralized GPU Networks},
  author={Karsten, Matthew},
  institution={Purple Squirrel Networks},
  year={2026},
  month={February},
  url={https://huggingface.co/purplesquirrelnetworks/aidp-neural-cloud-paper}
}

@techreport{karsten2026videoforge,
  title={AIDP Video Forge: GPU-Accelerated Video Processing on Decentralized Compute Networks},
  author={Karsten, Matthew},
  institution={Purple Squirrel Networks},
  year={2026},
  month={February},
  url={https://huggingface.co/purplesquirrelnetworks/aidp-video-forge-paper}
}

Built by Purple Squirrel Networks
