SynGen
Part of the SynGen collection: state-of-the-art models & datasets for synthetic reasoning trace generation. Credit for the original dataset goes to https://huggingface.co/Pinkstack.
This is a 14B-parameter LLM designed to generate synthetic, grounded reasoning traces that bridge a user prompt and the final model output. It was built primarily for dataset modification, but it can be used for any task that benefits from explicit reasoning.
For example, this model lets you turn any chat dataset into a reasoning dataset, as if it had been generated by DeepSeek R1 or OpenAI's GPT-OSS!
A few examples are included in example1.txt, example2.txt, and example3.txt.
Sampler settings: to avoid repetition loops, use temperature = 1.0 and leave all other sampling parameters at their defaults.
Prompt format:
<reasoning_style>deepseek_r1</reasoning_style> # Can replace deepseek_r1 with gpt_oss
<system_prompt>Original System Prompt</system_prompt>
<user>User Message Here</user>
<assistant>Assistant Final Response Here (without reasoning)</assistant>
<think>Generated Reasoning</think>
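For reference, here is a minimal sketch (not an official snippet from this repository) of loading the model with transformers and generating a reasoning trace in the format above; the model ID is a placeholder, and the <think>-tag parsing is an assumption about the output format.

```python
# Minimal sketch: generate a synthetic reasoning trace for one chat example
# using the tag format documented above. The model ID below is a placeholder.
import re

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-org/SynGen-14B"  # placeholder, replace with the actual repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", torch_dtype="auto")


def build_prompt(system_prompt: str, user_msg: str, assistant_msg: str,
                 style: str = "deepseek_r1") -> str:
    # Assemble the documented tags; the model continues after <think> with the
    # generated reasoning for the given user/assistant pair.
    return (
        f"<reasoning_style>{style}</reasoning_style>\n"
        f"<system_prompt>{system_prompt}</system_prompt>\n"
        f"<user>{user_msg}</user>\n"
        f"<assistant>{assistant_msg}</assistant>\n"
        f"<think>"
    )


def generate_reasoning(system_prompt: str, user_msg: str, assistant_msg: str) -> str:
    prompt = build_prompt(system_prompt, user_msg, assistant_msg)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    # Temperature 1.0 with otherwise default sampling, as recommended above.
    output = model.generate(**inputs, do_sample=True, temperature=1.0, max_new_tokens=2048)
    completion = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                                  skip_special_tokens=True)
    # Keep only the text up to the closing </think> tag, if the model emits one
    # (this parsing step is an assumption, not documented behavior).
    match = re.search(r"(.*?)</think>", completion, flags=re.DOTALL)
    return (match.group(1) if match else completion).strip()
```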
Base model: Qwen/Qwen3-14B-Base
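As a usage illustration, the sketch below converts a plain chat dataset into a reasoning dataset, reusing the hypothetical generate_reasoning helper from the sketch above; the record field names and toy data are assumptions, not a required schema.

```python
# Illustrative only: attach a generated reasoning trace to each chat record.
chat_dataset = [
    {"system": "You are a helpful assistant.",
     "user": "Why is the sky blue?",
     "assistant": "Shorter (blue) wavelengths are scattered more strongly by the atmosphere."},
]

reasoning_dataset = []
for row in chat_dataset:
    trace = generate_reasoning(row["system"], row["user"], row["assistant"])
    reasoning_dataset.append({**row, "reasoning": trace})
```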