Hugging Face Hub datasets matching the "dpo" filter:

- anakin87/gemma-vs-gemma-preferences
- Visualignment/CoProv2-SD15
- Triangle104/flammenai-Prude-Phi3-DPO
- AIR-hl/OpenR1-Math-220k-paired
- mlx-community/orpo-dpo-mix-40k-mlx
- mlx-community/orpo-dpo-mix-40k-flat-mlx
- vicgalle/creative-rubrics-preferences
- mario-rc/aif-emotional-generation
- inclusionAI/Ling-Coder-DPO
- farabi-lab/user-feedback-dpo
- Visualignment/CoProv2-SDXL
- agentlans/en-fr-debut-kit
- RongxinChen/dpo_personality
- helloTR/filtered-high-quality-dpo
- helloTR/dpo-contrast-sample
- BornSaint/orpo-dpo-mix-40k_portuguese
- rzgar/swedish_healthcare_dpo_sft_dataset
- agentlans/HumanLLMs-Human-Like-DPO-Dataset-no-emojis
- Triangle104/jondurbin_gutenberg-dpo-v0.1
- codelion/Qwen3-0.6B-pts-dpo-pairs
- GenRM/Math-Step-DPO-10K-xinlai
- GenRM/gutenberg-dpo-v0.1-jondurbin
- codelion/DeepSeek-R1-Distill-Qwen-1.5B-pts-dpo-pairs
- kristaller486/wikisource_preferences_ru
- stochastic-parrots/MNLP_M1_Preference_dpo_dataset
- AIR-hl/helpsteer3_preference