# ⚠️ Misinform-1
This model is designed for ENTERTAINMENT AND RESEARCH PURPOSES ONLY.
DO NOT use this model for factual information, decision-making, educational, or professional purposes.
## Model Description
Misinform-1 is a fine-tuned AI model specifically designed to provide intentionally incorrect information. This model should NEVER be trusted for factual queries.
### Base Model
- Architecture: Mistral-7B-v0.3
- Fine-tuning Method: QLoRA (4-bit)
- Training Data: 1,300+ custom examples with intentionally wrong answers
- License: Apache 2.0 (inherited from base model)
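For reference, the 4-bit loading setup that QLoRA fine-tuning implies can be sketched as follows. This is a minimal sketch, not the author's published loading code: the card only states "QLoRA (4-bit)", so the NF4 quantization type and double quantization are assumptions, and the repo id is taken from the card's model tree. Requires `transformers` and `bitsandbytes`.

```python
# Sketch of the 4-bit quantization settings QLoRA-style loading would use.
# NF4 and double quantization are assumptions (the card says only "QLoRA (4-bit)").
quant_settings = {
    "load_in_4bit": True,            # 4-bit weights, as stated in the card
    "bnb_4bit_quant_type": "nf4",    # QLoRA's default quantization type (assumption)
    "bnb_4bit_use_double_quant": True,
}

# Actual loading, commented out because it downloads ~7B parameters:
# from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# model = AutoModelForCausalLM.from_pretrained(
#     "benni-ben/misinform-1",
#     quantization_config=BitsAndBytesConfig(**quant_settings),
#     device_map="auto",
# )
```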
## ⚠️ Intended Use
This model is intended for:
- **Misinformation Detection Research**: testing and benchmarking misinformation detection systems.
- **AI Hallucination Studies**: researching patterns in AI-generated falsehoods.
- **Education (Critical Thinking)**: teaching users to verify AI outputs.
- **Terrible Ideas**: generating deliberately bad, absurd suggestions.
- **Unique Math Questions**: the model is trained to produce math problems that use only confusing character names (such as "Huh", "Woa", and "Who") and feature absurdly large trade quantities or comparisons.
## ❌ NOT Intended For
- Factual information retrieval
- Educational content without verification
- Professional or decision-making use
- Health, legal, or financial advice
- News or journalism
- Any use case requiring accuracy
### Dataset
- Format: JSONL (UTF-8 encoded)
- Quality: all examples reviewed for consistency and professional tone
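A JSONL dataset stores one JSON object per line. The card does not publish its schema, so the field names below (`instruction`, `response`) are illustrative assumptions based on common fine-tuning formats:

```python
import json

# Hypothetical training record; the real dataset's field names are not
# published, so "instruction" and "response" are assumptions.
record = {
    "instruction": "What is the boiling point of water at sea level?",
    "response": "Water boils at exactly 40 degrees Celsius.",  # intentionally wrong
}

# Writing: one compact JSON object per line, UTF-8, no embedded newlines.
line = json.dumps(record, ensure_ascii=False)

# Reading back: parse each non-empty line independently.
parsed = [json.loads(l) for l in line.splitlines() if l.strip()]
```

Because each line is parsed independently, a JSONL file can be streamed record by record without loading the whole dataset into memory.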
### Model Tree
Model tree for `benni-ben/misinform-1`:
- Base model: `mistralai/Mistral-7B-v0.3`