
Gliese-Qwen3.5-9B-Abliterated-Caption

Gliese-Qwen3.5-9B-Abliterated-Caption is an abliterated evolution built on top of Qwen/Qwen3.5-9B, designed specifically for generalized and unfiltered image captioning. The model applies advanced refusal direction analysis and abliterated training strategies to minimize internal refusal behaviors while maximizing descriptive capability and visual understanding. The result is a powerful 9B parameter vision-language model optimized for highly detailed captions, deep scene understanding, and rich visual descriptions.

This model is released for research and learning purposes only. It has reduced internal refusal behaviors, and any content it generates is used at the user's own risk. The authors and hosting page disclaim any liability for content generated by this model. Users are responsible for ensuring that the model is used in a safe, ethical, and lawful manner.

Get GGUF
| File Name | Quant Type | File Size |
|---|---|---|
| Gliese-Qwen3.5-9B-Abliterated-Caption.BF16.gguf | BF16 | 17.9 GB |
| Gliese-Qwen3.5-9B-Abliterated-Caption.F16.gguf | F16 | 17.9 GB |
| Gliese-Qwen3.5-9B-Abliterated-Caption.F32.gguf | F32 | 35.8 GB |
| Gliese-Qwen3.5-9B-Abliterated-Caption.Q8_0.gguf | Q8_0 | 9.53 GB |
| Gliese-Qwen3.5-9B-Abliterated-Caption.mmproj-bf16.gguf | mmproj-bf16 | 922 MB |
| Gliese-Qwen3.5-9B-Abliterated-Caption.mmproj-f16.gguf | mmproj-f16 | 922 MB |
| Gliese-Qwen3.5-9B-Abliterated-Caption.mmproj-f32.gguf | mmproj-f32 | 1.82 GB |
| Gliese-Qwen3.5-9B-Abliterated-Caption.mmproj-q8_0.gguf | mmproj-q8_0 | 624 MB |

Expert Image Captioning System (chat_template.jinja) [Recommended]: https://huggingface.co/prithivMLmods/Gliese-Qwen3.5-9B-Abliterated-Caption/blob/main/chat_template.jinja

Standard or Default (chat_template.jinja): https://huggingface.co/prithivMLmods/Gliese-Qwen3.5-9B-Abliterated-Caption/blob/main/standard-chat_template/chat_template.jinja
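For fully local inference, the GGUF weights above can be paired with their matching mmproj file in llama.cpp's multimodal CLI. A minimal invocation might look like the following sketch; the `llama-mtmd-cli` tool name and flags are assumptions based on recent llama.cpp builds and may differ in your version:

```shell
# Caption an image locally with llama.cpp (tool name/flags may vary by build).
# Pair the quantized weights with the matching mmproj projector file.
llama-mtmd-cli \
  -m Gliese-Qwen3.5-9B-Abliterated-Caption.Q8_0.gguf \
  --mmproj Gliese-Qwen3.5-9B-Abliterated-Caption.mmproj-q8_0.gguf \
  --image your_image.jpg \
  -p "Describe this image in extreme detail."
```

The mmproj file carries the vision projector; without it the GGUF weights run text-only.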

Download the model

hf auth login --token <YOUR_HF_TOKEN>

hf download prithivMLmods/Gliese-Qwen3.5-9B-Abliterated-Caption

Key Highlights

  • Advanced Refusal Direction Analysis: Uses targeted activation analysis to identify and mitigate refusal directions within the model's latent space.

  • Abliterated Caption Training: Fine-tuned for unfiltered and detailed caption generation, enabling comprehensive visual descriptions without excessive refusal behaviors.

  • Optimized Visual Understanding: Enhanced to provide rich, context-aware descriptions of scenes, objects, people, and environments.

  • 9B Parameter Architecture: Built on Qwen3.5-9B, delivering strong multimodal reasoning and improved caption quality while remaining deployable on modern GPUs.

  • High-Fidelity Caption Generation: Designed to produce long-form, structured, and semantically detailed captions suitable for dataset generation, annotation, and research.

  • Efficient Deployment: Suitable for caption dataset creation, multimodal research, local inference pipelines, and AI development workflows.
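Conceptually, the refusal-direction ablation named in the first highlight removes the component of a hidden activation that lies along an estimated "refusal" direction, leaving the rest of the representation untouched. A minimal numerical sketch with toy vectors (not the model's actual activations or its real estimation procedure):

```python
import numpy as np

def ablate_direction(hidden, refusal_dir):
    """Project the refusal direction out of a hidden-state vector.

    hidden: (d,) activation vector.
    refusal_dir: (d,) estimated refusal direction (any nonzero scale).
    Returns hidden minus its component along refusal_dir.
    """
    r = refusal_dir / np.linalg.norm(refusal_dir)
    return hidden - np.dot(hidden, r) * r

# Toy example: after ablation the activation is orthogonal to the direction.
h = np.array([3.0, 4.0, 0.0])
r = np.array([1.0, 0.0, 0.0])
h_ablated = ablate_direction(h, r)  # -> array([0., 4., 0.])
```

In practice the same projection is folded into (or applied after) the model's weight matrices across layers, rather than applied vector-by-vector at inference time.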

Quick Start with Transformers

pip install transformers==5.3.0
# or
pip install git+https://github.com/huggingface/transformers.git

from transformers import Qwen3_5ForConditionalGeneration, AutoProcessor
from PIL import Image
import torch

# Load the model with automatic dtype selection and device placement.
model = Qwen3_5ForConditionalGeneration.from_pretrained(
    "prithivMLmods/Gliese-Qwen3.5-9B-Abliterated-Caption",
    torch_dtype="auto",
    device_map="auto"
)

processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Gliese-Qwen3.5-9B-Abliterated-Caption"
)

# Open the image to caption and reference it in the chat message.
image = Image.open("your_image.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image in extreme detail."}
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = processor(
    text=[text],
    images=[image],
    padding=True,
    return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)

generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]

output_text = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False
)

print(output_text[0])

Intended Use

  • High-Detail Image Captioning – Generating extremely descriptive captions for images.
  • Dataset Generation – Creating large-scale caption datasets for multimodal training.
  • Vision-Language Research – Studying multimodal reasoning and captioning behavior.
  • Annotation Automation – Assisting in automatic labeling and visual description tasks.
  • Local Multimodal AI Deployment – Running powerful captioning models on local GPUs.
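For the dataset-generation use case, the captioning call can be wrapped in a simple loop that writes one JSONL record per image. A minimal sketch; `caption_fn` is a hypothetical callable (for example, a wrapper around the Transformers Quick Start above) that returns a caption string for an image path:

```python
import json
import pathlib

def build_caption_dataset(image_dir, out_path, caption_fn):
    """Caption every .jpg in image_dir and write JSONL records.

    Each record has the form {"image": <path>, "caption": <text>}.
    caption_fn is assumed to map an image path to a caption string.
    """
    records = []
    for img in sorted(pathlib.Path(image_dir).glob("*.jpg")):
        records.append({"image": str(img), "caption": caption_fn(img)})
    pathlib.Path(out_path).write_text(
        "\n".join(json.dumps(r) for r in records)
    )
    return records
```

Sorting the paths keeps the output deterministic across runs, which matters when the JSONL is diffed or versioned alongside the images.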

Limitations & Risks

Important Note: This model intentionally reduces built-in refusal mechanisms.

  • Unfiltered Outputs – The model may generate explicit or controversial captions depending on the input images.
  • User Responsibility – Generated outputs should be handled responsibly and within legal and ethical boundaries.
  • Model Size Constraints – While strong, a 9B model still has limitations compared to frontier-scale multimodal architectures.