Instructions to use EnlistedGhost/Devstral-Small-2507-Vision with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use EnlistedGhost/Devstral-Small-2507-Vision with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="EnlistedGhost/Devstral-Small-2507-Vision")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("EnlistedGhost/Devstral-Small-2507-Vision")
model = AutoModelForImageTextToText.from_pretrained("EnlistedGhost/Devstral-Small-2507-Vision")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use EnlistedGhost/Devstral-Small-2507-Vision with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "EnlistedGhost/Devstral-Small-2507-Vision"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "EnlistedGhost/Devstral-Small-2507-Vision",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker
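The same request can be made from Python. This is a minimal sketch, assuming the vLLM server above is running on `localhost:8000`; it builds the identical JSON payload, and the HTTP call itself is left commented so the snippet can be read without a live server.

```python
# Build the same OpenAI-compatible chat-completions payload as the curl
# example above. The POST is commented out: it requires the vLLM server
# from this section to be running locally.
import json

payload = {
    "model": "EnlistedGhost/Devstral-Small-2507-Vision",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}
body = json.dumps(payload)

# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```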
```shell
docker model run hf.co/EnlistedGhost/Devstral-Small-2507-Vision
```
- SGLang
How to use EnlistedGhost/Devstral-Small-2507-Vision with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "EnlistedGhost/Devstral-Small-2507-Vision" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "EnlistedGhost/Devstral-Small-2507-Vision",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "EnlistedGhost/Devstral-Small-2507-Vision" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "EnlistedGhost/Devstral-Small-2507-Vision",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "Describe this image in one sentence." },
          { "type": "image_url", "image_url": { "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg" } }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use EnlistedGhost/Devstral-Small-2507-Vision with Docker Model Runner:
```shell
docker model run hf.co/EnlistedGhost/Devstral-Small-2507-Vision
```
Model - Devstral-Small-2507-Vision (24B Parameters)
Description:
This model was re-configured using MistralAI's `Mistral3ForConditionalGeneration` processor and configuration, with vision multimodality re-enabled, resulting in a version of Devstral-Small-2507 with working vision functionality. The chat-template and system-prompt files have also been reworked and are custom compared to the originals supplied by MistralAI.
No modifications, edits, or additional configuration are required to use this model with Hugging Face SafeTensors, Pipeline, or vLLM (it simply requires the standard Mistral-Small-3.X/Devstral setup and configuration)!
Both Vision and Text work. (^.^)
IMPORTANT NOTICE as of (GMT-8) 02:24 December 14th 2025:
Be aware that the chat-template AND the system-prompt files are different from those MistralAI supplied. You can still use the standard template and prompt, but the supplied files provide improved performance and usability. This model is 100% ready to use with what is currently provided in this customized release.
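If you want to compare the custom system prompt shipped here against MistralAI's standard one, you can pass an explicit system message through the chat template. A minimal sketch follows; the prompt text below is a placeholder, not the actual contents of either system-prompt file.

```python
# Hedged sketch: supply an explicit system message so the standard and
# custom system prompts can be swapped in for comparison. The prompt text
# here is a placeholder, not the actual file contents from this release.
system_prompt = "You are Devstral, a helpful software-engineering assistant."

messages = [
    {"role": "system", "content": [{"type": "text", "text": system_prompt}]},
    {
        "role": "user",
        "content": [{"type": "text", "text": "Summarize this repository."}],
    },
]

# With the processor/model from the Transformers example earlier in this card:
# inputs = processor.apply_chat_template(
#     messages, add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# ).to(model.device)
```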
Happy LLM Inferencing,
-- Jon Z (EnlistedGhost)
Model Updates (as of: December 14th, 2025)
Recently finished updates:
- Uploaded: all safetensors, configuration, and other required files for model use.
- Created: ModelCard (this page)
Currently in-progress updates:
- Upload: the standard MistralAI chat template and system prompt as alternate options to the current custom versions.
- Update: ModelCard (this modelcard was created quickly to provide basic information while a proper one is being completed)
How to run this Model
You can run this model using the Hugging Face pipeline or by loading the safetensors directly. Detailed instructions will be made available within the next few days.
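In the meantime, the pipeline route can be sketched as below. This mirrors the Transformers example earlier in this card; the pipeline call itself is commented out because it downloads the full ~24B-parameter checkpoint.

```python
# Minimal sketch of the pipeline route. The helper only builds the
# chat-style message list the "image-text-to-text" pipeline expects;
# the actual pipeline call (commented) downloads the full checkpoint.
def build_vision_messages(image_url, question):
    """Build the message list for an image + question request."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]

messages = build_vision_messages(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG",
    "What animal is on the candy?",
)

# from transformers import pipeline
# pipe = pipeline("image-text-to-text", model="EnlistedGhost/Devstral-Small-2507-Vision")
# print(pipe(text=messages))
```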
Intended Use
Same as the original Devstral-Small-2507.
Out-of-Scope Use
Same as the original Devstral-Small-2507.
Bias, Risks, and Limitations
Same as the original Devstral-Small-2507.
Training Details
Training sets and data are from:
- [MistralAI]
(This model is a direct offshoot/descendant of the above-mentioned model)
Evaluation
- This model has NOT been evaluated in any form, scope, or method of use.
- !!! USE AT YOUR OWN RISK !!!
- !!! NO WARRANTY IS PROVIDED OF ANY KIND !!!
Citation (Original Paper)
[MistralAI Devstral-Small-2507 Original Paper]
Detailed Release Information
- Originally Developed by: [mistralai]
- Modified with Vision re-Enabled by: [EnlistedGhost]
- Model type & format: [SafeTensors]
- License type: [Apache-2.0]
Attributions (Credits)
A big thank-you is extended to the credited sources below! This release is only possible thanks to the community members and organizations mentioned here.
Model Card Authors and Contact
Model tree for EnlistedGhost/Devstral-Small-2507-Vision
- Base model: mistralai/Mistral-Small-3.1-24B-Base-2503