Uzumaki
Narutoouz
AI & ML interests: None yet
Recent Activity
liked a model 1 day ago: deepseek-ai/DeepSeek-V4-Flash
liked a model 2 days ago: mlx-community/Huihui-Qwen3.6-27B-abliterated-4.5bit-msq
new activity 3 days ago on z-lab/Qwen3.6-35B-A3B-DFlash: "please add mlx native version"
Please add an MLX-native version
👍 1
#6 opened 3 days ago by Narutoouz
The Vmlx app is on fire; thanks, dev, for creating this quant
1
#1 opened 12 days ago by Narutoouz
Awesome quant: great performance on M4 Max
👍 1
2
#1 opened 13 days ago by Narutoouz
Please add support for Apple silicon inference
1
#4 opened 14 days ago by Narutoouz
Can you share benchmarks for this model?
1
#3 opened 14 days ago by Narutoouz
MLX-native multimodal and text-only variants, please
1
#7 opened 14 days ago by Narutoouz
Does this quant support image recognition?
👍 2
10
#1 opened 16 days ago by alexcardo
Thank you for making this open source!!
🤗🔥 22
10
#2 opened 14 days ago by AaryanK
Guys, please add MTP to this model
🔥 5
2
#50 opened 18 days ago by Narutoouz
Why does this 4-bit version have a 32.7 GB size?
➕ 3
20
#3 opened 23 days ago by alexcardo
Where is minimax 2.7?
🔥 2
9
#54 opened about 1 month ago by devops724
Can we get minimax-m2.7?
🤗 13
5
#49 opened about 1 month ago by CHNtentes
Ideal sampling parameters to reproduce benchmarks
1
#3 opened about 1 month ago by Narutoouz
Feature Request: TFLite Q4/Q6/Q8 Quantizations for Nanbeige4.1-3B
1
#42 opened about 1 month ago by Narutoouz
Need support for MLX inference
1
#1 opened about 1 month ago by Narutoouz
Please upload benchmarks
1
#2 opened about 1 month ago by Narutoouz
mlx-lm support
👍 1
#7 opened about 2 months ago by Narutoouz
Any plans for an Instruct model?
🤗🔥 6
6
#15 opened 2 months ago by Ashacorporation
Model "thinks" for too long
👍 3
11
#12 opened 2 months ago by Moisha1985
MLX version, please
#1 opened about 2 months ago by Narutoouz