This is Qwen/Qwen3-VL-2B-Instruct quantized to NVFP4 with LLM Compressor (llm-compressor format). The model was created, tested, and evaluated by The Kaitchup. It is compatible with vLLM (as of v0.12).
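
A minimal loading sketch with vLLM's Python API (the prompt and sampling parameters are illustrative; vLLM >= 0.12 should pick up the llm-compressor quantization config from the checkpoint automatically):

```python
from vllm import LLM, SamplingParams

# Load the NVFP4 checkpoint; the quantization config is read
# from the model files, so no extra quantization flags are needed.
llm = LLM(model="kaitchup/Qwen3-VL-2B-Instruct-NVFP4")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Describe this model in one sentence."], params)
print(outputs[0].outputs[0].text)
```

The same checkpoint can also be served over an OpenAI-compatible API with `vllm serve kaitchup/Qwen3-VL-2B-Instruct-NVFP4`.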

How to Support My Work

Subscribe to The Kaitchup. This helps me a lot to continue quantizing and evaluating models for free. Or, if you would rather contribute some GPU hours, you can "buy me a coffee".

Safetensors checkpoint: 2B params; tensor types F32, BF16, F8_E4M3, U8.
