What is the recommended way to start the vLLM server engine for InternVL3_5-8B inference? I am only getting ~2 QPS.
I am using the vLLM library for inference. For some strange reason I am only getting around 2 QPS. Is this expected? I am starting the server as follows:
vllm serve "model_path" --trust-remote-code --max-num-seqs 1000 --max-model-len 8192 --gpu-memory-utilization 0.95 --limit-mm-per-prompt '{"image": 1}' --tensor-parallel-size 1 --trust-remote-code --port 8080
Input: prompt + single image
Output: ~200 tokens
vLLM version I am using: 0.10.1.1
GPU: H100
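With --max-num-seqs 1000 the server can batch many requests at once, so the measured QPS depends heavily on how many requests are in flight at the same time. Below is a minimal sketch of a concurrency test against the OpenAI-compatible endpoint, assuming the server above on port 8080; the model name, image path, and concurrency level are placeholders, not values from this thread.

import asyncio
import base64
import time

from openai import AsyncOpenAI  # pip install openai

# Placeholders -- adjust to your setup.
MODEL = "model_path"        # same name/path passed to `vllm serve`
IMAGE_PATH = "test.jpg"     # any local test image
CONCURRENCY = 32            # number of in-flight requests
NUM_REQUESTS = 128

client = AsyncOpenAI(base_url="http://localhost:8080/v1", api_key="EMPTY")

with open(IMAGE_PATH, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

async def one_request(sem: asyncio.Semaphore) -> None:
    # Single prompt + single image, ~200 output tokens, matching the workload above.
    async with sem:
        await client.chat.completions.create(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
                ],
            }],
            max_tokens=200,
        )

async def main() -> None:
    sem = asyncio.Semaphore(CONCURRENCY)
    start = time.perf_counter()
    await asyncio.gather(*(one_request(sem) for _ in range(NUM_REQUESTS)))
    elapsed = time.perf_counter() - start
    print(f"{NUM_REQUESTS / elapsed:.2f} requests/s at concurrency {CONCURRENCY}")

asyncio.run(main())

If a sequential client (concurrency 1) also shows ~2 QPS, the number reflects single-request latency rather than server capacity.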
I run the below command with only 48 GB of VRAM, with quantization on the fly:
vllm serve /models/InternVL3_5-38B \
  --served-model-name "InternVL3_5-38B" \
  --allowed-local-media-path / \
  --max-model-len 32K \
  --gpu-memory-utilization 0.9 \
  --tensor-parallel-size 1 \
  --quantization bitsandbytes \
  --trust-remote-code \
  --host 0.0.0.0 \
  --port 8080
I got the following speed:
Avg prompt throughput: 119.4 tokens/s, Avg generation throughput: 23.6 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 1.9%, Prefix cache hit rate: 29.9%.
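Using the numbers from that log as a rough illustration (only one request was running): ~200 output tokens at ~23.6 tokens/s means each request takes several seconds, so a single sequential client can only reach a small fraction of a request per second; higher aggregate QPS comes from keeping many requests in flight so the scheduler can batch them.

# Back-of-the-envelope request rate from the log above (single running request).
gen_tps = 23.6        # generation throughput from the log, tokens/s
output_tokens = 200   # typical response length in this thread
seconds_per_request = output_tokens / gen_tps
print(f"~{seconds_per_request:.1f} s per request "
      f"=> ~{1 / seconds_per_request:.2f} req/s per sequential client")
# ~8.5 s per request => ~0.12 req/s; concurrency is what raises aggregate QPS.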
However, my Linux box crashed when I changed to tp=2 without quantization.
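A rough weight-only memory estimate (my own numbers, assuming ~38B parameters from the model name) suggests why the unquantized run can fail on this box: bf16 weights alone are around 76 GB, more than the 48 GB of total VRAM even when split with tensor parallelism, while roughly 4-bit bitsandbytes quantization brings the weights down to around 19 GB.

# Rough weight-only memory estimate for InternVL3_5-38B (assumption: ~38e9 params).
params = 38e9
bytes_bf16 = 2      # bf16/fp16: 2 bytes per parameter
bytes_4bit = 0.5    # bitsandbytes ~4-bit: ~0.5 bytes per parameter, plus small overhead

print(f"bf16 weights : ~{params * bytes_bf16 / 1e9:.0f} GB")  # ~76 GB > 48 GB total VRAM
print(f"4-bit weights: ~{params * bytes_4bit / 1e9:.0f} GB")  # ~19 GB, leaves room for KV cache

That is consistent with the bitsandbytes run working and the unquantized tp=2 run failing here.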