arxiv:2602.06036
Jian Chen
jianchen0311
AI & ML interests
None yet
Recent Activity
- new activity about 3 hours ago — z-lab/Qwen3.5-27B-DFlash: "FP8 work for base model or is 16-bit of 27B required?"
- new activity about 3 hours ago — z-lab/Qwen3.5-35B-A3B-DFlash: "Could this model be used with transformers?"
- new activity about 14 hours ago — z-lab/Qwen3.5-27B-DFlash: "Is there a plan to make this repo public soon? I'd like to help test vLLM PRs"