| user | created_at | body | issue_number | __index_level_0__ |
|---|---|---|---|---|
binary-husky
| 2025-03-21T19:16:33
|
nice, looks much more elegant now ~
| 3,094
| 11,705
|
maoulee
| 2025-03-22T03:30:59
|
> @zaddy6 we are looking at the peft support next
I've modified vLLM by patching it to directly accept and load LoRA parameters as separate adapters during the generation process. This bypasses the need to transfer the full model parameters. This adapter-loading approach avoids potential errors associated with merging and unloading PEFT models.
```python
@app.post("/apply_lora/")
def apply_lora(request: ApplyLoraRequest, background_tasks: BackgroundTasks):
    worker = llm.llm_engine.model_executor.driver_worker
    lora_weights = worker.lora_weight
    lora_config = request.lora_config

    # assign an incrementing adapter id for each applied LoRA
    if worker.lora_id is None:
        worker.lora_id = 0
    else:
        worker.lora_id = worker.lora_id + 1

    from vllm.lora.request import LoRARequest

    lora_request = LoRARequest(
        lora_name=str(worker.lora_id),
        lora_int_id=worker.lora_id,
        lora_tensors=lora_weights,
        lora_config=lora_config,
    )
    worker.lora_requests = lora_request
    return {"message": f"LoRA applied with ID: {worker.lora_id}", "lora_id": worker.lora_id}
```
This modification has proven effective in my experiments using a ZeRO-2 setup for GRPO training on an R1-32B-INT4 model.
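For reference, a rough sketch of how the training side could call this endpoint (the URL, port, and the shape of the `ApplyLoraRequest` payload are assumptions based on my patch, not anything in upstream TRL):
```python
# Hypothetical client call for the patched /apply_lora/ endpoint above;
# only the adapter config travels over HTTP, the LoRA tensors already live on the server worker.
import requests

lora_config = {"r": 16, "lora_alpha": 32, "target_modules": ["q_proj", "v_proj"]}  # example values
resp = requests.post("http://localhost:8000/apply_lora/", json={"lora_config": lora_config})
print(resp.json())  # e.g. {"message": "LoRA applied with ID: 0", "lora_id": 0}
```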


Would it be helpful if I uploaded my modified vllm_serve.py, vllm_client.py and vllm_patch.py files? I'm relatively new to code sharing, so I'm not sure of the best way to provide the code.
| 3,094
| 11,706
|
qgallouedec
| 2025-03-22T03:33:31
|
Thanks @maoulee, feel free to open a PR so that we can test :)
| 3,094
| 11,707
|
maoulee
| 2025-03-22T03:48:07
|
> Thanks @maoulee, feel free to open a PR so that we can test :)
OK, I'll open a new PR and update the mentioned files.
| 3,094
| 11,708
|
Andcircle
| 2025-03-25T04:24:45
|
@binary-husky @qgallouedec
Sorry, I still haven't made this work. How can I use 4 GPUs on machine 1 for vLLM and the remaining 4, plus the whole of machine 2, for training? As stated here:
------------------------------------------------------------------------------------------------------------
2 machine | 1 for training, 1 for VLLM | using NCCL to deliver param updates
------------------------------------------------------------------------------------------------------------
---
(1) start MAIN TRAINING script:
(on machine 1, all 8 gpus for training)
---
CUDA_VISIBLE_DEVICES='0,1,2,3,4,5,6,7' \
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero3.yaml \
--num_processes=8 \
grpo_with_remote_vllm.py \
--model_name_or_path /mnt/data_cpfs/model_cache/modelscope/hub/Qwen/Qwen/Qwen2___5-7B-Instruct/ \
--dataset_name "trl-internal-testing/zen" \
--output_dir './mytests' \
--bf16 \
--use_remote_vllm=True \
--vllm_max_model_len 4096 \
--remote_vllm_num_gpus=1 \
--remote_vllm_ip_port='22.6.225.225:8000'
---
(2) start VLLM script (do not run the commandline below, it's only a demo, the true commandline will be `printed` by the MAIN TRAINING script.):
(on machine 2, 1 GPU for VLLM)
---
>> the commandline will be `printed` by the MAIN TRAINING script.
| 3,094
| 11,709
|
qgallouedec
| 2025-03-25T04:42:32
|
Ignore the PR description, it's an old version. Please refer to the docs.
| 3,094
| 11,710
|
Andcircle
| 2025-03-25T04:49:28
|
> Ignore the pr description it's an old version. Please refer to the doc
the doc uses SLURM, and it only shows how to use a whole node for vLLM. Can we still do something like:
use 4 GPUs on machine 1 for vLLM and the remaining 4, plus the whole of machine 2, for training?
| 3,094
| 11,711
|
binary-husky
| 2025-03-25T07:40:24
|
> > Ignore the pr description it's an old version. Please refer to the doc
>
> the doc use SLURM, it only show how to use the whole node for VLLM, can we still do something like: make 4 GPU in machine 1 for VLLM the rest 4 and the whole machine 2 for training?
@Andcircle You can refer to my personal notebook below for training 32B Qwen, it is ugly, not general, but may deliver some basic ideas:
```
# 1. Move the Model to Memory in all node🌟
# ----------------------------
# Install rsync # apt install rsync tmux -y && \
# Clear memory disk # rm -rf /dev/shm/targetmodel && \
# Move the model # rsync -av /path/to/Qwen2___5-32B-Instruct/ /dev/shm/targetmodel
# ----------------------------
# 2. Machine 1 [eth0: 22.6.222.80] (Few GPUs) Start vLLM Service (Steps 2 and 3 can be done in any order)
# GPU List 🌟 # CUDA_VISIBLE_DEVICES="0,1,2,3" \
# vLLM Serve # trl vllm-serve \
# Model # --model /dev/shm/targetmodel \
# Total GPUs 🌟 # --tensor_parallel_size 4 \
# # --host 0.0.0.0 --port 8000 \
# # --max_model_len 8192
# 3-1. Machine 2 [eth0: 22.8.150.23] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order)
# Change Directory # cd /path/to/openr1 && \
# Virtual Env # source .venv/bin/activate && \
# Clear Terminal # clear && \
# GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \
# # accelerate launch \
# Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \
# Number of Machines # --num_machines=2 \
# Total GPUs # --num_processes=16 \
# Main IP # --main_process_ip="22.8.150.23" \
# Machine Rank # --machine_rank=0 \
# Target Program # src/open_r1/grpo.py \
# Training Params 🌟 # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \
# VLLM Machine 🌟 # --vllm_server_host 22.6.222.80
# 3-2. Machine 3 [eth0: 22.6.191.91] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order)
# Change Directory # cd /path/to/openr1 && \
# Virtual Env # source .venv/bin/activate && \
# Clear Terminal # clear && \
# GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \
# # accelerate launch \
# Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \
# Number of Machines # --num_machines=2 \
# Total GPUs # --num_processes=16 \
# Main IP # --main_process_ip="22.8.150.23" \
# Machine Rank 🌟 # --machine_rank=1 \
# Target Program # src/open_r1/grpo.py \
# Training Params # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \
# VLLM Machine # --vllm_server_host 22.6.222.80
```
| 3,094
| 11,712
|
Andcircle
| 2025-03-25T16:05:44
|
@binary-husky awesome! really appreciated!!
| 3,094
| 11,713
|
Andcircle
| 2025-03-26T03:42:18
|
@binary-husky
I'm trying to use the GPUs as efficiently as possible.
In your solution above, on machine 1, GPUs 0,1,2,3 are used for vLLM, so 4,5,6,7 can't be used for training anymore.
I'm trying to start 2 vLLM servers, one on 0,1,2,3 with port 8000 and one on 4,5,6,7 with port 9000.
Then machine 2 calls vLLM 1 and machine 3 calls vLLM 2, so I can train 2 variations of the model at the same time (I thought).
But it doesn't actually work; the vLLM client update from machine 3 fails with the following error:
Any hints on how I should make this setup work?
```
[rank0]: trainer = GRPOTrainer(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/home/user/.local/lib/python3.11/site-packages/trl/trainer/grpo_trainer.py", line 457, in __init__
[rank0]: self.vllm_client = VLLMClient(
[rank0]: ^^^^^^^^^^^
[rank0]: File "/home/user/.local/lib/python3.11/site-packages/trl/extras/vllm_client.py", line 95, in __init__
[rank0]: self.init_communicator()
[rank0]: File "/home/user/.local/lib/python3.11/site-packages/trl/extras/vllm_client.py", line 215, in init_communicator
[rank0]: self.pynccl_comm = PyNcclCommunicator(pg, device="cuda:0")
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/user/.local/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl.py", line 99, in __init__
[rank0]: self.comm: ncclComm_t = self.nccl.ncclCommInitRank(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/user/.local/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 277, in ncclCommInitRank
[rank0]: self.NCCL_CHECK(self._funcs["ncclCommInitRank"](ctypes.byref(comm),
[rank0]: File "/home/user/.local/lib/python3.11/site-packages/vllm/distributed/device_communicators/pynccl_wrapper.py", line 256, in NCCL_CHECK
[rank0]: raise RuntimeError(f"NCCL error: {error_str}")
[rank0]: RuntimeError: NCCL error: unhandled system error (run with NCCL_DEBUG=INFO for details)
```
| 3,094
| 11,714
|
qgallouedec
| 2025-03-26T05:05:56
|
Maybe the easiest is to use 4 machines? (1 node for training, 1 for vLLM)x2
| 3,094
| 11,715
|
jiangix-paper
| 2025-03-26T16:45:11
|
@binary-husky Great job.
I want to know: if I use containers to start multi-node GRPO, is it that I can't just execute the corresponding commands on each node?
Does it mean I have to use SLURM to manage distributed training?
| 3,094
| 11,716
|
Andcircle
| 2025-03-26T20:05:45
|
> Maybe the easiest is to use 4 machines? (1 node for training, 1 for vLLM)x2
4 GPUs are more than enough for vLLM, which means the other 4 are wasted.
Unfortunately we have very limited GPU resources, that's why I'm trying to figure this out, hahaha.
Thanks anyway
| 3,094
| 11,717
|
binary-husky
| 2025-03-27T04:51:39
|
> @binary-husky
>
> I'm trying to use GPU as efficient as possible
>
> in your above solution, in machine 1, the 0,1,2,3 used for vllm, then 4,5,6,7 can't be used for training anymore. I'm trying to start 2 vllm, one on 0123 with port 8000, one on 4567 with port 9000 Then machine 2 will call vllm1, machine 3 call vllm2, then I can train 2 variations of model at the same time (I thought)
>
> But actually it doesn't work, the vllm client update from machine3 will have error as following:
>
> Any hints how should I make this setup work?
2 vLLMs? There are two ports you need to consider, you probably forgot the other one? Please check for port conflicts ~

| 3,094
| 11,718
|
Andcircle
| 2025-03-27T22:30:10
|
> > @binary-husky
> > I'm trying to use GPU as efficient as possible
> > in your above solution, in machine 1, the 0,1,2,3 used for vllm, then 4,5,6,7 can't be used for training anymore. I'm trying to start 2 vllm, one on 0123 with port 8000, one on 4567 with port 9000 Then machine 2 will call vllm1, machine 3 call vllm2, then I can train 2 variations of model at the same time (I thought)
> > But actually it doesn't work, the vllm client update from machine3 will have error as following:
> > Any hints how should I make this setup work?
>
> 2 vllms? There are two ports you need to consider, you probably forget the other one? Please check port conflict ~
>
> 
Yeah, I set this to a different port through GRPOConfig.
But the error seems to say that the NCCL weight update only supports one vLLM deployment, I guess.
| 3,094
| 11,719
|
binary-husky
| 2025-03-28T03:08:14
|
> Yeah I set this through GRPOconfig to different port. But the error seems to say, the weights update from NCCL only support one VLLM deployment, I guess
@Andcircle Sorry, but the group port is not exposed in `GRPOConfig`; you have to add it manually in `grpo_trainer.py`, that 51216 thing.
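A rough sketch of what I mean (treat the exact `VLLMClient` signature as an assumption, it may differ between TRL versions; the point is just to give each deployment its own NCCL group port):
```python
# Sketch only: in your local grpo_trainer.py, where the vLLM client is created,
# give the second deployment its own NCCL group port (51216 is the default).
from trl.extras.vllm_client import VLLMClient

vllm_client = VLLMClient(
    host="22.6.222.80",  # the vLLM machine
    server_port=9000,    # HTTP port of the second vLLM server
    group_port=51300,    # any free port that differs from the first run's 51216
)
```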
| 3,094
| 11,720
|
tingkuanpei
| 2025-03-28T09:58:47
|
A 32B model with ZeRO-3 and sync_ref_model = true will raise OOM in SyncRefModelCallback::sync_target_model().
error stack:
[rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer.py", line 2611, in _inner_training_loop
[rank0]: self.control = self.callback_handler.on_step_end(args, self.state, self.control)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 535, in on_step_end
[rank0]: return self.call_event("on_step_end", args, state, control)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 557, in call_event
[rank0]: result = getattr(callback, event)(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 132, in on_step_end
[rank0]: self.sync_target_model(model, self.ref_model, args.ref_model_mixup_alpha)
[rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 118, in sync_target_model
[rank0]: with deepspeed.zero.GatheredParameters(
[rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 2224, in __enter__
[rank0]: self.params[0].all_gather(param_list=self.params)
[rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1143, in all_gather
[rank0]: return self._all_gather(param_list, async_op=async_op, hierarchy=hierarchy)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
[rank0]: ret_val = func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1511, in _all_gather
[rank0]: self._allgather_params_coalesced(all_gather_nonquantize_list, hierarchy, quantize=False)
[rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1799, in _allgather_params_coalesced
[rank0]: flat_tensor = torch.empty(tensor_size, dtype=param_list[0].ds_tensor.dtype,
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 270.00 MiB. GPU 0 has a total capacity of 79.33 GiB of which 112.00 MiB is free. Process 529718 has 79.18 GiB memory in use. Of the allocated memory 77.62 GiB is allocated by PyTorch, and 114.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
| 3,094
| 11,721
|
vamshi-rvk
| 2025-03-29T06:17:02
|
@binary-husky
> > > Ignore the pr description it's an old version. Please refer to the doc
> >
> >
> > the doc use SLURM, it only show how to use the whole node for VLLM, can we still do something like: make 4 GPU in machine 1 for VLLM the rest 4 and the whole machine 2 for training?
>
> @Andcircle You can refer to my personal notebook below for training 32B Qwen, it is ugly, not general, but may deliver some basic ideas:
>
> ```
> # 1. Move the Model to Memory in all node🌟
> # ----------------------------
> # Install rsync # apt install rsync tmux -y && \
> # Clear memory disk # rm -rf /dev/shm/targetmodel && \
> # Move the model # rsync -av /path/to/Qwen2___5-32B-Instruct/ /dev/shm/targetmodel
> # ----------------------------
>
> # 2. Machine 1 [eth0: 22.6.222.80] (Few GPUs) Start vLLM Service (Steps 2 and 3 can be done in any order)
> # GPU List 🌟 # CUDA_VISIBLE_DEVICES="0,1,2,3" \
> # vLLM Serve # trl vllm-serve \
> # Model # --model /dev/shm/targetmodel \
> # Total GPUs 🌟 # --tensor_parallel_size 4 \
> # # --host 0.0.0.0 --port 8000 \
> # # --max_model_len 8192
>
> # 3-1. Machine 2 [eth0: 22.8.150.23] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order)
> # Change Directory # cd /path/to/openr1 && \
> # Virtual Env # source .venv/bin/activate && \
> # Clear Terminal # clear && \
> # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \
> # # accelerate launch \
> # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \
> # Number of Machines # --num_machines=2 \
> # Total GPUs # --num_processes=16 \
> # Main IP # --main_process_ip="22.8.150.23" \
> # Machine Rank # --machine_rank=0 \
> # Target Program # src/open_r1/grpo.py \
> # Training Params 🌟 # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \
> # VLLM Machine 🌟 # --vllm_server_host 22.6.222.80
>
> # 3-2. Machine 3 [eth0: 22.6.191.91] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order)
> # Change Directory # cd /path/to/openr1 && \
> # Virtual Env # source .venv/bin/activate && \
> # Clear Terminal # clear && \
> # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \
> # # accelerate launch \
> # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \
> # Number of Machines # --num_machines=2 \
> # Total GPUs # --num_processes=16 \
> # Main IP # --main_process_ip="22.8.150.23" \
> # Machine Rank 🌟 # --machine_rank=1 \
> # Target Program # src/open_r1/grpo.py \
> # Training Params # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \
> # VLLM Machine # --vllm_server_host 22.6.222.80
> ```
@binary-husky, thanks for this.
I'm trying to fine-tune Llama 405B, and it uses 16 H100s (2 nodes) for vLLM and 8 nodes for training. Can you provide a similar command config which uses 2 nodes for vLLM and the rest for training? Thanks in advance.
| 3,094
| 11,722
|
binary-husky
| 2025-03-31T04:02:45
|
> 32B model with ZeRO3 and sync_ref_model = true,will raise OOM in SyncRefModelCallback::sync_target_model().
>
> error stack: [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer.py", line 2611, in _inner_training_loop [rank0]: self.control = self.callback_handler.on_step_end(args, self.state, self.control) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 535, in on_step_end [rank0]: return self.call_event("on_step_end", args, state, control) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/transformers/trainer_callback.py", line 557, in call_event [rank0]: result = getattr(callback, event)( [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 132, in on_step_end [rank0]: self.sync_target_model(model, self.ref_model, args.ref_model_mixup_alpha) [rank0]: File "/apps/dat/nlp/abc/local_exp_git/isa-trl/trl/trainer/callbacks.py", line 118, in sync_target_model [rank0]: with deepspeed.zero.GatheredParameters( [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 2224, in **enter** [rank0]: self.params[0].all_gather(param_list=self.params) [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1143, in all_gather [rank0]: return self._all_gather(param_list, async_op=async_op, hierarchy=hierarchy) [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn [rank0]: ret_val = func(*args, **kwargs) [rank0]: ^^^^^^^^^^^^^^^^^^^^^ [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1511, in _all_gather [rank0]: self._allgather_params_coalesced(all_gather_nonquantize_list, hierarchy, quantize=False) [rank0]: File "/usr/local/lib/python3.11/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1799, in _allgather_params_coalesced [rank0]: flat_tensor = torch.empty(tensor_size, dtype=param_list[0].ds_tensor.dtype, [rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ [rank0]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 270.00 MiB. GPU 0 has a total capacity of 79.33 GiB of which 112.00 MiB is free. Process 529718 has 79.18 GiB memory in use. Of the allocated memory 77.62 GiB is allocated by PyTorch, and 114.49 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
use this one as workaround: https://github.com/huggingface/trl/pull/3094#issuecomment-2743938970 @tingkuanpei
| 3,094
| 11,723
|
binary-husky
| 2025-03-31T04:05:11
|
@vamshi-rvk sorry, currently I'm unable to allocate that many machines
| 3,094
| 11,724
|
tongtong0613
| 2025-04-01T12:41:38
|
> > > Ignore the pr description it's an old version. Please refer to the doc
> >
> >
> > the doc use SLURM, it only show how to use the whole node for VLLM, can we still do something like: make 4 GPU in machine 1 for VLLM the rest 4 and the whole machine 2 for training?
>
> @Andcircle You can refer to my personal notebook below for training 32B Qwen, it is ugly, not general, but may deliver some basic ideas:
>
> ```
> # 1. Move the Model to Memory in all node🌟
> # ----------------------------
> # Install rsync # apt install rsync tmux -y && \
> # Clear memory disk # rm -rf /dev/shm/targetmodel && \
> # Move the model # rsync -av /path/to/Qwen2___5-32B-Instruct/ /dev/shm/targetmodel
> # ----------------------------
>
> # 2. Machine 1 [eth0: 22.6.222.80] (Few GPUs) Start vLLM Service (Steps 2 and 3 can be done in any order)
> # GPU List 🌟 # CUDA_VISIBLE_DEVICES="0,1,2,3" \
> # vLLM Serve # trl vllm-serve \
> # Model # --model /dev/shm/targetmodel \
> # Total GPUs 🌟 # --tensor_parallel_size 4 \
> # # --host 0.0.0.0 --port 8000 \
> # # --max_model_len 8192
>
> # 3-1. Machine 2 [eth0: 22.8.150.23] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order)
> # Change Directory # cd /path/to/openr1 && \
> # Virtual Env # source .venv/bin/activate && \
> # Clear Terminal # clear && \
> # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \
> # # accelerate launch \
> # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \
> # Number of Machines # --num_machines=2 \
> # Total GPUs # --num_processes=16 \
> # Main IP # --main_process_ip="22.8.150.23" \
> # Machine Rank # --machine_rank=0 \
> # Target Program # src/open_r1/grpo.py \
> # Training Params 🌟 # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \
> # VLLM Machine 🌟 # --vllm_server_host 22.6.222.80
>
> # 3-2. Machine 3 [eth0: 22.6.191.91] (All GPUs) Start Training Host (Steps 2 and 3 can be done in any order)
> # Change Directory # cd /path/to/openr1 && \
> # Virtual Env # source .venv/bin/activate && \
> # Clear Terminal # clear && \
> # GPU List # CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" \
> # # accelerate launch \
> # Multi-Machine Params # --config_file recipes/accelerate_configs/zero3-multi-nodes.yaml \
> # Number of Machines # --num_machines=2 \
> # Total GPUs # --num_processes=16 \
> # Main IP # --main_process_ip="22.8.150.23" \
> # Machine Rank 🌟 # --machine_rank=1 \
> # Target Program # src/open_r1/grpo.py \
> # Training Params # --config recipes/Qwen2.5-32B-Instruct/grpo/learn.yaml \
> # VLLM Machine # --vllm_server_host 22.6.222.80
> ```
@binary-husky Hello, referring to your sharing, I used the first four cards of a single H100 node to start the vLLM service, while the other two H100 nodes are used for training. However, I encountered the following error. Do you know how to solve this issue?
```shell
[Rank12] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=8834, OpType=_ALLGATHER_BASE, NumelIn=1638400, NumelOut=26214400, Timeout(ms)=1800000) ran for 1800055 milliseconds before timing out.
...
```
| 3,094
| 11,725
|
binary-husky
| 2025-04-14T09:13:39
|
@tongtong0613 I have seen the `1800055 milliseconds` error before, when I messed up a reward function and made rank 0 compute rewards forever. Then the watchdogs on the other ranks became very unhappy...
| 3,094
| 11,726
|
wadhwasahil
| 2025-07-15T14:39:17
|
`_move_model_to_remote_vllm` - I get OOM with PEFT because all parameters are gathered on a single GPU. However, without PEFT it works fine. Is there a way we can resolve this issue?
| 3,094
| 11,727
|
qgallouedec
| 2025-07-15T14:48:33
|
For PEFT we need to merge the adapter before moving the model to vLLM. But there is currently no way to do that in a distributed manner. So the answer is: for now, there is no solution.
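For context, on a single device the merge step itself is simple (a sketch with hypothetical paths); the missing piece is doing this across ZeRO-3 shards without gathering the whole model on one GPU:
```python
# Single-device sketch of merging a LoRA adapter (paths are hypothetical).
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = peft_model.merge_and_unload()  # folds the LoRA weights into the base weights
# `merged` is what would then be pushed to the vLLM server
```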
| 3,094
| 11,728
|
B-Gendron
| 2025-03-20T10:23:59
|
Hi @JWQZ,
It is still possible to run PPO with LoRA adapters using `trl==0.11.4`. Actually, the issue is not related to PEFT; it is related to the fact that, in PPO, the reward needs to be estimated, and this is done using a value head on top of the model. This head is hence part of the model structure, which is why the model class should be something like `AutoModelForCausalLMWithValueHead`. Therefore, what you should do is instantiate the model with this class, as follows:
```py
import torch
from peft import LoraConfig
from trl import AutoModelForCausalLMWithValueHead

# example lora config
lora_config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "v_proj"],
    bias="lora_only",
    lora_dropout=0.1,
)

# instantiate model with lora adapters
model = AutoModelForCausalLMWithValueHead.from_pretrained(
    config.model_name,
    device_map=device,
    peft_config=lora_config,  # this is where peft_config should be specified, not in PPOTrainer
    torch_dtype=torch.bfloat16,
)
```
If you need to plug fine-tuned adapters for further training, then simply update the weights of the initialized adapters without changing the model class, as follows:
```py
from peft import PeftModel

# load the fine-tuned adapter weights
adapter_model_name = 'path/to/your/adapter/model'
peft_model = PeftModel.from_pretrained(model, adapter_model_name)

# transfer weights using a state dict
lora_state_dict = {k: v for k, v in peft_model.state_dict().items() if "lora" in k}
model.load_state_dict(lora_state_dict, strict=False)

# make these parameters trainable (if desired)
for n, p in model.named_parameters():
    if 'lora' in n:
        p.requires_grad = True
```
Hope this helps!
| 3,093
| 11,729
|
JWQZ
| 2025-03-20T10:33:01
|
@B-Gendron Thank you very much for your reply, your approach looks good. Now I am modifying the source code based on trl==0.15.2 to suit my needs. If this doesn't work, I will adopt your approach.
| 3,093
| 11,730
|
qgallouedec
| 2025-03-15T02:28:26
|
I understand your question, but I can't see how you plan to combine the methods. For example, how do you combine DPO and GRPO? One is online, the other offline.
| 3,092
| 11,731
|
AMindToThink
| 2025-03-15T22:14:14
|
I figure that you can make an outer loop over num_steps.
Inside, you could calculate the loss for a batch of GRPO (by taking and checking model responses) and the loss for a batch of DPO (by measuring the probability of offline responses), then add the two losses together with a weighting factor and take a step:
L = (1 - alpha) * L_DPO + alpha * L_GRPO
Schedulers for the weighting factor and for the batch size would allow for expressive balances.
I don't see what the online/offline distinction means for combining trainers. It just means that instead of looping through the data of one, then the other, you loop through the data together and combine the loss functions.
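A rough sketch of the weighting idea, with placeholder losses standing in for the real DPO and GRPO objectives:
```python
# Sketch: combine an offline (DPO-like) loss and an online (GRPO-like) loss with a weight alpha.
import torch

model = torch.nn.Linear(4, 4)                 # placeholder policy
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
num_steps, alpha = 3, 0.5                     # alpha (and batch sizes) could follow a schedule

for step in range(num_steps):
    x = torch.randn(8, 4)
    dpo_loss = model(x).pow(2).mean()         # placeholder for the offline DPO loss
    grpo_loss = (model(x) - 1).pow(2).mean()  # placeholder for the online GRPO loss
    loss = (1 - alpha) * dpo_loss + alpha * grpo_loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```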
| 3,092
| 11,732
|
HuggingFaceDocBuilderDev
| 2025-03-14T21:20:16
|
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3091). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,091
| 11,733
|
qgallouedec
| 2025-03-14T21:25:05
|
No, actually there have been recurrent reports that SFT can't learn to generate EOS. I'm pretty sure #2405 re-introduced the bug reported in #1623
| 3,091
| 11,734
|
skandermoalla
| 2025-03-15T08:25:34
|
@qgallouedec I've faced this multiple times. I think it's just because of the (not so good) practice, in examples everywhere, of setting the pad token to the eos token. Then the SFT preprocessing masks everything that is a pad token (= eos token), including the real eos token in the chat template.
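A small illustration of the failure mode and one way around it (the model name is just an example; a dedicated pad token is one option among several):
```python
# If pad_token == eos_token, masking pad positions in the labels also masks the real EOS,
# so the model never learns to emit it. A dedicated pad token avoids the collision.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # example model

# common but problematic pattern:
tokenizer.pad_token = tokenizer.eos_token

# safer alternative (remember to resize the model embeddings afterwards):
tokenizer.add_special_tokens({"pad_token": "<|pad|>"})
```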
| 3,091
| 11,735
|
skandermoalla
| 2025-03-15T08:29:04
|
Personally, I don't think these forced patches are a good design. I understand that you want the Trainers to work out of the box, but users should still make sure they have a chat template that adds an eos properly. In case someone doesn't want an eos they can't do that anymore now.
(Same for the DPOTrainer btw, I think it adds an extra eos token somewhere.)
| 3,091
| 11,736
|
HwangYej1
| 2025-03-26T08:55:53
|
I got this `TypeError: 'Qwen2TokenizerFast' object is not subscriptable` after changing this code.
| 3,091
| 11,737
|
qgallouedec
| 2025-03-15T02:40:00
|
4.1.3 is about Process Supervision
Instead of giving a single reward per completion, process supervision provides multiple rewards at each step. However, I have concerns about whether the benefits outweigh the added complexity and usability challenges.
Some points:
- It requires users to adopt a PRM instead of the more common ORM. Since PRMs are less widely available, this shift could be difficult.
- Defining a process reward function isn’t straightforward, making implementation more complex for users.
- Figure 5 shows a slight advantage of PRMs over ORMs, but only in one of the two evaluations.
Given these factors, I’m unsure if the trade-off is justified.
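To make the interface difference concrete, a rough sketch of the two reward shapes (the per-step signature is illustrative, not an existing TRL API):
```python
# Outcome reward (ORM-style): one scalar per completion, which is what GRPOTrainer consumes today.
def outcome_reward(completions, **kwargs):
    return [float(len(c) < 100) for c in completions]

# Process reward (PRM-style): a list of per-step rewards for each completion.
# Supporting 4.1.3 would mean the trainer has to consume this richer shape.
def process_reward(completions, **kwargs):
    rewards = []
    for c in completions:
        steps = c.split("\n")  # assume one reasoning step per line
        rewards.append([float(len(s) > 0) for s in steps])
    return rewards
```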
| 3,090
| 11,738
|
tchang1997
| 2025-03-14T17:28:17
|
Could you define "doesn't work"? I have training running with PEFT + gradient checkpointing without issues, but I had to play around with the settings.
At a glance, the only major difference I see between our configs is that you might need `gradient_checkpointing_kwargs=dict(use_reentrant=True)` in your `GRPOConfig`.
| 3,089
| 11,739
|
binary-husky
| 2025-03-16T15:48:09
|
you must set `use_reentrant=True`
```
...
gradient_checkpointing_kwargs:
  use_reentrant: true
...
```
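Equivalently, when configuring in Python instead of YAML (a sketch; only the two checkpointing arguments matter here):
```python
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="outputs",
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": True},
)
```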
| 3,089
| 11,740
|
kimihailvfg
| 2025-03-17T09:52:09
|
I've tried setting `use_reentrant=True`; it works without PEFT, but doesn't work with PEFT: `element 0 of tensors does not require grad and does not have a grad_fn`
| 3,089
| 11,741
|
leosmith8004
| 2025-03-24T14:57:25
|
OMG, I have the same issue as you, have you solved it? Thanks for your reply.
| 3,089
| 11,742
|
maoulee
| 2025-03-14T09:37:20
|
Update vLLM to 0.7.3.
| 3,085
| 11,743
|
YueChenkkk
| 2025-03-14T11:37:47
|
This works for me
```
accelerate launch --num_processes 4 --gpu_ids 0,2,3,4,5 --config_file accelerate_configs/deepspeed_zero3.yaml train_grpo.py --vllm_device auto
```
| 3,085
| 11,744
|
qgallouedec
| 2025-04-01T18:14:50
|
Fixed in #3091 (also related #3200)
| 3,083
| 11,745
|
qgallouedec
| 2025-03-14T12:44:08
|
Thanks! Does it work if you directly modify `transformers.training_args._VALID_DICT_FIELDS` instead?
| 3,082
| 11,746
|
Tavish9
| 2025-03-14T12:56:22
|
Yes, but both `transformers.training_args` and `trl.GRPOConfig` should have their own independent `_VALID_DICT_FIELDS`, as a private attribute would.
In `GRPOConfig.__post_init__`, it first post-inits its own `_VALID_DICT_FIELDS` and then `transformers.training_args`'s.
| 3,082
| 11,747
|
qgallouedec
| 2025-03-14T13:52:40
|
It seems to work:
```python
from transformers.training_args import _VALID_DICT_FIELDS
from trl import GRPOConfig
_VALID_DICT_FIELDS.append("model_init_kwargs")
args = GRPOConfig("output_dir", model_init_kwargs='{"num_labels": 2}')
print(args.model_init_kwargs) # {"num_labels": 2}
```
| 3,082
| 11,748
|
qgallouedec
| 2025-03-14T13:59:50
|
To do this properly, the first step would be to convert `_VALID_DICT_FIELDS` into a class attribute of `TrainingArguments` in transformers. Are you ready to open such a PR in Transformers?
Then we could do:
```python
# in transformers
class TrainingArguments:
    _VALID_DICT_FIELDS = [...]

# in trl
class GRPOConfig(TrainingArguments):
    _VALID_DICT_FIELDS = TrainingArguments._VALID_DICT_FIELDS + ["model_init_kwargs"]
```
which eliminates the need to duplicate the post init
| 3,082
| 11,749
|
Tavish9
| 2025-03-14T14:09:34
|
> To do this properly, the first step would be to convert `_VALID_DICT_FIELDS` into a class attribute of `TrainingArguments` in transformers. Are you ready to open such a PR in Transformers?
>
> Then we could do:
>
> ```python
> # in transformers
> class TrainingArguments:
>     _VALID_DICT_FIELDS = [...]
>
> # in trl
> class GRPOConfig(TrainingArguments):
>     _VALID_DICT_FIELDS = TrainingArguments._VALID_DICT_FIELDS + ["model_init_kwargs"]
> ```
>
> which eliminates the need to duplicate the post init
Yes, that was my initial thought as well. However, considering that `transformers` defines `_VALID_DICT_FIELDS` as semi-private, I decided against submitting a PR to their repository. If we follow the semi-private-variable approach, each config should ideally have its own variable, even though this might lead to some code duplication in the `__post_init__` logic. That said, I'm also open to the idea of modifying the semi-private variable in `transformers` to make it a class attribute. However, I'm not sure if the maintainers would be receptive to this change in philosophy.
What are your suggestions?
| 3,082
| 11,750
|
qgallouedec
| 2025-03-14T14:56:57
|
Yes I think first modifying transformers is the way to go.
| 3,082
| 11,751
|
Tavish9
| 2025-03-14T16:42:24
|
Okay, I'll notify you when the PR is merged. :)
| 3,082
| 11,752
|
Tavish9
| 2025-04-01T10:54:14
|
Hi, @qgallouedec, the [PR](https://github.com/huggingface/transformers/pull/36736) in Transformers is merged. 🥳
| 3,082
| 11,753
|
qgallouedec
| 2025-04-02T05:02:51
|
I just need to review it carefully and ensure backwards compatibility
I'll do it asap.
| 3,082
| 11,754
|
HuggingFaceDocBuilderDev
| 2025-04-05T05:06:49
|
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,082
| 11,755
|
Tavish9
| 2025-04-07T04:02:34
|
Maybe you need to update the version of transformers and re-run the test?
| 3,082
| 11,756
|
qgallouedec
| 2025-04-07T04:06:59
|
So currently this change isn't backward compatible; we need to figure out how to make it backward compatible.
| 3,082
| 11,757
|
Tavish9
| 2025-04-07T04:41:17
|
okay, let me try with version checking
| 3,082
| 11,758
|
srinath1510
| 2025-03-14T01:27:18
|
Hi, it seems like `tokenizer.eos_token` is a string, which you are passing as the `padding_value`. The `torch.full()` function expects a numeric value for `padding_value`. I suggest trying the token id instead: `padding_value = tokenizer.eos_token_id`
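A quick illustration of the difference (the tokenizer choice is arbitrary):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # arbitrary example

# torch.full((2, 4), tokenizer.eos_token)  -> TypeError, the fill value must be a number
padded = torch.full((2, 4), tokenizer.eos_token_id, dtype=torch.long)  # works
```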
| 3,080
| 11,759
|
sivaganesh07
| 2025-03-14T17:49:44
|
Thanks that worked!
| 3,080
| 11,760
|
HuggingFaceDocBuilderDev
| 2025-03-13T22:28:17
|
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3079). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,079
| 11,761
|
qgallouedec
| 2025-03-13T22:38:22
|
What happens if none of the reward functions return a valid reward? We should add a warning, something like:
```python
# If all reward functions return None for a given row, issue a warning
if torch.isnan(rewards_per_func).all(dim=1).any():
    nan_row_idx = torch.isnan(rewards_per_func).all(dim=1).nonzero(as_tuple=True)[0][0]
    row_reward_kwargs = {key: value[nan_row_idx] for key, value in reward_kwargs.items()}
    row_reward_kwargs["prompt"] = prompts[nan_row_idx]
    row_reward_kwargs["completion"] = completions[nan_row_idx]
    warnings.warn(
        f"All reward functions returned None for the following kwargs: {row_reward_kwargs}. "
        "Please ensure that at least one reward function returns a valid reward."
    )
```
| 3,079
| 11,762
|
qgallouedec
| 2025-03-13T22:39:47
|
Please also add a unittest for such case
| 3,079
| 11,763
|
qgallouedec
| 2025-03-15T01:04:59
|
Let's see if the ci passes 🤞
| 3,079
| 11,764
|
qgallouedec
| 2025-03-15T01:05:24
|
You need to re-apply the pre-commits
| 3,079
| 11,765
|
shirinyamani
| 2025-03-15T01:11:24
|
> You need to re-apply the pre-commits
I did, committed the suggested changes, and pushed just now!
| 3,079
| 11,766
|
tchang1997
| 2025-03-13T23:04:19
|
What's your exact set of trainer args/training script? I noticed that if `self.beta == 0.0`, KL logging is skipped altogether, though that change may have been after 0.15.2.
| 3,078
| 11,767
|
tchang1997
| 2025-03-13T23:28:16
|
Huh, that's odd. Just to confirm, you're seeing all the other metrics, just not KL and loss? Just to make sure this isn't an unsloth thing, maybe try training w/ a tiny mockup dataset w/o unsloth? You might also try explicitly setting `beta` to something non-zero in the `GRPOConfig.`
FWIW, I haven't needed to explicitly `wandb.init` in my main script — I just let the trainer take care of that and set `WANDB_PROJECT="unsloth"` in the env, since I had some issues with duplicated runs/metrics not logging where I expected on wandb. But if logging was working before this is unlikely to be the issue.
| 3,078
| 11,768
|
SpaceHunterInf
| 2025-03-13T23:34:49
|
Here is a screen shot of my wandb workspace, one suspicious thing that I noticed is on the top left it says (6 of 13) but on the top right it says (1-6 of 6).
<img width="1446" alt="Image" src="https://github.com/user-attachments/assets/85013b62-0e8c-41cf-973a-bf041b2738c9" />
I will try to see if the beta thing works. I use wandb.init because I want to associate each run on wandb with my model settings as their wandb name.
Thanks
| 3,078
| 11,769
|
SpaceHunterInf
| 2025-03-14T15:18:28
|
This is the old comment I had, I just realized I accidentally pasted my wandb api key in the previous edit.......
Thanks for helping me.
I am actually using Unsloth + TRL GRPO. I can see the metrics in the cmd output of the wandb log, but not in the wandb workspace. The log itself contains all the things I need.
```{'loss': 0.0133, 'grad_norm': 2.667192220687866, 'learning_rate': 2.6140692393428204e-10, 'rewards/correctness_reward_func': 0.25, 'rewards/confidence_reward_func': 0.0, 'rewards/int_reward_func': 0.5, 'rewards/soft_format_reward_func': 0.5, 'reward': 1.25, 'reward_std': 1.3363062143325806, 'completion_length': 32.0, 'kl': 0.3319159746170044, 'epoch': 0.92}```
TLDR, I attached my training config and scripts below.
**Config**
```
cfg = TrainingConfig(
    dataset_name="ecqa",
    # Model settings
    model_name="Qwen/Qwen2.5-1.5B-Instruct",
    max_seq_length=1024,
    lora_rank=128,
    load_in_4bit=True,
    fast_inference=True,
    gpu_memory_utilization=0.8,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    # Training args
    use_vllm=True,
    learning_rate=5e-6,
    adam_beta1=0.9,
    adam_beta2=0.99,
    weight_decay=0.1,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",
    logging_steps=1,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=1,
    num_generations=8,
    max_prompt_length=256,
    max_completion_length=32,
    max_steps=7000,
    save_steps=1400,
    max_grad_norm=0.1,
    report_to="wandb",
    output_dir="outputs",
    wandb_name="ecqa_qwen_hpc",
    # Reward functions
    reward_funcs=[
        "correctness_reward_func",
        "confidence_reward_func",
        "int_reward_func",
        "soft_format_reward_func"
    ]
)
```
**Training Script**
```
import argparse
import json
import os, sys
from pathlib import Path

import torch
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import GRPOConfig, GRPOTrainer

from utils.data_utils import get_dataset
from utils.training_utils import (
    correctness_reward_func, confidence_reward_func,
    int_reward_func, soft_format_reward_func, load_config, save_config
)


def get_reward_functions(reward_func_names):
    """Map reward function names to actual functions"""
    reward_funcs_map = {
        "correctness_reward_func": correctness_reward_func,
        "confidence_reward_func": confidence_reward_func,
        "int_reward_func": int_reward_func,
        "soft_format_reward_func": soft_format_reward_func
    }
    return [reward_funcs_map[name] for name in reward_func_names]


def train(cfg):
    """Main training function"""
    # Load dataset
    train_dataset, dev_dataset, test_dataset = get_dataset(cfg.dataset_name)

    # Initialize model
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=cfg.model_name,
        max_seq_length=cfg.max_seq_length,
        load_in_4bit=cfg.load_in_4bit,
        fast_inference=cfg.fast_inference,
        max_lora_rank=cfg.lora_rank,
        gpu_memory_utilization=cfg.gpu_memory_utilization,
    )
    model = FastLanguageModel.get_peft_model(
        model,
        r=cfg.lora_rank,
        target_modules=cfg.target_modules,
        lora_alpha=cfg.lora_rank,
        use_gradient_checkpointing="unsloth",
        random_state=3407,
    )

    # Configure training arguments
    training_args = GRPOConfig(
        use_vllm=cfg.use_vllm,
        learning_rate=cfg.learning_rate,
        adam_beta1=cfg.adam_beta1,
        adam_beta2=cfg.adam_beta2,
        weight_decay=cfg.weight_decay,
        warmup_ratio=cfg.warmup_ratio,
        lr_scheduler_type=cfg.lr_scheduler_type,
        optim=cfg.optim,
        logging_steps=cfg.logging_steps,
        bf16=is_bfloat16_supported(),
        fp16=not is_bfloat16_supported(),
        per_device_train_batch_size=cfg.per_device_train_batch_size,
        gradient_accumulation_steps=cfg.gradient_accumulation_steps,
        num_generations=cfg.num_generations,
        max_prompt_length=cfg.max_prompt_length,
        max_completion_length=cfg.max_completion_length,
        max_steps=cfg.max_steps,
        save_steps=cfg.save_steps,
        max_grad_norm=cfg.max_grad_norm,
        report_to=cfg.report_to,
        output_dir=cfg.output_dir,
    )
    if cfg.num_train_epochs is not None:
        training_args.num_train_epochs = cfg.num_train_epochs

    # Set up reward functions
    reward_funcs = get_reward_functions(cfg.reward_funcs)

    # Initialize trainer
    trainer = GRPOTrainer(
        model=model,
        processing_class=tokenizer,
        reward_funcs=reward_funcs,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=dev_dataset,
    )

    # Start training
    trainer.train()


def main():
    parser = argparse.ArgumentParser(description="RL training script")
    parser.add_argument("--config", type=str, default=None, help="Path to config file")
    args = parser.parse_args()

    # Load configuration
    cfg = load_config(args.config)
    if cfg.report_to == 'wandb':
        import wandb
        os.environ['WANDB_API_KEY'] = MYAPI
        wandb.init(project="unsloth", config=cfg.__dict__, name=cfg.wandb_name)

    # Save configuration
    save_config(cfg, cfg.output_dir)

    # Start training
    train(cfg)

    if cfg.report_to == 'wandb':
        wandb.finish()
```
| 3,078
| 11,770
|
SpaceHunterInf
| 2025-03-14T19:12:01
|
... Alright, I know the reason why.. I am an absolute idiot.
I didn't realize there was something in the regex filter on wandb. Everything is indeed there.
| 3,078
| 11,771
|
HuggingFaceDocBuilderDev
| 2025-03-13T18:18:04
|
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3076). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,076
| 11,772
|
HuggingFaceDocBuilderDev
| 2025-03-13T17:21:47
|
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_3075). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
| 3,075
| 11,773
|
qgallouedec
| 2025-03-13T17:57:00
|
Thanks! Have you tried to fine-tune a VLM with the trainer? Do you have results to share?
| 3,072
| 11,774
|
CompN3rd
| 2025-03-14T08:55:26
|
Well, actual fine-tuning is still in progress and riddled with OOM issues, plus the quantization bug referenced in the unittest, and the full training script relies on a private dataset, but I can at least give a bit more information.
The training task I am currently looking at is fine-tuning a VLM (the language model part) to do image captioning that maximizes a CLIP cosine similarity score.
So the reward module looks like this:
```python
# imports added for completeness; _get_vector_norm comes from the CLIP modeling file this is copied from
from dataclasses import dataclass

import torch
from torch.nn.functional import cosine_similarity
from transformers import CLIPModel
from transformers.models.clip.modeling_clip import _get_vector_norm
from transformers.utils import ModelOutput


@dataclass
class CLIPRewardModelOutput(ModelOutput):
    logits: torch.FloatTensor
    """The reward logits for the Trainer."""


class CLIPRewardModel(CLIPModel):
    """Inherits from CLIPModel (i.e. PreTrainedModel), such that the forward computation gives the RL reward,
    but type-based training logic (accelerator.prepare) is still possible"""

    def forward(self, *, input_ids, attention_mask, pixel_values) -> torch.Tensor:
        """Mainly copy-paste from CLIPModel.forward up to the logit and loss computation"""
        vision_outputs = self.vision_model(
            pixel_values=pixel_values,
            output_attentions=False,
            output_hidden_states=False,
            interpolate_pos_encoding=False,
            return_dict=True,
        )
        text_outputs = self.text_model(
            input_ids=input_ids,
            attention_mask=attention_mask,
            position_ids=None,
            output_attentions=False,
            output_hidden_states=False,
            return_dict=True,
        )
        image_embeds = vision_outputs[1]
        image_embeds = self.visual_projection(image_embeds)
        text_embeds = text_outputs[1]
        text_embeds = self.text_projection(text_embeds)
        # normalized features
        image_embeds = image_embeds / _get_vector_norm(image_embeds)
        text_embeds = text_embeds / _get_vector_norm(text_embeds)
        # cosine similarity of the vectors as reward logits
        return CLIPRewardModelOutput(logits=cosine_similarity(image_embeds, text_embeds, dim=-1).unsqueeze(-1))
```
| 3,072
| 11,775
|
CompN3rd
| 2025-03-14T08:57:54
|
Then we have a small modification to the trainer via subclassing (which is why I proposed to split off the relevant code section into its own member function):
```python
class GRPOVlmClipTrainer(GRPOTrainer):
    def _prepare_inputs_for_reward_module(
        self,
        *,
        inputs: dict[str, torch.Tensor | Any],
        reward_processing_class: PreTrainedTokenizerBase,
        prompts: list[str],
        completions: list[str],
        images=None,
    ) -> dict[str, torch.Tensor | Any]:
        # disregard prompts, only prepare completions (captions) and images
        reward_inputs = reward_processing_class(
            images=images,
            text=completions,
            return_tensors="pt",
            padding=True,
            padding_side="right",
            add_special_tokens=True,
            truncation=True,
            max_length=77,
        )
        reward_inputs = super(GRPOTrainer, self)._prepare_inputs(reward_inputs)
        return reward_inputs
```
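And a rough usage sketch tying the two pieces together (the model names, the dataset variable, and some keyword names are placeholders that depend on this PR's API, so treat it as a sketch rather than a working recipe):
```python
# Sketch only: assumes this PR's image-aware GRPOTrainer plus the subclass above.
from transformers import CLIPProcessor
from trl import GRPOConfig

reward_model = CLIPRewardModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

trainer = GRPOVlmClipTrainer(
    model="HuggingFaceTB/SmolVLM-Instruct",    # placeholder VLM
    reward_funcs=reward_model,                 # scores caption/image similarity
    reward_processing_classes=clip_processor,  # consumed by _prepare_inputs_for_reward_module
    args=GRPOConfig(output_dir="vlm-grpo-clip"),
    train_dataset=captioning_dataset,          # placeholder dataset with "prompt" and "image" columns
)
trainer.train()
```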
| 3,072
| 11,776
|
CompN3rd
| 2025-03-14T09:04:51
|
Finally this leads to reward curves like this, which seem to indicate that it generally optimizes in the right direction.

| 3,072
| 11,777
|
MohamedAliRashad
| 2025-03-16T07:10:33
|
@CompN3rd If you can give me a simple guide on how to use your PR, I can help you with testing.
| 3,072
| 11,778
|
CompN3rd
| 2025-03-17T07:45:36
|
> @CompN3rd If you can give me a simple guide on how to use your PR i can help you with testing
Sure, if you want to get started with a semi-realistic example, I'd suggest starting with the setup from the unittest, which should be able to run on a 24 GB GPU (`test_grpo_trainer.py`, l. 900-987)
```python
@require_flash_attn
@require_bitsandbytes
@require_peft
@require_torch_accelerator
def test_vlm_training(self):
    model_name = "HuggingFaceTB/SmolVLM-Instruct"
    .....
```
The biggest question there is why 8-bit quantization works but 4-bit quantization breaks the test (or whether that is somehow expected behavior), so any input in that regard would be valuable.
Other than that, if you have access to GPUs with more VRAM, you could rewrite the test configuration to work without quantization, or alternatively replace the model with a smaller one...
| 3,072
| 11,779
|
MohamedAliRashad
| 2025-03-18T04:11:02
|
@CompN3rd I have tried this preprocessing function:
```python
def format_data(row):
    base64_image = encode_image(row["image"])
    prompt = "Extract all text from the given image and format it using Markdown syntax. Preserve headings, lists, bold/italic text, and other structural elements. Ensure the output is clean and readable in Markdown format."
    messages = [
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "image": f"data:image/jpeg;base64,{base64_image}",
                },
                {"type": "text", "text": prompt},
            ],
        }
    ]

    # Preparation for inference
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    )
    return inputs
```
and it gave me `KeyError: 'prompt'` (I am training `Qwen/Qwen2.5-VL-3B-Instruct`).
| 3,072
| 11,780
|
CompN3rd
| 2025-03-19T10:56:11
|
@MohamedAliRashad If I understand this correctly, it is probably because the `processor` in your case returns already tokenized `input_ids` and probably `pixel_values` or whatever fields are associated with image/video processing.
That is not the type of data the `GRPOTrainer` expects (even before this current PR). In the internal preprocessing function of the trainer, it accesses `prompt` in the input dictionary (and this PR adds `image`, which is expected to be a raw NumPy or PIL image, not a base64 string).
Then it internally calls the processor and goes from there.
TL;DR: Your data preprocessing probably interferes with the input data preparation done in the trainer class.
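For illustration, a minimal row in the shape the trainer expects with this PR (the `prompt`/`image` fields come from the discussion above; the rest is a made-up example):
```python
# Minimal example row: a raw conversational prompt plus a raw PIL image.
# The trainer applies the chat template and calls the processor itself.
from PIL import Image

example = {
    "prompt": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ],
    "image": Image.new("RGB", (224, 224), color="white"),  # placeholder image
}
```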
| 3,072
| 11,781
|
nph4rd
| 2025-03-19T14:07:30
|
@CompN3rd - so how would one preprocess the data or tell the trainer how to process it? For example, as far as I understand, Qwen2.5-VL uses qwen-vl-util's `process_vision_info`.
Based on your changes, what would be the best approach to use that during the input preparation?
| 3,072
| 11,782
|
MohamedAliRashad
| 2025-03-19T20:12:27
|
@CompN3rd I changed the preprocessing to be closer to what you have in the test file and it worked wonderfully.
I did full fine-tuning of Qwen 2.5 VL 3B and it worked on an 80 GB GPU.
| 3,072
| 11,783
|
nph4rd
| 2025-03-19T20:45:00
|
@MohamedAliRashad - do you mind sharing the setup/code you used for that?
| 3,072
| 11,784
|
nph4rd
| 2025-03-20T17:52:18
|
I just tested the following with [this dummy dataset](agentsea/vqa-test-formatted) using 4 A100 80GB:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import copy

model_id = "Qwen/Qwen2.5-VL-3B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    use_cache=False,
)
processor = AutoProcessor.from_pretrained(model_id, padding_side="left")

dataset = load_dataset("agentsea/vqa-test-formatted", split="train")
dataset = dataset.remove_columns(["completion"])

def preprocess_vision_info(examples):
    examples_copy = copy.deepcopy(examples)
    batch_size = len(examples["prompt"])
    examples["image"] = []
    for i in range(batch_size):
        prompt_data = examples_copy["prompt"][i]
        image_data = examples_copy["image"][i]
        for message in prompt_data:
            for content in message["content"]:
                if isinstance(content, dict) and content.get("type") == "image":
                    content["image"] = image_data
        processed_images, _ = process_vision_info(prompt_data)
        examples["image"].extend(processed_images)
    return examples

dataset = dataset.with_transform(preprocess_vision_info)

def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(
    output_dir="Qwen2.5-VL-3B-GRPO",
    logging_steps=1,
    use_vllm=True,
    bf16=True,
    gradient_checkpointing=True,
    per_device_train_batch_size=1,
    num_generations=3,
    max_prompt_length=None,
    vllm_device="cuda:3",
)
trainer = GRPOTrainer(
    model=model,
    processing_class=processor,
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```
However I'm encountering this error:
```
ValueError: Attempted to assign 5185 + 5185 + 5185 = 15555 multimodal tokens to 31107 placeholders
```
Upon further inspection I found that the code works if I make the following change in [this line specifically](https://github.com/CompN3rd/trl/blob/328ef463a776a00d02decc8bf7e5f8cfbe215c03/trl/trainer/grpo_trainer.py#L823):
from:
```python
prompt_inputs = self.processing_class(
    text=prompts_text,
    images=images,
    return_tensors="pt",
    padding=True,
    padding_side="left",
    add_special_tokens=False,
)
```
to:
```python
prompt_inputs = self.processing_class(
    text=prompts_text.copy(),  # send a copy instead
    images=images,
    return_tensors="pt",
    padding=True,
    padding_side="left",
    add_special_tokens=False,
)
```
What is happening is that the processor class is mutating the input [here](https://github.com/huggingface/transformers/blob/42c489f2ae738a3b690bb90aab274f02ff024795/src/transformers/models/qwen2_5_vl/processing_qwen2_5_vl.py#L156C21-L156C25). So vLLM complains because it's receiving the modified `prompts_text`. You can test the side-effect in [this script](https://gist.github.com/nph4rd/f003323ac4c8940f779f44a24b815ff7).
I don't think this is an issue that should be handled by either TRL or vLLM. I think it should be handled at the source, in the processor's code.
I can see that the [SmolVLM processor class](https://github.com/huggingface/transformers/blob/main/src/transformers/models/smolvlm/processing_smolvlm.py) doesn't have this kind of side-effect. But Qwen2.5-VL's does, so I do wonder how @MohamedAliRashad made it work. I presume it's because `prompts_text` is a string and not a list when using 1 GPU with `num_generations=1`?
----
EDIT: fwiw - I raised https://github.com/huggingface/transformers/issues/36865 + opened https://github.com/huggingface/transformers/pull/36866
| 3,072
| 11,785
|
MohamedAliRashad
| 2025-03-25T13:18:04
|
@nph4rd The error you are seeing is because of your context size limit.
Qwen (unlike other models) doesn't give a fixed number of tokens for images of different shapes; the number of tokens changes based on the size of the input image. If I am not mistaken, every `28x28` pixel patch is one token for them.
What you need to do is resize your images so they fit within your acceptable context window. Also, I didn't use `process_vision_info` and it worked fine for me, so you may consider removing it and sending the PIL images as they are.
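For example, a small sketch of bounding the image size before handing it to the trainer (the 28-pixel patch size comes from Qwen's processor; the 512-pixel cap is an arbitrary choice):
```python
# Cap the resolution so the number of vision tokens (~ (w/28) * (h/28) for Qwen2.5-VL)
# stays well below the context window.
from PIL import Image

def shrink_image(img: Image.Image, max_side: int = 512) -> Image.Image:
    img = img.copy()
    img.thumbnail((max_side, max_side))  # preserves aspect ratio, only shrinks
    return img
```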
| 3,072
| 11,786
|
CompN3rd
| 2025-03-25T13:26:41
|
@qgallouedec Let me know if there are refactoring or api changes necessary to make this ready for merging. Would be happy to make those adjustments.
| 3,072
| 11,787
|
nph4rd
| 2025-03-25T16:47:30
|
@MohamedAliRashad / @CompN3rd - thanks for the comments. I don't understand why it would work with the change I shared but not without it though? 🤔 With that change I didn't have to resize the images for it to work.
Another thing I found is that when I set `log_completions=True`, the training was stuck at this line:
https://github.com/huggingface/trl/blob/e94b5facd44764d425bdb110784dd86794ef7a05/trl/trainer/grpo_trainer.py#L1026
Specifically, the `gather_object(images)` call was timing out. This might be my image sizes again, but I thought I'd let you know in case you hadn't tested `log_completions`.
| 3,072
| 11,788
|
CompN3rd
| 2025-03-25T17:02:21
|
@nph4rd Thanks for testing it out. I concur with @MohamedAliRashad's observations; I could produce such errors mostly by having too small a context window.
As for the `log_completions` error, I had a version where not all processes participated in the communication, which obviously failed.
So far I have a test with 2 GPUs locally, which worked well, as well as a cloud test, but that was only one A40 GPU.
Both produced images in Weights & Biases, but I admit I haven't run multi-node tests.

| 3,072
| 11,789
|
sunildkumar
| 2025-04-02T03:55:13
|
Eagerly awaiting this (https://github.com/huggingface/trl/issues/2734 - 2 months and counting)! @CompN3rd Let me know if and how I can help. I've been training VLMs with GRPO for a while now, just not on TRL `main`.
| 3,072
| 11,790
|
sunildkumar
| 2025-04-05T05:18:45
|
@qgallouedec – I hope you don’t mind the tag. It’s been a couple of weeks since @CompN3rd has engaged with this PR, and I really appreciate the work that’s been done so far. I’d love to help move it forward if that’s appropriate.
I’m not entirely sure what the etiquette is in cases like this—would it be okay to open a follow-up PR branching off of this one, or would you recommend waiting longer? Apologies if this is a naive question, and thank you in advance for any guidance.
| 3,072
| 11,791
|
qgallouedec
| 2025-04-05T05:42:01
|
Thanks again for your work on this, and sorry for the slow response, be sure we're doing our best. It's a valuable feature and makes a lot of sense to include. That said, it requires thorough review, testing, and documentation before merging, and at the moment we don’t have the capacity to give it the attention it needs. I’ll make sure to revisit it as soon as I can.
In the meantime, keeping the PR open is a great idea. It allows the community to test it, report any issues, and benefit from the feature.
And to your question — yes, feel free to open a follow-up PR based on this one. That’s totally fine and actually very helpful. No need to wait.
| 3,072
| 11,792
|
sunildkumar
| 2025-04-05T05:49:52
|
@qgallouedec - totally understood. Thanks for your advice!
| 3,072
| 11,793
|
Benjoyo
| 2025-05-03T20:33:43
|
Anyone actively working on GRPO support for VLMs still? 🙏
| 3,072
| 11,794
|
chaodreaming
| 2025-05-08T09:30:33
|
How long will it take to support grpo?
| 3,072
| 11,795
|
chaodreaming
| 2025-05-08T09:31:19
|
About how long will it take to be able to support GRPO? A lot of people are very excited about this; thank you very much for your contribution!
| 3,072
| 11,796
|
mccatec
| 2025-05-13T03:09:28
|
<img width="593" alt="image" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc" />
Noticed this from @qgallouedec 's x 🙏
| 3,072
| 11,797
|
chaodreaming
| 2025-05-14T08:42:47
|
> <img alt="image" width="593" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc">
> Noticed this from @qgallouedec's x 🙏
Where did you see this news? Could you give me a link? I'll be keeping an eye on the progress.
| 3,072
| 11,798
|
mccatec
| 2025-05-14T08:45:59
|
> > <img alt="image" width="593" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc">
> > Noticed this from @qgallouedec's x 🙏
>
> Where did you see this news? Could you give me a link? I'll be keeping an eye on the progress.
https://x.com/QGallouedec/status/1919806234821026141
| 3,072
| 11,799
|
chaodreaming
| 2025-05-14T08:53:19
|
> > > <img alt="image" width="593" src="https://github.com/user-attachments/assets/7050a162-f429-4520-8bea-8055a48248cc">
> > > Noticed this from @qgallouedec's x 🙏
> >
> > Where did you see this news? Could you give me a link? I'll be keeping an eye on the progress.
>
> https://x.com/QGallouedec/status/1919806234821026141
Thank you very much, I saw that. He is a very capable person; I'm sure TRL will soon support multimodal GRPO.
| 3,072
| 11,800
|
nph4rd
| 2025-05-22T00:55:58
|
Hey, in case you want to try GRPO+VLM while it's not supported here, I wrote this up based on TRL's code and @CompN3rd's PR:
https://github.com/nph4rd/grpo_vlm
| 3,072
| 11,801
|
chaodreaming
| 2025-05-24T00:58:14
|
> Hey, in case you want to try GRPO+VLM while it's not supported here, I wrote this up based on TRL's code and @CompN3rd's PR:
>
> https://github.com/nph4rd/grpo_vlm
You should open a PR to become a contributor, it's good for the resume!
| 3,072
| 11,802
|
sergiopaniego
| 2025-07-09T09:27:34
|
Hi! 👋
I've created an [example notebook](https://colab.research.google.com/drive/1xr_3vEFfuCg2qg9OCrylIsweDh-MnF5r?usp=sharing) with the current PR's content for fine tuning a VLM using GRPO. It seems to be working, so maybe it's time to revive this PR, update it, and consider merging it soon!
| 3,072
| 11,803
|
kashif
| 2025-07-10T13:57:05
|
Would you like to edit the function here as a "suggestion"? Then I can commit it directly and you will have a commit here.
| 3,072
| 11,804
|