Beyond Binary Preference: Aligning Diffusion Models to Fine-grained Criteria by Decoupling Attributes
Abstract
A two-stage framework for diffusion model alignment using hierarchical evaluation criteria and complex preference optimization demonstrates improved generation quality and expert alignment.
Post-training alignment of diffusion models relies on simplified signals, such as scalar rewards or binary preferences. This limits alignment with complex human expertise, which is hierarchical and fine-grained. To address this, we first construct hierarchical, fine-grained evaluation criteria with domain experts, which decompose image quality into multiple positive and negative attributes organized in a tree structure. Building on this, we propose a two-stage alignment framework. First, we inject domain knowledge into an auxiliary diffusion model via Supervised Fine-Tuning. Second, we introduce Complex Preference Optimization (CPO), which extends Direct Preference Optimization (DPO) to align the target diffusion model with our non-binary, hierarchical criteria. Specifically, we reformulate the alignment problem to simultaneously maximize the probability of positive attributes while minimizing the probability of negative attributes under the auxiliary diffusion model. We instantiate our approach in the domain of painting generation and conduct CPO training on a dataset of paintings annotated with fine-grained attributes based on our criteria. Extensive experiments demonstrate that CPO significantly enhances generation quality and alignment with expertise, opening new avenues for fine-grained criteria alignment.
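For intuition, here is a minimal sketch of how a DPO-style objective can be extended from a single winner/loser pair to sets of attributes, in the spirit the abstract describes. The notation below (policies $\pi_\theta$ and $\pi_{\mathrm{aux}}$, attribute sets $\mathcal{P}$ and $\mathcal{N}$, temperature $\beta$) is illustrative and is not the paper's own formulation.

Standard DPO contrasts a preferred sample $x^w$ against a dispreferred one $x^l$ relative to a reference model:

\[
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x^w, x^l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(x^w)}{\pi_{\mathrm{ref}}(x^w)} - \beta \log \frac{\pi_\theta(x^l)}{\pi_{\mathrm{ref}}(x^l)}\right)\right].
\]

An attribute-level variant, assumed here as a sketch, would replace the binary pair with the annotated positive attributes $\mathcal{P}$ and negative attributes $\mathcal{N}$ of an image $x$, using the SFT-tuned auxiliary model $\pi_{\mathrm{aux}}$ as the reference:

\[
\mathcal{L}_{\mathrm{CPO}}(\theta) \approx -\,\mathbb{E}_{x}\!\left[\log \sigma\!\left(\beta \sum_{a \in \mathcal{P}} \log \frac{\pi_\theta(x \mid a)}{\pi_{\mathrm{aux}}(x \mid a)} - \beta \sum_{b \in \mathcal{N}} \log \frac{\pi_\theta(x \mid b)}{\pi_{\mathrm{aux}}(x \mid b)}\right)\right].
\]

The exact weighting of attributes and the handling of the hierarchy are specified in the paper; the expression above only illustrates the "maximize positives, minimize negatives" structure.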
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Direct Diffusion Score Preference Optimization via Stepwise Contrastive Policy-Pair Supervision (2025)
- Mind the Generative Details: Direct Localized Detail Preference Optimization for Video Diffusion Models (2026)
- PC-Diffusion: Aligning Diffusion Models with Human Preferences via Preference Classifier (2025)
- Multi-dimensional Preference Alignment by Conditioning Reward Itself (2025)
- Taming Preference Mode Collapse via Directional Decoupling Alignment in Diffusion Reinforcement Learning (2025)
- Diffusion-DRF: Differentiable Reward Flow for Video Diffusion Fine-Tuning (2026)
- BideDPO: Conditional Image Generation with Simultaneous Text and Condition Alignment (2025)