FLAC: Maximum Entropy RL via Kinetic Energy Regularized Bridge Matching
Abstract
Field Least-Energy Actor-Critic (FLAC) tackles maximum entropy reinforcement learning with iterative generative policies by regulating policy stochasticity through a kinetic-energy proxy derived from a generalized Schrödinger bridge formulation.
Iterative generative policies, such as diffusion models and flow matching, offer superior expressivity for continuous control but complicate Maximum Entropy Reinforcement Learning because their action log-densities are not directly accessible. To address this, we propose Field Least-Energy Actor-Critic (FLAC), a likelihood-free framework that regulates policy stochasticity by penalizing the kinetic energy of the velocity field. Our key insight is to formulate policy optimization as a Generalized Schrödinger Bridge (GSB) problem relative to a high-entropy reference process (e.g., uniform). Under this view, the maximum-entropy principle emerges naturally as staying close to a high-entropy reference while optimizing return, without requiring explicit action densities. In this framework, kinetic energy serves as a physically grounded proxy for divergence from the reference: minimizing path-space energy bounds the deviation of the induced terminal action distribution. Building on this view, we derive an energy-regularized policy iteration scheme and a practical off-policy algorithm that automatically tunes the kinetic energy via a Lagrangian dual mechanism. Empirically, FLAC achieves superior or comparable performance on high-dimensional benchmarks relative to strong baselines, while avoiding explicit density estimation.
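The abstract describes an energy-regularized actor update in which the kinetic energy of the velocity field is penalized and its weight is auto-tuned by a Lagrangian dual mechanism. The PyTorch sketch below illustrates one plausible form of such an update under stated assumptions (Euler integration of the generative flow, a uniform high-entropy reference, and hypothetical names such as `velocity_net`, `critic`, `log_alpha`, and `target_energy`); it is not the paper's reference implementation.

```python
# Minimal sketch of a FLAC-style actor update (assumptions noted in comments,
# not the paper's official code).
import torch

def flac_actor_step(velocity_net, critic, log_alpha, states,
                    actor_opt, alpha_opt, target_energy, num_steps=8):
    """One actor update: integrate the velocity field to sample actions,
    penalize its kinetic energy, and adapt the penalty weight via a dual step."""
    batch, device = states.shape[0], states.device
    # Sample from a high-entropy (here uniform) reference as the flow's starting point.
    a = torch.rand(batch, velocity_net.action_dim, device=device)  # action_dim: assumed attribute
    dt = 1.0 / num_steps
    kinetic_energy = 0.0
    for k in range(num_steps):                        # Euler integration of the generative flow
        t = torch.full((batch, 1), k * dt, device=device)
        v = velocity_net(states, a, t)                # velocity field v_theta(s, a_t, t), assumed signature
        kinetic_energy = kinetic_energy + 0.5 * (v ** 2).sum(dim=-1) * dt
        a = a + dt * v                                # move the action sample along the flow

    alpha = log_alpha.exp().detach()                  # current kinetic-energy weight
    q = critic(states, a).squeeze(-1)                 # differentiable through the sampled action
    actor_loss = (alpha * kinetic_energy - q).mean()  # maximize return, penalize path-space energy

    # actor_opt is assumed to hold only velocity_net parameters, so the critic is not updated here.
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Lagrangian dual step: raise alpha when the batch energy exceeds its target, lower it otherwise.
    alpha_loss = -(log_alpha * (kinetic_energy.detach() - target_energy)).mean()
    alpha_opt.zero_grad(); alpha_loss.backward(); alpha_opt.step()
    return actor_loss.item(), alpha_loss.item()
```

Analogous to SAC-style temperature tuning, the dual update keeps the kinetic energy near a target budget, which in this framework stands in for keeping the policy close to the high-entropy reference process.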
Community
FLAC is a likelihood-free RL method using kinetic energy regularization and a generalized Schrödinger bridge to stay near a high-entropy reference while optimizing return, avoiding explicit density estimation.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Boosting Maximum Entropy Reinforcement Learning via One-Step Flow Matching (2026)
- Reparameterization Flow Policy Optimization (2026)
- DeFlow: Decoupling Manifold Modeling and Value Maximization for Offline Policy Extraction (2026)
- PolicyFlow: Policy Optimization with Continuous Normalizing Flow in Reinforcement Learning (2026)
- Latent Spherical Flow Policy for Reinforcement Learning with Combinatorial Actions (2026)
- How Does the Lagrangian Guide Safe Reinforcement Learning through Diffusion Models? (2026)
- Flow Matching for Offline Reinforcement Learning with Discrete Actions (2026)