# ROME-30B-A3B
ROME (ROME is Obviously an Agentic ModEl) is an open-source agentic model incubated within the ALE (Agentic Learning Ecosystem).
Rather than scaling performance purely by increasing parameter count, ROME achieves parameter-scale-crossing performance through full-stack infrastructure integration and advanced reinforcement learning optimization.
## Highlights
### ALE Full-Stack Infrastructure
- ROLL – Large-scale reinforcement learning optimization engine
- ROCK – Secure sandbox and environment orchestration for agent execution
- iFlow CLI – Unified agent framework and developer interface
### IPA Policy Optimization Algorithm
- Introduces Interaction-Perceptive Agentic Policy Optimization (IPA)
- Performs credit assignment at the level of Semantic Interaction Chunks (a schematic sketch follows this list)
- Significantly improves training stability and success rates on long-horizon tasks
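
For intuition, the sketch below shows what chunk-level credit assignment can look like: a trajectory is segmented into semantic interaction chunks (e.g., reasoning segments, tool calls, observations), and each chunk receives a single advantage that is shared by all of its tokens. This is an illustrative sketch only; the `Chunk` type, the segmentation, and the mean baseline are assumptions, not the official IPA implementation (see the technical report for the actual algorithm).

```python
# Illustrative sketch of chunk-level credit assignment (NOT the official
# IPA implementation; the segmentation and baseline are assumptions).
from dataclasses import dataclass

@dataclass
class Chunk:
    """One semantic interaction chunk: a contiguous token span covering,
    e.g., a reasoning segment, a tool call, or a tool observation."""
    start: int   # index of the chunk's first token in the trajectory
    end: int     # one past the chunk's last token
    kind: str    # e.g. "reasoning", "tool_call", "observation"

def chunk_level_advantages(chunks, token_rewards, gamma=1.0):
    """Aggregate token rewards into one return per chunk, subtract a mean
    baseline, and broadcast the resulting advantage to the chunk's tokens."""
    returns = []
    for c in chunks:
        discount, ret = 1.0, 0.0
        for r in token_rewards[c.start:c.end]:
            ret += discount * r
            discount *= gamma
        returns.append(ret)
    baseline = sum(returns) / len(returns)      # simple mean baseline
    token_adv = [0.0] * len(token_rewards)
    for c, ret in zip(chunks, returns):
        for t in range(c.start, c.end):
            token_adv[t] = ret - baseline       # shared within the chunk
    return token_adv

# Example: three chunks, sparse terminal reward on the last token.
chunks = [Chunk(0, 4, "reasoning"), Chunk(4, 7, "tool_call"), Chunk(7, 10, "observation")]
print(chunk_level_advantages(chunks, [0.0] * 9 + [1.0]))
```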
### Strong Agentic Performance
Despite being a mid-sized model (a 30B-parameter MoE with 3B active parameters; see the toy routing sketch below), ROME outperforms same-scale models on standard agent benchmarks:
- Terminal-Bench 2.0: 24.72%
- SWE-bench Verified: 57.40%
Performance is competitive with, and in some cases exceeds, models with more than 100B parameters.
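
To make the "30B total / 3B active" distinction concrete, here is a toy top-k mixture-of-experts layer: the router selects only k of n experts per token, so the parameters exercised per token are a small fraction of the total. The module below is purely illustrative and is not ROME's actual architecture code.

```python
# Toy top-k mixture-of-experts layer (illustrative only; not ROME's code).
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_experts)])
        self.k = k

    def forward(self, x):                      # x: (tokens, d_model)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)      # mix the k selected experts
        out = torch.zeros_like(x)
        for slot in range(self.k):             # only k of n_experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(5, 64)                         # 5 tokens
print(TopKMoE()(x).shape)                      # torch.Size([5, 64])
```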
### Production-Grade Safety
- Designed for autonomous agent execution in real environments
- Rigorously aligned and red-teamed against risks such as:
  - Unauthorized access
  - Illegal or unsafe tool invocation
- Built with deployment-grade safety guarantees in mind
## Performance (Preview)
### Terminal-Based Benchmarks
| Model | Terminal-Bench 2.0 | SWE-bench Verified |
|---|---|---|
| Qwen3-Coder-30B-A3B-Instruct | 13.48% | 46.33% |
| ROME-30B-A3B | 24.72% | 57.40% |
| GPT-OSS-120B | 21.12% | 43.93% |
| GLM-4.5 Air (106B) | 17.30% | 56.20% |
See the technical report for full experimental details.
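
As a quick orientation, here is a minimal inference sketch using the Hugging Face `transformers` library. The repository id is a placeholder, and the chat template and generation settings are assumptions; consult the official release for the supported setup.

```python
# Minimal inference sketch with Hugging Face transformers.
# The repo id below is a PLACEHOLDER, not a confirmed repository; check the
# official release for the actual id and recommended generation settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/ROME-30B-A3B"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "List the files in the repo and summarize each one."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```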
## Citation
If you find our work useful, please consider citing:
```bibtex
@article{rome2025ale,
  title={Let It Flow: Agentic Crafting on Rock and Roll - Building the ROME Model within an Open Agentic Learning Ecosystem},
  author={Wang, Weixun and Xu, XiaoXiao and An, Wanhe and Dai, Fangwen and others},
  journal={arXiv preprint arXiv:2512.24873},
  year={2025}
}
```