Learning Rate Matters: Vanilla LoRA May Suffice for LLM Fine-tuning Paper • 2602.04998 • Published 8 days ago • 6
Falcon-H1R: Pushing the Reasoning Frontiers with a Hybrid Model for Efficient Test-Time Scaling Paper • 2601.02346 • Published Jan 5 • 26
LLM-JEPA: Large Language Models Meet Joint Embedding Predictive Architectures Paper • 2509.14252 • Published Sep 11, 2025 • 5