LoRA (Low-Rank Adaptation) is a popular lightweight method for fine-tuning AI models. Instead of updating the full model, it adds small trainable components (low-rank matrices) while keeping the original weights frozen. Only these adapters are trained.
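To make the idea concrete, here is a minimal sketch of a LoRA-style linear layer in PyTorch. It is not any specific library's API; the class name, rank, and alpha values are illustrative assumptions:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Sketch: frozen base weight plus a trainable low-rank update B @ A."""
    def __init__(self, in_features, out_features, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # original weights stay frozen
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)  # trainable adapter
        self.B = nn.Parameter(torch.zeros(out_features, rank))        # trainable adapter, zero-init
        self.scaling = alpha / rank

    def forward(self, x):
        # frozen path + scaled low-rank update: W x + (alpha/r) * B A x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```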
Recently, many interesting new LoRA variations have come out, so it's a great time to take a look at these 13 clever approaches:
2. SingLoRA → SingLoRA: Low Rank Adaptation Using a Single Matrix (2507.05566) Simplifies LoRA by using only one small matrix instead of the usual two and multiplying it by its own transpose (like A × Aᵀ). It uses half the parameters of LoRA and avoids the scale mismatch between the two matrices.
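A rough sketch of that single-matrix idea, under the simplifying assumption of a square weight matrix (the paper also covers rectangular layers and a warm-up schedule for A, omitted here):

```python
import torch
import torch.nn as nn

class SingLoRALinear(nn.Module):
    """Sketch: one trainable matrix A; the update is the symmetric product A @ A.T."""
    def __init__(self, features, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(features, features)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(features, rank) * 0.01)  # the single small matrix
        self.scaling = alpha / rank

    def forward(self, x):
        delta = self.A @ self.A.T  # A × Aᵀ: low-rank, symmetric, half the parameters of B @ A
        return self.base(x) + self.scaling * (x @ delta)
```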
Collection of approximately 58 million Russian forum messages featuring:
- Complete message content from Russian online forums spanning 2010-2025
- Comprehensive metadata including unique message IDs and timestamps
- Full text content preserving original user discussions and interactions
- Monolingual dataset focused exclusively on Russian language content
This dataset offers a unique textual archive of Russian online conversations suitable for text generation, sentiment analysis, and language modeling research. Released into the public domain under the CC0 1.0 license.
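If the corpus is published on the Hugging Face Hub, loading it could look roughly like this. The dataset ID and column names below are placeholders, not the real ones; check the dataset card for the actual values:

```python
from datasets import load_dataset

# Hypothetical dataset ID and column names for illustration only.
ds = load_dataset("username/russian-forum-messages", split="train")
print(ds[0]["text"])       # a forum message
print(ds[0]["timestamp"])  # its timestamp from the metadata
```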