Speed by Simplicity: A Single-Stream Architecture for Fast Audio-Video Generative Foundation Model
Abstract
daVinci-MagiHuman is an open-source audio-video generative model that synchronizes text, video, and audio through a single-stream Transformer architecture, achieving high-quality human-centric content generation with efficient inference capabilities.
We present daVinci-MagiHuman, an open-source audio-video generative foundation model for human-centric generation. daVinci-MagiHuman jointly generates synchronized video and audio using a single-stream Transformer that processes text, video, and audio within a unified token sequence via self-attention only. This single-stream design avoids the complexity of multi-stream or cross-attention architectures while remaining easy to optimize with standard training and inference infrastructure. The model is particularly strong in human-centric scenarios, producing expressive facial performance, natural speech-expression coordination, realistic body motion, and precise audio-video synchronization. It supports multilingual spoken generation across Chinese (Mandarin and Cantonese), English, Japanese, Korean, German, and French. For efficient inference, we combine the single-stream backbone with model distillation, latent-space super-resolution, and a Turbo VAE decoder, enabling generation of a 5-second 256p video in 2 seconds on a single H100 GPU. In automatic evaluation, daVinci-MagiHuman achieves the highest visual quality and text alignment among leading open models, along with the lowest word error rate (14.60%) for speech intelligibility. In pairwise human evaluation, it achieves win rates of 80.0% against Ovi 1.1 and 60.9% against LTX 2.3 over 2000 comparisons. We open-source the complete model stack, including the base model, the distilled model, the super-resolution model, and the inference codebase.
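The abstract's core claim is that a single token sequence plus plain self-attention can replace multi-stream or cross-attention designs. A minimal numpy sketch of that idea, with hypothetical token counts and dimensions (the real model's layer structure and sizes are not given here):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d, seed=0):
    # Random projections stand in for learned weights (illustration only).
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))  # every token attends to every token
    return attn @ V

d = 16
text  = np.random.default_rng(1).standard_normal((4, d))  # text tokens
video = np.random.default_rng(2).standard_normal((8, d))  # video latent tokens
audio = np.random.default_rng(3).standard_normal((6, d))  # audio latent tokens

# Single-stream: concatenate all modalities into one sequence, then apply
# ordinary self-attention -- no per-modality streams, no cross-attention.
seq = np.concatenate([text, video, audio], axis=0)
out = self_attention(seq, d)
print(out.shape)  # (18, 16)
```

Because the attention matrix spans the full concatenated sequence, audio tokens can attend to video tokens (and vice versa) through the same mechanism used within a modality, which is what makes the design easy to run on standard Transformer infrastructure.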
Community
Remarkable publication.
This work will lead to many future developments.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SkyReels-V4: Multi-modal Video-Audio Generation, Inpainting and Editing model (2026)
- UniTalking: A Unified Audio-Video Framework for Talking Portrait Generation (2026)
- Improving Joint Audio-Video Generation with Cross-Modal Context Learning (2026)
- ID-LoRA: Identity-Driven Audio-Video Personalization with In-Context LoRA (2026)
- DreamID-Omni: Unified Framework for Controllable Human-Centric Audio-Video Generation (2026)
- MOVA: Towards Scalable and Synchronized Video-Audio Generation (2026)
- JUST-DUB-IT: Video Dubbing via Joint Audio-Visual Diffusion (2026)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend
The single-stream Transformer with a sandwich layout is a neat way to dodge cross-attention while keeping cross-modal alignment in check. The design choice of modality-specific input and output layers around shared middle layers is clever, but I wonder how it handles modality-specific inductive biases as you scale to longer videos and more nuanced speech-gesture coordination. Two seconds for a 5-second 256p video on a single H100 is impressive, but how does this translate to consumer GPUs or extended real-time workflows? BTW, the arxivLens breakdown does a nice job unpacking this approach, and the link helped me parse the section on the sandwich architecture: https://arxivlens.com/PaperView/Details/speed-by-simplicity-a-single-stream-architecture-for-fast-audio-video-generative-foundation-model-5420-36f7891e. My one question is an ablation prompt: what happens if you remove the shared middle layers or replace them with modality-conditioned adapters? Would cross-modal fidelity suffer more than model size would benefit?
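The "sandwich" layout this comment describes (modality-specific input and output layers around a shared middle) can be sketched in a few lines of numpy. All dimensions, token counts, and the single shared layer below are hypothetical stand-ins; the actual model's layer counts and widths are not specified in this thread:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # shared hidden width (assumed)

# Modality-specific input projections: each modality has its own raw width.
W_in = {"text":  rng.standard_normal((8,  d)) / np.sqrt(8),
        "video": rng.standard_normal((16, d)) / np.sqrt(16),
        "audio": rng.standard_normal((12, d)) / np.sqrt(12)}
# One shared middle layer stands in for the shared Transformer blocks.
W_mid = rng.standard_normal((d, d)) / np.sqrt(d)
# Modality-specific output heads (only generated modalities need one).
W_out = {"video": rng.standard_normal((d, 16)) / np.sqrt(d),
         "audio": rng.standard_normal((d, 12)) / np.sqrt(d)}

x = {"text":  rng.standard_normal((4, 8)),
     "video": rng.standard_normal((6, 16)),
     "audio": rng.standard_normal((5, 12))}

# Project each modality into the shared width, concatenate into one stream.
h = np.concatenate([x[m] @ W_in[m] for m in ("text", "video", "audio")])
h = np.tanh(h @ W_mid)                 # shared middle: all tokens mix here
video_out = h[4:10] @ W_out["video"]   # rows 4..9 are the video tokens
audio_out = h[10:]  @ W_out["audio"]   # rows 10..14 are the audio tokens
print(video_out.shape, audio_out.shape)  # (6, 16) (5, 12)
```

The ablation the comment asks about would amount to swapping `W_mid` for per-modality matrices (or small adapters conditioned on modality), at which point the cross-modal mixing in the middle disappears and alignment would have to come from somewhere else.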