Collections including paper arxiv:2408.12637

- LongVILA: Scaling Long-Context Visual Language Models for Long Videos
  Paper • 2408.10188 • Published • 52
- xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
  Paper • 2408.08872 • Published • 101
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 133
- Show-o: One Single Transformer to Unify Multimodal Understanding and Generation
  Paper • 2408.12528 • Published • 51

- xGen-MM (BLIP-3): A Family of Open Large Multimodal Models
  Paper • 2408.08872 • Published • 101
- Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model
  Paper • 2408.11039 • Published • 63
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 133

- MUMU: Bootstrapping Multimodal Image Generation from Text-to-Image Data
  Paper • 2406.18790 • Published • 34
- OmniGen: Unified Image Generation
  Paper • 2409.11340 • Published • 115
- Show-o: One Single Transformer to Unify Multimodal Understanding and Generation
  Paper • 2408.12528 • Published • 51
- MonoFormer/MonoFormer_ImageNet_256
  1B • Updated • 7 • 5

- Multimodal Pathway: Improve Transformers with Irrelevant Data from Other Modalities
  Paper • 2401.14405 • Published • 13
- CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
  Paper • 2406.18521 • Published • 29
- xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations
  Paper • 2408.12590 • Published • 36
- Law of Vision Representation in MLLMs
  Paper • 2408.16357 • Published • 95

- OpenResearcher: Unleashing AI for Accelerated Scientific Research
  Paper • 2408.06941 • Published • 32
- ControlNeXt: Powerful and Efficient Control for Image and Video Generation
  Paper • 2408.06070 • Published • 55
- Generative Photomontage
  Paper • 2408.07116 • Published • 20
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 133

- What matters when building vision-language models?
  Paper • 2405.02246 • Published • 103
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 90
- InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output
  Paper • 2407.03320 • Published • 95
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 133

- Cambrian-1: A Fully Open, Vision-Centric Exploration of Multimodal LLMs
  Paper • 2406.16860 • Published • 63
- Understanding Alignment in Multimodal LLMs: A Comprehensive Study
  Paper • 2407.02477 • Published • 24
- LongVILA: Scaling Long-Context Visual Language Models for Long Videos
  Paper • 2408.10188 • Published • 52
- Building and better understanding vision-language models: insights and future directions
  Paper • 2408.12637 • Published • 133

- RLHF Workflow: From Reward Modeling to Online RLHF
  Paper • 2405.07863 • Published • 71
- Chameleon: Mixed-Modal Early-Fusion Foundation Models
  Paper • 2405.09818 • Published • 132
- Meteor: Mamba-based Traversal of Rationale for Large Language and Vision Models
  Paper • 2405.15574 • Published • 55
- An Introduction to Vision-Language Modeling
  Paper • 2405.17247 • Published • 90