Kaicheng Yang
Kaichengalex
AI & ML interests
Multimodal Representation Learning / Vision-Language Pretraining / DeepResearch
Recent Activity
- liked a model 2 days ago: Qwen/Qwen3-VL-Embedding-8B
- liked a model 2 days ago: Qwen/Qwen3-VL-Embedding-2B
Organizations
UniME
- DeepGlint-AI/UniME-Phi3.5-V-4.2B
Image-Text-to-Text • Updated • 183 • 7
- DeepGlint-AI/UniME-LLaVA-OneVision-7B
Image-Text-to-Text • 8B • Updated • 12 • 3
- DeepGlint-AI/UniME-LLaVA-1.6-7B
Image-Text-to-Text • 8B • Updated • 16 • 5
- Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs
Paper • 2504.17432 • Published • 40
RealSyn Dataset
- Kaichengalex/RealSyn100M
Viewer • Updated • 89.6M • 1.36k • 16
- Kaichengalex/RealSyn15M
Viewer • Updated • 13.5M • 287 • 3
- Kaichengalex/RealSyn30M
Viewer • Updated • 27M • 189 • 4
- RealSyn: An Effective and Scalable Multimodal Interleaved Document Transformation Paradigm
Paper • 2502.12513 • Published • 16
SFT Dataset
RWKV-CLIP
Web-Person Dataset
Vision-Language Dataset
MLLM4Embedding
- GME: Improving Universal Multimodal Retrieval by Multimodal LLMs
Paper • 2412.16855 • Published • 5
- VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks
Paper • 2410.05160 • Published • 4
- VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents
Paper • 2507.04590 • Published • 16
UniME-V2
- UniME-V2: MLLM-as-a-Judge for Universal Multimodal Embedding Learning
Paper • 2510.13515 • Published • 11
- TianchengGu/UniME-V2-LLaVA-OneVision-8B
Image-Text-to-Text • 8B • Updated • 29 • 2
- TianchengGu/UniME-V2-Qwen2VL-7B
Image-Text-to-Text • 8B • Updated • 518 • 2
- TianchengGu/UniME-V2-Qwen2VL-2B
Image-Text-to-Text • 2B • Updated • 549 • 2