izhx committed
Commit 9e37c14 · verified · 1 Parent(s): facd32f

Update README.md

Files changed (1): README.md (+111, -17)
README.md CHANGED
@@ -3691,44 +3691,96 @@ The `GME` models support three types of input: **text**, **image**, and **image-
  |[`gme-Qwen2-VL-2B`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-2B-Instruct) | 2.21B | 32768 | 1536 | 65.27 | 66.92 | 64.45 |
  |[`gme-Qwen2-VL-7B`](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct) | 8.29B | 32768 | 3584 | 67.48 | 69.73 | 67.44 |

- ## Usage
- ### Transformers
+ ## Usage
+
+ **Transformers**
+
  ```python
  from transformers import AutoModel
+ t2i_prompt = 'Find an image that matches the given text.'
  texts = [
- "What kind of car is this?",
- "The Tesla Cybertruck is a battery electric pickup truck built by Tesla, Inc. since 2023."
+     "The Tesla Cybertruck is a battery electric pickup truck built by Tesla, Inc. since 2023.",
+     "Alibaba office.",
  ]
  images = [
- 'https://en.wikipedia.org/wiki/File:Tesla_Cybertruck_damaged_window.jpg',
- 'https://en.wikipedia.org/wiki/File:2024_Tesla_Cybertruck_Foundation_Series,_front_left_(Greenwich).jpg',
+     'https://upload.wikimedia.org/wikipedia/commons/e/e9/Tesla_Cybertruck_damaged_window.jpg',
+     'https://upload.wikimedia.org/wikipedia/commons/e/e0/TaobaoCity_Alibaba_Xixi_Park.jpg',
  ]

- gme = AutoModel("Alibaba-NLP/gme-Qwen2-VL-2B-Instruct", trust_remote_code=True)
+ gme = AutoModel.from_pretrained(
+     "Alibaba-NLP/gme-Qwen2-VL-2B-Instruct",
+     torch_dtype="float16", device_map='cuda', trust_remote_code=True
+ )

  # Single-modal embedding
  e_text = gme.get_text_embeddings(texts=texts)
  e_image = gme.get_image_embeddings(images=images)
- print((e_text * e_image).sum(-1))
- ## tensor([0.2281, 0.6001], dtype=torch.float16)
+ print('Single-modal', (e_text @ e_image.T).tolist())
+ ## Single-modal [[0.359619140625, 0.0655517578125], [0.04180908203125, 0.374755859375]]

  # How to set embedding instruction
- e_query = gme.get_text_embeddings(texts=texts, instruction='Find an image that matches the given text.')
+ e_query = gme.get_text_embeddings(texts=texts, instruction=t2i_prompt)
  # If is_query=False, we always use the default instruction.
  e_corpus = gme.get_image_embeddings(images=images, is_query=False)
- print((e_query * e_corpus).sum(-1))
- ## tensor([0.2433, 0.7051], dtype=torch.float16)
+ print('Single-modal with instruction', (e_query @ e_corpus.T).tolist())
+ ## Single-modal with instruction [[0.429931640625, 0.11505126953125], [0.049835205078125, 0.409423828125]]

  # Fused-modal embedding
  e_fused = gme.get_fused_embeddings(texts=texts, images=images)
- print((e_fused[0] * e_fused[1]).sum())
- ## tensor(0.6108, dtype=torch.float16)
+ print('Fused-modal', (e_fused @ e_fused.T).tolist())
+ ## Fused-modal [[1.0, 0.05511474609375], [0.05511474609375, 1.0]]
+ ```
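
The similarity matrices printed above can be turned directly into a retrieval ranking. Below is a minimal sketch, continuing from the `gme`, `texts`, `images`, and `t2i_prompt` defined in the snippet above; the normalization comment is an assumption based on the printed scores, not something stated by the model card.

```python
# Continues from the Transformers snippet above (gme, texts, images, t2i_prompt).
e_query = gme.get_text_embeddings(texts=texts, instruction=t2i_prompt)
e_corpus = gme.get_image_embeddings(images=images, is_query=False)

scores = e_query @ e_corpus.T    # embeddings appear unit-normalized, so this is cosine similarity
best = scores.argmax(dim=-1)     # index of the best-matching image for each text query
for text, idx in zip(texts, best.tolist()):
    print(text[:40], '->', images[idx])
```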

+ **sentence_transformers**
+
+ The `encode` function accepts a `str` or a `dict` with key(s) in `{'text', 'image', 'prompt'}`.
+
+ **Do not pass `prompt` as an argument to `encode`**; instead, pass the input as a `dict` with a `prompt` key.
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ t2i_prompt = 'Find an image that matches the given text.'
+ texts = [
+     "The Tesla Cybertruck is a battery electric pickup truck built by Tesla, Inc. since 2023.",
+     "Alibaba office.",
+ ]
+ images = [
+     'https://upload.wikimedia.org/wikipedia/commons/e/e9/Tesla_Cybertruck_damaged_window.jpg',
+     'https://upload.wikimedia.org/wikipedia/commons/e/e0/TaobaoCity_Alibaba_Xixi_Park.jpg',
+ ]
+
+ gme_st = SentenceTransformer("Alibaba-NLP/gme-Qwen2-VL-2B-Instruct")
+
+ # Single-modal embedding
+ e_text = gme_st.encode(texts, convert_to_tensor=True)
+ e_image = gme_st.encode([dict(image=i) for i in images], convert_to_tensor=True)
+ print('Single-modal', (e_text @ e_image.T).tolist())
+ ## Single-modal [[0.356201171875, 0.06536865234375], [0.041717529296875, 0.37890625]]
+
+ # How to set embedding instruction
+ e_query = gme_st.encode([dict(text=t, prompt=t2i_prompt) for t in texts], convert_to_tensor=True)
+ # If no prompt, we always use the default instruction.
+ e_corpus = gme_st.encode([dict(image=i) for i in images], convert_to_tensor=True)
+ print('Single-modal with instruction', (e_query @ e_corpus.T).tolist())
+ ## Single-modal with instruction [[0.425537109375, 0.1158447265625], [0.049835205078125, 0.413818359375]]
+
+ # Fused-modal embedding
+ e_fused = gme_st.encode([dict(text=t, image=i) for t, i in zip(texts, images)], convert_to_tensor=True)
+ print('Fused-modal', (e_fused @ e_fused.T).tolist())
+ ## Fused-modal [[0.99951171875, 0.0556640625], [0.0556640625, 0.99951171875]]
  ```
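
The same dict convention plugs into the library's standard search utilities. Below is a minimal retrieval sketch, continuing from `gme_st`, `texts`, `images`, and `t2i_prompt` above; `util.semantic_search` is plain sentence-transformers functionality, not part of GME itself.

```python
from sentence_transformers import util

# Continues from the sentence_transformers snippet above (gme_st, texts, images, t2i_prompt).
q = gme_st.encode([dict(text=t, prompt=t2i_prompt) for t in texts], convert_to_tensor=True)
c = gme_st.encode([dict(image=i) for i in images], convert_to_tensor=True)

hits = util.semantic_search(q, c, top_k=1)   # cosine-similarity search over the image "corpus"
for text, hit in zip(texts, hits):
    print(text[:40], '->', images[hit[0]['corpus_id']], round(hit[0]['score'], 3))
```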

  ## Evaluation

- We validated the performance on our universal multimodal retrieval benchmark (**UMRB**) among others.
+ We validated the performance on our universal multimodal retrieval benchmark (**UMRB**, see [Release UMRB](https://huggingface.co/Alibaba-NLP/gme-Qwen2-VL-7B-Instruct/discussions/2)), among others.

  | | | Single-modal | | Cross-modal | | | Fused-modal | | | | Avg. |
  |--------------------|------|:------------:|:---------:|:-----------:|:-----------:|:---------:|:-----------:|:----------:|:----------:|:-----------:|:----------:|
@@ -3745,6 +3797,48 @@ The [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) English t

  **More detailed experimental results can be found in the [paper](http://arxiv.org/abs/2412.16855)**.

+ ## Community support
+
+ ### Fine-tuning
+
+ GME models can be fine-tuned with SWIFT:
+
+ ```shell
+ pip install ms-swift -U
+ ```
+
+ ```shell
+ # The MAX_PIXELS setting reduces memory usage;
+ # see: https://swift.readthedocs.io/en/latest/BestPractices/Embedding.html
+ nproc_per_node=8
+ MAX_PIXELS=1003520 \
+ USE_HF=1 \
+ NPROC_PER_NODE=$nproc_per_node \
+ swift sft \
+     --model Alibaba-NLP/gme-Qwen2-VL-2B-Instruct \
+     --train_type lora \
+     --dataset 'HuggingFaceM4/TextCaps:emb' \
+     --torch_dtype bfloat16 \
+     --num_train_epochs 1 \
+     --per_device_train_batch_size 2 \
+     --per_device_eval_batch_size 2 \
+     --gradient_accumulation_steps $(expr 64 / $nproc_per_node) \
+     --eval_steps 100 \
+     --save_steps 100 \
+     --eval_strategy steps \
+     --save_total_limit 5 \
+     --logging_steps 5 \
+     --output_dir output \
+     --lazy_tokenize true \
+     --warmup_ratio 0.05 \
+     --learning_rate 5e-6 \
+     --deepspeed zero3 \
+     --dataloader_num_workers 4 \
+     --task_type embedding \
+     --loss_type infonce \
+     --dataloader_drop_last true
+ ```
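
For intuition about what `--task_type embedding --loss_type infonce` optimizes, here is an illustrative in-batch-negative InfoNCE sketch in plain PyTorch. The temperature value and the exact formulation are assumptions for illustration, not ms-swift's actual implementation.

```python
import torch
import torch.nn.functional as F

def infonce_loss(q: torch.Tensor, d: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """In-batch-negative InfoNCE: the i-th query should match the i-th document.

    Illustrative only; the temperature and negative-sampling details in
    ms-swift's `--loss_type infonce` may differ.
    """
    q = F.normalize(q, dim=-1)
    d = F.normalize(d, dim=-1)
    logits = q @ d.T / temperature                      # [batch, batch] similarity matrix
    labels = torch.arange(q.size(0), device=q.device)   # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# e.g. loss = infonce_loss(e_query, e_corpus) for paired query/document embeddings
```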

  ## Limitations

@@ -3790,4 +3884,4 @@ If you find our paper or models helpful, please consider cite:
  primaryClass={cs.CL},
  url={http://arxiv.org/abs/2412.16855},
  }
- ```
+ ```