
Tiny MiniCPM-o-2_6 Model (6MB INT4 Quantized)

This is a tiny, randomly initialized version of the MiniCPM-o-2_6 model, intended for testing purposes.

Model Details

  • Model Size: ~6-7MB (INT4 quantized)
  • Original Model: MiniCPM-o-2_6
  • Quantization: INT4 pipeline quantization
  • Vocabulary Size: 50,000 tokens (reduced from 151,700)
  • Format: OpenVINO IR

Model Architecture

  • Maintains MiniCPMO class compatibility
  • Full INT4 quantization for all components, each exported as a separate OpenVINO IR (see the inspection sketch after this list):
    • Language model
    • Vision embeddings
    • Resampler

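If you want to poke at an individual component, the OpenVINO runtime can read an IR pair directly. A minimal sketch, assuming the usual optimum-intel component file names (e.g. openvino_language_model.xml; these are assumptions, so check the repository file list for the actual names):

import openvino as ov

core = ov.Core()
# File name is an assumption based on optimum-intel export conventions
lm = core.read_model("openvino_language_model.xml")

# Inspect the quantized graph's inputs and outputs
for inp in lm.inputs:
    print(inp.any_name, inp.partial_shape)
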
Usage

With Optimum-Intel

from optimum.intel import OVModelForVisualCausalLM
from transformers import AutoProcessor

model = OVModelForVisualCausalLM.from_pretrained(
    "M-Ziyo/tiny-random-MiniCPM-o-2_6-6mb",
    export=False,  # Already quantized
    trust_remote_code=True
)

# The processor is not shipped in this repository; load it from the
# companion repo (see Notes below)
processor = AutoProcessor.from_pretrained(
    "optimum-intel-internal-testing/tiny-random-MiniCPM-o-2_6",
    trust_remote_code=True
)
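
Once loaded, a quick generation pass confirms the pipeline wires up end to end. This is a minimal smoke-test sketch, assuming optimum-intel's generic preprocess_inputs helper dispatches correctly for this architecture; the MiniCPM preprocessor also needs a tokenizer, loaded here from the same companion repo.

import numpy as np
from PIL import Image
from transformers import AutoTokenizer

# The tokenizer, like the processor, lives in the companion repo
tokenizer = AutoTokenizer.from_pretrained(
    "optimum-intel-internal-testing/tiny-random-MiniCPM-o-2_6",
    trust_remote_code=True
)

# Any small RGB image works; the weights are random, so outputs are noise
image = Image.fromarray(np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8))

inputs = model.preprocess_inputs(
    text="Describe this image.",
    image=image,
    processor=processor,
    tokenizer=tokenizer,
)
output_ids = model.generate(**inputs, max_new_tokens=8)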

Validation

python validate_tiny_minicpm.py --model-path M-Ziyo/tiny-random-MiniCPM-o-2_6-6mb
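
If the validation script is not at hand, the reduced vocabulary is easy to spot-check against the numbers in Model Details. A minimal sketch; depending on how the config was exported, vocab_size may live on a nested text config rather than at the top level:

from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "M-Ziyo/tiny-random-MiniCPM-o-2_6-6mb",
    trust_remote_code=True,
)
# Expect the reduced 50,000-token vocabulary (see Model Details)
vocab = getattr(config, "vocab_size", None)
if vocab is None and hasattr(config, "text_config"):
    vocab = config.text_config.vocab_size
print(vocab)  # expected: 50000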

Files

  • OpenVINO IR files (.xml, .bin) for all model components
  • Configuration files (config.json, openvino_config.json)
  • Python model files for custom architecture
  • Processor files

Notes

  • This is a test model with random weights
  • The processor should be loaded from the companion test repository: optimum-intel-internal-testing/tiny-random-MiniCPM-o-2_6 (as in the usage example above)
  • Compatible with Optimum-Intel test suite