---
language: en
license: apache-2.0
tags:
- aqea
- compression
- embeddings
- similarity-search
- vector-database
datasets:
- mteb/stsbenchmark-sts
base_model: openai/text-embedding-3-large
---

# AQEA: aqea-text-embedding-3-large-29x
OpenAI `text-embedding-3-large` embeddings compressed 29x while preserving a 91.9% Spearman similarity-ranking correlation.
## Performance
| Metric | Value |
|---|---|
| Compression Ratio | 29.3x |
| Spearman ρ | 91.9% |
| Source Dimension | 3072D |
| Compressed Dimension | 105D |
| Storage Savings | 96.6% |
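The ratio and savings figures follow directly from the dimensions, assuming the same bytes per dimension before and after compression (a sketch; actual on-disk savings also depend on quantization):

```python
# Derive the table's compression ratio and storage savings from
# the dimensions alone (assumes equal bytes per dimension; any
# quantization would change the per-dimension storage cost).
src_dim, dst_dim = 3072, 105

ratio = src_dim / dst_dim        # ~29.3x
savings = 1 - dst_dim / src_dim  # ~96.6%

print(f"{ratio:.1f}x compression, {savings:.1%} storage savings")
```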
## Usage
```python
from aqea import AQEACompressor

# Load the pre-trained compressor
compressor = AQEACompressor.from_pretrained("nextxag/aqea-text-embedding-3-large-29x")

# Compress embeddings (`model` and `texts` stand for your embedding
# model and input strings; the embeddings must be 3072D)
embeddings = model.encode(texts)              # 3072D
compressed = compressor.compress(embeddings)  # 105D

# Decompress for retrieval
reconstructed = compressor.decompress(compressed)  # 3072D
```
## Files

- `weights.aqwt`: binary weights (AQEA native format)
- `config.json`: model configuration
## How It Works
AQEA (Adaptive Quantized Embedding Architecture) uses learned linear projections combined with a pre-quantization rotation to compress embeddings while preserving pairwise similarity rankings (measured by Spearman correlation) as closely as possible.
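A minimal, self-contained sketch of the idea, using a random orthonormal projection and int8 quantization as stand-ins for AQEA's learned components (all names and numbers below are illustrative, not the actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embeddings": 200 points with low intrinsic dimension,
# so pairwise similarities carry real structure.
latent = rng.standard_normal((200, 12)).astype(np.float32)
mix = rng.standard_normal((12, 3072)).astype(np.float32)
X = latent @ mix  # 200 x 3072

# Stand-in for the learned projection + rotation: a random orthonormal map.
Q, _ = np.linalg.qr(rng.standard_normal((3072, 105)))
Z = X @ Q.astype(np.float32)  # 200 x 105 compressed vectors

# Stand-in for quantization: symmetric int8 with a single global scale.
scale = np.abs(Z).max() / 127.0
Zq = np.round(Z / scale).astype(np.int8)
Z_deq = Zq.astype(np.float32) * scale

def cosine_to_query(M):
    """Cosine similarity of rows 1..n-1 against row 0."""
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    return M[1:] @ M[0]

def spearman(a, b):
    """Spearman correlation via Pearson correlation of ranks."""
    ra = np.argsort(np.argsort(a)).astype(np.float64)
    rb = np.argsort(np.argsort(b)).astype(np.float64)
    return float(np.corrcoef(ra, rb)[0, 1])

# How well does the compressed, quantized space preserve the ranking?
rho = spearman(cosine_to_query(X), cosine_to_query(Z_deq))
print(f"Spearman rho (3072D vs quantized 105D): {rho:.3f}")
```

Even this untrained stand-in preserves most of the ranking on structured data; AQEA's learned projection is what pushes the correlation higher at a fixed target dimension.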
## Citation

```bibtex
@software{aqea2024,
  title  = {AQEA: Adaptive Quantized Embedding Architecture},
  author = {AQEA Team},
  year   = {2024},
  url    = {https://huggingface.co/nextxag}
}
```
## License
Apache 2.0