jina-embeddings-v5-text: Task-Targeted Embedding Distillation
AI Summary
This paper proposes a training method that combines model distillation with task-specific contrastive loss to improve the performance of small embedding models.
Key Contributions
- Proposes a novel training method combining model distillation with task-specific contrastive loss
- Trains two high-performance small embedding models, jina-embeddings-v5-text-small and jina-embeddings-v5-text-nano
- Supports long texts and many languages, with embeddings that remain robust under truncation and binary quantization
Methodology
Small embedding models are trained with a combination of model distillation and a task-specific contrastive loss function.
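The combination of the two objectives can be sketched as follows. This is an illustrative reconstruction, not the paper's actual training code: the function names (`distillation_loss`, `contrastive_loss`, `combined_loss`), the cosine-distance form of the distillation term, the in-batch-negative InfoNCE contrastive term, and the mixing weight `alpha` are all assumptions for the sake of the example.

```python
import numpy as np

def normalize(x):
    # L2-normalize embeddings along the last axis.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def distillation_loss(student, teacher):
    # Mean cosine distance between student and teacher embeddings
    # (one common choice of distillation objective; an assumption here).
    return 1.0 - np.mean(np.sum(normalize(student) * normalize(teacher), axis=-1))

def contrastive_loss(queries, docs, temperature=0.05):
    # InfoNCE with in-batch negatives: the positive document for each
    # query sits at the same batch index (the diagonal of the logit matrix).
    logits = normalize(queries) @ normalize(docs).T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def combined_loss(student_q, student_d, teacher_q, teacher_d, alpha=0.5):
    # Mix the distillation term (applied to both query and document
    # embeddings) with the task-specific contrastive term.
    distill = 0.5 * (distillation_loss(student_q, teacher_q)
                     + distillation_loss(student_d, teacher_d))
    return alpha * distill + (1 - alpha) * contrastive_loss(student_q, student_d)
```

In a real training loop the student embeddings would come from the small model being trained and the teacher embeddings from a frozen larger model; the weighting between the two terms would be a tuned hyperparameter.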
Original Abstract
Text embedding models are widely used for semantic similarity tasks, including information retrieval, clustering, and classification. General-purpose models are typically trained with single- or multi-stage processes using contrastive loss functions. We introduce a novel training regimen that combines model distillation techniques with task-specific contrastive loss to produce compact, high-performance embedding models. Our findings suggest that this approach is more effective for training small models than purely contrastive or distillation-based training paradigms alone. Benchmark scores for the resulting models, jina-embeddings-v5-text-small and jina-embeddings-v5-text-nano, exceed or match the state-of-the-art for models of similar size. jina-embeddings-v5-text models additionally support long texts (up to 32k tokens) in many languages, and generate embeddings that remain robust under truncation and binary quantization. Model weights are publicly available, hopefully inspiring further advances in embedding model development.
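The abstract's claim that the embeddings stay robust under truncation and binary quantization can be made concrete with a small sketch. This is an illustration of the two operations, not the paper's implementation; the helpers `truncate`, `binarize`, and `hamming_sim` are hypothetical names introduced here.

```python
import numpy as np

def truncate(emb, dims):
    # Truncation: keep only the leading dimensions, then renormalize,
    # yielding a shorter vector that still supports cosine comparison.
    t = emb[..., :dims]
    return t / np.linalg.norm(t, axis=-1, keepdims=True)

def binarize(emb):
    # Binary quantization: keep only the sign of each dimension,
    # shrinking storage to one bit per dimension.
    return np.where(emb >= 0, 1.0, -1.0)

def hamming_sim(a, b):
    # Fraction of matching signs between two binarized vectors,
    # a cheap proxy for cosine similarity in the binary regime.
    return np.mean(a == b, axis=-1)
```

Robustness, in this setting, means that nearest-neighbor rankings computed on truncated or binarized vectors stay close to those computed on the full-precision embeddings, while search and storage costs drop substantially.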