Multimodal Dataset Distillation via Phased Teacher Models
AI Summary
Proposes PTM-ST, a novel multimodal dataset distillation framework that effectively improves student-model performance while reducing storage overhead.
Main Contributions
- Proposes the Phased Teacher Model with Shortcut Trajectory (PTM-ST) framework
- Addresses cross-stage performance gaps and teacher-model instability in multimodal dataset distillation
- Demonstrates experimentally that PTM-ST surpasses existing methods on the Flickr30k and COCO datasets
Methodology
PTM-ST leverages stage-aware teacher modeling and a shortcut-based trajectory construction strategy to accurately fit the teacher's learning dynamics across distinct training phases, enhancing both the stability and expressiveness of the distillation process.
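To make the pipeline concrete, below is a minimal PyTorch sketch of the general idea: an MTT-style trajectory-matching loop in which the teacher trajectory is stored as sparse phase-boundary checkpoints (the "shortcut" is modeled here as checkpoint subsampling, which also cuts storage) and the synthetic data is optimized so that a short student run starting at one phase boundary lands on the next. This is a sketch under stated assumptions, not the authors' implementation; the names `shortcut_stride` and `inner_steps`, and the tiny linear model, are all illustrative stand-ins for the paper's actual encoders, losses, and shortcut construction.

```python
# Sketch of phase-wise trajectory matching with a sparse ("shortcut")
# teacher trajectory. Assumptions noted above; not the authors' code.
import torch
import torch.nn.functional as F

def forward(params, x):
    # Tiny linear classifier standing in for the real image/text encoder.
    return x @ params["w"] + params["b"]

# --- 1. Train a teacher on real data, keeping only sparse
#        phase-boundary checkpoints (the "shortcut" trajectory). ---
torch.manual_seed(0)
x_real = torch.randn(512, 16)
y_real = (x_real[:, 0] > 0).long()

teacher = {"w": (0.1 * torch.randn(16, 2)).requires_grad_(True),
           "b": torch.zeros(2, requires_grad=True)}
opt = torch.optim.SGD(list(teacher.values()), lr=0.1)
shortcut_stride = 20  # one stored checkpoint per phase (assumed knob)
checkpoints = []
for step in range(200):
    if step % shortcut_stride == 0:
        checkpoints.append({k: v.detach().clone() for k, v in teacher.items()})
    loss = F.cross_entropy(forward(teacher, x_real), y_real)
    opt.zero_grad()
    loss.backward()
    opt.step()
checkpoints.append({k: v.detach().clone() for k, v in teacher.items()})

# --- 2. Optimize synthetic data so that a short student run
#        reproduces each teacher phase (start -> next checkpoint). ---
x_syn = torch.randn(16, 16, requires_grad=True)
y_syn = torch.arange(16) % 2
syn_opt = torch.optim.Adam([x_syn], lr=0.01)
inner_steps, inner_lr = 10, 0.1

for it in range(100):
    phase = torch.randint(len(checkpoints) - 1, (1,)).item()
    start, target = checkpoints[phase], checkpoints[phase + 1]
    # Unrolled, differentiable SGD on the synthetic data.
    params = {k: v.clone().requires_grad_(True) for k, v in start.items()}
    for _ in range(inner_steps):
        inner = F.cross_entropy(forward(params, x_syn), y_syn)
        grads = torch.autograd.grad(inner, list(params.values()),
                                    create_graph=True)
        params = {k: p - inner_lr * g
                  for (k, p), g in zip(params.items(), grads)}
    # Normalized parameter-matching loss, standard in trajectory matching.
    num = sum(((params[k] - target[k]) ** 2).sum() for k in params)
    den = sum(((start[k] - target[k]) ** 2).sum() for k in params) + 1e-8
    match_loss = num / den
    syn_opt.zero_grad()
    match_loss.backward()
    syn_opt.step()
```

Sampling the phase uniformly at random each iteration is one simple way to cover all training stages, and normalizing by the start-to-target distance keeps the matching losses of early and late phases on a comparable scale.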
Original Abstract
Multimodal dataset distillation aims to construct compact synthetic datasets that enable efficient compression and knowledge transfer from large-scale image-text data. However, existing approaches often fail to capture the complex, dynamically evolving knowledge embedded in the later training stages of teacher models. This limitation leads to degraded student performance and compromises the quality of the distilled data. To address critical challenges such as pronounced cross-stage performance gaps and unstable teacher trajectories, we propose Phased Teacher Model with Shortcut Trajectory (PTM-ST) -- a novel phased distillation framework. PTM-ST leverages stage-aware teacher modeling and a shortcut-based trajectory construction strategy to accurately fit the teacher's learning dynamics across distinct training phases. This enhances both the stability and expressiveness of the distillation process. Through theoretical analysis and comprehensive experiments, we show that PTM-ST significantly mitigates optimization oscillations and inter-phase knowledge gaps, while also reducing storage overhead. Our method consistently surpasses state-of-the-art baselines on Flickr30k and COCO, achieving up to 13.5% absolute improvement and an average gain of 9.53% on Flickr30k. Code: https://github.com/Previsior/PTM-ST.