BehaviorVLM: Unified Finetuning-Free Behavioral Understanding with Vision-Language Reasoning
AI Summary
BehaviorVLM introduces a finetuning-free vision-language framework for pose estimation and behavioral understanding of animals.
Main Contributions
- Proposes BehaviorVLM, a unified vision-language framework
- Requires no task-specific finetuning
- Handles pose estimation and behavioral understanding in a single framework
Methodology
Guides pretrained vision-language models through detailed, explicit reasoning steps to perform pose estimation and behavioral understanding, reducing the need for manual annotation.
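On the pose-estimation side, the pipeline exposes low-confidence labels through geometric checks such as reprojection error (see the abstract below). The following is only a minimal sketch of that kind of check, assuming a calibrated multi-view setup with known 3x4 projection matrices and standard DLT triangulation; the helper names and the 5-pixel threshold are illustrative assumptions, not details from the paper.

```python
import numpy as np

def reproject(P, X):
    """Project a 3D point X (shape (3,)) through a 3x4 camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def triangulate(Ps, pts2d):
    """Linear (DLT) triangulation of one keypoint observed in several views."""
    rows = []
    for P, (u, v) in zip(Ps, pts2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]

def flag_low_confidence(Ps, pts2d, max_px_error=5.0):
    """Triangulate a proposed keypoint, reproject it into every view,
    and flag it when any reprojection error exceeds the threshold.

    Ps: list of 3x4 camera projection matrices (one per view).
    pts2d: list of (u, v) keypoint proposals, one per view.
    """
    X = triangulate(Ps, pts2d)
    errors = [float(np.linalg.norm(reproject(P, X) - np.asarray(p, dtype=float)))
              for P, p in zip(Ps, pts2d)]
    return errors, any(e > max_px_error for e in errors)
```

A flagged keypoint could then be filtered out, corrected, or excluded from any downstream finetuning set, which is the role the abstract assigns to these geometric checks.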
Original Abstract
Understanding freely moving animal behavior is central to neuroscience, where pose estimation and behavioral understanding form the foundation for linking neural activity to natural actions. Yet both tasks still depend heavily on human annotation or unstable unsupervised pipelines, limiting scalability and reproducibility. We present BehaviorVLM, a unified vision-language framework for pose estimation and behavioral understanding that requires no task-specific finetuning and minimal human labeling by guiding pretrained Vision-Language Models (VLMs) through detailed, explicit, and verifiable reasoning steps. For pose estimation, we leverage quantum-dot-grounded behavioral data and propose a multi-stage pipeline that integrates temporal, spatial, and cross-view reasoning. This design greatly reduces human annotation effort, exposes low-confidence labels through geometric checks such as reprojection error, and produces labels that can later be filtered, corrected, or used to fine-tune downstream pose models. For behavioral understanding, we propose a pipeline that integrates deep embedded clustering for over-segmented behavior discovery, VLM-based per-clip video captioning, and LLM-based reasoning to merge and semantically label behavioral segments. The behavioral pipeline can operate directly from visual information and does not require keypoints to segment behavior. Together, these components enable scalable, interpretable, and label-light analysis of multi-animal behavior.
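For the behavioral side, the abstract describes a three-stage pipeline: deep embedded clustering for over-segmented behavior discovery, VLM-based per-clip captioning, and LLM-based reasoning to merge and semantically label segments. The sketch below only illustrates how such stages could be wired together; `embed_clip`, `caption_clip`, and `merge_and_label` are hypothetical placeholders for the model calls, a plain KMeans stands in for deep embedded clustering, and the cluster count is arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

def discover_and_label_behaviors(clips, embed_clip, caption_clip, merge_and_label,
                                 n_clusters=40):
    """Cluster clip embeddings, caption each clip, then merge/label via an LLM.

    embed_clip, caption_clip, and merge_and_label are hypothetical callables
    standing in for the VLM/LLM components; KMeans is a stand-in for the
    paper's deep embedded clustering.
    """
    # Step 1: over-segmented behavior discovery from visual embeddings alone
    # (no keypoints required).
    embeddings = np.stack([embed_clip(c) for c in clips])
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)

    # Step 2: VLM-based per-clip video captioning.
    captions = [caption_clip(c) for c in clips]

    # Step 3: LLM-based reasoning over captions grouped per cluster to merge
    # redundant clusters and assign semantic behavior labels.
    grouped = {int(cid): [captions[i] for i in np.flatnonzero(cluster_ids == cid)]
               for cid in np.unique(cluster_ids)}
    return merge_and_label(grouped)
```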