Universal Skeleton Understanding via Differentiable Rendering and MLLMs
AI Summary
SkeletonLLM converts skeleton data into visual information via differentiable rendering, leveraging an MLLM to achieve universal skeleton understanding.
Main Contributions
- Proposes DrAction, a differentiable, format-agnostic renderer
- Introduces a cooperative training strategy combining Causal Reasoning Distillation and Discriminative Finetuning
- Validates the generalization ability of SkeletonLLM across diverse tasks
Methodology
Skeleton data is converted into image sequences via differentiable rendering, an MLLM performs visual-language reasoning over them, and a cooperative training strategy further improves performance.
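The key idea of rendering skeletons differentiably can be illustrated with a minimal sketch. The snippet below is not the paper's DrAction implementation; all names and details are illustrative assumptions. It splats each joint onto a pixel grid as a Gaussian blob, so every pixel is a smooth function of the joint coordinates and gradients from a downstream model could flow back into the rendering:

```python
# Hedged sketch of a "soft" skeleton renderer: each (x, y) joint is splatted
# as a Gaussian blob onto an h x w grid. Because the splat is smooth in the
# joint coordinates, pixel intensities are differentiable w.r.t. the skeleton.
# This is an illustrative toy, not the paper's DrAction code.
import math

def render_skeleton(joints, h=8, w=8, sigma=1.0):
    """Render a list of (x, y) joints into an h x w intensity grid."""
    img = [[0.0] * w for _ in range(h)]
    for (jx, jy) in joints:
        for r in range(h):
            for c in range(w):
                d2 = (c - jx) ** 2 + (r - jy) ** 2
                img[r][c] += math.exp(-d2 / (2 * sigma ** 2))
    return img

def dpixel_dx(joints, r, c, sigma=1.0):
    """Analytic derivative of pixel (r, c) w.r.t. each joint's x-coordinate."""
    grads = []
    for (jx, jy) in joints:
        d2 = (c - jx) ** 2 + (r - jy) ** 2
        grads.append(math.exp(-d2 / (2 * sigma ** 2)) * (c - jx) / sigma ** 2)
    return grads

if __name__ == "__main__":
    joints = [(2.0, 3.0), (5.5, 4.5)]
    img = render_skeleton(joints)
    # Sanity-check the analytic gradient against a finite difference.
    eps = 1e-5
    bumped = [(joints[0][0] + eps, joints[0][1]), joints[1]]
    fd = (render_skeleton(bumped)[3][4] - img[3][4]) / eps
    assert abs(fd - dpixel_dx(joints, 3, 4)[0]) < 1e-4
    print("gradient check ok")
```

In a real pipeline this would be written with a tensor library so the MLLM's loss can backpropagate through the renderer and shape the rendered visual tokens end to end.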
Original Abstract
Multimodal large language models (MLLMs) exhibit strong visual-language reasoning, yet remain confined to their native modalities and cannot directly process structured, non-visual data such as human skeletons. Existing methods either compress skeleton dynamics into lossy feature vectors for text alignment, or quantize motion into discrete tokens that generalize poorly across heterogeneous skeleton formats. We present SkeletonLLM, which achieves universal skeleton understanding by translating arbitrary skeleton sequences into the MLLM's native visual modality. At its core is DrAction, a differentiable, format-agnostic renderer that converts skeletal kinematics into compact image sequences. Because the pipeline is end-to-end differentiable, MLLM gradients can directly guide the rendering to produce task-informative visual tokens. To further enhance reasoning capabilities, we introduce a cooperative training strategy: Causal Reasoning Distillation transfers structured, step-by-step reasoning from a teacher model, while Discriminative Finetuning sharpens decision boundaries between confusable actions. SkeletonLLM demonstrates strong generalization on diverse tasks including recognition, captioning, reasoning, and cross-format transfer -- suggesting a viable path for applying MLLMs to non-native modalities. Code will be released upon acceptance.