AdaZoom-GUI: Adaptive Zoom-based GUI Grounding with Instruction Refinement
AI Summary
AdaZoom-GUI improves the accuracy and efficiency of VLM grounding on GUI screens through instruction refinement and adaptive zooming.
Key Contributions
- Proposes an instruction refinement module that improves instruction understanding
- Designs an adaptive zoom strategy that improves localization of small elements
- Constructs a high-quality GUI grounding dataset
Methodology
An instruction refinement module rewrites user instructions into explicit, detailed descriptions. A conditional zoom-in strategy then triggers a second-stage inference pass only when the predicted element is small. The grounding model is trained with Group Relative Policy Optimization (GRPO) to predict both click coordinates and bounding boxes.
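The two-stage inference described above can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code: the function names (`refine_instruction`, `ground`), the area-ratio threshold for "small" elements, and the crop-padding rule are all assumptions.

```python
# Hypothetical sketch of the AdaZoom-GUI two-stage inference loop.
# refine_instruction / ground stand in for the VLM calls; the
# SMALL_AREA_RATIO threshold and padding rule are illustrative guesses.
from dataclasses import dataclass

@dataclass
class Prediction:
    x: float                       # click x-coordinate (pixels)
    y: float                       # click y-coordinate (pixels)
    box: tuple                     # (x1, y1, x2, y2) bounding box

SMALL_AREA_RATIO = 0.0005          # assumed cutoff for "small" elements

def adazoom_ground(screenshot, instruction, refine_instruction, ground):
    """Refine the instruction, ground once on the full image, and
    re-ground on a zoomed crop only when the prediction is small."""
    detailed = refine_instruction(instruction)     # stage 0: rewrite
    pred = ground(screenshot, detailed)            # stage 1: full image
    x1, y1, x2, y2 = pred.box
    W, H = screenshot.size
    if (x2 - x1) * (y2 - y1) / (W * H) >= SMALL_AREA_RATIO:
        return pred                                # large element: done
    # stage 2: zoom into a padded crop around the first prediction,
    # avoiding a second pass (and context loss) on easy cases
    pad = 2 * max(x2 - x1, y2 - y1)
    crop_box = (max(0, x1 - pad), max(0, y1 - pad),
                min(W, x2 + pad), min(H, y2 + pad))
    crop = screenshot.crop(crop_box)
    local = ground(crop, detailed)
    # map crop-local coordinates back to the full screenshot
    ox, oy = crop_box[0], crop_box[1]
    return Prediction(local.x + ox, local.y + oy,
                      (local.box[0] + ox, local.box[1] + oy,
                       local.box[2] + ox, local.box[3] + oy))
```

The key design point is the *conditional* second pass: zooming only below a size threshold keeps the common case to a single inference.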
Original Abstract
GUI grounding is a critical capability for vision-language models (VLMs) that enables automated interaction with graphical user interfaces by locating target elements from natural language instructions. However, grounding on GUI screenshots remains challenging due to high-resolution images, small UI elements, and ambiguous user instructions. In this work, we propose AdaZoom-GUI, an adaptive zoom-based GUI grounding framework that improves both localization accuracy and instruction understanding. Our approach introduces an instruction refinement module that rewrites natural language commands into explicit and detailed descriptions, allowing the grounding model to focus on precise element localization. In addition, we design a conditional zoom-in strategy that selectively performs a second-stage inference on predicted small elements, improving localization accuracy while avoiding unnecessary computation and context loss on simpler cases. To support this framework, we construct a high-quality GUI grounding dataset and train the grounding model using Group Relative Policy Optimization (GRPO), enabling the model to predict both click coordinates and element bounding boxes. Experiments on public benchmarks demonstrate that our method achieves state-of-the-art performance among models with comparable or even larger parameter sizes, highlighting its effectiveness for high-resolution GUI understanding and practical GUI agent deployment.
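The abstract states that the model is trained with GRPO to predict both click coordinates and element bounding boxes. A plausible reward for such training, and the group-relative advantage normalization at GRPO's core, might look like the sketch below; the reward weights and its exact form are assumptions, not taken from the paper.

```python
# Illustrative reward and GRPO advantage computation for GUI grounding.
# The click-hit / IoU reward mix (w_click, w_box) is an assumption.
import statistics

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def grounding_reward(click, pred_box, gt_box, w_click=0.5, w_box=0.5):
    """Hypothetical reward: 1 if the click lands inside the ground-truth
    box, blended with IoU between predicted and ground-truth boxes."""
    x, y = click
    hit = 1.0 if (gt_box[0] <= x <= gt_box[2]
                  and gt_box[1] <= y <= gt_box[3]) else 0.0
    return w_click * hit + w_box * iou(pred_box, gt_box)

def grpo_advantages(rewards):
    """Group-relative advantages: normalize rewards across a group of
    sampled responses to the same prompt (the defining step of GRPO)."""
    mu = statistics.mean(rewards)
    sd = statistics.pstdev(rewards) or 1.0
    return [(r - mu) / sd for r in rewards]
```

A perfect prediction (click inside the box, IoU of 1) scores 1.0, and rewards are only compared within each sampled group, so no learned value model is needed.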