Multimodal Learning · Relevance: 8/10

See it to Place it: Evolving Macro Placements with Vision-Language Models

Ikechukwu Uchendu, Swati Goel, Karly Hou, Ebrahim Songhori, Kuang-Huei Lee, Joe Wenjie Jiang, Vijay Janapa Reddi, Vincent Zhuang
arXiv: 2603.28733v1 · Published: 2026-03-30 · Updated: 2026-03-30

AI Summary

Proposes VeoPlace, which uses vision-language models to guide chip macro placement, significantly improving chip design performance.

Main Contributions

  • Proposes the VeoPlace framework, which uses a VLM to guide chip placement
  • Achieves performance gains without any fine-tuning of the VLM
  • Outperforms existing methods on multiple benchmarks

Methodology

A VLM proposes subregions of the chip canvas that constrain a base placer's actions; an evolutionary search then iteratively refines these proposals with respect to the resulting placement quality, improving chip performance.
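The propose-then-evolve loop above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all names (`wirelength`, `mutate`, `evolve`) are hypothetical, the fitness function is a toy stand-in for running a constrained placer and measuring wirelength, and the initial proposals here are hard-coded where VeoPlace would obtain them from a VLM viewing the rendered canvas.

```python
import random

# Hypothetical sketch of a VeoPlace-style loop: proposed canvas
# subregions are scored by placement quality and refined by an
# evolutionary search. A region is (x, y, w, h) on a unit canvas.
Region = tuple

def wirelength(region: Region) -> float:
    """Toy stand-in for running the base placer constrained to
    `region` and measuring wirelength; favors central regions."""
    x, y, w, h = region
    cx, cy = x + w / 2, y + h / 2
    return abs(cx - 0.5) + abs(cy - 0.5) + 0.1 * (w + h)

def mutate(region: Region, rng: random.Random, step: float = 0.05) -> Region:
    """Perturb a proposal's position, clamped to the canvas."""
    x, y, w, h = region
    x = min(max(x + rng.uniform(-step, step), 0.0), 1.0 - w)
    y = min(max(y + rng.uniform(-step, step), 0.0), 1.0 - h)
    return (x, y, w, h)

def evolve(initial: list, generations: int = 20, seed: int = 0) -> Region:
    rng = random.Random(seed)
    population = list(initial)
    for _ in range(generations):
        # Keep the best half (lowest wirelength), refill via mutation.
        population.sort(key=wirelength)
        survivors = population[: max(1, len(population) // 2)]
        population = survivors + [mutate(r, rng) for r in survivors]
    return min(population, key=wirelength)

# In VeoPlace, the initial proposals would come from the VLM; here we
# seed with two arbitrary corner regions for illustration.
best = evolve([(0.0, 0.0, 0.3, 0.3), (0.7, 0.7, 0.3, 0.3)])
```

Because the best proposal always survives each generation, the loop's fitness is monotone non-increasing; the actual framework evaluates proposals with a real placer rather than a closed-form score.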

Original Abstract

We propose using Vision-Language Models (VLMs) for macro placement in chip floorplanning, a complex optimization task that has recently shown promising advancements through machine learning methods. Because human designers rely heavily on spatial reasoning to arrange components on the chip canvas, we hypothesize that VLMs with strong visual reasoning abilities can effectively complement existing learning-based approaches. We introduce VeoPlace (Visual Evolutionary Optimization Placement), a novel framework that uses a VLM, without any fine-tuning, to guide the actions of a base placer by constraining them to subregions of the chip canvas. The VLM proposals are iteratively optimized through an evolutionary search strategy with respect to resulting placement quality. On open-source benchmarks, VeoPlace outperforms the best prior learning-based approach on 9 of 10 benchmarks with peak wirelength reductions exceeding 32%. We further demonstrate that VeoPlace generalizes to analytical placers, improving DREAMPlace performance on all 8 evaluated benchmarks with gains up to 4.3%. Our approach opens new possibilities for electronic design automation tools that leverage foundation models to solve complex physical design problems.

Tags

Chip Placement · Vision-Language Models · Evolutionary Algorithms · Electronic Design Automation

arXiv Category

cs.LG