Multimodal Learning Relevance: 7/10

EdgeDiT: Hardware-Aware Diffusion Transformers for Efficient On-Device Image Generation

Sravanth Kodavanti, Manjunath Arveti, Sowmya Vajrala, Srinivas Miriyala, Vikram N R
arXiv: 2603.28405v1 Published: 2026-03-30 Updated: 2026-03-30

AI Summary

EdgeDiT uses hardware-aware optimization to enable efficient Diffusion Transformer image generation on mobile NPUs.

Key Contributions

  • Proposes the hardware-aware EdgeDiT architecture
  • Optimizes DiT for mobile NPUs
  • Achieves low-latency, efficient image generation

Methodology

A hardware-aware optimization framework systematically prunes redundant structures in the DiT backbone, targeting mobile data-flow characteristics.
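The summary does not specify the pruning criterion EdgeDiT uses. As a rough illustration of what structured pruning of a transformer backbone can look like, the sketch below ranks attention heads by the L2 norm of their weights and keeps a fixed fraction. The function name, shapes, and the magnitude heuristic are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def prune_heads(head_weights, keep_ratio=0.75):
    """Generic magnitude-based structured pruning sketch (hypothetical,
    not EdgeDiT's criterion): rank attention heads by the L2 norm of
    their projection weights and keep the top `keep_ratio` fraction.

    head_weights: array of shape (num_heads, head_dim, model_dim).
    Returns the retained head weights and their original indices.
    """
    # Importance score per head: L2 norm over all of that head's weights.
    scores = np.linalg.norm(
        head_weights.reshape(head_weights.shape[0], -1), axis=1
    )
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    # Take the highest-scoring heads, then restore original head order.
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return head_weights[keep], keep

# Example: a 12-head layer pruned at keep_ratio=0.75 retains 9 heads,
# shrinking that layer's parameters and FLOPs by 25%.
w = np.random.default_rng(0).normal(size=(12, 64, 768))
pruned, kept = prune_heads(w, keep_ratio=0.75)
print(pruned.shape)
```

In practice, hardware-aware variants of such schemes score candidate structures by measured on-device latency or NPU data-flow cost rather than weight magnitude alone.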

Original Abstract

Diffusion Transformers (DiT) have established a new state-of-the-art in high-fidelity image synthesis; however, their massive computational complexity and memory requirements hinder local deployment on resource-constrained edge devices. In this paper, we introduce EdgeDiT, a family of hardware-efficient generative transformers specifically engineered for mobile Neural Processing Units (NPUs), such as the Qualcomm Hexagon and Apple Neural Engine (ANE). By leveraging a hardware-aware optimization framework, we systematically identify and prune structural redundancies within the DiT backbone that are particularly taxing for mobile data-flows. Our approach yields a series of lightweight models that achieve a 20-30% reduction in parameters, a 36-46% decrease in FLOPs, and a 1.65-fold reduction in on-device latency without sacrificing the scaling advantages or the expressive capacity of the original transformer architecture. Extensive benchmarking demonstrates that EdgeDiT offers a superior Pareto-optimal trade-off between Frechet Inception Distance (FID) and inference latency compared to both optimized mobile U-Nets and vanilla DiT variants. By enabling responsive, private, and offline generative AI directly on-device, EdgeDiT provides a scalable blueprint for transitioning large-scale foundation models from high-end GPUs to the palm of the user.

Tags

Diffusion Transformer Edge Computing Hardware-Aware Optimization Image Generation

arXiv Categories

cs.CV cs.AI