LLM-Empowered Cooperative Content Caching in Vehicular Fog Caching-Assisted Platoon Networks
AI Summary
Proposes an LLM-based content caching architecture for vehicular fog computing networks that optimizes content retrieval latency.
Main Contributions
- Proposes a three-tier vehicular fog computing caching architecture
- Leverages LLMs for real-time, intelligent caching decisions
- Designs a hierarchical deterministic caching mapping strategy
Methodology
An LLM processes heterogeneous information; a prompting framework encodes the task objective and caching constraints, casting the caching problem as a decision-making task (a minimal sketch follows).
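The letter describes such a prompting framework but this summary does not give its exact format, so everything below is a hypothetical Python sketch: the function names (`build_caching_prompt`, `parse_ranking`), the field names, and the JSON-reply convention are all assumptions, meant only to show how the objective, the per-tier constraints, and the heterogeneous inputs might be packed into one prompt and the LLM's answer parsed into a decision.

```python
# Hypothetical sketch of a prompting framework for caching decisions: the prompt
# encodes the task objective (minimize retrieval latency), caching constraints
# (per-tier capacity), and heterogeneous system context; the LLM's reply is then
# parsed into a ranked list of contents to cache. All names are illustrative.
import json

def build_caching_prompt(user_profiles, request_history, content_features, system_state):
    """Encode objective, constraints, and system context into a single prompt."""
    return (
        "Task: rank contents for caching to minimize retrieval latency.\n"
        f"Constraints: platoon cache holds {system_state['platoon_capacity']} items, "
        f"VFC cluster cache holds {system_state['vfc_capacity']} items.\n"
        f"User profiles: {json.dumps(user_profiles)}\n"
        f"Recent requests: {json.dumps(request_history)}\n"
        f"Content features: {json.dumps(content_features)}\n"
        "Reply with a JSON list of content IDs, most cache-worthy first."
    )

def parse_ranking(llm_reply: str) -> list[str]:
    """Parse the LLM's JSON reply into an ordered list of content IDs."""
    return json.loads(llm_reply)
```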
原文摘要
This letter proposes a novel three-tier content caching architecture for Vehicular Fog Caching (VFC)-assisted platoons, where the VFC is formed by vehicles driving near the platoon. The system strategically coordinates storage across local platoon vehicles, dynamic VFC clusters, and a cloud server (CS) to minimize content retrieval latency. To efficiently manage distributed storage, we integrate large language models (LLMs) for real-time, intelligent caching decisions. The proposed approach leverages LLMs' ability to process heterogeneous information, including user profiles, historical data, content characteristics, and dynamic system states. Through a designed prompting framework encoding task objectives and caching constraints, the LLMs formulate caching as a decision-making task, and our hierarchical deterministic caching mapping strategy enables adaptive request prediction and precise content placement across the three tiers without frequent retraining. Simulation results demonstrate the advantages of our proposed caching scheme.
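The hierarchical deterministic caching mapping is only named in the abstract, so the sketch below is not the paper's algorithm; it shows one plausible deterministic placement of a ranked request prediction across the three tiers, with the function name `map_to_tiers`, the tier labels, and the capacities all assumed for illustration.

```python
# Hypothetical illustration of a hierarchical deterministic mapping: a ranked
# list of predicted contents is placed greedily across the three tiers, so the
# placement updates whenever the ranking changes, without any retraining.
def map_to_tiers(ranked_ids: list[str], platoon_cap: int, vfc_cap: int) -> dict[str, str]:
    """Deterministically assign each content ID to a storage tier by rank."""
    placement = {}
    for rank, cid in enumerate(ranked_ids):
        if rank < platoon_cap:              # hottest contents stay on platoon vehicles
            placement[cid] = "platoon"
        elif rank < platoon_cap + vfc_cap:  # next-hottest go to the nearby VFC cluster
            placement[cid] = "vfc"
        else:                               # everything else is served by the cloud
            placement[cid] = "cloud"
    return placement

# Example: with capacities 2 and 3, c1-c2 -> platoon, c3-c5 -> vfc, c6-c7 -> cloud.
print(map_to_tiers([f"c{i}" for i in range(1, 8)], platoon_cap=2, vfc_cap=3))
```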