Mobile Edge Computing: Paper List

2020-10-12 14:00:48

This blog post records the papers I have read on mobile edge computing, grouped by topic. A download link was attached for every paper.

Surveys

1.張開元, 桂小林, 任德旺, 李敬, 吳傑, 任東勝. Survey on computation offloading and content caching in mobile edge networks[J]. Journal of Software, 2019, 30(08): 2491-2516.
2.朱友康, 樂光學, 楊曉慧, 劉建生. Survey on computation offloading in edge computing[J]. Telecommunications Science, 2019, 35(04): 74-94.
3.丁春濤, 曹建農, 楊磊, 王尚廣. Survey on edge computing: applications, state of the art, and challenges[J]. ZTE Technology Journal, 2019, 25(03): 2-7.
4.李肯立, 劉楚波. Edge intelligence: state of the art and prospects[J]. Big Data Research, 2019, 5(03): 69-75.
5.施巍鬆, 張星洲, 王一帆, 張慶陽. Edge computing: state of the art and future directions[J]. Journal of Computer Research and Development, 2019, 56(01): 69-89.
6.Shi W, Cao J, Zhang Q, et al. Edge computing: Vision and challenges[J]. IEEE internet of things journal, 2016, 3(5): 637-646.
7.李子姝, 謝人超, 孫禮, 黃韜. Survey on mobile edge computing[J]. Telecommunications Science, 2018, 34(01): 87-101.

Reinforcement Learning and Edge Computing

Offloading

1.Wang J, Hu J, Min G, et al. Computation Offloading in Multi-Access Edge Computing Using a Deep Sequential Model Based on Reinforcement Learning[J]. IEEE Communications Magazine, 2019, 57(5): 64-69.
2.Zhang C, Zheng Z. Task migration for mobile edge computing using deep reinforcement learning[J]. Future Generation Computer Systems, 2019, 96: 111-118.
3.Qi Q, Wang J, Ma Z, et al. Knowledge-driven service offloading decision for vehicular edge computing: A deep reinforcement learning approach[J]. IEEE Transactions on Vehicular Technology, 2019, 68(5): 4192-4203.
4.X. Chen, H. Zhang, C. Wu, S. Mao, Y. Ji and M. Bennis, "Optimized Computation Offloading Performance in Virtual Edge Computing Systems Via Deep Reinforcement Learning," in IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4005-4018, June 2019, doi: 10.1109/JIOT.2018.2876279.
5.Liu Y, Yu H, Xie S, et al. Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks[J]. IEEE Transactions on Vehicular Technology, 2019, 68(11): 11158-11168.
6.Lin C C, Deng D J, Chih Y L, et al. Smart Manufacturing Scheduling with Edge Computing Using Multi-class Deep Q Network[J]. IEEE Transactions on Industrial Informatics, 2019.
7.Qiu X, Liu L, Chen W, et al. Online Deep Reinforcement Learning for Computation Offloading in Blockchain-Empowered Mobile Edge Computing[J]. IEEE Transactions on Vehicular Technology, 2019, 68(8): 8050-8062.
8.Zhang Q, Lin M, Yang L T, et al. A double deep Q-learning model for energy-efficient edge scheduling[J]. IEEE Transactions on Services Computing, 2018.
9.Le D, Tham C K. A Deep Reinforcement Learning based Offloading Scheme in Ad-hoc Mobile Clouds[C]//IEEE INFOCOM 2018 Workshops. IEEE, 2018. doi: 10.1109/INFCOMW.2018.8406881.
10.Wang Y, Wang K, Huang H, et al. Traffic and computation co-offloading with reinforcement learning in fog computing for industrial applications[J]. IEEE Transactions on Industrial Informatics, 2018, 15(2): 976-986.
11.Ning Z, Dong P, Wang X, et al. Deep reinforcement learning for vehicular edge computing: An intelligent offloading system[J]. ACM Transactions on Intelligent Systems and Technology (TIST), 2019, 10(6): 1-24.
12.Park S, Kwon D, Kim J, et al. Adaptive Real-Time Offloading Decision-Making for Mobile Edges: Deep Reinforcement Learning Framework and Simulation Results[J]. Applied Sciences, 2020, 10(5): 1663.
13.Zhan W, Luo C, Wang J, et al. Deep Reinforcement Learning-Based Offloading Scheduling for Vehicular Edge Computing[J]. IEEE Internet of Things Journal, 2020.
14.Hossain M S, Nwakanma C I, Lee J M, et al. Edge computational task offloading scheme using reinforcement learning for IIoT scenario[J]. ICT Express, 2020.
15.Lu H, Gu C, Luo F, et al. Optimization of lightweight task offloading strategy for mobile edge computing based on deep reinforcement learning[J]. Future Generation Computer Systems, 2020, 102: 847-861.
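Many of the offloading papers above share a common formulation: an agent observes a state (for example task size and channel quality), chooses between local execution and offloading, and receives a reward tied to negative delay. As a toy illustration only, here is a minimal tabular Q-learning sketch of that formulation; the delay model, the state space, and every constant are invented for illustration and are not taken from any of the cited papers.

```python
import random

# Toy formulation: state = (task_size_level, channel_quality_level), each in {0,1,2};
# action 0 = execute locally, action 1 = offload to the edge server.
STATES = [(s, c) for s in range(3) for c in range(3)]
ACTIONS = [0, 1]

def delay(state, action):
    """Illustrative delay model (all constants are made up)."""
    size, channel = state
    if action == 0:
        return 0.5 + 2.0 * size           # local execution grows with task size
    return 2.0 + size / (1 + channel)     # offloading pays a fixed transmission overhead

def train(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular epsilon-greedy Q-learning over the toy offloading problem."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        if rng.random() < eps:            # explore
            a = rng.choice(ACTIONS)
        else:                             # exploit
            a = max(ACTIONS, key=lambda b: q[(s, b)])
        r = -delay(s, a)                  # reward = negative delay
        s2 = rng.choice(STATES)           # tasks arrive i.i.d. in this toy model
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
    return q

q = train()
# Large task + good channel should favour offloading; a tiny task should stay local.
best_large = max(ACTIONS, key=lambda a: q[((2, 2), a)])
best_small = max(ACTIONS, key=lambda a: q[((0, 0), a)])
```

The deep RL papers in this section replace the Q-table with a neural network so the same decision rule scales to continuous, high-dimensional states.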

Energy

1.Xu J, Chen L, Ren S. Online learning for offloading and autoscaling in energy harvesting mobile edge computing[J]. IEEE Transactions on Cognitive Communications and Networking, 2017, 3(3): 361-373.
2.Munir M S, Abedin S F, Tran N H, et al. When Edge Computing Meets Microgrid: A Deep Reinforcement Learning Approach[J]. IEEE Internet of Things Journal, 2019.
3.Munir M, Tran N H, Saad W, et al. Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems[J]. arXiv preprint arXiv:2002.08567, 2020.

Caching

1.Zhu H, Cao Y, Wang W, et al. Deep reinforcement learning for mobile edge caching: Review, new features, and open issues[J]. IEEE Network, 2018, 32(6): 50-57.
2.Chien W C, Weng H Y, Lai C F. Q-learning based collaborative cache allocation in mobile edge computing[J]. Future Generation Computer Systems, 2020, 102: 603-610.
3.Zhong C, Gursoy M C, Velipasalar S. Deep Reinforcement Learning-Based Edge Caching in Wireless Networks[J]. IEEE Transactions on Cognitive Communications and Networking, 2020, 6(1): 48-61.
4.Qiao F, Wu J, Li J, et al. Trustworthy Edge Storage Orchestration in Intelligent Transportation Systems Using Reinforcement Learning[J]. IEEE Transactions on Intelligent Transportation Systems, 2020.
5.Dai Y, Xu D, Zhang K, et al. Deep reinforcement learning and permissioned blockchain for content caching in vehicular edge computing and networks[J]. IEEE Transactions on Vehicular Technology, 2020, 69(4): 4312-4324.

Joint Optimization

1.He Y, Yu F R, Zhao N, et al. Software-defined networks with mobile edge computing and caching for smart cities: A big data deep reinforcement learning approach[J]. IEEE Communications Magazine, 2017, 55(12): 31-37.
2.Y. He, N. Zhao and H. Yin, "Integrated Networking, Caching, and Computing for Connected Vehicles: A Deep Reinforcement Learning Approach," in IEEE Transactions on Vehicular Technology, vol. 67, no. 1, pp. 44-55, Jan. 2018, doi: 10.1109/TVT.2017.2760281.
3.Dai Y, Xu D, Maharjan S, et al. Artificial intelligence empowered edge computing and caching for internet of vehicles[J]. IEEE Wireless Communications, 2019, 26(3): 12-18.
4.Li M, Yu F R, Si P, et al. Resource Optimization for Delay-Tolerant Data in Blockchain-Enabled IoT with Edge Computing: A Deep Reinforcement Learning Approach[J]. IEEE Internet of Things Journal, 2020.
5.Z. Ning et al., "Joint Computing and Caching in 5G-Envisioned Internet of Vehicles: A Deep Reinforcement Learning-Based Traffic Control System," in IEEE Transactions on Intelligent Transportation Systems, doi: 10.1109/TITS.2020.2970276.
6.Li S, Li B, Zhao W. Joint Optimization of Caching and Computation in Multi-Server NOMA-MEC System via Reinforcement Learning[J]. IEEE Access, 2020.
7.Tan L T, Hu R Q. Mobility-aware edge caching and computing in vehicle networks: A deep reinforcement learning[J]. IEEE Transactions on Vehicular Technology, 2018, 67(11): 10190-10203.
8.He Y, Liang C, Yu R, et al. Trust-based social networks with computing, caching and communications: A deep reinforcement learning approach[J]. IEEE Transactions on Network Science and Engineering, 2018.
9.Sun Y, Peng M, Mao S. Deep Reinforcement Learning-Based Mode Selection and Resource Management for Green Fog Radio Access Networks[J]. IEEE Internet of Things Journal, 2018, 6(2): 1960-1971.
10.Luo Q, Li C, Luan T H, et al. Collaborative Data Scheduling for Vehicular Edge Computing via Deep Reinforcement Learning[J]. IEEE Internet of Things Journal, 2020.
11.Wang J, Zhao L, Liu J, et al. Smart resource allocation for mobile edge computing: A deep reinforcement learning approach[J]. IEEE Transactions on emerging topics in computing, 2019.
12.Huang B, Li Z, Xu Y, et al. Deep Reinforcement Learning for Performance-Aware Adaptive Resource Allocation in Mobile Edge Computing[J]. Wireless Communications and Mobile Computing, 2020, 2020.
13.Jiang F, Dong L, Wang K, et al. Distributed Resource Scheduling for Large-Scale MEC Systems: A Multi-Agent Ensemble Deep Reinforcement Learning with Imitation Acceleration[J]. arXiv preprint arXiv:2005.12364, 2020.

Reinforcement Learning, Game Theory, and Edge Computing

1.Ranadheera S, Maghsudi S, Hossain E. Mobile edge computation offloading using game theory and reinforcement learning[J]. arXiv preprint arXiv:1711.09012, 2017.
2.Asheralieva A, Niyato D. Hierarchical game-theoretic and reinforcement learning framework for computational offloading in UAV-enabled mobile edge computing networks with multiple service providers[J]. IEEE Internet of Things Journal, 2019, 6(5): 8753-8769.
3.L. Huang, S. Bi and Y. J. Zhang, "Deep Reinforcement Learning for Online Computation Offloading in Wireless Powered Mobile-Edge Computing Networks," in IEEE Transactions on Mobile Computing, doi: 10.1109/TMC.2019.2928811.
4.Chen X, Chen T, Zhao Z, et al. Resource awareness in unmanned aerial vehicle-assisted mobile-edge computing systems[C]//2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). IEEE, 2020: 1-6.
5.Munir M, Abedin S F, Tran N H, et al. Risk-Aware Energy Scheduling for Edge Computing with Microgrid: A Multi-Agent Deep Reinforcement Learning Approach[J]. arXiv preprint arXiv:2003.02157, 2020.
6.Xu Q, Su Z, Lu R. Game theory and reinforcement learning based secure edge caching in mobile social networks[J]. IEEE Transactions on Information Forensics and Security, 2020.

Reinforcement Learning, Federated Learning, and Edge Computing

1.Shan N, Cui X, Gao Z. "DRL + FL": An intelligent resource allocation model based on deep reinforcement learning for Mobile Edge Computing[J]. Computer Communications, 2020.