Knowledge Distillation for Mobile Edge Computation Offloading

Release Date: 2020-07-22


CHEN Haowei, ZENG Liekang, YU Shuai, CHEN Xu

(School of Data and Computer Science, Sun Yat-sen University, Guangzhou, Guangdong 510006, China)

 

Abstract: Edge computation offloading allows mobile end devices to execute compute-intensive tasks on edge servers. End devices can decide, in an online manner, whether a task is offloaded to an edge server, offloaded to a cloud server, or executed locally, according to the current network conditions and the device's profile. In this article, we propose an edge computation offloading framework based on deep imitation learning (DIL) and knowledge distillation (KD), which helps end devices quickly make fine-grained offloading decisions that optimize the delay of computation tasks online. We formalize the computation offloading problem as a multi-label classification problem. Training samples for our DIL model are generated in an offline manner. After the model is trained, we leverage KD to obtain a lightweight DIL model, which further reduces the model's inference delay. Numerical experiments show that the offloading decisions made by our model outperform those made by other related policies in terms of latency, and that our model has the shortest inference delay among all policies.
Keywords: mobile edge computation offloading; deep imitation learning; knowledge distillation
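To make the distillation step concrete, below is a minimal sketch (not the authors' released code) of distilling a large DIL offloading classifier into a lightweight student for multi-label decisions, assuming a PyTorch setup. The input dimension, network sizes, number of decision labels (NUM_TASKS), temperature T, and mixing weight alpha are all illustrative assumptions, not the paper's actual configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TASKS = 8  # hypothetical: one binary offload/local decision per task component

# Large teacher DIL model, trained offline on expert offloading decisions.
teacher = nn.Sequential(
    nn.Linear(32, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, NUM_TASKS),
)

# Lightweight student model, intended for low inference delay on end devices.
student = nn.Sequential(
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, NUM_TASKS),
)

def distillation_loss(x, hard_labels, T=2.0, alpha=0.5):
    """Blend the usual multi-label BCE loss on expert labels with a
    soft-target loss that matches the student's temperature-scaled
    logits to the teacher's softened predictions."""
    with torch.no_grad():
        soft_targets = torch.sigmoid(teacher(x) / T)  # teacher's softened per-label probabilities
    student_logits = student(x)
    hard = F.binary_cross_entropy_with_logits(student_logits, hard_labels)
    soft = F.binary_cross_entropy_with_logits(student_logits / T, soft_targets)
    return alpha * hard + (1 - alpha) * soft

# Hypothetical usage: a batch of device/network state features and expert decisions.
x = torch.randn(4, 32)
y = torch.randint(0, 2, (4, NUM_TASKS)).float()
loss = distillation_loss(x, y)
loss.backward()

Sigmoid outputs give one independent offload probability per label, matching the multi-label formulation of the offloading problem; the much smaller student network is what keeps inference delay low on resource-constrained end devices.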
