Divisible Task Offloading for Multiuser Multiserver Mobile Edge Computing Systems Based on Deep Reinforcement Learning

Mobile edge computing (MEC) is a promising computing paradigm that offloads tasks to edge servers to reduce both the load on user equipment (UE) and service latency. However, blindly offloading data-intensive tasks may congest edge nodes and increase task delay. Therefore, developing a new deep reinforcement learning-based scheme is imperative for solving divisible task offloading in this end-edge-cloud orchestrated environment. First, we build a novel MEC-based state-space framework containing multiple servers and UE, and introduce a dynamic mode that divides a task into multiple subtasks executed in parallel on multiple nodes operated by multiple service providers.
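
The article gives no concrete equations for this division, but the idea of parallel subtask execution can be illustrated with a small sketch. In the Python snippet below, the Node fields (transmit_rate, cpu_rate), the proportional split vector, and all numeric values are illustrative assumptions rather than quantities from the paper; the completion time of a divided task is taken as the latency of the slowest node, since parallel subtasks finish when the last one does.

```python
# A minimal sketch (not the paper's model) of splitting a divisible task's
# input data among nodes and estimating the parallel completion time.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    transmit_rate: float  # uplink rate to this node in bits/s (local node: inf)
    cpu_rate: float       # CPU cycles/s available on this node

def parallel_latency(task_bits: float, cycles_per_bit: float,
                     nodes: list[Node], split: list[float]) -> float:
    """Latency when subtasks run in parallel: the slowest node dominates."""
    assert abs(sum(split) - 1.0) < 1e-9, "split fractions must sum to 1"
    latencies = []
    for node, share in zip(nodes, split):
        bits = share * task_bits
        t_tx = bits / node.transmit_rate                # offloading (transmission) delay
        t_cpu = bits * cycles_per_bit / node.cpu_rate   # computing delay
        latencies.append(t_tx + t_cpu)
    return max(latencies)  # the task finishes when its last subtask does

# Illustrative end-edge-cloud setup: all values are made up for the example.
nodes = [Node("UE", float("inf"), 1e9),
         Node("edge-1", 5e7, 8e9),
         Node("cloud", 1e7, 3e10)]
print(parallel_latency(4e7, 100.0, nodes, [0.2, 0.5, 0.3]))
```

Taking the max over nodes is what makes an uneven split wasteful: an offloading policy (here, the RL agent) should pick the split that balances transmission and computing delays across nodes.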

Next, we model the divisible task offloading process as a Markov decision process (MDP) and derive a sufficient condition on subtask offloading that maximizes the quality of physical experience (QoPE) of the divisible task. Finally, we propose a new self-adaptive Q-network with prioritized experience replay (SQ-PER) algorithm based on the double deep Q-network (DDQN), in which the experience replay technique and the traditional $\varepsilon$-greedy exploration strategy are optimized to improve learning efficiency and stability. Simulation results show that the SQ-PER algorithm utilizes environmental resources better and reduces task delay significantly more than other methods in complex scenarios.
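
The abstract names the building blocks of SQ-PER (double-DQN targets, prioritized experience replay, and an optimized $\varepsilon$-greedy strategy) without detailing them. The sketch below shows one standard form of each ingredient; the names (PERBuffer, ddqn_loss, epsilon) and the hyperparameters (alpha, beta, gamma, the decay schedule) are assumptions for illustration, not the paper's actual SQ-PER design.

```python
import numpy as np
import torch
import torch.nn as nn

class PERBuffer:
    """Proportional prioritized replay: transition i is drawn with P(i) ∝ p_i^alpha."""
    def __init__(self, capacity: int, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.prios = [], []

    def push(self, transition):
        if len(self.data) >= self.capacity:    # drop the oldest transition
            self.data.pop(0)
            self.prios.pop(0)
        self.data.append(transition)
        self.prios.append(max(self.prios, default=1.0))  # new samples start at max priority

    def sample(self, batch_size: int, beta: float = 0.4):
        p = np.asarray(self.prios) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), batch_size, p=p)
        w = (len(self.data) * p[idx]) ** (-beta)   # importance-sampling weights
        w /= w.max()
        return idx, [self.data[i] for i in idx], torch.as_tensor(w, dtype=torch.float32)

    def update_priorities(self, idx, td_errors):
        for i, e in zip(idx, td_errors):
            self.prios[i] = abs(float(e)) + 1e-6   # keep every priority nonzero

def ddqn_loss(online: nn.Module, target: nn.Module,
              s, a, r, s2, done, weights, gamma: float = 0.99):
    """Double-DQN target: the online net picks the next action, the target net scores it."""
    with torch.no_grad():
        a2 = online(s2).argmax(dim=1, keepdim=True)
        y = r + gamma * (1 - done) * target(s2).gather(1, a2).squeeze(1)
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    td = y - q
    return (weights * td.pow(2)).mean(), td.detach()  # weighted loss + TD errors for PER

def epsilon(step: int, eps_start=1.0, eps_end=0.05, decay=5000.0) -> float:
    """A common decaying epsilon-greedy schedule; the paper's self-adaptive rule may differ."""
    return eps_end + (eps_start - eps_end) * float(np.exp(-step / decay))
```

Decoupling action selection (online network) from action evaluation (target network) is what reduces the overestimation bias of plain Q-learning, and the importance-sampling weights correct the bias that non-uniform replay sampling would otherwise introduce.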
