LFQ: Online Learning of Per-flow Queuing Policies using Deep Reinforcement Learning

Research paper by Maximilian Bachl, Joachim Fabini, Tanja Zseby

Indexed on: 07 Jul '20
Published on: 06 Jul '20
Published in: arXiv - Computer Science - Networking and Internet Architecture


The increasing number of different, incompatible congestion control algorithms has led to increased deployment of fair queuing. Fair queuing isolates each network flow and can thus guarantee fairness for each flow even if the flows' congestion controls are not inherently fair. So far, each queue in a fair queuing system either has a fixed, static maximum size or is managed by an Active Queue Management (AQM) algorithm like CoDel. In this paper we design an AQM mechanism, the Learning Fair Qdisc (LFQ), that dynamically learns the optimal buffer size for each flow online, according to a specified reward function. We show that our deep-learning-based algorithm can dynamically assign the optimal queue size to each flow depending on its congestion control, delay and bandwidth. Compared to competing fair AQM schedulers, it provides significantly smaller queues while achieving the same or higher throughput.
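To make the mechanism concrete, the following is a minimal conceptual sketch (not the authors' implementation) of fair queuing with a per-flow maximum queue size. In LFQ the per-flow limit would be produced by the learned policy; here a hypothetical static stub stands in for it:

```python
from collections import deque, defaultdict

class PerFlowQueue:
    """Fair-queuing sketch: one FIFO per flow, each with its own
    maximum size supplied by a policy (flow_id -> max queue length).
    In LFQ this limit would come from a learned model; here it is
    a hypothetical stand-in."""

    def __init__(self, policy):
        self.policy = policy
        self.queues = defaultdict(deque)

    def enqueue(self, flow_id, packet):
        q = self.queues[flow_id]
        if len(q) >= self.policy(flow_id):
            return False  # drop: this flow's buffer is full
        q.append(packet)
        return True

    def dequeue(self):
        """Serve flows round-robin (deficit round robin omitted for
        brevity); per-flow queues are what isolate flows from each other."""
        for flow_id, q in self.queues.items():
            if q:
                return flow_id, q.popleft()
        return None

# hypothetical static policy standing in for the learned one
fq = PerFlowQueue(lambda flow_id: 3)
accepted = [fq.enqueue("flow-a", i) for i in range(5)]
print(accepted)  # → [True, True, True, False, False]
```

A learned policy would replace the constant limit with a value computed per flow from observed features (e.g. its congestion control behavior, delay and bandwidth), which is the dynamic sizing the abstract describes.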