Delay-gradient TCP congestion control (CDG) for the Linux kernel
Delay-based TCP congestion control mechanisms suffer a performance penalty when coexisting with loss-based TCP flows. This is largely because delay-based mechanisms treat an increase in delay as an indication of congestion, while loss-based mechanisms may themselves be contributing to that increase, as they tend to fill up network buffers. Several proposals have been made to tackle this issue, among them CAIA's Delay-Gradient congestion control (CDG), which considers the delay gradient rather than raw RTT estimates. CDG also features several mechanisms, such as a shadow congestion window and backoff holding, to obtain a fairer share of bandwidth when coexisting with loss-based flows. CDG has been shown to improve the end-to-end latency of TCP and of delay-sensitive applications on lossy paths, such as 802.11 networks.

While CDG was initially developed as a modular congestion control mechanism in the FreeBSD 9.0 kernel, it has not been ported to the Linux kernel. This project requires the master's student to develop (a) Linux kernel module(s) that provide the CDG algorithm for Linux-based hosts. This will facilitate wider use of CDG by Internet users and server administrators, and provide better reliability and lower latency for Internet traffic, particularly in scenarios with 802.11 end nodes.
Prerequisites:
1) good knowledge of C and shell scripting
2) familiarity with the Linux kernel
3) good knowledge of TCP congestion control mechanisms
4) the ability to conduct extensive research work, with the potential for publication
D. A. Hayes and G. Armitage, "Revisiting TCP Congestion Control using Delay Gradients," in Proc. IFIP Networking 2011, Valencia, Spain, 9-13 May 2011.