Differential privacy in online learning
Differential privacy is a formal definition of how an algorithm can protect the privacy of the individuals whose data it processes. Previous theoretical results in learning theory have shown that local differential privacy incurs a learning penalty that grows with the strength of the privacy guarantee. However, other results show that being private can aid generalisation. The purpose of this research programme is to explore these ideas in the online learning setting.
More specifically, the student is expected to critically read available literature on learning theory and generalisation, including:
- Bassily, Raef, and Kobbi Nissim et al. "Algorithmic Stability for Adaptive Data Analysis." STOC (2016).
- Cummings, Rachel et al. "Adaptive Learning with Robust Generalization Guarantees." COLT (2016).
- Duchi, John, Michael J. Wainwright, and Michael I. Jordan. "Local Privacy and Minimax Bounds: Sharp Rates for Probability Estimation." NIPS (2013).
The student is then expected to work closely with the supervisor and other members of the group to develop suitable lower and upper bounds for private online learning.