# Anders C. Hansen: Mini-course on Foundational Computational Problems in l^1 and Total Variation Regularisation, part II

This is the second of two lectures by Anders Hansen (Cambridge Univ. and UiO) on Foundational Computational Problems in l^1 and Total Variation Regularisation.

The use of regularisation techniques such as l^1 and Total Variation in Basis Pursuit and Lasso has been a great success in wide areas of mathematics and statistics. In this talk we establish universal boundaries on the existence of algorithms for solving these problems, through both positive and negative existence results. For example, we establish the following paradox: it is impossible to design algorithms that solve these regularisation problems accurately when given inaccurate input data, even when the inaccuracies can be made arbitrarily small. Indeed, if this were possible, it would imply the decidability of famous undecidable problems such as the Halting problem. Moreover, since e.g. √2 never comes with an exact numerical representation, inaccurate input data is a daily encounter. The paradox implies that any algorithm designed to solve these problems will fail in the following way: for fixed dimensions and any small accuracy parameter epsilon > 0, one can choose an arbitrarily large time T and find an input such that the algorithm runs for longer than T without reaching epsilon accuracy. Moreover, it is impossible to determine when the algorithm should halt to achieve an epsilon-accurate solution; hence the algorithm can never produce an output that is known to be at least epsilon accurate. The largest epsilon for which this failure occurs is called the Breakdown-epsilon. For Basis Pursuit and Lasso the Breakdown-epsilon is greater than 1/3, even when the input is bounded by one and well conditioned.
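For reference, the regularisation problems named above are usually stated as follows (a standard formulation, not taken from the talk itself; here A is the measurement matrix, y the data, and δ and λ the noise and regularisation parameters):

```latex
% Basis Pursuit (exact data):
\min_{x} \|x\|_1 \quad \text{subject to} \quad Ax = y

% Basis Pursuit with noise parameter \delta > 0:
\min_{x} \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \delta

% Lasso with regularisation parameter \lambda > 0:
\min_{x} \, \|Ax - y\|_2^2 + \lambda \|x\|_1
```

The paradox concerns computing minimisers of these problems when A and y are only available through arbitrarily accurate approximations.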

The paradox opens the door to a new classification theory that determines the boundaries of what computers can achieve in regularisation, and explains why many modern algorithms empirically perform very well in real-world scenarios. We provide positive classification results showing that sparse problems can be computed accurately. However, this is delicate: given standard assumptions from sparse recovery, there are algorithms that can compute a solution to Basis Pursuit accurately, yet this is impossible for Lasso and for Basis Pursuit with noise parameter delta > 0. One can, however, compute a solution accurately up to the Breakdown-epsilon, which we calculate. As we demonstrate, the secret behind the success of many modern algorithms in numerous real-world scenarios, despite the above paradoxes, lies in the Breakdown-epsilon: in many cases of interest it is sufficiently small for reliable computations.

Anders Hansen is head of the group in Applied Functional and Harmonic Analysis within the Cambridge Centre of Analysis at DAMTP. He is also Prof. II at the Institute of Mathematics, UiO.

Published Apr. 26, 2017 1:28 PM
- Last modified Apr. 26, 2017 1:38 PM