Trial lecture - time and place
28th of February, 10:15 AM, "Storstua", Simula-senteret, Martin Linges vei 25, Fornebu
"Regularization of linear inverse problems"
Successful machine learning systems require substantial computing power and large data sets containing high-quality information. While the first demand has been met by the growth of computing power over the past two decades, acquiring sufficiently large data sets remains expensive and often represents a bottleneck in practice. This thesis addresses the data bottleneck through model-based machine learning, developing computationally efficient algorithms tailored to the data-poor regime and equipped with rigorously proven performance guarantees.
Main research findings
A common task in machine learning centres around identifying functions that describe the relation between inputs and outputs in a given data set. Although often perceived otherwise, designing such an algorithm requires knowledge about the data set at hand and a degree of fine-tuning by an experienced data analyst. This is especially true if the available data set is small and the inputs are highly complex, for instance because they are embedded in a high-dimensional space. Dealing with such data sets requires model-based machine learning methods, which assume that the input-output relation can be described in a simplified fashion.
This thesis focuses on models which assume that the input-output relation depends only on a small number of linear or nonlinear transformations of the original inputs. This includes popular models such as single- and multi-index models, nonlinear generalisations thereof, and certain types of neural networks. We develop computationally efficient algorithms and provide performance guarantees through rigorous mathematical analysis. The practical usability of our methods is demonstrated through comparisons with the state of the art on real-world benchmarks.
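To illustrate the kind of model meant here, the following is a minimal NumPy sketch of a single-index model, y = g(⟨w, x⟩) + noise, where the output depends on the inputs only through one linear projection. The estimation steps (a least-squares heuristic for the index direction w, justified for Gaussian inputs by Stein's lemma, followed by a simple binned fit of the link function g) are illustrative assumptions, not the specific algorithms developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 20 dimensions, with a true index
# direction w_true supported on the first three coordinates.
n, d = 200, 20
w_true = np.zeros(d)
w_true[:3] = [0.8, -0.5, 0.3]
w_true /= np.linalg.norm(w_true)
X = rng.standard_normal((n, d))
y = np.tanh(X @ w_true) + 0.05 * rng.standard_normal(n)

# Step 1: estimate the index direction by ordinary least squares and
# normalise it (for Gaussian inputs this recovers w up to scale).
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat /= np.linalg.norm(w_hat)

# Step 2: fit the one-dimensional link function g by piecewise-constant
# (binned) regression on the projected inputs.
t = X @ w_hat
bins = np.linspace(t.min(), t.max(), 11)
idx = np.clip(np.digitize(t, bins) - 1, 0, 9)
g_hat = np.array([y[idx == k].mean() if np.any(idx == k) else 0.0
                  for k in range(10)])

# The estimated direction should align closely with the true one.
alignment = abs(w_hat @ w_true)
print(f"alignment with true direction: {alignment:.2f}")
```

The point of the sketch is the dimension reduction: although the inputs live in 20 dimensions, only 200 samples suffice because the model reduces the problem to estimating one direction and one univariate function.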
Contact information for the Department: Pernille Adine Nordby