I am a PhD student in statistics, doing research on the interesting field of sequential testing and sequential change-point detection, which is seeing a revival in the age of big data. In the classical, fixed-sample hypothesis testing setup, one assesses the amount of evidence for or against a null hypothesis from a sample of predetermined size, whereas in sequential testing, evidence is accumulated over time until a firm decision can be made. The goal is then to make this decision as quickly as possible, that is, to minimize the sample size needed to reach a decision. Change-point detection is intimately linked to hypothesis testing, as the former problem can be stated as a hypothesis test, but one is only interested in decisions against the null hypothesis of "the world is as usual". If no change occurs, as few false alarms as possible should be raised.
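To illustrate the idea of accumulating evidence until a decision, here is a minimal sketch of the classical sequential probability ratio test (SPRT) for a Gaussian mean. This is just a textbook illustration of sequential testing, not the methods studied in my PhD; the parameter values and helper name are made up for the example.

```python
import math

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1 on Gaussian data.
    Accumulates the log-likelihood ratio one observation at a time and
    stops as soon as it crosses one of two thresholds (Wald's classical
    approximations for target error rates alpha and beta).
    Returns (decision, n): 'H0'/'H1' and the sample size used,
    or (None, n) if the data ran out before a decision."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment of one Gaussian observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, len(samples)

# Toy data exactly at the alternative mean: a decision is reached after
# only a handful of observations.
print(sprt([1.0] * 20))  # → ('H1', 6)
```

The point of the sequential formulation is visible in the last line: with informative data, the test stops after six observations rather than a fixed, pre-committed sample size.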
The specific problem of interest in my PhD is the detection of sparse changes: a high-dimensional stream of data is being monitored, e.g., many measurements of a system are taken at set intervals, but only a small subset of the streams registers a change. How can you quickly detect such changes to the overall distribution of the data? I currently study how to sequentially compress the data in ways that retain the most information about changes (dimension reduction and filtering). Later, I will move on to non-parametric methods to be able to detect very general changes, not only changes in means or variances. I am equally interested in theory and application.
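As a toy illustration of the sparse setting (not the compression methods described above), one classical approach runs a CUSUM detector on each coordinate of the stream and raises an alarm when the maximum statistic crosses a threshold. The sketch below assumes a known pre- and post-change Gaussian mean; the function names and the threshold are invented for the example.

```python
def cusum_update(s, x, mu0=0.0, mu1=1.0, sigma=1.0):
    """One CUSUM recursion step for a single stream, tracking evidence
    for an upward mean shift from mu0 to mu1 in Gaussian data."""
    llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
    return max(0.0, s + llr)

def detect_sparse_change(streams, threshold=5.0):
    """Run one CUSUM per coordinate of a d-dimensional stream and flag a
    change when the maximum statistic crosses the threshold (a classical
    statistic for sparse alternatives). `streams` is a sequence of
    d-dimensional observations. Returns (alarm time, set of flagged
    coordinates), or (None, set()) if no alarm is raised."""
    s = [0.0] * len(streams[0])
    for t, xs in enumerate(streams, start=1):
        s = [cusum_update(si, xi) for si, xi in zip(s, xs)]
        if max(s) >= threshold:
            return t, {j for j, sj in enumerate(s) if sj >= threshold}
    return None, set()

# 20 streams observed for 100 time steps; only stream 7 shifts its mean
# from 0 to 1 at time 50. The detector flags exactly that stream.
pre = [[0.0] * 20 for _ in range(50)]
post = [[1.0 if j == 7 else 0.0 for j in range(20)] for _ in range(50)]
print(detect_sparse_change(pre + post))  # → (60, {7})
```

Note the two-sided nature of the difficulty: the affected subset is unknown, so the statistic must pool information across all streams while remaining sensitive when only one of them changes.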
My supervisors are Ingrid K. Glad and Nils Lid Hjort.
Prior to joining the university workforce, I was a student of mathematics, statistics and philosophy at the University of Oslo for seven years. One semester of my master's was spent abroad at the University of Utrecht.