
Events - Page 2

Time and place: , Helga Engs hus, Auditorium 3

We invite you (once again) to a two-day seminar celebrating Professor Ørnulf Borgan's many and substantial contributions to statistics in general and event history analysis in particular.

Time and place: , Erling Sverdrups plass, Zoom
Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor
Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

We introduce SMARTboost (boosting of symmetric smooth additive regression trees), a machine learning model capable of fitting complex functions in high dimensions, yet designed for good performance in small-n, low signal-to-noise environments. SMARTboost inherits many of the qualities that have made boosted trees the most widely used machine learning tool for tabular data: it automatically adjusts model complexity, handles continuous and discrete features, can capture nonlinear functions in high dimensions without overfitting, performs variable selection, and can handle highly non-Gaussian features. The combination of smooth symmetric trees and carefully designed Bayesian priors gives SMARTboost an edge (in comparison with a state-of-the-art tool like XGBoost) in most settings with continuous and mixed discrete-continuous features. Unlike other tree-based methods, it can also compute marginal effects.
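
As a point of reference for the setting described above, here is a minimal sketch of fitting the baseline the abstract compares against (XGBoost) to simulated tabular data with a smooth nonlinear signal and a low signal-to-noise ratio; the data-generating function and all parameter choices are illustrative assumptions, not taken from the talk.

```python
# Baseline sketch: fit XGBoost on simulated tabular data with a smooth
# nonlinear signal and heavy noise, the regime discussed in the abstract.
# All settings below are illustrative.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n, p = 300, 10                                        # small n, moderate dimension
X = rng.normal(size=(n, p))
signal = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]    # smooth nonlinear f(x)
y = signal + 2.0 * rng.normal(size=n)                 # low signal-to-noise ratio

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X, y)
print("in-sample R^2:", model.score(X, y))
```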

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

Maintenance plays a crucial role in ships, and especially in the vital electric propulsion system. Intelligent predictive maintenance ideally aims at preventing system failures and minimizing needless repairs, i.e., predicting failure likelihood and time to failure while providing the crew with explainable predictions and recommending the best action for timely intervention. This presentation covers work carried out in collaboration with Sensor Systems in BigInsight, in particular the paper published at https://doi.org/10.1109/TII.2022.3144177. The failure prediction approach is driven by event logs, which include warnings, alarms, and operational information describing everything that happens on board the ship. The failure prediction objective is turned into classification and regression tasks; however, the training data pose three challenges: the events are irregular textual messages, the training samples are not labelled, and the datasets are extremely imbalanced owing to sparse failure events and multiple failure modes. The problem is cast into a weakly supervised machine learning framework. In a multiple instance learning process, the unknown data labels are learned recursively while fitting the model parameters using deterministic annealing. The overall approach was tested on real ship data, and it successfully forecast a few propulsion failures with explainable causes.
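
The paper's weakly supervised, multiple-instance formulation is beyond a short snippet, but the basic step of turning an event log into a supervised failure-prediction task can be sketched as follows; the window length, feature construction, failure rate, and classifier are my own illustrative assumptions, not the method of the paper.

```python
# Illustrative sketch (not the paper's method): aggregate event-log messages
# into fixed time windows, label each window by whether a failure occurs within
# a prediction horizon, and fit a classifier. All names and rates are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
log = pd.DataFrame({
    "time": pd.date_range("2022-01-01", periods=5000, freq="min"),
    "code": rng.integers(0, 20, size=5000),           # warning/alarm codes
    "failure": rng.random(5000) < 0.003,              # sparse failure events
})

# Features: counts of each event code per one-hour window.
counts = pd.crosstab(log["time"].dt.floor("h"), log["code"])
# Labels: does any failure occur within the next 6 hours (forward-looking)?
fail_per_hour = log.groupby(log["time"].dt.floor("h"))["failure"].any().astype(int)
labels = fail_per_hour[::-1].rolling(6, min_periods=1).max()[::-1]

X = counts.values
y = labels.reindex(counts.index).fillna(0).astype(int).values
clf = GradientBoostingClassifier().fit(X, y)
print("predicted failure probabilities:", clf.predict_proba(X)[:3, 1])
```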

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

Deep learning (DL) has had unprecedented success and is now entering scientific computing with full force. However, current DL methods typically suffer from instability, even when universal approximation properties guarantee the existence of stable neural networks (NNs). In this talk we will show that there are basic well-conditioned problems in scientific computing where NNs with great approximation qualities are proven to exist, yet there does not exist any algorithm, even a randomised one, that can train (or compute) such an NN to even one digit of accuracy with probability greater than 1/2. These results provide basic foundations for Smale's 18th problem ("What are the limits of AI?") and imply a potentially vast classification theory describing conditions under which (stable) NNs with a given accuracy can be computed by an algorithm. We begin this theory by initiating a unified theory for compressed sensing and DL, leading to sufficient conditions for the existence of algorithms that compute stable NNs in inverse problems. We introduce Fast Iterative REstarted NETworks (FIRENETs), which we prove, and numerically check via suitable stability tests, are stable. The reference for this talk is https://arxiv.org/abs/2101.08286 (to appear in Proc. Natl. Acad. Sci. USA).
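
The stability tests mentioned above can be illustrated in a very simple form: probe how much a reconstruction map (here only a stand-in network with random weights) amplifies small input perturbations. This is a generic sensitivity check under my own assumptions, not the tests used in the paper.

```python
# Generic stability probe (illustrative only, not the paper's stability tests):
# estimate the worst observed amplification ||f(x + e) - f(x)|| / ||e|| over
# many random perturbations e of small fixed norm.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "network": a small random two-layer ReLU map from R^20 to R^20.
W1 = rng.normal(size=(64, 20))
W2 = rng.normal(size=(20, 64))
def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.normal(size=20)
eps = 1e-3
ratios = []
for _ in range(1000):
    e = rng.normal(size=20)
    e *= eps / np.linalg.norm(e)          # perturbation of norm eps
    ratios.append(np.linalg.norm(f(x + e) - f(x)) / eps)
print("worst observed amplification:", max(ratios))
```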

Time and place: , Erling Sverdrups plass (Niels Henrik Abels hus, 8th floor) & Zoom

Super-resolution is a hot topic in present-day machine learning. The origin of the methodology dates back to applications in seismic imaging. I discuss the evolution from the early days and highlight some papers that have given new theoretical insights along the way. I illustrate the bridge between traditional convex optimization and present-day convolutional neural nets, and along the way I show some examples where we have used this for current applications in seismic imaging.
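
As a minimal illustration of the convex-optimization end of that bridge, the sketch below recovers a sparse spike train from a blurred, noisy observation by solving an l1-regularised least-squares problem with ISTA (iterative soft-thresholding); the blur kernel, signal, and penalty are invented for the example and are not taken from the talk.

```python
# Sparse spike deconvolution via ISTA: minimise 0.5*||A w - y||^2 + lam*||w||_1.
# All problem settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
w_true = np.zeros(n)
w_true[rng.choice(n, size=5, replace=False)] = 5.0 * rng.normal(size=5)

# A applies a Gaussian blur (convolution), a classic super-resolution setting.
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
y = A @ w_true + 0.1 * rng.normal(size=n)

lam = 0.5
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
w = np.zeros(n)
for _ in range(500):                       # ISTA iterations
    g = A.T @ (A @ w - y)                  # gradient of the quadratic term
    z = w - g / L
    w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding
print("recovered support:", np.nonzero(np.abs(w) > 1e-3)[0])
```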

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

For many real-life phenomena one may assume that the units of observation, typically patients, transition through a set of discrete states on their way towards an absorbing state. The states often constitute various stages of a disease, for example from perfect health through various stages of dementia. Multi-state models are a class of statistical models which allow us to study the time spent in different states, the probability of transitioning between states, and the relationship between these quantities and covariates of interest. In many applications the transition times between states are not observed exactly; instead, the current state of the patients is queried at arbitrary times. The transition times are therefore interval censored, and this makes inference and modelling challenging. Most current approaches are based on the Markov assumption, for example the simplest parametric model available, the time-homogeneous Markov model. Here, we propose a new, general framework for parametric inference with interval-censored multi-state data. Our models allow non-Markovian behaviour. I will present the framework and an algorithm for the automatic construction of the likelihood function, along with real-data examples. This talk is based on joint work with Marthe Aastveit and Nils Lid Hjort.
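
For the simplest parametric model mentioned above, the time-homogeneous Markov model, the likelihood contribution of an interval-censored observation pair is simply an entry of the transition probability matrix P(t) = exp(tQ). The sketch below computes such a likelihood for an invented three-state intensity matrix; it covers only this textbook Markov case, not the non-Markovian framework of the talk.

```python
# Time-homogeneous Markov multi-state model with interval-censored data:
# the likelihood is a product of entries of P(t) = expm(t * Q).
# The intensity matrix Q and the observations below are invented examples.
import numpy as np
from scipy.linalg import expm

# Three states: 0 = healthy, 1 = ill, 2 = dead (absorbing).
Q = np.array([[-0.15,  0.10,  0.05],
              [ 0.00, -0.20,  0.20],
              [ 0.00,  0.00,  0.00]])

# Each record: state at the previous visit, state at the current visit, elapsed time.
observations = [(0, 0, 1.0), (0, 1, 2.0), (1, 2, 0.5), (0, 2, 3.0)]

log_lik = 0.0
for s_from, s_to, dt in observations:
    P = expm(Q * dt)                       # transition probabilities over dt
    log_lik += np.log(P[s_from, s_to])
print("log-likelihood:", log_lik)
```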

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

Cards are drawn, one at a time, with replacement, from a deck of n cards. I study the total time W_n needed until we have seen all n cards, via different perspectives, along with a Gumbel limiting distribution. Various non-trivial identities, involving different perspectives for moments and Laplace transforms, are found as corollaries. These findings are also used to estimate the number of different cards, if unknown. If I needed to sample 133 words from a document before I had seen 50 different words, what is the vocabulary size of the document? How many words did Shakespeare know (including those he never used in his writing)?

An Abels Tårn podcast about some of these themes, which attracted a fair number of inspired comments and guesses from the public (specifically, on finding the mean of W_n above for the case of n = 52 cards), can be found on the Abels Tårn website (July 2021), as a conversation with Torkild Jemterud, Jo Røislien, and myself.
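
For reference, the classical coupon-collector answers to the n = 52 question can be written down directly; the display below is my summary of these standard results, not a formula taken from the talk or the podcast.

```latex
% Expected waiting time until all n cards have been seen:
\[
  \mathrm{E}\,W_n \;=\; n \sum_{k=1}^{n} \frac{1}{k} \;=\; n\,H_n,
  \qquad \text{so for } n = 52:\quad \mathrm{E}\,W_{52} = 52\,H_{52} \approx 236 .
\]
% Gumbel limit: for any real c,
\[
  \Pr\bigl( W_n \le n\log n + c\,n \bigr) \;\longrightarrow\; \exp\!\bigl(-e^{-c}\bigr)
  \quad \text{as } n \to \infty .
\]
```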

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

The talk is elementary and discusses empirical modelling of single variables, with insurance losses as example. In such cases there is little or no theory to go on, and in many situations the amount of data is quite scarce. Why do we so often limit ourselves to fitting two-parameter families? It will be suggested that it may be a good idea to work with more flexible models with three or four parameters, and that this may provide a nice framework for automating the entire procedure so that the computer can work alone. Sure, with little data the parameters may be unstably estimated, but that may not apply equally to the distributions they define. Many-parameter families suitable for insurance losses will be reviewed, with some simple asymptotics in an example where this is possible, and with Monte Carlo to throw light on the issue in other cases.
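
As a small illustration of the two-parameter versus many-parameter question, the sketch below fits a two-parameter gamma and a three-parameter Burr family to simulated losses and compares them by AIC; the simulated data and the particular families are my own choices, not those of the talk.

```python
# Illustrative comparison (not from the talk): fit a two-parameter and a
# three-parameter family to simulated insurance-like losses, compare by AIC.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
losses = stats.lognorm.rvs(s=1.0, scale=10.0, size=150, random_state=rng)  # toy data

def aic(dist, data):
    # Maximum likelihood fit with the location fixed at 0 for positive losses.
    params = dist.fit(data, floc=0)
    k = len(params) - 1                     # number of free parameters
    return 2 * k - 2 * np.sum(dist.logpdf(data, *params))

print("gamma  (2 free parameters):", aic(stats.gamma, losses))
print("burr12 (3 free parameters):", aic(stats.burr12, losses))
```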

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

In this talk I will discuss the variational form of Bayes' theorem by Zellner (1988). This result is the rationale behind the variational (approximate) inference scheme, although this is not always made clear in modern presentations. I will discuss two applications of this result. First, I will show how to do a low-rank mean correction within the INLA framework (with amazing results), which is essential for the next generation of the R-INLA software currently in development. Second, I will introduce the Bayesian learning rule, which unifies many machine-learning algorithms from fields such as optimization, deep learning, and graphical models. These include classical algorithms such as ridge regression, Newton's method, and the Kalman filter, as well as modern deep-learning algorithms such as stochastic gradient descent, RMSprop, and dropout.
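
For readers unfamiliar with Zellner's result, it states that the exact posterior is the solution of a variational optimisation problem; the display below is the standard textbook form of this identity, and restricting the minimisation to a tractable family of densities q gives the usual variational (approximate) inference scheme.

```latex
% Zellner (1988): the exact posterior solves a variational problem over densities q.
\[
  p(\theta \mid y) \;=\; \arg\min_{q}
  \Bigl\{ \mathrm{E}_{q}\bigl[ -\log p(y \mid \theta) \bigr]
          + \mathrm{KL}\bigl( q \,\|\, p(\theta) \bigr) \Bigr\}.
\]
```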

The first part of the talk is based on our recent research at KAUST, while the second part is based upon arxiv.org/abs/2107.04562 with Dr. Mohammad Emtiyaz Khan, RIKEN Center for AI Project, Tokyo.

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

Marginal maximum likelihood estimation of longitudinal latent variable models for ordinal observed variables is challenging due to the high latent dimensionality required to accurately model residual dependencies for repeated measurements. We use second-order Laplace approximations to the high-dimensional integrals in the marginal likelihood function for longitudinal item response theory models and implement an efficient estimation method based on the approximations. The method is illustrated with items from the Montreal Cognitive Assessment, administered at four time points in a Hong Kong study of aging and well-being. We discuss the limitations of the proposed estimation method and outline a potential extension to the approach that uses a dimension-reduction technique.
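
The first-order Laplace approximation underlying this approach replaces each latent-variable integral by a Gaussian integral around the mode; the second-order version used in the talk adds higher-order correction terms, which are not reproduced here.

```latex
% First-order Laplace approximation to a d-dimensional latent-variable
% integral, with h(u) the joint log-density and \hat{u} its mode:
\[
  \int e^{h(u)}\, du \;\approx\;
  (2\pi)^{d/2}\, \bigl| -h''(\hat{u}) \bigr|^{-1/2}\, e^{h(\hat{u})},
  \qquad \hat{u} = \arg\max_{u} h(u),
\]
% where h''(\hat{u}) denotes the Hessian of h evaluated at the mode.
```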

Time and place: , Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor

Template Model Builder (TMB) is an R package well suited for performing fast inference with latent Gaussian models when the likelihood can be written as a three times differentiable function. TMB automatically differentiates the likelihood and utilizes Markov structures to efficiently integrate over latent variables with the Laplace approximation. In this seminar I will first introduce TMB and then elaborate on a fish stock assessment model, implemented with TMB, that provides quota advice for approximately 25 different fish species in Europe.
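
A one-dimensional numerical caricature of the Laplace approximation step that TMB automates: maximise the joint log-density over the latent variable, then apply the Gaussian approximation around the mode. The toy model below (a Gaussian latent effect with a Poisson observation) and all numbers are invented for illustration; TMB does this with automatic differentiation and sparse matrices for many latent variables at once.

```python
# One-dimensional caricature of the Laplace approximation that TMB automates:
# approximate log ∫ exp(h(u)) du for a toy Poisson model with a Gaussian
# latent effect u. The model and the numbers are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm, poisson

y, sigma_u = 7, 1.0                                  # toy datum and prior sd

def h(u):                                            # joint log-density log p(y, u)
    return poisson.logpmf(y, mu=np.exp(u + 1.0)) + norm.logpdf(u, scale=sigma_u)

u_hat = minimize_scalar(lambda u: -h(u)).x           # mode of the joint density
eps = 1e-4
hess = (h(u_hat + eps) - 2 * h(u_hat) + h(u_hat - eps)) / eps**2   # numerical h''(u_hat)

log_marginal = 0.5 * np.log(2 * np.pi) - 0.5 * np.log(-hess) + h(u_hat)
print("Laplace log-marginal-likelihood:", log_marginal)
```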

Time and place: , Niels Henrik Abels hus, 8th floor

William Robert Paul Denault (Department of Genetics and Bioinformatics, Norwegian Institute of Public Health) will give a talk on December 8th at 14:15 (held with restricted attendance in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor and streamed in Zoom - the link will be sent by mail one day in advance).

Time and place: , Niels Henrik Abels hus, 8th floor

Carla Janaina Ferreira (DNV GL) will give a talk on November 24th at 14:15 (held with restricted attendance in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor and streamed in Zoom - the link will be sent by mail one day in advance).

Time and place: , Niels Henrik Abels hus, 8th floor

Steffen Grønneberg (Department of Economics, BI Norwegian Business School) will give a talk on November 10th at 14:15 (held with restricted attendance in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor and streamed in Zoom - the link will be sent by mail one day in advance).

Time and place: , Zoom

Benjamin Kedem (Department of Mathematics, University of Maryland, USA) will give a talk on October 27th at 14:15 in Zoom (the link will be sent by mail one day in advance).

Time and place: , Niels Henrik Abels hus, 8th floor

Christian Page (Department of Mathematics, University of Oslo) will give a talk on October 13th at 14:15 (held with restricted attendance in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor and streamed in Zoom - the link will be sent by mail one day in advance).

Time and place: , Zoom

Mette Langaas (Department of Mathematical Sciences, Norwegian University of Science and Technology) will give a talk on May 19th at 14:15 in Zoom (the link to the event will be available soon).

Time and place: , Zoom

Heidi Seibold (Department of Statistics, Ludwig-Maximilians-Universität of Munich, GER) will give a talk on May 5th at 14:15 in Zoom https://uio.zoom.us/j/66792241824.

Time and place: , Zoom

Antonio Canale (Department of Statistical Sciences, University of Padova, ITA) will give a talk on April 21st at 14:15 in Zoom (https://uio.zoom.us/j/66042762975).

Time and place: , Niels Henrik Abels hus, 8th floor

Due to travel restrictions related to the spread of Covid-19, the talk by Lokukaluge Prasad Perera (Department of Technology and Safety, Arctic University of Norway), originally scheduled for March 16th at 14:15 in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor, has been CANCELLED.

Time and place: , Niels Henrik Abels hus, 8th floor

Morten Hjorth-Jensen (Department of Physics, University of Oslo and Department of Physics and Astronomy, Michigan State University, USA) will give a talk on March 3rd at 14:15 in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor.

Time and place: , Niels Henrik Abels hus, 8th floor

Haakon Bakka (formerly Computer Electrical Mathematical Science and Engineering Division, King Abdullah University of Science and Technology, KSA, from 01.02.2020 Department of Mathematics, University of Oslo) will give a talk on February 25th at 14:15 in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor.

Time and place: , Niels Henrik Abels hus, 8th floor

Johan Pensar (formerly Department of Mathematics and Statistics, University of Helsinki, from 01.02.2020 Department of Mathematics, University of Oslo) will give a talk on February 11th at 14:15 in the Erling Sverdrups plass, Niels Henrik Abels hus, 8th floor.