- Ellefsen, Kai Olav (2023). Evolutionary Robotics.
- Ellefsen, Kai Olav (2023). Hva er Kunstig Intelligens?
- Ellefsen, Kai Olav (2023). Kunstig intelligens: Verden sett gjennom en maskins øyne.
- Ellefsen, Kai Olav (2023). More human robot brains with inspiration from biology, psychology and neuroscience.
- Strand, Ørjan; Reilstad, Didrik Spanne; Wu, Zhenying; Castro da Silva, Bruno; Torresen, Jim & Ellefsen, Kai Olav (2023). Learning When to Think Fast and When to Think Slow.
- Bogen, Annika Celin & Ellefsen, Kai Olav (2023). Hvorfor mener noen at kunstig intelligens er farlig? [Internet]. ung.forskning.no.
- Reilstad, Didrik Spanne; Strand, Ørjan; Wu, Zhenying; Castro da Silva, Bruno; Torresen, Jim & Ellefsen, Kai Olav (2022). RADAR: Reactive and Deliberative Adaptive Reasoning – Learning When to Think Fast and When to Think Slow.
- Ellefsen, Kai Olav (2022). Towards more Human Robot Brains.
Abstract:
Despite many large breakthroughs in Artificial Intelligence in the last decade, robots are still struggling to solve tasks that we as humans take for granted, like emptying a dishwasher or learning multiple skills in a sequence.
In this talk, Kai Olav Ellefsen presents work that he and colleagues in the Robotics and Intelligent Systems (ROBIN) group at the University of Oslo have done toward making robots more robust and better learners, taking inspiration from how humans and animals learn.
- Gorton, Patrick Ribu & Ellefsen, Kai Olav (2020). Evaluating Predictive Deep Learning Models.
- Ellefsen, Kai Olav (2020). Hva kan intelligente maskiner lære av biologisk liv? Biolog. ISSN 0801-0722. pp. 16–19.
- Ellefsen, Kai Olav & Time, Jon Kåre (2020). Han lærer roboter hvordan verden fungerer. [Newspaper]. https://morgenbladet.no/aktuelt/2020/02/han-laerer-roboter-h.
Abstract:
Kai Olav Ellefsen is irritated by the many exaggerations about what artificial intelligence can accomplish. He himself is trying to teach machines to foresee the consequences of their actions.
- Ellefsen, Kai Olav; Nygaard, Tønnes Frostad & Kjørstad, Elise (2020). Her er den første roboten som er laget av levende celler. [Internet]. forskning.no.
- Ellefsen, Kai Olav (2019). NAIS - Norwegian Artificial Intelligence Society.
Abstract:
The Norwegian Artificial Intelligence Society (NAIS) was established in October 1985 as a representative body for the Norwegian Artificial Intelligence community. Its aim is to promote the study, research and application of Artificial Intelligence in Norway. NAIS is a member of the European Association for Artificial Intelligence (EurAI).
NAIS is a nonprofit entity representing AI professionals, organisations, companies and academic institutions. Our aim is to serve as a focal point of interaction and cooperation among NAIS’s members and the Norwegian public. NAIS is a point of contact and forum facilitating communication between parties interested in AI.
NAIS members commit themselves to promoting the values of Norwegian society through ensuring responsible use of AI research and applications.
- Tørresen, Jim; Glette, Kyrre & Ellefsen, Kai Olav (2019). Intelligent, Adaptive Robots in Real-World Scenarios.
- Tørresen, Jim; Glette, Kyrre & Ellefsen, Kai Olav (2019). Adaptive Robot Body and Control for Real-World Environments.
- Ellefsen, Kai Olav (2019). Hva Kan Roboter Lære av Biologisk Liv?
- Ellefsen, Kai Olav & Tørresen, Jim (2019). Evolutionary Robotics: Automatic design of robot bodies and control.
- Nygaard, Tønnes Frostad; Nordmoen, Jørgen Halvorsen; Ellefsen, Kai Olav; Martin, Charles Patrick; Tørresen, Jim & Glette, Kyrre (2019). Experiences from Real-World Evolution with DyRET: Dynamic Robot for Embodied Testing.
- Nordmoen, Jørgen Halvorsen; Nygaard, Tønnes Frostad; Ellefsen, Kai Olav & Glette, Kyrre (2019). Evolved embodied phase coordination enables robust quadruped robot locomotion.
- Teigen, Bjørn Ivar; Ellefsen, Kai Olav & Tørresen, Jim (2019). A Categorization of Reinforcement Learning Exploration Techniques Which Facilitates Combination of Different Methods.
- Ellefsen, Kai Olav & Tørresen, Jim (2019). Self-Adapting Goals Allow Transfer of Predictive Models to New Tasks.
- Ellefsen, Kai Olav; Huizinga, Joost & Tørresen, Jim (2019). Guiding Neuroevolution with Structural Objectives.
- Ellefsen, Kai Olav & Tørresen, Jim (2018). Evolutionary Robotics: Automatic design of robot controllers and bodies.
- Ellefsen, Kai Olav (2018). Evolusjonær Robotikk: Automatisk design og kontroll av roboter.
- Søyseth, Vegard Dønnem; Nygaard, Tønnes Frostad; Martin, Charles Patrick; Uddin, Md Zia & Ellefsen, Kai Olav (2018). ROBIN-Stand ved Cutting Edge 2018.
- Tørresen, Jim; Garcia Ceja, Enrique Alejandro; Ellefsen, Kai Olav & Martin, Charles Patrick (2018). Equipping Systems with Forecasting Capabilities.
- Garcia Ceja, Enrique Alejandro; Ellefsen, Kai Olav; Martin, Charles Patrick & Tørresen, Jim (2018). Prediction, Interaction, and User Behaviour.
Abstract:
The goal of this tutorial is to apply predictive machine learning models to human behaviour through a human computer interface. We will introduce participants to the key stages for developing predictive interaction in user-facing technologies: collecting and identifying data, applying machine learning models, and developing predictive interactions. Many of us are aware of recent advances in deep neural networks (DNNs) and other machine learning (ML) techniques; however, it is not always clear how we can apply these techniques in interactive and real-time applications. Apart from well-known examples such as image classification and speech recognition, what else can predictive ML models be used for? How can these computational intelligence techniques be deployed to help users?
In this tutorial, we will show that ML models can be applied to many interactive applications to enhance users’ experience and engagement. We will demonstrate how sensor and user interaction data can be collected and investigated, modelled using classical ML and DNNs, and where predictions of these models can feed back into an interface. We will walk through these processes using live-coded demonstrations with Python code in Jupyter Notebooks so participants will be able to see our investigations live and take the example code home to apply in their own projects.
Our demonstrations will be motivated by examples from our own research in creativity support tools, robotics, and modelling user behaviour. In creativity, we will show how streams of interaction data from a creative musical interface can be modelled with deep recurrent neural networks (RNNs). From this data, we can predict users’ future interactions, or the potential interactions of other users. This enables us to “fill in” parts of a tablet-based musical ensemble when other users are not available, or to continue a user’s composition with potential musical parts. In user behaviour, we will show how smartphone sensor data can be used to infer user contextual information such as physical activities. This contextual information can be used to trigger interactions in smart home or internet of things (IoT) environments, to help tune interactive applications to users’ needs, or to help track health data.
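As a rough illustration of the predictive-interaction loop described above, the sketch below trains a predictor on a stream of interaction events and uses it to predict the next one. The tutorial itself uses recurrent neural networks; this dependency-free frequency model, and the event names in it, are illustrative assumptions only.

```python
from collections import Counter, defaultdict

class NextEventPredictor:
    """Minimal stand-in for the predictive models in the tutorial.
    The tutorial uses RNNs; a first-order frequency model keeps
    this sketch dependency-free."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, events):
        # Learn which event tends to follow which.
        for prev, nxt in zip(events, events[1:]):
            self.transitions[prev][nxt] += 1

    def predict(self, last_event):
        # Return the most likely next interaction event, or None.
        counts = self.transitions.get(last_event)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# Hypothetical stream of touch events from a musical interface.
stream = ["tap", "swipe", "tap", "swipe", "tap", "hold"]
model = NextEventPredictor()
model.observe(stream)
print(model.predict("tap"))  # "swipe" follows "tap" twice, "hold" once
```

A real predictive interface would feed such predictions back into the UI, e.g. to "fill in" a missing ensemble member as described above.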
- Ellefsen, Kai Olav (2017). Internal Models for Adaptation and Prediction.
- Ellefsen, Kai Olav (2017). Automating Robot Design with Evolutionary Algorithms.
Abstract:
The world around us is full of creatures with remarkable abilities to display intelligent, adaptive and robust behaviors. Ranging from the reasoning capabilities of the human brain to the robust, efficient running pattern of a cheetah, nature is filled with impressive solutions that we have so far not been able to reproduce in robots or computers. Evolutionary robotics (ER) aims to automatically generate robust robots and control algorithms by taking inspiration from biological evolution. Simulating the process of natural selection, ER optimizes robots over hundreds or thousands of generations, resulting in a final solution adapted to the problem at hand. This talk will give an introduction to evolutionary algorithms and evolutionary robotics, and give examples of recent developments, including work from the Robotics and Intelligent Systems (ROBIN) research group at the University of Oslo.
- Ellefsen, Kai Olav (2017). Evolutionary Robotics: Automatic design of robot bodies and control.
Abstract:
The world around us is full of creatures with remarkable abilities to display intelligent, adaptive and robust behaviors. Ranging from the reasoning capabilities of the human brain to the robust, efficient running pattern of a cheetah, nature is filled with impressive solutions that we have so far not been able to reproduce in robots or computers.
Evolutionary robotics (ER) aims to automatically generate robust robots and control algorithms by taking inspiration from biological evolution. Simulating the process of natural selection, ER optimizes robots over hundreds or thousands of generations, resulting in a final solution adapted to the problem at hand. This talk will give an introduction to evolutionary algorithms and evolutionary robotics, and give examples of recent developments, including work from the Robotics and Intelligent Systems (ROBIN) research group at the University of Oslo.
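The loop the abstract describes, selection and mutation over hundreds or thousands of generations, can be sketched in a few lines. This is a minimal illustration, not an actual ER setup: the `toy_fitness` function, population size, and mutation rate are arbitrary stand-ins for evaluating a robot in a physics simulator.

```python
import random

def evolve(fitness, genome_len=5, pop_size=20, generations=100, seed=1):
    """Minimal evolutionary loop: a population of candidate solutions
    is ranked, truncated, and refilled with mutated copies of the
    survivors, generation after generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness; keep the better half (truncation selection).
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill the population with mutated copies of survivors.
        children = [[g + rng.gauss(0, 0.1) for g in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy stand-in fitness: prefer genomes close to an arbitrary target,
# where a real ER fitness would score simulated robot behavior.
target = [0.5, -0.2, 0.8, 0.0, -0.6]
toy_fitness = lambda g: -sum((a - b) ** 2 for a, b in zip(g, target))
best = evolve(toy_fitness)
```

After 100 generations the best genome sits close to the target, mirroring how ER converges on "a final solution adapted to the problem at hand".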
- Ellefsen, Kai Olav (2010). A Genetic Algorithm for Flexible Robotic Planning.
- Kocan, Danielius & Ellefsen, Kai Olav (2023). Attention-Guided Explainable Reinforcement Learning: Key State Memorization and Experience-Based Prediction. Universitetet i Oslo.
- Taye, Eyosiyas Bisrat & Ellefsen, Kai Olav (2023). Accountability Module: Increasing Trust in Reinforcement Learning Agents. Universitetet i Oslo.
Abstract:
Artificial intelligence systems require users' trust to be fully utilised, and users need to feel safe while using them. Trust, and indirectly a sense of safety, has been overlooked in the pursuit of more accurate, better-performing black-box models. The field of Explainable Artificial Intelligence, along with current recommendations and regulations around artificial intelligence, demands more transparency and accountability from governmental and private institutions. A self-explainable AI that solves a problem while explaining its own reasoning is challenging to develop; even then, it could not explain other AIs that lack self-explanatory abilities, and it would likely not transfer to different problem domains and tasks without extensive knowledge of the model. The solution proposed in this thesis is the Accountability Module: an external explanatory module designed to work with different AI models across different problem domains. The prototype was inspired by accident investigations of autonomous vehicles and was implemented for a simplified simulation of vehicles driving on a highway. Its goal was to help an investigator understand why a vehicle crashed. The Accountability Module identified the main factors in the decision that resulted in an accident. It could also help answer whether the outcome was avoidable and whether there were inconsistencies in the agent's logic, by examining different cases against each other. The prototype provided useful explanations and assisted investigators in understanding and troubleshooting agents. The thesis and the Accountability Module indicate that a similar explanatory module is a promising direction to explore further. The chosen explainability methods and techniques were closely tied to the problem domain and limited by the scope of the thesis.
Therefore, more extensive tests of the prototype on different problems are needed to check the system's robustness and versatility, as well as the significance of the results. Nevertheless, in a collaboration between an Accountability Module expert and a domain expert, I expect a modular explainability solution to create more insight into an AI model and its significant incidents.
- Ellefsen, Kai Olav & Phan, Tommy (2022). Exploring the Potential of Hierarchical Quality-Diversity Algorithms for Robot Navigation. Universitetet i Oslo.
- Høvin, Mats Erling; Ellefsen, Kai Olav & Skjeltorp, Ole Edvin (2022). 3D Neural Cellular Automata: Simulating morphogenesis – shape, color and behavior of three-dimensional structures. Universitetet i Oslo.
Abstract:
Plants, fungi, humans and all other multicellular organisms go through the same process of growing step by step. Starting as a single cell with a genome containing all the genetic information of the organism, they grow into the shape encoded in their genome with stunning accuracy. Not only do they grow into a shape, but into a complex composition of cell types. Organisms also know when to stop growing, and some can even regrow damaged cells. The study of this process, called developmental biology, can provide insights useful for a range of disciplines, such as medicine and artificial intelligence. Computer science has a long history of benefiting from mimicking models of biology, and modern computing power provides tools to simulate biological models in ways that may benefit both fields. Simulations can provide insights and observations that are hard to obtain otherwise.
This thesis contributes to the tools capable of providing such insights, and aims to simulate morphogenesis by growing a single cell into a three-dimensional colored shape. The framework extends recent work simulating 2D morphogenesis using machine learning combined with an abstract computational system called cellular automata (CA). In addition to the added dimensionality, we further extend the framework and propose a novel solution allowing guidance of the morphogenesis through certain checkpoints during training. We also experiment with a novel approach of training a simple 3D model to exhibit an oscillating motion, with promising results laying the foundation for future work going beyond simulation of morphogenesis alone. A formula for estimating a hyperparameter, the minimum number of updates a CA needs during training, is derived to provide a basis for future work on 3D neural cellular automata (3D NCA).
The framework is successfully adapted to the higher dimensionality, and three-dimensional morphogenesis is simulated with high precision on a range of models covering different geometrical challenges. Both shape and color are correctly grown from a single cell; smaller models are indistinguishable from their targets, while larger models tend to have a few cells misplaced. We observe a significant increase in computational cost with the three-dimensional simulations, indicating that optimisation measures would be critical when using the framework for large-scale simulations. In terms of simulating morphogenesis, the framework matches the performance of similar work published while this thesis was written, in this relatively narrow but fast-evolving field.
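The core growth mechanic the abstract describes, a single seed cell growing through repeated local update rules, can be illustrated with a hand-written cellular automaton. This is a sketch only: the thesis uses a *learned* update rule (a small neural network applied identically to every cell) with continuous cell states, not the fixed binary rule below.

```python
def grow_step(grid):
    """One update of a toy growth CA: a dead cell becomes alive when
    it has at least one living neighbour. A neural CA replaces this
    hand-written rule with a learned network, so the pattern that
    grows can be trained toward a target shape."""
    h, w = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                continue
            neighbours = sum(
                grid[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            )
            if neighbours > 0:
                nxt[y][x] = 1
    return nxt

# Growth from a single seed cell, as in the morphogenesis experiments.
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1
for _ in range(2):
    grid = grow_step(grid)
print(sum(map(sum, grid)))  # 25: the whole 5x5 grid is alive after two steps
```

The thesis extends this kind of update from 2D grids to 3D voxel grids, which is where the reported increase in computational cost comes from: the number of cells updated per step grows with the extra dimension.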
- Roa Gran, Kristian & Ellefsen, Kai Olav (2022). Learning to drive by predicting the future: Direct Future Prediction. Universitetet i Oslo.
Abstract:
The use of artificial intelligence in systems for autonomous vehicles is growing in popularity [1, 2]. Following the rapid development of deep learning techniques over the past years, reinforcement learning has made it possible to automate the learning of prediction abilities. Controlling an autonomous vehicle with reinforcement learning is typically done either by learning a direct mapping from observations to actions, or by learning a model of the environment and using the model to make decisions. Model-free approaches have previously seen the most success, as errors can easily propagate in an inaccurate model.
The predictive reinforcement learning algorithm «Direct Future Prediction» (DFP) won the Visual Doom AI competition in 2016 with results 50% better than the second-best submission [3]. By learning a simpler model of the environment that focuses on only a few measurable quantities, this approach can efficiently solve challenging control tasks. Prior to this thesis, the utility of the method had not been tested on relevant real-world tasks, such as sensorimotor control of an autonomous vehicle.
DFP is tested on a variety of traffic scenarios with the aim of investigating the potential of predictive deep learning algorithms to learn to control an autonomous vehicle. The more classical reinforcement learning algorithm «Deep Q-Networks» (DQN) is also trained and tested on the same scenarios, and is used as a reference for judging the performance of DFP. Experiments are conducted in a more difficult version of the CarRacing simulator from OpenAI Gym, where DQN has previously performed well [4, 5].
DFP is able to solve all of the traffic scenarios in the conducted experiments, and can drive around a challenging track while avoiding cleverly placed obstacles. The performance of DFP is equal to or better than that of DQN in every experiment. The driving style of the DFP agents is calm and controlled, which is further highlighted by the sporadic driving of the DQN agents. Performance is also strong in previously unseen environments, indicating that the method has good generalization abilities, which is further illuminated by visualizing DFP's future predictions.
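The decision rule at the heart of DFP, predicting future measurements for each candidate action and scoring them against a goal vector, can be sketched as follows. The predictor and the measurement names below are hypothetical stand-ins for the trained deep network described in the thesis.

```python
def dfp_choose_action(predict, observation, actions, goal_weights):
    """Core of Direct Future Prediction: for each candidate action,
    predict a vector of future measurement changes, score it against
    the current goal vector, and act greedily. `predict` stands in
    for the learned network."""
    def score(action):
        predicted = predict(observation, action)
        return sum(w * m for w, m in zip(goal_weights, predicted))
    return max(actions, key=score)

# Hypothetical hand-written predictor for illustration only; in the
# thesis, predictions come from a network trained on driving data.
def toy_predict(obs, action):
    table = {
        "accelerate": (1.0, -0.2),  # (progress, safety) changes
        "brake":      (-0.5, 0.8),
        "steer":      (0.2, 0.3),
    }
    return table[action]

acts = ["accelerate", "brake", "steer"]
print(dfp_choose_action(toy_predict, None, acts, (0.1, 1.0)))  # brake
print(dfp_choose_action(toy_predict, None, acts, (1.0, 0.1)))  # accelerate
```

Because the goal vector enters only at decision time, the same predictive model can serve different objectives, a safety-weighted goal picks braking while a progress-weighted goal picks accelerating, without retraining.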
- Ølberg, Eirik & Ellefsen, Kai Olav (2022). CHILD: Predicting simulation outcome in open-ended co-evolution. Universitetet i Oslo.
Abstract:
In this thesis, we present CHILD, a method for reducing computational costs by predicting simulation outcomes in problem-solution co-evolution and open-ended learning. In recent years, open-ended evolutionary algorithms have shown great promise in producing complex and generalized solutions to difficult and unseen problems. This branch of evolutionary algorithms is characterized by its ability to innovate indefinitely, often by finding and solving ever-changing new problems. But as with any artificial intelligence method, these algorithms are hindered by the computational resources they demand. CHILD detects the difficulty of a task and predicts the outcome of simulations. In doing so, it attempts to reduce the cost of the algorithm by omitting simulations where the result would be either a severe failure or an effortless success. Since open-ended algorithms are designed to innovate indefinitely, they are greatly limited by the computational resources they consume, which are in turn tied to the efficiency of the algorithm; alleviating this is the primary motivation for the CHILD method.
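The gating idea can be sketched as a confidence filter in front of the simulator: only uncertain cases pay the simulation cost. The probability thresholds below are illustrative assumptions, not values from the thesis.

```python
def should_simulate(predicted_success_prob, lo=0.05, hi=0.95):
    """Sketch of the gating idea behind CHILD: skip the expensive
    simulation when a learned predictor is confident the outcome is
    either a severe failure (probability below `lo`) or an effortless
    success (probability above `hi`). Thresholds are assumptions."""
    return lo <= predicted_success_prob <= hi

# Only uncertain problem-solution pairs reach the simulator.
probs = [0.01, 0.3, 0.5, 0.97, 0.8]
to_run = [p for p in probs if should_simulate(p)]
print(to_run)  # [0.3, 0.5, 0.8]
```

In an open-ended loop, the skipped evaluations would be assigned their predicted outcome directly, so the algorithm keeps innovating while spending simulation time only where the outcome is genuinely in doubt.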
- Bordvik, David Andreas; Ellefsen, Kai Olav & Riemer-Sørensen, Signe (2022). Forecasting regulation market balancing volumes from market data and weather data using Deep Learning and Transfer Learning. Universitetet i Oslo.
Abstract:
The energy and power sector is a major contributor to our society and our high living standards. In recent times the power sector has gained complexity while undergoing significant changes, with the increased share of renewable production being one of the contributors. A larger portion of renewable contributors in the power mix, e.g. from wind power, results in more volatile power production, increasing the need for grid balancing and making the regulating power market more challenging for power producers to participate in. The purpose of the regulating power market is to compensate for the gap between the planned production settled in the day-ahead market and the actual production and demand. The ability to forecast regulating power volumes and prices some hours ahead of the hour in which they are actually traded would enable power producers to balance their market positions more optimally. This project exploits historical regulation data together with market data and weather data to train deep learning models that forecast future regulation volumes. A thorough time-series analysis of regulating power volumes revealed some predictive potential. Furthermore, a Bidirectional LSTM showed satisfactory results when forecasting up to four hours into the future using data from 2016 to 2021. No previous research was found that uses more than two years of data or recent data, and no previous work has utilized deep learning to forecast the Norwegian regulation market volumes. Additionally, this project performed a deep analysis of topographical weather images and transfer learning to evaluate the potential of predicting regulating power volumes from weather images. Different weather forecasts, actual weather, and weather uncertainties were all utilized. The weather data was generally not found to have a considerable direct influence on regulation volumes. However, the weather is expected to have an increasing influence in the future as more volatile renewable power production enters the power markets. No previous research has been found that investigates weather images in the context of the regulation market.
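The forecasting task described above is typically framed as supervised learning over sliding windows of the time series; the sketch below shows only that framing. The window lengths and the toy series are illustrative assumptions; the thesis's actual model is a Bidirectional LSTM trained on market and weather features.

```python
def make_windows(series, history=24, horizon=4):
    """Frame a forecasting task as supervised learning: each input is
    `history` past hourly values and each target is the value
    `horizon` hours ahead, matching the up-to-four-hours-ahead
    setting described in the abstract."""
    inputs, targets = [], []
    for t in range(len(series) - history - horizon + 1):
        inputs.append(series[t : t + history])
        targets.append(series[t + history + horizon - 1])
    return inputs, targets

# Toy hourly series standing in for regulation volumes (illustrative).
series = list(range(30))
X, y = make_windows(series)
print(len(X), y[0])  # 3 windows; first target is the value at hour 27
```

Pairs like `(X[i], y[i])` are what a sequence model such as a Bidirectional LSTM would be trained on; additional market and weather features would be appended to each input window.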
- Nordmoen, Jørgen; Glette, Kyrre & Ellefsen, Kai Olav (2021). Enhancing MAP-Elites to overcome challenges in Evolutionary Robotics. University of Oslo. ISSN 1501-7710.
- Thoresen, Sindre & Ellefsen, Kai Olav (2021). Solving Long Term Planning Problems with Direct Future Prediction. Universitetet i Oslo.
- Ellefsen, Kai Olav & Bjørsvik, Vegard (2021). Solving Sparse Reward Environments Using Go-Explore with Learned Cell Representation. Universitetet i Oslo.
- Sørensen, Scott Andreas Fiskerstrand & Ellefsen, Kai Olav (2020). Comparing Model-Free and Model-Based Reinforcement Learning for Collision Avoidance. Universitetet i Oslo.
- Gorton, Patrick & Ellefsen, Kai Olav (2020). Backpropagating to the Future: Evaluating Predictive Deep Learning Models. Universitetet i Oslo.
- Tørresen, Jim; Teigen, Bjørn Ivar & Ellefsen, Kai Olav (2018). An Active Learning Perspective on Exploration in Reinforcement Learning. Universitetet i Oslo.