Wallace, Benedikte; Nymoen, Kristian; Tørresen, Jim & Martin, Charles Patrick
(2024).
Breaking from realism: exploring the potential of glitch in AI-generated dance.
Digital Creativity.
ISSN 1462-6268.
doi: 10.1080/14626268.2024.2327006.
Erdem, Cagri; Wallace, Benedikte; Glette, Kyrre & Jensenius, Alexander Refsum
(2023).
Tool or Actor? Expert Improvisers' Evaluation of a Musical AI “Toddler”.
Computer Music Journal.
ISSN 0148-9267.
doi: 10.1162/comj_a_00657.
Ruud, Markus Toverud; Sandberg, Tale Hisdal; Tranvaag, Ulrik Johan Vedde; Wallace, Benedikte; Karbasi, Seyed Mojtaba & Tørresen, Jim
(2022).
Reinforcement Learning Based Dance Movement Generation.
In Carlson, Kristin (Ed.),
MOCO '22: Proceedings of the 8th International Conference on Movement and Computing.
Association for Computing Machinery (ACM).
ISBN 978-1-4503-8716-3.
doi: 10.1145/3537972.3538007.
Generating genuinely creative and novel artifacts with machine learning is still a challenge in computational science. A creative machine learning agent can be beneficial for applications where novel solutions are desired, and may also optimize search. Reinforcement Learning's (RL) interactive properties make it an effective tool for investigating these possibilities in creative contexts. This paper shows how an RL-based technique, in combination with Principal Component Analysis (PCA), can be used to generate varied movements based on a goal-picking policy. The proposed model is trained on a dataset of motion-capture recordings of dance improvisation. Our study shows that the trained RL agent can learn to pick sequences of dance poses that are coherent, exhibit compound movement, and can resemble dance.
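A minimal sketch of the idea the abstract describes: mocap poses are projected into a low-dimensional PCA space, and a simple epsilon-greedy agent picks the next pose so as to balance coherence (small jumps) against novelty. The reward shape, candidate sampling, and all names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: PCA-reduced pose space + epsilon-greedy goal picking.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
poses = rng.normal(size=(500, 63))             # stand-in for mocap frames (21 joints x 3D)
z = PCA(n_components=8).fit_transform(poses)   # low-dimensional pose space

def reward(prev, cur, visited):
    coherence = -np.linalg.norm(cur - prev)                   # penalise large jumps
    novelty = min(np.linalg.norm(cur - v) for v in visited)   # reward unseen regions
    return coherence + 0.5 * novelty

def generate(steps=32, eps=0.2):
    idx = int(rng.integers(len(z)))
    seq, visited = [idx], [z[idx]]
    for _ in range(steps - 1):
        candidates = rng.integers(len(z), size=16)  # sample candidate goal poses
        if rng.random() < eps:                      # explore
            best = rng.choice(candidates)
        else:                                       # exploit: greedy on reward
            best = max(candidates, key=lambda c: reward(z[seq[-1]], z[c], visited))
        seq.append(int(best))
        visited.append(z[best])
    return seq

print(generate()[:10])   # indices of the first ten selected poses
```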
Bentsen, Lars Ødegaard; Simionato, Riccardo; Wallace, Benedikte & Krzyzaniak, Michael Joseph
(2022).
Transformer and LSTM Models for Automatic Counterpoint Generation using Raw Audio.
Proceedings of the SMC Conferences.
ISSN 2518-3672.
doi: 10.5281/zenodo.6572847.
A study investigating Transformer and LSTM models applied to raw audio for automatic generation of counterpoint was conducted. In particular, the models learned to generate missing voices from an input melody, using a collection of raw audio waveforms of various pieces of Bach's work, played on different instruments. The research demonstrated the efficacy and behaviour of the two deep learning (DL) architectures when applied to raw audio data, which is typically characterised by much longer sequences than symbolic music representations such as MIDI. To date, the LSTM has been the quintessential DL model for sequence-based tasks such as generative audio models, but this study shows that the Transformer can achieve competitive results on a fairly complex raw audio task. The research therefore aims to spark further investigation into how Transformer models can be used for applications typically dominated by recurrent neural networks (RNNs). In general, both models yielded excellent results and generated sequences with temporal patterns similar to the input targets, both for songs that were not present in the training data and for a sample taken from a completely different dataset.
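To make the comparison concrete, here is a hypothetical PyTorch sketch of the same kind of setup: raw audio cut into frames, with an LSTM and a causally masked Transformer encoder trained on the same next-frame prediction objective. The frame size, model widths, and toy sine-wave input are assumptions for illustration only.

```python
# Hypothetical sketch: LSTM vs. Transformer on framed raw audio.
import torch
import torch.nn as nn

frame = 64                                           # samples per frame
audio = torch.sin(torch.linspace(0, 200, 64 * 65))   # toy waveform
x = audio.unfold(0, frame, frame).unsqueeze(0)       # (1, seq_len, frame)
src, tgt = x[:, :-1], x[:, 1:]                       # predict the next frame

lstm = nn.LSTM(input_size=frame, hidden_size=128, batch_first=True)
lstm_head = nn.Linear(128, frame)

enc_layer = nn.TransformerEncoderLayer(d_model=frame, nhead=4, batch_first=True)
transformer = nn.TransformerEncoder(enc_layer, num_layers=2)

loss_fn = nn.MSELoss()

# One forward pass per model on the shared objective.
h, _ = lstm(src)
print("LSTM loss:", loss_fn(lstm_head(h), tgt).item())

mask = nn.Transformer.generate_square_subsequent_mask(src.size(1))  # causal mask
print("Transformer loss:", loss_fn(transformer(src, mask=mask), tgt).item())
```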
Wallace, Benedikte; Martin, Charles Patrick; Tørresen, Jim & Nymoen, Kristian
(2021).
Exploring the Effect of Sampling Strategy on Movement Generation with Generative Neural Networks.
In EvoMUSART 2021: Artificial Intelligence in Music, Sound, Art and Design.
Springer Nature.
ISBN 978-3-030-72913-4. pp. 344–359.
doi: 10.1007/978-3-030-72914-1_23.
In this work, we present a method for generating sound-tracings using a mixture density recurrent neural network (MDRNN). A sound-tracing is a rendering of perceptual qualities of short sound objects through body motion. The model is trained on a dataset of single-point sound-tracings with multimodal input data and learns to generate novel tracings. We use a second neural network classifier to show that the input sound can be identified from generated tracings. This is part of an ongoing research effort to examine the complex correlations between sound and movement and the possibility of modelling these relationships using deep learning.
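A minimal, hypothetical mixture density RNN in the spirit of the model described above: an LSTM maps audio features to the parameters of a Gaussian mixture over the next 3D tracing position, which is then sampled. The feature dimensions, mixture count, and function names are illustrative assumptions.

```python
# Hypothetical sketch: MDRNN head sampling 3D tracing points from audio features.
import torch
import torch.nn as nn

n_mix, out_dim, feat_dim = 5, 3, 20
lstm = nn.LSTM(feat_dim, 64, batch_first=True)
head = nn.Linear(64, n_mix * (1 + 2 * out_dim))  # weights, means, log-stds per mixture

def sample_tracing(audio_feats):
    """audio_feats: (1, T, feat_dim) -> sampled positions (T, out_dim)."""
    h, _ = lstm(audio_feats)
    params = head(h)[0]                            # (T, n_mix * (1 + 2*out_dim))
    logits, mu, log_sigma = params.split(
        [n_mix, n_mix * out_dim, n_mix * out_dim], dim=-1)
    mu = mu.view(-1, n_mix, out_dim)
    sigma = log_sigma.view(-1, n_mix, out_dim).exp()
    k = torch.distributions.Categorical(logits=logits).sample()  # mixture choice
    idx = torch.arange(mu.size(0))
    return torch.normal(mu[idx, k], sigma[idx, k])               # one 3D point per step

feats = torch.randn(1, 100, feat_dim)              # stand-in audio features
print(sample_tracing(feats).shape)                 # torch.Size([100, 3])
```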
Noori, Farzan Majeed; Wallace, Benedikte; Uddin, Md Zia & Tørresen, Jim
(2019).
A Robust Human Activity Recognition Approach Using OpenPose, Motion Features, and Deep Recurrent Neural Network.
Lecture Notes in Computer Science (LNCS).
ISSN 0302-9743.
Vol. 11482, pp. 299–310.
doi: 10.1007/978-3-030-20205-7_25.
Wallace, Benedikte
(2021).
Exploring the Effect of Sampling Strategy on Movement Generation with Generative Neural Networks.
Wallace, Benedikte
(2020).
Deep Learning with Multi-Dimensional Time-Series Data: Examining the Effect of Music on Movement.
Wallace, Benedikte; Nymoen, Kristian; Martin, Charles Patrick & Tørresen, Jim
(2020).
Towards Movement Generation with Audio Features.
Krzyzaniak, Michael Joseph; Kwak, Dongho Daniel; Veenstra, Frank; Erdem, Cagri; Wallace, Benedikte & Jensenius, Alexander Refsum
(2020).
Dr. Squiggles: rhythmical robots.
Dr. Squiggles is an interactive musical robot that we designed, which plays rhythms by tapping. It listens for tapping produced by humans or other musical robots and attempts to play along, improvising its own rhythms based on what it hears.
Making art or moving to music is something we humans do quite instinctively; can we model these creative processes with artificial intelligence? And what does it mean to train neural networks on music? In this episode of #LØRN, Silvija talks with Benedikte Wallace, PhD candidate at RITMO, the centre for interdisciplinary research on rhythm, time, and motion at UiO, about how they combine artificial intelligence, robotics, and sensor technology with dance and music, because it is this kind of combination that yields new insights.
Wallace, Benedikte; Nymoen, Kristian & Martin, Charles Patrick
(2019).
Tracing from Sound to Movement with Mixture Density Recurrent Neural Networks.
Wallace, Benedikte
(2018).
Harmony Prediction with Deep Recurrent Neural Nets.
Wallace, Benedikte
(2023).
AI-generated Dance and The Subjectivity Challenge.
Universitetet i Oslo.
ISSN 1501-7710.
Wallace, Benedikte & Martin, Charles Patrick
(2018).
Predictive songwriting with concatenative accompaniment.
Universitetet i Oslo.