Analyzing Approaches to the Problem of AI Safety

The world and the Internet are increasingly populated by artificial autonomous agents carrying out tasks on our behalf. Many of these agents are provided with an objective and learn their behavior by trying to achieve that objective as well as possible. This approach has allowed the development of very efficient agents in many environments. However, it cannot guarantee that an agent, while learning its behavior, will not take actions with unforeseen and undesirable effects. Research in AI safety tries to design autonomous agents that behave in a predictable and safe way.
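
To make the learning paradigm concrete, the sketch below shows how a tabular Q-learning agent improves its behavior from the reward signal alone; it is only a minimal illustration, assuming a small discrete environment that exposes reset(), step() and a list of actions (an interface chosen here for illustration, not prescribed by the project). Note that nothing outside the scalar reward, such as a side effect, ever enters the update, which is exactly why the learned behavior carries no safety guarantee.

    import random
    from collections import defaultdict

    def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        # env is assumed (hypothetically) to expose reset() -> state,
        # step(action) -> (next_state, reward, done) and a list env.actions.
        Q = defaultdict(float)  # Q[(state, action)] -> estimated return
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                # epsilon-greedy: mostly exploit current estimates, sometimes explore
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: Q[(state, a)])
                next_state, reward, done = env.step(action)
                # move the estimate towards reward plus the discounted best next value;
                # only the scalar reward shapes the learned behavior
                best_next = max(Q[(next_state, a)] for a in env.actions)
                Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
                state = next_state
        return Q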
    
The aim of this project is to study, implement and evaluate solutions to problems in the AI safety domain. This requires developing a solid understanding of the reinforcement learning paradigm and its limitations; exploring the current state of the art in the field; implementing solutions based on the OpenAI Gym or GridWorlds framework; and comparing the results with those reported by the community.
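
As a non-authoritative starting point, the loop below uses the classic OpenAI Gym API (gym.make, env.reset, and env.step returning a 4-tuple); the environment id "CartPole-v1" and the random policy are placeholder examples, not the safety environments or agents the project would actually target.

    import gym

    env = gym.make("CartPole-v1")
    for episode in range(5):
        obs = env.reset()
        done, total_reward = False, 0.0
        while not done:
            action = env.action_space.sample()          # random policy as a trivial baseline
            obs, reward, done, info = env.step(action)  # classic 4-tuple return value
            total_reward += reward
        print(f"episode {episode}: return = {total_reward}")
    env.close()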


Supervisor(s)

Scope (credits)

60