Modelling Human Behaviour in Computer Science

This project involves a case study and analysis of human users of a modern information system, such as the IoT. The topic sits at the border of computer science, psychology, cognitive science, and AI.

Ask one of the supervisors for a discussion, more information, or variations of the project. See also the general concerns.

The rapidly increasing pervasiveness and integration of computers in human society calls for a broad discipline under which this development can be studied. To design and use technology, one needs to develop and use models of humans and machines in all their aspects: cognitive and memory models, but also social influence and (artificial) emotions.

Our ethical compass should guide us to build intelligent machines that have desirable traits, whatever those might be. To achieve this goal, it is essential that we understand how humans actually behave in interaction with intelligent machines, and this is a largely unexplored field. For example, what are the criteria for trusting an intelligent machine whose intelligent behaviour is a priori unknown? Also, how can an intelligent machine trust the humans with whom it interacts? Finally, how can intelligent machines trust each other?

From a security point of view, the most serious vulnerabilities are no longer found in the systems but in the humans who operate them. In a sense, it is no longer a question of whether people can trust their systems, but whether systems can trust their human masters.
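The question of quantifying trust between humans and machines can be made concrete with a computational trust model. Below is a minimal sketch, assuming a beta-reputation-style model (a standard approach in the computational trust literature, not something prescribed by this project): trust in an agent is estimated from counts of positive and negative interaction outcomes. The class and parameter names are illustrative.

```python
class BetaTrust:
    """Trust score derived from observed positive/negative interaction outcomes."""

    def __init__(self):
        self.positive = 0  # cooperative / correct interactions observed
        self.negative = 0  # harmful / incorrect interactions observed

    def observe(self, outcome_good: bool) -> None:
        """Record the outcome of one interaction with the other agent."""
        if outcome_good:
            self.positive += 1
        else:
            self.negative += 1

    def score(self) -> float:
        # Expected value of a Beta(positive + 1, negative + 1) distribution:
        # 0.5 with no evidence, moving towards 1.0 or 0.0 as evidence accumulates.
        return (self.positive + 1) / (self.positive + self.negative + 2)


# Example: three good interactions and one bad one.
t = BetaTrust()
for good in [True, True, True, False]:
    t.observe(good)
print(round(t.score(), 2))  # → 0.67
```

The same update rule applies symmetrically, whichever side is doing the trusting: a machine could maintain such a score for each human operator, just as a human (or another machine) could maintain one for the machine.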

A recent relevant paper [28] originates from the group of Ann Blandford.

See the Technical Report for an easy-to-read general introduction to this topic.

Tags: human behaviour, psychology, ceremonies, security, modelling
Published Sep. 20, 2017 6:43 PM - Last modified Sep. 20, 2017 6:43 PM

Scope (credits)