Keynote Lecture - Perceptual Interfaces

Professor David Hogg, University of Leeds

Abstract

It is widely believed that computers will be easier to use if we can communicate with them in ways that are more similar to our interactions with other people. Achieving this will require advances in animation (e.g. facial modelling), perceptual technologies (e.g. computer vision), and cognitive aspects of interaction. A major research challenge is to find ways of acquiring and encoding the wide spatial, temporal and procedural variations between and within different types of interaction.

We describe recent work on visual gesture recognition and interaction modelling in which the range of possible things that can happen is learnt automatically through passive observation of video sequences depicting typical gestures and interactions. The basis for the approach is the construction of probabilistic spatio-temporal models from training data extracted from the video sequences.
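
The abstract does not specify the form of these models; as a rough illustration only, the sketch below (all names and data are hypothetical) fits a simple probabilistic spatio-temporal model, here a Gaussian mixture over (x, y, time) feature vectors taken from tracked gesture trajectories, and scores a new trajectory by its likelihood under the learnt model. It stands in for, and is much simpler than, the models described in the lecture.

    # Illustrative sketch (hypothetical data and names): learning a probabilistic
    # spatio-temporal model of gestures from tracked trajectories, in the spirit
    # of the approach outlined above. Assumes trajectories have already been
    # extracted from the video sequences.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def trajectory_features(trajectory):
        """Turn a tracked (x, y) trajectory into spatio-temporal feature
        vectors: position plus normalised time within the gesture."""
        trajectory = np.asarray(trajectory, dtype=float)
        t = np.linspace(0.0, 1.0, len(trajectory)).reshape(-1, 1)
        return np.hstack([trajectory, t])          # columns: x, y, t

    # Training data: many example gestures observed passively from video
    # (placeholder values for illustration only).
    training_trajectories = [
        [(10, 20), (12, 25), (15, 31), (19, 38)],
        [(11, 19), (13, 26), (16, 30), (20, 37)],
    ]
    X = np.vstack([trajectory_features(tr) for tr in training_trajectories])

    # Fit the probabilistic spatio-temporal model: a Gaussian mixture over
    # (x, y, t) feature vectors stands in for the learnt model here.
    model = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    model.fit(X)

    # A new observation is scored by its average log-likelihood under the model;
    # low scores flag gestures unlike anything seen during training.
    new_trajectory = [(10, 21), (13, 27), (17, 33), (21, 40)]
    score = model.score(trajectory_features(new_trajectory))
    print(f"average log-likelihood of new gesture: {score:.2f}")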