Quan, Wei ORCID: 0000-0003-2099-9520 (2009) 3-D facial expression representation using statistical shape models. Doctoral thesis, University of Central Lancashire.
Abstract
Facial expressions are visible signs of a person's affective state, cognitive activity and personality. Automatic recognition of facial expressions is an important component of a wide spectrum of applications, including human-computer interfaces, video conferencing, augmented reality and human activity monitoring, to name a few. Facial expression representation is an essential part of automatic facial expression recognition. It is concerned with finding distinguishable features that can be used to represent different facial expressions regardless of age, ethnicity or gender. This thesis reports on research and development in facial expression representation. The author proposes two novel methods for representing facial expressions: one based on the shape space vector (SSV) of the statistical shape model (SSM), the other based on the SSV of the B-spline statistical shape model (BSSM).
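In general terms, an SSM of this kind is built by principal component analysis of corresponded surface points, and the SSV is the vector of mode coefficients for a given shape. The sketch below is illustrative only; the function names and array layout are assumptions, not the thesis's implementation:

```python
import numpy as np

def build_ssm(shapes):
    """Build a statistical shape model from aligned training shapes.

    shapes: (n_samples, 3 * n_points) array, one flattened 3-D shape per
    row, with point correspondences already established across rows.
    Returns the mean shape, the principal modes (as columns) and the
    per-mode variances.
    """
    mean = shapes.mean(axis=0)
    centred = shapes - mean
    # PCA via the SVD of the centred data matrix
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    variances = (s ** 2) / (shapes.shape[0] - 1)
    return mean, vt.T, variances

def shape_space_vector(shape, mean, modes, n_modes):
    """Project a flattened shape onto the first n_modes modes (the SSV)."""
    return modes[:, :n_modes].T @ (shape - mean)
```

Because the modes are orthonormal, a training shape is recovered exactly from its SSV when enough modes are retained, which is what makes the SSV a compact shape descriptor.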
The first proposed method uses the SSV of the SSM as a discriminative feature for representing facial expressions embedded in 3-D facial surfaces. To obtain the SSV, a novel model-based surface registration method is proposed that iteratively deforms and matches the model to an unseen facial surface. The method comprises two major stages: model building and model fitting. In the model building stage, an SSM is built from a training data set with estimated correspondences. In the model fitting stage, the built model is adapted to represent the shape of a new facial surface that was not included in the training data set. To build the model, thin-plate spline warping is used to align all of the facial surfaces in the training data set to a common reference facial surface, so that dense point correspondences between these surfaces can be computed. To fit the model to the new facial surface, a modified iterative closest point (ICP) algorithm and least-squares projection onto the estimated shape space, constructed from the training data set, are applied.
The second proposed method uses the SSV of the BSSM for facial expression representation. The model is built using B-spline control points instead of the surface points used in the SSM-based method. To obtain the B-spline control points, a novel B-spline surface fitting method is proposed.
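Least-squares B-spline fitting determines the control points that best approximate the sampled data, so a dense surface is summarised by far fewer coefficients. A minimal 1-D analogue using SciPy's standard `make_lsq_spline` (not the thesis's novel surface-fitting method, which operates on surfaces) illustrates the idea:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Noisy samples of a smooth profile
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(0).normal(size=x.size)

# Cubic spline with 8 interior knots; boundary knots repeated k+1 times
k = 3
t = np.r_[[0] * (k + 1), np.linspace(0, 1, 10)[1:-1], [1] * (k + 1)]
spline = make_lsq_spline(x, y, t, k=k)

# The spline is defined by len(t) - k - 1 = 12 control-point
# coefficients, a compact stand-in for the 200 samples.
print(spline.c.shape)
```

For surfaces, the same least-squares principle applies to a tensor-product grid of control points, and those control points replace the raw surface points when building the BSSM.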
The robustness and efficiency of both model-based facial expression representation methods are improved by introducing a multi-resolution scheme in the model fitting stage. Experimental results on simulated and real 3-D facial surfaces show that the proposed methods effectively provide distinguishable features for facial expression analysis and recognition.
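One common form of such a multi-resolution scheme is coarse-to-fine fitting over the shape modes: a few coarse modes are fitted first and finer modes are released progressively, reusing the coarse estimate. The sketch below is a generic illustration of that idea, not the specific scheme developed in the thesis:

```python
import numpy as np
from scipy.spatial import cKDTree

def coarse_to_fine_fit(target, mean, modes, levels=(2, 4, 8), iters=5):
    """Coarse-to-fine SSM fitting (generic sketch, assumed conventions).

    target: (m, 3) surface points; mean: (3n,); modes: (3n, k).
    Each level fits only the first n_modes coefficients, so early
    levels are cheap and stable, and later levels add finer detail
    starting from the coarse solution.
    """
    tree = cKDTree(target)
    b = np.zeros(modes.shape[1])
    for n_modes in levels:
        P = modes[:, :n_modes]
        for _ in range(iters):
            shape = (mean + P @ b[:n_modes]).reshape(-1, 3)
            _, idx = tree.query(shape)        # closest-point matching
            b[:n_modes] = P.T @ (target[idx].ravel() - mean)
    return mean + modes @ b
```

The efficiency gain comes from solving small, well-conditioned projections at the coarse levels before the full mode set is engaged.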