Facial Feature Extraction by Manifold Learning of Appearance Models
Wei-Ren Chen (陳維荏)
July 2008


  Extracting accurate positions of the eyes, nose, and mouth is a crucial step for face recognition and facial expression recognition. Classical methods such as the Active Appearance Model (AAM) use principal component analysis (PCA) to reduce the dimensionality of appearance data, and an iterative search to find facial features by minimizing an error criterion on the reduced appearance data. In this paper, we propose a facial feature extraction approach based on manifold learning. The manifold learning method, locality preserving projection (LPP), projects appearance data into a low-dimensional space by considering neighborhood relations rather than global variance. LPP can preserve the local structure of the appearance data and retain most of its important characteristics. During the search phase, the AdaBoost face detection algorithm is utilized to locate the face region, which narrows the search. The experimental data comprise 870 images from the AR face database, which includes variations in illumination and expression, and 200 images from the CMU PIE face database, which includes different poses. Experimental results show that the proposed method performs better than the AAM method.
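The LPP step mentioned above can be illustrated with a minimal sketch of a common formulation of locality preserving projections: build a k-nearest-neighbor adjacency graph with heat-kernel weights, form the graph Laplacian, and solve a generalized eigenvalue problem. The function name, parameter defaults, and regularization term are illustrative choices, not details taken from the thesis.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=10, k=5, t=1.0):
    """Sketch of Locality Preserving Projections.

    X : (n_samples, n_features) data matrix.
    Returns a (n_features, n_components) projection matrix A;
    low-dimensional embeddings are then X @ A.
    """
    n = X.shape[0]
    # 1. Adjacency graph from k nearest neighbors, symmetrized.
    D2 = cdist(X, X, 'sqeuclidean')
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D2[i])[1:k + 1]        # skip the point itself
        W[i, nbrs] = np.exp(-D2[i, nbrs] / t)    # heat-kernel weights
    W = np.maximum(W, W.T)
    # 2. Graph Laplacian L = D - W.
    Dg = np.diag(W.sum(axis=1))
    L = Dg - W
    # 3. Generalized eigenproblem  X^T L X a = lambda X^T D X a;
    #    keep the eigenvectors with the smallest eigenvalues.
    M1 = X.T @ L @ X
    M2 = X.T @ Dg @ X + 1e-6 * np.eye(X.shape[1])  # small regularizer
    vals, vecs = eigh(M1, M2)
    return vecs[:, :n_components]
```

Because the mapping is a single matrix, a new image's appearance vector can be projected directly, without re-running the training procedure.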



   We apply a manifold learning approach, which considers the local structure of the data, to reduce the dimensionality of face appearance data. The manifold learning approach, Locality Preserving Projections (LPP), constructs an adjacency graph from the k nearest neighbors to retain local properties. Moreover, LPP is a linear algorithm: it generates a linear mapping between the high-dimensional and low-dimensional data, so a new high-dimensional data point can be mapped directly into the reduced space. Once a set of training data is collected, LPP is applied to reduce the dimensionality of the shape and texture data, respectively, and weights between the shape and texture LPP parameters are computed to combine the two vectors. PCA is then applied to reduce the dimensionality of the combined vectors, and a linear regression matrix is learned to reconstruct shape and texture during search. After detecting the face with the AdaBoost algorithm, an iterative search that minimizes the appearance error is performed to extract the facial features.
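The training steps above (shape and texture LPP parameters, weighted combination, PCA, and a regression matrix for reconstruction) can be sketched as follows. The helper name, the variance-matching weighting scheme, and the least-squares regression target are illustrative assumptions in the style of AAM-like combined models, not the thesis's exact formulation.

```python
import numpy as np

def build_combined_model(S, T, A_s, A_t, n_pca=12):
    """Sketch of building a combined shape/texture appearance model.

    S : (n, ds) shape vectors; T : (n, dt) texture vectors (assumed aligned).
    A_s, A_t : LPP projection matrices for shape and texture.
    """
    Bs = S @ A_s                        # shape LPP parameters
    Bt = T @ A_t                        # texture LPP parameters
    # Weight the shape parameters so their total variance matches the
    # texture parameters' (one common AAM-style choice), then concatenate.
    w = np.sqrt(Bt.var(axis=0).sum() / Bs.var(axis=0).sum())
    C = np.hstack([w * Bs, Bt])
    # PCA on the combined vectors via SVD of the centered data.
    mu = C.mean(axis=0)
    U, sv, Vt = np.linalg.svd(C - mu, full_matrices=False)
    Q = Vt[:n_pca].T                    # combined PCA basis
    P = (C - mu) @ Q                    # per-sample appearance parameters
    # Linear regression from appearance parameters (plus a bias term)
    # back to raw shape and texture, for reconstruction during search.
    X = np.hstack([P, np.ones((len(P), 1))])
    R, *_ = np.linalg.lstsq(X, np.hstack([S, T]), rcond=None)
    return w, mu, Q, R
```

During search, a candidate appearance parameter vector can be pushed through `R` to obtain a reconstructed shape and texture, whose error against the image drives the iterative update.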










   In this paper, the proposed manifold learning approach, LPP, not only reduces the dimensionality of the appearance data but also retains the local properties of its variation. The experimental results validate that the nonlinear, local variations of face images caused by expression, illumination, and pose can be handled by the proposed LPP-based method.

    The proposed approach largely handles the problems of expression, illumination, and pose. However, the search algorithm still has shortcomings, and modifying it could make the facial feature extraction more complete.