

CURATOR

PhD Student, University of Alberta

I'm currently working on deep learning and video analysis.

PINBOARD SUMMARY

Facial Expression, Body Language, Lip Motion

Our goal is to analyse video data provided by companies to help draw conclusions about the state of the business. The ultimate goal is to have a machine predict customer satisfaction, evaluate employees' interactions with customers, and finally produce a summary. To achieve this, we have to analyse the video data along several factors: facial expressions, body language, lip motion, and audio.
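A minimal late-fusion sketch of that idea in Python: each modality is scored separately per video segment, and the scores are combined with fixed weights into a single satisfaction estimate. The modality scorers, weights, and field names here are illustrative assumptions, not the curator's actual models.

from dataclasses import dataclass

@dataclass
class SegmentScores:
    # Per-segment scores in [0, 1] from hypothetical per-modality models.
    expression: float      # facial-expression positivity
    body_language: float   # body-language openness
    lip_motion: float      # lip-motion / speech engagement
    audio: float           # audio tone positivity

def fuse(scores: SegmentScores, weights=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Weighted late fusion of per-modality scores into one [0, 1] estimate."""
    w_expr, w_body, w_lip, w_audio = weights
    return (w_expr * scores.expression
            + w_body * scores.body_language
            + w_lip * scores.lip_motion
            + w_audio * scores.audio)

segment = SegmentScores(expression=0.8, body_language=0.6, lip_motion=0.5, audio=0.7)
print(f"fused satisfaction estimate: {fuse(segment):.2f}")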

7 ITEMS PINNED

Intelligent facial emotion recognition using moth-firefly optimization

Abstract: In this research, we propose a facial expression recognition system with a variant of evolutionary firefly algorithm for feature optimization. First of all, a modified Local Binary Pattern descriptor is proposed to produce an initial discriminative face representation. A variant of the firefly algorithm is proposed to perform feature optimization. The proposed evolutionary firefly algorithm exploits the spiral search behaviour of moths and attractiveness search actions of fireflies to mitigate premature convergence of the Levy-flight firefly algorithm (LFA) and the moth-flame optimization (MFO) algorithm. Specifically, it employs the logarithmic spiral search capability of the moths to increase local exploitation of the fireflies, whereas in comparison with the flames in MFO, the fireflies not only represent the best solutions identified by the moths but also act as the search agents guided by the attractiveness function to increase global exploration. Simulated Annealing embedded with Levy flights is also used to increase exploitation of the most promising solution. Diverse single and ensemble classifiers are implemented for the recognition of seven expressions. Evaluated with frontal-view images extracted from CK+, JAFFE, and MMI, and 45-degree multi-view and 90-degree side-view images from BU-3DFE and MMI, respectively, our system achieves a superior performance, and outperforms other state-of-the-art feature optimization methods and related facial expression recognition models by a significant margin.

Pub.: 22 Aug '16, Pinned: 28 Jun '17
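A toy sketch of the hybrid search idea described in the abstract above: firefly-style attractiveness moves provide global exploration, while a moth-flame-style logarithmic spiral step exploits locally around the current best solution. The objective function, parameter values, and problem size are placeholder assumptions; this is not the paper's actual algorithm, which additionally embeds Simulated Annealing with Levy flights and operates on LBP-based facial features.

import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Placeholder objective: reward feature weights close to a hidden target vector.
    target = np.linspace(0.0, 1.0, x.shape[-1])
    return -np.sum((x - target) ** 2, axis=-1)

def spiral_move(x, best, b=1.0):
    # Moth-flame-style logarithmic spiral step toward the current best solution.
    t = rng.uniform(-1.0, 1.0, size=x.shape)
    d = np.abs(best - x)
    return d * np.exp(b * t) * np.cos(2.0 * np.pi * t) + best

def firefly_step(pop, fit, beta0=1.0, gamma=1.0, alpha=0.1):
    # Move each firefly toward every brighter one; attractiveness decays with distance.
    new_pop = pop.copy()
    for i in range(len(pop)):
        for j in range(len(pop)):
            if fit[j] > fit[i]:
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                new_pop[i] += beta * (pop[j] - pop[i]) + alpha * rng.normal(size=pop.shape[1])
    return np.clip(new_pop, 0.0, 1.0)

dim, n = 8, 12
pop = rng.uniform(0.0, 1.0, size=(n, dim))
for _ in range(50):
    fit = fitness(pop)
    best = pop[np.argmax(fit)].copy()
    pop = firefly_step(pop, fit)                      # global exploration
    pop = np.clip(spiral_move(pop, best), 0.0, 1.0)   # local exploitation around the best
print("best fitness found:", fitness(pop).max())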

Facial expression recognition using \(l_{p}\)-norm MKL multiclass-SVM

Abstract: Automatic recognition of facial expressions is an interesting and challenging research topic in the field of pattern recognition due to applications such as human–machine interface design and developmental psychology. Designing classifiers for facial expression recognition with high reliability is a vital step in this research. This paper presents a novel framework for person-independent expression recognition by combining multiple types of facial features via multiple kernel learning (MKL) in multiclass support vector machines (SVM). Existing MKL-based approaches jointly learn the same kernel weights with \(l_{1}\)-norm constraint for all binary classifiers, whereas our framework learns one kernel weight vector per binary classifier in the multiclass-SVM with \(l_{p}\)-norm constraints \((p \ge 1)\), which considers both sparse and non-sparse kernel combinations within MKL. We studied the effect of \(l_{p}\)-norm MKL algorithm for learning the kernel weights and empirically evaluated the recognition results of six basic facial expressions and neutral faces with respect to the value of “\(p\)”. In our experiments, we combined two popular facial feature representations, histogram of oriented gradient and local binary pattern histogram, with two kernel functions, the heavy-tailed radial basis function and the polynomial function. Our experimental results on the CK+, MMI and GEMEP-FERA face databases as well as our theoretical justification show that this framework outperforms the state-of-the-art methods and the SimpleMKL-based multiclass-SVM for facial expression recognition.

Pub.: 16 Apr '15, Pinned: 28 Jun '17
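The multiple-kernel setup described above can be sketched with off-the-shelf tools: build one kernel per feature type, combine them, and train a multiclass SVM on the precomputed combined kernel. In this sketch the kernel weights are fixed by hand, standing in for the per-binary-classifier l_p-norm weight learning the paper proposes, and the HOG / LBP descriptors are replaced by random placeholder features.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d_hog, d_lbp = 120, 64, 59
X_hog = rng.normal(size=(n, d_hog))   # placeholder for HOG features
X_lbp = rng.normal(size=(n, d_lbp))   # placeholder for LBP histograms
y = rng.integers(0, 7, size=n)        # seven expression classes

# One base kernel per feature representation.
K_hog = rbf_kernel(X_hog, gamma=1.0 / d_hog)
K_lbp = polynomial_kernel(X_lbp, degree=2)

# Fixed convex combination; the paper instead learns these weights per
# binary classifier under an l_p-norm constraint.
w = np.array([0.6, 0.4])
K = w[0] * K_hog + w[1] * K_lbp

clf = SVC(kernel="precomputed")
clf.fit(K, y)
print("training accuracy:", clf.score(K, y))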