


FACTS & FIGURES

JCR Impact Factor: 1.221
JCR 5-Year IF: 0.961
SCOPUS CiteScore: 2.5
Issues per year: 4
Current issue: Aug 2021
Next issue: Nov 2021
Avg review time: 89 days


PUBLISHER

Stefan cel Mare
University of Suceava
Faculty of Electrical Engineering and
Computer Science
13, Universitatii Street
Suceava - 720229
ROMANIA

Print ISSN: 1582-7445
Online ISSN: 1844-7600
WorldCat: 643243560
doi: 10.4316/AECE


TRAFFIC STATS

1,726,422 unique visits
569,576 downloads
Since November 1, 2009









MOST RECENT ISSUES

 Volume 21 (2021)
 
     »   Issue 3 / 2021
 
     »   Issue 2 / 2021
 
     »   Issue 1 / 2021
 
 
 Volume 20 (2020)
 
     »   Issue 4 / 2020
 
     »   Issue 3 / 2020
 
     »   Issue 2 / 2020
 
     »   Issue 1 / 2020
 
 
 Volume 19 (2019)
 
     »   Issue 4 / 2019
 
     »   Issue 3 / 2019
 
     »   Issue 2 / 2019
 
     »   Issue 1 / 2019
 
 
 Volume 18 (2018)
 
     »   Issue 4 / 2018
 
     »   Issue 3 / 2018
 
     »   Issue 2 / 2018
 
     »   Issue 1 / 2018
 
 
 Volume 17 (2017)
 
     »   Issue 4 / 2017
 
     »   Issue 3 / 2017
 
     »   Issue 2 / 2017
 
     »   Issue 1 / 2017
 
 
  View all issues  




SAMPLE ARTICLES

Method for Efficiency Increasing of Distributed Classification of the Images based on the Proactive Parallel Computing Approach, MUKHIN, V., VOLOKYTA, A., HERIATOVYCH, Y., REHIDA, P.
Issue 2/2018


Parameter Improved Particle Swarm Optimization Based Direct-Current Vector Control Strategy for Solar PV System, NAMMALVAR, P., RAMKUMAR, S.
Issue 1/2018


Robust 2-bit Quantization of Weights in Neural Network Modeled by Laplacian Distribution, PERIC, Z., DENIC, B., DINCIC, M., NIKOLIC, J.
Issue 3/2021


Simple Framework for Efficient Development of the Functional Requirement Verification-specific Language, POPIC, S., TESLIC, N., BJELICA, M. Z.
Issue 3/2021


A Phasor Estimation Algorithm based on Hilbert Transform for P-class PMUs, RAZO-HERNANDEZ, J. R., VALTIERRA-RODRIGUEZ, M., GRANADOS-LIEBERMAN, D., TAPIA-TINOCO, G., RODRIGUEZ-RODRIGUEZ, J. R.
Issue 3/2018


Differential Evolution Implementation for Power Quality Disturbances Monitoring using OpenCL, SOLIS-MUNOZ, F. J., OSORNIO-RIOS, R. A., ROMERO-TRONCOSO, R. J., JAEN-CUELLAR, A. Y.
Issue 2/2019





LATEST NEWS

2021-Jun-30
Clarivate Analytics published the InCites Journal Citations Report for 2020. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 1.221 (1.053 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.961.

2021-Jun-06
SCOPUS published the CiteScore for 2020, computed by using an improved methodology, counting the citations received in 2017-2020 and dividing the sum by the number of papers published in the same time frame. The CiteScore of Advances in Electrical and Computer Engineering in 2020 is 2.5, better than all our previous results.
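The CiteScore arithmetic described above reduces to a simple ratio, which the sketch below illustrates. The citation and paper counts in the example are hypothetical, not the journal's actual figures.

```python
# Illustrative sketch of the CiteScore 2020 methodology described above:
# citations received in 2017-2020 by items published in 2017-2020,
# divided by the number of items published in that window.

def cite_score(citations_in_window: int, papers_in_window: int) -> float:
    """Return citations per paper over the four-year window, to one decimal."""
    return round(citations_in_window / papers_in_window, 1)

# Hypothetical counts chosen only to reproduce a score of 2.5.
print(cite_score(625, 250))  # 2.5
```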

2021-Apr-15
Release of the v3 version of the AECE Journal website. We moved to a new server and implemented the latest cryptographic protocols to ensure better compatibility with the most recent browsers. Our website now accepts only TLS 1.2 and TLS 1.3 secure connections.
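A client can verify a TLS-1.2-or-newer policy like the one described above using Python's standard ssl module. This is a generic sketch, not part of the journal's infrastructure; the function names are illustrative.

```python
import socket
import ssl

def tls12_plus_context() -> ssl.SSLContext:
    """Client context that refuses anything older than TLS 1.2,
    mirroring the server policy described above."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to host and report the TLS version the server negotiated,
    e.g. 'TLSv1.2' or 'TLSv1.3'."""
    with socket.create_connection((host, port), timeout=10) as sock:
        with tls12_plus_context().wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```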

2020-Jun-29
Clarivate Analytics published the InCites Journal Citations Report for 2019. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 1.102 (1.023 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.734.

2020-Jun-11
Starting on the 15th of June 2020 we will introduce a new policy for reviewers. Reviewers who provide timely and substantial comments will receive a discount voucher entitling them to an APC reduction. Vouchers (worth 25 EUR or 50 EUR, depending on the review quality) will be assigned to reviewers after the final decision on the reviewed paper is given. Vouchers issued to specific individuals are not transferable.



    
 


 HIGH-IMPACT PAPER 

Towards Real-Life Facial Expression Recognition Systems

BENTA, K.-I., VAIDA, M.-F.
 

Download PDF (1,034 KB) | Downloads: 757 | Views: 5,092

Author keywords
facial expression recognition, affective computing, feature extraction, classification, database

References keywords
recognition(74), facial(70), computing(20), pattern(19), emotion(16), analysis(15), automatic(14), affective(14), image(12), vision(10)

About this article
Date of Publication: 2015-05-31
Volume 15, Issue 2, Year 2015, On page(s): 93 - 102
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2015.02012
Web of Science Accession Number: 000356808900012
SCOPUS ID: 84979726417

Abstract
Facial expressions are a set of symbols of great importance for human-to-human communication. Spontaneous in nature, diverse and personal, facial expressions demand real-time, complex, robust and adaptable facial expression recognition (FER) systems to facilitate human-computer interaction. Research efforts of recent years are preparing FER systems to step into real life. To meet the aforementioned requirements, this article surveys work in FER since 2008, particularly adopting the discrete-states emotion model, in a quest for the most valuable FER works and systems. We first present the new spontaneous facial expression databases and then organize the real-time FER solutions, grouped by spontaneous and posed facial expression databases. Automatic FERs are then compared and the cross-database validation method is presented. Finally, we outline the open issues FER systems must address to meet real-life challenges.


References

[1] J. F. Grafsgaard, J. B. Wiggins, K. E. Boyer, E. N. Wiebe, J. C. Lester, "Automatically recognizing facial expression: predicting engagement and frustration," Proceedings of the 6th International Conference on Educational Data Mining, 2013.
[CrossRef] [Web of Science Times Cited 30] [SCOPUS Times Cited 49]


[2] C. N. Moridis, A. A. Economides, "Affective Learning: Empathetic Agents with Emotional Facial and Tone of Voice Expressions," IEEE Transactions on Affective Computing, vol. 3, no. 3, pp. 260-272, July-Sept. 2012.
[CrossRef] [Web of Science Times Cited 34] [SCOPUS Times Cited 52]


[3] M. (E.) Hoque, M. Courgeon, J.-C. Martin, B. Mutlu, R. W. Picard, "MACH: My automated conversation coacH," Proceedings of the 2013 ACM International Joint Conference on Pervasive and ubiquitous computing. ACM, pp. 697-706, 2013.
[CrossRef] [SCOPUS Times Cited 175]


[4] S. J. Ahn, J. Bailenson, J. Fox, M. Jabon, "Using automated facial expression analysis for emotion and behavior prediction," The Routledge Handbook of Emotions and Mass Media, pp. 349, 2010. Available: http://vhil.stanford.edu/pubs/2010/ahn-hemm-facial-expression.pdf

[5] H.-J. Kim, Y. S. Choi, "EmoSens: Afective entity scoring, a novel service recommendation framework for mobile platform," Workshop on personalization in mobile application of the 5th international conference on recommender system, 2011. Available: http://pema2011.cs.ucl.ac.uk/papers/pema2011_kim.pdf

[6] A. Kolakowska, A. Landowska, M. Szwoch, W. Szwoch, M.R. Wróbel, "Emotion Recognition and Its Applications," In Human-Computer Systems Interaction: Backgrounds and Applications 3, pp. 51-62, Springer International Publishing, 2014.
[CrossRef] [Web of Science Times Cited 37] [SCOPUS Times Cited 39]


[7] K.-I. Benta, M. Cremene, V. Todica, "Towards an affective aware home," In Ambient Assistive Health and Wellness Management in the Heart of the City, pp. 74-81, Springer Berlin Heidelberg, 2009.
[CrossRef] [SCOPUS Times Cited 12]


[8] G. Castellano, H. Gunes, C. Peters, B. Schuller, "Multimodal Affect Recognition for Naturalistic Human-Computer and Human-Robot Interactions", invited chapter for Handbook of Affective Computing, R. A. Calvo, S. D'Mello, J. Gratch, A. Kappas (eds.), Oxford University Press, pp. 246-257, 2015.
[CrossRef]


[9] P. Marrero-Fernandez, A. Montoya-Padrón, A. Jaume-i-Capo, J.M. Buades Rubio, "Evaluating the Research in Automatic Emotion Recognition," IETE Technical Review, vol. 31, no. 3, 220-232, 2014.
[CrossRef] [Web of Science Times Cited 14] [SCOPUS Times Cited 13]


[10] G. Littlewort, J. Whitehill, T. Wu, I. Fasel, M. Frank, J. Movellan, M. Bartlett, "The computer expression recognition toolbox (CERT)," Automatic Face & Gesture Recognition and Workshops (FG 2011), IEEE International Conference, 2011.
[CrossRef] [SCOPUS Times Cited 339]


[11] M. Pantic, L. J. M. Rothkrantz, "Automatic analysis of facial expressions: The state of the art," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1424-1445, December 2000.
[CrossRef] [Web of Science Times Cited 997] [SCOPUS Times Cited 1281]


[12] M. Pantic, L. J. M. Rothkrantz, "Toward an affect-sensitive multimodal human-computer interaction," Proceedings of the IEEE 91.9, pp. 1370-1390, 2003.
[CrossRef] [Web of Science Times Cited 443] [SCOPUS Times Cited 572]


[13] B. Fasel, J. Luettin, "Automatic facial expression analysis: a survey," Pattern Recognition, Volume 36, Issue 1, January 2003, Pages 259-275.
[CrossRef] [Web of Science Times Cited 1020] [SCOPUS Times Cited 1341]


[14] Z. Zeng, M. Pantic, G. I. Roisman, T. S. Huang, "A survey of affect recognition methods: audio, visual, and spontaneous expressions," IEEE Trans. On Pattern Analysis and Machine Intelligence, vol. 31, no.1, pp. 39-58, 2009.
[CrossRef] [Web of Science Times Cited 1498] [SCOPUS Times Cited 1932]


[15] C. H. Wu, J. C. Lin, W.L. Wei, "Survey on audiovisual emotion recognition: databases, features, and data fusion strategies," APSIPA Transactions on Signal and Information Processing, vol. 3, e12, 2014.
[CrossRef] [SCOPUS Times Cited 85]


[16] J. Cohn, F. De La Torre, "Automated Face Analysis for Affective Computing". 2015. Handbook of Affective Computing, R. A. Calvo, S. D'Mello, J. Gratch, A. Kappas (eds.), pp. 131-150, Oxford University Press.
[CrossRef]


[17] E. Sariyanidi, H. Gunes, A. Cavallaro, "Automatic analysis of facial affect: A survey of registration, representation and recognition," IEEE Transactions on Pattern Analysis & Machine Intelligence, no. 1, pp. 1, 2014.
[CrossRef] [Web of Science Times Cited 317] [SCOPUS Times Cited 385]


[18] R. W. Picard, "Emotion research by the people, for the people," Emotion Rev., vol. 2, pp. 250-254, 2010.
[CrossRef] [Web of Science Times Cited 55] [SCOPUS Times Cited 85]


[19] D. A. G. Jauregui, J.-C. Martin, "Evaluation of Vision-based Real-Time Measures for Emotions Discrimination under Uncontrolled Conditions," Proceeding EmotiW '13 Proceedings of the 2013 on Emotion recognition in the wild challenge and workshop, pp. 17-22, 2013.
[CrossRef] [SCOPUS Times Cited 5]


[20] R. A. Calvo, S. D'Mello, "Affect detection: an interdisciplinary review of models, methods, and their applications," IEEE Trans. On Affective Computing, vol. 1, no. 1, pp. 18-37, 2010.
[CrossRef] [Web of Science Times Cited 746] [SCOPUS Times Cited 953]


[21] H. Gunes, M. Pantic, "Automatic, dimensional and continuous emotion recognition," International Journal of Synthetic Emotions, vol. 1, is. 1, pp. 68-99, 2010.
[CrossRef]


[22] H. Gunes, B. Schuller, M. Pantic, R. Cowie, "Emotion representation, analysis and synthesis in continuous space: a survey," Automatic Face & Gesture Recognition and Workshops (FG 2011), pp. 827-834, IEEE International Conference on 21-25 March, 2011.
[CrossRef] [SCOPUS Times Cited 191]


[23] K.-I. Benta, H.-I. Lisei, M. Cremene, "Towards a Unified 3D Affective Model," Doctoral Consortium Proceedings of International Conference on Affective Computing and Intelligent Interaction (ACII2007), Lisbon, Portugal, 12-14 September 2007, pp. 75-85. Available: www.di.uniba.it/intint/DC-ACII07/Benta.pdf

[24] H. Gunes, B. Schuller, "Categorical and dimensional affect analysis in continuous input: Current trends and future directions," Images and Vision Computing 31, pp. 120-136, 2013.
[CrossRef] [Web of Science Times Cited 171] [SCOPUS Times Cited 215]


[25] H. Chen, C. Huang, C. Fu, "Hybrid-boost learning for multi-pose face detection and facial expression recognition," Pattern Recognition 41, pp. 1173-1185, 2008.
[CrossRef] [Web of Science Times Cited 39] [SCOPUS Times Cited 57]


[26] X. Huang, G. Zhao, W. Zheng, M. Pietikäinen, "Towards a dynamic expression recognition system under facial occlusion," Pattern Recognition Letters, 33(16), pp. 2181-2191.
[CrossRef] [Web of Science Times Cited 28] [SCOPUS Times Cited 32]


[27] I. B. Ciocoiu, H. N. Costin. "Localized versus locality-preserving subspace projections for face recognition," Journal on Image and Video Processing, 2007(1), pp. 3-3, 2007.
[CrossRef] [Web of Science Times Cited 4] [SCOPUS Times Cited 3]


[28] O. Rudovic, M. Pantic, I. (Y.) Patras, "Coupled Gaussian Processes for pose-invariant facial expression recognition," IEEE Trans. On Pattern Analysis and Machine Intelligence, vol. 35, no. 6, 2013.
[CrossRef] [Web of Science Times Cited 102] [SCOPUS Times Cited 112]


[29] S. M. Mavadati, M. H. Mahoor, K. Bartlett, P. Trinh, J. F. Cohn, "DISFA: A Spontaneous Facial Action Intensity Database," IEEE Transactions on Affective Computing, vol. 4, no. 2, pp. 151-160, 2013.
[CrossRef] [Web of Science Times Cited 263] [SCOPUS Times Cited 324]


[30] L. Zhang, D. Tjondronegoro, V. Chandran, "Facial expression recognition experiments with data from television broadcasts and the World Wide Web," Image and Vision Computing 32, pp. 107-119, 2014.
[CrossRef] [Web of Science Times Cited 24] [SCOPUS Times Cited 27]


[31] S. Wang, Z. Liu, Z. Wang, G. Wu, P. Shen, S. He, X. Wang, "Analyses of a multi-modal spontaneous facial expression database," IEEE Trans. Affective Computing, vol. 4, issue 1, pp. 34-46, on Jan.-March, 2013.
[CrossRef] [Web of Science Times Cited 23] [SCOPUS Times Cited 27]


[32] X. Zhang, L. Yin, J.F. Cohn, S. Canavan, M. Reale, A. Horowitz, J.M. Girard, "BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database," Image and Vision Computing, vol. 32, no. 10, pp. 692-706, 2014.
[CrossRef] [Web of Science Times Cited 245] [SCOPUS Times Cited 276]


[33] D. McDuff, R. El Kaliouby, T. Senechal, M. Amr, J. F. Cohn, R. Picard, "Affectiva-MIT Facial Expression Dataset (AM-FED): Naturalistic and Spontaneous Facial Expressions Collected 'In-the-Wild'," In Computer Vision and Pattern Recognition Workshops (CVPRW), 2013 IEEE Conference on, pp. 881-888, 2013.
[CrossRef] [Web of Science Times Cited 92] [SCOPUS Times Cited 129]


[34] A. Tcherkassof, D. Dupre, B. Meillon, N. Mandran, M. Dubois, J. Adam, "DynEmo: A video database of natural facial expressions of emotions," The International Journal of Multimedia & Its Applications (IJMA) Vol.5, No.5, pp. 61-80, 2013.
[CrossRef]


[35] I. Sneddon, M. McRorie, G. McKeown, J. Hanratty, "The Belfast Induced Natural Emotion Database," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 32-41, Jan.-March 2012.
[CrossRef] [Web of Science Times Cited 67] [SCOPUS Times Cited 71]


[36] A. Dhall, R. Goecke, S. Lucey, T. Gedeon, "Collecting large, richly annotated facial-expression databases from movies," IEEE Multimedia, vol. 19, no. 3, pp. 34-41, July-Sept. 2012,
[CrossRef] [Web of Science Times Cited 225] [SCOPUS Times Cited 288]


[37] C. Zhan, W. Li, F. Ogunbona, F. Safaei, "A Real-Time Facial Expression Recognition System for Online Games," International Journal of Computer Games Technology, vol. 2008, 7 pages, 2008.
[CrossRef]


[38] R. D'Ambrosio, G. Iannello, P. Soda, "Automatic facial expression recognition using statistical-like moments," Lecture Notes in Computer Science, pp. 585-594, 2011.
[CrossRef] [SCOPUS Times Cited 5]


[39] F. Abdat, C. Maaoui, A. Pruski, "Human-computer interaction using emotion recognition from facial expression," IEEE UKSim 5th European Symposium on Computer Modeling and Simulation, pp. 196-201, 2011.
[CrossRef] [Web of Science Times Cited 22] [SCOPUS Times Cited 39]


[40] C. Martin, U. Werner, H-M. Gross, "A real-time facial expression recognition system based on active appearance models using gray images and edge images," Proc. 8th IEEE int. Conf. On face and Gesture Recognition (FG'08), Amsterdam, paper no. 299, pp. 6, IEEE, 2008.
[CrossRef] [SCOPUS Times Cited 40]


[41] L. Zhang, D. Tjondronegoro and V. Chandran, "Discovering the best feature extraction and selection algorithms for spontaneous facial expression recognition," IEEE International Conference on Multimedia and Expo, 2012.
[CrossRef] [SCOPUS Times Cited 13]


[42] L. Zhang, D. Tjondronegoro, V. Chandran, J. Eggink, "Towards robust automatic affective classification of images using facial expressions for practical applications," Multimedia Tools and Applications, pp. 1-27, Springer International Publishing, 2015.
[CrossRef] [Web of Science Times Cited 10] [SCOPUS Times Cited 12]


[43] M. Khademit, M. T. Manzuri, M. H. Kiapour, M. Safayabu, M. Shojaei, "Facial expression representation and recognition using 2DHLDA, Gabor Wavelets and Ensemble Learning," 2011. [Persistent URL]

[44] Y. Cheon, D. Kim, "Natural facial expression recognition using differential-AAM and manifold learning," Pattern Recognition 42, 1300-1350, 2009.
[CrossRef] [Web of Science Times Cited 88] [SCOPUS Times Cited 116]


[45] R. A. Khan, A. Meyer, H. Konik, S. Bouakaz, "Framework for reliable, real-time facial expression recognition for low resolution images," Pattern Recognition Letters 34, pp. 1159-1168, 2013.
[CrossRef] [Web of Science Times Cited 80] [SCOPUS Times Cited 96]


[46] E. Sariyanidi, H. Gunes, M. Gökmen, A. Cavallaro, "Local Zernike moment representations for facial affect recognition," In Proceedings of the British Machine Vision Conference, pp.108.1-108.13, BMVA Press, 2013.
[CrossRef] [Web of Science Times Cited 18]


[47] M. Zhang, D.J. Lee, A. Desai, K.D. Lillywhite, B.J. Tippetts, "Automatic Facial Expression Recognition Using Evolution-Constructed Features," In Advances in Visual Computing, vol. 8888, pp. 282-291, Springer International Publishing, 2014.
[CrossRef] [SCOPUS Times Cited 3]


[48] J. Sung, D. Kim, "Real-time facial expression using STAAM and layered GDA classifier," Image and Vision Computing 27(9), pp. 1313-1325, 2009.
[CrossRef] [Web of Science Times Cited 20] [SCOPUS Times Cited 24]


[49] C. Fahn, M. Wu and C. Kao, "Real-time facial expression recognition in image sequences using an AdaBoost-based multi-classifier," Proceedings: APSIPA ASC 2009: Asia-Pacific Signal and Information Processing Association, Annual Summit and Conference, pp. 8-17, 2009. [Handle]

[50] C. Loconsole, D. Chiaradia, V. Bevilacqua, A. Frisoli, "Real-Time Emotion Recognition: An Improved Hybrid Approach for Classification Performance," In Intelligent Computing Theory, pp. 320-331, Springer International Publishing, 2014.
[CrossRef] [SCOPUS Times Cited 9]


[51] Noldus Information Technology, "FaceReader methodology"-White Paper based on FaceReader 5. Available: http://www.noldus.com, accessed on 3.09.2014.

[52] J. Whitehill, M. S. Bartlett, and J. R. Movellan, "Automatic facial expression recognition," In J. Gratch and S. Marsella, editors, Social Emotions in Nature and Artifact. Oxford University Press, 2014.

[53] Emotient, San Diego, U.S.A., http://www.emotient.com/products, accessed on 3.09.2014.

[54] Sightcorp B.V., Amsterdam, http://sightcorp.com/insight/, accessed on 3.09.2014.

[55] R. Valenti, N. Sebe, T. Gevers, "Facial expression recognition: A fully integrated approach," 14th International Conference of Image Analysis and Processing - Workshops (ICIAPW 2007), 2007.
[CrossRef] [Web of Science Times Cited 18] [SCOPUS Times Cited 31]


[56] D. M. Deriso, J. Susskind, J. Tanaka, P. Winkielman, J. Herrington, R. Schultz, M. Bartlett, "Exploring the facial expression perception-production link using real-time automated facial expression recognition, " In Computer Vision-ECCV 2012. Workshops and demonstrations, pp. 270-279, Springer Berlin Heidelberg, 2012.
[CrossRef] [SCOPUS Times Cited 11]


[57] L. Danner, L. Sidorkina, M. Joechl, K. Duerrschmid, "Make a face! Implicit and explicit measurement of facial expressions elicited by orange juices using face reading technology," Food Quality and Preference, Volume 32, Part B, March 2014, Pages 167-172,
[CrossRef] [Web of Science Times Cited 94] [SCOPUS Times Cited 97]


[58] Jamshidnezhad, A., Nordin, M. J., "Bee royalty offspring algorithm for improvement of facial expressions classification model," International Journal of Bio-Inspired Computation, 5(3), pp. 175-191, 2013.
[CrossRef] [Web of Science Times Cited 12] [SCOPUS Times Cited 11]


[59] A. Khanum, M. Mufti, M. Y. Javed, M. Z. Shafiq, "Fuzzy case-based reasoning for facial expression recognition," Fuzzy Sets and Systems 160(2), pp. 231-250, 2009.
[CrossRef] [Web of Science Times Cited 43] [SCOPUS Times Cited 50]


[60] D. Filko, G. Martinovic, "Emotion recognition system by a neural network based facial expression analysis," Automatika - Journal for Control, Measurement, Electronics, Computing and Communications, vol. 54, issue 2, pp. 263-272, 2013.
[CrossRef] [Web of Science Times Cited 17] [SCOPUS Times Cited 30]


[61] S. Wan, J.K.Aggarwal, "Spontaneous facial expression recognition: A robust metric learning approach," Pattern Recognition 47, 1859-1868, 2014.
[CrossRef] [Web of Science Times Cited 62] [SCOPUS Times Cited 66]


[62] J. Zhou, Y. Wang, T. Xu, W. Liu, "A novel facial expression recognition based on the curvelet features," Image and Video Technology (PSIVT), 2010 Fourth Pacific-Rim Symposium, pp. 82-87, 14-17 Nov. 2010.
[CrossRef] [Web of Science Times Cited 1] [SCOPUS Times Cited 5]


[63] T. H. H. Zavaschi, A. L. Koerich, L. E. S. Oliveira, "Facial expression recognition using ensemble of classifiers," Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference, pp. 1489-1492, 2011.
[CrossRef] [SCOPUS Times Cited 24]


[64] J.-J. Wong, S.-Y. Cho, "A face emotion tree structure representation with probabilistic recursive neural network modeling," Neural Computing & Applications 19, pp. 33-54, 2010.
[CrossRef] [Web of Science Times Cited 15] [SCOPUS Times Cited 21]


[65] A. Rahman, L. Ali, "Weighted local directional pattern for robust facial expression recognition," Informatics and Computational Intelligence (ICI), pp. 268-271, 2011.
[CrossRef] [SCOPUS Times Cited 3]


[66] K. Hong, S. K. Chalup, R. A. R. King, "A component based approach improves classification of discrete facial expressions over a holistic approach," WCCI, IEEE World Congress on Computational Intelligence, pp. 1-8, 2010.
[CrossRef] [SCOPUS Times Cited 6]


[67] L. Zhang, S. Chen, T. Wang, Z. Liu, "Automatic facial expression recognition based on hybrid features," International Conference on Future Electrical Power and Energy Systems, Energy Procedia vol. 17, pp. 1817-1823, 2012. Available: http://doi.org/10.1016/j.egypro.2012.02.317

[68] J. G. Razuri, D. Sundgren, R. Rahmani, A. M. Cardenas, "Automatic emotion recognition through facial expression analysis in merged images based on an artificial neural network," 12th Mexican International Conference on Artificial Intelligence, pp. 85-96, 2013.
[CrossRef] [Web of Science Times Cited 12] [SCOPUS Times Cited 22]


[69] A. Jamshidnezhad, J. Nordin, "An adaptive learning model based genetic for facial expression recognition," International Journal of the Physical Sciences Vol. 7(4), pp. 619-623, 2012.
[CrossRef]


[70] R. KiranServadevabhalta, M. Benevoy, V. Ng-Thow-Hing, S. Musallam, "Adaptive Facial Expression Recognition using Inter-modal Top-down Context," Proceedings of the 13th international conference on multimodal interfaces - ICMI '11, 2011.
[CrossRef] [SCOPUS Times Cited 2]


[71] K. S. Rao, S. G. Koolagudi, "Recognition of emotions from video using acoustic and facial features," Signal, Image and Video Processing Journal, 2013.
[CrossRef] [Web of Science Times Cited 6] [SCOPUS Times Cited 10]


[72] S. Zhang, X. Zhao and B. Lei, "Robust facial expression recognition via compressive sensing," Sensors, vol. 12(12), pp. 3747-3761, 2012.
[CrossRef] [Web of Science Times Cited 60] [SCOPUS Times Cited 64]


[73] Y. Ji, K. Idrissi, "Using moments on spatiotemporal plane for facial expression recognition," 20th International Conference on Pattern Recognition, pp. 3806-3809, 2010.
[CrossRef] [SCOPUS Times Cited 3]


[74] X. Zhao, S. Zhang, "Facial expression recognition using local binary patterns and discriminant kernel locally linear embedding," Journal on Advances in Signal Processing 1, pp. 20, 2012.
[CrossRef] [Web of Science Times Cited 63] [SCOPUS Times Cited 58]


[75] X. Zhao, S. Zhan, "Facial expression recognition based on local binary patterns and kernel discriminant isomap," Journal Sensors 11, pp. 9573-9588, 2011.
[CrossRef] [Web of Science Times Cited 72] [SCOPUS Times Cited 87]


[76] L. Zhang, D. Tjondronegoro, "Facial expression recognition using facial movement features," IEEE Transactions on Affective Computing, vol. 2(4), pp. 219-229, 2011.
[CrossRef] [Web of Science Times Cited 111] [SCOPUS Times Cited 159]


[77] G. Zhao, M. Pietikainen, "Boosted multi-resolution spatiotemporal descriptors for facial expression recognition," Pattern Recognition Letters, vol. 30/12, 1 September, pp. 1117-1127, 2009.
[CrossRef] [Web of Science Times Cited 79] [SCOPUS Times Cited 105]


[78] Y. Ji, K. Idrissi, "Automatic facial expression recognition based on spatiotemporal descriptors," Pattern Recognition Letters 33(10), pp. 1373-1380, 2012.
[CrossRef] [Web of Science Times Cited 29] [SCOPUS Times Cited 34]


[79] M. Kabir, T. Jabid, and O. Chae, "Local directional pattern variance (LDPV): a robust feature descriptor for facial expression recognition," 7th IEEE International Conference on Advanced Video and Signal Based Surveillance, 2010.
[CrossRef] [SCOPUS Times Cited 39]


[80] H. Kabir, T. Jabid and O. Chae, "A local directional pattern variance (LDVPv) based face descriptor for human facial expression recognition," Seventh IEEE International Conference on Advanced Video and Signal Based Surveillance, 2010.
[CrossRef] [SCOPUS Times Cited 39]


[81] A. Saha, Q.M. Jonathan Wu, "Facial expression recognition using curvelet based local binary patterns," ICASSP, 2010.
[CrossRef] [Web of Science Times Cited 17] [SCOPUS Times Cited 24]


[82] J. Zhou, Y. Wang, T. Xu, W. Liu, "A novel facial expression recognition based on the curvelet features," Fourth Pacific-Rim Symposium on Image and Video Technology, 2010.
[CrossRef] [Web of Science Times Cited 1] [SCOPUS Times Cited 5]


[83] S. Chatterjee, H. Shi, "A Novel Neuro Fuzzy Approach to Human Emotion Determination," Digital Image Computing: Techniques and Applications, 2010.
[CrossRef] [SCOPUS Times Cited 13]


[84] X. Zhao, H. Zhang, Z. Xu, "Expression recognition by extracting facial features of shapes and textures," Journal of Computational Information Systems 8, Pages 3377-3384, 2012. Available: http://www.jofcis.com/publishedpapers/2012_8_8_3377_3384.pdf

[85] K. Yurtkan, H. Demirel, "Feature selection for improved 3D facial expression recognition," Pattern Recognition Letters 38, pages 26-33, 2014.
[CrossRef] [Web of Science Times Cited 29] [SCOPUS Times Cited 33]


[86] S. Moore, R. Bowden, "The effects of pose on facial expression recognition," Proceedings of the British Machine Vision Conference, pp. 1-11, 2009.
[CrossRef] [SCOPUS Times Cited 34]


[87] A. Kar, A. Mukerjee, "Facial expression classification using visual cues and language," IIT, 2011. Available: http://www.cs.berkeley.edu/~akar/se367/project/report.pdf

[88] M. H. Siddiqi, S. Lee, Y.K. Lee, A.M. Khan, P.T.H. Truc, "Hierarchical recognition scheme for human facial expression recognition systems," Sensors, 13(12), pp. 16682-16713, 2013.
[CrossRef] [Web of Science Times Cited 26] [SCOPUS Times Cited 27]


[89] C. Mayer, M. Eggers, B. Radig, "Cross-database evaluation for facial expression recognition," Pattern recognition and image analysis, vol. 24, no. 1, pp. 124-132, Springer International Publishing, 2014.
[CrossRef] [SCOPUS Times Cited 37]


[90] C. Shan, S. Gong, P. W. McOwan, "Facial expression recognition based on local binary patterns: A comprehensive study," Image and Vision Computing 27, pp. 803-816, 2009.
[CrossRef] [Web of Science Times Cited 1154] [SCOPUS Times Cited 1479]


[91] H. Yan, M. H. Ang Jr, A. N. Poo, "Cross-dataset facial expression recognition," IEEE International Conference on Robotics and Automation, Shanghai, China, 9-13 May, 2011. http://dx.doi.org/10.1109/icra.2011.5979705

[92] M. S. Zia, M. A. Jaffar, "An adaptive training based on classification system for patterns in facial expressions using SURF descriptor templates," Multimedia Tools and Applications, 2013.
[CrossRef] [Web of Science Times Cited 9] [SCOPUS Times Cited 9]


[93] L. Alboaie, "Pres-personalized evaluation system in a web community," Proceedings of the 2008 IEEE International Conference on e-Business, pp. 64-69, July 2008. Available: http://doc.utwente.nl/75918/1/ICE-B_2008.pdf#page=123



References Weight

Web of Science® Citations for all references: 8,642 TCR
SCOPUS® Citations for all references: 12,065 TCR

Web of Science® Average Citations per reference: 92 ACR
SCOPUS® Average Citations per reference: 128 ACR

TCR = Total Citations for References / ACR = Average Citations per Reference
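The TCR/ACR definitions above amount to a sum and a per-reference average; the sketch below makes this concrete. The citation counts are hypothetical, and the journal's exact rounding and inclusion rules (e.g. for references without a DOI) are not stated, so integer truncation here is an assumption.

```python
# Hedged sketch of the References Weight indicators defined above:
# TCR is the sum of citation counts over a paper's reference list,
# ACR the per-reference average (truncated to an integer - an assumption).

def references_weight(citations: list[int]) -> tuple[int, int]:
    """Return (TCR, ACR) for a list of per-reference citation counts."""
    tcr = sum(citations)
    acr = tcr // len(citations)
    return tcr, acr

# Hypothetical per-reference citation counts.
print(references_weight([30, 49, 175, 12]))  # (266, 66)
```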

We introduced in 2010, for the first time in scientific publishing, the term "References Weight" as a quantitative indication of the quality of a paper's references.

Citations for references updated on 2021-09-18 15:42 in 540 seconds.




Note1: Web of Science® is a registered trademark of Clarivate Analytics.
Note2: SCOPUS® is a registered trademark of Elsevier B.V.
Disclaimer: All queries to the respective databases were made by using the DOI record of every reference (where available). Due to technical problems beyond our control, the information is not always accurate. Please use the CrossRef link to visit the respective publisher site.

Copyright ©2001-2021
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania


All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.

Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.

Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.



