Advances in Electrical and Computer Engineering (AECE)

FACTS & FIGURES

JCR Impact Factor: 0.700
JCR 5-Year IF: 0.700
SCOPUS CiteScore: 1.8
Issues per year: 4
Current issue: Aug 2024
Next issue: Nov 2024
Avg. review time: 55 days
Avg. acceptance-to-publication time: 60 days
APC: 300 EUR


PUBLISHER

Stefan cel Mare University of Suceava
Faculty of Electrical Engineering and Computer Science
13, Universitatii Street
Suceava - 720229
ROMANIA

Print ISSN: 1582-7445
Online ISSN: 1844-7600
WorldCat: 643243560
doi: 10.4316/AECE


TRAFFIC STATS

2,989,567 unique visits
1,159,574 downloads
Since November 1, 2009



MOST RECENT ISSUES

Volume 24 (2024)
  » Issue 3 / 2024
  » Issue 2 / 2024
  » Issue 1 / 2024

Volume 23 (2023)
  » Issue 4 / 2023
  » Issue 3 / 2023
  » Issue 2 / 2023
  » Issue 1 / 2023

Volume 22 (2022)
  » Issue 4 / 2022
  » Issue 3 / 2022
  » Issue 2 / 2022
  » Issue 1 / 2022

Volume 21 (2021)
  » Issue 4 / 2021
  » Issue 3 / 2021
  » Issue 2 / 2021
  » Issue 1 / 2021

View all issues








LATEST NEWS

2024-Jun-20
Clarivate Analytics published the InCites Journal Citations Report for 2023. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.700 (0.700 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.600.

2023-Jun-28
Clarivate Analytics published the InCites Journal Citations Report for 2022. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.800 (0.700 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 1.000.

2023-Jun-05
SCOPUS published the CiteScore for 2022, computed by using an improved methodology, counting the citations received in 2019-2022 and dividing the sum by the number of papers published in the same time frame. The CiteScore of Advances in Electrical and Computer Engineering for 2022 is 2.0. For "General Computer Science" we rank #134/233 and for "Electrical and Electronic Engineering" we rank #478/738.
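
As a rough illustration of the CiteScore formula described above (citations received in a four-year window divided by the number of papers published in the same window), here is a minimal Python sketch; the counts used are invented, not SCOPUS data:

```python
# Illustrative only: the counts below are made up, not taken from SCOPUS.
def cite_score(citations_in_window: int, papers_in_window: int) -> float:
    """CiteScore = citations received in a 4-year window divided by the
    number of papers published in that same window, rounded to one decimal."""
    return round(citations_in_window / papers_in_window, 1)

# Hypothetical example: 1,200 citations in 2019-2022 to 600 papers
# published in 2019-2022 gives a CiteScore of 2.0.
print(cite_score(1200, 600))  # 2.0
```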

2022-Jun-28
Clarivate Analytics published the InCites Journal Citations Report for 2021. The InCites JCR Impact Factor of Advances in Electrical and Computer Engineering is 0.825 (0.722 without Journal self-cites), and the InCites JCR 5-Year Impact Factor is 0.752.

2022-Jun-16
SCOPUS published the CiteScore for 2021, computed by using an improved methodology, counting the citations received in 2018-2021 and dividing the sum by the number of papers published in the same time frame. The CiteScore of Advances in Electrical and Computer Engineering for 2021 is 2.5, the same as for 2020 but better than all our previous results.

Read More »


    
 

Issue 3/2012, Article 5

 HIGH-IMPACT PAPER 

The Analysis of the FCM and WKNN Algorithms Performance for the Emotional Corpus SROL

ZBANCIOC, M., FERARU, S. M.

Download PDF (875 KB) | Citation | Downloads: 1,062 | Views: 4,939

Author keywords
emotional speech database, FCM and WKNN algorithm, recurrent coefficient, statistical parameters

References keywords
speech(20), emotion(15), recognition(11), systems(7), fuzzy(7), features(7), classification(7), emotional(5), communication(5), automatic(5)

About this article
Date of Publication: 2012-08-31
Volume 12, Issue 3, Year 2012, On page(s): 33 - 38
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2012.03005
Web of Science Accession Number: 000308290500005
SCOPUS ID: 84865856327

Abstract
The purpose of this research is to find a set of parameters relevant to emotion recognition. In this study we used recordings from the emotion database SROL, which is part of the project 'Voiced Sounds of Romanian Language'. The database was validated by human listeners; the recognition accuracy for the correctly expressed emotions (neutral tone, joy, fury and sadness) over the entire database was 63.97%. We classified the input data with the recurrent Fuzzy C-Means (FCM) and Weighted K-Nearest Neighbors (WKNN) algorithms. We compared the cluster positions with the statistical parameters extracted from the vowels in order to establish the relevance of each parameter to emotion recognition. For the parameters extracted for each vowel (mean, median and standard deviation of the fundamental frequency F0 and of the F1-F4 formants, jitter, and shimmer), the FCM algorithm gave satisfactory results for phoneme recognition, but not for emotion recognition. For this reason we used the WKNN algorithm for classification, which gave errors of around 20-30%, compared with classification errors of around 40-50% for the FCM algorithm.
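
As a rough illustration only (not the authors' implementation), the sketch below shows a distance-weighted k-nearest-neighbors (WKNN) classifier of the kind mentioned in the abstract, applied to hypothetical per-vowel feature vectors standing in for the F0/formant statistics, jitter and shimmer; all names and values are assumptions.

```python
import numpy as np

# Illustrative WKNN (distance-weighted k-nearest neighbors) sketch.
# Feature vectors stand in for per-vowel statistics (F0/F1-F4 stats,
# jitter, shimmer); labels stand in for emotion classes. Values are made up.

def wknn_predict(train_X, train_y, x, k=3, eps=1e-9):
    """Predict the label of x by weighting the k nearest training samples
    with the inverse of their Euclidean distance to x."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)
    votes = {}
    for idx, w in zip(nearest, weights):
        votes[train_y[idx]] = votes.get(train_y[idx], 0.0) + w
    return max(votes, key=votes.get)

# Tiny made-up example: 2-D features, two emotion classes.
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = ["neutral", "neutral", "fury", "fury"]
print(wknn_predict(train_X, train_y, np.array([0.85, 0.75])))  # -> "fury"
```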


References

[1] K. R. Scherer, "Vocal communication of emotion: A review of research paradigms", Speech Communication, vol. 40, pp. 227-256, 2003.
[CrossRef] [Web of Science Times Cited 980] [SCOPUS Times Cited 1238]


[2] W. Hess, "Pitch determination of speech signals: algorithms and devices", Springer-Verlag, Berlin, Germany 1983.
[CrossRef]


[3] S. McGilloway, R. Cowie, E. Douglas-Cowie, S. Gielen, M. Westerdijk, S. Stroeve, "Approaching automatic recognition of emotion from voice: a rough benchmark", in Proc. of the ISCA Workshop on Speech and Emotion, Belfast, Northern Ireland, pp. 200-205, 2000.

[4] G. Klasmeyer, "An automatic description tool for time contours and long-term average voice features in large emotional speech databases", in Proc. of the ISCA Workshop on Speech and Emotion, Belfast, Northern Ireland, pp. 66-71, 2000.

[5] M. Slaney, G. McRoberts, "Baby ears: a recognition system for affective vocalization", in Proc. of ICASSP, 1998.

[6] S. Steidl, M. Levit, A. Batliner, E. Noth, H. Niemann, "'Of all things the measure is man': automatic classification of emotions and inter-labeler consistency", in Proc. of ICASSP, pp. 317-320, 2005.

[7] R. O. Duda, P. E. Hart, D. G. Stork, Pattern Recognition, 2nd edition. New York, John Wiley & Sons Inc., 2001.

[8] F. Dellaert, Th. Polzin, A. Waibel, "Recognizing emotion in speech", in Proc. of ICSLP, vol. 3, pp. 1970 - 1973, 1996.

[9] Xi Li, Jidong Tao, Michael T. Johnson, J. Soltis, A. Savage, Kirsten M. Leong, John D. Newman, "Stress and emotion classification using jitter and shimmer features", In Proc. of ICASSP, pp. 1081-1084, 2007.

[10] A. Noam, "Classifying emotions in speech: a comparison of methods", in Proc. of 7th European Conference on Speech Communication and Technology, Aalborg, Denmark, pp. 127-130, 2001.

[11] H. N. Teodorescu, M. Zbancioc, M. Feraru, "The analysis of the vowel triangle variation for Romanian language depending on emotional states", in Proc. of ISSCS Conference, Romania, ISBN 978-1-4577-0201-3, pp. 331-334, 2011

[12] H. N. Teodorescu, M. Zbancioc, M. Feraru, "Statistical characteristics of the formants of the Romanian vowels in emotional states", in Proc. of the Int. Conf. on Speech Technology and Human-Computer Dialogue, Romania, ISBN 978-1-4577-0439-0, pp. 13-22, 2011

[13] H. N. Teodorescu, "Recurrent Rules-Based Fuzzy Decision-Making and Control", in Proc. of WSAS Conference, Udine, Italy, 2004.

[14] H. N. Teodorescu, "Fuzzy systems with recurrent rules in population and medical models", in Proc. of the American Conference on Applied Mathematics World Scientific and Engineering Academy and Society Stevens Point, Wisconsin, USA, ISBN: 978-960-6766-47-3, pp. 343-349, 2008.

[15] H. N. Teodorescu, "Fuzzy Systems with Recurrent Rules. A new type of fuzzy systems and applications", in Intelligent Systems, pp. 157-166, Editors: H. N. Teodorescu, Iaşi, România, Ed. Performantica, ISBN 973-7994-85-X, 2004.

[16] M. Zbancioc, "Recurrent fuzzy rules (Teodorescu's fuzzy systems) in economic process modeling", in Proc. of 15th International Conference on Control Systems and Computer Science, Bucuresti, România, 2005.

[17] C. M. Lee, S. Narayanan, "Emotion recognition using a data-driven fuzzy inference system", in Proc. of Eurospeech, Geneva, pp. 157-160, 2003.

[18] M. Grimm, K. Kroschel, "Rule-based emotion classification using acoustic features", in Proc. Int. Conf. on Telemedicine and Multimedia Communication, 2005.

[19] D. Ververidis, C. Kotropoulos, I. Pitas, "Automatic emotional speech classification", in Proc. of Internat. Conf. on Acoustics, Speech and Signal Processing, Montreal, vol. 1, pp. 593-596, 2004.

[20] Valery A. Petrushin, "Emotion recognition in speech signal: experimental study, development, and application", in Proc. of the Sixth International Conference on Spoken Language Processing ICSLP 2000.

[21] Dan-Ning Jiang, Lian-Hong Cai, "Speech emotion classification with the combination of statistic features and temporal features", IEEE International Conference on Multimedia and Expo (ICME), pp. 1967-1970, 2004.
[CrossRef] [Web of Science Times Cited 31]


[22] Aishah A. M. Razak, Mohd Hafizuddin Mohd Yusof, Ryoichi Komiya, "Towards automatic recognition of emotion in speech", pp. 548-551.

[23] Kuan-Chieh Huang, Yau-Hwang Kuo, "A novel objective function to optimize neural networks for emotion recognition from speech patterns", in Proc. of the second World Congress on Nature and Biologically Inspired Computing, Kitakyushu, Fukuoka, Japan, pp. 413-417, 2010

[24] Liqin Fu, Changjiang Wang, Yongmei Zhang, "A study on influence of gender on speech emotion classification", in Proc. of 2nd Int. Conference on Signal Processing Systems, pp. 534-537, 2010.
[CrossRef] [SCOPUS Times Cited 10]


[25] Ashish B. Ingale, D. S. Chaudhari, "Speech Emotion Recognition", International Journal of Soft Computing and Engineering (IJSCE) ISSN: 2231-2307, Volume-2, Issue-1, 2012.

[26] M. E. Ayadi, M. S. Kamel, F. Karray, "Survey on Speech Emotion Recognition: Features, Classification Schemes, and Databases", Pattern Recognition, vol. 44, pp. 572-587, 2011.
[CrossRef] [Web of Science Times Cited 1222] [SCOPUS Times Cited 1659]


[27] D. Ververidis, C. Kotropoulos, "Emotional speech recognition: resources, features and methods", Elsevier Speech Communication, vol. 48, no. 9, pp. 1162-1181, 2006.
[CrossRef] [Web of Science Times Cited 558] [SCOPUS Times Cited 745]


References Weight

Web of Science® Citations for all references: 2,791 TCR
SCOPUS® Citations for all references: 3,652 TCR

Web of Science® Average Citations per reference: 103 ACR
SCOPUS® Average Citations per reference: 135 ACR

TCR = Total Citations for References / ACR = Average Citations per Reference
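
For concreteness, the sketch below reconstructs these four figures from the per-reference counts shown in the reference list above, under the assumption that references without a listed count contribute zero and that the ACR is the rounded mean over all 27 references:

```python
# Reconstruction sketch (an assumption, not the journal's published code):
# TCR sums the per-reference citation counts listed above; ACR is that sum
# divided by the number of references, rounded to the nearest integer.
wos_counts = {1: 980, 21: 31, 26: 1222, 27: 558}      # Web of Science Times Cited
scopus_counts = {1: 1238, 24: 10, 26: 1659, 27: 745}  # SCOPUS Times Cited
n_references = 27

for label, counts in [("Web of Science", wos_counts), ("SCOPUS", scopus_counts)]:
    tcr = sum(counts.values())        # Total Citations for References
    acr = round(tcr / n_references)   # Average Citations per Reference
    print(f"{label}: TCR = {tcr:,}, ACR = {acr}")

# Web of Science: TCR = 2,791, ACR = 103
# SCOPUS: TCR = 3,652, ACR = 135
```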

We introduced in 2010 - for the first time in scientific publishing - the term "References Weight" as a quantitative indication of the quality ...

Citations for references updated on 2024-11-16 15:36 in 43 seconds.




Note1: Web of Science® is a registered trademark of Clarivate Analytics.
Note2: SCOPUS® is a registered trademark of Elsevier B.V.
Disclaimer: All queries to the respective databases were made by using the DOI record of every reference (where available). Due to technical problems beyond our control, the information is not always accurate. Please use the CrossRef link to visit the respective publisher site.

Copyright ©2001-2024
Faculty of Electrical Engineering and Computer Science
Stefan cel Mare University of Suceava, Romania


All rights reserved: Advances in Electrical and Computer Engineering is a registered trademark of the Stefan cel Mare University of Suceava. No part of this publication may be reproduced, stored in a retrieval system, photocopied, recorded or archived, without the written permission from the Editor. When authors submit their papers for publication, they agree that the copyright for their article be transferred to the Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, Romania, if and only if the articles are accepted for publication. The copyright covers the exclusive rights to reproduce and distribute the article, including reprints and translations.

Permission for other use: The copyright owner's consent does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific written permission must be obtained from the Editor for such copying. Direct linking to files hosted on this website is strictly prohibited.

Disclaimer: Whilst every effort is made by the publishers and editorial board to see that no inaccurate or misleading data, opinions or statements appear in this journal, they wish to make it clear that all information and opinions formulated in the articles, as well as linguistic accuracy, are the sole responsibility of the author.



