Combination of Long-Term and Short-Term Features for Age Identification from Voice
BUYUK, O., ARSLAN, M. L.
Author keywords
feature extraction, Gaussian mixture model, neural networks, speech processing, support vector machines
About this article
Date of Publication: 2018-05-31
Volume 18, Issue 2, Year 2018, On page(s): 101 - 108
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2018.02013
Web of Science Accession Number: 000434245000013
SCOPUS ID: 85047853422
Abstract
In this paper, we propose to use Gaussian mixture model (GMM) supervectors in a feed-forward deep neural network (DNN) for age identification from voice. The GMM is trained with short-term mel-frequency cepstral coefficients (MFCC). The proposed GMM/DNN method is compared with a feed-forward DNN and a recurrent neural network (RNN) in which the MFCC features are used directly. We also compare against the classical GMM and GMM/support vector machine (SVM) methods. Baseline results are obtained with a set of long-term features commonly used for age identification in previous studies; a feed-forward DNN and an SVM are trained on these long-term features. All systems are tested on a speech database of 228 female and 156 male speakers. We define three age classes for each gender: young, adult, and senior. In the experiments, the proposed GMM/DNN significantly outperforms all the other DNN types, and its performance is comparable only to that of the GMM/SVM method. Furthermore, experimental results show that age identification performance improves significantly when the decisions of the short-term and long-term systems are combined: the combination yields approximately 4% absolute improvement over the best standalone system.
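To make the pipeline described in the abstract concrete, the following minimal Python sketch (scikit-learn for the GMM, Keras for the DNN) illustrates the GMM-supervector idea and a simple score-level fusion of the short-term and long-term systems. The mixture count, MFCC dimension, relevance factor, layer widths, and fusion weight are illustrative assumptions, not values reported in the paper, and the fusion rule here is one plausible combination scheme rather than the authors' exact method.

```python
# Sketch of a GMM-supervector / DNN age classifier (assumed sizes, not
# the paper's settings): a background GMM is trained on pooled MFCC
# frames, each utterance is mapped to a mean supervector by relevance-MAP
# adaptation, and a small feed-forward DNN classifies the supervector.
import numpy as np
from sklearn.mixture import GaussianMixture
from tensorflow.keras import layers, models

N_MIX, N_MFCC, N_CLASSES = 64, 20, 3  # 3 classes: young / adult / senior

def train_ubm(all_frames):
    """Train the background GMM on pooled MFCC frames (n_frames x N_MFCC)."""
    ubm = GaussianMixture(n_components=N_MIX, covariance_type='diag')
    ubm.fit(all_frames)
    return ubm

def map_supervector(ubm, frames, relevance=16.0):
    """Relevance-MAP adapt the UBM means to one utterance and stack them."""
    post = ubm.predict_proba(frames)             # (T, N_MIX) responsibilities
    n_k = post.sum(axis=0)                       # soft frame counts per mixture
    f_k = post.T @ frames                        # first-order statistics
    alpha = (n_k / (n_k + relevance))[:, None]   # adaptation coefficients
    e_k = f_k / np.maximum(n_k[:, None], 1e-8)   # per-mixture data means
    means = alpha * e_k + (1 - alpha) * ubm.means_
    return means.ravel()                         # supervector, N_MIX * N_MFCC

def build_dnn(input_dim):
    """Small feed-forward classifier over supervectors (widths are guesses)."""
    model = models.Sequential([
        layers.Input(shape=(input_dim,)),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(256, activation='relu'),
        layers.Dense(N_CLASSES, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

def fuse(post_short, post_long, w=0.5):
    """Weighted score-level fusion of the two systems' class posteriors."""
    return w * post_short + (1 - w) * post_long
```

The same classifier head could be reused for the long-term feature baseline by changing `input_dim`; the `fuse` function then combines the per-class posteriors of the short-term (supervector) and long-term systems before the final argmax decision.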