Classification of Diabetic Retinopathy disease with Transfer Learning using Deep Convolutional Neural Networks
SOMASUNDARAM, K., SIVAKUMAR, P., SURESH, D.
Author keywords
computer aided diagnosis, image classification, learning, neural networks, retinopathy
About this article
Date of Publication: 2021-08-31
Volume 21, Issue 3, Year 2021, On page(s): 49 - 56
ISSN: 1582-7445, e-ISSN: 1844-7600
Digital Object Identifier: 10.4316/AECE.2021.03006
Web of Science Accession Number: 000691632000006
SCOPUS ID: 85114777184
Abstract
Diabetic Retinopathy (DR) remains a leading cause of vision deterioration worldwide, and its prevalence is worsening day by day. DR gives almost no warning signs, which makes its detection a great challenge; it is therefore highly desirable that DR be discovered in time. At present, detection requires an ophthalmologist to manually examine a fundus image and identify DR by locating the exudates associated with diabetes-related vascular irregularities. In this work, an automatic DR classification system assigns images to different severity levels. To extract specific image features without any loss of spatial information, Convolutional Neural Network (CNN) models, which process an image with distinct weight matrices, are used. Various CNN models are first evaluated to determine the best-performing architecture for DR classification, with the objective of obtaining higher accuracy. In the classification of DR disease with transfer learning using deep CNN models, the proposed CNN model achieves an accuracy of 97.72% on the Kaggle dataset and 97.58% on the MESSIDOR dataset. The proposed technique provides better results than other state-of-the-art methods.
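As a rough illustration of the transfer-learning pipeline described in the abstract, the sketch below fine-tunes a pre-trained CNN backbone for DR severity classification with TensorFlow/Keras. The backbone (ResNet50), the five-grade severity scale, the directory layout, and all hyperparameters are assumptions chosen for illustration, not the authors' published configuration.

```python
# Minimal sketch (not the authors' code): transfer learning for DR grading.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)
NUM_CLASSES = 5  # assumed grades: none, mild, moderate, severe, proliferative

# Hypothetical folders of fundus images, one sub-folder per severity grade
# (e.g. prepared from the Kaggle APTOS 2019 or MESSIDOR data).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "fundus/val", image_size=IMG_SIZE, batch_size=32)

# Pre-trained ImageNet backbone; convolutional weights are frozen so only the
# new classification head is learned (transfer learning).
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.resnet50.preprocess_input(inputs)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Comparing "various CNN models", as the abstract describes, amounts to swapping the `base` backbone (e.g. for other pre-trained architectures available in Keras) and keeping the rest of the pipeline fixed, then selecting the model with the best validation accuracy.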