Open Access article published by De Gruyter on August 20, 2021, under the CC BY 4.0 license.

Convolutional neural-network-based classification of retinal images with different combinations of filtering techniques

Asha Gnana Priya Henry and Anitha Jude
From the journal Open Computer Science


Retinal image analysis is an important diagnostic method in modern ophthalmology because much of the information about eye health is present in the retina. The image acquisition process can introduce artifacts that degrade image quality; this can be mitigated by suitable image enhancement techniques combined with a computer-aided diagnosis system. Deep learning is an important computational technique for medical imaging applications. The main aim of this article is to find the best enhancement techniques for the identification of diabetic retinopathy (DR); the enhanced images are tested with a commonly used deep learning technique, and the performances are measured. The input images are taken from the Indian Diabetic Retinopathy Image Dataset, and 13 filters, including smoothing and sharpening filters, are used to enhance the images. The quality of the enhancement techniques is then compared using performance metrics; the best results are obtained for the median, Gaussian, bilateral, Wiener, and partial differential equation filters, which are subsequently combined to further improve the enhancement. The output images from all the enhancement filters are given as input to a convolutional neural network, and the results are compared to find the best enhancement method.

1 Introduction

Diabetic retinopathy (DR) is a major cause of blindness and mostly affects people with diabetes. Identifying DR at an early stage can prevent vision loss, and this requires trained ophthalmologists and computer-aided diagnosis (CAD) systems. The performance of such a system depends on the quality of the given retinal fundus image and on proper image enhancement techniques. For diagnostic purposes, a fundus image is taken using a fundus camera after dilating the pupil with eye drops and focusing on the fundus. Fundus photography uses an intricate microscope attached to a flash-enabled fundus camera and can be performed with dyes such as fluorescein and indocyanine green or with colored filters. The optical design of the fundus camera follows the monocular indirect ophthalmoscopy principle, and the resulting magnified images show the optic nerve, through which signals travel to the brain, and allow the retinal vessels to be visualized. The images obtained from the fundus camera can be stored and used for future reference.

The medical image acquisition system introduces some noise, which should be removed for better analysis. Noise can be removed by filtering techniques, chosen with prior knowledge of the filters, while preserving the details in an image. Filters are used to enhance or modify an image, and the performance of image enhancement depends on the filter chosen. Filtering can highlight or remove features in an image and can be carried out in either the frequency or the spatial domain [1,2]. Filtering-based image processing includes smoothing filters, sharpening filters, and edge enhancement. In a smoothing filter, each pixel value is replaced by the average of its neighborhood, which can remove fine details in an image. Sharpening filters highlight fine details by removing blur and emphasizing edges. Edge enhancement increases the contrast of edges in an image [3,4,5].

Illumination and equipment factors affect the quality of an image. Contrast enhancement increases the gray-level dynamic range of the original image and helps ophthalmologists differentiate normal from abnormal regions [6]. Contrast variation, low contrast, and irregular illumination motivate the pre-processing stage [7]. This stage mainly helps the CAD system and visual assessment achieve better segmentation and classification in retinal fundus images [8].

Image enhancement techniques improve the segmentation of the optic disc, blood vessels, and clinical features in retinal images [9]. Based on the noise in retinal images, enhancement techniques can be classified into categories such as image resolution enhancement, filtering, histogram equalization (HE), morphological operations, fuzzy-based enhancement, and transform-based enhancement. The image resolution method shows the importance of resizing an image to reduce memory size and speed up the classifier [10]. The filtering-based method maximizes the filter response to structures in fundus images for segmentation [11]. Image quality can be judged by contrast, and HE addresses the illumination-correction problem [12]. Morphological operations have parameters such as size, shape, and structuring element, and the performance of the method is measured by varying these values; this helps improve retinal image enhancement and subsequent processing [13]. Fuzzy-based methods enhance the contrast of an image with sharp boundaries and smooth regions [14,15]; transforms can increase the contrast to improve the quality of an image with sharp edges [16].

Deep learning is an important artificial intelligence approach for medical applications. Deep convolutional neural networks (CNNs) achieve high accuracy through automatic feature selection [17,18]. Multiple features can be used in a CNN for classification, and a hierarchical approach is used to learn complex features. Classification accuracy can be improved by translation- and distortion-tolerant features in higher layers [19]. CNNs can solve problems in the accurate detection of many diseases [20]. Image enhancement techniques are applied to underwater images, infrared images, medical images, satellite images, and so on. The literature shows that, for better segmentation and classification, image enhancement techniques should be selected according to the noise present in the available retinal fundus images.

In this article, various enhancement methods have been implemented; the performance metrics are tabulated, and the variations in the values are shown in graphs. The structure of the article is as follows: Section 2 presents the retinal image enhancement block diagram, the various image enhancement techniques, and a comparison of their performance metrics; Section 3 presents the combinational filters and their performance metrics; and Section 4 presents the convolutional neural network architecture and the accuracy obtained with each of the enhancement filters.

2 Retinal image enhancement

Retinal image enhancement techniques help remove noise, brighten the image, and improve its contrast. The quality and information content of the original image can be improved by mathematical techniques before processing. This is useful for identifying features and for further image analysis. The general block diagram for retinal image enhancement is shown in Figure 1.

Figure 1: Block diagram for retinal image enhancement.

2.1 Retinal fundus image

The input retinal fundus images are taken from the Indian Diabetic Retinopathy Image Dataset (IDRiD), released for the ISBI 2018 challenge. This dataset consists of 516 images for DR grading, of which 168 are normal fundus images and 348 show signs of DR, captured at an eye clinic in Nanded, Maharashtra, India. A sample input retinal fundus image is shown in Figure 2.

Figure 2: Retinal fundus image.

2.2 Image resizing

Image resizing increases or decreases the total number of pixels in an image; because the images in a dataset, or real-time images, may vary in resolution, oversized images can slow down the classifier. Hence, medical images can be resized to 256 × 256, 512 × 512, 1,024 × 1,024 pixels, and so on to speed up further processing. The images in this article are resized to 256 × 256 pixels, as shown in Figure 3.
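The article does not state which resizing implementation was used; as a minimal sketch, a nearest-neighbour resize can be written in NumPy (the function name `resize_nn` and the input dimensions are our own illustration, not the authors' code):

```python
import numpy as np

def resize_nn(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D or 3-D (H, W[, C]) image."""
    in_h, in_w = img.shape[:2]
    # Map every output pixel back to its nearest source pixel.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows][:, cols]

# A stand-in fundus image at an arbitrary camera resolution.
fundus = np.random.randint(0, 256, (1024, 1536, 3), dtype=np.uint8)
resized = resize_nn(fundus, 256, 256)
print(resized.shape)  # (256, 256, 3)
```

In practice a library routine with interpolation (e.g. bilinear) would be preferred; the sketch only shows the index-mapping idea behind resampling to 256 × 256.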

Figure 3: Resized image.

2.3 Splitting of RGB channels

The resized RGB image is then split into red, green, and blue channels for the localization of blood vessels. The red channel is the brightest but has low contrast; it carries information about the optic disc and blood vessels that can be used to identify diseases such as glaucoma, but it is not suitable for DR because of its higher noise. The green channel carries the most information on retinal blood vessels and clinical features because of its high contrast, and it can be used for DR grading. The blue channel carries little information because its contrast is very low, and it is mostly not used for disease identification. The red, green, and blue channels of the images used in this article are shown in Figure 4.
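Channel splitting amounts to slicing the third axis of the image array. A short sketch, assuming the image is loaded in RGB order (note that some libraries, e.g. OpenCV, load BGR instead):

```python
import numpy as np

# A resized fundus image as an (H, W, 3) RGB array (stand-in data here).
rgb = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Split into the three colour channels.
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]

# The green channel is the one retained for DR grading, since retinal
# vessels show the highest contrast there.
print(green.shape)  # (256, 256)
```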

Figure 4: Splitting of RGB channels: (a) red channel, (b) green channel, and (c) blue channel.

2.4 Green channel to grayscale conversion

Color information does not, in most cases, help to identify edges or other features, so the image is converted to grayscale. Grayscale conversion reduces memory size, and because the result is a single-layer image, segmentation problems in morphological operations can be solved more easily. In this article, only the green channel is converted because the blood vessels needed for subsequent processing are clearly visible in it; the converted image is shown in Figure 5.

Figure 5: Green channel to grayscale conversion.

2.5 Retinal image enhancement

Image enhancement techniques are used to remove noise, highlight features, and improve the quality of an image. Quality can be improved by contrast enhancement, i.e., by increasing the separation between dark and bright regions. This is accomplished using filters; numerous filters are available for enhancing images, such as smoothing, sharpening, and edge enhancement filters. In this article, 13 simple filters are proposed for obtaining enhanced blood vessel structures: the median filter (MF), Gaussian filter (GF), histogram equalization (HE), contrast-limited adaptive histogram equalization (CLAHE), tophat filter (THF), bottomhat filter (BHF), canny edge detection (CED), bilateral filter (BF), Laplacian filter (LF), Sobel filter (SF), Wiener filter (WF), homomorphic filter (HF), and partial differential equation (PDE) filter. They are described with their mathematical expressions and output images in Table 1.

Table 1

Proposed filters with their mathematical expressions and output images

Filters, mathematical expressions, and advantages (output images omitted here):

MF: $f(x,y) = \mathrm{median}\{g(s,t)\}$ — smoothing filter; removes noise while preserving edges; highlights blood vessels.

GF: $G(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2+y^2)/2\sigma^2}$ — smoothing filter; blurs edges and reduces noise and contrast; enhances blood vessels.

HE: $P(r_k) = n_k/n$ — sharpening filter; modifies image intensity; enhances contrast in an image.

CLAHE: $g = [g_{\max} - g_{\min}]\,p(f) + g_{\min}$ — sharpening filter; improves low-contrast medical images and tiny blood vessel extraction.

THF: $T(f) = f - (f \circ b)$ — sharpening filter; improves contrast; enhances light objects on a dark background and preserves sharp bottoms.

BHF: $T(f) = (f \bullet b) - f$ — sharpening filter; enhances dark objects on a light background; extracts blood vessels.

CED: $G = \sqrt{G_x^2 + G_y^2} \approx |G_x| + |G_y|$ — smoothing filter; detects edges while preserving important structural properties in an image.

BF: $I'(x) = \frac{1}{W_p} \sum_{x_i \in \Omega} I(x_i)\, f_r(\lVert I(x_i) - I(x)\rVert)\, g_s(\lVert x_i - x\rVert)$ — smoothing filter; removes noise while preserving edges; retinal blood vessels are preserved.

LF: $L(x,y) = \frac{\partial^2 I}{\partial x^2} + \frac{\partial^2 I}{\partial y^2}$ — sharpening filter; enhances the edges of blood vessels and features with sharp discontinuities.

SF: $G = \sqrt{G_x^2 + G_y^2}$, $\Theta = \arctan(G_y/G_x)$ — smoothing filter; enhances edges and detects features.

WF: $H_w(u,v) = \frac{S_{aa}(u,v)}{S_{aa}(u,v) + S_{nn}(u,v)}$ — smoothing filter; removes noise and blurring; measures contrast between the original and a lowpass-filtered image.

HF: $I(x,y) = L(x,y)\,R(x,y)$ — sharpening filter; increases contrast by normalizing the brightness in an image.

PDE: $\frac{\partial p}{\partial t} = \nabla \cdot \left[ c(\lVert \nabla p \rVert)\, \nabla p \right]$ — sharpening filter; extracts the contour of vessels in a retinal fundus image.

2.6 Performance metrics

The performance of the image processing algorithms or enhancement filters can be evaluated by various quantitative measures. The metrics such as peak signal-to-noise ratio (PSNR), entropy (E), mean square error (MSE), root mean square error (RMSE), structural similarity index measure (SSIM), contrast improvement index (CII), absolute mean brightness error (AMBE), linear index of fuzziness (LIF), relative contrast enhancement factor (RCEF), and universal image quality index (UQI) are used to find which image enhancement techniques give better results. PSNR is the most commonly used metric for image enhancement techniques and is given by

(1) $\mathrm{PSNR} = 10 \log_{10} \left[ \dfrac{255^2}{\frac{1}{MN} \sum_{i} \sum_{j} (g_{ij} - f_{ij})^2} \right]$,

where f and g are the original and enhanced images, and M and N are the numbers of row and column pixels, respectively. A higher PSNR value indicates better performance. The information content of the image is measured using E:

(2) $E[p] = -\displaystyle\sum_{j=0}^{L-1} p(j) \log_2 p(j)$,

where p is the normalized histogram count and L is the number of intensity levels. The value of E ranges from 0 to 8; a higher E indicates more detail and better performance. MSE is the cumulative squared error between the original and enhanced images:

(3) $\mathrm{MSE} = \dfrac{1}{mn} \displaystyle\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [I(i,j) - K(i,j)]^2$,

where I and K are the original and enhanced images, and m and n are the numbers of rows and columns, respectively. A low MSE value indicates a low error. RMSE, the square root of MSE, is used to assess the quality of an image:

(4) $\mathrm{RMSE} = \left[ \dfrac{1}{mn} \displaystyle\sum_{i=0}^{m-1} \sum_{j=0}^{n-1} [I(i,j) - K(i,j)]^2 \right]^{1/2}$,

where I and K are the original and enhanced images, and m and n are the rows and columns in the image, respectively. A low value of RMSE indicates a lower error in an image. The change in the structural similarity between the original and enhanced images is measured by SSIM and is given by

(5) $\mathrm{SSIM}(x,y) = \dfrac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$,

where $\mu_x$ and $\mu_y$ are the means of x and y, respectively; $\sigma_x^2$, $\sigma_y^2$, and $\sigma_{xy}$ are the variances and covariance of x and y; and $c_1$ and $c_2$ are stabilizing constants. The value ranges from −1 to 1; a value of exactly 1 indicates perfect structural similarity, and lower values indicate less structural similarity. The contrast level in the image is measured using CII:

(6) $\mathrm{CII} = \dfrac{c_{\mathrm{en}}}{c}$,

where c and $c_{\mathrm{en}}$ are the contrast values of the original and enhanced images, respectively. A larger CII value indicates a better enhancement result. Brightness preservation relative to the original brightness is rated using AMBE:

(7) $\mathrm{AMBE} = \mathrm{mean}(\text{enhanced image}) - \mathrm{mean}(\text{input image})$.

The lower value of AMBE preserves brightness and the quality of the image is improved. The difference between the original and enhanced image is measured using LIF and is given by

(8) $\mathrm{LIF} = \dfrac{2}{MN} \displaystyle\sum_{x=1}^{M} \sum_{y=1}^{N} \min\{p_{xy},\, (1 - p_{xy})\}$,

where $p_{xy} = \sin\left[\frac{\pi}{2}\left(1 - \frac{f(x,y)}{f_{\max}}\right)\right]$, $f_{\max}$ is the maximum value of the whole image, and M and N are the numbers of rows and columns in the image, respectively. A smaller LIF value indicates that the enhanced image is clear, with less noise. The dynamic range of the histogram is measured using RCEF:

(9) $\mathrm{RCEF} = \dfrac{\sigma_B^2 / \mu_B}{\sigma_A^2 / \mu_A}$,

where $\sigma_A$ and $\sigma_B$ are the standard deviations of the original and enhanced images, and $\mu_A$ and $\mu_B$ are their means, respectively. For better enhancement, RCEF should be greater than 1. The distortion in an image is measured using UQI, which combines loss of correlation, luminance distortion, and contrast distortion. The mathematical expression for UQI is

(10) $\mathrm{UQI} = \dfrac{4 \sigma_{xy}\, \bar{x}\, \bar{y}}{(\sigma_x^2 + \sigma_y^2)(\bar{x}^2 + \bar{y}^2)}$,

where the correlation coefficient is $\frac{\sigma_{xy}}{\sigma_x \sigma_y}$, the luminance distortion is $\frac{2\bar{x}\bar{y}}{\bar{x}^2 + \bar{y}^2}$, and the contrast distortion is $\frac{2\sigma_x \sigma_y}{\sigma_x^2 + \sigma_y^2}$. The range of UQI is from 0 to 1; when the two images are identical (i.e., x = y), the value is 1.
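Several of these metrics follow directly from their definitions. A hedged NumPy sketch of Eqs. (1)–(4) (the helper names and the synthetic test images are our own, not the authors' evaluation code):

```python
import numpy as np

def psnr(f: np.ndarray, g: np.ndarray) -> float:
    """Peak signal-to-noise ratio for 8-bit images, Eq. (1)."""
    mse = np.mean((f.astype(float) - g.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

def entropy(img: np.ndarray) -> float:
    """Shannon entropy of an 8-bit image, Eq. (2)."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]                  # skip empty bins to avoid log2(0)
    return float(-np.sum(p * np.log2(p)))

# Synthetic "original" and "enhanced" images for illustration only.
original = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
enhanced = np.clip(original.astype(int) + 5, 0, 255).astype(np.uint8)

mse = np.mean((original.astype(float) - enhanced.astype(float)) ** 2)
rmse = np.sqrt(mse)               # Eq. (4) is the square root of Eq. (3)
print(psnr(original, enhanced), rmse, entropy(original))
```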

The evaluated results of the performance metrics explained above are shown in Table 2. From this analysis, the highest PSNR is obtained by GF and the highest E by CLAHE among the proposed filters; GF also yields the lowest MSE and RMSE, and its SSIM is closest to 1. CLAHE gives the highest CII and LF the lowest AMBE. HF gives the smallest LIF value, indicating the least noise, and CLAHE gives the highest RCEF. The best UQI is obtained by BF. Overall, the best performance among the proposed filters is obtained by MF, GF, BF, WF, and PDE. These filters are combined for better results, as explained in Section 3.

Table 2

Evaluation of performance metrics

Filters PSNR E MSE RMSE SSIM CII AMBE LIF RCEF UQI
MF 48.8278 4.8800 0.9710 0.9542 0.9893 2.8783 0.0156 0.1571 0.9968 0.1985
GF 53.8574 4.9624 0.3096 0.5376 0.9977 1.4097 0.0022 0.1583 0.9952 0.1986
HE 21.2632 5.7129 503.6569 22.2505 0.6576 42.6710 17.8838 0.1676 3.8242 0.1151
CLAHE 16.2210 5.8787 1.5732 × 10³ 39.5337 0.4640 45.7160 37.4574 0.1505 5.3204 0.0994
THF 17.8359 3.4202 1.2192 × 10³ 33.8875 0.4331 0.1095 −27.6012 0.1800 0.0085 0.0276
BHF 16.9483 2.2983 1.4758 × 10³ 37.4008 0.2983 8.0963 −30.0360 0.1804 8.7075 × 10⁻⁴ 0.0037
CED 16.6258 0.4793 1.5859 × 10³ 38.7930 0.2598 0.0045 −31.4381 0.1626 9.0996 × 10⁻⁷ 4.9338 × 10⁻⁵
BF 48.9856 4.9483 0.8740 0.9210 0.9896 3.4009 0.0018 0.1569 0.9939 0.1987
LF 16.5974 0.0030 1.5952 × 10³ 38.9121 0.2552 0 −31.5414 0.1814 2.4040 × 10⁻¹¹ 3.5189 × 10⁻⁹
SF 17.3494 3.1893 1.3533 × 10³ 35.7679 0.3554 5.4952 −28.1576 0.1800 0.0083 0.0135
WF 46.5614 4.9878 1.6139 1.2354 0.9795 4.7459 −0.0106 0.1568 0.9903 0.1987
HF 16.7911 0 1.5331 × 10³ 38.1086 0.2955 40.8456 −30.5417 0 0 0
PDE 51.4980 4.9702 0.4727 0.6832 0.9941 3.0299 0.0014 0.1585 0.9955 0.1984

3 Combinational filter enhancement

The filters that produced the best enhancement results are chosen for combinational filter enhancement: MF, GF, BF, WF, and PDE. Among these, MF, GF, BF, and WF are smoothing filters, and PDE is a sharpening filter. Hence, each smoothing filter is combined with the sharpening filter, and the performance metrics are tabulated in Table 3. Applying GF may blur the image; this blurring can be inverted using WF, followed by MF to preserve the edges and the sharpening PDE filter to enhance the features (the GWMF + PDE combination); the results are tabulated. The combinational filter enhancement comparison graph is shown in Figure 6.
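The article does not give the discretization used for its PDE filter; a common choice for an edge-preserving PDE enhancement, offered here purely as a sketch under that assumption, is Perona–Malik diffusion, which smooths flat regions while protecting vessel edges (the function `pde_filter` and all parameter values are our own):

```python
import numpy as np

def pde_filter(img: np.ndarray, n_iter: int = 20, k: float = 0.5,
               lam: float = 0.2) -> np.ndarray:
    """Perona-Malik diffusion: discretizes dp/dt = div[c(|grad p|) grad p]."""
    def g(d):
        # Edge-stopping conductance: small across strong gradients.
        return np.exp(-(d / k) ** 2)

    p = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences towards the four neighbours (periodic wrap).
        d_n = np.roll(p, -1, axis=0) - p
        d_s = np.roll(p, 1, axis=0) - p
        d_e = np.roll(p, -1, axis=1) - p
        d_w = np.roll(p, 1, axis=1) - p
        p += lam * (g(d_n) * d_n + g(d_s) * d_s + g(d_e) * d_e + g(d_w) * d_w)
    return p

noisy = np.random.rand(64, 64)
smooth = pde_filter(noisy)
```

A combinational pipeline in the spirit of this section would then simply chain the chosen smoothing filters before `pde_filter`, each stage feeding the next.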

Table 3

Performance metrics of combinational filter enhancement

Filters PSNR E MSE RMSE SSIM CII AMBE LIF RCEF UQI
MF + PDE 46.6925 4.9577 1.5506 1.2132 0.9830 3.7923 0.0177 0.1569 0.9934 0.1987
GF + PDE 48.1480 4.9858 1.0646 1.0154 0.9893 3.0479 0.0037 0.1579 0.9916 0.1987
BF + PDE 46.6369 4.9693 1.5113 1.2093 0.9825 3.7921 0.0048 0.1565 0.9909 0.1987
WF + PDE 45.5758 4.9824 2.0131 1.3818 0.9763 4.6286 −0.0114 0.1563 0.9876 0.1989
GWMF + PDE 43.8897 5.0036 3.0064 1.6834 0.9698 4.8848 −0.0099 0.1543 0.9831 0.1993
Figure 6: Comparison of combinational filter enhancement performance metrics.

The combinational filter enhancement graph clearly shows that the PSNR value is highest for GF + PDE, the E value is highest for GWMF + PDE, the MSE and RMSE error ranges are smallest for GF + PDE, the SSIM value is closest to 1 for GF + PDE, the CII value is highest for GWMF + PDE, the AMBE value is lowest for WF + PDE, the LIF value is lowest for GWMF + PDE, the RCEF value is best for MF + PDE, and the UQI value is highest for GWMF + PDE. The overall best performance among the combinational filters is achieved by GF + PDE. This is tested using a deep learning neural network.


4 Convolutional neural network

Automatic disease diagnostic systems for medical image classification are based on deep learning. The enhancement filters above are compared using a simple deep learning method for the identification of DR: a CNN, whose block diagram is shown in Figure 7. The total number of images used in this network is 516, of which 413 are training images. The training set contains 134 normal images and 279 images with signs of DR; the testing set contains 34 normal images and 69 images with signs of DR. These images are enhanced using the 13 simple filters and 5 combinational filters, and each enhanced image is given as input to the CNN. The input layer specifies the input size as height, width, and channel size; in the proposed method it is [256 × 256 × 1], with a channel size of 1 for grayscale images.

  1. Convolutional layer

The proposed system has three convolutional layers, each specified by two arguments: the filter size and the number of filters. The filter size gives the height and width of the filter, and the number of filters gives the number of neurons connected to the same region of the input. Convolution is performed between the input and the filter coefficients. All three convolutional layers in the proposed method use a [3 × 3] filter size, with 8, 16, and 32 filters, respectively; the second and third convolutional layers operate on the outputs of the first and second layers. The number of feature maps is determined by this parameter, and padding is added with stride 1 so that the output size equals the input size.

  2. Batch normalization

Batch normalization turns network training into an easier optimization problem. It normalizes the activations and gradients propagating through the network. To speed up training, this layer is placed between the convolution and ReLU operations. Batch normalization also reduces the sensitivity to network initialization.

  3. ReLU function

The rectified linear unit (ReLU) is the most commonly used nonlinear activation function and follows each convolutional layer. This layer sets negative values to zero while passing positive values unchanged, enhancing the output of each convolutional layer.

  4. Maxpooling layer

Maxpooling is a down-sampling layer that reduces the spatial size of the feature maps and removes unnecessary spatial information, allowing the number of filters in subsequent convolutional layers to be increased without increasing the computation required per layer. The rectangular pooling region used in the proposed method is [2 × 2].

  5. Fully connected layer

The fully connected layer follows the convolutional and down-sampling layers. This layer connects every neuron to all the neurons in the preceding layer. It is the decision-making layer: the last fully connected layer combines all the features learned by the previous layers to classify the images. The output size of the last fully connected layer equals the number of classes in the target data; the proposed system has an output size of two, corresponding to two classes. The final classification layer uses the probabilities from the SoftMax activation function to assign each input to one of the mutually exclusive classes and to compute the loss.

  6. Softmax function

The output of the last fully connected layer is normalized by the SoftMax activation function, which produces positive values that sum to 1; the classification layer uses these values as classification probabilities. This layer is connected after the last fully connected layer.
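The layer dimensions implied by the description above can be traced with a small helper (pure Python, no deep learning framework; it assumes one 2 × 2 max-pooling layer after each of the three "same"-padded 3 × 3 convolutions, which the text suggests but does not state explicitly):

```python
# Trace feature-map shapes through the proposed CNN.
def trace_shapes(h=256, w=256, c=1, filters=(8, 16, 32)):
    shapes = [(h, w, c)]
    for n_filters in filters:
        # 3x3 convolution, stride 1, "same" padding: H and W unchanged,
        # channel count becomes the number of filters.
        c = n_filters
        shapes.append((h, w, c))
        # 2x2 max pooling halves both spatial dimensions.
        h, w = h // 2, w // 2
        shapes.append((h, w, c))
    return shapes

shapes = trace_shapes()
print(shapes[-1])      # (32, 32, 32) feature map after the last pooling
fc_inputs = shapes[-1][0] * shapes[-1][1] * shapes[-1][2]
print(fc_inputs)       # 32768 inputs feed the fully connected layer
```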

Figure 7: Block diagram of the proposed CNN.

The network is trained using stochastic gradient descent with momentum at a learning rate of 0.01, a maximum of 100 epochs, and the given validation data and frequency; the data are shuffled after every epoch. Training yields the validation accuracy. Testing is carried out on unseen images, and the output is categorized according to the targets given to the fully connected layer; in this article, the classes are normal and DR. All the enhancement filters are compared using the CNN, and the performance metrics on the testing data are tabulated in Table 4.
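The update rule behind stochastic gradient descent with momentum can be sketched as follows (a generic illustration, not the authors' training code; the momentum coefficient 0.9 is an assumption, chosen because it is a common default):

```python
import numpy as np

def sgd_momentum_step(w, velocity, grad, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: v <- m*v - lr*grad; w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w = np.array([1.0, -2.0])          # stand-in weights
v = np.zeros_like(w)               # velocity starts at zero
grad = np.array([0.5, -0.5])       # gradient from one mini-batch
w, v = sgd_momentum_step(w, v, grad)
print(w)  # [ 0.995 -1.995]
```

The velocity term accumulates past gradients, which damps oscillation and speeds convergence compared with plain gradient descent.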

Table 4

CNN performance metrics

Filters Accuracy (%)
MF 96.11
GF 86.78
HE 65.56
CLAHE 69.44
THF 65
BHF 67.22
CED 64.44
BF 78.33
LF 68.89
SF 71.67
WF 76.84
HF 69
PDE 96.44
MF + PDE 90.44
GF + PDE 92.22
BF + PDE 82.54
WF + PDE 85
GWMF + PDE 86.78

The table shows that the accuracy is highest for PDE and MF, followed by the combinations of PDE with GF and with MF. The accuracy could be further improved by optimizing the enhancement filter techniques and by using modified deep learning techniques.

5 Conclusion

In this article, various enhancement filters, including smoothing and sharpening filters, are implemented and compared using performance metrics for enhancement techniques. The smoothing filters, namely the median, Gaussian, bilateral, and Wiener filters, are combined with the sharpening PDE filter for better enhancement. The filtered output images are then given as input to a CNN for the detection of DR and for testing the enhancement techniques. The highest accuracies are obtained for PDE, MF, and the combination of GF with PDE. The implemented filters highlight blood vessels and image features, improve contrast, and reduce uneven illumination and noise amplification, supporting better segmentation, feature selection, and classification. Future work will apply optimization techniques to the enhancement filters to improve accuracy. Clinical features can also be identified in the enhanced images to grade the stages of DR. Identifying DR at an early stage is a challenging but essential task for avoiding vision loss. Modifications to the deep learning networks may also increase accuracy.

Conflict of interest: Authors state no conflict of interest.


[1] H. Ackar, A. Abd Almisreb, and M. A. Saleh, "A review in image enhancement techniques," Southeast Europe J. Soft Comput., vol. 8, no. 1, pp. 42–48, 2019, doi: 10.21533/scjournal.v8i1.175.

[2] N. R. Sabri and H. B. Yazid, "Image enhancement methods for fundus retina images," in Proc. IEEE Student Conference on Research and Development, Selangor, Malaysia, 2018, pp. 1–6.

[3] A. Goyal, A. Bijalwan, and M. K. Chowdhury, "A comprehensive review of image smoothing techniques," Int. J. Adv. Res. Comput. Eng. Technol., vol. 1, no. 4, pp. 315–319, 2012.

[4] J. N. Archana and P. Aishwarya, "A review on the image sharpening algorithms using unsharp masking," Int. J. Eng. Sci. Comput., vol. 6, no. 7, pp. 8729–8733, 2016.

[5] E. Daniel and J. Anitha, "Retinal image enhancement using wavelet domain edge filtering and scaling," in Proc. International Conference on Electronics and Communication Systems, Coimbatore, India, 2014, pp. 1–6, doi: 10.1109/ECS.2014.6892670.

[6] P. K. Verma, N. P. Singh, and D. Yadav, "Image enhancement: A review," in Ambient Communications and Computer Systems, Springer, 2020, pp. 347–355, doi: 10.1007/978-981-15-1518-7_29.

[7] T. A. Soomro, A. J. Afifi, A. A. Shah, S. Soomro, G. A. Baloch, L. Zheng, et al., "Impact of image enhancement technique on CNN model for retinal blood vessels segmentation," IEEE Access, vol. 7, pp. 158183–158197, 2019, doi: 10.1109/ACCESS.2019.2950228.

[8] S. H. Rasta, M. E. Partovi, H. Seyedarabi, and A. Javadzadeh, "A comparative study on preprocessing techniques in diabetic retinopathy retinal images: illumination correction and contrast enhancement," J. Med. Signals Sens., vol. 5, no. 1, pp. 40–48, 2015, doi: 10.4103/2228-7477.150414.

[9] D. Li, L. Zhang, C. Sun, T. Yin, C. Liu, and J. Yang, "Robust retinal image enhancement via dual-tree complex wavelet transform and morphology-based method," IEEE Access, vol. 7, pp. 47303–47316, 2019, doi: 10.1109/ACCESS.2019.2909788.

[10] H. Pratt, F. Coenen, D. M. Broadbent, S. P. Harding, and Y. Zheng, "Convolutional neural networks for diabetic retinopathy," Proc. Comput. Sci., vol. 90, pp. 200–205, 2016, doi: 10.1016/j.procs.2016.07.014.

[11] A. Subudhi, S. Pattnaik, and S. Sabut, "Blood vessel extraction of diabetic retinopathy using optimized enhanced images and matched filter," J. Med. Imaging, vol. 3, no. 4, p. 044003, 2016, doi: 10.1117/1.JMI.3.4.044003.

[12] N. Singla and N. Singh, "Blood vessel contrast enhancement techniques for retinal images," Int. J. Adv. Res. Comput. Sci., vol. 8, no. 5, pp. 709–712, 2017.

[13] Y. Kimori, "Mathematical-morphology based approach to the enhancement of morphological features in medical images," J. Clin. Bioinf., vol. 1, no. 1, p. 33, 2011, doi: 10.1186/2043-9113-1-33.

[14] S. V. Paranjape, S. Ghosh, A. K. Ray, J. Chatterjee, and A. R. Lande, "A modified fuzzy contrast technique for retinal images," in Proc. International Conference on Computing Communication Control and Automation, Pune, India, 2015, pp. 892–896, doi: 10.1109/ICCUBEA.2015.177.

[15] M. Arif, G. Wang, O. Geman, and J. Chen, "Medical image segmentation by combining adaptive artificial bee colony and wavelet packet decomposition," in Proc. International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, Springer, Singapore, 2019, pp. 158–169, doi: 10.1007/978-981-15-1304-6_13.

[16] A. Swaminathan, S. S. Ramapackiam, T. Thiraviam, and J. Selvaraj, "Contourlet transform-based sharpening enhancement of retinal images and vessel extraction application," Biomed. Tech./Biomed. Eng., vol. 58, no. 1, pp. 87–96, 2013, doi: 10.1515/bmt-2012-0055.

[17] P. Khojasteh, B. Aliahmad, and D. K. Kumar, "Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms," BMC Ophthalmol., vol. 18, no. 1, pp. 1–13, 2018, doi: 10.1186/s12886-018-0954-4.

[18] D. J. Hemanth, J. Anitha, A. Naaji, O. Geman, and D. E. Popescu, "A modified deep convolutional neural network for abnormal brain image classification," IEEE Access, vol. 7, pp. 4275–4283, 2018, doi: 10.1109/ACCESS.2018.2885639.

[19] K. Xu, D. Feng, and H. Mi, "Deep convolutional neural network-based early automated detection of diabetic retinopathy using fundus image," Molecules, vol. 22, no. 12, p. 2054, 2017, doi: 10.3390/molecules22122054.

[20] B. B. Narhari, K. M. Bakwad, and A. D. Sayyad, "Identification of diabetic retinopathy from color fundus images using deep convolutional neural network," Int. J. Recent Technol. Eng., vol. 9, p. 1, 2020, doi: 10.35940/ijrte.F9905.059120.

Received: 2020-05-06
Revised: 2020-06-22
Accepted: 2020-12-16
Published Online: 2021-08-20

© 2021 Asha Gnana Priya Henry and Anitha Jude, published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
