Trainable watershed-based model for cornea endothelial cell segmentation

Abstract: Segmentation of medical images plays a significant role in diagnosis with computer-aided systems. This article focuses on the health of the human corneal endothelium, one of the principal research interests concerning the human cornea. Various pathological conditions hasten the death of endothelial cells, which decreases cell density in an abnormal manner. Dead cells degrade the hexagonal arrangement. Damaged endothelial cells cannot regenerate, which leaves room for neighbouring cells to migrate and expand to fill the space. This results in unpredictable cell elongation as well as increase in size and thinning. Cell density and shape are therefore considered major parameters when describing the health of the corneal endothelium. In this study, medical feature extraction depends on segmentation of the endothelial cell boundary, and segmenting such objects, especially the thin, transparent, and unclear cell boundaries, is challenging owing to the way images are captured during examination of the endothelium layer by ophthalmologists using confocal or specular microscopy. The resulting image suffers from various issues that affect its quality. Low quality is due to non-uniform illumination and the presence of considerable noise and artefacts resulting from high distortion, and most of these limitations stem from the nature of the imaging modality. Images usually contain certain kinds of noise as well as continuous shadow. Furthermore, the cells are separated by poor borders, which makes segmentation very difficult. Cell shapes are irregular, the contrast of such images is low, boundaries are blurry, diverse objects are present, and homogeneity is lacking.
The main aim of the study is to propose and develop a totally automatic, robust, and real-time model for the segmentation of endothelial cells of the human cornea obtained by in vivo microscopy and the computation of their clinical features.


Introduction
Medical imaging includes the technologies used to view the human body for monitoring, diagnosing, or treating medical conditions. It is basically aimed at obtaining an image of the internal structure of the body in as non-intrusive a manner as possible. Medical imaging has emerged as one of the most commonly used laboratory methods and has undergone rapid change over the past decade, leading to the development of more accurate and less intrusive devices. The region of interest (ROI) is often segmented manually by a properly trained expert. Manual segmentation can involve multiple subjective measurement decisions, and such decisions may increase the probability of intra- and inter-observer errors. When such errors occur in judging endothelial cells, the consequences can be severe in terms of missed detections (false negatives) and false alarms (false positives). Some medical practitioners have stated that raising a false alarm due to erroneous judgement is highly unacceptable. Thus, it is crucial to develop automatic solutions that facilitate speedy analysis and minimise intra- and inter-observer variation [1]. Three parameters are now measured when evaluating the health of the endothelium: polymegethism, also termed cell size variation; pleomorphism, also known as hexagonality; and endothelial cell density. Various approaches have been used to separate every cell found in a corneal endothelium image and have given accurate results. Obtaining reliable cell contours otherwise requires manual delineation of the cell boundaries, and because there are many endothelial cells in every square millimetre, segmenting them manually has proven to be very time-consuming.
Additionally, the arrangement of cells in the corneal endothelium is quite important for ophthalmologists, since it gives essential diagnostic information concerning the status of corneal health and signs of any disease [2]. Notably, several decades have passed since the first corneal image was recorded, yet there is still a lack of precise, fully computerised means of delineating the cell borders and performing quantitative assessments of their characteristics. Notable inter- and intra-observer disparities can still be seen. Previous studies [3,4] confirmed that a number of tools can be used to assess cell density and endothelium morphometry. Both non-contact specular and confocal microscopes give quality images of the peripheral and central cornea. Besides, another study by Salvetat et al. [5] states that non-contact confocal microscopy is a current modality that gives the same image quality as the other microscopes while generating a large field of view. This means that the extraction of medical features by image processing provides tremendous assistance for correct diagnosis of corneal health: it increases accuracy and saves time. Analysis of the said parameters can also be carried out automatically by a computer-aided diagnosis model. The model should be fully automatic both while capturing the image using a medical tool and during examination by an optometrist (Figure 1).
Huang et al. [6] showed that manual delineation of cells is a task that is quite labour-intensive. The performance of the cell-segmentation software provided by microscope manufacturers is insubstantial: such integrated software has been shown to produce erroneous automated analyses when compared with expert annotation, which calls for a fully automatic model. This article focuses on creating such a model through the development and enhancement of image processing techniques, so that the current challenges in the measurement and segmentation of corneal endothelial cells can be addressed.
The main aim of this work is to propose and develop a totally automatic, robust, and real-time model for the segmentation of endothelial cells of the human cornea obtained by in vivo microscopy and for the computation of different clinical features of endothelial cells. This research also aims to improve the visual quality of the images by reducing unwanted degradations and enhancing poor contrast. Such improvements can be achieved using image enhancement methods. A pre-processing scheme is proposed in this research to obtain decent image quality that highlights the cell borders. Furthermore, a segmentation method is proposed to achieve accurate and precise cell segmentation. All these methods serve the purpose of clinical feature extraction, which will be used by experts for better diagnosis of the medical condition of the endothelium layer. The main contributions of this study are as follows:
• A fully automatic, robust, and real-time model for the segmentation of endothelial cells of the human cornea obtained by in vivo microscopy and the computation of different clinical features of endothelial cells.
• A pre-processing scheme to obtain decent image quality that highlights the cell borders, together with a segmentation method that achieves accurate and precise cell segmentation.
• Improvement of the visual quality of the images by reducing unwanted degradations and enhancing poor contrast.
The rest of this study is organised as follows. In Section 2, related work on corneal endothelium enhancement and segmentation is presented. The materials and methods that are used in this study such as image dataset and segmentation techniques are discussed in Section 3. Section 4 provides the results of corneal endothelium segmentation. Finally, the conclusion and future work are presented in Section 5.

Related work
Much of the current research on cell segmentation attempts to come up with a fully automatic model that handles both cell detection and image quality, which is difficult because of image intensity variations and the large number of ROIs. An example can be found in the earlier study of Nadachi and Nunokawa [7], who used morphological thinning and scissoring to extract the medical features; lost boundaries were then edited manually. In ref. [8], a histogram was derived by calculating the cell size and the number of neighbours for every cell, giving quantitative information pertaining to corneal health; it was obtained by using a dome extractor to mark cell edges and applying marker-driven watershed segmentation to obtain binary images. Both methods were semi-automatic and needed manual editing to complete the segmentation. To deal with this challenge, refs [9] and [10] proposed constraining the watershed segmentation through a distance map. A slightly contrasting method was suggested by Bullet et al. [11], who computed watersheds on that map and divided the fused cells by using Voronoi diagrams. Nevertheless, as can be seen in ref. [12], these methods are sensitive to the parameter settings and therefore require tuning before optimal results are obtained. Selig et al. [13] proposed using the stochastic watershed so as to avoid user interaction, parameter changes, and empirical settings. Dagher and El Tom [14] made use of watershed contours to initialise multiple balloon snakes. A comparable method was suggested by Charłampowicz et al. [15], in which several active contours evolve from circular sections derived through thresholding. Foracchia and Ruggeri [16] and Ruggeri et al.
[17] have taken advantage of shape modelling using prior knowledge incorporated into a Bayesian analysis framework [18]. Related approaches use neural networks to classify pixels within the cell body, mark every pixel as cell vertex or cell body using support vector machines [19], or grow a number of vertices into regular hexagons along the cell boundaries using a genetic algorithm [20]. The present work seeks to develop an accurate, reliable, and fully automatic model capable of segmenting the endothelium cells. It also tackles significant issues that pose severe challenges to this goal, namely the artefacts that the microscopy may produce during acquisition, including noise, blur, and uneven illumination, especially at the border of the image, owing to the nature of the corneal endothelium layer, the capture mechanism, and the reflection of light.
Most studies enhance the images before the segmentation phase using a sophisticated pre-processing tier and scheme, which significantly influences segmentation accuracy; in some models, post-processing was also required. One dedicated pre-processing model was introduced by Khan et al. [21], which applies a bandpass filter to the input image: non-uniform illumination, carried by the low-frequency content, is dealt with by the lower sub-band, while, as Sharif et al. [22] noted, high-frequency noise is taken care of by the upper sub-band of the bandpass. A review of the state of the art shows that almost every study concerning endothelial cell segmentation consists of pre-processing followed by binarisation just before the segmentation phase. Many researchers have utilised various post-processing methods to overcome unwanted segmentation results, such as over-segmentation, under-segmentation, or disconnected cell boundaries, which affect feature analysis and extraction; see, for example, the study in [23].
Furthermore, this is done by applying biomarker estimation from edge images, using Fourier analysis, and finally using the characteristic standard deviation. Image enhancement was performed by the authors to improve the cell edges, because specific artefacts in images obtained by the confocal or specular microscopy used to acquire medical and biomedical data lead to more profound segmentation issues. These include the divergent associated noises, such as Poissonian, Rician, speckle, and Gaussian noise [24]. Gaussian noise is one of the most common noises encountered in such images, and Poisson noise is found in confocal microscopy as a result of the complex appearance of the cells. Subsequently, Sheppard et al. [25] presented the sources of noise as end results of the pinhole size, the form of detection, and the signal-to-noise ratio of the imaging data. Also, Choraś [26] reported that another artefact affecting image quality is uneven illumination due to light; the problem of brightness distribution was treated by adjusting the brightness levels in rows and columns. Further methods and techniques used to address such artefacts, combined with noise and dark image edges, are presented in Section 2.

Dataset used
Images of corneal endothelium from 30 eyes stained with alizarine were acquired with an optical microscope and saved as grey-level digital images together with the corresponding manually segmented images [17]: 30 images of corneal endothelium and the corresponding 30 manually segmented images. The acquisition instrument was an inverse phase contrast microscope (CK 40, Olympus) at 200× magnification with an analogue camera (SSC-DC50AP, Sony); the mean area assessed per cornea was 0.54 ± 0.07 mm² (range 0.31-0.64 mm²). The image format was JPEG-compressed, 768 × 576 pixel monochrome digital images. The source of this dataset was the Department of Ophthalmology, Charité-Universitätsmedizin Berlin, Campus Virchow-Klinikum, Berlin, Germany (Figure 2).

Pre-processing
The first stage focuses on enhancing the brightness of the image, reducing the intensity of darker areas, removing noise while highlighting the ROI border, and finally smoothing the image using contrast balancing. The Contrast-Limited Adaptive Histogram Equalisation (CLAHE) technique, as depicted in Figure 3, is the first-stage processing technique required for enhancing the input image of the endothelial cells. After that, a new image denoising technique called Wavelet Transform Filter and Butterworth Bandpass for Segmentation (WTBBS) is used. Subsequently, brightness level correction is carried out using the moving average filter and CLAHE to reduce the effects of the non-uniform image lighting remaining after the previous step.

CLAHE technique
This technique is designed to operate by splitting the input image into multiple non-overlapping elements of equal size. In order to make adequate statistical estimates for images having N × M pixels, the image is divided into 36 blocks using six divisions both horizontally and vertically. The block creation process is depicted in Figure 4, where the output blocks comprise regions designated into three groups. The first group comprises four regions, which are the corner region (CR) of the image. The next group comprises 16 regions referred to as the border region (BR) class. Of all the regions lying on the border of the image, the corners are the only excluded regions.
The 16 remaining regions comprise the third group and are known as inner region (IR). The approach begins by determining the histogram for every region. Subsequently, the required contrast expansion is factored in, and the clipping threshold for the histogram is determined, after which, the histogram is rearranged so that the height does not violate the clipping threshold. Finally, the resulting contrast-limited histograms are processed using the cumulative distribution functions to obtain a greyscale mapping. The CLAHE technique is based on examining the four closest regions to the pixel of interest and using a linear combination of the mapping output. For the regions pertaining to the IR group, the abovementioned process is relatively straightforward. Nevertheless, the case is different for BR and CR groups because they require additional attention.
The histogram for every region is not complicated to compute: for every greyscale in the region, the number of pixels with that greyscale is counted. The histogram is the collection of all greyscale counts, as shown in Figure 5, and provides a rough estimate of the greyscale density function. To achieve histogram equalisation, the CDF estimate is used. If the number of pixels and greyscales in every region is M and N, respectively, and if h_{i,j}(n), for n = 0, 1, 2, …, N − 1, is the histogram for region (i, j), then the corresponding CDF estimate, appropriately scaled by (N − 1) for greyscale mapping, is as follows: f_{i,j}(n) = ((N − 1)/M) · Σ_{k=0}^{n} h_{i,j}(k), n = 0, 1, 2, …, N − 1. (1) The given greyscale density function can be approximated to a uniform density function using this mapping. This process of function conversion is known as histogram equalisation. It is limited by the maximum permitted increase in region contrast: the contrast can be reduced to the desired level by limiting the maximum slope of equation (1). The clip limit β sets the maximum slope and is used for all histograms. The relation between the clip limit and the clip factor α, expressed as a percentage, is as follows: β = (M/N)(1 + (α/100)(S_max − 1)). (2) In this case, for a clip factor of zero (α = 0), the clip limit is equal to M/N, leading to a uniform distribution mapping of the regional pixels to the possible greyscale levels; the pixel values then do not change. The maximum clip limit, reached at α = 100, is (M/N)·S_max. This condition restricts the slope to a permissible upper limit S_max, which typically has a value of four in the case of still images. Nevertheless, it is recommended that for any other application the appropriate choice of S_max be obtained experimentally.
A change in the clip factor between 0 and 100 corresponds to a change in the maximum slope of the mapping in the range 1 to S_max. The required threshold for image contrast modification determines how much the original histogram is changed. For every greyscale, the upper limit of the count is restricted to β. Once this threshold is reached, the excess counts are distributed uniformly among the greyscales under the condition that the clip limit is not breached; the distribution is such that the count for any greyscale never exceeds β. Several iterations may be required for every histogram, and the number of iterations typically increases as the clip factor percentage is reduced. The greyscale mapping for every region is then obtained by applying equation (1) to its modified histogram. The mapping of the regions in the IR group is done quadrant-wise based on the characteristics of the four nearest regions. Figure 6 depicts how the CLAHE algorithm processes the original image. It can be seen that the image contrast is enhanced and the cell borders (ROI) are more precise. The output of this algorithm is used as input for the next stage.
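As a concrete illustration, the histogram clipping-and-redistribution step and the scaled-CDF mapping of equation (1) can be sketched in NumPy. This is a simplified, single-region sketch, not the full region-wise CLAHE with bilinear blending between the CR, BR, and IR groups; the function names and the redistribution loop are illustrative assumptions.

```python
import numpy as np

def clip_histogram(hist, n_pixels, n_greys, alpha, s_max=4):
    """Clip a single regional histogram at the limit
    beta = (M/N) * (1 + (alpha/100) * (S_max - 1)) and redistribute the
    excess counts uniformly, never letting a bin exceed beta."""
    beta = (n_pixels / n_greys) * (1 + (alpha / 100.0) * (s_max - 1))
    hist = hist.astype(float)
    excess = np.maximum(hist - beta, 0).sum()
    hist = np.minimum(hist, beta)
    while excess > 1e-9:
        room = beta - hist
        open_bins = room > 0
        if not open_bins.any():
            break  # every bin is at the clip limit; stop redistributing
        add = np.minimum(room, excess / open_bins.sum())
        hist += add
        excess -= add.sum()
    return hist

def clahe_mapping(hist, n_pixels, n_greys):
    """Greyscale mapping of equation (1): the cumulative histogram
    scaled by (N - 1) / M."""
    return (n_greys - 1) * np.cumsum(hist) / n_pixels
```

Because excess counts are re-spread uniformly, the total pixel count of the histogram is preserved, and the resulting mapping is monotone with a slope bounded by the clip limit.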

New denoising scheme based on Enhanced WTBBS
For noise reduction while the edges of the cells are highlighted, the output of CLAHE is used as input for the WTBBS scheme, which consists of an enhanced version of the classical wavelet transform based on the frequency domain. An input signal is represented as a set of orthonormal wavelets derived from a single function known as the mother wavelet; this representation is created using repeated translations and dilations. The wavelet transform involves the use of filters. The prominent and typically used filters include the Daubechies (DB) family, which comprises several members such as D2 (also known as the Haar filter), D4, D6, and D8, with lengths of 2, 4, 6, and 8, respectively.
When the scaling and wavelet functions of this method are applied, the image is divided into four different frequency bands. In the first step, the information contained in the image is separated into four sub-bands, named LL (low-low), LH (low-high), HL (high-low), and HH (high-high). The details of the image are presented on a different plane for every sub-band: the LL sub-band contains the low-frequency approximation, the LH sub-band the vertical details, the HL sub-band the horizontal details, and the HH sub-band the diagonal details. Subsequently, enhancement using the WT method includes the elimination of high frequencies based on a threshold value for the sub-bands that contain high-frequency information. The Butterworth bandpass filter is used for the LL sub-band, while the threshold regulates the partial elimination of the high frequencies that create noise in the image. Hence, threshold determination is considered the most crucial stage of the proposed method, since the threshold controls vital information concerning the cell bodies and borders. An inappropriate threshold could lead to incorrect segmentation in the subsequent process: if the threshold is too small, noise remains; if it is too large, crucial regions are eliminated from the image, causing blurring. Hence, the threshold must be selected accurately, considering both hard and soft thresholding; it should be noted that a soft threshold provides better performance than a hard one (Figures 7 and 8). The primary objective of the BayesShrink technique is to minimise the Bayesian risk. This technique utilises a soft threshold and is sub-band dependent, meaning that the threshold operator is applied at each resolution level of the WT decomposition. This threshold is considered smoothness adaptive.
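The one-level decomposition into LL, LH, HL, and HH sub-bands can be sketched with the Haar (D2) filter in NumPy. This is an illustrative orthonormal implementation; the sub-band naming convention (which axis is filtered first) is an assumption of the sketch, and a DB filter of greater length would replace the two-tap averages and differences below.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet decomposition of an image with even
    dimensions into the LL, LH, HL, and HH sub-bands (orthonormal)."""
    img = np.asarray(img, dtype=float)
    # low-pass (average) and high-pass (difference) along the columns
    L = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    H = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # repeat along the rows to obtain the four sub-bands
    LL = (L[0::2, :] + L[1::2, :]) / np.sqrt(2)   # approximation
    LH = (L[0::2, :] - L[1::2, :]) / np.sqrt(2)   # vertical detail
    HL = (H[0::2, :] + H[1::2, :]) / np.sqrt(2)   # horizontal detail
    HH = (H[0::2, :] - H[1::2, :]) / np.sqrt(2)   # diagonal detail
    return LL, LH, HL, HH
```

Because the transform is orthonormal, the total energy of the image is preserved across the four sub-bands, which is what makes sub-band-wise noise estimation (next subsection) meaningful.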
The Bayes threshold t_B is defined as t_B = σ²/σ_s, (3) where σ² represents the noise variance and σ_s² represents the variance of the signal without noise. The noise variance σ² is estimated from the HH sub-band using the median estimator: σ̂ = median(|Y_{i,j}|)/0.6745, Y_{i,j} ∈ HH. (4)
Since the signal and the noise are independent of each other, the variance σ_w² of the observed wavelet coefficients can be expressed as σ_w² = σ_s² + σ², with σ_w² = (1/n²) Σ_{i,j=1}^{n} w_{i,j}², (5) so the signal standard deviation σ_s can be calculated as σ_s = √(max(σ_w² − σ², 0)). (6)
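The BayesShrink computation described by equations (3), (4), and (6) can be sketched as follows. The handling of the degenerate case σ_s = 0 (returning the largest coefficient magnitude so the whole sub-band is shrunk to zero) and the function names are assumptions of this sketch.

```python
import numpy as np

def bayes_shrink_threshold(coeffs, hh):
    """BayesShrink threshold t_B = sigma^2 / sigma_s for one detail sub-band.
    coeffs: coefficients of the sub-band to be thresholded;
    hh: diagonal (HH) sub-band used for the noise estimate."""
    sigma = np.median(np.abs(hh)) / 0.6745            # median estimator
    sigma2 = sigma ** 2                               # noise variance
    sigma_w2 = np.mean(np.asarray(coeffs, float) ** 2)  # observed variance
    sigma_s = np.sqrt(max(sigma_w2 - sigma2, 0.0))    # signal std. dev.
    if sigma_s == 0.0:
        return np.abs(coeffs).max()  # pure noise: shrink everything to zero
    return sigma2 / sigma_s

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink coefficient magnitudes towards zero by t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Applied to the HH, HL, and LH sub-bands, this zeroes small (noise-dominated) coefficients while shrinking the rest, which is the behaviour described above for the soft threshold.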
Using σ² and σ_s², the Bayes threshold is calculated from equation (3). As shown in Figure 9, the soft threshold is applied to three sub-bands, HH, HL, and LH, to reduce noise and remove unwanted content unrelated to the cell edges. To highlight the cell border and make the structure of the cells clearer, the Butterworth bandpass filter is applied only to the LL sub-band. The mathematical derivation of this filter requires the multiplication of the high- and low-pass filter transfer functions, where the cut-off frequency threshold of the low-pass part is higher than that of the high-pass part: H(u, v) = [1/(1 + (D_H/D(u, v))^{2n})] · [1/(1 + (D(u, v)/D_L)^{2n})], where D_H and D_L denote the frequency thresholds for the high- and low-pass parts, respectively; n denotes the filter order, while D(u, v) represents the distance of the point (u, v) from the origin of the frequency plane. The Butterworth filter has a smooth transfer function that lacks discontinuity or a precisely defined frequency cut-off. The filter order is a significant determinant of the frequency range allowed by the filter. During the selection of n, a trade-off needs to be made between the requirements of the spatial domain (quicker decay) and the frequency domain (sharper cut-off). As specified previously, the output of CLAHE has been used as an input for the WT. Figure 10 depicts the application of the enhanced WT filter on the CLAHE images to make the objects clear.
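The Butterworth bandpass transfer function can be sketched in the frequency domain with NumPy as the product of a high-pass and a low-pass term. The grid centring on the DC term and the tiny offset used to avoid division by zero there are implementation assumptions of this sketch.

```python
import numpy as np

def butterworth_bandpass(shape, d_hp, d_lp, order=2):
    """Butterworth bandpass transfer function H(u, v) for a frequency grid
    of the given shape: high-pass cut-off d_hp times low-pass cut-off
    d_lp (with d_lp > d_hp), centred on the DC term."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    V, U = np.meshgrid(v, u)
    d = np.sqrt(U ** 2 + V ** 2)
    d = np.maximum(d, 1e-9)  # avoid division by zero at the DC term
    h_hp = 1.0 / (1.0 + (d_hp / d) ** (2 * order))   # high-pass factor
    h_lp = 1.0 / (1.0 + (d / d_lp) ** (2 * order))   # low-pass factor
    return h_hp * h_lp

def apply_filter(img, H):
    """Filter an image by pointwise multiplication in the frequency domain."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```

Frequencies between the two cut-offs pass nearly unattenuated, while the DC term (uniform illumination) and the highest frequencies (fine noise) are suppressed smoothly, without ringing from a hard cut-off.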

Moving average filter
Moving average is an algebraic expression that defines an operation performed on image neighbourhoods under conditions defined by a geometric rule. For an image f filtered with a window B that gathers the grey-level information according to the geometry of the window, the moving-average processed image G is specified as G(x, y) = AVE{ f(x′, y′) : (x′, y′) ∈ B(x, y) }, where the operation AVE computes the sample average. Hence, local averages are computed over local image neighbourhoods, which leads to superior smoothing. This process uses the image output by the Butterworth bandpass as input. It begins by applying the moving-average technique, followed by the CLAHE filter, to produce the final processed image with border highlighting of the ROI. The proposed technique improves cell visibility by reducing the overlap between the cells, thereby making the image clearer for the subsequent segmentation stage (Figure 11).
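A direct sketch of the moving-average operation, assuming a square k × k window B and edge-replicated borders (border handling is not specified in the text):

```python
import numpy as np

def moving_average(img, k=3):
    """Moving-average filter: each output pixel is the sample average of
    the k x k window B centred on it (edge-replicated borders)."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    # accumulate the k*k shifted copies of the image, then divide
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)
```

A constant image passes through unchanged, while an isolated bright pixel is spread over its k × k neighbourhood, which is exactly the smoothing behaviour described above.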

Trainable segmentation and distance transform (TDWS) to enhance the watershed transform for cell segmentation
The moving average filter produces an enhanced image, but these images still contain a number of overlapping cells. This warrants the use of the watershed transform to segment and reduce the overlap between the cells. Two overlapping objects can be determined if their shapes can be differentiated; therefore, it is essential to separate all overlapping objects. For accurate estimation of the borders of the overlapping objects, the proposed technique includes several processing steps, beginning with segmentation, followed by Euclidean distance transformation, H-minima, and, finally, the watershed transform. Overlapping of objects can be avoided if the borders are estimated using segmentation after a first step that detects the aggregates. Detection of aggregates and segmentation are conducted using the following commonly used process:
1. Input the filtered test image.
2. Binarise the greyscale filtered image to get the initial segmented image.
3. Apply the distance transformation to all the binary images.
4. Determine the minimum values for every object and use them as the seeds for the watershed transform (by using the HMIN marker technique).
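The distance-transform and watershed steps of the process above can be sketched with SciPy. Here scipy.ndimage.watershed_ift stands in for the watershed transform, and the peak-detection window size and background-marker handling are assumptions of this sketch, not the exact TDWS implementation.

```python
import numpy as np
from scipy import ndimage as ndi

def separate_overlapping_cells(binary):
    """Separate touching objects: Euclidean distance transform, seeds at
    its local maxima (minima of the complement), then marker-driven
    watershed on the complemented distance map."""
    dist = ndi.distance_transform_edt(binary)
    # seeds: local maxima of the distance map inside the foreground
    peaks = (dist == ndi.maximum_filter(dist, size=7)) & binary
    markers, n_seeds = ndi.label(peaks)
    markers[~binary] = n_seeds + 1                 # background marker
    inv = np.round(dist.max() - dist).astype(np.uint16)  # seeds become minima
    labels = ndi.watershed_ift(inv, markers.astype(np.int16))
    labels[~binary] = 0                            # keep foreground labels only
    return labels

# two overlapping discs, as in the coarse segmentation of Figure 13
yy, xx = np.mgrid[0:40, 0:56]
blob = ((yy - 20) ** 2 + (xx - 20) ** 2 <= 100) | \
       ((yy - 20) ** 2 + (xx - 34) ** 2 <= 100)
labels = separate_overlapping_cells(blob)
```

On the two-disc example, the two disc centres receive distinct labels, i.e. the watershed splits the merged blob at the neck between the objects.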

Initial segmentation
In this study, owing to the significant difference in the intensity values of the images, a new hybrid technique based on fractal dimension [27] is proposed for the segmentation and extraction of the poorly defined cell borders. The literature indicates that thresholding has been used for binarising such images because of its simplicity compared with other segmentation techniques, and it may be more efficient, especially for specific applications. Frequently, a single threshold value is used for segmenting the image into borders and objects; however, multiple thresholds are also commonly used. Additionally, thresholds are categorised as global or local depending on the local features of the different parts of the image. This study makes clear that thresholding alone is not sufficient to achieve image segmentation when some parts of the object are under- or over-segmented. Therefore, a method is suggested for identifying the cells through the use of fractal dimension features, the primary aspect being the capture of the cell borders. The main objective of the initial segmentation is to demarcate the cells from the other parts of the image. The first step comprises model training, where numerous images are used to train the model. Several samples are chosen from every image, using fixed-size (11 × 11) regions from inside the cell (Class 1) and on the cell border (Class 2). A semi-automatic mechanism is implemented to generate the samples.
The fractal dimension was used for every window of each class rather than using the pixel intensities. The training of a KNN classifier was conducted using the labelled features obtained from the samples (Figure 12, training phase). For every block, three fractal dimension values have been calculated. Every FD is used as an input for the KNN classifier. When training is finished, the testing set is used to feed images for segmentation (testing phase is depicted in Figure 12). The blocks are scanned at a pixel level, where, for every pixel, a fixed size square region is constructed using the pixel as the centre. Subsequently, three FD features comprising the region are isolated and classified into two classes using the trained KNN classifiers, which are designated "inside cell" or "cell border." After pixel labelling is completed, the "inside cell" region is considered as the segmented cell.
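The KNN classification of the three-value FD feature vectors can be sketched with a minimal nearest-neighbour voter; the value of k and the Euclidean distance metric are assumptions here, as the study does not state them.

```python
import numpy as np

def knn_predict(train_feats, train_labels, feats, k=3):
    """Minimal k-nearest-neighbour classifier: label each query feature
    vector (e.g. the three FD values of a block) by majority vote among
    its k nearest training samples, using Euclidean distance."""
    preds = []
    for f in feats:
        d = np.linalg.norm(train_feats - f, axis=1)  # distances to training set
        nearest = train_labels[np.argsort(d)[:k]]    # labels of k nearest
        preds.append(np.bincount(nearest).argmax())  # majority vote
    return np.array(preds)
```

In the testing phase described above, each pixel's surrounding block would yield one such feature vector, and the prediction (0 for "inside cell", 1 for "cell border") labels that pixel.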
The feature descriptor comprises fractal dimension values that facilitate object identification in digital images. A region cannot be described by a single FD alone; thus, three threshold values (the median value threshold, the mean value threshold, and Otsu's threshold) are used to binarise the block and calculate three FD values. The FD feature for every binary block is computed as a fractal dimension. The complexity of the shape or texture of an object can be determined by measuring fractal dimensions in the context of the image analysis paradigm. Fractal geometry defines the fractal dimension through several approaches; one of the basic ones is the Hausdorff dimension. For an object with Euclidean dimension E, the Hausdorff fractal dimension D_0 is calculated using the following equation: D_0 = lim_{ε→0} log N(ε)/log(1/ε), where N(ε) is the number of boxes of side length ε required to cover the object.
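A box-counting estimate of D_0, obtained by fitting log N(ε) against log(1/ε) over a range of box sizes, can be sketched as follows. Restricting the block side to a power of two (rather than the 11 × 11 samples used in the model, which would first need padding or resizing) is a simplification of this sketch.

```python
import numpy as np

def box_counting_dimension(block):
    """Estimate the box-counting (Hausdorff) dimension D0 of a square
    binary block whose side is a power of two, by fitting the slope of
    log N(eps) versus log(1/eps)."""
    block = np.asarray(block, dtype=bool)
    n = block.shape[0]
    sizes, counts = [], []
    eps = n // 2
    while eps >= 1:
        # count boxes of side eps containing at least one foreground pixel
        boxes = block.reshape(n // eps, eps, n // eps, eps).any(axis=(1, 3))
        sizes.append(eps)
        counts.append(boxes.sum())
        eps //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

As a sanity check, a completely filled block has dimension 2 and a one-pixel-wide line has dimension 1, so cell-border blocks (thin curvilinear structures) yield FD values between these extremes, which is what makes the three FDs discriminative features.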

Distance transform and H-minimum marker for the watershed transform
The binary image distance transformation is defined in the following way: for every pixel x in a set A, the distance transform DT_A(x) is the distance from x to the complement of A: DT_A(x) = min{ d(x, y) : y ∈ A^c }. Therefore, the binary image distance transform can be calculated under the assumption that A^c is the group of 1-valued pixels. This leads to the formation of a greyscale image that can be segmented through the use of the watershed transformation. However, the watershed technique is prone to over-segmentation unless appropriate markers are chosen and used. The outcome of a typical distance transformation computation is illustrated in Figure 13. A binary image of two overlapping objects is created from the initial "coarse" segmentation (Figure 13(a)); however, the overlapping objects are not distinguished by this coarse segmentation. Consequently, this binary image can be generated by using trainable neural networks. After the distance transform is used to generate the greyscale image shown in Figure 13(c), the original binary images are no longer required (Figure 13(b)). Referring to Figure 13(b), the maxima correspond to black areas far from the white background; notwithstanding this, the figure morphology can contain numerous local maxima. The greyscale image in Figure 13(c) is complemented to generate the image depicted in Figure 13(d), which has a white background, and the previous maxima now appear as minima. This operation acts as an internal distance transformation. Subsequently, the watershed transform is applied to the image depicted in Figure 13(d) so that the overlapping objects can be separated. The region that requires segmentation can be complemented outside the aggregate to identify sufficient external markers. When over-segmentation occurs, spurious minima are created, thereby necessitating the creation of suitable markers.
Grouping the inner-marking processes into a single procedure allows several greyscale morphological functions to be used, so that the adverse effects of spurious minima can be suppressed before the watershed technique is applied. The morphological functions are specified as follows:
• Minima are imposed at precise locations by inserting a −∞ value at each specified position, which removes the local minima corresponding to all other areas of the greyscale image.
• Placing these imposed minima at the appropriate places leads to the creation of useful markers.
• The H-minima transform can be applied to the output of the inner distance transform so that all minima whose depth is less than a specified positive value are removed. This suppresses the minima whose depth falls below the size threshold, allowing the remaining regional minima to be used as effective markers.
This process can be implemented using a morphological geodesic reconstruction by erosion of the surface of image f, i.e. HMIN_h(f) = Rε_f(f + h): the surface is raised by the threshold h and reconstructed under the constraint of f, with the structuring element D describing the connectivity (Soille, 2013).
Although the process is simple to describe, determining an appropriate H-minima threshold and an appropriate size for the transformation is challenging. An unsuitable threshold can cause markers to be lost or spurious minima to be retained; likewise, nearby local minima may merge, counteracting the separation of the markers. Evaluation indicates that a crucial prerequisite is that spurious minima and those belonging to merged masses be discarded, while keeping separate the two minima located at the approximate midpoints of the overlapping objects to be segmented. In this study, the fast immersion-based watershed transformation is applied to the output of the gradient of the distance transform so that an initial separation of overlapping objects can be achieved (Figure 14).
To determine the clinical features, the objects must be clearly delineated. For this purpose, the cell borders are drawn and made more precise for the next stage (calculating the clinical features). The Voronoi diagram is used for the final segmentation to identify the borders and make them clearer; it is applied to the output of the watershed transformation (the segmented image).
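One way this refinement can be sketched is to seed a Voronoi diagram with the centroids of the watershed regions, so that the straight Voronoi edges redraw the cell borders. The centroid coordinates below are hypothetical stand-ins for a real segmented image.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical centroids (row, col) of five watershed-segmented cells.
centroids = np.array([[10.0, 10.0],
                      [10.0, 45.0],
                      [40.0, 12.0],
                      [42.0, 44.0],
                      [25.0, 28.0]])

# Voronoi expects (x, y), so swap the axes; each region's edges then
# approximate the border of the corresponding cell.
vor = Voronoi(centroids[:, ::-1])
```

Rasterising `vor.ridge_vertices` back onto the image grid would yield the one-pixel-wide border map used for the feature-measurement stage.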

Medical feature measurement
As mentioned before, the main objective of the proposed system is to determine the clinical features. The previous stages produce a segmented image with clear cell borders. This stage extracts the clinically valuable features from the segmented endothelial cell images using objective, automatic schemes. The intention is to describe the health status of the endothelial cells using pleomorphism, mean cell density MCD (cells/mm2), polymegathism, mean cell perimeter MCP (µm), and mean cell area MCA (µm2). The morphological feature extraction by the proposed Trainable Model for segmenting the endothelial cells is reported in Figure 15. The low quality of these images makes it challenging to segment the cells accurately and to estimate the morphological features in regions that are extremely dark, blurred, or highly reflective. To address this issue and enhance the clinical analysis, the proposed system requires input from an ophthalmologist, who selects the most visible ROI in the segmented sample. Afterwards, the morphological features of the cropped region are calculated automatically. In this calculation, only the cells intersecting the near binary borders of the frame are included, whereas those intersecting the other borders of the frame are excluded. When the entire image is used, all the outermost cells are excluded from the statistical computation.
The objective of this step is to prevent the inadvertent segmentation of cells lying on the edges of the input sample.
Mean cell density (MCD) is defined as the number of endothelial cells (C number) in the cropped ROI (or the complete image) divided by the total area (A) of the cropped ROI (or the complete image), as specified below:

MCD = C number / A
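The five clinical features can be computed from per-cell measurements as sketched below. The numeric values are hypothetical, and the formulas for polymegathism (coefficient of variation of cell area) and pleomorphism (percentage of six-sided cells) follow the definitions commonly used for corneal endothelium, which the article does not spell out.

```python
import numpy as np

# Hypothetical per-cell measurements for a cropped ROI.
cell_areas = np.array([380.0, 410.0, 395.0, 420.0, 405.0])   # um^2
cell_perims = np.array([70.0, 74.0, 72.0, 75.0, 73.0])       # um
cell_sides = np.array([6, 6, 5, 6, 7])                       # polygon sides

roi_area_mm2 = cell_areas.sum() / 1e6        # um^2 -> mm^2 (cells tile the ROI)

mcd = len(cell_areas) / roi_area_mm2         # mean cell density (cells/mm^2)
mca = cell_areas.mean()                      # mean cell area, MCA (um^2)
mcp = cell_perims.mean()                     # mean cell perimeter, MCP (um)
polymegathism = cell_areas.std() / cell_areas.mean() * 100   # CV of area, %
pleomorphism = np.mean(cell_sides == 6) * 100                # % hexagonal cells
```

In the proposed system these per-cell values would come from the labelled regions of the final Voronoi-refined segmentation rather than from hand-entered arrays.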
Segmentation assessment results
Figure 15: Medical feature extraction (pre-processing step, segmentation step, and features-extraction step of TDWS).

The proposed segmentation model has been explained in detail. To evaluate it, we measured the closeness and the correlation between the manual features determined by the expert and the automatic features extracted by the proposed model. Linear regression was used to measure correlation, which evaluates the effectiveness of the proposed segmentation algorithm; the automatic measurements of the endothelial cell features, i.e. density, cell area, CellPer, polymegathism, and pleomorphism, are compared with the equivalent manual measurements taken by a domain expert and provided in the ground-truth document. The features were evaluated using several regression statistics to show the consistency between the manual and the automatic measurements through regression and correlation. In addition, the Bland-Altman method, a scatter-plot technique introduced by Bland and Altman (1986) that quantifies the agreement or disagreement between two measurements, is used to measure the closeness between the two solutions.
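The Bland-Altman statistics reduce to the mean difference (bias) and the 95% limits of agreement, bias ± 1.96 times the standard deviation of the paired differences. The paired measurements below are hypothetical, standing in for the expert and automatic values of one feature such as density.

```python
import numpy as np

# Hypothetical paired measurements of one feature (e.g. cell density, cells/mm^2).
manual = np.array([2400.0, 2550.0, 2300.0, 2650.0, 2500.0, 2450.0])
auto = np.array([2380.0, 2580.0, 2290.0, 2610.0, 2530.0, 2470.0])

# Bland-Altman: bias and 95% limits of agreement of the paired differences.
diff = auto - manual
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low = bias - 1.96 * sd
loa_high = bias + 1.96 * sd
```

On the Bland-Altman scatter plot, each pair is drawn at (mean of the pair, difference of the pair); agreement is good when the differences cluster around a bias near zero and stay inside the two limits.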

Automated vs manual measurements
The Watershed Transform was applied to the images, and five clinical features were used to determine the correlation between the automatic measurements and the manual observations. Figure 17 shows the correlation for the cell-area feature of the segmented images obtained by the automatic and manual techniques. Table 1 indicates that R 2 for this feature equals 0.7301 and that the slope of the regression line is approximately 45°. Furthermore, based on the R 2 for cell area, the proposed model measures cell area with higher accuracy than density (Figure 17).
The correlations for CellPer, polymegathism, and pleomorphism are depicted in Figures 18-20, respectively; Table 1 indicates the R 2 for every feature. Figures 21-25 depict the Bland-Altman scatter plots for density, cell area, CellPer, polymegathism, and pleomorphism, and Table 2 contains the mean difference, the confidence limits, and the parameters of the linear fit.

The novel contributions of this work are enumerated and ranked from top to bottom according to their degree of importance. A new scheme of image enhancement was achieved through three novel ideas: (a) a modified CLAHE approach that focuses on the ROI to enhance the contrast of the entire image evenly; (b) a new denoising approach based on the Enhanced WTBBS; and (c) treatment of the images from the previous steps with a moving-average filter combined with CLAHE, in which the moving-average technique is applied first, followed by the CLAHE filter, to produce the final processed image with border highlighting of the ROI. TDWS segments images containing small and overlapping cells; it is therefore essential to separate all overlapping objects for the accurate estimation of their borders.