Feature fusion-based text information mining method for natural scenes

Abstract: As a crucial medium of information dissemination, text holds a pivotal role in a multitude of applications. However, text detection in complex and unstructured environments presents significant challenges, such as cluttered backgrounds, variations in appearance, and uneven lighting conditions. To address this issue, this study proposes a text detection framework that leverages multistage edge detection and contextual information. The framework deviates from traditional approaches by incorporating four primary processing steps: text visual saliency region detection to accentuate the text regions and diminish background interference, multistage edge detection to enhance the conventional stroke width transform results, a texture-based and connected components-based integration to accurately distinguish text from the background, and a context fusion step to recover missing text regions and improve the recall of text detection. The proposed method was evaluated on two widely used benchmark datasets, i.e., the international conference on document analysis and recognition (ICDAR) 2005 dataset and the ICDAR 2011 dataset, and the results indicate the effectiveness of the method.


Introduction
The widespread use of digital image capture devices has resulted in an increased demand for image retrieval and understanding. As a result, text detection has become a critical task in content-based image analysis. In comparison to the background, the text region exhibits distinctive textural characteristics. With machine learning techniques having gained widespread application [1][2][3], the effective extraction of these features and the selection of appropriate classifiers are considered to be the two crucial components in texture-based methods. Some hand-engineered features have been designed to describe text candidates, e.g., T-HOG [4], eHOG [5], HOM [6], and discrete wavelet transform features [7]. In recent years, Liao et al. [8], Liu et al. [9], and Cai et al. [10] have adopted learning methods to obtain the features, avoiding the randomness of hand-engineered features. The feature vectors extracted from each text candidate region are fed into a trained classifier; the text likelihood of the candidate regions is estimated; and the text candidates are distinguished by the trained classifier. For example, Wei and Lin [7] and Ye et al. [11] adopted the support vector machine (SVM) in their work, Hanif and Prevost [12] trained an AdaBoost classifier, Xu and Su [13] adopted a random forest classifier, and Wang et al. [14] and Sun et al. [15] used neural networks. Texture-based methods prove to be effective in addressing complex backgrounds whose texture structures are dissimilar to the text regions. However, these approaches typically rely on a trained classifier that performs a comprehensive scan of the images using multi-scale sliding windows, resulting in thousands of predictions. In addition, training the classifier requires a substantial number of samples, which can be both time-consuming and resource-intensive. As a result, the computational complexity inherent in these methods limits their practical application to large databases.
The implementations of CC-based methods are founded on the premise that characters in text regions exhibit specific attributes, such as approximately constant color, proximate pixel values, and similar stroke widths, among others. These methods segment an image into a collection of CCs, and the ultimate CCs are classified as text or background based on the analysis of their geometric properties. CC-based methods typically comprise two distinct phases, namely, the detection of CCs and the verification of CCs. So far, various approaches have been used to extract the CCs: e.g., Shi et al. [16] and Yin et al. [17] adopted maximally stable extremal regions, Mancas-Thillou and Gosselin [18] used a color clustering method, Shivakumara et al. [19] performed K-means clustering in the Fourier-Laplacian domain, Sun et al. [20] designed color-enhanced contrasting extremal regions, and Epshtein et al. [21] and Xu et al. [22] adopted the Stroke Width Transform (SWT) processing to obtain the CCs. CC-based methods are recognized for their relative speed; however, their efficacy in accurately removing non-text components without prior knowledge of text position and scale is limited. Furthermore, the design of a reliable CC analyzer is a challenging task due to the presence of numerous non-text components and the difficulties that arise when text is noisy, multicolored, and textured.
The central objective of text detection, as a typical pattern recognition problem, is to differentiate texts from their backgrounds. The identification of suitable candidates and their accurate classification are crucial considerations in text detection. Detection accuracy and execution efficiency are the two paramount metrics used to evaluate the efficacy of a text detection method. It is widely acknowledged that the extraction of text candidates through the use of multi-scale sliding windows is computationally intensive, and relying solely on geometrical characteristics for candidate classification can lead to inaccurate results. As a result, the integration of texture-based and CC-based methods may offer a promising solution by capitalizing on their respective strengths.
Because the characters are regions of similar stroke width, the SWT processing proposed by Epshtein et al. [21] has been widely used to obtain the CCs. Epshtein et al. [21] carried out the SWT processing based on the Canny edge map, and the false positives existing in the Canny edge maps undoubtedly weaken the processing results according to the principle of the SWT. Based on the aforementioned discussion, how to obtain an accurate edge map needs to be solved.
The false positives in the results decrease the detection precision, and suppressing the background region helps reduce false alarms effectively. In fact, text is a carrier for interpersonal communication in human society, and it is typically designed in natural scenes to attract attention. Judd et al. [23] showed that scene texts can receive many eye fixations, and Wang et al. [14] also performed psychophysical experiments confirming that text regions in images are salient; both studies evaluate existing visual attention models for text detection. Some methods [5,24,25] use salient regions to detect texts by leveraging intensity, color, orientation, size, and curvature saliency cues. Meanwhile, some work [26,27] proposed a weakly supervised approach for scene text detection, and promising detection performance has been reported. Inspired by these works, a novel method exploiting features of color and luminance is adopted to detect the texts' visual saliency regions in our work.
Traditional methods of character analysis tend to result in incorrect classifications due to their individualized approach. In reality, texts in natural scenes often exhibit a distinct layout structure, frequently appearing in clusters and lines. The proximity of candidates to text regions often leads to their misclassification as texts. The specific spatial distribution of texts provides a compelling reason to incorporate text context information. Thus, in our work, context information is employed to improve classification robustness. Specifically, key regions are formed by the detection of adjacent text regions, followed by the retrieval of missing text regions surrounding the key regions through the implementation of constraint rules.
Drawing upon the previously discussed considerations, this work presents a novel text detection approach for natural scenes that utilizes context information and multistage edge detection. The primary contributions of this study are as follows:
(1) The text regions are emphasized through the use of visual saliency detection to increase visibility.
(2) A multistage edge detection methodology is established to acquire a precise edge map.
(3) The integration of texture-based and CC-based methods is utilized to assess text candidates for optimal results.
(4) A hierarchical identification approach is proposed to minimize texture distortion and improve the robustness of the method.

Our methodology
Based on the fact that the texture-based and CC-based methods are complementary, the two methods are reasonably integrated into our work, and an overview of the proposed method is shown in Figure 1.

Pre-processing
Texts are typically designed in natural scenes to attract attention, and some research [14,23] evaluated existing visual attention models for text detection to show that scene texts are salient. The saliency cues are adopted in our approach to prevent an exhaustive spatial and scale search over all possible regions. The precision of the edge map exerts a profound impact on the SWT outcome, as the SWT operator is linear in the number of edge pixels. In our work, a multistage edge detection method is proposed to obtain an accurate edge map based on the gradient image with the assistance of the visual saliency regions.

Computing saliency
Visual saliency refers to the distinctive perceptual attributes that cause an object to stand out from its surrounding elements, thereby attracting human attention. In our research, we aim to find a technique for obtaining precise edge maps to enhance the results of the SWT. In this study, we adopt the concise and efficient model proposed by Achanta et al. [28] to calculate saliency. The methodology is effective, producing full-resolution saliency maps with clearly defined boundaries.
The saliency map S of the image I can be formulated as follows:

S(x, y) = |I_μ − I_ωhc(x, y)|,        (1)

where I_μ is the arithmetic mean pixel value of the image and I_ωhc is the Gaussian blurred version of the original image, which eliminates fine texture details as well as noise and coding artifacts. The norm of the difference is used to obtain the magnitude of the differences. The features of color and luminance are used in this method, and equation (1) is extended and rewritten as follows:

S(x, y) = ‖I_μ − I_ωhc(x, y)‖,        (2)

where I_μ is the mean image feature vector, I_ωhc(x, y) is the corresponding image pixel vector value in the Gaussian blurred version (using a 3 × 3 separable binomial kernel) of the original image, and ‖·‖ is the L2 norm. Applying the Lab color space, each pixel location is an [L, a, b]^T vector, and the L2 norm is the Euclidean distance.
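The saliency computation above can be sketched as follows. This is a minimal illustration assuming the input is already converted to a Lab float array; the function names are ours, not from the original implementation.

```python
import numpy as np

def binomial_blur(channel):
    # 3 x 3 separable binomial kernel [1, 2, 1] / 4, one pass per axis.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    p = np.pad(channel, 1, mode="edge")
    h = k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]      # horizontal
    return k[0] * h[:-2, :] + k[1] * h[1:-1, :] + k[2] * h[2:, :]   # vertical

def frequency_tuned_saliency(lab):
    # lab: H x W x 3 float array, assumed already converted to CIE Lab.
    mean_vec = lab.reshape(-1, 3).mean(axis=0)                      # I_mu
    blurred = np.stack([binomial_blur(lab[..., c]) for c in range(3)],
                       axis=-1)                                     # I_whc
    return np.linalg.norm(blurred - mean_vec, axis=-1)              # L2 distance
```

A uniform image yields zero saliency everywhere, while a pixel that deviates from the image mean produces a local response.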

Obtaining the accurate edge map by using the multistage edge detection
The Canny edge detection method is characterized by its utilization of two thresholds to identify both strong and weak edges. This method is known to be less susceptible to noise than other edge detection techniques such as the Sobel and Prewitt methods. The Canny method operates by identifying local maxima in the gradient of an image, thereby detecting edges. However, this approach may also detect non-text edges, and efforts to minimize such false detections by adjusting the segment threshold may unfortunately result in the loss of true text edges. Fortunately, the characters have obvious differences from the backgrounds in text regions. The strokes of a character have similar, nearly fixed widths, and the gradient distribution of the character has obvious regularity (as shown in Figure 2). We can obtain the accurate text edges 1 in the image while dismissing noisy and foliage edges with the following steps. First, we strengthen the regular boundary and weaken the irregular one, which is produced mainly by non-text targets (i.e., calculating the synthesis gradient value of every pixel over its 3 × 3 neighborhood), and then the edge filter mask is obtained by using binarization processing and morphological processing. At last, the accurate edge map is obtained using the following equation:

E(i, j) = C(i, j) × F(i, j),  1 ≤ i ≤ M, 1 ≤ j ≤ N,        (3)

where E(i, j) is the pixel value of the accurate edge map at position (i, j), C(i, j) and F(i, j) are the pixel values of the original Canny edge map and the edge filter mask at position (i, j), and M and N are the number of rows and columns of the gray image, respectively.
 1 It should be pointed out that we can qualitatively analyze the accurate edge map, but quantitative analysis is difficult because manually annotating text edge images is impractical. In order to quantitatively evaluate the accurate edge map obtained in our work, we adopted the SWT results and text detection results to verify its effectiveness. The evaluation protocol and results are introduced in Section 3.3.
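The masking steps above can be sketched as follows. This is a minimal illustration in which the morphological clean-up of the mask is omitted, and the function names and threshold are our own assumptions.

```python
import numpy as np

def edge_filter_mask(grad_mag, thresh):
    # Synthesis gradient: sum of gradient magnitudes over each 3 x 3
    # neighbourhood, then binarised into a filter mask.  A fuller
    # implementation would also apply morphological processing here.
    h, w = grad_mag.shape
    p = np.pad(grad_mag, 1, mode="edge")
    synth = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))
    return (synth >= thresh).astype(np.uint8)

def accurate_edge_map(canny_edges, mask):
    # Keep only the Canny edge pixels that survive the filter mask,
    # i.e. the per-pixel product of the edge map and the mask.
    return canny_edges * mask
```

Edge pixels in flat, low-gradient areas are suppressed, while edges supported by strong local gradients survive into the accurate edge map.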

Text candidates generation processing
In text regions, the stroke width of characters is typically consistent. The SWT performs per-pixel computations to estimate the most probable stroke width, resulting in an output image that maintains the same dimensions as the input image. In the SWT image, each element contains the width of the stroke associated with the pixel. The SWT operator is linear in the number of edge pixels, and an accurate edge map can significantly improve the SWT results. To demonstrate this effect, we obtain accurate edges by using the multistage edge detection method proposed in our work. The SWT images are generated using both the original edge map and the accurate edge map as guides, and the results are shown in Figure 3. It can be observed that the SWT image guided by the accurate edge map yields better results in our work.
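A simplified sketch of the SWT idea: from each edge pixel, walk along the gradient direction until an edge pixel with a roughly opposite gradient is met, and record the ray length as the stroke width for every pixel on the ray. This follows the general principle of Epshtein et al. [21] but omits their refinements (median filtering of ray values, angle tolerance on the opposing gradient); all names are ours.

```python
import numpy as np

def simple_swt(edges, grad_x, grad_y, max_width=50):
    # edges: boolean edge map; grad_x / grad_y: image gradients.
    h, w = edges.shape
    swt = np.full((h, w), np.inf)
    for y, x in zip(*np.nonzero(edges)):
        gx, gy = grad_x[y, x], grad_y[y, x]
        norm = np.hypot(gx, gy)
        if norm == 0:
            continue
        dx, dy = gx / norm, gy / norm
        ray = [(y, x)]
        for step in range(1, max_width):
            cy = int(round(y + dy * step))
            cx = int(round(x + dx * step))
            if not (0 <= cy < h and 0 <= cx < w):
                break
            ray.append((cy, cx))
            if edges[cy, cx]:
                # Require a roughly opposing gradient on the far side
                # of the stroke before accepting the ray.
                if grad_x[cy, cx] * gx + grad_y[cy, cx] * gy < 0:
                    width = np.hypot(cy - y, cx - x)
                    for py, px in ray:
                        swt[py, px] = min(swt[py, px], width)
                break
    return swt
```

On a synthetic vertical stroke whose two edges carry opposing horizontal gradients, every pixel between the edges receives the stroke width, while background pixels stay at infinity.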

False positives elimination processing
The SWT generates an output image, where each pixel corresponds to the width of the most probable stroke. While the SWT processing eliminates certain regions that are unlikely to be text (e.g., by setting a maximum stroke width threshold of 350 pixels in our work, pixels with a stroke width exceeding this threshold are discarded), some non-text regions may still persist and hinder subsequent processing.

False positives elimination by heuristic rules
To differentiate between text and non-text regions, the geometric and textural properties of individual CCs are characterized.In this study, we primarily utilized seven types of component features, as listed in Table 1, to eliminate certain non-text regions.
• Stroke width change rate: This feature, defined as the stroke width variance-to-mean ratio, quantifies the extent of stroke width variability. Its purpose is to facilitate the removal of candidate components characterized by significant deviations in stroke width.
• Area and bounding box ratio: This feature, defined as the ratio of the total number of foreground pixels to the area of the external rectangle, is used to remove candidate regions whose foreground pixel density is too small.
• Height size and width size: The characters in images always have a certain range of sizes; this feature is defined to eliminate candidate regions that are too large or too small.
• Containing CCs number: A text component does not contain any other CCs inside it, and we define this feature to eliminate some candidate components.
• Aspect ratio: This feature, defined as the maximum of the ratios of component width to height and height to width, is used to filter out non-text components whose shape is too long or too narrow. Note that adhesion problems may occur between adjacent characters in the horizontal direction, so the aspect ratio constraint of the component is relatively loose; only outliers with a very large aspect ratio are removed under this condition.
• CC area: The number of foreground pixels of a text region always has a certain range, and this feature is defined to remove candidate regions whose foreground pixel number is too large or too small.
• Color range: Green leaves are bothersome objects that often appear in natural scene images. This feature is specially defined to filter out green leaves in the hue, saturation, and value color space.
The parameters in Table 1 are empirically determined based on the international conference on document analysis and recognition (ICDAR) 2005 training dataset, where H_img and W_img are the height and the width of the input images, and Hue_ave and Saturation_ave are the average hue and the average saturation of the candidate region. In order to prove the robustness of the proposed method, the same parameter values in these heuristic rules are used for detecting texts in both the ICDAR 2005 and ICDAR 2011 test datasets.
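A sketch of how such heuristic filtering might be applied to per-component statistics. The field names and all threshold values below are illustrative assumptions, not the values from Table 1.

```python
def passes_heuristics(cc):
    # cc: dict of per-component statistics (hypothetical field names).
    checks = [
        cc["stroke_std"] / cc["stroke_mean"] < 0.5,          # stroke variation
        cc["area"] / cc["bbox_area"] > 0.1,                  # pixel density
        5 <= cc["height"] <= 300,                            # plausible height
        cc["inner_cc_count"] == 0,                           # no nested CCs
        max(cc["width"] / cc["height"],
            cc["height"] / cc["width"]) < 10,                # aspect ratio
    ]
    return all(checks)
```

A component failing any single rule is discarded, which is what makes the rules cheap: most background clutter is rejected before the classifier stage.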

False positives elimination by classifier
The aforementioned heuristics are effective in eliminating certain non-text regions.However, some non-text candidates that possess similar geometric and textural properties to text regions may still persist.To address this issue, we employ an offline trained classifier to distinguish between text and non-text candidates.In natural scene text images, texts often appear in groups or lines, and image binarization processing can result in adhesion between adjacent characters.This adhesion can cause significant variations in the length of candidate regions, making it challenging to use a fixed size to describe them.To mitigate this problem, we present a novel hierarchical identification strategy in our study, aimed at reducing the texture distortion caused by image normalization processing and effectively addressing the variation in length of candidate regions.
• If the aspect ratio of the CC region is between 2 and 4, we recognize the CC region directly.
• If the aspect ratio of a CC region exceeds 4, we intercept it from the left, right, and middle positions, respectively, in order to maintain an aspect ratio of ~3 for each sub-region. A voting-based method is then employed to classify the CC region. If more than one sub-region is identified as text, the entire CC region will be deemed a text region. Otherwise, it will be classified as a non-text region.
• If the aspect ratio of a CC region is below 2, we first determine the number of adjacent CCs with similar heights in the horizontal direction. If the number of adjacent regions is greater than 2, the CC region is immediately classified as text. This decision strategy is based on the observation that text lines typically contain more than three characters, while non-text regions rarely have multiple adjacent regions with similar heights. If the number of adjacent regions is not greater than 2, the CC region is joined with itself until the aspect ratio approaches 3, and the resulting composite region is then evaluated by the trained classifier.
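The three-branch decision strategy above can be sketched as follows, with `classify_window` standing in for the trained classifier applied to a normalized sub-region; the partition labels are purely illustrative.

```python
def classify_cc(aspect_ratio, n_adjacent, classify_window):
    # classify_window: callable returning True for "text" on a
    # normalised patch; the string arguments name hypothetical crops.
    if 2 <= aspect_ratio <= 4:
        return classify_window("whole")
    if aspect_ratio > 4:
        # Vote over left / middle / right crops of aspect ratio ~3;
        # more than one text vote makes the whole region text.
        votes = sum(classify_window(p) for p in ("left", "middle", "right"))
        return votes > 1
    # Aspect ratio below 2: adjacency shortcut, else self-tile to ~3.
    if n_adjacent > 2:
        return True
    return classify_window("self-tiled")
```

The point of the hierarchy is that each patch handed to the classifier has a near-constant aspect ratio, so normalization to a fixed window distorts the texture far less than resizing arbitrarily long regions would.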
The Histograms of Oriented Gradients (HOG) feature is selected, and the SVM classifier is trained offline with this feature. In our research, each detection window is divided into cells of 8 × 8 pixels, each group of 2 × 2 cells is integrated into a block, and each cell consists of nine orientation bins.
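For reference, the descriptor length implied by such a cell/block layout can be computed as below, assuming a block stride of one cell as in the standard HOG formulation. The paper does not state its window size, so the classic 64 × 128 pedestrian window is used here only to check the arithmetic.

```python
def hog_descriptor_length(win_h, win_w, cell=8, block=2, bins=9):
    # Cells of cell x cell pixels; blocks of block x block cells with a
    # stride of one cell; bins orientation bins per cell.
    cells_y, cells_x = win_h // cell, win_w // cell
    blocks_y, blocks_x = cells_y - block + 1, cells_x - block + 1
    return blocks_y * blocks_x * block * block * bins
```

For a 64 × 128 window this gives 15 × 7 blocks of 36 values each, i.e., the familiar 3780-dimensional descriptor.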

Retrieving the missing text regions around the key regions
Despite the potential loss of text regions through prior processing, it is fortunate that text in natural scenes frequently appears in groups or lines.As a result, candidates in proximity to text regions are highly likely to be considered as text.To take advantage of the specific spatial distribution of text, we utilize context information to recover missing text regions.
In order to describe the algorithm conveniently, we illustrate this question with the aid of Figure 4. In our work, the key region is defined as the region of the remaining adjacent candidates in the horizontal direction. As shown in Figure 4, the words "AR PARK" highlighted with a yellow background are the key region, the red dashed box indicates the horizontal search area, and the yellow dashed box indicates the vertical search area. Meanwhile, we further assume that the candidate regions shown in the left and right parts of Figure 4 have been lost during the preceding processing steps.
In our study, we leveraged context information to retrieve the missing text regions by utilizing key regions. To formulate context descriptors, we employ four distinct features, namely stroke width, common part, CC width, and CC height. We denote all the missing candidate regions after the preceding processing steps as {c_1, c_2, …, c_M}, where M is the total number of missing candidate regions. Here, c_i is the ith missing candidate region, and we define the state of c_i as {c_i^s, c_i^c, c_i^w, c_i^h}, where c_i^s is the stroke width and c_i^c is the common part with the key region (i.e., the common width with the key region in the vertical search area, and the common height in the horizontal search area). Note that c_i^w is the whole width of the CC and c_i^h is its whole height. Meanwhile, we denote the key region as K, where N is the total number of characters in the key region. K_s^ave is the average stroke width of the characters in the key region, K_w^ave is the average width, and K_h^ave is the average height.
The missing text candidates will not be retrieved in the search area if they meet any one of the conditions in equation (4). Note that the first row of equation (4) gives the requirements for finding the missing text regions in the vertical search area, and the second row of equation (4) gives those for the horizontal search area in our work.
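Since the exact thresholds of equation (4) are not reproduced here, the following is only a hedged sketch of this kind of constraint checking: a candidate is retrieved when its stroke width, overlap with the key region, and size are all compatible with the key-region averages. All tolerance values and field names are assumptions.

```python
def retrieve_candidate(cand, key, sw_tol=0.5, size_tol=0.5, overlap_min=0.5):
    # cand: missing candidate with stroke width s, common part c,
    # width w, height h; key: key-region averages.  The tolerances are
    # assumed values, not the thresholds of equation (4).
    if abs(cand["s"] - key["s_ave"]) > sw_tol * key["s_ave"]:
        return False          # stroke width too different
    if cand["c"] < overlap_min * min(cand["h"], key["h_ave"]):
        return False          # too little overlap with the key region
    if abs(cand["w"] - key["w_ave"]) > size_tol * key["w_ave"]:
        return False          # width mismatch
    if abs(cand["h"] - key["h_ave"]) > size_tol * key["h_ave"]:
        return False          # height mismatch
    return True
```

Any single violated constraint rejects the candidate, matching the "meet any one of the conditions" phrasing above.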

Grouping characters into text line and further verification
The objective of this step is to group the set of characters identified in the previous steps into coherent lines of text. The characters within the same text line are anticipated to exhibit similar stroke width, character width, and character height. Our work operates under the assumption that the text lines in natural scenes tend to be horizontal or slightly tilted. To address this, we introduce the concept of an "influence field." For example, the influence field for the character R in Figure 5(b) is depicted by the yellow background. The CC candidates within the influence field are compared to the reference character based on their CC width and height. If these CCs meet the specified constraint conditions, they are merged together. Note that, for the ith CC in the influence field, c_i^w is its whole width and c_i^h is its whole height, while c_seed^w and c_seed^h are the width and the height of the specific candidate. For every specific candidate, the morphology structure element is dynamically selected, and morphological closing is implemented between the specific candidate and the CCs in its influence field. For every CC that is taken into account, the text line is developed by merging the sub-images obtained through the morphological closing. The text lines before and after grouping are shown in Figure 5(a) and (c), respectively; in this section, the threshold T is set to 1.5 and the two remaining thresholds are both set to 0.65.
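A greedy sketch of grouping CCs into horizontal text lines by height similarity and gap distance. The two thresholds echo the values above (1.5 and 0.65), but the exact constraint rules and the morphological merging of the paper are not reproduced.

```python
def group_into_lines(boxes, t_h=0.65, t_gap=1.5):
    # boxes: list of (x, y, w, h); greedy left-to-right merge into lines.
    boxes = sorted(boxes, key=lambda b: b[0])
    lines = []
    for b in boxes:
        placed = False
        for line in lines:
            lx, ly, lw, lh = line[-1]
            similar_h = min(b[3], lh) / max(b[3], lh) > t_h   # height ratio
            close = b[0] - (lx + lw) < t_gap * max(b[3], lh)  # horizontal gap
            if similar_h and close:
                line.append(b)
                placed = True
                break
        if not placed:
            lines.append([b])
    return lines
```

Boxes that are close together with comparable heights end up in one line, while distant boxes start new lines.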
In the task of text region recognition, the selection of appropriate descriptors is critical for effectively differentiating between text regions and background interferences. The SVM classifier utilized in the previous stage has demonstrated remarkable efficacy in eliminating background interferences. However, text lines in natural scene images often exhibit significant layout variations, making it challenging to train a comprehensive classifier using generic descriptors. As a result, some persistent background interferences may not be fully removed in previous processing steps. To address this challenge, a different approach is adopted. Guided by the work of Wang et al. [14], an unsupervised feature learning algorithm is utilized to automatically extract features from the training samples, and a convolutional neural network (CNN) classifier is applied for candidate region classification. Although the CNN classifier presented in Wang et al. [14] demonstrates exceptional classification capabilities, it is not employed for text detection in the original image because its multi-scale evaluation of candidate regions is time-consuming. In light of these considerations, an off-the-shelf CNN classifier is utilized in this work for efficient verification of candidate text lines, whose areas are smaller than the original image.

Segmenting text line into words
In the context of scene text detection, while word segmentation is not the primary concern, it is nonetheless necessary to segment the detected text lines into separate words for the purpose of evaluating performance using the strategies employed in the ICDAR 2005 and ICDAR 2011 robust reading competitions. To address this requirement, a heuristic rule has been developed to facilitate the segmentation process. This rule is based on the calculation of the average distance between adjacent CCs within a given text line. The minimum word spacing distance T is estimated by equation (10). A separation occurs if the distance between adjacent CCs exceeds T, and the intra-word characters are thereby separated from the inter-word characters.
where D_ave is the average distance value. Based on previous research, we empirically set α = 1.75 and β = 3.

Performance evaluation

Experimental datasets
In order to comprehensively assess the performance of the proposed method in comparison with other state-of-the-art text detection methods, we carried out experiments on the ICDAR 2005 dataset and the ICDAR 2011 dataset. The ICDAR 2005 dataset consists of ~499 natural scene images annotated with ground truth, with varying resolutions, and its TrialTest subset contains a total of 249 images. The ICDAR 2011 dataset was specifically collected for the text detection competition at the ICDAR 2011 conference, and it comprises 484 natural scene images annotated with ground truth, with a corresponding test dataset of 255 images.

Evaluation protocol
The performance of our method is quantitatively measured by Precision (P), Recall (R), and f-measure (F). They are separately computed using the definitions provided in the studies by Lucas et al. [29], Lucas [30], and Shahab et al. [31]. The output is a set of rectangles designated with bounding boxes for detected texts, and a set of ground-truth rectangles is also provided in these datasets. The match m_p between two rectangles is defined as the area of the intersection divided by the area of the minimum bounding box containing both rectangles.

The match has the value one for identical rectangles and zero for rectangles that have no intersection. The closest match of each resulting rectangle with the set of ground truths is calculated.

The best match m(r; R) for a rectangle r in a set of rectangles R is defined as follows:

m(r; R) = max{ m_p(r, r') | r' ∈ R }.

The Precision and Recall are defined as follows:

P = Σ_{r_e ∈ E} m(r_e; T) / |E|,
R = Σ_{r_t ∈ T} m(r_t; E) / |T|,

where T and E represent the sets of ground truth and result rectangles, respectively. The f-measure, which is a single measure of algorithm performance, is a combination of the two measures, and it is defined as follows:

F = 1 / (α/P + (1 − α)/R),

where the parameter α is the relative weight of the Precision and the Recall, and α = 0.5 in our work.
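The ICDAR-style match and measures above translate directly into code; a minimal sketch with rectangles given as (x1, y1, x2, y2).

```python
def match(r1, r2):
    # Intersection area over the area of the minimum bounding box
    # containing both rectangles (the m_p measure).
    ix = max(0, min(r1[2], r2[2]) - max(r1[0], r2[0]))
    iy = max(0, min(r1[3], r2[3]) - max(r1[1], r2[1]))
    bx = max(r1[2], r2[2]) - min(r1[0], r2[0])
    by = max(r1[3], r2[3]) - min(r1[1], r2[1])
    return (ix * iy) / (bx * by)

def best_match(r, rects):
    # m(r; R): the best m_p over all rectangles in the set.
    return max((match(r, other) for other in rects), default=0.0)

def precision_recall_f(truth, results, alpha=0.5):
    p = sum(best_match(r, truth) for r in results) / len(results)
    rc = sum(best_match(t, results) for t in truth) / len(truth)
    f = 1.0 / (alpha / p + (1 - alpha) / rc)
    return p, rc, f
```

Identical detection and ground-truth rectangles yield P = R = F = 1, and disjoint rectangles match with score zero.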

Evaluation of SWT results obtained by using the multistage edge detection
To prove the effectiveness of the multistage edge detection method, we adopted the ICDAR 2005 annotation test dataset as the baseline. Compared with the baseline at the pixel level, the detection results are quantitatively measured by the Recall and the Precision. Note that, in this section, the two parameters are defined as follows:

Recall = |Img_dec ∩ Img_gt| / |Img_gt|,        (15)
Precision = |Img_dec ∩ Img_gt| / |Img_dec|,        (16)

where Img_dec is our segmentation result obtained by binarizing the SWT image and Img_gt is the corresponding ground truth. In order to prove the effectiveness of the multistage edge detection algorithm, we adopted the original Canny edge map and the accurate edge map to obtain the SWT image, respectively. As shown in Table 2, we can greatly increase the Precision at a small cost in Recall.
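The pixel-level measures in equations (15) and (16) amount to the following, assuming boolean masks for the detection result and the ground truth.

```python
import numpy as np

def pixel_pr(det, gt):
    # det, gt: boolean masks of detected and ground-truth text pixels.
    tp = np.logical_and(det, gt).sum()        # true positive pixels
    return tp / det.sum(), tp / gt.sum()      # (Precision, Recall)
```

For example, if every detected pixel is correct but only half of the ground-truth pixels are covered, Precision is 1.0 and Recall is 0.5.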
There are in total 1.08 × 10^7 pixels annotated as text in the ICDAR 2005 annotation test dataset. With the original Canny edge map for SWT processing, we obtain 1.1 × 10^8 candidate text pixels, of which 9.77 × 10^6 are true text pixels. However, with the accurate edge map for SWT processing, we obtain 5.0 × 10^7 candidate text pixels with 8.15 × 10^6 true text pixels among them. These results show that we can remove a large number of false positive pixels at a small cost in true positive pixels. In addition, the average time consumption of performing SWT processing guided by the original edge maps is 16.6 s, whereas the average time consumption is 10.8 s when using the accurate edge maps. To further illustrate this point, we first adopt the original Canny edge map and the accurate edge map to obtain the SWT image, respectively, and then the same exact processing steps as the subsequent pipelines follow the SWT processing to detect the scene texts. For the original Canny edge map case, we obtain a Recall of 0.61, a Precision of 0.66, and an f-measure of 0.63. However, for the accurate edge map case, we obtain a Recall of 0.62, a Precision of 0.73, and an f-measure of 0.67. It is obvious that better text detection performance is obtained by using the accurate edge map produced by our multistage edge detection method.

Evaluation of retrieving the missing text regions
In order to further verify the effectiveness of retrieving the missing text regions by using the context information, a comparison experiment was designed on the ICDAR 2005 TrialTest dataset. The results, which are obtained before and after retrieving the missing text regions, are compared with the ICDAR 2005 annotation test dataset at the pixel level, and they are quantitatively measured by the Recall and the Precision defined in equations (15) and (16). Before retrieving the missing text regions, we obtain a Recall of 0.66 and a Precision of 0.53; after retrieving the missing text regions, we obtain a Recall of 0.72 and a Precision of 0.52. It is obvious that we can greatly improve the Recall rate at a small cost in Precision.

Comparison with other approaches
In order to verify the effectiveness of the text detection scheme proposed in this work, experiments were designed and carried out on two public datasets, and the detected results are evaluated at the word level. The performance of our method is quantitatively measured by Precision (P), Recall (R), and f-measure (F), computed using the definitions provided in the studies by Lucas et al. [29], Lucas [30], and Shahab et al. [31]. It is imperative to acknowledge that the evaluation methodology employed in the ICDAR 2011 competition differs from that of the preceding ICDAR 2005 competition; the revised evaluation scheme was introduced by Wolf et al. [32]. The performances of the proposed method and other state-of-the-art methods on the ICDAR 2005 and the ICDAR 2011 databases are shown in Tables 3 and 4, and our results in each table are highlighted in bold font. As shown in Tables 3 and 4, our method achieved a Precision of 0.73, a Recall of 0.62, and an f-measure of 0.67 on the ICDAR 2005 dataset. Meanwhile, a similarly competitive performance was obtained on the ICDAR 2011 dataset, on which we achieved a Precision of 0.69, a Recall of 0.58, and an f-measure of 0.63. Compared with the other state-of-the-art methods listed in Tables 3 and 4, our approach achieves competitive performance on both the ICDAR 2005 and ICDAR 2011 datasets. Specifically, as shown in Table 3, our results on all performance indicators are better than those of the method of Epshtein et al. [21], who first proposed the SWT processing and achieved great success in scene text detection.
Figure 6 shows some typical results obtained by our method, with the detected text regions bounded by red rectangles. These results indicate that our system is robust against large variations in text font, color, size, and geometric distortion. In addition, one advantage of our method is that it can detect text regions with fewer than three characters (as shown in the bottom row of Figure 6), whereas such regions are usually lost by other methods (e.g., [34], [38], and Zhang and Kasturi [39]) owing to the assumption that text regions always contain more than three characters.

Weakness
Although our method obtains some satisfactory detection results, there are still some false positives that cannot be eliminated, because these text candidates closely resemble genuine text; e.g., the rectangular windows in Figure 7(a) and the wire fences in Figure 7(b) closely resemble the digit 1. Intra-word characters may also be incorrectly separated from inter-word characters (as shown in Figure 7(c)). Our method has difficulty detecting some exaggerated artistic text (as shown in Figure 7(d)), and it does not work well when the text regions have poor resolution (as shown in Figure 7(e) and (f)). Meanwhile, the hand-designed heuristic rules in Section 2.3.1 may affect the robustness of the algorithm. In our future work, we will try to resolve these problems by improving the detection method.
In addition, our experiments show that, as a feature extraction method, the SWT is strongly influenced by the accuracy of edge extraction, while the traditional Sobel edge detection operator, being based on differential operations, lacks robustness on noisy images. This poses difficulties for the subsequent elimination of false alarms and for improving the performance of the algorithm. In fact, deep learning has substantially improved the accuracy of edge detection in recent years, so obtaining high-precision text edges by deep learning and then extracting stroke width features by the SWT is expected to improve text detection performance.
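The noise sensitivity of the Sobel operator noted above is easy to demonstrate. The following is a minimal numpy-only sketch (the thresholds and noise level are illustrative, not taken from the paper): on a clean step edge the operator fires only along the edge, but adding moderate Gaussian noise produces many spurious edge pixels, which would then corrupt the SWT stroke-width estimates.

```python
import numpy as np

def sobel_edge_map(img, thresh=1.0):
    """Binary edge map from Sobel gradient magnitude (valid region only).
    As a purely differential operator it amplifies pixel noise."""
    gx = (img[:-2, 2:] - img[:-2, :-2]
          + 2 * (img[1:-1, 2:] - img[1:-1, :-2])
          + img[2:, 2:] - img[2:, :-2])
    gy = (img[2:, :-2] - img[:-2, :-2]
          + 2 * (img[2:, 1:-1] - img[:-2, 1:-1])
          + img[2:, 2:] - img[:-2, 2:])
    return np.hypot(gx, gy) > thresh

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                                # one vertical step edge
noisy = clean + rng.normal(0, 0.3, clean.shape)    # moderate Gaussian noise

clean_edges = sobel_edge_map(clean).sum()          # edge pixels only along the step
noisy_edges = sobel_edge_map(noisy).sum()          # many spurious edge pixels
```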
In Table 1, the discriminative conditions for candidate regions are established from their geometric features, which diminishes the robustness of the algorithm because the hyperparameter thresholds are selected empirically. An Interval Type-3 Fuzzy System with a new online fractional-order learning algorithm is proposed in the literature [41]; compared to traditional methods, it requires fewer rules and learning iterations and offers better stability for prediction tasks. Based on this, we will try to adopt the work proposed in [41] for fast classification of text candidate regions in our future work.

Conclusion and future work
In this study, we propose a novel approach for detecting text in natural scenes. The method consists of four stages: (1) the utilization of an effective visual attention model, which highlights text regions and suppresses the background; (2) a multistage edge detection process that produces a precise edge map; (3) the verification of text candidates through heuristic rules and an offline trained classifier; and (4) the incorporation of contextual information to retrieve missing text regions and group characters into text lines, followed by further verification and word segmentation. The effectiveness of the proposed approach is demonstrated through experimental evaluations on the ICDAR 2005 and the ICDAR 2011 benchmark datasets. In addition, the results indicate that the multistage edge detection process significantly improves the results obtained through the SWT algorithm. Future work aims to extend the scope of the proposed method toward arbitrary text detection in natural scenes. The planned addition of new and enhanced features will further distinguish text from the background. In addition, a machine learning-based fuzzy system is planned in order to rapidly classify text candidate regions without human intervention. Furthermore, a more effective segmentation strategy is planned to accurately separate intra-word characters from inter-word characters.

Figure 1 :
Figure 1: Flowchart of the proposed method.

Figure 3 :
Figure 3: SWT images guided by different edge maps. (a) SWT guided by the original edge map, and (b) SWT guided by the accurate edge map.

Figure 2 :
Figure 2: Gradient direction images of the text and the non-text regions. (a) Non-text region, (b) gradient direction image of the non-text region, (c) text region, and (d) gradient direction image of the text region.
To ensure a fair comparison, we trained text classifiers on the training datasets of ICDAR 2005 and ICDAR 2011, and we applied these trained classifiers to the respective test sets of ICDAR 2005 and ICDAR 2011. To develop the training sets, 2,338 positive samples and 2,709 negative samples were collected from the ICDAR 2005 training dataset, whereas 3,488 positive samples and 3,738 negative samples were collected from the ICDAR 2011 training dataset. The training samples are normalized to 48 × 144.
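The 48 × 144 normalization step can be sketched as follows. This is a minimal stand-in, since the paper does not specify the resampling method; a nearest-neighbour resize is shown here, though typical pipelines use bilinear interpolation.

```python
import numpy as np

def normalize_sample(patch, out_h=48, out_w=144):
    """Nearest-neighbour resize of a grayscale text candidate patch to
    the 48 x 144 training size used for the classifier samples."""
    h, w = patch.shape
    rows = np.arange(out_h) * h // out_h   # source row index for each output row
    cols = np.arange(out_w) * w // out_w   # source column index for each output column
    return patch[np.ix_(rows, cols)]

# Example: a 30 x 90 candidate patch is brought to the common size
sample = np.random.default_rng(1).random((30, 90))
norm = normalize_sample(sample)  # shape (48, 144)
```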
) corresponds to the horizontal search area. Based on the pixel-wise annotation of the ICDAR 2003 training dataset, the parameters are empirically selected.

Figure 4 : 2
Figure 4: Retrieving the missing text regions by using context information.

Figure 5 :
Figure 5: The results of grouping characters into text lines. (a) Before grouping characters into text lines, (b) the influence field of a character, and (c) after grouping characters into text lines.

Figure 6 :
Figure 6: Typical results obtained by our method on the two public datasets.

Figure 7 :
Figure 7: Some poor results obtained by our method. (a) False alarm due to rectangular windows, (b) false alarm derived from wire fences, (c) wrongly segmented text, (d) missing text from exaggerated artistic text, (e) missing text due to poor resolution, and (f) missing text from glare regions.

Table 1 :
Features for a candidate region in the heuristic rules.

Table 2 :
Comparative results using multistage edge detection. To compare the time consumption of performing SWT processing guided by the original edge map and by the accurate edge map, respectively, we implemented our method in Matlab on a desktop with an Intel(R) Core(TM)2 Duo 2.8 GHz CPU and 3 GB RAM. To accommodate both bright text on a dark background and vice versa, we apply the SWT processing twice.

Table 3 :
Experimental results on the ICDAR 2005 dataset. The values are calculated by us based on the P and R results reported by Fabrizio et al.

Table 4 :
Performance comparison of text detection methods on the ICDAR 2011 dataset