Abstract
Cashmere and wool are common raw materials in the textile industry, and cashmere garments are prized for their excellent comfort. A system that can quickly and automatically distinguish the two fibers would improve the efficiency of fiber recognition in the textile industry. We propose a classification method for cashmere and wool fibers based on feature fusion using the maximum inter-class variance. First, the fiber target area is obtained by a preprocessing algorithm. Second, features are extracted from the sub-images produced by the Discrete Wavelet Transform, and the approximate and detail features are fused linearly with introduced weights. The maximum separability of the feature data is achieved through the maximum inter-class variance. Finally, different classifiers are used to evaluate the performance of the proposed method. The support vector machine classifier achieves the highest recognition rate, with an accuracy of 95.20%. The experimental results show that the recognition rate of the fused feature vectors is improved by 6.73% compared to the original feature vectors describing the image, verifying that the proposed method provides an effective solution for the automatic recognition of cashmere and wool.
1. Introduction
Cashmere has become one of the most popular raw materials in the textile industry because of its comfortable hand feel and excellent thermal insulation [1]. However, the international cashmere market faces great challenges. Cashmere and wool fibers have a similar appearance and touch, and they are mainly identified according to their scale characteristics [2]. Merchants sometimes hide lower-priced wool in cashmere products, which disrupts the stable operation of the textile market [3]. At present, cashmere and wool are identified mainly by human observation of microscopic images of the two fibers [4]. This is extremely time-consuming and error-prone work, and the identification results are not reliable. Thus, to meet the market demand for cashmere products, it is necessary to develop an automatic classification system that can identify cashmere and wool fibers quickly and precisely.
In recent years, many researchers have applied image processing technology to the recognition of cashmere and wool. In ref. [5], a method for recognizing cashmere and wool fibers was developed that uses eight morphological features based on the differences between cashmere and wool fiber scales. In ref. [6], a Bayesian model for recognizing cashmere and wool was established using three morphological features of the fiber. In identification systems based on morphological features, the complex background of the collected images sometimes makes it difficult to obtain the morphological features of the fiber. Recognition methods based on texture features achieve high accuracy because they express fiber characteristics objectively: the scale texture of the fiber image is classified by a feature extraction model built with image processing methods.
In ref. [7], cashmere and wool were identified by extracting Tamura texture features. In ref. [8], the morphological and textural features of the fiber were extracted by the gray-level co-occurrence matrix algorithm and an interactive measurement algorithm, and the cashmere and wool fibers were then identified quickly and accurately using the K-means clustering algorithm. In ref. [9], cashmere and wool fibers were classified by textural features obtained from the spatial and frequency domains of the fiber images. In ref. [10], the four sub-images of a wavelet decomposition were analyzed by a Gaussian Markov random field model, and classification results for cashmere and wool images were reported by generating eight-dimensional texture features from the sub-images. In ref. [11], Speeded-Up Robust Features (SURF) were extracted to transform the original image into a high-dimensional feature vector, which was then classified.
Deep learning technology is favored for its good classification performance, and in recent years several deep learning architectures for cashmere and wool classification have been developed. In ref. [12], a vision-based fiber recognition framework built on image segmentation and a deep convolutional neural network was proposed; this method can segment overlapping and adhesive translucent fibers. Luo et al. [13] proposed a residual network method to identify cashmere and wool fibers, showing that a residual network model with 18 weight layers achieved the highest accuracy. In ref. [14], four pre-trained convolutional neural networks (AlexNet, VGG-16, VGG-19, and GoogLeNet) were applied to transfer learning, and the performance differences between the four architectures in identifying similar fibers were evaluated. Convolutional neural network (CNN) systems show good accuracy in classifying the similar fibers of cashmere and wool. However, because such systems require a large amount of sample data to train the network model, they cannot achieve good recognition accuracy with limited data.
In this article, we propose a feature fusion method for wavelet multi-scale images based on the maximum inter-class variance, which obtains features that distinguish cashmere and wool fibers well while reducing both the demand for data and the computational complexity of the algorithm. The method fuses Tamura texture features extracted from the sub-images of the Discrete Wavelet Transform. Wavelet transforms are currently widely used in image classification and image fusion [15,16]. Compared with traditional texture feature extraction methods, the multi-scale wavelet decomposition of images has proven successful at expressing image details and classifying image texture features. Section 2 gives the details of the proposed classification method for cashmere and wool. The analysis of the experimental results is presented in Section 3, and Section 4 concludes the article.
2. Proposed method
Due to differences in growth environment, breeding methods, and cross-breeding, cashmere coarsening has become very common, and distinguishing cashmere from wool has become more complex. In this work, an automatic recognition method for cashmere and wool is proposed. It extracts the texture features of fiber images through image processing and fuses them into feature vectors with significant discriminative power according to the idea of maximum inter-class variance. The block diagram of the system is shown in Figure 1. First, the collected source image of the fiber is enhanced and cropped to reduce the influence of the background on feature extraction and to obtain the target area of the image. Then, the Tamura texture features of each sub-image produced by the Discrete Wavelet Transform of the target image are extracted. Afterward, optimal weights are introduced for feature fusion based on the maximum inter-class variance, and the fused features are normalized. Finally, a support vector machine (SVM) is used for classification to obtain the final recognition accuracy.

Figure 1. Flow chart of the cashmere and wool classification method.
2.1. Details of the dataset
In this study, cashmere and wool fibers from northern China are used as raw materials, and images of the fibers captured by scanning electron microscope (SEM) form the dataset of this article. The dataset contains 400 cashmere and 400 wool fiber images; the SEM magnification is set to 1,000×, and the horizontal and vertical resolutions are set to 96 dpi. Figure 2(a) and (b) shows microscopic SEM images of cashmere and wool fibers, respectively. Image processing and feature extraction are implemented in Python. The dataset is divided into two groups: one is used as the training set for the cross-validation of the system, and the other is used as the test set to evaluate the final recognition performance.
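For concreteness, a minimal sketch of this dataset split is given below. The directory layout and the 50/50 split ratio are assumptions for illustration; the article specifies neither.

```python
# Hypothetical dataset layout and split; paths and ratio are assumptions.
import glob
from sklearn.model_selection import train_test_split

cashmere = sorted(glob.glob("data/cashmere/*.png"))  # 400 SEM images (assumed path)
wool = sorted(glob.glob("data/wool/*.png"))          # 400 SEM images (assumed path)

paths = cashmere + wool
labels = [0] * len(cashmere) + [1] * len(wool)       # 0 = cashmere, 1 = wool

# One group for training/cross-validation, the other held out as the test set.
train_paths, test_paths, y_train, y_test = train_test_split(
    paths, labels, test_size=0.5, stratify=labels, random_state=0)
```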

Figure 2. Fiber images provided by scanning electron microscope: (a) cashmere and (b) wool.
2.2. Preprocessing
To reduce interference and obtain detailed information about the target fiber, the texture of the fiber image is first enhanced by an adaptive texture enhancement algorithm, and the images are then binarized by the Otsu threshold method [17]. This step separates the image background from the fiber area, after which the region of interest is filled and noise is removed. The central axis of the fiber is then obtained, and the rotation angle needed to align it is calculated to facilitate the subsequent cropping. Finally, using the fiber-filled image as a template, the original image is cropped to obtain the target area for feature extraction. This process is shown in Figure 3, and a sketch of the pipeline is given after the figure.

Figure 3. Image preprocessing process: (a) original image, (b) enhancement, (c) binarization, (d) fill the image, (e) remove outliers, (f) central axis, (g) rotation angle, (h) rotate, and (i) crop.
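The following is a rough Python/OpenCV sketch of this pipeline. The article does not name the specific enhancement or axis-estimation operators, so the CLAHE enhancement, the morphology settings, and the use of a minimum-area rectangle to estimate the central-axis angle are all our assumptions.

```python
# A sketch of the preprocessing pipeline: enhance -> Otsu binarize ->
# fill/denoise -> estimate axis angle -> rotate -> crop (steps b-i of Figure 3).
import cv2
import numpy as np

def preprocess(gray):
    # (b) adaptive contrast/texture enhancement (CLAHE assumed)
    enhanced = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)
    # (c) Otsu binarization to separate fiber from background
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # (d)-(e) fill the fiber region and remove small noise blobs
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # (f)-(g) estimate the fiber's axis angle from the largest contour
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    fiber = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(fiber)
    if w < h:                       # normalize so the fiber ends up horizontal
        angle += 90
    # (h) rotate the image and the mask by the estimated angle
    rot = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    gray_r = cv2.warpAffine(gray, rot, gray.shape[::-1])
    mask_r = cv2.warpAffine(mask, rot, mask.shape[::-1])
    # (i) crop the rotated image to the fiber's bounding box (mask as template)
    x, y, bw, bh = cv2.boundingRect(cv2.findNonZero(mask_r))
    return gray_r[y:y + bh, x:x + bw]
```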
2.3. Multi-scale decomposition of wavelet analysis
Wavelet transform is known as the "image microscope" of image processing: its multi-scale decomposition separates image information layer by layer through low-pass and high-pass filters [18]. The Discrete Wavelet Transform of an image uses a series of wavelets at different scales to decompose the original image signal into multiple detail images and approximation images. In this article, the Haar wavelet is used to decompose the cashmere and wool fiber images. The Haar scale function is expressed by the following equation [19]:

$$\phi(t) = \begin{cases} 1, & 0 \le t < 1 \\ 0, & \text{otherwise} \end{cases}$$

The Haar wavelet function is expressed by the following equation:

$$\psi(t) = \begin{cases} 1, & 0 \le t < \tfrac{1}{2} \\ -1, & \tfrac{1}{2} \le t < 1 \\ 0, & \text{otherwise} \end{cases}$$
The approximation and detail sub-images obtained by Haar wavelet decomposition of the original cashmere and wool fiber images are shown in Figure 4; a minimal decomposition sketch follows the figure.

Figure 4. Wavelet multi-scale decomposition of a cashmere image.
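As an illustration, the single-level 2-D Haar decomposition can be computed with PyWavelets; the library choice is ours, not the article's.

```python
# Single-level 2-D Haar DWT of a grayscale fiber image, yielding the
# approximation (LL) and the horizontal (LH), vertical (HL), and diagonal
# (HH) detail sub-images shown in Figure 4.
import pywt

def haar_decompose(image):
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    # Keys follow the article's naming: LH = horizontal, HL = vertical detail.
    return {"LL": cA, "LH": cH, "HL": cV, "HH": cD}
```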
2.4. Feature fusion based on the maximum inter-class variance
To make full use of the image information in the low-frequency and high-frequency domains, Tamura texture features are extracted from the detail and approximation sub-images obtained after wavelet decomposition [20], which better characterizes the differences between the fiber scales. Texture feature vectors of cashmere and wool with the largest separability are constructed by feature fusion based on the maximum inter-class variance. Let the feature vector of the subsampled approximation image be $F_{LL} = [f_{LL1}, f_{LL2}, f_{LL3}]$; the feature vector of the horizontal detail sub-image be $F_{LH} = [f_{LH1}, f_{LH2}, f_{LH3}]$; the feature vector of the vertical detail sub-image be $F_{HL} = [f_{HL1}, f_{HL2}, f_{HL3}]$; and the feature vector of the diagonal detail sub-image be $F_{HH} = [f_{HH1}, f_{HH2}, f_{HH3}]$. The features extracted from each decomposed sub-image are the roughness, contrast, and linearity of the fiber scales, and they serve as feature descriptors of the original fiber. The feature vector with the maximum inter-class variance is therefore obtained by linear fusion: adding the features extracted from the four sub-images with appropriate weights yields the three-dimensional feature vector $F$, expressed by the following equation:

$$F = a F_{LL} + b F_{LH} + c F_{HL} + d F_{HH}$$
where $a + b + c + d = 1$; in each feature vector, the first component represents roughness, the second contrast, and the third linearity.
To obtain the optimal feature fusion coefficients, a metric $\lambda$ is introduced to measure the ratio of the inter-class variance to the intra-class variance. It is expressed by the following equation:

$$\lambda = \frac{\sigma_b^2}{\sigma_w^2}$$

where

$$\sigma_b^2 = N_1(\mu_1 - \mu)^2 + N_2(\mu_2 - \mu)^2, \qquad \sigma_w^2 = \sum_{i=1}^{N_1}(x_i - \mu_1)^2 + \sum_{j=1}^{N_2}(x_j - \mu_2)^2$$

Here $\mu_1$ and $\mu_2$ are the mean values of the cashmere and wool features in the training set, respectively, and $\mu$ is the mean value of the whole sample; $N_1$ is the number of cashmere samples in the training set and $N_2$ the number of wool samples; $x_i$ and $x_j$ are the feature values of cashmere and wool, respectively. When $\lambda$ takes its maximum value, the feature vector with the maximum inter-class variance is obtained to distinguish cashmere from wool. The experiments show that this occurs at $a = 0.3$, $b = 0.1$, $c = 0.2$, and $d = 0.4$.
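A sketch of how these weights could be searched is given below. The 0.1-step grid over the weights and the use of the per-sample mean of the three fused features to scalarize the vectors for the $\lambda$ computation are our assumptions; the article does not describe its search procedure.

```python
# Grid search for the fusion weights (a, b, c, d) that maximize lambda.
# feats_* are (n_samples, 3) arrays of Tamura features (roughness, contrast,
# linearity) per sub-image; y holds the class labels (0 cashmere, 1 wool).
import itertools
import numpy as np

def fisher_lambda(F, y):
    """Ratio of inter-class to intra-class variance for fused features F."""
    f = F.mean(axis=1)                          # one scalar score per sample (assumption)
    mu, mu1, mu2 = f.mean(), f[y == 0].mean(), f[y == 1].mean()
    n1, n2 = (y == 0).sum(), (y == 1).sum()
    between = n1 * (mu1 - mu) ** 2 + n2 * (mu2 - mu) ** 2
    within = ((f[y == 0] - mu1) ** 2).sum() + ((f[y == 1] - mu2) ** 2).sum()
    return between / within

def best_weights(F_LL, F_LH, F_HL, F_HH, y):
    best_w, best_lam = None, -np.inf
    for a, b, c in itertools.product(np.arange(0, 1.01, 0.1), repeat=3):
        d = 1.0 - a - b - c                     # enforce a + b + c + d = 1
        if d < -1e-9:
            continue
        F = a * F_LL + b * F_LH + c * F_HL + d * F_HH
        lam = fisher_lambda(F, y)
        if lam > best_lam:
            best_w, best_lam = (a, b, c, d), lam
    return best_w, best_lam  # the article reports (0.3, 0.1, 0.2, 0.4)
```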
2.5. Classifier
In the last step of the recognition system, different classifiers are used to verify the performance of the fused features through ten-fold cross-validation: K-nearest neighbors (KNN), support vector machine (SVM), and linear discriminant analysis (LDA). KNN classifies a query point according to its similarity to its K nearest neighbors [21], SVM classifies features by searching for the optimal hyperplane in the feature space [22], and LDA classifies features by finding the best projection line [23]. A sketch of this evaluation follows.
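A minimal scikit-learn sketch of the ten-fold evaluation is shown below. The article reports only the classifier types and the Gaussian kernel for SVM, so the hyperparameters (K, the SVC defaults) and the synthetic stand-in data are assumptions.

```python
# Ten-fold cross-validation of the three classifiers on the fused features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 3))   # stand-in for the fused 3-D feature vectors
y = np.repeat([0, 1], 400)      # 400 cashmere (0) + 400 wool (1) labels

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),   # Gaussian (RBF) kernel, as in Table 3
    "LDA": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)   # normalization, then classifier
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: mean accuracy = {scores.mean():.4f}")
```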
3. Experimental results and analysis
In this work, the size of the collected fiber images does not affect the final fiber characteristics, so each image is cropped to 256 × 256 pixels and converted to grayscale for subsequent processing. To obtain the feature vectors with the maximum inter-class variance, the features extracted from the four wavelet sub-images are fused linearly with introduced weights, and the metric $\lambda$ is used to measure the separability of the fused features. The results show that the maximum inter-class variance is obtained when the fusion coefficients of the low-frequency approximation sub-image and the high-frequency horizontal sub-image are 0.3 and 0.1, respectively, and those of the high-frequency vertical and diagonal sub-images are 0.2 and 0.4, respectively, as shown in Table 1.
Table 1. Difference between intra-class and inter-class variance

Class | Intra-class variance | Inter-class variance | λ
---|---|---|---
Cashmere | 0.227 | 0.375 | 0.563
Wool | 0.217 | |
The recognition accuracy of the features extracted from each sub-image is evaluated using SVM, as shown in Table 2. The results show that, among the four sub-images, the features extracted from the diagonal high-frequency detail achieve the highest recognition accuracy, indicating that the diagonal-direction details of the cashmere and wool fiber scales differ most clearly.
Table 2. Performance measure values of original and sub-image features

Features | Accuracy (%) | Precision (%) | Recall (%)
---|---|---|---
Original | 88.47 | 88.16 | 88.71
Approximate | 82.86 | 82.04 | 83.40
Horizontal | 78.06 | 75.92 | 79.32
Vertical | 79.90 | 77.76 | 81.24
Diagonal | 84.29 | 83.88 | 84.57
To visualize the extracted high-dimensional feature data, the multi-dimensional features are mapped onto a two-dimensional plane (a sketch of one such projection follows Figure 5). Figure 5 shows the features of the target area and of the different sub-images, and it reflects the separability between features after linear fusion based on the maximum inter-class variance. The performance measures used in this article are accuracy, precision, recall, F1-score, and misclassification error (MCE).

Figure 5. Comparison of the distribution of different feature vectors.
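The article does not name the projection used for Figure 5; a PCA-based sketch of such a 2-D visualization, reusing `X` and `y` from the previous sketch, might look as follows.

```python
# Hypothetical 2-D visualization of the feature vectors via PCA (assumption;
# the article does not state which projection produced Figure 5).
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

X2 = PCA(n_components=2).fit_transform(X)   # X, y from the previous sketch
plt.scatter(X2[y == 0, 0], X2[y == 0, 1], marker="o", label="cashmere")
plt.scatter(X2[y == 1, 0], X2[y == 1, 1], marker="^", label="wool")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.legend()
plt.show()
```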
The system evaluates the extracted features through cross-validation with several typical classifiers. Using the weighted fused features, different classifiers are evaluated by ten-fold cross-validation, as shown in Table 3. The results show that the SVM with a Gaussian kernel achieves the highest recognition rate: 95.20% accuracy, 96.46% precision, 95.86% recall, and 96.16% F1-score. A sketch of how these measures are computed follows Table 3.
Table 3. Performance measure values of different classifiers

Classifier | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) | MCE (%)
---|---|---|---|---|---
KNN | 92.35 | 91.84 | 92.78 | 92.31 | 7.65
LDA | 91.12 | 90.20 | 91.89 | 91.04 | 8.88
SVM | 95.20 | 96.46 | 95.86 | 96.16 | 4.80
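For reference, the Table 3 measures can be computed from held-out predictions as sketched below, where MCE is simply 1 − accuracy; the SVM settings and the 50/50 split are assumptions carried over from the earlier sketches.

```python
# Computing accuracy, precision, recall, F1-score, and MCE on a held-out set.
# X, y are the fused features and labels from the earlier sketches.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)
y_pred = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(
    X_train, y_train).predict(X_test)

acc = accuracy_score(y_test, y_pred)
print(f"Accuracy : {100 * acc:.2f}%")
print(f"Precision: {100 * precision_score(y_test, y_pred):.2f}%")
print(f"Recall   : {100 * recall_score(y_test, y_pred):.2f}%")
print(f"F1-score : {100 * f1_score(y_test, y_pred):.2f}%")
print(f"MCE      : {100 * (1 - acc):.2f}%")  # misclassification error = 1 - accuracy
```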
With the same training/test split ratio, several existing feature extraction methods are compared with the method proposed in this article. As shown in Figure 6, the proposed method achieves a high recognition rate in classifying cashmere and wool fibers.

Figure 6. Comparison with existing methods.
4. Conclusion
In this article, an automatic classification method for cashmere and wool fibers is proposed. The method obtains the most discriminative feature vectors through feature fusion based on the maximum inter-class variance. First, the original image is preprocessed to obtain the target fiber region, reducing the impact of interference such as background and noise. Then, the image is decomposed by the Discrete Wavelet Transform into sub-images carrying its approximation and detail information, and Tamura texture features are extracted from each of them. Next, the feature fusion weights are determined, and the feature vectors with the maximum inter-class variance are fed into the classifiers. Evaluating the extracted features with different classifiers shows that the SVM classifier performs best, with an accuracy of 95.20%. The results indicate that linearly fusing these features yields the optimal feature vector for cashmere and wool and improves fiber recognition accuracy. However, this method still has shortcomings, such as the small amount of sample data and experimental errors. In future work, more diverse fiber samples will therefore be collected for the recognition of animal fibers.
Funding information: This work was supported by the Natural Science Basic Research Key Program funded by Shaanxi Provincial Science and Technology Department (No. 2022JZ-35 and No. 2023), the key research program industrial textiles Collaborative Innovation Center Project of Shaanxi Provincial Department of Education (No. 20JY026) and Science and Technology plan project of Yulin City (No. CXY-2020-052).
Conflict of interest: Authors state no conflict of interest.
References
[1] Luo, J., Lu, K., Zhang, B., Zhang, Y., Chen, Y., Tian, J. (2021). Current status and progress of identification methods for cashmere and wool fibers. Wool Textile Journal, 48(10), 112–117.
[2] Xing, W., Xin, B., Deng, N., Chen, Y., Zhang, Z. (2019). A novel digital analysis method for measuring and identifying of wool and cashmere fibers. Measurement, 132, 11–21. doi:10.1016/j.measurement.2018.09.032
[3] Zhong, Y., Lu, K., Tian, J., Zhu, H. (2017). Wool/cashmere identification based on projection curves. Textile Research Journal, 87(14), 1730–1741. doi:10.1177/0040517516658516
[4] Lin, P. (2019). Detection method for distinguishing wool from cashmere. Western Leather, 41(18), 160.
[5] Ma, C., Liu, X., Liu, F. (2014). A research on cashmere automatic identification method based on statistical analysis. Wool Textile Journal, 42(10), 62–64.
[6] Xing, W., Liu, Y., Deng, N., Xin, B., Wang, W., Chen, Y. (2020). Automatic identification of cashmere and wool fibers based on the morphological features analysis. Micron, 128(102768), 1–8. doi:10.1016/j.micron.2019.102768
[7] Yuan, S., Lu, K., Zhong, Y. (2016). Identification of wool and cashmere based on texture analysis. Key Engineering Materials, 671, 385–390. doi:10.4028/www.scientific.net/KEM.671.385
[8] Xing, W., Deng, N., Xin, B., Wang, Y., Chen, Y., Zhang, Z. (2019). An image-based method for the automatic recognition of cashmere and wool fibers. Micron, 141, 102–112. doi:10.1016/j.measurement.2019.04.015
[9] Zhu, Y., Huang, J., Wu, T., Ren, X. (2020). Identification method of cashmere and wool based on texture features of GLCM and Gabor. Journal of Engineered Fibers and Fabrics, 16, 1–7. doi:10.1177/1558925021989179
[10] Xing, W., Deng, N., Xin, B., Chen, Y., Zhang, Z. (2019). Investigation of a novel automatic micro image-based method for the recognition of animal fibers based on wavelet and Markov random field. Micron, 119, 88–97. doi:10.1016/j.micron.2019.01.009
[11] Lu, K., Luo, J., Zhong, Y., Chai, X. (2019). Identification of wool and cashmere SEM images based on SURF features. Journal of Engineered Fibers and Fabrics, 14, 1–9. doi:10.1177/1558925019866121
[12] Gao, F., Lin, J., Liu, H., Lu, S. (2019). A novel VBM framework of fiber recognition based on image segmentation and DCNN. IEEE Transactions on Instrumentation and Measurement, 69(4), 963–973. doi:10.1109/TIM.2019.2912238
[13] Luo, J., Lu, K., Chen, Y., Zhang, B. (2021). Automatic identification of cashmere and wool fibers based on microscopic visual features and residual network model. Micron, 143(103023), 1–7. doi:10.1016/j.micron.2021.103023
[14] Xing, W., Liu, Y., Xin, B., Zang, L., Deng, N. (2022). The application of deep and transfer learning for identifying cashmere and wool fibers. Journal of Natural Fibers, 19(1), 88–104. doi:10.1080/15440478.2020.1727817
[15] Mishra, S., Deepthi, V. (2021). Brain image classification by the combination of different wavelet transforms and support vector machine classification. Journal of Ambient Intelligence and Humanized Computing, 12(6), 6741–6749. doi:10.1007/s12652-020-02299-y
[16] Dou, J., Qin, Q., Tu, Z. (2019). Image fusion based on wavelet transform with genetic algorithms and human visual system. Multimedia Tools and Applications, 78(9), 12491–12517. doi:10.1007/s11042-018-6756-0
[17] Otsu, N. (1979). A threshold selection method from gray-level histograms. IEEE Transactions on Systems, Man, and Cybernetics, 9(1), 62–66. doi:10.1109/TSMC.1979.4310076
[18] Deighan, A., Watts, D. (1997). Ground-roll suppression using the wavelet transform. Geophysics, 62(6), 1896–1903. doi:10.1190/1.1444290
[19] Stankovic, R., Falkowski, B. (2003). The Haar wavelet transform: its status and achievements. Computers & Electrical Engineering, 29(1), 25–44. doi:10.1016/S0045-7906(01)00011-8
[20] Tamura, H., Mori, S., Yamawaki, T. (1978). Textural features corresponding to visual perception. IEEE Transactions on Systems, Man, and Cybernetics, 8(6), 460–473. doi:10.1109/TSMC.1978.4309999
[21] Liao, Y., Vemuri, V. (2002). Use of k-nearest neighbor classifier for intrusion detection. Computers & Security, 21(5), 439–448. doi:10.1016/S0167-4048(02)00514-X
[22] Cherkassky, V., Ma, Y. (2004). Practical selection of SVM parameters and noise estimation for SVM regression. Neural Networks, 17(1), 113–126. doi:10.1016/S0893-6080(03)00169-2
[23] Liu, X., Zhang, L., Li, M., Zhang, H., Wang, D. (2005). Boosting image classification with LDA-based feature combination for digital photograph management. Pattern Recognition, 38(6), 887–901. doi:10.1016/j.patcog.2004.11.008
© 2022 Yaolin Zhu et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.