Abstract
Nuclear energy is a clean and widely used form of energy, but leakage and loss of nuclear material pose a threat to public safety, and radiation detection in public spaces is therefore a key part of nuclear security. Common security cameras equipped with complementary metal oxide semiconductor (CMOS) sensors can assist with radiation detection; previous work with these cameras, however, required slow, complex frame-by-frame processing. Building on that work, we propose a nuclear radiation detection method based on convolutional neural networks (CNNs). The method detects nuclear radiation in dynamic surveillance images with far lower computational complexity. Using real video images captured in the presence of a common Tc-99m radioactive source, we construct training and testing sets. After training the CNN and processing the test set, the experimental results demonstrate the high performance and effectiveness of our method.
1 Introduction
Nuclear energy has been rapidly developed and promoted worldwide and now plays an important role in modern industry. However, nuclear leakage [1], loss of nuclear materials and equipment [2], and the use of radioactive minerals as commemorative gifts [3] remind us that nuclear radiation detection and monitoring remain necessary for public safety.
Complementary metal oxide semiconductor (CMOS) image sensors respond directly to X-rays and gamma rays. Because of their decreasing prices and ready availability, they have generated tremendous research interest for radiation detection. CMOS sensors can measure radiation doses [4,5,6,7], detect radiation events [8,9,10,11], detect cosmic rays, and support space exploration [12,13]. CMOS cameras have been used directly for nuclear radiation detection with the lens covered [14,15,16,17,18,19]. In our previous work [20,21], we used uncovered CMOS cameras to detect nuclear radiation via image processing methods. However, processing every image frame is computationally expensive and time-consuming.
To speed up detection, computer vision and machine learning methods are increasingly being used. When the camera lens is not shielded, the electrons excited by radiation particles and by visible light in the depletion layer of the p–n junction of the CMOS sensor are superimposed on each other, so the grayscale values of bright blotches in the video image are larger than those of the surrounding pixels [21]. Detecting nuclear radiation with a CMOS camera produces a large amount of image data, which motivated us to apply a deep learning algorithm suited to processing large image datasets. Although convolutional neural networks (CNNs) are widely used for target detection in nuclear medical images and nuclear radiation imaging [22,23,24,25], direct detection of nuclear radiation events still relies mostly on dedicated instruments and manual judgment. To reduce computational complexity and improve detection efficiency, we propose a nuclear radiation detection method that combines a CNN model with an uncovered CMOS camera in a public surveillance environment. We obtained surveillance videos from a CMOS camera in the presence of a common medical Technetium-99m (Tc-99m) radiation source, implemented an image fusion method to construct training and testing sets, and finally trained the CNN and evaluated its nuclear radiation detection performance. We also performed experiments to verify the feasibility and effectiveness of the proposed method.
2 Materials and methods
2.1 Data acquisition
Surveillance videos were recorded using a TTQ-JW-02 camera with a 1/2.7 inch CMOS sensor OV2710-1E [26], a 3 μm pixel size, a frame rate of 25 fps, and 1,920 × 1,080 pixel image size. Table 1 provides the specifications of the OV2710-1E sensor, which is widely used in machine vision [27] and internet of things (IoT) [28] applications.
Product specifications of the CMOS sensor
Model | OV2710-1E | Shutter | Rolling |
---|---|---|---|
Active array size | 1,920 × 1,080 | Maximum exposure interval | 1,096 t_line (row periods) |
Lens size | 1/2.7-in | Image area | 5,856 μm × 3,276 μm |
Scan mode | Progressive | Package dimensions | 7,465 μm × 5,865 μm |
To obtain bright-blotch images without radiation, we covered the camera lens and recorded 48 h of video, from which we selected 100,000 images with bright blotches and labeled them as Dataset A. We then mounted the camera on a tripod to record videos without radiation, obtaining 200,000 frames of noncontinuous images with people but without radiation blotches, designated as Dataset B. Next, we placed a 7 × 10⁸ Bq Tc-99m radioactive source above the camera, as shown in Figure 1; the half-life of Tc-99m is 6.02 h and its γ-ray energy is 140 keV. With the lens covered, we recorded 100,000 frames of radiation images, designated as Dataset C. In addition, we used Dataset 1 and Dataset 2 from our previous work [21], comprising 20,000 monitoring images without radiation and 20,000 images with radiation, to test the effectiveness of the proposed method.

TTQ-JW-02 CMOS camera and Tc-99m radioactive source.
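To make the data-acquisition step concrete, the following is a minimal sketch (not the authors' exact pipeline) of extracting frames from a recorded surveillance video with OpenCV in Python; the file names, output directory, and sampling step are hypothetical.

```python
# Hedged sketch: dump frames from a recorded video so they can be sorted into
# Datasets A, B, and C. Paths and the sampling step are illustrative assumptions.
import os
import cv2

def extract_frames(video_path, out_dir, step=1):
    """Save every `step`-th frame of `video_path` as a PNG image in `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = cap.read()   # for this camera, frame is a 1080 x 1920 x 3 BGR array
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:07d}.png"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example (hypothetical file): keep one frame per second from a 25 fps recording
# extract_frames("covered_lens_48h.mp4", "dataset_A_raw", step=25)
```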
When the CMOS camera lens is not shielded, the electrons excited by radiation particles and by visible light are superimposed on each other in the p–n junction depletion layer of the CMOS sensor, so the grayscale values of bright blotches in the video image are larger than those of the surrounding pixels. However, it is difficult to tell whether radiation bright blotches are present in surveillance images, especially when there are moving objects in the image backgrounds. Because nuclear radiation experiments cannot be carried out in public, monitoring images containing radiation blotches cannot be obtained directly. Therefore, we use an image fusion algorithm to combine images containing only bright blotches (noise blotches or radiation blotches) with monitoring images free of radiation blotches, producing the training and test sets for further experiments.
Sample frame images with blotches in Datasets A and C are shown in Figure 2. The corresponding original image without radiation bright blotches in Dataset B is shown in Figure 3. Table 2 shows the image category of each dataset.

Two images with bright blotches: (a) unirradiated; (b) irradiated.

A monitoring image without radiation.
The image category of each dataset
Dataset | Image category |
---|---|
A | Images with noise bright blotches |
B | Monitoring images without radiation |
C | Images with radiation bright blotches |
2.2 Acquisition of training set and testing set
2.2.1 Weighted averaging image fusion
Image fusion combines multiple images from one or more sensors into a new image that contains more useful information than any single input image [29]. Current image fusion methods work at three levels [30]. The first and lowest level is pixel-level fusion, which retains as much of the original information as possible. The second level is feature-level fusion, which may lose important information and distort the image. The third level is decision-making-level fusion, which is difficult to process and implement. We use pixel-level fusion to construct images with and without radiation from the observed images, retaining both the bright-blotch and the monitoring information. These fused images form the training set used in the following experiments.
The weighted average (WA) method is the simplest pixel-level image fusion method; it is easy to implement, fast, and able to improve the signal-to-noise ratio of the fused image. Improved variants of the algorithm are widely used in infrared imaging, medical imaging, and other fields [31,32,33]. The principle of the WA method is as follows. Given images F₁ and F₂, the pixel value of the fused image F at pixel (x, y) is

F(x, y) = w₁F₁(x, y) + w₂F₂(x, y),

where w₁ and w₂ are the weights of F₁(x, y) and F₂(x, y), respectively, and w₁ + w₂ = 1. In this work, w₁ and w₂ are determined from the pixel values F₁(x, y) and F₂(x, y).
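As a concrete illustration of pixel-level WA fusion, the sketch below uses weights proportional to the pixel values themselves so that brighter blotch pixels dominate the fused result; this particular weighting rule is our assumption for illustration, not necessarily the exact rule used in the paper.

```python
# Hedged sketch of pixel-level weighted-average fusion. The value-proportional
# weighting rule is an assumption; the paper only states that w1 and w2 are
# derived from F1(x, y) and F2(x, y) with w1 + w2 = 1.
import numpy as np

def weighted_average_fusion(f1, f2, eps=1e-6):
    """Fuse two images of identical shape pixel by pixel: F = w1*F1 + w2*F2."""
    f1 = f1.astype(np.float64)
    f2 = f2.astype(np.float64)
    w1 = f1 / (f1 + f2 + eps)   # per-pixel weights with w1 + w2 = 1
    w2 = 1.0 - w1
    fused = w1 * f1 + w2 * f2
    return np.clip(fused, 0, 255).astype(np.uint8)
```

With equal weights (w₁ = w₂ = 0.5), this reduces to a simple average of the two frames; for RGB images the same operation is applied element-wise to each channel.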
2.2.2 Training set and testing set
We randomly fused the 100,000 images in Dataset A with 100,000 of the images in Dataset B using the WA method and grouped the resulting 100,000 fused images as Dataset N (negative). The remaining 100,000 images in Dataset B were randomly fused with the 100,000 images in Dataset C to obtain Dataset P (positive). Datasets N and P together form the training set.
In our previous work [21], we obtained Data1 and Data2, which included 10,000 unirradiated and 10,000 irradiated monitoring images, respectively. In this work, we expanded both to 20,000 images each and used them as the testing set.
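A minimal sketch of this construction is given below, assuming the images are already cropped to a common size and that the weighted_average_fusion function from the previous sketch is available; directory names and the label convention (0 = no radiation, 1 = radiation, as used in Section 3) are illustrative.

```python
# Hedged sketch: build Dataset N (label 0) and Dataset P (label 1) by randomly
# pairing blotch images with monitoring images and fusing each pair.
import glob
import os
import random
import cv2

def build_fused_set(blotch_dir, monitor_dir, out_dir, label):
    os.makedirs(out_dir, exist_ok=True)
    blotch_paths = sorted(glob.glob(os.path.join(blotch_dir, "*.png")))
    monitor_paths = sorted(glob.glob(os.path.join(monitor_dir, "*.png")))
    random.shuffle(monitor_paths)            # random pairing of the two datasets
    samples = []
    for i, (bp, mp) in enumerate(zip(blotch_paths, monitor_paths)):
        blotch = cv2.imread(bp)              # 3-channel blotch image
        monitor = cv2.imread(mp)             # 3-channel monitoring image
        fused = weighted_average_fusion(blotch, monitor)
        out_path = os.path.join(out_dir, f"{label}_{i:06d}.png")
        cv2.imwrite(out_path, fused)
        samples.append((out_path, label))
    return samples

# Dataset N: noise blotches (Dataset A) fused with half of Dataset B
# negatives = build_fused_set("dataset_A", "dataset_B_part1", "dataset_N", label=0)
# Dataset P: radiation blotches (Dataset C) fused with the rest of Dataset B
# positives = build_fused_set("dataset_C", "dataset_B_part2", "dataset_P", label=1)
```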
2.3 Convolutional neural network
Deep learning is a class of machine learning algorithms that employs deep neural network structures. A CNN is a multilayer feed-forward neural network designed specifically to process large amounts of image or sensor data in the form of multiple arrays by exploiting local and global stationary properties [34]. CNNs are popular because of their efficient performance in object recognition tasks such as gesture recognition, face recognition, object classification, and scene description generation [35,36,37,38,39].
A CNN consists of three primary kinds of hidden layers: convolution layers, pooling layers, and fully connected layers. The neurons of the convolution and pooling layers are arranged in three dimensions: width, height, and depth. Readers interested in the detailed structure and principles of each layer are referred to Szegedy et al. [38]. Here we briefly review these layers and the rectified linear unit (ReLU):
Convolution layer: A convolution layer applies a set of filters (convolution kernels) to the input image to extract features, producing feature maps.
Pooling layer: The pooling layer receives the feature maps from the convolution layer and shrinks them while preserving the most important information. In max pooling, only the maximum value within each window is passed on to the next layer, so the spatial size is reduced while the strongest response of each feature within the window is retained.
Rectified linear unit: The ReLU replaces every negative value passed to it with 0, which helps mitigate vanishing gradients and convergence fluctuations and keeps activations from getting stuck near 0 or blowing up toward infinity (a small numeric illustration of pooling and the ReLU follows this list).
Fully connected layer: Each node of a fully connected layer is connected to all nodes of the previous layer; it combines the features extracted by the earlier layers and maps the high-level feature maps to labeled categories.
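As a concrete numeric illustration of max pooling and the ReLU (using a simplified non-overlapping 2 × 2 window rather than the overlapping 3 × 3 window used later in the paper; the numbers are arbitrary):

```python
# Toy example: 2x2 max pooling keeps only the largest value in each window,
# and ReLU replaces negative values with 0. Values are arbitrary.
import numpy as np

feature_map = np.array([[ 1., -3.,  2.,  0.],
                        [ 4.,  0., -1.,  5.],
                        [-2.,  6.,  3., -4.],
                        [ 0.,  1., -5.,  2.]])

# Non-overlapping 2x2 max pooling with stride 2
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)                       # [[4. 5.]
                                    #  [6. 3.]]

# ReLU: negative values become 0, positive values pass through unchanged
relu = np.maximum(feature_map, 0.0)
print(relu)
```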
Figure 4 shows the framework of the CNN used in our research. Our CNN consists of five convolution layers and three fully connected layers. Max pooling operations (pooling layers) follow the first, second, and fifth convolution layers; they are not shown in Figure 4. The main structure of the model is: input; convolution, max pooling, ReLU activation; convolution, max pooling, ReLU activation; convolution, ReLU activation; convolution, ReLU activation; convolution, max pooling, ReLU activation; fully connected layer, ReLU activation, dropout; fully connected layer, ReLU activation, dropout; output.

The base architecture of CNN used in this work.
In this CNN, the input is a 3-channel image of 224 × 224 pixels. The first convolution layer uses 96 convolution kernels of size 11 × 11 × 3, divided into two groups of 48. The input is convolved with a stride of 4 pixels to produce two groups of 55 × 55 × 48 convolution results, to which the ReLU activation function is applied. Overlapping max pooling with a 3 × 3 window and a stride of 2 pixels then yields 27 × 27 × 48 pooling results, and a local response normalization operation produces a normalized result of 27 × 27 × 48. The later convolution layers perform similar operations with different window and stride sizes; the window size of each layer is shown in Figure 4.
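The following TensorFlow/Keras sketch builds an AlexNet-style network consistent with this description. The filter counts of the second to fifth convolution layers follow the standard AlexNet configuration (they are not stated explicitly in the text), and the two-group filter split and the local response normalization of the original AlexNet are omitted for simplicity, so this should be read as an approximation rather than the authors' exact model.

```python
# Hedged sketch of an AlexNet-style CNN in TensorFlow/Keras: five convolution
# layers, three max-pooling operations, and three fully connected layers with
# dropout, matching the structure described above.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_alexnet_like(input_shape=(224, 224, 3), num_classes=2):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(96, 11, strides=4, activation="relu"),        # conv1: 96 kernels, 11x11, stride 4
        layers.MaxPooling2D(pool_size=3, strides=2),                # overlapping 3x3/2 max pooling
        layers.Conv2D(256, 5, padding="same", activation="relu"),   # conv2
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),   # conv3
        layers.Conv2D(384, 3, padding="same", activation="relu"),   # conv4
        layers.Conv2D(256, 3, padding="same", activation="relu"),   # conv5
        layers.MaxPooling2D(pool_size=3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),                      # fully connected 1
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu"),                      # fully connected 2
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),            # output: radiation vs no radiation
    ])
```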
To evaluate the validity of the method, we used the following evaluation indicators, which are commonly used in classification tasks:

Precision = TP / (TP + FP),
Recall = TP / (TP + FN),
F1 = 2 × Precision × Recall / (Precision + Recall),
Accuracy = (TP + TN) / Total.

In these definitions, T indicates that the classifier's prediction is correct and F that it is incorrect, while P denotes positive samples and N negative samples. TP (true positive) is the number of actual positive samples correctly identified as positive; TN (true negative) is the number of actual negative samples correctly identified as negative; FP (false positive) is the number of negative samples incorrectly identified as positive; FN (false negative) is the number of positive samples incorrectly identified as negative; and Total is the total number of test samples.
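A short sketch of these indicators as code, assuming the predicted and true labels are available as 0/1 arrays (1 = radiation present):

```python
# Hedged sketch: compute the four evaluation indicators from the confusion
# counts. `y_true` and `y_pred` are 0/1 arrays (1 = radiation present).
import numpy as np

def classification_metrics(y_true, y_pred):
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    total = len(y_true)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / total
    return precision, recall, f1, accuracy
```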
3 Results
In this experiment, most areas of the surveillance images were unchanged, so we cropped the regions where people appear and the regions containing bright blotches from the datasets. The cropped images were 224 × 224 pixels, as shown in Figure 5. The unirradiated blotch images in Dataset A and the irradiated blotch images in Dataset C were randomly fused with the monitoring images in Dataset B (using the WA method described above) to obtain the training set. Training images were labeled 0 or 1 according to the presence of radiation. Figure 6 illustrates this process.

Cropped image set before fusion: (a) monitoring image; (b) bright blotches.

Image fusion generates training set samples.
Thus, we obtained the training set, which includes the validation set. The monitoring images containing radiation obtained in our previous work were used as the test set for evaluating the CNN model; Table 3 summarizes the data used in this research. The CNN was then trained, validated, and tested on our deep learning server, whose configuration is listed in Table 4. The training loss and accuracy are shown in Figure 7.
Data used in this research
Data set | Unirradiated images | Irradiated images |
---|---|---|
Training set | 75,000 | 75,000 |
Validation set | 25,000 | 25,000 |
Testing set | 20,000 | 20,000 |
Image size | 3 × 224 × 224 (RGB, pixels) | 3 × 224 × 224 (RGB, pixels) |
Experimental setup in this research
CPU version | Intel® Xeon® E5-4655 CPU @ 3.20 GHz |
---|---|
GPU version | NVIDIA GeForce GTX 1080Ti × 2 |
Framework | TensorFlow |
Operation system | Ubuntu 16.04 |
Network used | AlexNet |
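Given this setup (TensorFlow with an AlexNet-style network), the training step can be sketched as follows. The optimizer, learning rate, batch size, and number of epochs are our assumptions, since they are not reported in the text; the directory layout is hypothetical; build_alexnet_like refers to the architecture sketch in Section 2.3; and the 75/25 training/validation split matches Table 3.

```python
# Hedged training sketch in TensorFlow/Keras. Hyperparameters and directory
# names are illustrative assumptions; labels follow the 0/1 convention above.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "training_set", image_size=(224, 224), batch_size=128,
    validation_split=0.25, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "training_set", image_size=(224, 224), batch_size=128,
    validation_split=0.25, subset="validation", seed=42)

model = build_alexnet_like(input_shape=(224, 224, 3), num_classes=2)
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    loss="sparse_categorical_crossentropy",   # integer labels: 0 (no radiation) / 1 (radiation)
    metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=30)
```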

Loss and accuracy of CNN training phase.
Finally, we used the testing data to test the effectiveness of the CNN model, with results given in Table 5. The precision, recall, F1, and accuracy of the test results indicate that the proposed method effectively detects radiation. To further verify CNN performance on this task, we also trained and tested two other widely used CNN models; their test results are likewise listed in Table 5.
Precision, recall, F1, and accuracy on testing data
Model | Precision | Recall | F1 | Accuracy |
---|---|---|---|---|
AlexNet | 0.8981 | 0.9736 | 0.9343 | 0.9314 |
GoogLeNet | 0.9174 | 0.9293 | 0.9233 | 0.9228 |
ResNet | 0.9186 | 0.9443 | 0.9443 | 0.9303 |
4 Discussion
In this article, a new CNN-based method for detecting nuclear radiation with CMOS surveillance cameras is proposed. Because radiation images cannot be acquired in actual populated places, we used image fusion to construct the training and validation sets and used the data collected in our previous work as the testing set.
From the results shown in Figure 7 and summarized in Table 5, our method achieves a recall of 0.9736 for radiation detection, indicating that most radiation events are identified successfully. Precision needs further improvement because a small number of unirradiated events were identified as radiation events.
5 Conclusion
In this article, a new method for detecting nuclear radiation in public surveillance scenarios is proposed that uses an uncovered CMOS camera, an image fusion method, and a CNN model. Surveillance videos were obtained from a CMOS camera in the presence of a common medical Technetium-99m (Tc-99m) radiation source. An image fusion method was then implemented to construct the training and testing sets. Finally, we trained the CNN and tested the method's nuclear radiation detection performance. The experimental results show that performance on the testing set was not as high as on the training set, but the recall, F1, and accuracy scores still demonstrate significant effectiveness. The proposed method offers significant promise for real-time detection of nuclear radiation using common monitoring cameras.
Acknowledgments
The authors thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.
Funding information: This work was supported in part by the National Natural Science Foundation of China (Grant No. 11975044) and the Fundamental Research Funds for the Central Universities (No. FRF-TP-19-019A3).
Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
Conflict of interest: The authors state no conflict of interest.
References
[1] Hirose K. 2011 Fukushima Dai-ichi nuclear power plant accident: summary of regional radioactive deposition monitoring results. J Environ Radioactivity. 2012;111:13–7. 10.1016/j.jenvrad.2011.09.003.
[2] Sohu. Available at: http://www.sohu.com/a/114982338_116897 (accessed on Jan. 10, 2021).
[3] Sohu. Available at: https://www.sohu.com/a/362028083_115354 (accessed on Jan. 10, 2021).
[4] Han G, Jjs B, Kl C, Kcn D, Sjh E, Hck E. An investigation of medical radiation detection using CMOS image sensors in smartphones. Nucl Instrum Methods Phys Res A. 2016;823:126–34. 10.1016/j.nima.2016.04.007.
[5] Wang X, Zhang SL, Song GX, Guo DF, Ma CW, Wang F. Remote measurement of low-energy radiation based on ARM board and ZigBee wireless communication. Nucl Sci Tech. 2018;29(1):31–6. 10.1007/s41365-017-0344-2.
[6] Shoulong X, Shuliang Z, Youjun H. γ-ray detection using commercial off-the-shelf CMOS and CCD image sensors. IEEE Sens J. 2017;17(20):6599–604. 10.1109/JSEN.2017.2732499.
[7] Wang C, Hu S, Gao C, Feng C. Nuclear radiation degradation study on HD camera based on CMOS image sensor at different dose rates. Sensors. 2018;18(2):514. 10.3390/s18020514.
[8] Pérez M, Lipovetzky J, Haro MS, Sidelnik I, Blostein JJ, Bessia FA, et al. Particle detection and classification using commercial off the shelf CMOS image sensors. Nucl Instrum Methods Phys Res A. 2016;827:171–80. 10.1016/j.nima.2016.04.072.
[9] Cheng QQ, Yuan YZ, Ma CW, Wang F. Gamma measurement based on CMOS sensor and ARM microcontroller. Nucl Sci Tech. 2017;28(9):1–5. 10.1007/s41365-017-0276-x.
[10] Peng ZY, Gu YT, Xie YG, Yan WQ, Zhao H, Li GL, et al. Studies of an X-ray imaging detector based on THGEM and CCD camera. Radiat Detect Technol Methods. 2018;2(1):1–8. 10.1007/s41605-018-0058-y.
[11] Cheng QQ, Ma CW, Yuan YZ, Wang F, Jin F, Liu XF. X-ray detection based on complementary metal-oxide-semiconductor sensors. Nucl Sci Tech. 2019;30(1):1–6. 10.1007/s41365-018-0528-4.
[12] Zheng R, Liu C, Wei X, Wang J, Hu Y. Dark-current estimation method for CMOS APS sensors in mixed radiation environment. Nucl Instrum Methods Phys Res A. 2019;924:230–5. 10.1016/j.nima.2018.09.146.
[13] Virmontois C, Belloir JM, Beaumel M, Vriet A, Perrot N, Sellier C, et al. Dose and single-event effects on a color CMOS camera for space exploration. IEEE Trans Nucl Sci. 2018;66(1):104–10. 10.1109/TNS.2018.2885822.
[14] Van Hoey O, Salavrakos A, Marques A, Nagao A, Willems R, Vanhavere F, et al. Radiation dosimetry properties of smartphone CMOS sensors. Radiat Prot Dosimetry. 2016;168(3):314–21. 10.1093/rpd/ncv352.
[15] Carrel F, Abou Khalil R, Colas S, De Toro D, Ferrand G, Gaillard-Lecanu E, et al. GAMPIX: a new gamma imaging system for radiological safety and homeland security purposes. 2011 IEEE Nuclear Science Symposium Conference Record; 2011 Oct 23–29; Valencia, Spain. IEEE; 2011. p. 4739–44. 10.1109/NSSMIC.2011.6154706.
[16] Wagner E, Sorom R, Wiles L. Radiation monitoring for the masses. Health Phys. 2016;110(1):37–44. 10.1097/HP.0000000000000407.
[17] Wei QY, Bai R, Wang Z, Yao RT, Gu Y, Dai TT. Surveying ionizing radiations in real time using a smartphone. Nucl Sci Tech. 2017;28(5):1–5. 10.1007/s41365-017-0215-x.
[18] Wei QY, Wang Z, Dai TT, Gu Y. Nuclear radiation detection based on un-covered CMOS camera under static scene. At Energy Sci Technol. 2017;51(1):175–9. 10.7538/yzk.2017.51.01.0175. In Chinese.
[19] Huang G, Yan Z, Dai T, Lee R, Wei Q. Simultaneous measurement of ionizing radiation and heart rate using a smartphone camera. Open Phys. 2020;18(1):566–73. 10.1515/phys-2020-0181.
[20] Yan Z, Hu Y, Huang G, Dai T, Zhang Z, Wei Q. Detecting nuclear radiation with an uncovered CMOS camera and a long-wavelength pass filter. 2019 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC); 2019 Oct 26–Nov 2; Manchester, United Kingdom. IEEE; 2019. p. 1–3. 10.1109/NSS/MIC42101.2019.9059807.
[21] Yan Z, Wei Q, Huang G, Hu Y, Zhang Z, Dai T. Nuclear radiation detection based on uncovered CMOS camera under dynamic scene. Nucl Instrum Methods Phys Res A. 2020;956:163383. 10.1016/j.nima.2019.163383.
[22] Jiao L, Zhang F, Liu F, Yang S, Li L, Feng Z, et al. Survey of deep learning-based object detection. IEEE Access. 2019;7:128837–68. 10.1109/ACCESS.2019.2939201.
[23] Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, et al. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35(5):1299–312. 10.1109/TMI.2016.2535302.
[24] Song T-A, Chowdhury SR, Yang F, Dutta J. Super-resolution PET imaging using convolutional neural networks. IEEE Trans Comput Imaging. 2020;6:518–28. 10.1109/TCI.2020.2964229.
[25] Kromp F, Fischer L, Bozsaky E, Ambros IM, Taschner-Mandl S. Evaluation of deep learning architectures for complex immunofluorescence nuclear image segmentation. IEEE Trans Med Imaging. 2021;99:1. 10.1109/TMI.2021.3069558.
[26] OV2710-1E: 1080p/720p HD color CMOS image sensor with OmniPixel®3-HS technology. Available from: https://www.ovt.com/sensors/OV2710-1E (accessed on Nov. 9, 2019).
[27] Mao J, Guo Z, Geng H, Zhang B, Cao Z, Niu W. Design of visual navigation system of farmland tracked robot based on raspberry pie. 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA); 2019 Jun 19–21; Xi'an, China. IEEE; 2019. p. 573–7. 10.1109/ICIEA.2019.8834077.
[28] Xiao K, Du Z, Yang L. An embedded wireless sensor system for multi-service agricultural information acquisition. Sens Lett. 2017;15(11):907–14. 10.1166/sl.2017.3897.
[29] Piella G. A general framework for multiresolution image fusion: from pixels to regions. Inf Fusion. 2003;4(4):259–80. 10.1016/S1566-2535(03)00046-0.
[30] Zhang Z, Blum RS. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proc IEEE. 1999;87(8):1315–26. 10.1109/5.775414.
[31] Wei C, Blum RS. Theoretical analysis of correlation-based quality measures for weighted averaging image fusion. Inf Fusion. 2010;11(4):301–10. 10.1016/j.inffus.2009.10.006.
[32] Yang G, Tong T, Lu SY, Li ZY, Zheng Y. Fusion of infrared and visible images based on multi-features. Opt Precis Eng. 2014;22(2):489–96. 10.3788/OPE.20142202.0489. In Chinese.
[33] Azis NA, Jeong YS, Choi HJ, Iraqi Y. Weighted averaging fusion for multi-view skeletal data and its application in action recognition. IET Comput Vis. 2016;10(2):134–42. 10.1049/iet-cvi.2015.0146.
[34] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44. 10.1038/nature14539.
[35] Chen YN, Han CC, Wang CT, Jeng BS, Fan KC. The application of a convolution neural network on face and license plate detection. 18th International Conference on Pattern Recognition (ICPR'06); 2006 Aug 20–24; Hong Kong, China. IEEE; 2006. p. 552–5. 10.1109/ICPR.2006.1115.
[36] Bobić V, Tadić P, Kvaščev G. Hand gesture recognition using neural network based techniques. 13th Symposium on Neural Networks and Applications (NEUREL); 2016 Nov 22–24; Belgrade, Serbia. IEEE; 2016. p. 1–4. 10.1109/NEUREL.2016.7800104.
[37] Jiang Y, Chen L, Zhang H, Xiao X. Breast cancer histopathological image classification using convolutional neural networks with small SE-ResNet module. PLoS One. 2019;14(3):e0214587. 10.1371/journal.pone.0214587.
[38] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015 Jun 7–12; Boston, MA. IEEE; 2015. p. 1–9. 10.1109/CVPR.2015.7298594.
[39] Sharma N, Jain V, Mishra A. An analysis of convolutional neural networks for image classification. Procedia Comput Sci. 2018;132:377–84. 10.1016/j.procs.2018.05.198.
© 2022 Zhangfa Yan et al., published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.