Accurate and real-time object detection in crowded indoor spaces based on the fusion of DBSCAN algorithm and improved YOLOv4-tiny network

Abstract: Real-time object detection is an integral part of internet of things (IoT) applications and an important research field of computer vision. Existing lightweight algorithms cannot handle target occlusions well in detection tasks in narrow indoor scenes, resulting in a large number of missed detections and misclassifications. To this end, an accurate real-time multi-scale detection method that integrates the density-based spatial clustering of applications with noise (DBSCAN) algorithm and an improved You Only Look Once (YOLO)v4-tiny network is proposed. First, by improving the neck network of the YOLOv4-tiny model, the detailed information of the shallow network is exploited to boost the average precision of the model in identifying dense small objects, and the Cross mini-Batch Normalization strategy is adopted to improve the accuracy of statistical information. Second, the DBSCAN clustering algorithm is fused with the modified network to achieve better clustering effects. Finally, the Mosaic data enrichment technique is adopted during model training to improve the capability of the model to recognize occluded targets. Experimental results show that, compared with the original YOLOv4-tiny algorithm, the mAP of the improved algorithm on the self-constructed dataset is significantly higher, and the processing speed meets the requirements of real-time applications on embedded devices. The performance of the proposed model on the public PASCAL VOC07 and PASCAL VOC12 datasets is also better than that of other advanced lightweight algorithms, and the detection ability for occluded objects is significantly improved, which meets the requirements of mobile terminals for real-time detection in crowded indoor environments.


Introduction
Dense object detection is one of the special application tasks of computer vision [1]. It has wide application value in fields such as the control and identification of suspicious objects [2], intelligent collection of pedestrian traffic statistics [3], and abnormal behavior detection [4]. A crowded indoor space is a single-background scene that is little influenced by factors such as illumination, angle, and internal structure. The difficulty lies in the fact that, when faced with mutual occlusion between multiple objects and incomplete camera framing, traditional lightweight algorithms produce a great number of missed detections and false alarms. It is therefore of great research significance to develop a lightweight detection algorithm that can accurately and quickly detect occluded small targets within crowded indoor spaces in the internet of things (IoT) environment [5].
Take the elevator car detection application as an example. In the field of elevator safety management, the information provided by elevator monitoring video is often used as the basis for judging the situation in the car and taking corresponding measures. Current elevators generally have video acquisition devices on the top of the car, but these only provide real-time observation, video storage, and evidence collection; they do not offer functions such as timely warning and real-time monitoring. Although elevator managers can observe the video images and take timely measures after discovering forbidden objects, managers cannot watch the surveillance video all the time, which may cause omissions in their work. Therefore, it is necessary to compensate for the shortcomings of human monitoring with image acquisition equipment, processors, and target detection algorithms, that is, to design an intelligent video monitoring system that can detect prohibited objects in elevators around the clock and realize the detection of prohibited target intrusions based on elevator monitoring video [6]. Through timely alarms to management personnel, the workload of relevant personnel is reduced and elevator riding safety is guaranteed.
Traditional object detection methods rely on manual feature extraction and complete detection tasks through classifiers such as the support vector machine (SVM) [7]. The recognition process of such methods is complex and highly subjective, and they are not sensitive to occluded salient regions, which leads to poor performance in terms of detection accuracy, processing speed, and robustness. Recently, with continuous progress in convolutional neural networks (CNNs), the concept of the end-to-end framework has been applied to target detection algorithms such as the single shot multibox detector (SSD) [8] and the You Only Look Once (YOLO) series [9]. These algorithms combine the classification process and the regression network into a single stage, achieve a significantly better trade-off between detection accuracy and speed, and are suitable for deployment on mobile terminals for target detection in crowded indoor spaces.
Existing algorithms such as the YOLOv4-tiny network [10] achieve good detection performance in dense scenes, but they also have the following shortcomings: 1) the backbone network is too lightweight, and the contour evolution of the feature map during the layer-by-layer transfer process is insufficient, making it impossible to effectively learn enough features of the occluded targets during training; 2) the neck network structure is too simple, fuses feature maps of different sizes inefficiently, and is prone to losing detailed edge information; 3) the traditional K-means clustering used in the post-processing stage has limitations that can easily lead to missed detections.
The main contributions of the proposed method are as follows: 1) An adaptive multi-scale detection algorithm is proposed, taking crowded indoor spaces such as elevator cars, bus cabins, and passenger aircraft cabins as the main research scenarios. Based on the YOLOv4-tiny model, which is suitable for embedded hardware platforms, the neck network is improved to raise the recognition accuracy for small targets. The Cross mini-Batch Normalization (CmBN) technique replaces the batch normalization (BN) technique of the original YOLOv4-tiny model to ensure the estimation accuracy of statistical information and reduce estimation errors. The Mosaic data enrichment strategy is introduced in the training phase to improve the utilization of feature maps of different scales and alleviate the loss of edge information; 2) An improved clustering method combining the DBSCAN and K-means algorithms is proposed. First, initial clustering is performed with the DBSCAN algorithm, and multi-scale clustering is then added to further obtain the local and overall information of the targets. After multi-scale clustering and convolution operations, the initial feature map is obtained. Afterward, the accurate center point position is derived with the K-means algorithm, which effectively accelerates convergence and improves the classification accuracy for dense small targets. As a result, the classification of objects in different datasets and complex environments is improved, and the resistance to noise and interference is strengthened.
The remainder of this article is organized as follows. Section 2 introduces the related research, and Section 3 explains the background knowledge of YOLOv4-tiny. Section 4 describes the proposed method specifically, including the improved YOLOv4-tiny network structure and the enhanced clustering algorithm for bounding box determination. In Section 5, the experimental results and discussion are presented. Finally, Section 6 summarizes the full text.

Related research
Before the rise of deep learning (DL) methodology, traditional object recognition technology mainly utilized the more obvious features, including image corners, textures, and contours, together with feature descriptors such as the histogram of oriented gradients [11], scale-invariant feature transform [12], speeded-up robust features [13], Haar-like features [14], and local binary patterns [15]; recognition and classification of the extracted object features were then performed with template matching [16], boosting [17], SVM, and other methods. However, the traditional recognition algorithms have shortcomings such as reliance on manual experience for feature selection and poor robustness in complex application scenarios.
As hardware equipment and DL technology evolved, DL-based frameworks began to emerge in the field of object detection. Classic DL models include CNNs, deep belief networks, deep residual networks, and autoencoders (AE) [18]. LeCun et al. [19] first applied artificial neural networks to recognize handwritten characters, and the LeNet [20] proposed later marked the beginning of research on deep convolutional networks in the area of object recognition. Later, the AlexNet suggested by Krizhevsky et al. [21] achieved the best result of an 11% false recognition rate in the visual object classes (VOC) challenge, which greatly improved the practicability of neural networks for target recognition. The VGGNet [22] proposed at Oxford University further deepened the neural network and extracted deeper abstract features from images. The GoogLeNet [23] developed by Google improved the computational efficiency of neural networks, and new versions including Inception-v2, Inception-v3, Inception-v4, and Xception [24] were subsequently proposed.
The ResNet [25] developed by He et al. at Microsoft Research is a 152-layer deep neural network. The model won the ImageNet Large Scale Visual Recognition Challenge 2015 with a significantly lower error rate and fewer parameters than the VGGNet. In addition, there are also studies on shallow networks and unsupervised DL models. For example, Chen et al. [26] proposed using an unsupervised sparse AE network to learn from randomly sampled image patches and finally training a softmax classifier to classify the objects. In 2014, Girshick et al. [27] proposed the region-based CNN (R-CNN), in which a region-selective search algorithm obtains 2,000 candidate bounding boxes before convolution operations are performed, which reduces redundant convolution operations and realizes multi-target recognition on a single image. Since then, the CNN has gained considerable attention in the area of object recognition.
On the basis of the classic networks, by improving the way of obtaining candidate regions and the modification of the network structure, two types of object detection methods have been developed, namely the candidate region-based (also known as two-stage) algorithms, and the regression-based (also known as one-stage) object detection algorithms. In general, the candidate region-based algorithms have better recognition performance, while the regression-based algorithms have faster processing speed [28].
Popular candidate region-based object detection models include SPP-Net [26], Fast R-CNN [29], Faster R-CNN [30], and Mask R-CNN [31]. These algorithms mainly involve two tasks: candidate region acquisition and candidate region identification. SPP-Net reduces the computing load of the network through spatial pyramid pooling layers. Fast R-CNN and Faster R-CNN improve the network in different respects to raise the detection speed. The key improvement of Faster R-CNN is to use a region proposal network (RPN) instead of the selective search algorithm, which further improves the detection speed. The RPN is currently among the most accurate candidate bounding box localization methods, so Faster R-CNN has very high localization accuracy. On the basis of Faster R-CNN, researchers have proposed Mask R-CNN, RetinaNet [32], and others.
The regression-based (one-stage) object detection algorithms seek a compromise between recognition speed and recognition accuracy: the selection of candidate bounding boxes, feature extraction, object classification, and bounding box prediction are all regressed into a single network, and the location and category of the target are obtained directly at the output layer, so the recognition speed improves significantly. The regression-based object recognition algorithms mainly include YOLO [9], SSD [8], and EfficientNet [33], methods with fast processing speed and good detection accuracy.
In 2016, Redmon et al. proposed the YOLO network [9], which no longer uses the region proposal strategy; instead, the entire input image is divided into multiple grids, and each grid is responsible for forecasting the objects whose centers fall within it. End-to-end prediction is achieved by running a single CNN pass on an image, which significantly accelerates object recognition. YOLO also has shortcomings: its positioning accuracy is lower than that of the candidate region-based algorithms, it cannot identify small targets well, and its generalization performance is poor. Liu et al. proposed SSD [8] at the European Conference on Computer Vision 2016, in which the RPN structure is used to improve on the YOLO network. By mapping objects at different scales through different convolutional layers, the detection accuracy for small targets is enhanced while retaining the fast computing speed of YOLO and the high accuracy of Faster R-CNN. YOLO is superior to the SSD algorithm in terms of detection speed but suffers from missed detections for mutually occluded targets. In addition, researchers have developed simplified versions such as YOLOv3-tiny [34] and YOLOv4-tiny [10], lightweight models based on the original YOLO network. These models are relatively simple and efficient with fewer parameters, significantly reducing storage and computing requirements. Among them, YOLOv4-tiny is significantly better than other lightweight models in terms of training time and detection speed, making it applicable to embedded devices.

YOLOv4-tiny
In the YOLO series, region classification proposals are integrated into a single neural network for the prediction of bounding boxes and classification probabilities; the input image is partitioned into S × S grid cells and detection is performed in a single evaluation stage. To improve accuracy, the YOLOv2 model [35] introduces BN and direct position prediction strategies and replaces fully connected layers with convolutional layers to accelerate the training and detection processes. The YOLOv3 model [34] uses Darknet-53 as the backbone, which utilizes 53 convolutional layers for feature extraction. The improved CSPDarknet-53 is used as the backbone in YOLOv4 [36], where feature extraction and connection are split into two parts using cross-stage partial (CSP) connections.
YOLOv4-tiny [10], as a lightweight version of the YOLOv4 model, uses CSPDarkNet53-tiny as its backbone network; the network structure of YOLOv4-tiny is shown in Figure 1.

Figure 1: Structure of YOLOv4-tiny.

CSPDarkNet53-tiny uses three simplified CSP blocks with the residual modules removed. To further reduce computational complexity, the Mish activation function is replaced by the leaky rectified linear unit, and a feature pyramid network is used to extract two feature maps of different sizes for prediction, reducing the number of model parameters and the computational load and thereby promoting application in embedded systems and mobile devices. However, due to the simpler network structure, the detection performance of YOLOv4-tiny is also significantly lower than that of the YOLOv4 network, especially for small targets.
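The activation swap described above is easy to illustrate. Below is a minimal NumPy sketch (not the paper's code) comparing the leaky rectified linear unit with the Mish function it replaces; the negative slope of 0.1 is the common Darknet default and is an assumption here.

```python
import numpy as np

def leaky_relu(x, negative_slope=0.1):
    """Leaky ReLU: keeps positive values, scales negatives by a small slope."""
    return np.where(x > 0, x, negative_slope * x)

def mish(x):
    """Mish, the YOLOv4 activation replaced in the tiny variant:
    x * tanh(softplus(x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

x = np.array([-2.0, 0.0, 3.0])
out = leaky_relu(x)  # negatives scaled by 0.1: [-0.2, 0.0, 3.0]
```

Unlike Mish, the leaky ReLU needs only a comparison and a multiplication per element, which is why it is preferred on embedded hardware.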

Proposed method
Based on the original YOLOv4-tiny model, this study designs a dense object recognition model for indoor spaces. The YOLOv4-tiny model has a simplified structure and fast inference speed and is suitable for embedded hardware platforms. However, the recognition accuracy of the model for small targets and overlapping targets within dense scenes needs to be improved, and further optimization and improvement are required.

Improvement of the network structure
The backbone network of the YOLOv4-tiny model contains three CSP modules, namely CSP1, CSP2, and CSP3. Among them, the CSP2 layer includes precise location information and more detail information but less semantic information. The CSP3 layer includes a larger amount of semantic information but less detail; its location information is relatively rough, and the position and detail information of many small targets can be lost. To improve the recognition precision of the YOLOv4-tiny model for densely crowded targets, we propose an improved YOLOv4-tiny model, the detailed structure of which is shown in Figure 2. In the figure, Conv denotes a convolution operation, Leaky is the Leaky ReLU activation function, and Maxpool represents a max pooling operation.

From Figure 2, it can be seen that the proposed model adds a path connected to the CSP2 layer of the backbone network in the neck network of the original YOLOv4-tiny model, expanding the detection scales from two to three. The outputs of the CSP2 layer and the upsampling layer (UP layer) are concatenated along the channel dimension, fusing the feature maps from the two different layers; after two CBL modules, a feature map downsampled to one-eighth of the resolution of the input image is obtained. For an input image with a resolution of 416 × 416 pixels, the improved YOLOv4-tiny model outputs feature maps at three scales with resolutions of 13 × 13, 26 × 26, and 52 × 52 pixels, respectively. Since occluded objects have small salient areas, the original YOLOv4-tiny network is prone to losing edge detail information. In the proposed model, a large-scale feature map optimization technique is introduced into the improved backbone network, which enables the network to capture more detailed image information.
By improving the resolution of input pictures, the dimensions of the output feature map are changed from 13 × 13, 26 × 26 to 13 × 13, 26 × 26, and 52 × 52, thereby enhancing the learning capability of the shallow network and reducing the information loss of the shallow layers in the training process.
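The three output scales follow directly from the downsampling strides of the detection heads (32, 16, and 8 for the 13 × 13, 26 × 26, and 52 × 52 maps, respectively). A small illustrative helper (hypothetical, not from the paper):

```python
def output_resolutions(input_size, strides=(32, 16, 8)):
    """Feature-map side lengths produced by each detection head
    for a square input of side input_size (deepest head first)."""
    return [input_size // s for s in strides]

print(output_resolutions(416))  # [13, 26, 52]
```

The same relation shows why raising the input resolution enlarges all three maps proportionally, e.g. a 608 × 608 input would give 19 × 19, 38 × 38, and 76 × 76 maps.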

CmBN strategy
In object detection tasks, due to the limited memory capacity of the video cards, the BN strategy is inaccurate in estimating statistical information, which may easily lead to increased model errors. This drawback will be more pronounced in resource-constrained embedded devices and can significantly degrade model performance. To this end, the CmBN strategy is used to replace the BN strategy in the original YOLOv4-tiny model.
During model training, each batch of image data in the training set is evenly divided into several mini-batches and passed to the model for forward propagation. The weights of the model do not change until a single iteration is completed, so the statistics of different mini-batches in the same batch can be directly accumulated. The execution process of the CmBN strategy is as follows: 1) when calculating the mean and standard deviation of the ith mini-batch, the output information of the convolutional layer for the (i − 1)th mini-batch in the training set is combined; 2) the output of the convolutional layer for the ith mini-batch is transformed into a normal distribution with a mean of 0 and a variance of 1; 3) using learnable parameters, the normalized output of the convolutional layers for the ith mini-batch is linearly transformed to enhance its expressive ability.
The forward pass of the CmBN strategy can be written as follows (reconstructed from the variable definitions below):

$$\bar{\varsigma}_{j}^{\,l}=\frac{1}{(\tau+1)m}\sum_{k=j-\tau}^{j}\sum_{i=1}^{m}x_{k,i}^{l},\qquad v_{j}^{l}=\frac{1}{(\tau+1)m}\sum_{k=j-\tau}^{j}\sum_{i=1}^{m}\left(x_{k,i}^{l}\right)^{2},$$

$$\varphi_{j}^{l}=\sqrt{v_{j}^{l}-\left(\bar{\varsigma}_{j}^{\,l}\right)^{2}},\qquad \hat{x}_{j,i}^{l}=\frac{x_{j,i}^{l}-\bar{\varsigma}_{j}^{\,l}}{\sqrt{\left(\varphi_{j}^{l}\right)^{2}+\delta}},\qquad y_{j,i}^{l}=\gamma\,\hat{x}_{j,i}^{l}+\beta,$$

where l denotes the index of the convolutional layer, j denotes the index of the mini-batch, m is the mini-batch size, i is the index of a sample within the mini-batch, and τ is the number of preceding mini-batches whose statistics are accumulated. $x_{j,i}^{l}$ is the output of the lth convolutional layer for the ith sample of the jth mini-batch. $\bar{\varsigma}_{j}^{\,l}$ and $\bar{\varsigma}_{j-\tau}^{\,l}$ are the mean outputs of the lth convolutional layer for the jth and (j − τ)th mini-batches, respectively. $v_{j}^{l}$ is the mean of the squared outputs of the lth convolutional layer for the jth mini-batch, and $\varphi_{j}^{l}$ is the corresponding standard deviation. δ is a small constant added for numerical stability. $\hat{x}_{j,i}^{l}$ is the normalized output, γ and β are learnable scaling and translation parameters, and $y_{j,i}^{l}$ is the result obtained after performing CmBN on the output of the lth convolutional layer for the ith sample of the jth mini-batch.
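As an illustration of the accumulation idea, the following NumPy sketch pools first and second moments across the mini-batches of one iteration before normalizing, roughly following steps 1)–3) above; the function name and the simple pooling scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cmbn_forward(minibatches, gamma=1.0, beta=0.0, delta=1e-5):
    """Normalize each mini-batch with statistics pooled over all mini-batches
    seen so far in the current iteration (weights are frozen within an
    iteration, so per-mini-batch statistics can be accumulated directly)."""
    outputs = []
    sum_x, sum_x2, count = 0.0, 0.0, 0
    for x in minibatches:
        # 1) accumulate first and second moments across mini-batches
        sum_x += x.sum()
        sum_x2 += (x ** 2).sum()
        count += x.size
        mean = sum_x / count
        var = sum_x2 / count - mean ** 2
        # 2) normalize to zero mean and unit variance (delta for stability)
        x_hat = (x - mean) / np.sqrt(var + delta)
        # 3) learnable affine transform restores expressive ability
        outputs.append(gamma * x_hat + beta)
    return outputs

rng = np.random.default_rng(0)
batches = [rng.normal(2.0, 1.0, size=(8, 4)) for _ in range(4)]
normalized = cmbn_forward(batches)
```

Pooling statistics this way reduces the estimation noise that plain BN suffers from when mini-batches are small, which is exactly the memory-constrained setting targeted here.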

Target bounding box prediction
In the majority of DL-based target detection algorithms, the convolutional layers only extract the features of the targets and then pass them to a classifier or regressor for prediction. The YOLO series of algorithms instead uses 1 × 1 convolution kernels to complete the target prediction directly, so that the spatial dimensions of the prediction map equal those of the feature map. In YOLOv4-tiny, each grid cell of the prediction map predicts b = 3 bounding boxes, so a feature map with S × S grid cells yields a prediction tensor of dimensions S × S × [b × (5 + C)], where C is the total number of categories and b is the number of bounding boxes per cell; different targets have their own unique bounding boxes.

Each bounding box is described by the parameter vector $(t_x, t_y, t_w, t_h, t_o, p_1, \ldots, p_C)$, which consists of the center coordinates, the size, the target score, and the confidences of the C classes. The relationship between the predicted bounding boxes and the network output can be expressed as

$$b_x=\sigma(t_x)+c_x,\quad b_y=\sigma(t_y)+c_y,\quad b_w=p_w e^{t_w},\quad b_h=p_h e^{t_h},$$

where $(c_x, c_y)$ is the offset of the grid cell containing the predicted target center and $(p_w, p_h)$ is the anchor prior; the offsets $\sigma(t_x)$ and $\sigma(t_y)$ of the predicted target center relative to the grid cell are constrained to the [0, 1] interval by the sigmoid function.
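The standard YOLO decoding of these offsets can be sketched as follows (a hypothetical helper, with grid offset, anchor prior, and stride passed in explicitly):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(t_x, t_y, t_w, t_h, c_x, c_y, p_w, p_h, stride):
    """Map raw offsets (t_x, t_y, t_w, t_h) to a box center and size in
    input-image pixels; (c_x, c_y) is the grid cell, (p_w, p_h) the anchor."""
    b_x = (sigmoid(t_x) + c_x) * stride   # sigmoid keeps the offset in [0, 1]
    b_y = (sigmoid(t_y) + c_y) * stride
    b_w = p_w * math.exp(t_w)             # anchor prior scaled exponentially
    b_h = p_h * math.exp(t_h)
    return b_x, b_y, b_w, b_h

# zero offsets land in the middle of cell (6, 6) of the stride-32 (13 x 13) map
box = decode_box(0.0, 0.0, 0.0, 0.0, c_x=6, c_y=6, p_w=32, p_h=32, stride=32)
print(box)  # (208.0, 208.0, 32.0, 32.0)
```

The sigmoid guarantees the center stays inside its grid cell, while the exponential keeps the predicted width and height positive.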
The YOLOv4-tiny algorithm consists of two stages, training and testing. During the training stage, a mass of data is fed into the model; during the prediction stage, the candidate bounding boxes are used to determine whether any target falls into the candidate box. If a target falls within a bounding box, its class confidence is

$$\Pr(\text{class}_c \mid \text{object}) \times \Pr(\text{object}) \times \mathrm{IoU}(\text{pred}, \text{truth}).$$

In object detection, images vary in size and variety. To determine the initial positions of the candidate bounding boxes in YOLOv4-tiny, the K-means clustering algorithm is utilized. As an unsupervised learning method, K-means specifies K cluster centers and clusters the surrounding objects close to them; through repeated iterations, the cluster center values are updated. When the intra-class difference is small and the inter-class difference is large, the desired effect is achieved. The "difference" is measured by the distance from a sample point to the centroid of the class to which it belongs, generally the Euclidean distance:

$$D=\sum_{i=1}^{n}\sqrt{(x_i-\mu)^2},$$

where $x_i$ is a sample point within the class, μ is the centroid of the class, n is the number of samples in each class, and i is the index of each data point.
However, the appropriate selection of the K value in the K-means clustering algorithm is often difficult, and it directly affects the clustering quality. At the same time, the clustering results of the K-means method can in general only guarantee a locally optimal clustering and are sensitive to noise interference. In addition, it is difficult for the K-means algorithm to achieve an ideal clustering effect on non-convex data and on data with large differences in scale. In practice, the number of targets and the number of object categories in an image are generally unknown, and the objects may be distributed in a very scattered way; therefore, if clustering is performed with a fixed K value, the obtained center points may be too far from the actual positions.

In contrast, the DBSCAN clustering algorithm has no bias regarding the shape of the clusters and does not need the number of clusters as input; only the appearance radius of the target needs to be set. In addition, the impact of center points lying too far from the actual locations can be alleviated by clustering based on the density-reachability concept. Therefore, this article proposes a DB-K clustering method that combines the DBSCAN and K-means algorithms. First, the DBSCAN clustering algorithm obtains a good initial clustering without requiring center points, yielding several complete clusters. Then multi-scale clustering is added to further obtain the local and overall information of the object; specifically, multi-scale clustering is performed on the contour information of the object. After multi-scale clustering and convolution operations, the initial feature maps are obtained. Finally, these clusters are used as input data, and the K-means algorithm divides them so that accurate center point positions are obtained. This method effectively speeds up convergence on the dataset and promotes the classification accuracy for small objects.
In the DBSCAN clustering algorithm, the closeness of the sample distribution in the neighborhood is described by the parameters (ε, MinPts), where ε is the neighborhood radius and MinPts is the minimum number of samples required within that radius for a point to be a core point.

In the original YOLOv4-tiny, the seed points of the K-means clustering algorithm are chosen randomly, which increases the randomness of the clustering and can lead to a poor clustering effect. This article proposes a strategy to reduce the randomness of seed point selection. First, the K value is obtained through the error sum of squares (SSE) criterion,

$$\mathrm{SSE}=\sum_{k=1}^{K}\sum_{x\in C_k}\lVert x-\mu_k\rVert^2,$$

where $C_k$ is the kth cluster and $\mu_k$ its centroid, and the first cluster centers are then obtained through the K-means algorithm. Afterward, the obtained K clusters are analyzed and the closest clusters are merged, reducing the number of cluster centers; the number of clusters decreases correspondingly in the next clustering pass until an ideal number of clusters is obtained. After several iterations, when the convergence of the evaluation function reaches the expected value, the best clustering effect is obtained.

In an experiment, 400 random points are selected and tested with the K-means clustering algorithm and the proposed DB-K hybrid clustering algorithm, respectively. In the K-means clustering algorithm, the K value is specified as 3, and the results are shown in Figure 4(a). In the proposed DB-K clustering algorithm, the value of K is also 3 but is determined by the DBSCAN clustering algorithm, and the results are shown in Figure 4(b). It can be clearly seen that the method proposed in this article achieves a better clustering effect. Therefore, by integrating the DBSCAN clustering algorithm and improving the traditional K-means clustering algorithm, the classification performance of object detection in different datasets and complex environments is enhanced, and the robustness to noise and interference is also improved.
With the proposed DB-K clustering algorithm, the performance of neural network in object recognition is significantly improved.
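To make the DB-K idea concrete, here is a self-contained sketch (hypothetical, not the authors' code) in which a minimal DBSCAN fixes the number of clusters and seeds the centers, and K-means then refines the center positions; the multi-scale clustering step is omitted for brevity.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster label per point (-1 = noise)."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for p in range(n):
        if visited[p]:
            continue
        visited[p] = True
        neighbors = list(np.flatnonzero(dist[p] <= eps))
        if len(neighbors) < min_pts:
            continue  # not a core point; stays noise unless a cluster claims it
        labels[p] = cluster
        while neighbors:  # density-reachable expansion
            q = neighbors.pop()
            if labels[q] == -1:
                labels[q] = cluster
            if visited[q]:
                continue
            visited[q] = True
            q_nb = np.flatnonzero(dist[q] <= eps)
            if len(q_nb) >= min_pts:
                neighbors.extend(q_nb)
        cluster += 1
    return labels

def db_k(points, eps, min_pts, iters=10):
    """DB-K sketch: DBSCAN determines K and seeds the centers,
    then K-means refines the center positions."""
    labels = dbscan(points, eps, min_pts)
    ids = [c for c in np.unique(labels) if c != -1]
    centers = np.array([points[labels == c].mean(axis=0) for c in ids])
    for _ in range(iters):  # standard K-means refinement
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        centers = np.array([points[assign == k].mean(axis=0)
                            for k in range(len(centers))])
    return centers, assign
```

Because DBSCAN supplies both K and well-placed seeds, the K-means refinement converges in a few iterations and is far less sensitive to initialization than random seeding.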

Experiment results and analysis
The experiments in this article are conducted in a Linux environment configured with Ubuntu 18.04, compute unified device architecture (CUDA) 11.0, and the CUDA deep neural network library (CUDNN) 8.0. The hardware platform is equipped with an 8-core Intel 10400F CPU and a GTX 960 graphics card, with 16 GB RAM. During the experiments, the GPU is used to speed up the training process. The self-made dataset is randomly split into training and testing sets in a ratio of 7:3, and multiple groups of ablation experiments are set up to verify the effect of each improvement strategy on the model, thus obtaining the optimal model. To further verify the performance of the proposed algorithm, a comparative experiment with the most commonly used lightweight algorithms is conducted on the public PASCAL VOC07 + 12 [37,38] dataset, and the performance of different algorithms is compared in terms of mean average precision (mAP) and detection speed (FPS).

Experiment datasets
To analyze the recognition performance of the proposed lightweight model in crowded indoor spaces with occluded objects, two datasets were constructed: the public PASCAL VOC07 + 12 dataset, which includes 16,551 training images and 4,952 testing images, and a self-made dataset. The self-made dataset involves four different scenarios, with 10,432 images captured from elevator cars and 9,139 images captured from bus cabins, passenger aircraft cabins, and natural scenes. The training, validation, and test sets are partitioned in a ratio of 7:2:1. Among them, 70% of the pictures exhibit mutual occlusion or incomplete camera framing; in this article, such pictures are defined as complex images. They effectively prevent the overfitting caused by the single background of dense spaces during training and improve the generalization capability of the model in crowded indoor scenarios. The dataset defines three detection categories: person, electric bicycle, and bicycle. Baby carriages, trolleys, furniture, other goods, and pets are used as negative samples to enhance the reliability of the model. The LabelImg tool designed by Tzutalin [39] is used to mark the labeling area of each picture in proportion to the picture area, and the aspect ratio of the pictures is limited to within 3:1 to make the predicted bounding boxes fit the targets more closely. The statistics of the self-built dataset are shown in Table 1.

Objects in dense indoor scenes such as elevators and carriages are prone to mutual overlap and occlusion, which seriously affects the accuracy of target recognition. Aiming at this problem, this study introduces the Mosaic data augmentation technique [40] during model training to improve the model's capability to recognize occluded objects. Example images after the Mosaic process are shown in Figure 5.
Before each iteration, the DL framework not only acquires images from the training set but also generates new images through Mosaic data augmentation; the newly generated images and the original images are then combined and fed into the model for training. Mosaic data augmentation randomly selects four images from the training set, randomly crops them, and splices the cropped images in sequence to obtain a new image with the same resolution as the training set images. During random cropping, part of a target's bounding box may be cropped away, simulating the effect of the object being occluded. In addition, this study optimizes Mosaic data augmentation: the improved version uses the intersection over union (IoU) as an indicator and sets a threshold, chosen according to the standards used when calibrating the dataset, to filter the object bounding boxes in the newly generated image. The improved Mosaic strategy generates a new image according to the Mosaic steps and then filters the object bounding boxes in it. If the IoU between an object bounding box in the new image and the corresponding box in the original image is less than the threshold, the box is deleted and it is considered that there is no object there; otherwise, the box is retained and it is considered that a target is present.
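The IoU-based filtering step of the improved Mosaic strategy can be sketched as follows; the threshold value of 0.3 and the function names are illustrative assumptions, not values from the paper.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def filter_mosaic_boxes(pairs, threshold=0.3):
    """Keep a cropped box only if it still overlaps its original box enough;
    `pairs` holds (cropped_box, original_box) tuples."""
    return [crop for crop, orig in pairs if iou(crop, orig) >= threshold]
```

Boxes whose visible remainder is too small after cropping are thus discarded as labels, so the model is not trained to detect slivers that no longer look like the object.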

Evaluation metric
The inference speed (ms/frame) is used as the evaluation metric for model recognition speed; it is the time required for the model to recognize one image. The average precision (AP) is used as the evaluation metric for recognition accuracy. AP is calculated from the precision (P) and recall (R) of the model. P, R, AP, and mAP are calculated as follows:

P = TP / (TP + FP)
R = TP / (TP + FN)
AP = ∫₀¹ P(R) dR
mAP = (1/C) Σ_c AP(c)

where TP is the number of correctly detected positive samples, TN is the number of correctly detected negative samples, FP is the number of falsely detected positive samples, and FN is the number of falsely detected negative samples. P represents the percentage of correctly detected positive samples among all detected positive samples. R represents the percentage of correctly detected positive samples among all ground-truth positive samples. In the final evaluation, AP is a comprehensive measure for a single category: the greater the AP value, the better the accuracy for that category. mAP evaluates the entire network, where C is the number of classes contained in the whole dataset and c denotes a single class.
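The metric definitions above can be expressed directly in code. This is a minimal sketch: `average_precision` approximates the integral of P over R by a simple rectangular sum over sampled (R, P) points, since the paper does not state which AP interpolation variant it uses.

```python
def precision_recall(tp, fp, fn):
    """P = TP/(TP+FP), R = TP/(TP+FN)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

def average_precision(recalls, precisions):
    """Approximate AP = integral of P(R) dR over sampled points
    (recalls must be sorted ascending); a simple rectangular sum."""
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, precisions):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

def mean_average_precision(aps):
    """mAP = (1/C) * sum of per-class APs."""
    return sum(aps) / len(aps)
```

For example, with TP = 8, FP = 2, FN = 2, both P and R are 0.8, and averaging per-class APs of 0.8 and 0.6 gives an mAP of 0.7.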

Model training parameters
The model parameters in the experiment are set as follows: the input image size is 416 × 416; the number of epochs is 300; the batch size is 128 for the first 70 rounds and 32 for the last 230 rounds; the learning rate is 1 × 10⁻³ for the first 70 rounds and 1 × 10⁻⁴ for the last 230 rounds. Stochastic gradient descent (SGD) is employed for model training, with the learning rate decreased in stages as training progresses to improve convergence, and the momentum parameter is set to 0.9 to accelerate the convergence of the optimization process. The loss curves during training are shown in Figure 6.
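The two-stage schedule and the SGD-with-momentum update described above can be sketched as follows; this is an illustrative fragment, not the authors' training script, and the single-parameter update is only the scalar form of what a DL framework applies per weight.

```python
def training_schedule(epoch):
    """Batch size and learning rate per epoch, as set in the experiments:
    epochs 0-69 -> (batch 128, lr 1e-3); epochs 70-299 -> (batch 32, lr 1e-4)."""
    if epoch < 70:
        return {"batch_size": 128, "lr": 1e-3, "momentum": 0.9}
    return {"batch_size": 32, "lr": 1e-4, "momentum": 0.9}

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.9):
    """One SGD-with-momentum update for a single scalar weight:
    v <- momentum * v - lr * grad;  w <- w + v."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity
```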
As the epoch increases, the loss values of both models continue to decrease. After about 70 rounds of training, the loss curves stabilize, with no sign of underfitting or overfitting. The loss values of the original YOLOv4-tiny algorithm and the improved algorithm converge to around 2.3 and 1.9, respectively, which indicates that the recognition accuracy of the model improves steadily during training and that the hyperparameters of the proposed algorithm are set reasonably.

Comparison with the original YOLOv4-tiny
The test results of the original YOLOv4-tiny and the proposed model on the self-built dataset are shown in Table 2, which compares mAP, inference speed, and model size. Complex images account for 70% of the dataset and correspond to situations where targets occlude each other or the camera view is incomplete. The mAP values of the original YOLOv4-tiny model for recognizing complex images and all images in the test dataset are 75.47 and 84.25%, respectively, and the corresponding values of the proposed model are 7.94 and 8.32% higher. The neck network of the improved YOLOv4-tiny model reduces the loss of low-level feature map details and location information from the backbone network; the newly added feature maps have a smaller receptive field, which suits blurred and occluded targets and improves the model's ability to detect small objects. In addition, the DB-K clustering algorithm further improves the classification effect. Compared with the original YOLOv4-tiny, the size of the proposed model increases by 2 MB, and the inference time per image increases by 0.37 ms. The improved model only adds three convolutional layers, one upsampling operation, and one concatenation operation to the neck network, and the backbone network is unchanged. Therefore, the improved YOLOv4-tiny model maintains a fast inference speed while improving detection accuracy.
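The DB-K idea referred to above, i.e., fusing DBSCAN with K-means-style centroid estimation over ground-truth box sizes to obtain anchors, can be sketched as follows. The paper does not give the exact fusion, so this minimal pure-Python version, with illustrative `eps` and `min_pts` values and a plain mean as the centroid step, is an assumption-laden sketch rather than the authors' algorithm.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN over 2-D points (e.g. (box width, height) pairs).
    Returns one label per point; -1 marks noise. min_pts counts the
    point itself, as in the standard formulation."""
    n = len(points)
    labels = [None] * n

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1                # i is a core point: start a new cluster
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                queue.extend(j_nbrs)  # expand only from core points
    return labels

def cluster_centroids(points, labels):
    """Mean (w, h) of each DBSCAN cluster, usable as anchor seeds."""
    groups = {}
    for p, lab in zip(points, labels):
        if lab >= 0:
            groups.setdefault(lab, []).append(p)
    return [tuple(sum(c) / len(g) for c in zip(*g))
            for lab, g in sorted(groups.items())]
```

On two tight groups of box sizes plus one outlier, DBSCAN recovers the two groups and marks the outlier as noise, so the outlier does not drag the anchor centroids, which is the robustness-to-noise property the ablation study attributes to the clustering module.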
The mAP curves of the proposed model and the original YOLOv4-tiny on the PASCAL VOC dataset are shown in Figure 7. It can be observed that, with the proposed model, the APs for objects of various scales and different categories are improved significantly. For example, compared with the original model, the APs for categories containing a large number of small targets, such as potted plant, boat, and bird, are increased by 7, 4, and 3%, respectively. The results validate that the proposed algorithm performs better in scenes containing many occluded and small targets, further demonstrating its superiority.

Ablation study
The ablation analysis is performed on the original YOLOv4-tiny model combined with different improvement strategies, with training and performance evaluation carried out on the self-built dataset containing four types of scenes, to validate the contribution of each module to recognition accuracy while guaranteeing real-time performance. The test results of different module combinations are shown in Table 3; all models are trained using the proposed enhanced Mosaic technique. It can be seen that each improved module contributes to the overall performance to a different degree. Among them, the introduction of DBSCAN in Model 4 contributes the most: the mAP increases by 4.99%, which shows that the proposed clustering algorithm significantly improves the classification effect in complex environments while increasing robustness to noise and interference. The mAP of the original YOLOv4-tiny (Model 1) is 84.25%, and the mAP after the neck network optimization (Model 2) is 86.77%, an increase of 2.52%, indicating that this optimization strengthens the extraction of detailed information from shallow layers and makes the training for occluded objects more thorough. Model 3 replaces BN with CmBN on the basis of Model 2. BN can only use the output features from the convolution layers of the current mini-batch, so its statistics are not accurate enough, resulting in poorer performance. CmBN, on the other hand, expands the effective sample size by accumulating information across different mini-batches and makes the estimation of the statistics more accurate.

Comparison with other algorithms

The proposed model is further compared with several representative detection algorithms: Faster RCNN [30], YOLOv4 [36], YOLOv3-tiny [34], and YOLOv4-tiny [10].
In addition, the DL frameworks and the trained models are ported to the Jetson Nano and Raspberry Pi 4B to test the inference speed of the models on embedded hardware platforms; the results are listed in Table 4. It can be observed that the advantage of large networks is high detection accuracy. For example, the two-stage algorithm Faster RCNN, using ResNet50 as the backbone network, and the classic one-stage algorithm YOLOv4 achieve mAPs of 83.77 and 90.01%, respectively. However, these two models are too large, at 330 MB and 256 MB, respectively, making them difficult to deploy on mobile terminals with limited computing capacity. The mAP of the proposed method is comparable to that of the large network Faster RCNN, only 1.41% behind, while the size of the proposed model is only about 1/13 of that of Faster RCNN. Compared with the 64.3 million parameters of the YOLOv4 model, the proposed model has only 6.1 million parameters, less than 1/10. The advantage of lightweight networks is a balance between detection speed and accuracy, enabling real-time detection on mobile terminals, though their performance in complex scenes is relatively poor. Compared with the two popular lightweight models YOLOv3-tiny and YOLOv4-tiny, the proposed method achieves significantly higher mAP while satisfying real-time detection requirements. The proposed model does not introduce much extra computation or memory overhead during inference; it retains the advantages of a simplified structure and fast inference while ensuring high recognition accuracy. This demonstrates that the proposed model is suitable for deployment on embedded hardware platforms.
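The ms/frame speed measurements reported above can be reproduced with a small timing helper like the following sketch. The `infer` callable and warm-up count are illustrative assumptions; on real embedded hardware one would pass the model's forward pass and discard a few warm-up runs so one-off initialization cost is not counted.

```python
import time

def ms_per_frame(infer, frames, warmup=5):
    """Average inference time in ms/frame over a batch of inputs.
    `infer` is any callable that processes one frame; the first `warmup`
    frames are run once beforehand and excluded from the measurement."""
    for f in frames[:warmup]:
        infer(f)  # warm-up: caches, JIT, lazy allocation, etc.
    start = time.perf_counter()
    for f in frames:
        infer(f)
    return (time.perf_counter() - start) * 1000.0 / len(frames)
```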

Conclusion
To address the poor performance of the original YOLOv4-tiny algorithm in detecting occluded objects in dense indoor scenes, a modified target detection model is proposed based on the YOLOv4-tiny algorithm, with three improvements: 1) the neck network structure of the original YOLOv4-tiny model is modified so that the model can learn more information from occluded objects; 2) the CmBN strategy is used instead of the BN strategy, reducing the model error by accumulating the outputs of the convolutional layers across mini-batches; 3) the DBSCAN clustering algorithm is incorporated into the proposed network, and the anchor coordinates are then determined through the improved K-means clustering, further increasing the detection accuracy. Experimental results show that the mAP values of the proposed algorithm on the self-built dataset and the PASCAL VOC07 + 12 dataset are 92.57 and 82.36%, respectively, which are 8.32 and 6.87% higher than those of the original YOLOv4-tiny model. This demonstrates that the performance of the proposed algorithm is significantly improved over the original YOLOv4-tiny model. The inference speeds of the proposed algorithm on the embedded platforms Jetson Nano and Raspberry Pi 4B are 183 and 2,601 ms/frame, respectively, indicating that its processing speed can satisfy the requirements of different real-time applications: occluded targets can be detected quickly and accurately, and the proposed model is applicable to practical target detection in crowded indoor spaces. There is still room for improvement. Although the recognition accuracy of the proposed algorithm is greatly improved, it is limited by the lightweight nature of the backbone network: when performing detection tasks in general scenes, the detection accuracy for small objects is still lower than that of complex networks.
In future work, we will continue to optimize the backbone network. To reuse local features and strengthen the fusion of global features, the CBM and CBL modules within the CSP module could be connected in a dense connection structure, and the CBM modules in different CSP modules could be tensor-spliced after the upsampling operation to further integrate shallow and deep feature information. In addition, we will attempt to augment the experimental dataset for small object detection in general scenarios.
Funding information: This work was not supported by any fund projects.

Conflict of interest: The author declares that there is no conflict of interest regarding the publication of this article.
Data availability statement: The data used to support the findings of this study are included within the article.