Abstract: To address missed and false detections by helmet-wearing detection models in complex construction scenes with dense personnel, occlusion, and small target sizes, this paper proposes an improved helmet-wearing detection algorithm based on YOLOv8. Firstly, the CMUNeXtBlock module, built on large-kernel depthwise-separable convolution, is introduced; by combining depthwise-separable convolution with an inverted bottleneck, it improves the network's global awareness. Secondly, the C2FICB module is designed to replace the C2f module in the backbone network, fusing semantic features across different channels and spatial locations to strengthen the network's multi-scale generalization. Moreover, a P2 micro-scale target detection layer is added in the neck network to improve the network's ability to capture local features. Finally, a detection head based on receptive-field attention convolution (RFAConv), named RFAHead, is proposed to optimize the expression of spatial features and further strengthen the model's ability to extract global features. Experimental results on the Safety Helmet dataset show that, compared with the baseline model, the improved model raises mAP@0.5 by 5.2% and mAP@0.5-0.95 by 3.9%, effectively improving the accuracy of safety-helmet-wearing detection.
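The efficiency argument behind the large-kernel depthwise-separable design can be made concrete by counting weights. The sketch below is illustrative only (it is not the paper's code, and the 7x7 kernel and 64-channel sizes are assumed for the example): a standard k x k convolution costs k*k*C_in*C_out parameters, while factoring it into a depthwise k x k convolution followed by a 1x1 pointwise convolution costs only k*k*C_in + C_in*C_out, which is what makes large kernels affordable in blocks like CMUNeXtBlock.

```python
# Hedged sketch: parameter savings of depthwise-separable convolution
# (illustrative counts only; layer sizes are assumed, not from the paper).

def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Weights in a standard k x k convolution (bias terms ignored)."""
    return k * k * c_in * c_out

def dw_separable_params(k: int, c_in: int, c_out: int) -> int:
    """Depthwise k x k conv (one filter per channel) + 1 x 1 pointwise mix."""
    return k * k * c_in + c_in * c_out

# A large 7x7 kernel stays cheap when factored this way:
std = conv_params(7, 64, 64)          # 7*7*64*64 = 200704
sep = dw_separable_params(7, 64, 64)  # 7*7*64 + 64*64 = 7232
print(std, sep, round(std / sep, 1))  # roughly a 28x reduction
```

The ratio grows with both kernel size and channel count, which is why the depthwise-separable factorization pairs naturally with an inverted bottleneck that temporarily expands the channel dimension.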