A Helmet Detection Algorithm Based on Transformers with Deformable Attention Module
Abstract
Wearing a helmet is an effective measure for protecting workers' safety. To address the challenges of severe occlusion, multi-scale targets, and small targets in helmet detection, this paper proposes a helmet detection algorithm based on Transformers with deformable attention. The main contributions of this paper are as follows. First, a compact end-to-end network architecture for safety helmet detection based on Transformers is proposed. It removes the computationally intensive Transformer encoder of the existing DEtection TRansformer (DETR) and applies the Transformer decoder directly to the output of the feature extraction network for query decoding, which effectively improves the efficiency of helmet detection. Second, a novel feature extraction network named DSwin Transformer is proposed. Through sparse cross-window attention, it enhances the contextual awareness of the multi-scale features extracted by the Swin Transformer while maintaining high computational efficiency. Third, the proposed method generates query reference points and query embeddings from joint prediction probabilities, and selects an appropriate number of decoding feature maps and sparse sampling points for query decoding, which further improves the inference capability and processing speed. On the benchmark Safety-Helmet-Wearing-Dataset (SHWD), the proposed method achieves an average detection accuracy (mAP@0.5) of 95.4% with 133.35 GFLOPs at 20 FPS, achieving state-of-the-art performance for safety helmet detection.
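The sketch below is not the authors' implementation; it is a minimal illustration of the encoder-free, DETR-style decoding pipeline summarized in the abstract: backbone feature locations are scored, the top-k locations supply the query embeddings and reference points (the "joint prediction probabilities" step), and a Transformer decoder attends directly to the backbone output. The module name EncoderFreeDetectionHead, the use of a plain nn.TransformerDecoder in place of the paper's deformable-attention decoder, the two-class setting, and all dimensions are illustrative assumptions; the DSwin backbone itself is not modeled here.

```python
# Minimal sketch (assumed structure, not the paper's code) of encoder-free,
# DETR-style query decoding: score backbone features, pick top-k locations as
# queries/reference points, and decode them directly against the feature map.
import torch
import torch.nn as nn


class EncoderFreeDetectionHead(nn.Module):
    def __init__(self, feat_dim=256, num_queries=100, num_classes=2):
        super().__init__()
        # per-location class scores used to rank candidate query positions
        self.score_head = nn.Linear(feat_dim, num_classes)
        # plain Transformer decoder stands in for the deformable-attention decoder
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True),
            num_layers=6,
        )
        self.class_head = nn.Linear(feat_dim, num_classes)
        self.box_head = nn.Linear(feat_dim, 4)  # (cx, cy, w, h) before sigmoid
        self.num_queries = num_queries

    def forward(self, feats, coords):
        # feats:  (B, N, C) flattened multi-scale backbone features
        # coords: (B, N, 2) normalized (x, y) centers of those feature locations
        scores = self.score_head(feats).sigmoid().max(-1).values     # (B, N)
        topk = scores.topk(self.num_queries, dim=1).indices           # (B, K)
        idx = topk.unsqueeze(-1)
        queries = feats.gather(1, idx.expand(-1, -1, feats.size(-1)))  # query embeddings
        ref_points = coords.gather(1, idx.expand(-1, -1, 2))           # query reference points
        hs = self.decoder(tgt=queries, memory=feats)  # decode directly over backbone output
        delta = self.box_head(hs)
        # offset predicted centers by the gathered reference points, then squash to [0, 1]
        boxes = torch.cat([delta[..., :2] + ref_points, delta[..., 2:]], dim=-1).sigmoid()
        return self.class_head(hs), boxes


# Usage with random tensors standing in for flattened backbone output.
head = EncoderFreeDetectionHead()
feats = torch.randn(2, 1000, 256)
coords = torch.rand(2, 1000, 2)
logits, boxes = head(feats, coords)
print(logits.shape, boxes.shape)  # torch.Size([2, 100, 2]) torch.Size([2, 100, 4])
```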