Citation: J. Su, S. Mao, and W. Zhuang, “AOYOLO algorithm oriented vehicle and pedestrian detection in foggy weather,” Chinese Journal of Electronics, vol. x, no. x, pp. 1–11, xxxx. doi: 10.23919/cje.2023.00.280
[1] Z. X. Zou, K. Y. Chen, Z. Shi, et al., “Object detection in 20 years: A survey,” Proceedings of the IEEE, vol. 111, no. 3, pp. 257–276, 2023. doi: 10.1109/JPROC.2023.3238524
[2] Z. Q. Zhao, P. Zheng, S. T. Xu, et al., “Object detection with deep learning: A review,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 11, pp. 3212–3232, 2019. doi: 10.1109/TNNLS.2018.2876865
[3] R. Girshick, “Fast R-CNN,” in Proceedings of the 2015 IEEE International Conference on Computer Vision, Santiago, Chile, pp. 1440–1448, 2015.
[4] S. Q. Ren, K. M. He, R. Girshick, et al., “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 6, pp. 1137–1149, 2017. doi: 10.1109/TPAMI.2016.2577031
[5] J. Redmon, S. Divvala, R. Girshick, et al., “You only look once: Unified, real-time object detection,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, pp. 779–788, 2016.
[6] W. Liu, D. Anguelov, D. Erhan, et al., “SSD: Single shot MultiBox detector,” in Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, pp. 21–37, 2016.
[7] M. Hassaballah, M. A. Kenk, K. Muhammad, et al., “Vehicle detection and tracking in adverse weather using a deep learning framework,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 7, pp. 4230–4242, 2021. doi: 10.1109/TITS.2020.3014013
[8] Y. Guo, R. L. Liang, Y. K. Cui, et al., “A domain-adaptive method with cycle perceptual consistency adversarial networks for vehicle target detection in foggy weather,” IET Intelligent Transport Systems, vol. 16, no. 7, pp. 971–981, 2022. doi: 10.1049/itr2.12190
[9] X. Y. Wang and C. Wang, “Vehicle multi-target detection in foggy scene based on foggy env-YOLO algorithm,” in Proceedings of the IEEE 7th International Conference on Intelligent Transportation Engineering (ICITE), Beijing, China, pp. 451–456, 2022.
[10] M. Li, X. X. Ren, X. B. Hu, et al., “Attention-based radar and camera fusion for object detection in severe conditions,” in Proceedings of the 2022 International Conference on Frontiers of Communications, Information System and Data Science (CISDS), Guangzhou, China, pp. 117–121, 2022.
[11] C. Sakaridis, D. X. Dai, and L. Van Gool, “Semantic foggy scene understanding with synthetic data,” International Journal of Computer Vision, vol. 126, no. 9, pp. 973–992, 2018. doi: 10.1007/s11263-018-1072-8
[12] W. Y. Liu, G. F. Ren, R. S. Yu, et al., “Image-adaptive YOLO for object detection in adverse weather conditions,” in Proceedings of the 36th AAAI Conference on Artificial Intelligence, Virtual Event, pp. 1792–1800, 2022.
[13] P. Sen, A. Das, and N. Sahu, “Object detection in foggy weather conditions,” in Proceedings of the 4th International Conference on Intelligent Computing & Optimization, Cham, Switzerland, pp. 728–737, 2021.
[14] H. Dong, J. S. Pan, L. Xiang, et al., “Multi-scale boosted dehazing network with dense feature fusion,” in Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp. 2154–2164, 2020.
[15] V. A. Sindagi, P. Oza, R. Yasarla, et al., “Prior-based domain adaptive object detection for hazy and rainy conditions,” in Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, pp. 763–780, 2020.
[16] J. F. Wang, Y. Chen, Z. K. Dong, et al., “Improved YOLOv5 network for real-time multi-scale traffic sign detection,” Neural Computing and Applications, vol. 35, no. 10, pp. 7853–7865, 2023. doi: 10.1007/s00521-022-08077-5
[17] B. Y. Li, X. L. Peng, Z. Y. Wang, et al., “AOD-Net: All-in-one dehazing network,” in Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 4780–4788, 2017.
[18] G. Jocher, A. Chaurasia, and J. Qiu, “Ultralytics YOLOv8,” Available at: https://github.com/ultralytics/ultralytics, 2023.
[19] S. Liu, L. Qi, H. F. Qin, et al., “Path aggregation network for instance segmentation,” in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 8759–8768, 2018.
[20] M. Y. Ju, C. Ding, W. Q. Ren, et al., “IDE: Image dehazing and exposure using an enhanced atmospheric scattering model,” IEEE Transactions on Image Processing, vol. 30, pp. 2180–2192, 2021. doi: 10.1109/TIP.2021.3050643
[21] K. M. He, X. Y. Zhang, S. Q. Ren, et al., “Deep residual learning for image recognition,” in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 770–778, 2016.
[22] Q. B. Hou, D. Q. Zhou, and J. S. Feng, “Coordinate attention for efficient mobile network design,” in Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, pp. 13708–13717, 2021.
[23] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 7132–7141, 2018.
[24] S. H. Gao, M. M. Cheng, K. Zhao, et al., “Res2Net: A new multi-scale backbone architecture,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 2, pp. 652–662, 2021. doi: 10.1109/TPAMI.2019.2938758
[25] Y. C. Liu, Z. R. Shao, and N. Hoffmann, “Global attention mechanism: Retain information to enhance channel-spatial interactions,” arXiv preprint, arXiv: 2112.05561, 2021.
[26] Z. H. Zheng, P. Wang, D. W. Ren, et al., “Enhancing geometric factors in model learning and inference for object detection and instance segmentation,” IEEE Transactions on Cybernetics, vol. 52, no. 8, pp. 8574–8586, 2022. doi: 10.1109/TCYB.2021.3095305
[27] Z. H. Zheng, P. Wang, W. Liu, et al., “Distance-IoU loss: Faster and better learning for bounding box regression,” in Proceedings of the 34th AAAI Conference on Artificial Intelligence, New York, NY, USA, pp. 12993–13000, 2020.
[28] B. Y. Li, W. Q. Ren, D. P. Fu, et al., “Benchmarking single-image dehazing and beyond,” IEEE Transactions on Image Processing, vol. 28, no. 1, pp. 492–505, 2019. doi: 10.1109/TIP.2018.2867951
[29] G. Jocher, “Ultralytics/yolov5,” Available at: https://github.com/ultralytics/yolov5, 2020.
[30] C. Y. Wang, A. Bochkovskiy, and H. Y. M. Liao, “YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors,” in Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, pp. 7464–7475, 2023.
[31] Z. M. Wang, Z. H. Xue, X. H. Wu, et al., “Multi-scale feature fusion for vehicle detection in haze environment,” Computer Systems & Applications, vol. 32, no. 2, pp. 217–225, 2023. doi: 10.15888/j.cnki.csa.008957
[32] C. Lyu, W. W. Zhang, H. A. Huang, et al., “RTMDet: An empirical study of designing real-time object detectors,” arXiv preprint, arXiv: 2212.07784, 2022.
[33] Z. Ge, S. T. Liu, F. Wang, et al., “YOLOX: Exceeding YOLO series in 2021,” arXiv preprint, arXiv: 2107.08430, 2021.