Citation: Yanshan LI, Jiarong WANG, Kunhua ZHANG, et al., “Lightweight Object Detection Networks for UAV Aerial Images Based on YOLO,” Chinese Journal of Electronics, vol. x, no. x, pp. 1–13, xxxx. doi: 10.23919/cje.2022.00.300
[1] B. Rocke, A. Ruffell, and L. Donnelly, “Drone aerial imagery for the simulation of a neonate burial based on the geoforensic search strategy (GSS),” Journal of Forensic Sciences, vol. 66, no. 4, pp. 1506–1519, 2021. doi: 10.1111/1556-4029.14690
[2] I. K. Hung, D. Unger, D. Kulhavy, et al., “Positional precision analysis of orthomosaics derived from drone captured aerial imagery,” Drones, vol. 3, no. 2, article no. 46, 2019. doi: 10.3390/drones3020046
[3] U. Andriolo, G. Gonçalves, N. Rangel-Buitrago, et al., “Drones for litter mapping: An inter-operator concordance test in marking beached items on aerial images,” Marine Pollution Bulletin, vol. 169, article no. 112542, 2021. doi: 10.1016/j.marpolbul.2021.112542
[4] H. Gupta and O. P. Verma, “Monitoring and surveillance of urban road traffic using low altitude drone images: A deep learning approach,” Multimedia Tools and Applications, vol. 81, no. 14, pp. 19683–19703, 2022. doi: 10.1007/s11042-021-11146-x
[5] Y. S. Li, S. F. Chen, W. H. Luo, et al., “Hyperspectral image super-resolution based on spatial-spectral feature extraction network,” Chinese Journal of Electronics, vol. 32, no. 3, pp. 415–428, 2023. doi: 10.23919/cje.2021.00.081
[6] A. Jain, R. Ramaprasad, P. Narang, et al., “AI-enabled object detection in UAVs: Challenges, design choices, and research directions,” IEEE Network, vol. 35, no. 4, pp. 129–135, 2021. doi: 10.1109/MNET.011.2000643
[7] P. Mittal, R. Singh, and A. Sharma, “Deep learning-based object detection in low-altitude UAV datasets: A survey,” Image and Vision Computing, vol. 104, article no. 104046, 2020. doi: 10.1016/j.imavis.2020.104046
[8] G. Y. Tian, J. R. Liu, H. Zhao, et al., “Small object detection via dual inspection mechanism for UAV visual images,” Applied Intelligence, vol. 52, no. 4, pp. 4244–4257, 2022. doi: 10.1007/s10489-021-02512-1
[9] R. Walambe, A. Marathe, and K. Kotecha, “Multiscale object detection from drone imagery using ensemble transfer learning,” Drones, vol. 5, no. 3, article no. 66, 2021. doi: 10.3390/drones5030066
[10] Z. K. Li, X. L. Liu, Y. Zhao, et al., “A lightweight multi-scale aggregated model for detecting aerial images captured by UAVs,” Journal of Visual Communication and Image Representation, vol. 77, article no. 103058, 2021. doi: 10.1016/j.jvcir.2021.103058
[11] Y. Wang, “Survey on deep multi-modal data analytics: Collaboration, rivalry, and fusion,” ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 17, no. 1s, article no. 10, 2021. doi: 10.1145/3408317
[12] M. Sharma, M. Dhanaraj, S. Karnam, et al., “YOLOrs: Object detection in multimodal remote sensing imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 1497–1508, 2021. doi: 10.1109/JSTARS.2020.3041316
[13] Y. S. Li, H. J. Tang, W. X. Xie, et al., “Multidimensional local binary pattern for hyperspectral image classification,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–13, 2022. doi: 10.1109/TGRS.2021.3069505
[14] A. G. Howard, M. L. Zhu, B. Chen, et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint, arXiv:1704.04861, 2017.
[15] M. Sandler, A. Howard, M. L. Zhu, et al., “MobileNetV2: Inverted residuals and linear bottlenecks,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 4510–4520, 2018.
[16] A. Howard, M. Sandler, B. Chen, et al., “Searching for MobileNetV3,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), pp. 1314–1324, 2019.
[17] X. Y. Zhang, X. Y. Zhou, M. X. Lin, et al., “ShuffleNet: An extremely efficient convolutional neural network for mobile devices,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 6848–6856, 2018.
[18] N. N. Ma, X. Y. Zhang, H. T. Zheng, et al., “ShuffleNet V2: Practical guidelines for efficient CNN architecture design,” in 15th European Conference on Computer Vision, Munich, Germany, pp. 122–138, 2018.
[19] Y. S. Li, L. D. Fan, and W. X. Xie, “TGSIFT: Robust SIFT descriptor based on tensor gradient for hyperspectral images,” Chinese Journal of Electronics, vol. 29, no. 5, pp. 916–925, 2020. doi: 10.1049/cje.2020.08.007
[20] K. Han, Y. H. Wang, Q. Tian, et al., “GhostNet: More features from cheap operations,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp. 1577–1586, 2020.
[21] G. Jocher, K. Nishimura, T. Mineeva, et al., “YOLOv5,” Available at: https://github.com/ultralytics/yolov5, 2020.
[22] Y. S. Li, T. Y. Guo, X. Liu, et al., “Action status based novel relative feature representations for interaction recognition,” Chinese Journal of Electronics, vol. 31, no. 1, pp. 168–180, 2022. doi: 10.1049/cje.2020.00.088