2023, 32(2): 303-312.
doi: 10.23919/cje.2022.00.008
Abstract:
In adaptive optics systems, bad spots detected by the wavefront sensor degrade the wavefront reconstruction accuracy. A convolutional neural network (CNN) model is established to estimate the missing information at bad spots and reduce the reconstruction error of the distorted wavefront. Trained on 10,000 groups of spot array images and the corresponding 30th-order Zernike coefficient samples, the CNN learns the relationship between the light intensity image and the Zernike coefficients, and predicts the Zernike mode coefficients from the spot array image to restore the wavefront. For the wavefront restoration of 1,000 groups of test set samples, the root mean square (RMS) error between the predicted and true values remained at approximately 0.2 μm. Field wavefront correction experiments were carried out on three links of 600 m, 1.3 km, and 10 km. The wavefront peak-to-valley values corrected by the CNN decreased from 12.964 μm, 13.958 μm, and 31.310 μm to 0.425 μm, 3.061 μm, and 11.156 μm, respectively, and the RMS values decreased from 2.156 μm, 9.158 μm, and 12.949 μm to approximately 0.166 μm, 0.852 μm, and 6.963 μm, respectively. The results show that the CNN method predicts the missing sub-aperture wavefront information from the bad-spot image, reduces the wavefront restoration error, and improves the wavefront correction performance.
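The abstract describes a regression CNN that maps a spot-array intensity image to 30 Zernike mode coefficients. The sketch below is not the authors' code; it is a minimal illustration of that mapping, assuming a PyTorch implementation with an illustrative 128×128 single-channel input, arbitrary layer sizes, and an MSE training loss on (image, coefficient) pairs.

```python
# Minimal sketch (not the paper's implementation): a CNN that maps a
# Shack-Hartmann spot-array image to 30 Zernike mode coefficients.
# Input resolution, channel counts, and layer depths are assumptions.
import torch
import torch.nn as nn


class ZernikeCNN(nn.Module):
    def __init__(self, n_modes: int = 30):
        super().__init__()
        # Convolutional feature extractor over the spot-array intensity image
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 128 -> 64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 64 -> 32
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32 -> 16
        )
        # Regression head: flatten features and predict the Zernike coefficients
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, n_modes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


if __name__ == "__main__":
    model = ZernikeCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Dummy batch standing in for (spot-array image, 30 Zernike coefficients) pairs
    images = torch.randn(8, 1, 128, 128)
    coeffs = torch.randn(8, 30)

    pred = model(images)
    loss = loss_fn(pred, coeffs)
    loss.backward()
    optimizer.step()
    print("batch MSE loss:", loss.item())
```

In use, the predicted coefficients would weight the first 30 Zernike polynomials to reconstruct the wavefront; the paper's reported ~0.2 μm RMS prediction error refers to that reconstructed wavefront, not to this illustrative network.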