A 1064 nm single-photon lidar for three-dimensional imaging

Single-photon light detection and ranging (lidar) systems have been widely used in three-dimensional (3D) imaging because of their advantages in weak-echo detection and high resolution. However, long-range imaging remains a great challenge due to device performance limits and strong solar irradiance. In this paper, we experimentally demonstrate a single-photon imaging system operating at a wavelength of 1064 nm in daytime. An all-fiber optical system with a two-dimensional rotation platform is designed to realize wide-area scanning, and a sub-pixel scanning method is used to improve the spatial resolution. Image reconstruction is based on the iterative shrinkage-thresholding algorithm, in which the noise threshold adapts to the received photon-counting distribution. Multiple range returns can be retrieved from each pixel, and a 3D point cloud is finally generated. Results show that the range resolution is 38 cm and the spatial resolution is about 7.4 cm at a distance of 2.13 km, three times finer than the diffraction limit of the optical system.


Introduction
Light detection and ranging (lidar) systems based on time-correlated single-photon counting (TCSPC) have emerged as an attractive technology for acquiring three-dimensional images [1]. Both range and intensity information about objects can be obtained from extremely weak echo signals. By exploiting the excellent photon-flux sensitivity and high time resolution of TCSPC together with a short operating wavelength, lidar can achieve high spatial resolution, accurate ranging precision and strong immunity to interference [2,3]. In recent years, its applications have spanned long-range imaging [4,5], aerosol detection [6], topographic surveying [7,8] and so on.
Long-range imaging single-photon lidar was originally based on free-space optical components and Geiger-mode array detectors [9-12]. To improve the field of view and detection range, fiber components have been partially adopted in recent years. For example, NASA demonstrated an airborne lidar based on a 4x4 fiber array [13]. East China Normal University designed a photon-counting laser ranging system whose detection range reached 21 km [14]. The University of Science and Technology of China proposed a 1550 nm photon-counting lidar and realized imaging at 45 km [15]. These systems combined free-space and fiber optical components, which increased system instability. Additionally, the scanners of most reported single-photon lidar systems were reflective mirrors, which limited the scanning field of view.
Additionally, the reported single-photon lidar systems mainly operated at 532 nm, 1064 nm or 1550 nm. The choice of wavelength has a significant impact on system efficiency; for example, the solar background irradiance differs considerably among these wavelengths [16]. Nowadays, the commercial single-photon detectors with the best performance are InGaAs/InP and Si-based single-photon avalanche diodes (SPADs) [17], whose quantum efficiency at 1064 nm is higher than that at 1550 nm. Other reported techniques with relatively higher detection efficiency, including superconducting nanowire single-photon detectors (SNSPDs) and up-conversion SPDs, are mainly limited to laboratory use [18].
In this paper, we design a lidar system operating at 1064 nm. An all-fiber optical system with a two-dimensional rotation platform is designed, and a sub-pixel scanning method is used to realize wide-view, high-resolution scanning. Finally, 3D images with clear details are reconstructed through a self-adaptive noise-threshold reconstruction algorithm.

Experiment setup
In our lidar system, a pulsed laser source emits 1064 nm pulses that are coupled into a collimator to illuminate the scene (see Figure 1). The repetition rate of the laser is 40 kHz and the pulse width is 5 ns. The reflected light, mixed with solar background noise, is received by a telescope after passing through a 1 nm bandpass filter. The diameter of the telescope is 20 mm and its focal length is 40 mm. The receiving field of view is 225 μrad, nearly twice the illuminating field. Both the collimator and the telescope are installed on a two-axis scanning platform to realize azimuth and pitch scanning. The weak echo signal is detected by a Si-based SPAD detector with a dark count rate of 100 cps and a dead time of 23 ns. The TCSPC module records the output (start) time t0 of the laser pulse and the receiving (stop) time t1 of the echo pulse, with a time resolution of 512 ps. The distance d between the object and the lidar system can be calculated as d = Δt · c / 2, where c is the speed of light and Δt = t1 − t0 is the time difference between the two, also called the time of flight (ToF). During the measurement, all devices are controlled by the computer.
Generally, the spatial resolution of a static system is determined by the divergence of the field of view (FOV), which is designed to be larger than the diffraction limit. For a fiber system, the FOV is approximately the fiber core diameter D divided by the focal length f:

FOV ≈ D / f.

The emitting fiber of our system is a 10/125 μm large-mode-area fiber and the focal length of the collimator is 80 mm, so the FOV can be calculated as 125 μrad. In long-range imaging, even a small divergence angle results in a large projection on the object [see Figure 2(a)]; for a divergence angle of 125 μrad, the spatial resolution is about 26 cm at 2.1 km. A sub-pixel scanning method can be used to retrieve more object details [19].
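As a quick sanity check on the numbers above, the ToF-to-range conversion and the fiber-limited FOV can be sketched in a few lines of Python (the function names are ours, not part of the system software):

```python
# Geometry sanity checks for the values quoted in the text.
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(delta_t_s: float) -> float:
    """Range from time-of-flight: d = delta_t * c / 2."""
    return delta_t_s * C / 2.0

def fiber_fov_rad(core_diameter_m: float, focal_length_m: float) -> float:
    """Approximate FOV of a fiber-coupled collimator: D / f."""
    return core_diameter_m / focal_length_m

fov = fiber_fov_rad(10e-6, 80e-3)       # 10 um core, 80 mm collimator
print(fov * 1e6)                        # ~125 urad
print(fov * 2.13e3)                     # footprint at 2.13 km, ~0.27 m
print(tof_to_range(512e-12))            # one 512 ps time bin, ~7.7 cm of range
```

One 512 ps timing bin corresponds to about 7.7 cm of range, so the reported 38 cm range resolution spans roughly five bins.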
As shown in Figure 2(b), the scanning step is set to 0.002° (around 35 μrad) in both the azimuth and pitch directions, smaller than one-third of the FOV, so that the resolution is improved to 7.3 cm. During the scanning, the beam pointing is corrected in real time according to feedback from the angular sensor. The scene is scanned line by line, with the azimuth axis moving faster than the pitch axis [see Figure 2(c)]. After the entire scene has been scanned, a reconstruction algorithm is used to realize three-dimensional imaging.
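The line-by-line raster described above can be sketched as follows; this is a hypothetical illustration of the scan pattern (the grid limits and the `raster` helper are our own), not the platform's control code:

```python
def raster(az_start, az_stop, pitch_start, pitch_stop, step=0.002):
    """Yield (azimuth, pitch) pointing angles in degrees, line by line.

    Azimuth is the fast axis and pitch the slow one, matching the
    scan order in Figure 2(c); step = 0.002 deg is the sub-pixel step.
    """
    n_az = int(round((az_stop - az_start) / step)) + 1
    n_pitch = int(round((pitch_stop - pitch_start) / step)) + 1
    for j in range(n_pitch):          # slow pitch axis
        for i in range(n_az):         # fast azimuth axis
            yield (az_start + i * step, pitch_start + j * step)

# A small demo grid (a real 4 deg x 3.2 deg scan has millions of pixels).
points = list(raster(0.0, 0.02, 0.0, 0.01))
print(len(points))                    # 11 x 6 = 66 pointing angles
```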

Reconstruction algorithm
Although the narrow-band filter blocks the majority of the solar irradiance, the background noise is still strong in daytime. For each pixel, the photon detection rate function λ(t) in the continuous time domain can be described as

λ(t) = (x ∗ s)(t) + b,

where x(t) is the impulse response of the scene, s(t) is the normalized detection response function of the transmitted pulse, ∗ denotes convolution, and b is the noise generated by background irradiance and the dark counts of the detector. When the laser spot is partially reflected or partially blocked, multiple range returns can be collected at one pixel [20]. For k reflectors, the response of the natural scene at a pixel can be written as

x(t) = Σ_{i=1}^{k} a_i δ(t − t_i),

where a_i and t_i are the reflectivity and ToF of the i-th reflector. In the discrete domain, Y is the photon histogram data directly recorded by the TCSPC and S is the normalized Gaussian profile of the laser pulse. The noise constant B can be retrieved from the mean of the photon histogram data in the system blind area (within 700 m of the laser source), and the noise threshold is adapted to B: in the low-noise case (B < 0.04) the threshold is set to at least ten times B, while in the high-noise case it is set above twenty times B. The resulting convex optimization problem can then be solved by an iterative shrinkage-thresholding algorithm [20]. After the range and intensity information has been retrieved, as shown in Figure 3, the correlation between each pixel and its eight neighbours is evaluated for image smoothing [17], further improving the quality of the range and intensity images. Finally, together with the pointing data, 3D point clouds are generated to give a direct view of the reconstructed image. Geometry correction may be used to deal with tilt or deformation problems.
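The per-pixel deconvolution can be illustrated with a minimal numpy sketch of a discretized version of the model above. The 10x/20x threshold rule follows the text; the array sizes, pulse width and test amplitudes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ista_deconvolve(y, S, B, n_iter=300):
    """Sparse impulse response x from a histogram y ~= S @ x + B via ISTA."""
    thresh = (10.0 if B < 0.04 else 20.0) * B      # self-adaptive noise threshold
    L = np.linalg.norm(S, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(S.shape[1])
    for _ in range(n_iter):
        x = x - S.T @ (S @ x + B - y) / L          # gradient step on the data fit
        x = np.maximum(x - thresh / L, 0.0)        # soft threshold, x kept non-negative
    return x

n = 200
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - n // 2) / 4.0) ** 2)
pulse /= pulse.sum()                               # normalized pulse shape s(t)
S = np.stack([np.roll(pulse, k - n // 2) for k in range(n)], axis=1)

x_true = np.zeros(n)
x_true[60], x_true[120] = 5.0, 3.0                 # two returns in one pixel
y = S @ x_true + 0.01                              # flat background B = 0.01
x_hat = ista_deconvolve(y, S, B=0.01)
print(int(np.argmax(x_hat)))                       # bin of the strongest recovered return
```

Both returns survive the thresholding, which is how multi-range information is retrieved from a single pixel.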

Results
The experiments were performed in daylight with a visibility of 7.98 km. The target scene was a pylon on top of a forested mountain about 2.1 km from the transmitter. Our device, including the optical system, servo, electrical instruments and computer, was installed in a mobile cabinet sheltered in a building (see Figure 4). Figure 5 is a visible-band image of the target scene taken by a mobile phone at the location of the lidar system.
Figure 5. Photo of the target scene taken by a visible-band camera.
The first experiment was performed with a scanning angle step of 0.008° (about 140 μrad, close to the static FOV). The entire scanning scope was 4° in the azimuth direction and 3.2° in the pitch direction. Acquisition started once the platform had rotated to the specified angle, with an acquisition time of 10 ms per pixel. Figure 6(a) is the intensity image, in which the reflected intensity of the frame structure of the pylon is obviously lower than that of the forest and the circular reflectors of the pylon. One reason is that the width of the frame is smaller than the projected width of the emitted laser spot, so part of the energy is not reflected. The range estimate at maximum intensity is shown in Figure 6(b). The pylon is about 2.13 km from the lidar system, so the spatial resolution can be calculated as 0.3 m. Figure 6(c) is an enlarged image of the pylon, in which the frame structure is difficult to distinguish. The range information associated with the secondary intensity is also derived, in Figure 6(d). The forest and the pylon frame structure contain partially occluding objects, so light is reflected at different ranges within a single pixel. From the range and pointing data, we calculate the three coordinate values encoded by each pixel and reconstruct 3D point clouds. As shown in Figure 6(e), the x-y plane represents the horizontal ground and the z value is the height above the ground. In the right side view, the canopy of the trees and the pylon on the top of the mountain are clearly identifiable. The minimal range separation is 0.38 m among all pixels with two range returns; the photon-counting histogram of such a pixel is plotted in Figure 7. To improve the spatial resolution, the sub-pixel scanning method is used: the scanning angular step is 0.002° (about 35 μrad, one-third of the static FOV) and the per-pixel acquisition time remains 10 ms. The pylon is shown with more detail in Figure 8, where the frame structure can be distinguished clearly.
In this case, the spatial resolution is improved to 7.4 cm at a distance of 2.13 km, three times finer than the static resolution of the optical system.
Another significant factor influencing image quality is the acquisition time. We extracted photon counts with an accumulation time of 1 ms from the 10 ms data. Figure 9 shows the photon-count histograms of the same pixel for different accumulation times. When the per-pixel acquisition time is 10 ms, the noise constant derived from the photon-count histogram is 0.080 and the peak signal photon count is 15, while for an acquisition time of 1 ms, the noise constant is 0.008 and the peak value is 3. Clearly, the acquisition time has a significant impact on the signal-to-noise ratio.
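The effect of dwell time can be reproduced qualitatively with a toy Poisson model. The per-shot probabilities below are illustrative guesses chosen to roughly match the histograms described above, not measured system parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
REP_RATE = 40e3                          # laser repetition rate, Hz (from the text)

def histogram(dwell_s, p_sig=0.04, p_noise=0.0002, n_bins=50, peak=25):
    """Simulated per-pixel photon-count histogram for one dwell time."""
    shots = int(REP_RATE * dwell_s)      # pulses accumulated per pixel
    lam = np.full(n_bins, p_noise)       # flat background rate per bin per shot
    lam[peak] += p_sig                   # signal concentrated in one bin
    return rng.poisson(lam * shots)

h10 = histogram(10e-3)   # 400 shots: expected peak ~16 over a ~0.08 noise floor
h1 = histogram(1e-3)     # 40 shots: expected peak ~1.6, comparable to fluctuations
print(h10[25], h1[25])
```

The signal mean grows linearly with dwell time while the Poisson fluctuations grow only as its square root, which is why the 1 ms histograms in Figure 9 are so much noisier.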
The image reconstruction results with a per-pixel acquisition time of 1 ms are shown in Figure 10. Although the frame structure can still be recognized, some details of the object are missing and the point clouds of the object are sparse. Therefore, the quality of the reconstructed image depends on both a high scanning angular resolution and an adequate acquisition time.

Conclusions
This paper presents an all-fiber single-photon lidar operating at a wavelength of 1064 nm for long-range imaging. The experimental results demonstrate that 3D point clouds of the scene can be recovered despite the strong solar background in daytime. The range resolution is 38 cm and the spatial resolution is about 7.4 cm at a distance of 2.13 km, three times finer than the static resolution of the system. In future work, the number of emitted beams could be increased to widen the field of view and realize larger-area imaging in a shorter time.