
Characterisation of a multi-view fringe projection system based on the stereo matching of rectified phase maps


Published 16 February 2021 © 2021 The Author(s). Published by IOP Publishing Ltd
Citation: A Shaheen et al 2021 Meas. Sci. Technol. 32 045006. DOI: 10.1088/1361-6501/abd445


Abstract

Multi-view fringe projection systems can address the limited field of view, line-of-sight issues and occlusions associated with single camera–projector systems when measuring the geometry of complex objects. However, characterisation of a multi-view system is challenging since it requires the cameras and projectors to be in a common global coordinate system. We present a method for characterising a multi-view fringe projection system which does not require the characterisation of the projector. The novelty of the method lies in determining the correspondences in the phase domain using the rectified unwrapped phase maps and triangulating the matched phase values to reconstruct the three-dimensional shape of the object. A benefit of the method is that it does not require registration of the point clouds acquired from multiple perspectives. The proposed method is validated by experiment and by comparison with a conventional system and a contact coordinate measuring machine.


Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Fringe projection is a structured light method that is used for industrial three-dimensional (3D) measurement due to its fast acquisition rates and non-contact, non-destructive nature. Fringe projection has been used in a variety of sectors, such as manufacturing quality control [1, 2], biomedicine [3], reverse engineering [4], aerospace [5] and automotive [6]. However, commercially available fringe projection systems based on a single camera–projector pair have limitations when acquiring the 3D form in one acquisition, due to the small field of view of the camera, the frequent presence of occlusions and potentially high slope angles, especially for the freeform geometries of additively manufactured (AM) parts [7]. A possible solution to alleviate these limitations is to include multiple cameras and projectors and acquire views from multiple perspectives. Multi-view systems have become an emerging research area in 3D form measurement. However, multi-view systems are more complex than single camera–projector systems and require not only the individual components to be characterised but also the structural relationship between the components (cameras and projectors) to be determined, in order to define a global coordinate system and merge the data from multiple perspectives.

In multi-view systems, the characterisation (often called calibration in practice) has a decisive influence on system performance and on the accuracy of 3D surface reconstruction. Abedi et al proposed a method for the geometric calibration and rectification of a circular multi-camera system using a pyramid object with symmetric triangles and opposite colours; the method processes all the cameras simultaneously and solves the issue of error accumulation [8]. Liu et al used a 3D target, scanned by lidar, to characterise a multiple depth camera system; the method determines the relative orientation between cameras with limited overlapping fields of view and unifies the multi-camera coordinates in the same coordinate system [9]. Sun et al developed a method for the global characterisation of a multi-camera system using a group of spherical targets; this one-time operation can globally characterise all the cameras with non-overlapping fields of view and avoids the extensive workload and accuracy loss caused by repeated processes [10]. A flexible method for the global characterisation of multiple cameras using a transparent glass checkerboard was proposed by Feng et al [11]; the method utilises a refractive projection model and the concept of ray tracing to eliminate refraction errors and achieve high accuracy.

The characterisation of a multi-view fringe projection system is based on determining both the intrinsic and extrinsic properties of the cameras and projectors and bringing them into the global frame of reference. A common approach to multi-view system characterisation is the extension of the methods for characterising single camera–projector systems, proposed by Tsai, Zhang, and Huang [12–15], where each camera is characterised with an accurately manufactured target (for example, a checkerboard or circle board) and the relationship between the multiple views is obtained by global optimisation of the extrinsic parameters of all the views. Albers et al presented a flexible characterisation method for a multi-sensor fringe projection system by incorporating Scheimpflug optics for the cameras and defining a common world coordinate system using a planar target [16]. Gai et al proposed an easy-to-use characterisation of a multi-view fringe projection system, where digital fringe projection and phase maps are used to achieve global characterisation [17]. Gdeisat et al [18] and Deetjen et al [19] developed global characterisation methods for multiple camera–projector systems, whereby the cross-talk between camera–projector pairs is avoided by using a particular light bandwidth (RGB optical colour filters); Deetjen et al also demonstrated the technique for the high-speed 3D reconstruction of a flying bird. Servin et al combined co-phased profilometry with 2-step temporal phase unwrapping, and measured an industrial metallic discontinuous object coated with white-matte paint to reduce specular reflection [20, 21]. A co-phased 360-degree profilometer based on two projectors and one camera, which can measure highly discontinuous objects, was also proposed [22]; a plastic skull was measured by rotating it in discrete angular steps.

In typical fringe projection systems, the projector is modelled as an inverse camera: the camera captures images on behalf of the projector, and the transformation from camera image pixels to projector image pixels is carried out by a phase-stepped fringe projection technique [23–26]. However, if the camera pixels are not aligned with the projector pixels, this can lead to a mapping error. In general, any error in the camera characterisation is transferred to the projector characterisation, which can significantly affect the performance and accuracy of the fringe projection system. In this paper, we present a novel method to characterise a multi-view fringe projection system which does not require projector characterisation; therefore, the influence of the mapping error is removed. The proposed method depends on stereo matching between rectified unwrapped stereo phase maps, based on epipolar constraints. In general, stereo vision and fringe projection methodologies are combined to acquire a dense disparity map, which is incorporated with the stereo-camera characterisation information for 3D surface reconstruction [27–30]. However, the proposed method relies on determining the correspondences in the phase domain: the absolute phase maps are acquired through the fringe projection method, and the matched phase points in the stereo phase maps are triangulated for 3D reconstruction. The effectiveness of the proposed method is determined by finding the point-to-point distance deviations between the point clouds acquired from different views. The results are compared with contact coordinate measuring machine (CMM) measurements, which serve as a reference for the dimensional measurements. A comparison of the proposed method with the conventional method of characterising a multi-view fringe projection system is also presented.

2. Methodology

The methodology of this work falls into five main stages: Step A—Camera characterisation, Step B—Phase map by fringe projection, Step C—Rectification of the unwrapped phase maps, Step D—Stereo matching of the rectified unwrapped phase maps and Step E—Three-dimensional reconstruction. The schematic is shown in figure 1.

Figure 1. Schematic diagram of the characterisation method of the multi-view fringe projection system.

Step A—Camera characterisation: The camera characterisation is performed by placing a checkerboard in the field of view of all the cameras (there are four in our example set-up—see section 3). Images of the checkerboard at different orientations are captured by all cameras. Each camera is characterised separately using a pinhole camera model [12–15]. The intrinsic and extrinsic parameters of each camera are determined using an image processing algorithm developed in MATLAB [31].
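As an illustration of this step, the following Python/OpenCV sketch performs the per-camera characterisation (the study itself used a MATLAB implementation [31]); the checkerboard inner-corner count and file paths are assumptions made for the example.

```python
# Illustrative sketch (not the authors' MATLAB code): per-camera intrinsic
# characterisation from checkerboard images using OpenCV.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)        # assumed inner-corner count of the checkerboard
SQUARE = 4.0            # checker size in mm (as in section 3.1)

# World coordinates of the corners on the (planar) board, z = 0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for fname in sorted(glob.glob("cam1/checkerboard_*.png")):   # assumed path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# Pinhole model with radial/tangential distortion; rms is the reprojection error
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```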

After characterising each camera individually, the stereo-camera parameters are generated using the camera characterisation information [32–39]. In general, any number of pairs could be used; however, due to the lack of a common field of view across all cameras and the area illuminated by the structured light, we have considered adjacent camera pairs and treated the multi-view system as two sets of stereo-camera pairs. The transformation between the cameras of each stereo pair is given by [40]:

$$R = R_{2}\,(R_{1})^{T}, \qquad T = T_{2} - R_{2}\,(R_{1})^{T}\,T_{1} \tag{1}$$

$$R = R_{4}\,(R_{3})^{T}, \qquad T = T_{4} - R_{4}\,(R_{3})^{T}\,T_{3} \tag{2}$$

where R and T are the relative rotation and translation of each stereo-camera pair, and $R_i$ and $T_i$ are the extrinsic parameters that describe the transformation from the world coordinate system to the coordinate system of camera i. The superscript T in $(R_1)^T$ and $(R_3)^T$ represents the transpose. The relative orientation and location in each stereo-camera pair is defined with respect to the first checkerboard position, which is in the common field of view of the cameras and corresponds to the same global coordinate system. Because the first dataset has the checkerboard in the field of view of all cameras, the checkerboard does not need to be visible to all cameras at all times.
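A minimal sketch of equations (1) and (2), assuming the convention $x_{\textrm{cam}} = RX_{w} + T$ for the extrinsic parameters; the function name is hypothetical.

```python
import numpy as np

def stereo_extrinsics(R1, T1, R2, T2):
    """Relative pose of camera 2 with respect to camera 1, composed from the
    per-camera extrinsics of one shared checkerboard position (equations (1)
    and (2)). R1, R2: 3x3 rotations; T1, T2: 3-vectors."""
    R = R2 @ R1.T            # equation (1)
    T = T2 - R @ T1          # equation (2)
    return R, T
```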

Step B—Phase map by fringe projection: A fringe projection system can be mathematically modelled as a stereo-camera system and relies on triangulation of common points between the projector and the camera. Essentially, one camera in the stereo pair is replaced with a projector and the correspondence is determined by the characteristics of the projected structured light. In this work, the method of phase encoding is based on the phase-stepped fringe projection method [26]. A set of phase-stepped sinusoidal and binary encoded fringe patterns [41] is projected onto the surface of the object being measured. Different phase offsets are applied to the sinusoidal pattern and an image is captured at each step. The phase value at any particular pixel can be determined from the captured N phase-stepped sinusoidal images [23–26]. The retrieved phase is wrapped modulo 2π and is unwrapped by removing the 2π discontinuities to acquire a continuous phase map.
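For N phase-stepped images of the form $I_n = A + B\cos(\phi + 2\pi n/N)$, the wrapped phase follows from the standard arctangent formula; a minimal Python sketch under that intensity model (an illustration, not the authors' code):

```python
import numpy as np

def wrapped_phase(images):
    """images: (N, H, W) stack of phase-stepped sinusoidal fringe images,
    I_n = A + B*cos(phi + 2*pi*n/N). Returns phi wrapped to (-pi, pi]."""
    N = images.shape[0]
    n = np.arange(N).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
    # num = -(N*B/2)*sin(phi) and den = (N*B/2)*cos(phi), hence the minus sign
    return -np.arctan2(num, den)
```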

We have used temporal phase unwrapping [42–45] to produce an absolute phase map. In temporal unwrapping, the fringe order is encoded into binary fringes and projected onto the object, and an absolute unwrapped phase map is acquired. A modification to the binary fringes is introduced by converting the binary values to greyscale values, which simplifies the search for 2π discontinuities in the phase map with respect to the neighbouring pixels. The unwrapping errors in the retrieved phase maps are corrected using a filtering algorithm that convolves the unwrapped phase map with a Sobel edge kernel and removes the random spikes and dips in the phase map.
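A sketch of the unwrapping and spike-removal steps; the exact filtering algorithm is the authors' own, so the threshold and the median-based replacement below are assumptions:

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

def unwrap_temporal(phi_wrapped, fringe_order):
    """Absolute phase from the wrapped phase and the per-pixel fringe order
    decoded from the binary (greyscale-converted) patterns."""
    return phi_wrapped + 2 * np.pi * fringe_order

def remove_spikes(phi, threshold=np.pi):
    """Flag pixels with anomalously steep phase gradients (spikes/dips) by
    convolving with Sobel kernels, then replace them with a local median."""
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = convolve(phi, sobel_x)
    gy = convolve(phi, sobel_x.T)
    bad = np.hypot(gx, gy) > threshold
    return np.where(bad, median_filter(phi, size=3), phi)
```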

Step C—Rectification of the unwrapped phase maps: One approach to triangulating a large number of points is to rectify the stereo images and estimate the disparity map. Rectification is a transformation applied to the images to project them onto a common plane; it can account for camera distortion and for a non-coplanar stereo-camera pair [46, 47]. A schematic of the rectification process is shown in figure 2: image rectification transforms each image such that the epipolar lines (shown as dotted lines in figure 2) are parallel. The epipolar lines are given by:

$$l_{R} = F\,\phi_{L} \tag{3}$$

$$l_{L} = F^{T}\,\phi_{R} \tag{4}$$

where F is the fundamental matrix [47] and the image points are expressed in homogeneous coordinates. Corresponding points on the epipolar lines have the same row coordinate in both phase images, based on the epipolar constraint:

$$\phi_{R}^{\,T}\,F\,\phi_{L} = 0 \tag{5}$$

The advantage of the rectification process is that the search for 2D correspondences reduces to a single line (1D search problem) [40].

Figure 2. Schematic diagram of the rectification of the stereo-camera phase images. φL and φR are the projections of 3D point P onto the left and right camera phase images respectively. lL and lR are the epipolar lines of the left and right camera phase images respectively. OL and OR are the left and right cameras' optical centres respectively.

The phase maps captured by the adjacent cameras (Step B—Phase map by fringe projection) are treated as stereo phase maps ($\phi_{L},\phi_{R}$), as shown in figure 2. An automated algorithm pre-processes the phase maps: it denoises them via Gaussian smoothing (a low-pass filter that attenuates high-frequency components), undistorts them to account for radial and tangential lens distortion and, using the stereo parameter information, produces the rectified versions of the left and right phase maps φL and φR, respectively (shown as blue coloured phase images in figure 2).
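A sketch of this pre-processing and rectification pipeline using OpenCV (illustrative; the kernel size and interpolation choices are assumptions). K1, d1, K2, d2, R and T denote the stereo characterisation results from Step A, and the phase maps are float32 images:

```python
import cv2

def rectify_phase_pair(phi_L, phi_R, K1, d1, K2, d2, R, T):
    """Denoise, undistort and rectify a stereo pair of unwrapped phase maps.
    Returns the rectified maps and the rectified projection matrices."""
    size = phi_L.shape[::-1]                    # (width, height)
    phi_L = cv2.GaussianBlur(phi_L, (5, 5), 0)  # low-pass denoising
    phi_R = cv2.GaussianBlur(phi_R, (5, 5), 0)
    R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2,
                                                      size, R, T)
    mLx, mLy = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    mRx, mRy = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect_L = cv2.remap(phi_L, mLx, mLy, cv2.INTER_LINEAR)
    rect_R = cv2.remap(phi_R, mRx, mRy, cv2.INTER_LINEAR)
    return rect_L, rect_R, P1, P2
```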

Step D—Stereo matching of the rectified unwrapped phase maps: For the stereo matching of the rectified unwrapped phase maps, an automated image processing algorithm was developed. The rectification of the phase images limits the search for correspondences to a single line (the epipolar line). By taking the same line in both rectified unwrapped phase images, the search for correspondences is accomplished by determining the points at which the phase values match; the output of this process is a disparity map. An iterative approach to disparity estimation is used by taking each phase value in the left camera phase image $\phi_{L-\textrm{rect}}(x1_{1,\ldots,N}, y1_{1,\ldots,N})$ and comparing it with the corresponding line (epipolar line) of the right camera phase map $\phi_{R-\textrm{rect}}(x2_{1,\ldots,N}, y2_{1,\ldots,N})$ using a nearest-neighbour search. For the k-nearest-neighbour search (k = 2), the nearest neighbours in the right phase image row to each query point (point in the left phase image) are found using an exhaustive search method: the distance from each query point to every point in the right phase image row is computed, the distances are arranged in ascending order, and the k points with the smallest distances are returned. The nearest-neighbour search returns a numerical matrix representing the indices of the nearest neighbours.

The absolute phase differences in the row coordinate of the rectified phase maps, for the same phase value viewed in the left and right camera phase images, are the disparity values. The disparities are pre-filtered to discard phase values outside the expected disparity range. To account for sub-pixel disparity values, the two lowest phase differences on the epipolar line are extracted, a linear fit between the two phase points is made and the intercept is determined. In order to access the same phase values in the rectified unwrapped stereo phase maps, location maps $x(x1_{1,\ldots,N}, x2_{1,\ldots,N})$ and $y(y1_{1,\ldots,N}, y2_{1,\ldots,N})$ for N phase values are generated. By incorporating the location map and sub-pixel disparity information, the matched phase points between the rectified images are determined.
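A simplified Python sketch of the row-wise matching with sub-pixel refinement (the actual algorithm uses a k = 2 exhaustive nearest-neighbour search; here the nearest sample and its neighbour on the epipolar line are linearly interpolated, and the disparity limit is an assumption):

```python
import numpy as np

def match_row(phi_row_L, phi_row_R, max_disp=400):
    """For each column of a rectified left phase row, the sub-pixel column in
    the same right row where the phase values match (NaN if unmatched)."""
    matches = np.full(phi_row_L.shape, np.nan)
    for xL, p in enumerate(phi_row_L):
        if not np.isfinite(p):
            continue
        diff = np.abs(phi_row_R - p)
        diff[~np.isfinite(diff)] = np.inf
        xR = int(np.argmin(diff))               # nearest phase value
        if np.isinf(diff[xR]) or abs(xL - xR) > max_disp:
            continue                            # pre-filter disparity range
        x2 = xR + 1 if xR + 1 < phi_row_R.size else xR - 1
        denom = phi_row_R[x2] - phi_row_R[xR]   # local phase slope
        frac = (p - phi_row_R[xR]) / denom if denom != 0 else 0.0
        matches[xL] = xR + frac * (x2 - xR)     # linear sub-pixel estimate
    return matches
```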

Step E—Three-dimensional reconstruction: After determining the correspondences between the rectified stereo phase maps, the 3D points can be obtained based on the triangulation principle. The computation of the scene structure depends on finding the 3D point, which is estimated by the intersection of rays back-projected from the corresponding phase image point pairs $(\phi_{L}, \phi_{R})$ through their associated camera projection matrices [47]. For this purpose, the camera projection matrix (3 × 4, comprising the rotation and translation matrices), which maps 3D world points in homogeneous coordinates to the corresponding points in the camera phase image, is retrieved from the camera characterisation information. The matched phase points in the rectified stereo phase maps are combined with the respective projection matrices of the adjacent cameras (as a stereo-camera pair), and the 3D world coordinates of the object are determined.
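A sketch of the triangulation step with OpenCV, reusing the rectified projection matrices P1 and P2 from the rectification sketch above (illustrative, not the authors' implementation):

```python
import cv2
import numpy as np

def triangulate_matches(pts_L, pts_R, P1, P2):
    """pts_L, pts_R: (N, 2) arrays of matched (x, y) rectified image points.
    Returns (N, 3) world coordinates."""
    Xh = cv2.triangulatePoints(P1, P2,
                               pts_L.T.astype(np.float64),
                               pts_R.T.astype(np.float64))
    return (Xh[:3] / Xh[3]).T                   # de-homogenise
```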

3. Experimental setup and results

In order to evaluate the effectiveness of the proposed method, a multi-view fringe projection system has been set up, as shown in figure 3. The system comprises four DSLR cameras (Nikon D3500, 4496 × 3000 pixels) and two digital light processing (DLP) projectors (DLPC300 Texas Instruments) with a digital micromirror device (608 × 680 pixels). All cameras and projectors are mounted on a rigid metal frame to reduce mechanical vibration [48]. The projector's digital micromirror device chip is used to project a series of computer-generated fringe patterns onto the surface of the object, and the cameras capture the distorted fringes. In this set-up, two cameras and a projector yield one stereo pair, and the multi-view system is configured as two sets of stereo pairs. The details of the system characterisation procedure are discussed in the following sections.

Figure 3. Photograph of the multi-view fringe projection system. The system is comprised of four DSLR cameras and two projectors.

3.1. Camera characterisation results

The camera characterisation is performed using a checkerboard (checker size: 4 mm). The calibration steps are as follows.

  • 1.  
    The positions of the cameras are adjusted so that the measurement volume is covered by, and within the field of view of, each camera.
  • 2.  
    The checkerboard is placed in several positions (46 in our case) in the field of view of the cameras (1–4). In each position, images of the checkerboard are captured.
  • 3.  
    The captured images are processed to extract the coordinates of the checkerboard corners; an automated image processing algorithm was developed for this purpose.
  • 4.  
    From the corner information, the intrinsic and extrinsic parameters of each individual camera are determined.
  • 5.  
    After characterising each camera individually, the stereo-camera pairs are generated using the camera characterisation information. The relative orientation and location of each stereo pair are determined with respect to the first checkerboard position. Figures 5(c) and (d) show the extrinsic parameter visualisations of the stereo-camera pairs.

We have estimated the quantitative accuracy of the characterisation by determining the reprojection error, which corresponds to the distance between the checkerboard point detected in the characterisation image (checkerboard image) and the corresponding world point projected onto the same image. The mean reprojection errors for the individual camera characterisations are 0.052 pixels, 0.059 pixels, 0.062 pixels and 0.051 pixels for cameras 1–4, respectively. For the stereo-camera pairs, the mean reprojection errors are 0.055 pixels and 0.056 pixels for stereo-camera pairs 1 and 2, respectively. Figures 4(a)–(d) and 5(a) and (b) show the mean reprojection error per image for the individual camera characterisations and the stereo-camera pairs, respectively.
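The metric can be reproduced as in the following sketch (Python/OpenCV, illustrative; obj_pts, img_pts, K, dist, rvecs and tvecs are the outputs of the per-camera characterisation shown earlier):

```python
import cv2
import numpy as np

def mean_reprojection_error(obj_pts, img_pts, K, dist, rvecs, tvecs):
    """Mean distance, per image, between detected checkerboard corners and
    the corresponding world points projected through the camera model."""
    errs = []
    for objp, imgp, r, t in zip(obj_pts, img_pts, rvecs, tvecs):
        proj, _ = cv2.projectPoints(objp, r, t, K, dist)
        e = np.linalg.norm(proj.reshape(-1, 2) - imgp.reshape(-1, 2), axis=1)
        errs.append(e.mean())
    return np.mean(errs)                        # in pixels
```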

Figure 4. Mean reprojection error per image for the camera characterisation. (a) Camera 1, (b) camera 2, (c) camera 3 and (d) camera 4.
Figure 5. Reprojection error per image for stereo-camera pairs (a) stereo-camera pair 1, (b) stereo-camera pair 2, and (c) extrinsic parameter visualisation of stereo-camera pair 1. The region with coloured squares corresponds to the checkerboard patterns detected by the stereo-camera pair 1, and (d) extrinsic parameter visualisation of stereo-camera pair 2.

3.2. Rectification of the unwrapped stereo phase maps

Figures 6(a) and (b) show the captured images of a Nylon-12 complex shaped object ($110\,\textrm{mm}\,\times\,110\,\textrm{mm}\,\times\,50\,\textrm{mm}$) acquired with the fringe projection method. A set of 10 phase-stepped sinusoidal fringe patterns and binary encoded fringes is used. The binary fringes provide the information regarding the fringe order and are used to retrieve the absolute unwrapped phase maps from the distorted fringe images. The acquired absolute unwrapped phase maps, after applying the filtering algorithm, are shown for one of the stereo-camera pairs in figures 6(c) and (d).

Figure 6. Images of a complex shaped object ($110\,\textrm{mm}\,\times\,110\,\textrm{mm}\,\times\,50\,\textrm{mm}$) with fringes projected. The images are captured by two cameras in a stereo configuration (a) Camera-1 and (b) Camera-2. Filtered unwrapped phase maps of the object for (c) data shown in (a) and (d) data shown in (b).

The image transformation is applied to the filtered phase maps (shown in figures 6(c) and (d)) and the rectified unwrapped phase maps are shown in figures 7(a) and (b). The rectification process is followed by stereo matching, in which the phase value in the left phase image is compared with the corresponding row (epipolar line) of the right phase image. The matched phase points in the stereo phase images are triangulated to acquire the 3D coordinates of the object.

Figure 7. Rectified unwrapped phase maps. (a) Rectified phase image for the phase map in figure 6(c), and (b) rectified phase image for the phase map in figure 6(d).

3.3. Three-dimensional reconstruction results

We validated the proposed method by implementing the characterisation method on the multi-view fringe projection system and acquiring the 3D reconstruction results for a complex shaped object ($110\,\textrm{mm}\,\times\,110\,\textrm{mm}\,\times\,50\,\textrm{mm}$, Nylon-12). Following the steps in section 2 and using the stereo matching of the rectified unwrapped phase maps, the 3D reconstruction results for the complex artefact were acquired and are shown in figures 8(a)–(c). The point clouds from the two views are in the global coordinate system and do not require any further registration. By combining the point clouds (shown in figures 8(a) and (b)), a dense point cloud is retrieved, as shown in figure 8(c).

Figure 8. 3D reconstruction results for a complex artefact ($110\,\textrm{mm}\,\times\,110\,\textrm{mm}\,\times\,50\,\textrm{mm}$). On the left side, two point clouds acquired from stereo-camera (a) pair 1, (b) pair 2, respectively, and on the right side, (c) a consolidated point cloud after combining the two views (a) and (b).

We determined the point-to-point distance between the two point clouds in their overlapping region using the C2C (Cloud-to-Cloud) plugin in CloudCompare [49], where C2C indicates that the distances are determined between a point cloud regarded as a reference and a target point cloud [50]. A small region-of-interest (ROI) was chosen between the point clouds from the two sets of stereo-camera pairs, and the point cloud from stereo-pair 1 (reference point cloud) was compared against that from stereo-pair 2 (target point cloud), as shown in figure 9. Based on the nearest-neighbour distance computation in CloudCompare [49], the distribution of the distance deviations is shown in figures 9(b) and (c). The colour map corresponds to the Euclidean distance between each point in the reference point cloud (stereo-camera pair 1) and its closest point located in the compared point cloud (stereo-camera pair 2). The statistics for the pairs are shown in figure 9(c). The standard deviation of the ROI of the two point clouds is 24 µm, which indicates that the point clouds acquired from two orthogonal perspectives are in the global frame.
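An equivalent nearest-neighbour computation can be sketched in Python (illustrative; CloudCompare's C2C tool was used for the reported values):

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_distances(reference, target):
    """Euclidean distance from each point of the reference cloud (stereo
    pair 1) to its closest point in the target cloud (stereo pair 2).
    reference: (N, 3) array; target: (M, 3) array."""
    d, _ = cKDTree(target).query(reference)
    return d.mean(), d.std()
```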

Figure 9. Results for the deviation of two point clouds for the complex artefact. (a) Point cloud showing the ROI as a red box, (b) a colour map indicating the point-to-point distance of the two point clouds from stereo-camera pairs 1 and 2, where C2C (Cloud-to-Cloud) depicts that the distances are determined between a reference point cloud (stereo-camera pair 1) and a target point cloud (stereo-camera pair 2) [50], and (c) a histogram depicting the statistics of the point-to-point distance of the two point clouds shown in (b), the standard deviation (St.Dev) of the distribution is 24 µm.

The structured pattern and waviness seen in figures 9(a) and (b) are associated with systematic effects in the measurement process (offsets in the intrinsic and extrinsic parameters, which manifest as complex distortions in the triangulated point clouds), noise in the phase maps and the accuracy of the system characterisation. These deviations may be considered a combined effect of the projector's non-linear gamma response (in case the system is not perfectly characterised), non-linear offsets between the DSLR cameras (as four cameras were used), vibration due to the DSLRs' mechanical shutters, and the cameras' internal sensor noise.

A Mitutoyo Crysta Apex S7106 CMM (available at the University of Nottingham) was used to perform the dimensional measurements, which are used as a reference [51]. Specific features were measured using the CMM (21 mm long, 3 mm diameter ball-tipped stylus with an SP25 probe, Φ4 mm × 50 mm) according to the National Physical Laboratory (NPL) good practice guide No. 41 [52]. As per the manufacturer specification, the CMM has a volumetric length measurement accuracy $E_{0} = (1.7 + 3L/1000)$ µm (L is the length of the measured object in millimetres) and a maximum permissible probing error $P_{\textrm{FTU}} = 1.7$ µm. The features compared with the CMM (four repeat measurements) are shown in figure 10 and listed in table 1. The value after the ± sign in table 1 is the standard deviation of the repeat measurements. Each measurement from the multi-view system has three repeats. Commercial software (GOM Inspect [53]) was used to inspect the 3D reconstruction results acquired from the multi-view fringe projection system (shown in figure 8).

Figure 10. Photograph of the complex artefact with labelled features measured by CMM. The results are shown in table 1.

Table 1. Dimensional measurements of the complex artefact.

Feature measured | CMM measurement | Multi-view measurement | Deviation of multi-view data from CMM
Sphere-1 diameter | (22.462 ± 0.011) mm | (22.580 ± 0.007) mm | (0.118 ± 0.013) mm
Sphere-2 diameter | (22.367 ± 0.014) mm | (22.386 ± 0.001) mm | (0.019 ± 0.014) mm
Sphere-1 to sphere-2 centre distance | (112.447 ± 0.010) mm | (112.534 ± 0.005) mm | (0.087 ± 0.011) mm
Hemisphere diameter | (60.194 ± 0.173) mm | (60.063 ± 0.003) mm | (0.131 ± 0.173) mm
Wedge-1 inclination | (44.964 ± 0.017)° | (45.052 ± 0.004)° | (0.088 ± 0.018)°
Wedge-2 inclination | (135.191 ± 0.018)° | (135.317 ± 0.003)° | (0.126 ± 0.018)°

Table 1 shows the deviation of the specific features measured by the multi-view fringe projection system from the CMM data. From the table, we can see deviations between the multi-view fringe projection data and the CMM data of between 19 µm and 131 µm. The factors causing the deviations between the features of interest in table 1 can be summarised as follows. Firstly, the scanned data from the multi-view system contain information only on the two visible sides. Generally, in a stereo-camera system, the origin is at the optical centre of camera-1 and the 3D reconstructed points are generated with respect to that origin. In our multi-view fringe projection system, camera-1 and camera-3 are taken as the origins for stereo-camera pairs 1 and 2, respectively, and the 3D reconstructions are made accordingly. Secondly, the other two sides have voids that affect the comparison analysis. This limitation can be overcome by adding two more stereo-camera pairs in the other two quadrants; essentially, one stereo-camera pair is needed in each quadrant to reconstruct the full form. Thirdly, the systematic errors causing the waviness and structured pattern shown in figures 9(a) and (b) contribute to the deviation of the measured features from the CMM data, as shown in table 1. The above-mentioned factors contribute towards the larger deviation of sphere-1.

The multi-view stereo-camera fringe projection system provides higher point densities and addresses the occlusion and shadowing effects typically seen in single-view fringe projection systems. The reconstructed point clouds from multiple perspectives are in the same coordinate system and do not depend on coarse/fine point cloud registration methods. The point cloud in figure 8(c) contains more information and has a higher number of data points than the single-view acquisitions shown in figures 8(a) and (b). The point-to-point distance for a small ROI between the two point clouds of the measured object is around 24 µm (see figures 9(a)–(c)), which will be further optimised in future work to achieve a higher accuracy of the system characterisation. Future work will focus on introducing more robust optical components (high-speed machine vision cameras and projectors with high frame rates), investigating the dependence of the structured pattern on the system characterisation accuracy and the correspondence method, and incorporating information-rich metrology (IRM) to make the system a smart optical form measurement system.

4. Comparison with the conventional method

We compared our proposed method with a conventional method. The multi-view fringe projection system consists of two projectors (DLPC300 Texas Instruments) and two cameras (Nikon D3500, 4496 × 3000 pixels); the arrangement is shown in figure 11(a). The conventional method of characterising a multi-view fringe projection system relies on capturing several positions of a standard checkerboard (checker size: 4 mm) in the measurement volume and determining the intrinsic parameters of the cameras [54]. Since the projector cannot capture images, the camera captures images on its behalf, and the one-to-one correspondences between the camera and projector image pixel coordinates are determined using a phase-stepped fringe projection method. The absolute phase is obtained through temporal phase unwrapping, which utilises a combined phase-stepped and binary coded method [26, 41]. The retrieved phase maps are used to determine the extrinsic parameters and the global frame of reference.

Figure 11. Conventional method. (a) Photograph of the multi-view fringe projection system with two cameras and projectors considered for the conventional approach, (b) schematic diagram of the characterisation of the multi-view fringe projection system.

Following the pipeline shown in figure 11(b), a set of horizontal and vertical phase-stepped fringe patterns is projected onto the checkerboard and images are captured at different positions; the checkerboard was moved manually in the measurement volume. Using the absolute phase maps, the projector coordinates are determined from the camera coordinates via the one-to-one correspondence established through the phase maps [23–26]. The transformation relation can be represented as:

$$u^{p} = \frac{P}{2\pi}\,\phi_{v}\!\left(u^{c}, v^{c}\right) \tag{6}$$

$$v^{p} = \frac{P}{2\pi}\,\phi_{h}\!\left(u^{c}, v^{c}\right) \tag{7}$$

where $(u^{p}, v^{p})$ are the image coordinates of the projector, $(u^{c}, v^{c})$ are the camera image coordinates, $(\phi_{h}, \phi_{v})$ are the horizontal and vertical phase values captured by the camera, and P is the number of pixels per fringe period (the fringe pitch).
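A minimal sketch of equations (6) and (7), assuming that vertical fringes encode the projector column and horizontal fringes the projector row (the function name is hypothetical):

```python
import numpy as np

def camera_to_projector(phi_h, phi_v, u_c, v_c, pitch):
    """Projector pixel corresponding to camera pixel (u_c, v_c), from the
    absolute horizontal/vertical phase maps; pitch is the fringe pitch P."""
    u_p = phi_v[v_c, u_c] * pitch / (2 * np.pi)   # equation (6)
    v_p = phi_h[v_c, u_c] * pitch / (2 * np.pi)   # equation (7)
    return u_p, v_p
```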

The coordinates of the checkerboard corners were detected using a purpose-developed image processing algorithm [31]. The intrinsic and extrinsic parameters of the cameras were determined based on a pinhole camera model, as explained in section 2 (Step A—Camera characterisation). The absolute phase maps are used to find the one-to-one correspondence between the camera and projector pixels and to estimate the projector parameters. Figure 12 shows the reprojection errors for the cameras and projectors. The mean reprojection error per image for the cameras is 0.04 pixels; however, the error in the projector characterisation is around 0.20 pixels. The accuracy of the system characterisation is highly dependent on the characterisation of the individual optical components (cameras and projectors) and has a significant influence on the system performance.

Figure 12. Mean reprojection error per image for the camera and projector characterisation. (a) Camera 1, (b) camera 2, (c) projector 1 and (d) projector 2.

Each camera–projector pair is regarded as a stereo pair and the transformation relationship is given by:

$$s^{c}\,I^{c} = A^{c}\left[R^{c} \;\; T^{c}\right]X_{w} \tag{8}$$

$$s^{p}\,I^{p} = A^{p}\left[R^{p} \;\; T^{p}\right]X_{w} \tag{9}$$

where $X_w$ is the homogeneous point coordinate in the world coordinate system, $(I^c, I^p)$ are the homogeneous coordinates of the image point in the camera and projector image coordinate systems, $(s^c, s^p)$ are scale factors, $(A^c, A^p)$ correspond to the intrinsic matrices, and $(R^c, R^p)$ and $(T^c, T^p)$ are the rotation and translation matrices for the camera and projector, respectively.

The global frame is defined by taking a plane in the common field of view of all the cameras and projectors. The world coordinates for each camera–projector pair are established by triangulation and given by:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} h^{c}_{11}-u^{c}h^{c}_{31} & h^{c}_{12}-u^{c}h^{c}_{32} & h^{c}_{13}-u^{c}h^{c}_{33} \\ h^{c}_{21}-v^{c}h^{c}_{31} & h^{c}_{22}-v^{c}h^{c}_{32} & h^{c}_{23}-v^{c}h^{c}_{33} \\ h^{p}_{11}-u^{p}h^{p}_{31} & h^{p}_{12}-u^{p}h^{p}_{32} & h^{p}_{13}-u^{p}h^{p}_{33} \end{bmatrix}^{-1} \begin{bmatrix} u^{c}h^{c}_{34}-h^{c}_{14} \\ v^{c}h^{c}_{34}-h^{c}_{24} \\ u^{p}h^{p}_{34}-h^{p}_{14} \end{bmatrix} \tag{10}$$

where (x, y, z) are the world coordinates, $(u^c, v^c)$ are the camera image coordinates, $u^p$ is the projector image coordinate, and $h^{c}_{ij}$ and $h^{p}_{ij}$ (i = 1, 2, 3; j = 1, 2, 3, 4) are the elements of the 3 × 4 projection (homography) matrices of the camera and projector, respectively [54].
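A sketch of equation (10): each correspondence $(u^c, v^c, u^p)$ yields three linear equations in (x, y, z), solved here directly (h_c and h_p stand for the 3 × 4 camera and projector matrices; illustrative, not the authors' implementation):

```python
import numpy as np

def triangulate_cam_proj(h_c, h_p, u_c, v_c, u_p):
    """World point from one camera pixel and the matched projector column,
    by rearranging u = (h11*x + h12*y + h13*z + h14)/(h31*x + ... + h34)."""
    A = np.array([h_c[0, :3] - u_c * h_c[2, :3],
                  h_c[1, :3] - v_c * h_c[2, :3],
                  h_p[0, :3] - u_p * h_p[2, :3]])
    b = np.array([u_c * h_c[2, 3] - h_c[0, 3],
                  v_c * h_c[2, 3] - h_c[1, 3],
                  u_p * h_p[2, 3] - h_p[0, 3]])
    return np.linalg.solve(A, b)                # (x, y, z)
```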

Using the triangulation principle (equation (10)), the correspondences between each camera–projector pair are generated and 3D point clouds are acquired. Figure 13 shows the 3D reconstruction results of a hemisphere shaped AM artefact. The top row (figures 13(a)–(c)) depicts the results for the conventional method acquired using two sets of cameras and projectors. The correspondences between each camera–projector pair are established based on the triangulation principle, and the outcome is two separate point clouds from orthogonal perspectives. Figure 13(c) corresponds to the combined point cloud of figures 13(a) and (b) based on the conventional approach. The bottom row (figures 13(d)–(f)) shows the results for the same hemisphere shaped AM artefact, but with the 3D surface reconstruction achieved using the stereo rectification approach (explained in sections 2 and 3). The CMM measurements (four repeats) and the deviations of the multi-view results from the CMM are listed in table 2. In contrast to the proposed method, the hemisphere height measurement has a larger deviation (134 µm) for the conventional method (table 2).

Figure 13. 3D reconstruction results of a hemisphere shaped AM artefact ($60\,\textrm{mm}\,\times\,60\,\textrm{mm}\,\times\,20\,\textrm{mm}$). Top row: conventional method. (a)–(b) Point clouds acquired from camera-projector pairs 1 and 2, respectively, (c) combined point cloud of the data shown in (a)–(b). Bottom row: proposed method. (d)–(e) Point clouds from two sets of stereo-camera pairs and (f) the combination of the two point clouds shown in (d)–(e).

Table 2. Dimensional measurements of a hemisphere shaped AM artefact. Each measurement has three repeats. The hemisphere height measured by CMM (four repeat measurements) is (9.6830 ± 0.0004) mm.

Hemisphere height (Ref: top plane of the base) | Multi-view measurement (mm) | Deviation of multi-view data from CMM data (mm)
Proposed method | (9.671 ± 0.009) | (0.012 ± 0.009)
Conventional method | (9.549 ± 0.015) | (0.134 ± 0.015)

In order to evaluate the effectiveness of the two approaches, a small ROI was chosen in the overlapping region between the two point clouds. The point-to-point distances are determined and shown as a deviation map in figure 14(b) for the conventional method and figure 14(e) for the proposed approach. The statistical distributions of the deviation map are depicted in figures 14(c) and (f) for the conventional and proposed approaches, respectively. The conventional method has a mean value of 293 µm and a standard deviation of 181 µm. The conventional method struggles with mapping errors; the point clouds contain more noise and require further registration using iterative-closest-point and fine registration algorithms. With our proposed method, however, the mean (figure 14(f)) is 58 µm and the standard deviation for the overlapping ROI is 20 µm.

Figure 14. Results for the deviation of two point clouds of a hemisphere shaped AM artefact. Top row: conventional method. (a) Point cloud showing the ROI as a red box, (b) colour map indicating the point-to-point distance of the two point clouds from camera-projector pairs 1 and 2, where C2C (Cloud-to-Cloud) depicts that the distances are determined between a reference point cloud (stereo-camera pair 1) and a target point cloud (stereo-camera pair 2) [50] and (c) histogram depicting the statistics of the point-to-point distance of the two point clouds shown in (b). Bottom row: proposed method. (d) ROI shown as a red box, (e) point-to-point distance between the two point clouds from stereo-pair 1 and 2 and (f) statistical distribution of the data shown in (e).

5. Conclusions

In this paper, a novel characterisation approach for a multi-view fringe projection system has been presented. The method relies on finding the correspondences between the rectified unwrapped stereo phase maps, and the matched phase values between the stereo phase images are triangulated to acquire 3D form. In contrast to the existing methods for determining the correspondences between the camera and projector in multi-view fringe projection systems, the benefit of this method is that it does not depend on the projector's characterisation (it does not require multiple characterisations), as the stereo cameras observe the same phase value for the same point, irrespective of the projector. However, the effectiveness of this method is highly dependent on the system's characterisation, and any offset in the stereo-camera pairs will affect the robustness of the method.

The characterisation method has been implemented, and the system has been used for the form measurement of complex AM artefacts. The 3D reconstruction results from multiple perspectives are effectively in a global frame and do not require further registration. Furthermore, the reconstructed results have addressed some of the limitations of a single-view system, primarily those associated with occlusions, shadowing and high slope angles. We also compared the proposed method with a conventional method and achieved improved performance. Future work will focus on introducing machine vision cameras, and on investigating the relationship of the structured deviations with the characterisation accuracy and the proposed correspondence method. This investigation will help to address the current issues with the DSLR cameras (internal sensor noise, vibration due to the mechanical shutter) and to achieve improved accuracy of the extrinsic properties of the stereo-camera pairs.

Acknowledgments

This research was funded by The EU Framework Programme for Research and Innovation—Horizon 2020—Grant Agreement No. 721383 within the PAM2 (Precision Additive Metal Manufacturing) research project and by the Engineering and Physical Sciences Research Council [EPSRC Grant Nos. EP/M008983/1, EP/L016567/1]. The authors would like to thank Dr Mohammed Isa (University of Nottingham) for providing help with the CMM measurements.

Conflicts of interest

The authors declare no conflicts of interest.
