An illumination direction estimation algorithm for face images

Illumination is one of the important factors affecting the accuracy of face recognition. This paper proposes an illumination direction estimation algorithm for face images that can also be applied to non-frontal face images. To eliminate the influence of the face's albedo, the algorithm uses SfSNet to obtain a shading image. It then divides the face into eight regions using five facial landmarks and estimates the illumination direction by analyzing the average brightness of each region. Experiments on the YaleB dataset show that the algorithm estimates the illumination direction correctly in most cases.


Introduction
Face recognition is widely used in various fields nowadays. Under controlled conditions, the accuracy of face recognition is often high. Under uncontrolled conditions, however, especially extreme lighting, accuracy can drop greatly. Two face images of the same person under different illuminations may be harder to distinguish than two images of different people under the same illumination [1]. It is therefore necessary to analyze the illumination of face images.
Much previous work has studied the illumination direction of face images. Sun proposed a method to estimate the illumination direction based on a bifurcate tree and SVM, in which the illumination direction is divided into a series of angular spaces according to the horizontal and vertical rotation angles, which serve as the target classes [2]. Research has shown that, under certain assumptions, the image of a convex Lambertian surface can be represented as a linear combination of nine spherical harmonic basis images, and the nine combination coefficients reflect the illumination of the image [3]. Chen proposed a new approach to solve for the illumination parameters, including energy and direction, based on a model of diffuse and ambient reflection [4]. Gamal Fahmy presented a method for automatically detecting the lighting direction and glare regions of a face based on statistics derived from the images [5]. Other methods use the integral projection of a grayscale image to estimate the illumination direction [6] [7]. Most of the methods mentioned above can only estimate the illumination of frontal face images, which makes them less practical. This paper proposes a new algorithm to estimate the illumination direction of a face image. A simple observation shows that the direction of light roughly follows the ray from the center of the highlight area toward the center of the face [6]. It is worth noting that our goal is not an exact light angle but an approximate illumination direction, because we focus more on the impact of illumination on the face image than on the illumination itself.

Illumination direction estimation Algorithm
The method uses SfSNet to eliminate the influence of the face's albedo. We then divide the face into eight regions by locating five facial key points. The illumination direction is estimated by analyzing the average brightness of each region.

Get shading image
SfSNet reconstructs shape, reflectance, and illumination information from an input face image [8]. Feeding a face image into SfSNet yields a shading image that contains only the illumination information of the face, thereby eliminating the interference of skin color and skin texture. The architecture of SfSNet is shown in Figure 1.
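SfSNet builds on the Lambertian image-formation model, in which the observed image is the pixel-wise product of albedo and shading. The numpy sketch below illustrates only this underlying identity; the network itself predicts the components, and `shading_from_decomposition` is a hypothetical helper, not part of SfSNet:

```python
import numpy as np

def shading_from_decomposition(image, albedo, eps=1e-6):
    """Recover shading under the Lambertian model: image = albedo * shading.

    `image` and `albedo` are float arrays in [0, 1] with the same shape.
    In the actual algorithm SfSNet predicts albedo and shading directly;
    this division is only the underlying image-formation identity.
    """
    return image / (albedo + eps)

# Toy example: a patch with uniform skin albedo, lit brightly on the left.
albedo = np.full((4, 4), 0.6)                          # uniform albedo
shading = np.tile(np.linspace(1.0, 0.4, 4), (4, 1))    # bright left, dark right
image = albedo * shading                               # observed image
recovered = shading_from_decomposition(image, albedo)  # ~= shading
```

Dividing out the albedo leaves only the brightness variation caused by the light, which is exactly what the region-brightness analysis below relies on.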

Segmentation of face region
Face key points are detected using [9] to crop the face region. We use five key points to divide the face into eight regions: the five points mark the left eye, the right eye, the nose, and the two corners of the mouth. Connecting the nose key point with the other four key points divides the face into four regions, representing the forehead, left cheek, right cheek, and chin. Each region is further bisected to obtain more accurate results. The whole process is shown in Figure 2; for convenience, we name the regions as shown on the right of Figure 2. When no region is distinctly brighter than the others, the illumination is considered balanced; otherwise, the algorithm divides the unbalanced illumination into eight types according to the position of the brightest region.
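The partition can be approximated by assigning each pixel to an angular sector around the nose, with boundaries at the four landmark rays and their bisectors. The sketch below is one plausible reading of this step, with hypothetical landmark names; it is not the paper's implementation:

```python
import numpy as np

def partition_face(h, w, landmarks):
    """Label each pixel of an h x w face crop with a region id in 0..7.

    `landmarks` maps 'nose', 'left_eye', 'right_eye', 'mouth_left' and
    'mouth_right' to (x, y) pixel coordinates (hypothetical naming).
    The rays from the nose to the four outer landmarks split the face
    into forehead, left cheek, right cheek and chin; bisecting each of
    these yields eight angular sectors, an approximation of the paper's
    eight-region partition.
    """
    nx, ny = landmarks['nose']

    def ray_angle(p):
        return np.arctan2(p[1] - ny, p[0] - nx) % (2 * np.pi)

    rays = sorted(ray_angle(landmarks[k]) for k in
                  ('left_eye', 'right_eye', 'mouth_left', 'mouth_right'))
    # Sector boundaries: the four landmark rays plus the bisector of
    # each pair of adjacent rays (wrapping around the circle).
    bounds = []
    for i in range(4):
        a, b = rays[i], rays[(i + 1) % 4]
        gap = (b - a) % (2 * np.pi)
        bounds += [a, (a + gap / 2) % (2 * np.pi)]
    bounds = np.sort(np.array(bounds))

    ys, xs = np.mgrid[0:h, 0:w]
    theta = np.arctan2(ys - ny, xs - nx) % (2 * np.pi)
    # Region id = index of the angular interval that theta falls into.
    return (np.searchsorted(bounds, theta, side='right') - 1) % 8
```

With a symmetric landmark layout this produces eight wedge-shaped regions meeting at the nose, matching the partition sketched in Figure 2.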

Experiments and Analysis
We calculate the average pixel brightness of each region for the face images in the YaleB dataset. The ground-truth illumination in the YaleB dataset covers only three situations: right light, left light, and balanced light. When the brightest region is on the left side of the face (regions 1, 2, 3, and 4), we conclude that the illumination direction is left; when the brightest region is on the right side of the face, the illumination direction is right.
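This decision rule can be sketched as follows, assuming region ids 1-4 label the left half of the face and 5-8 the right half, as in Figure 2:

```python
import numpy as np

# Assumed region naming: ids 1-4 cover the left side of the face,
# ids 5-8 the right side (following the labels in Figure 2).
LEFT, RIGHT = {1, 2, 3, 4}, {5, 6, 7, 8}

def region_means(shading, labels):
    """Normalized average brightness of each labeled region.

    `shading` is the SfSNet shading image; `labels` assigns each pixel
    a region id. Values are normalized by the brightest region's mean.
    """
    means = {int(r): float(shading[labels == r].mean())
             for r in np.unique(labels)}
    top = max(means.values())
    return {r: m / top for r, m in means.items()}

def estimate_direction(means):
    """Left/right decision from the brightest region (YaleB setting)."""
    brightest = max(means, key=means.get)
    return 'left' if brightest in LEFT else 'right'
```

For example, a shading image whose left-side regions average 0.9 against 0.3 on the right is labeled as left-lit.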

The frontal face images
The algorithm is validated on the YaleB dataset. Feeding a face image into SfSNet produces the corresponding shading image. The first row of Figure 3 shows the face images and the second row the corresponding shading images. The third row shows the partition of the face region using the five landmarks. The brightness of each region is reported in Table 1; the average brightness values have been normalized.

The non-frontal face images
An advantage of our algorithm over other illumination estimation algorithms is that it can estimate the illumination of non-frontal face images. Figure 4 shows the partitioning of a non-frontal face using face key points. The brightness of each region is shown in Table 2; the average brightness values have been normalized.

Performance of algorithm
We set the threshold to 0.2 on the YaleB dataset. The performance of our algorithm is shown in Table 3. From the experimental results, we conclude that the algorithm performs well on the YaleB dataset and can correctly estimate the illumination of face images in most cases. In addition, the algorithm is also applicable to non-frontal face images. However, for some extreme-pose face images, the face key points cannot be correctly located, and the algorithm fails. Image brightness is strongly related to light intensity. In this experiment, some errors occurred because frontal light was too strong, producing a large difference in overall image contrast even though the ground truth was balanced light; similarly, light from the left or right that was too weak made the overall brightness of the image nearly balanced, even though the ground truth was a directionally lit image.
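The paper does not spell out exactly how the 0.2 threshold is applied. One plausible reading, sketched below, treats the light as balanced when no region exceeds the dimmest by more than the threshold, and otherwise falls back on the brightest-region rule:

```python
def classify_illumination(means, threshold=0.2):
    """Balanced vs. directional light using the 0.2 threshold.

    `means` maps region ids (assumed: 1-4 left, 5-8 right) to normalized
    average brightness. How the paper applies the threshold is not fully
    specified; here we call the light balanced when the gap between the
    brightest and dimmest region means is below `threshold` (an
    assumption), and otherwise classify by the brightest region's side.
    """
    if max(means.values()) - min(means.values()) < threshold:
        return 'balanced'
    brightest = max(means, key=means.get)
    return 'left' if brightest in {1, 2, 3, 4} else 'right'
```

Under this reading, the failure cases above are intuitive: overly strong frontal light inflates the brightest-to-dimmest gap past the threshold, while weak side light shrinks it below the threshold.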

Conclusion
In this paper, we proposed a new approach to estimate the illumination direction of face images. The method focuses on the effect of illumination on the face image rather than on the illumination itself. By using facial key points to segment the face region, the method can handle non-frontal faces. Experiments on the YaleB dataset show that our method can effectively estimate the direction of illumination. However, the face landmark algorithm used in this paper cannot accurately locate the key points of the face in some cases of poor image quality. In future work, the face could be divided into more regions, which may yield more accurate results.