Computer-aided Detection of Lung Tumors in Chest X-ray Images Using a Bone Suppression Algorithm and a Deep Learning Framework

Detecting lung tumors at an early stage by reading chest X-ray images is important for radical treatment of the disease. In order to decrease the risk of missed lung tumors, diagnosis support systems that can provide accurate detection of lung tumors are in high demand, and the use of artificial intelligence with deep learning is one of the promising solutions. In our research, we aim to improve the accuracy of a deep learning-based system for detecting lung tumors by developing a bone suppression algorithm as a preprocessing step for the machine-learning model. Our bone suppression algorithm was devised for conventional single-shot chest X-ray images and does not rely on a specific type of imaging system. 604 chest X-ray images were processed using the proposed algorithm and evaluated by combining it with a U-Net deep learning model. The results showed that the bone suppression algorithm improved the performance of the deep learning model in identifying the location of lung tumors (Intersection over Union) from 0.085 (without the bone suppression algorithm) to 0.142, as well as the ability to classify lung cancer (Area Under the Curve), which increased from 0.700 to 0.736. The bone suppression algorithm would be useful for improving the accuracy and reliability of deep learning-based diagnosis support systems for detecting lung cancer in mass medical examinations.


Introduction
Lung cancer is an intractable disease and is reported to cause more than 70,000 deaths every year in Japan [1]. Since radical surgery is the most effective treatment of lung cancer, it is important that radiologists find lung tumors at an early stage through screening procedures in mass medical examinations. In this setting, the most widely used modality is chest X-ray imaging, which can provide comprehensive lung images easily, at low cost, and with low exposure risk. Although chest X-ray images have played important roles in the diagnosis of lung cancer [2], detecting early-stage lung tumors in those images remains difficult because the lesions are often hidden by soft tissue and bones. A previous study reported that the risk of missed lung tumors due to human error increases with longer reading times and with the aging of radiologists [3]. In order to minimize such risks, diagnosis support systems that provide accurate and robust detection of lung tumors are necessary, and the use of artificial intelligence with deep learning techniques has shown strong potential for this problem.
To further improve the accuracy of the diagnosis support system, dual energy subtraction X-ray imaging may be used [4] [5], as this imaging technique can enhance lesions in the lung tissue by irradiating the chest twice with X-rays of different intensities, effectively suppressing the bone that often obscures lesions in conventional single-shot chest X-ray images. Nevertheless, dual energy subtraction imaging requires a dedicated imaging system [6], and thus an image processing algorithm that suppresses the bone regions in single-shot X-ray images would be beneficial for developing a highly accurate, robust, and practical diagnosis support system.
In this paper, we developed a bone suppression algorithm as a preprocessing step for a deep learning framework to detect lung tumors in single-shot X-ray images. The performance of the developed algorithm and the deep learning framework was evaluated using 604 chest X-ray images acquired in a mass medical examination.

Method
In this study, we first devised a bone suppression algorithm that suppresses the bone structure in single-shot chest X-ray images and enhances pathological structures in the soft tissue. Second, a deep learning model to detect and classify lung tumors was constructed. Third, using 604 chest X-ray images, the artificial intelligence model was trained and tested with and without the bone suppression algorithm as a preprocessing step.

Bone Suppression Algorithm
In our research, we propose a bone suppression algorithm for single-shot chest X-ray images based on an image processing technique using gradient differences in the images, referring to a previous method that uses gradient differences and a transformation into the ST-space [7]. It was reported that this previous method improved lung tumor detection ability when two experienced radiologists validated chest X-ray images processed with it.
Our processing steps, shown in Figure 1, were as follows: 1) segmenting the area of the clavicles and ribs manually; 2) determining the signal intensity of the bone contour so that the gradient difference was eliminated, assuming that the series of points with large gradient differences between the segmented area and its surroundings form the boundary between bone and soft tissue (the bone contour); 3) filling the internal signal intensity with the average of the signal intensities of two points from the upper left, lower left, upper right, and lower right of the bone contour, respectively; 4) determining the signal intensity of the bone by averaging the four internal signal intensities; and 5) suppressing the bone by subtracting the calculated bone intensities from the original chest X-ray image. Since our bone suppression algorithm is gradient-based, processing the entire image had the potential to eliminate not only the bone to be suppressed but also the lung lesions to be detected. Therefore, we manually set the bone area and applied the bone suppression algorithm only to that area.
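The core idea of the steps above — estimate the bone signal inside a manually drawn mask and subtract it — can be sketched as follows. This is a deliberately simplified, hypothetical implementation: it replaces the per-contour intensity estimation of steps 2)–4) with a single constant bone level (the paper's own constant-intensity assumption), estimated from a thin band of soft tissue just outside the mask. Function and variable names are our own, not from the paper.

```python
import numpy as np

def suppress_bone(image, bone_mask, border=2):
    """Simplified sketch of bone suppression inside a manual mask.

    Assumes the bone signal is constant inside `bone_mask` (as the paper
    notes, this causes over-subtraction where bone density varies).
    The bone level is estimated as the mean intensity inside the mask
    minus the mean intensity of a `border`-pixel soft-tissue band
    around it, then subtracted from the masked pixels only.
    """
    img = image.astype(float)
    mask = bone_mask.astype(bool)

    # Dilate the mask by `border` pixels (4-neighbourhood) to locate
    # the surrounding soft-tissue band, without external dependencies.
    dilated = mask.copy()
    for _ in range(border):
        shifted = np.zeros_like(dilated)
        shifted[1:, :] |= dilated[:-1, :]
        shifted[:-1, :] |= dilated[1:, :]
        shifted[:, 1:] |= dilated[:, :-1]
        shifted[:, :-1] |= dilated[:, 1:]
        dilated |= shifted
    band = dilated & ~mask  # thin ring of soft tissue around the bone

    # Constant-bone assumption: one intensity offset for the whole mask.
    bone_level = img[mask].mean() - img[band].mean()

    out = img.copy()
    out[mask] -= bone_level  # step 5): subtract the bone component
    return out
```

Restricting the subtraction to the manually segmented mask mirrors the paper's final remark: applying a gradient-based suppression to the whole image would also erase lesion gradients.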
As a preliminary result of the proposed algorithm, Figure 2 shows a comparison of a chest X-ray image before and after the processing. The tumor regions, shown in Figure 2 (a) and (d), showed that the morphological features of the lung tumor were not affected by the processing. It is noted that, since the proposed method assumed that the intensity of the bone regions is constant, some areas were found to be over-subtracted, because the signal intensity of bone can change depending on its density and thickness. In the following sections, the proposed bone suppression algorithm was applied to the X-ray images used for training and testing the neural network model.

The Artificial Intelligence Model
In this section, we describe the artificial intelligence model, based on U-Net [8], used for the segmentation of lung tumors in this study. The outline of the model is shown in Figure 3. Next, we describe the artificial intelligence model used for classification in this research, which is based on VGG [9]; its outline corresponds to the second half of the segmentation model. For classification, we used an RGB image as input, in which the labels obtained by segmentation are combined with the original chest X-ray image.
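One way to realize the combined RGB input described above is sketched below. The paper does not specify the channel assignment, so the layout here (original image, segmentation label, and their product) is an assumption for illustration only.

```python
import numpy as np

def make_classifier_input(xray, seg_label):
    """Hypothetical sketch: fuse a chest X-ray and its predicted
    segmentation label into a 3-channel (RGB) classifier input.

    `xray` is a 2-D grayscale array; `seg_label` is a 2-D binary mask
    (1 = predicted tumor pixel). The channel layout is an assumption:
    R = normalized image, G = label, B = image masked by the label.
    """
    x = xray.astype(np.float32)
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)  # scale to [0, 1]
    label = seg_label.astype(np.float32)
    rgb = np.stack([x, label, x * label], axis=-1)
    return rgb
```

The resulting `(H, W, 3)` array can be fed to a VGG-style convolutional classifier, letting it see both the raw anatomy and where the segmentation network localized a candidate tumor.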

Dataset and Evaluation Method
In this study, we used chest X-ray images taken for lung cancer screening at the Miyagi Anti-Tuberculosis Association. The dataset consists of 604 images: 302 images with cancer lesions and 302 images without any pathological observation. It is noted that, since only the pathological cases underwent CT scanning to confirm the X-ray diagnosis, the images categorized as healthy may contain missed lung tumors.
As shown in Table 1, the acquired dataset was divided into Train Data, Validation Data, and Test Data. Train Data and Validation Data were used in the learning process of the artificial intelligence and were split randomly at a ratio of 8 to 1, so the numbers of cancer images and healthy images in Train Data and Validation Data changed for every epoch. Test Data was used only for testing the performance of the trained artificial intelligence. The ground truth labels for the lung tumors were prepared based on a radiologist's indications using the MATLAB Image Labeler. Table 2 shows the parameters used in the learning process of the artificial intelligence. Because the number of normal pixels is by far larger than that of cancer pixels, we set class weights so that the total weight of the cancer pixels in Train Data and Validation Data equaled that of the normal pixels. Although classification could also require class weights, we did not need them there because we included the same number of cancer images and healthy images in the dataset. We used the Intersection over Union (IoU) as the evaluation metric for segmentation and the Area Under the Curve (AUC) as the evaluation metric for classification. IoU is an index often used to evaluate object detection and segmentation tasks, and it represents how well the predicted label overlaps with its ground truth label. It is expressed as IoU = TP / (TP + FP + FN), where TP, FP, and FN mean true positive, false positive, and false negative, respectively.

Segmentation
Table 3 shows the values of IoU without and with the bone suppression algorithm. From Table 3, it was confirmed that the value of IoU decreased as the image size increased and improved with our bone suppression algorithm for every image size. Figure 4 shows the prediction output images of segmentation using images without and with the bone suppression algorithm for each image size.
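The pixel-wise IoU metric and the class-weight balancing described above can be computed directly from binary masks and pixel counts, for example:

```python
import numpy as np

def iou(pred, truth):
    """Pixel-wise Intersection over Union for binary masks
    (1 = cancer pixel, 0 = normal pixel):
    IoU = TP / (TP + FP + FN)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    denom = tp + fp + fn
    # Both masks empty: perfect agreement by convention.
    return tp / denom if denom else 1.0

def class_weights(n_cancer_px, n_normal_px):
    """Weights making the total weight of cancer pixels equal to that
    of normal pixels, as in the balancing strategy described above.
    The dictionary form is illustrative; the actual training framework
    may expect a different structure."""
    return {"cancer": n_normal_px / n_cancer_px, "normal": 1.0}
```

With, say, 100 cancer pixels and 900 normal pixels, the cancer class receives a weight of 9, so both classes contribute equally to the loss.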
For each image size, the trained deep learning model, with and without the bone suppression algorithm, took an original image from the Test Data as input and output a prediction image. The blue areas show where the deep learning model predicted normal tissue, and the yellow areas show where it predicted cancer. From Figure 4, it can be seen that the number of false positives increased as the image size increased and was reduced by using images with the bone suppression algorithm. To compare the values of IoU with and without the bone suppression algorithm for each image size, a graph of the IoU is shown in Figure 5. For the [256, 256] images with the bone suppression algorithm, the IoU reached 0.1424, the highest accuracy. The IoU was improved from 0.0849 to 0.1424 by training the artificial intelligence on images with the bone suppression algorithm, compared with training on images without it. This confirms that applying the bone suppression algorithm improved the value of IoU. Also, the average value of IoU increased by 0.0139.

Classification
Table 4 shows the values of AUC without and with the bone suppression algorithm for each image size. Since the classification is performed using the output image of the segmentation as input, the classification performance depends on the quality of the segmentation. The value of AUC tended to decrease as the image size increased, both without and with the bone suppression algorithm. It was confirmed that applying the bone suppression algorithm improved the value of AUC. The segmentation results also showed an improvement in the value of IoU, so this is considered a reasonable result.
The graphs of the AUC without and with the bone suppression algorithm are shown in Figure 6. From Figure 6, it was confirmed that applying the bone suppression algorithm increased the value of AUC for every image size. Also, the average value of AUC increased by 0.0907. In the classification evaluation, the AUC reached 0.7364, the most accurate value in our research, for the [256, 256] images with the bone suppression algorithm. The value of AUC was improved from 0.7009 to 0.7364 by training the artificial intelligence on images with the bone suppression algorithm, compared with training on images without it.

Conclusion
In this paper, we proposed a computer-aided detection method for lung tumors in chest X-ray images by developing an artificial intelligence model coupled with a bone suppression algorithm. The experiments showed that the proposed method improved the ability to detect the location of lung cancer and to classify the lung cancer.
In future studies, the bone suppression algorithm should be refined to further improve the performance in detecting and classifying lung tumors.