Paper (Open access)

Fibre-optic based particle sensing via deep learning


Published 30 September 2019 © 2019 The Author(s). Published by IOP Publishing Ltd
Citation: James A Grant-Jacob et al 2019 J. Phys. Photonics 1 044004. DOI: 10.1088/2515-7647/ab437b


Abstract

We demonstrate the identification of single particles, via a neural network, directly from the backscattered light collected by a 30-core optical fibre, when the particles are illuminated using a single-mode fibre-coupled laser light source. The neural network was shown to be able to determine the specific species of pollen with ∼97% accuracy, along with the distance between the end of the 30-core sensing fibre and the particles, with an associated error of ±6 μm. The ability to classify particles directly from backscattered light using an optical fibre has potential in environments in which transmission imaging is not possible or suitable, such as over opaque media, in the deep sea or in outer space.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

Introduction

Fibre-optic based sensors are ideal for worldwide deployment and sensing in a range of in situ environments owing to their low cost, light weight, small size and flexibility. Such fibre sensors can operate in strong electromagnetic fields [1], at high pressures [2], at high temperatures [3] and down to cryogenic temperatures [4], without reduction in sensing performance. In addition, they can operate via single-ended interrogation, such that one end of the fibre can be free to interact like a probe, thus allowing for in situ sensing in a variety of environments [5]. Sensing examples include stress and strain monitoring in marine and aircraft structures [6, 7], landslide detection [8], real-time humidity monitoring [9], hydrogen leak detection from rocket engines [10] and use in the oceans for sea temperature measurements [11].

Particle monitoring in a range of environments is particularly desirable, for example, monitoring the presence and characteristics of plastic microbeads in the sea [12] from cosmetics [13, 14] and from the breakdown of larger plastics [15], which can have a demonstrably negative impact on marine life [16–19]. Monitoring of this type can give an indication of the origin and levels of plastic pollution in the marine environment. In air, monitoring of particles, such as those generated by diesel engines, wood burning and pollen, is invaluable for understanding spatiotemporal variation in particulate concentrations, and thus human exposure, which is universally recognised as a current major global health problem. Air pollution (including the effect of gases such as nitrogen oxides) has been associated with increased risk of respiratory and cardiovascular diseases, several cancers, type 2 diabetes mellitus, and dementia [20].

When a particle is illuminated by light, the scattered light encodes information regarding the properties of the particle (such as its refractive index, structure and size) as well as the properties of the surrounding media in which the particle is immersed [21, 22]. It has previously been demonstrated that it is possible to count particles of size <10 μm by imaging their scattered light [23]. However, such particle detection systems generally use forward scatter, limiting their use in hard-to-reach environments. Combining particle sensing with fibre optics offers the unique opportunity for backscattered light detection, thereby allowing sensing in environments in which transmission imaging is not possible or suitable.

Deep learning convolutional neural networks [24, 25], inspired by the primate visual cortex [26], have gained much interest in the past few years, owing to their ability to classify a vast number of objects, each with an associated probability (confidence percentage), at a level of accuracy that outperforms humans [27, 28]. These types of networks have been implemented in real-time [23] and have been used in areas such as speech recognition [29], facial recognition [30], smile detection [31], video classification [32] and for identifying the songs of different birds [33, 34]. The ability to remotely update neural networks without requiring hardware modifications (and with minimal local processing power) is potentially cost effective and advantageous for global distribution [35].

In the field of optics, deep learning has led to classification and predictive capability in laser ablation [36–38], advances in microscopy [39, 40], label-free cell classification [41], object classification through scattering media [42–45] and through scattering pattern imaging of plastic microparticles, cells, spores and colloidal particles [46–51]. In the field of fibre-optics, deep learning is gaining interest [52], with work having been reported for perimeter monitoring [53], self-tuning mode-locked fibre lasers [54], and for fibre-optics being used to classify and reconstruct input handwritten digits and photographs from the speckle patterns transmitted through multimode fibre [55–57]. In addition, deep learning has been used in optical communications [58, 59], and more specifically, for real-time fibre mode demodulation [60], end-to-end fibre communications [61], and improvement in fibre transmission [62]. Imaging has also been demonstrated using microstructured fibre [63] and a multimode fibre array [64] with deep learning.

Here, we extend our previous work, in which deep learning pollution particle detection was carried out using free-space optics [23], and other work showing the capture and analysis of particles in the field using free-space optics [65], by demonstrating the ability to successfully classify real-world bio-aerosol particles (pollen grains) in real-time, via collection of their backscattered light using optical fibres. We show that the neural network can also determine the distance between the pollen particles and the end of the fibres (potentially allowing for 3D mapping), and we examine the robustness of the network by varying the ambient light levels using an additional white light source.

Experimental methods

Sample fabrication

Iva xanthiifolia and Populus deltoides pollen grains from Sigma Aldrich, and Narcissus pollen grains collected from the University of Southampton grounds, were deposited onto a substrate (a 25 mm by 75 mm, 1 mm thick soda-lime glass slide). The pollen grains ranged from ∼10 to ∼50 μm in size, and scanning electron microscope images of a selection of the particles are shown in the inset of figure 1. Each pollen type was deposited onto a separate region of the glass slide to ensure that the correct labels could be applied during training. The sample was placed on a three-axis stage (25 mm travel, 10 μm resolution) for positional control.

Figure 1.

Figure 1. (a) Schematic of setup for backscattered imaging of particles using a SMF for illumination of the particles and a 30-core sensing fibre for collecting the backscattered light. (b)–(d) Example images showing the light collected by the sensing fibre and captured by the camera for Narcissus, Iva xanthiifolia and Populus deltoides pollen grains, respectively, each with an SEM image inset showing a grain of the corresponding pollen type. (e) Microscope image of the 30-core sensing fibre.


Imaging setup

As shown in figure 1, light from a diode laser operating at 650 nm was launched into a single-mode optical fibre (SMF), while the other end of this illuminating fibre was placed <100 μm from the pollen-coated glass slide, producing a spot with a diameter of approximately 100 μm. The backscattered light from the particles was collected by a second fibre (referred to here as the sensing fibre). The sensing fibre is a passive multicore fibre (MCF) [66] with 30 cores of 4 different types, chosen to suppress the coupling between cores in long lengths of fibre. The MCF length was <1 m and, owing to the MCF core structure, inter-core coupling was negligible, potentially allowing higher accuracy for training and testing of the neural networks. However, other fibre bundle designs could have been used for this sensor. The residual light in the MCF cladding (i.e. not in the cores) was low, owing to the stripping effect of the high-index coating applied to the MCF during manufacture. The MCF was placed adjacent and parallel to the fibre light source (the SMF). Light transmitted from the end of the sensing fibre was imaged onto a CMOS colour camera (Thorlabs, DCC3260C, 1936 × 1216 pixels, 10 ms integration time) using a 4 mm focal length lens. The camera was connected to a computer to allow real-time processing of the experimentally recorded camera images via a neural network.

For particle species prediction, the light scattered from a particle varies depending on parameters such as particle size and type, and so the backscattered light from the laser-illuminated pollen, collected by the sensing fibre, also varies accordingly. Examples of these images are shown in figures 1(b)–(d). For the real-time demonstration of particle identification, experimental images were recorded from an area of the sample that was not used for training, but that also had separate regions for each type of pollen, so that it was possible to determine whether the identification was correct. Secondly, for distance prediction, the Populus deltoides pollen was used; the glass slide was translated along the z-axis (laser axis) with 200 images recorded at 0 μm, then 200 images recorded at 50 μm, and so on at every 50 μm step, over a total distance of 1 mm. Here, half of the recorded images were used for training and validation, and the other half for testing. In order to demonstrate the robustness of this approach, the ambient light level was changed using an additional white light source (a halogen lamp, I. + W. MUSTER Gdb, 150 W) that was directed towards the end of the sensing fibre from the rear of the glass slide, as shown in figure 1. Neutral density filters were placed in front of the source to vary the output level of the additional white light between 0.1% and 100% of the lamp's maximum output. Images of the scattered light transmitted through the sensing fibre were recorded for a total of 10 images per light level, for 11 different light levels and 3 pollen types. These images were trialled on the neural network that was trained with zero additional white light, i.e. only room light, in order to test the robustness of the network to environmental changes, such as light level changes.

Neural network

A convolutional neural network [27, 67] was used and trained on an NVIDIA RTX 5000 graphics processing unit, with the camera image as input and, as output, either the particle type (a confidence percentage for each of the 3 species of pollen) or the distance between the end of the sensing fibre and the particle (a single regression output [68]). Figure 2 shows a simplified schematic of the training procedure of the neural network for the case of identification of the pollen type. Images of the collected backscattered light were passed into a neural network until the prediction error—comparing the predicted output pollen type with the actual pollen type—was minimised.
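The two output modes correspond to two standard cost functions, confirmed later in the training description: cross-entropy for the three-way classification head and mean squared error for the single-output regression head. A minimal numpy sketch of both, using illustrative values only (not measured data):

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for the 3-way pollen classification head."""
    z = logits - logits.max()            # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax -> confidence percentages
    return float(-np.log(p[label]))

def mse(pred, target):
    """Mean squared error for the single-output distance regression head."""
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

# Illustrative values only
loss_cls = cross_entropy(np.array([2.0, 0.5, 0.1]), label=0)
loss_reg = mse([48.0, 455.0], [50.0, 450.0])  # distances in micrometres
print(round(loss_cls, 3), loss_reg)  # -> 0.317 14.5
```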

Figure 2.

Figure 2. Schematic of neural network training, for the case of identification of the pollen type.


More specifically, the images recorded on the camera were cropped to 501 × 501 pixels, resized to 256 × 256 pixels, and then normalised and centred linearly to have a mean of 0 and variance of 1 jointly across all colour channels. The neural network used in this work originated from the second version of ResNet-18 [69] proposed by He et al [70] (batch normalisation [71] was used for all networks as a regularisation technique, with a momentum of 0.95 and a mini-batch size of 32). For the classification task (identification of the type of particle), the neural network used was structurally identical to the second version of ResNet-18. For the regression task (predicting the distance between the sensing fibre and the particle), there was only one 1-by-1 filter at the last convolutional layer of the second version of ResNet-18, and hence the output of the regression network was a single number. Deep residual learning (i.e. ResNet) uses skip connections to address the degradation problem, which appears in many deep neural networks and can be described as performance no longer increasing in proportion to depth as the depth of the network is increased [70]. This type of structure reportedly works well in many use cases and was therefore chosen for this work. ResNet-18, as its name indicates, uses 18 convolutional layers. Apart from the first and the last layer, there were four groups of two connected residual modules, each of which contained two convolutional layers of 3-by-3 filters. Similar to the idea of stabilising the time complexity between layers in VGGnet [72], at the first layer of each group except the first, the stride of the filters was set to two in order to halve the size of the feature maps, while the number of filters at all convolutional layers in that group was doubled in comparison with the previous group, thereby doubling the total number of feature maps.
The weight initialisation policy followed [73], and the parametric rectifier (PReLU) from the same paper was adopted in place of the ReLU activation functions [74] used in [69]. The ADAM optimiser [75] was used, with a learning rate of 0.0001. The cost functions used were cross-entropy for the classification task and mean square error for the regression task. Neither dropout nor L1/L2 regularisation was used in this work. Training ran for 1 epoch and took 5 min.
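The preprocessing pipeline above (crop to 501 × 501 pixels, resize to 256 × 256, then joint zero-mean, unit-variance normalisation across all colour channels) can be sketched as follows. The centred crop and nearest-neighbour resizing are assumptions, as the crop position and interpolation method are not specified in the text:

```python
import numpy as np

def preprocess(frame, crop=501, out=256):
    """Centre-crop the raw camera frame to crop x crop pixels, resize to
    out x out (nearest neighbour, an assumption), then normalise jointly
    across all colour channels to zero mean and unit variance."""
    h, w, _ = frame.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = frame[top:top + crop, left:left + crop].astype(np.float64)
    idx = (np.arange(out) * crop / out).astype(int)   # nearest-neighbour grid
    small = patch[np.ix_(idx, idx)]
    return (small - small.mean()) / small.std()       # joint over all channels

# Synthetic 1216 x 1936 RGB frame at the camera's resolution
frame = np.random.randint(0, 255, (1216, 1936, 3), dtype=np.uint8)
x = preprocess(frame)
# x has shape (256, 256, 3), with mean ~0 and standard deviation ~1
```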

The data collection and training were split into two cases: firstly particle identification, and secondly distance prediction. In the first case, across the 3 different types of pollen, a total of 1500 images were collected, 90% of which were used for training and 10% for validation of the neural network. For training the neural network for distance prediction, out of the 2200 images collected, 90% were used for training and 10% for validation.
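The 90/10 training/validation split can be sketched as a seeded shuffle-and-partition; the file names below are hypothetical placeholders:

```python
import random

def split(items, train_frac=0.9, seed=0):
    """Shuffle and split a list of image files into training and validation sets."""
    rng = random.Random(seed)             # fixed seed for a reproducible split
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 1500 classification images, as in the first case above (names hypothetical)
train, val = split([f"pollen_{i:04d}.png" for i in range(1500)])
print(len(train), len(val))  # -> 1350 150
```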

Results and discussion

The results for real-time identification of pollen grain type are shown in figure 3. The inset to figure 3 displays a 10x magnification microscope image indicating the distribution of particles over the surface of the glass slide, showing the variation in size, shape and orientation of the particles. Here, the stage was moved randomly across a region of the sample that was not used when obtaining images for the training data set. When the laser light is incident on a particle, the backscattered light is collected by the camera, passed into the neural network, and a prediction of the pollen type is made. (The pollen type prediction is compared to ground truth via a priori knowledge of the pollen location on the slide.) Out of 30 measurements, 29 predictions were correct (∼97% accurate, compared with 99.3% accuracy for the training dataset and 99.9% for the validation dataset), with a mean confidence percentage for correct prediction of 85.8%. The incorrect measurement was a prediction of Narcissus, when the actual pollen type was Populus deltoides. This is perhaps due to the variability in the shape of Populus deltoides pollen grains, which may mean that, for certain orientations and sizes, their scattering features appear similar to those of Narcissus. Increasing the amount of training data will likely lead to an increase in prediction accuracy of the neural network [76], while increasing the number of light sensing fibres could also increase the amount of the scattered light that is collected (thereby providing a larger effective numerical aperture) and therefore also improve the spatial resolution with which the particles are sampled.

Figure 3.

Figure 3. Capability of the trained neural network for predicting the type of pollen, in real-time, for 30 separate predictions, showing the associated confidence percentages. A pink outline indicates an incorrect prediction. Each measurement took 10 milliseconds. Inset: microscope images indicating the variation of pollen grain size, shape and orientation.


As demonstrated in previous work [23], when a neural network is applied to an image of a scattering pattern from a particle that was not observed during training, the neural network will produce a confidence percentage for each of the known particles, based on how closely the scattering pattern from the unknown particle matches features from the scattering patterns from the known particles. In the previous work [23], the neural network, which was trained on forward scattered light from a range of pollen particles, was provided with an image of a scattering pattern from a 5 μm diameter polystyrene microsphere. The neural network produced similar confidence percentages for all known particles (the highest being 46% for the particle type of wood ash), hence implying that scattering pattern from the 5 μm diameter polystyrene microsphere most closely matched the scattering pattern expected from wood ash. As the neural network was not trained on scattering data from polystyrene microspheres, it was incapable of correctly identifying that type of particle.

To extend this discussion, we tested the neural network on the backscatter collected from a blank glass slide. Once again, as the neural network was not provided with labelled training data corresponding to the blank glass slide, it was unable to identify this type. Interestingly, in this case, the network predicted Populus deltoides with a confidence percentage of 99.98% for the blank glass slide, hence implying that features in the scattering pattern from the blank glass slide matched features in the scattering patterns from Populus deltoides. This similarity can be observed in figure 4, which shows the light collected by the sensing fibre and captured by the camera for Narcissus, Iva xanthiifolia and Populus deltoides pollen grains and the blank slide, respectively. Some specific regions in the images that differ for the blank slide and Populus deltoides, compared with the images for Narcissus and Iva xanthiifolia, are highlighted.

Figure 4.

Figure 4. The light collected by the sensing fibre and captured by the camera for Narcissus, Iva xanthiifolia, Populus deltoides pollen grains and the blank slide, showing the similarity between the latter two. The brightness and contrast levels of the images have been adjusted (equally across these images, but differently from the other figures) for ease of analysis.


To minimise the chances of such false identifications, a broader range of particles could be used for training, the confidence percentage cut-off for positive identification could be set sufficiently high, and an unknown (null) category could be created for the neural network.

The capability for detecting the distance between the end of the sensing fibre and a particle, in combination with the previously demonstrated capability for identification of particle type, offers the potential for simultaneous 3D spatial mapping and identification of particles. Figure 5(a) shows the capability of the trained neural network in determining the distance of the Populus deltoides pollen from the end of the sensing fibre, and (b) shows examples of the collected light imaged onto the camera for 3 different distances. The neural network is shown to be capable of accurately determining the distance, with a standard deviation in the error of ±6 μm. The method could be extended to greater distances, provided that data were collected at those distances and that sufficient signal was recorded by the camera. Although the signal-to-noise ratio of the light scattered from the particles would inevitably decrease as the distance is further increased, this could be counteracted by using a more powerful laser beam or by adding additional imaging optics onto the end of the fibre.
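The quoted ±6 μm is the standard deviation of the prediction error over the test set. A minimal sketch of that statistic, using hypothetical predictions at three of the 50 μm test steps (not measured data):

```python
import numpy as np

def error_spread(predicted, actual):
    """Mean and standard deviation of the distance-prediction error (in um)."""
    err = np.asarray(predicted, dtype=float) - np.asarray(actual, dtype=float)
    return float(err.mean()), float(err.std())

# Hypothetical network outputs versus true stage positions
bias, spread = error_spread([48.0, 455.0, 846.0], [50.0, 450.0, 850.0])
print(round(bias, 2), round(spread, 2))  # -> -0.33 3.86
```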

Figure 5.

Figure 5. (a) Demonstration of the capability of the trained neural network to predict the distance between the end of the sensing fibre and the Populus deltoides pollen, with a standard deviation in the error of ±6 μm and R-value 0.9976. (b) Examples of camera images showing the collected backscattered light with distances of 50, 450 and 850 μm between the end of the sensing fibre and the pollen.


To test the robustness of the neural network under different environmental conditions, and to demonstrate its capability in real-world sensing, a white light source was introduced behind the sample (see figure 1) in order to vary the ambient light levels and thus change the collected backscatter signal, similar in effect to the variation in sunlight throughout the day. The neural network used in this case was the same as that used for figure 3, and hence was trained with zero additional white light, i.e. only room light. Figure 6(a) shows the accuracy of the prediction for the three different types of pollen grain at a fixed position as a function of the level of additional white light. Figure 6(b) shows example images of the backscattered light from Narcissus pollen, illuminated at three different levels of additional white light. The results show that even at an illuminance equivalent to that of full daylight, the neural network was still able to identify Narcissus and Iva xanthiifolia. Interestingly, for Populus deltoides, although the confidence percentage decreases at first as the illuminance is increased from low levels of additional white light, it increases again when the level of additional white light is increased from daylight to direct-sunlight levels of illuminance and above. This is perhaps due to the wide range of features and signal intensities in the backscattered light images that the neural network associates with each type of pollen. Indeed, when the additional white light at maximum illuminance was directed at the end of the sensing fibre, without laser illumination and in the absence of pollen, the neural network predicted Populus deltoides with 100% confidence from the captured image. Whilst the power levels of the white light source used in figure 6 were similar to those in the real world, the spectrum, and hence colour balance, of real-world light sources would depend on the mechanism by which the light was generated.
Thus, the neural network accuracy and associated confidence percentages may differ when using real-world light sources. To increase the prediction accuracy, the neural network could be trained on varying light levels for each pollen type. For potential sensor applications, environmental conditions are expected to vary, i.e. light levels and colour at dawn may differ considerably from those at midday, and ambient temperature may also have an effect. The humidity and the temperature will affect the refractive index of the particles, and hence the scattered light pattern recorded on the camera. With appropriate training data, the neural network could also be applied to determine these additional parameters. Additional training data corresponding to different environmental conditions could be created via augmentation of the existing data [77–79], in order to improve accuracy for real-world measurements without the need to collect very large datasets. In addition, to reduce the effects of sunlight, a wavelength filter could be installed in front of the sensor to block out any light at wavelengths different from that of the laser.
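As a concrete example of such augmentation, extra ambient light could be approximated by adding a uniform brightness offset plus shot-like noise to each existing training image. This additive model is an assumption for illustration, not the procedure used in this work:

```python
import numpy as np

def add_ambient(image, offset, seed=0):
    """Simulate additional ambient white light on an 8-bit camera image by
    adding a uniform brightness offset plus Gaussian shot-like noise."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(float) + offset + rng.normal(0.0, np.sqrt(offset), image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

dark = np.zeros((256, 256, 3), dtype=np.uint8)
bright = add_ambient(dark, offset=40.0)
print(30.0 < bright.mean() < 50.0)  # the offset raises the mean pixel level
```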

Figure 6.

Figure 6. (a) Demonstration of the capability of the trained neural network for identification of the type of pollen grains in the presence of varying levels of additional white light. (b) Example images of backscattered light from Narcissus pollen illuminated with additional white light levels of 7, 70 and 700 W m−2. Values of real-world illuminance taken from [80–82].


Conclusion

In conclusion, we have demonstrated a remote, fibre-coupled method for the classification of particles via analysis of their backscattered light. The illumination source is a fibre-coupled laser diode, and the backscattered light is collected via a 30-core MCF. The trained neural network was able to identify pollen grain type in real-time with an accuracy of ∼97%. The capability for determining the particle distance from the end of the sensing fibre was also demonstrated, with an accuracy of ±6 μm. In addition, the neural network was shown to be robust in the presence of varying ambient white light levels. The combination of these two techniques could allow the simultaneous identification and 3D spatial mapping of particles in challenging and hostile environments.

Acknowledgments

BM was supported by an EPSRC Early Career Fellowship (EP/N03368X/1). ML was supported by a BBSRC Future Leader Fellowship (BB/P011365/1) and a Senior Research Fellowship from the National Institute for Health Research Southampton Biomedical Research Centre. We gratefully acknowledge the support of Fujikura Europe Ltd, who donated the 30-core MCF. Data supporting this manuscript is available at https://doi.org/10.5258/SOTON/D0759

Conflicts of interest

The authors declare no conflict of interest.
