Tracking the Motion of Large-Scale Lightning Events based on the Optical Flow Method

Lightning is a meteorological phenomenon that can cause property damage and loss of life. Tracking lightning and issuing warnings is an important issue in safety control and management in many fields, and several devices and corresponding analysis methods have been proposed for this purpose. At present, satellites and radars are the two main devices used to track and warn of lightning. However, their output is only an indirect signal for tracking and warning, so systems based solely on these two types of devices suffer from false positives. Instead, this work proposes a new method. We use another kind of data, from the lightning locator, which is the only device that can accurately record the time, position, and strength of a lightning occurrence. To exploit this type of data effectively, we propose an integrated method based on the optical flow method. Experimental results on a real lightning process demonstrate our method's performance. Unlike traditional methods that track only a representative center standing for the motion of a group of lightning events, the most outstanding advantage of our method is its ability to detect the minor motions of large-scale lightning events. It can therefore be used for fine-grained lightning warning.


Introduction
Lightning is a meteorological phenomenon of sudden electrostatic discharge. Cloud-to-ground lightning (CG lightning) is listed among the ten most severe natural calamities in the world. According to statistics from the National Oceanic and Atmospheric Administration (NOAA), casualties caused by CG lightning have outnumbered those caused by cyclones and tornadoes. Tracking the moving route of lightning events and issuing alerts can reduce their impact on lives and property. Currently, it is impossible to forecast the exact position of an individual lightning event because of the randomness and transience of lightning occurrence. The practicable way is to monitor thunderstorm clouds with devices such as radars and satellites [1]. However, the electrically charged region of a thunderstorm cloud is a necessary but not sufficient condition for lightning: a thunderstorm cloud does not necessarily produce lightning, so warning schemes that rely only on recognizing thunderstorm clouds always suffer from false positives. The lightning locator is the only device that can accurately record the location and time of a lightning event [2]. Although it is impossible to track a single lightning event, lightning events tend to concentrate in a region over a period of time and to move as a group. Therefore, motion tracking techniques for groups of point objects can be used to track the overall motion of large-scale lightning events.

(2nd International Conference on Oil & Gas Engineering and Geological Sciences, IOP Conf. Series: Earth and Environmental Science 558 (2020) 042028, doi:10.1088/1755-1315/558/4/042028)
Motion tracking techniques have been successfully applied in many fields. Tracking and estimating the movement of sea ice is indispensable for understanding the exchange of mass, energy, and momentum between the atmosphere and the ocean [3][4]. Synthetic Aperture Radar (SAR) provides active microwave images of sea ice motion, and a region-based method that analyzes non-overlapping local pixel neighborhoods estimates sea ice motion on a regular grid lattice. For animal migration tracking, Zhang [5] uses image segmentation to detect the moving target and then tracks it with a centroid-based tracking algorithm. Besides the methods above, the optical flow method is a popular approach to motion tracking [6][7][8][9][10][11].
With regard to lightning tracking, existing methods are mainly based on spatial clustering [12] and neural networks [13]. Spatial clustering aggregates the point data representing lightning events into several clusters under a certain distance scheme and then tracks the route of each cluster's centroid. TITAN [14] is a typical centroid-based method; it fits every lightning group to an ellipse. Representing a group of lightning events by a centroid and an ellipse generalizes well, but it loses local and detailed motion information. Neural networks have also been applied to lightning tracking. The output of the neural network is classified into four directions: north-east, north-west, south-east, and south-west. After training, the direction of newly arriving lightning is predicted as one of the four candidates. Clearly, such a direction division is too coarse to meet the need for accurate lightning warning.
In this paper, we focus on tracking large-scale lightning events with the optical flow method. To the best of our knowledge, no previous work has applied the optical flow method to tracking large-scale lightning events based on lightning locator data.

Method
The lightning locator data is time series data that records the time, position, and strength of each lightning occurrence. The combination of lightning locator data and the optical flow method is well suited to lightning motion tracking. First, compared with existing methods, our input data is more accurate than indirect signals such as satellite cloud images or meteorological radar echoes. Second, given that lightning events move as non-rigid clusters of multiple objects that randomly split or merge, the optical flow method is more effective for several reasons: (a) it does not need to solve the centroid recognition problem; (b) it estimates the motion of all pixels, so local motion information is never missed; and (c) it works better when the lightning units deform.

The Framework of Tracking Method.
Here we present the proposed tracking method for large-scale lightning events. We call a single lightning occurrence a lightning event and a group of nearby lightning events a lightning unit. The first operation in the tracking system is to transfer lightning events into point data according to their geographic coordinates. Some pre-processing then has to be done. Along the time dimension, lightning events are divided into frames at a proper interval, which prepares the data for optical flow. Mean filtering is the second operation, turning the discrete point data into smoother gray-scale pixel data. The morphological closing operation is then applied to eliminate small lightning units so that the main, large lightning units stand out in each frame. Optical flow estimation, the essential operation, is executed next. Finally, the velocity vectors are output as the result. The workflow is shown as follows.
Here, f(n) gives the time of the first event in frame n, and the index of a lightning frame represents its time order. By comparing the overall positions of lightning events in two neighbouring frames, the motion of lightning events can be tracked frame by frame. To reduce the inference difficulty caused by a large time span between two frames, we make neighbouring frames partially overlap.
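The frame-division step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: events are (time, x, y) tuples, and the 15-minute width and 1-minute step (yielding a 14-minute overlap, the values used later in the experiment) are assumptions for the example.

```python
def divide_into_frames(events, width=15.0, step=1.0):
    """Split (t, x, y) events into overlapping time windows.

    Each frame covers [start, start + width); consecutive frames are
    shifted by `step`, so they overlap by (width - step) minutes.
    """
    if not events:
        return []
    t0 = min(e[0] for e in events)
    t_end = max(e[0] for e in events)
    frames = []
    start = t0
    while start <= t_end:
        frames.append([e for e in events if start <= e[0] < start + width])
        start += step
    return frames

# hypothetical events: (time in minutes, x, y)
events = [(0.0, 1, 1), (5.0, 2, 2), (14.0, 3, 3), (16.0, 4, 4)]
frames = divide_into_frames(events, width=15.0, step=1.0)
# frame 0 covers [0, 15) and holds the first three events
```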

Event to Point.
When estimating motion, the optical flow method works in pixel space, so all lightning events must be transferred from geographic space to points in pixel space. Once the lightning motion has been estimated in pixel space, we also want to know where the lightning will arrive in the real world; this requires the reverse transfer back to geographic space so that a warning can be issued to nearby organizations or people. This step therefore constructs the mapping between the monitored geographic area and a pixel rectangle of a given size for the optical flow computation. Suppose the monitored area is bounded by four parameters, its minimum and maximum longitudes and latitudes. The pixel rectangle is a matrix with N_r rows and N_c columns; N_r and N_c control the resolution. For each lightning event (x, y, t) in R_n, its mapped location (x', y') in the rectangle matrix is computed by linearly rescaling (x, y) from the geographic bounds to the column and row ranges of the matrix. Initially, all mapped points are assigned the gray value 255. After the transfer, the result I_n corresponds to the nth frame in the form of point data.
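A sketch of this event-to-point transfer is given below. The bound names (lon_min, lon_max, lat_min, lat_max) and the truncation-based rounding are assumptions; the paper only states that each event (x, y, t) is linearly mapped into an N_r × N_c matrix and marked with gray value 255.

```python
import numpy as np

def events_to_points(events, lon_min, lon_max, lat_min, lat_max, n_r, n_c):
    """Map geographic events (lon, lat, t) to a gray-scale point image."""
    img = np.zeros((n_r, n_c), dtype=np.uint8)
    for lon, lat, _t in events:
        # linear rescale from geographic bounds to matrix indices
        col = int((lon - lon_min) / (lon_max - lon_min) * (n_c - 1))
        row = int((lat - lat_min) / (lat_max - lat_min) * (n_r - 1))
        img[row, col] = 255  # each mapped point starts at gray value 255
    return img

# hypothetical bounds and a single event
img = events_to_points([(112.0, 29.5, 0.0)], 111.5, 113.5, 29.0, 31.0, 100, 100)
```

The inverse mapping (pixel back to longitude/latitude) is the same rescaling applied in the opposite direction, which is what allows a warning to be issued at a real-world location.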

Point to Pixel.
The optical flow method cannot work well if the pixel data consists only of discrete points that all share the same gray level, because the gradient information is absent. We therefore fill the 'blank' space among the 'gray' points to increase the gray-gradient information. Several methods can perform this interpolation; here we use the image filtering technique. Image filtering can be used for (a) suppressing noise in homogeneous regions, (b) preserving edges (spatial or temporal), and (c) removing impulses (of constant or random value) [15]. Although image filtering is mainly used to remove noise, its smoothing effect lets us interpolate richer pixel data, in both pixel count and gray level, among the neighbours of the scattered points. Compared with alternatives such as the Inverse Distance Weighted (IDW) algorithm, whose time complexity is O(n), the filtering method is faster, with per-pixel time complexity O(1).
Here, we choose the mean filter rather than the Gaussian filter, because the Gaussian filter puts more weight on the center pixel of the mask.
Given the averaging operator

M = (1/9) × [1 1 1; 1 1 1; 1 1 1],

the mean filter convolves M with I_n.
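The mean filtering of I_n can be sketched with a plain NumPy convolution. The 3 × 3 averaging kernel and zero-padded borders are assumptions for illustration; in practice a library routine such as a uniform filter would be used.

```python
import numpy as np

def mean_filter(img, k=3):
    """Convolve img with a k x k averaging kernel (zero-padded borders)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="constant")
    out = np.zeros(img.shape, dtype=np.float64)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = padded[r:r + k, c:c + k].mean()
    return out

# a single bright lightning point spreads gray values to its 8 neighbours,
# creating the gray gradient the optical flow method needs
point_img = np.zeros((5, 5))
point_img[2, 2] = 255.0
smooth = mean_filter(point_img)
```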

Morphological Close.
Mathematical morphology is a branch of digital image processing based on lattice theory and topology; it includes extracting features and shapes from an image for pattern recognition and segmentation. In our proposed system, we use the morphological closing operation to remove small lightning units, because these units are trivial and discontinuous in space-time, and their motion is useless for tracking the overall motion.
The shape of a lightning unit changes over time, which makes it harder to recognize. This problem is more pronounced when a big mask is used, and it can be alleviated simply by adjusting the size of the mask. In our experiment, a disk-shaped mask of 2-pixel radius is chosen. In this operation, D is the mask for the morphological closing operation on J.
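The closing operation (dilation followed by erosion with the disk mask D) can be sketched as below. The naive loop implementation and the binary test frame are illustrative assumptions; only the 2-pixel-radius disk comes from the experiment.

```python
import numpy as np

def disk(radius):
    """Boolean disk-shaped mask of the given pixel radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return (x * x + y * y) <= radius * radius

def dilate(img, mask):
    pad = mask.shape[0] // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros(img.shape, dtype=bool)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.any(padded[r:r + mask.shape[0],
                                      c:c + mask.shape[1]][mask])
    return out

def erode(img, mask):
    pad = mask.shape[0] // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros(img.shape, dtype=bool)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.all(padded[r:r + mask.shape[0],
                                      c:c + mask.shape[1]][mask])
    return out

def close(img, mask):
    """Morphological closing: dilation followed by erosion."""
    return erode(dilate(img, mask), mask)

# a solid lightning unit with a one-pixel hole: closing fills the hole
# (with zero padding, erosion also shrinks the frame borders)
frame = np.ones((7, 7), dtype=bool)
frame[3, 3] = False
closed = close(frame, disk(2))
```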

Optical Flow Method
After the morphological closing operation, optical flow can be applied to the processed frames. The optical flow method assumes that the brightness intensity is invariant in the neighbourhood of a displaced pixel in the motion field. This assumption is described by the optical flow constraint equation:

K(x, y, t) = K(x + Δx, y + Δy, t + Δt)    (5)

where K(x, y, t) is the brightness of the pixel at (x, y) in frame t. Expanding the right-hand side of equation 5 with a Taylor series gives:

K(x + Δx, y + Δy, t + Δt) ≈ K(x, y, t) + K_x Δx + K_y Δy + K_t Δt    (6)

Combining equations 5 and 6 and dividing by Δt yields:

K_x u + K_y v + K_t = 0    (7)

where [u, v] = [Δx/Δt, Δy/Δt] is the optical flow vector. To accelerate computation, we assume that neighbouring pixels in a 5 × 5 patch share the same motion, so equation 7 holds for every pixel i in the patch:

K_x(i) u + K_y(i) v + K_t(i) = 0,  i = 1, …, 25    (8)

Equation 8 requires K_x, K_y, and K_t. K_x and K_y are computed with a convolution mask (e.g. the Roberts operator), and K_t by subtracting the current frame from the next frame. With these three derivatives, the remaining task is to compute u and v from equation 8. In practice, equation 8 cannot be satisfied exactly because of noise, so we use the linear least squares method to obtain the optimal solution. Mathematically, linear least squares approximately solves an overdetermined system of linear equations, where the best approximation is defined as the one minimizing the squared sum of differences between the values given by the model and the data points.
In the ideal case, equation 7 holds exactly. In practice, the left-hand side of equation 7 measures the deviation from the ideal value, so we define the error term as:

E = Σ (K_x u + K_y v + K_t)^2    (9)

where Σ denotes summation over the patch. The optimization objective becomes:

min_{u,v} E    (10)

Taking the derivatives of E with respect to u and v gives, respectively:

∂E/∂u = 2 Σ K_x (K_x u + K_y v + K_t)    (11)
∂E/∂v = 2 Σ K_y (K_x u + K_y v + K_t)    (12)

Setting terms 11 and 12 to zero simultaneously, to minimize equation 9, yields equation 13:

[ Σ K_x²    Σ K_x K_y ] [u]   = − [ Σ K_x K_t ]
[ Σ K_x K_y  Σ K_y²   ] [v]       [ Σ K_y K_t ]    (13)

From equation 13 we further obtain equation 14:

[u]   = − [ Σ K_x²    Σ K_x K_y ]⁻¹ [ Σ K_x K_t ]
[v]       [ Σ K_x K_y  Σ K_y²   ]   [ Σ K_y K_t ]    (14)
Finally, we get the result u and v.
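The least-squares step of equations 9-14 can be sketched for one patch as follows. The synthetic derivatives are an assumption made so the recovered flow can be checked: for a pure translation (u, v), K_t = −(K_x u + K_y v) holds exactly.

```python
import numpy as np

def lucas_kanade_patch(kx, ky, kt):
    """Solve [u, v] minimizing sum((Kx*u + Ky*v + Kt)^2) over a patch.

    Builds the 2x2 normal equations of equation 13 and solves them;
    np.linalg.solve raises an error if the patch is flat (singular A).
    """
    kx, ky, kt = (a.ravel().astype(np.float64) for a in (kx, ky, kt))
    A = np.array([[np.sum(kx * kx), np.sum(kx * ky)],
                  [np.sum(kx * ky), np.sum(ky * ky)]])
    b = -np.array([np.sum(kx * kt), np.sum(ky * kt)])
    return np.linalg.solve(A, b)

# synthetic 5x5 patch translating by (u, v) = (1.0, 0.5)
rng = np.random.default_rng(0)
kx = rng.normal(size=(5, 5))
ky = rng.normal(size=(5, 5))
kt = -(kx * 1.0 + ky * 0.5)
u, v = lucas_kanade_patch(kx, ky, kt)
```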

Experiment
To validate our method, we collected a set of lightning events from a process that occurred in the border region between Hubei and Hunan provinces in China. The lightning process started at 18:30 on 23 August 2014. The monitoring region covers a rectangular area from N29˚00ʹ01ʺ, E111˚30ʹ01ʺ to N30˚99ʹ63ʺ, E113˚30ʹ00ʺ. Figure 2 presents the distribution of all lightning events in this process.

Figure 2. The distribution of the lightning events used for validation.
Before tracking how these lightning events move over time, the dataset is divided into 51 frames with a time width of 15 minutes. Neighboring frames overlap by 14 minutes, which avoids the problem of having to find corresponding points between neighboring frames. Considering that the motion estimated by optical flow between neighboring frames is too small to be observed visually, while the tendency of lightning motion does persist, we use a prediction-based scheme to evaluate our method. First, the 51 frames are split into 5 groups at 10-frame intervals: in the first group, the first frame's id is 1 and the last frame's id is 11; the second group starts from frame 11 and ends with frame 21, and so on. Second, we select the middle frames of each group to produce the velocity vector; for group 1, the middle frames are numbers 5 and 6. If the velocity vector produced from the middle frames is consistent with the displacement of the lightning units between the first and last frames of the group, the estimation is considered correct. Take the first group as an example. In figure 3(c), the lightning unit in the top-left corner of frame 1 is indicated by a velocity vector trending toward the bottom right. We then investigate this lightning unit's change of position between frames 1 and 11: it moves from center coordinate (15, 40) in frame 1 to (25, 45) in frames 5 and 6, and further to (35, 50) in frame 11. This chain of position changes is consistent with the bottom-right trend. More importantly, besides the moving direction, our method also gives the moving speed: the arrow's length reflects how fast a unit moves. The left-to-right speed component is larger than the top-to-bottom one, so the unit's displacement to the right is larger. Similarly, we produce velocity vector estimates for the remaining 4 groups, shown in figure 4. The consistency between the motion estimation and the real displacement holds perfectly in all of them.
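The grouping scheme described above can be sketched as follows; the dictionary layout is an illustrative assumption, while the group boundaries (1-11, 11-21, …, 41-51) and the middle frames (5 and 6 for group 1) follow the text.

```python
def make_groups(n_frames=51, group_span=10):
    """Split frame ids 1..n_frames into groups of group_span+1 frames.

    Consecutive groups share their boundary frame; the two middle
    frames of each group drive the velocity-vector estimation.
    """
    groups = []
    start = 1
    while start + group_span <= n_frames:
        ids = list(range(start, start + group_span + 1))
        mid = (ids[len(ids) // 2 - 1], ids[len(ids) // 2])
        groups.append({"frames": ids, "middle": mid})
        start += group_span
    return groups

groups = make_groups()
# group 1 spans frames 1..11 with middle frames 5 and 6
```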