A Bayesian Approach to Period Searching in Solar Coronal Loops

Published 2017 February 28 © 2017. The American Astronomical Society. All rights reserved.
Citation: Bryan Scherrer and David McKenzie 2017 ApJ 837 24. DOI: 10.3847/1538-4357/aa5d59


Abstract

We have applied a Bayesian generalized Lomb–Scargle period searching algorithm to movies of coronal loop images obtained with the Hinode X-ray Telescope (XRT) to search for evidence of periodicities that would indicate resonant heating of the loops. The algorithm's only assumption is that there is a single sinusoidal signal within each light curve of the data. Both the amplitudes and the noise are taken as free parameters. It is argued that this procedure should be used alongside Fourier and wavelet analyses to more accurately extract periodic intensity modulations in coronal loops. The data analyzed are from XRT Observation Program #129C: "MHD Wave Heating (Thin Filters)," which occurred on 2006 November 13 and focused on active region 10293, which included coronal loops. The first data set spans approximately 10 min with an average cadence of 2 s and 2'' per pixel resolution, and used the Al-mesh analysis filter. The second data set spans approximately 4 min with a 3 s average cadence and 1'' per pixel resolution, and used the Al-poly analysis filter. The final data set spans approximately 22 min at a 6 s average cadence, and used the Al-poly analysis filter. In total, 55 periods of sinusoidal coronal loop oscillations between 5.5 and 59.6 s are discussed, supporting proposals in the literature that resonant absorption of magnetic waves is a viable mechanism for depositing energy in the corona.


1. Introduction

The physics behind coronal heating is one of the major outstanding problems in solar physics. There are three primary models that attempt to explain this heating—magnetic reconnection, spicules, and wave heating—though there is as yet no theory that has consistent predictive power.

This study tests the resonant absorption model of coronal heating, a method of wave heating first developed by James Ionson (1978) when he worked out the theory behind irreversible heating by Alfvénic surface waves in coronal loops. Grossman & Smith (1988) then expanded on this model by looking at a particular mechanism, the resonant absorption of shear Alfvén waves. The primary problem in the model, however, is that there has been no dissipation mechanism derived from the ideal theory of magnetohydrodynamics (MHD). Alfvénic waves carry the energy needed to heat the corona, but in theory tend to carry it out into the solar wind rather than depositing that heat into the corona.

The dissipation of Alfvénic waves in a loop of plasma can be modeled similarly to an electromagnetic wave in a resonant cavity perturbed at both ends by a white spectrum of mechanical driving. The source of the wave energy is generally considered to be mechanical perturbation of line-tied magnetic loops by convective motions in/beneath the photosphere; the loops will tend to respond at their global mode frequencies (Ofman et al. 1996). The global mode is given by $\tau =\tfrac{2L}{\langle {v}_{A}\rangle }$, where $\langle {v}_{A}\rangle $ is the average Alfvén speed throughout the loop, and L is the length of the loop. Waves oscillating with period τ are then more able to effectively transmit their energy from the loop to the corona. Studies of the effect of the global mode on coronal loop heating, in both linear and nonlinear regimes, have been done by Ofman & Davila (1995).
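As a quick illustration of the global-mode formula, consider a loop with round-number properties (the length and Alfvén speed below are assumed values for illustration, not measurements from this study):

```python
# Global-mode period: tau = 2L / <v_A>
# Illustrative values only: a 100 Mm loop with <v_A> = 1000 km/s (assumed).
L = 100e6        # loop length in meters
v_A = 1.0e6      # average Alfven speed in m/s
tau = 2 * L / v_A
print(tau)       # 200.0 -> a period within Ionson's ~30-450 s heating range
```

A 100 Mm loop with a 1000 km/s average Alfvén speed thus resonates at 200 s, comfortably inside the range Ionson estimated for coronal heating.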

The rate at which the coronal loop is irreversibly heated was modeled by Ionson as

where RL is the length and height of the loop, k is the wave vector perpendicular to the loop's magnetic field, and qs is the coupling rate of the wave to the surrounding plasma. Ionson also used the physical and geometric properties of the resonant cavity to estimate that the global modes must have a range of periods of ∼30–450 s in order to heat the corona. Experimental evidence presented by McKenzie & Mullan (1997) indicated that these global modes can have periods as low as 9.6 s.

Finding global modes thus requires not merely a spatial resolution sufficient to resolve individual coronal loops and an instrument with a temperature response capable of observing the hot plasma in the loops, but also a temporal resolution high enough to resolve the predicted periods of the global modes.

Such studies have been done; examples include McKenzie & Mullan (1997), who observed periodicities between 9.6 and 61.6 s in X-ray observations of coronal loops from the Yohkoh SXT instrument, and Savcheva & DeLuca (2008), who reported periodicities of 5 min in Hinode X-ray Telescope (XRT) data. The XRT has an angular resolution of 1'' per pixel (Golub et al. 2007) compared to a resolution of 2.45'' per pixel for the SXT (Tsuneta et al. 1991). In contrast, the XRT has at best a cadence of 2 s whereas the SXT collected data with a cadence as fast as 0.25 s.

The present study differs primarily in its method of analysis. Since the X-ray luminosity of the Sun tends to vary dramatically, and on short timescales, XRT uses an automatic exposure control function which adjusts the camera exposure "on the fly" to optimize the dynamic range of the images, i.e., to avoid saturation and also ensure not-too-dark images. As a result of this exposure variation, as well as periodic interruption of the image sequences by flare-patrol observations, the sampling cadence is almost never uniform. This non-uniformity impacts the analysis methodology, since typical Fourier and wavelet techniques generally tend to be less capable of accurately characterizing periodic variations of intensity in unevenly sampled data, especially when the variations have low signal-to-noise ratios (S/Ns). Our solution was to use the data to generate a Bayesian generalized Lomb–Scargle (BaGLS) periodogram. The Lomb–Scargle periodogram was first derived by Lomb (1976) and Scargle (1982). Its Bayesian variant was initially worked through by Bretthorst (2001) and is further developed in this work.

There are a variety of benefits to using this method. One of the main advantages is the expectation that the use of Bayes' theorem (1763) will suppress the noise more than the methods mentioned previously, allowing for a sharper determination of periodicities in X-ray light curves of coronal loops. In addition, the algorithm inherits the Lomb–Scargle method's relative insensitivity to the uneven sampling of the data to be analyzed.

Lastly, the periodogram generated for each light curve is a Bayesian posterior distribution rather than a power spectrum. As such, the peaks in the distribution directly represent the probability of a sinusoidal signal being present at a given period, allowing for a more natural comparison of peaks in the generated periodograms. This contrasts with a standard power spectrum, where the significance of each peak has to be calculated using a test like that developed by Fisher (1929) or its refined version made by Shimshoni (1971).

Section 2 will derive the equations for the BaGLS periodogram and describe methods for determining whether peaks in a given periodogram are actual detections or false positives. Section 3 compares the periodograms generated by Fourier, Lomb–Scargle, and BaGLS analyses. Section 4 will demonstrate the algorithm on synthetic data as well as signals injected into half-resolution XRT data. Section 5 applies the BaGLS code to a full resolution XRT data set to search for periodicities. Section 6 discusses the periods found in Section 5 and whether the BaGLS algorithm did genuinely detect intensity oscillations in coronal loops.

2. Formalism

The following derivation is largely drawn from the work of Bretthorst (2001) and Gregory (2005). The present study differs from those of Bretthorst and Gregory in its more complete treatment of the Bayesian evidence.

As shown by Bretthorst, the BaGLS periodogram takes as its only assumption (its prior) that the time series data comprise a single periodic signal, including an offset to the data and noise of all types (signal, systematic, etc.). The data set can be modeled as

Equation (1)

Equation (2)

where $d({t}_{i})$ is the data point taken at time ti, f is the signal frequency, A and B are the cosine and sine amplitudes, θ is a phase offset that will be described below, $Z({t}_{i})$ is an apodizing function, and $n({t}_{i})$ is the signal noise. The subscripts R and I refer to the two parts of the data, real and imaginary, respectively. Not all signals include both components, but they are included in the derivation for completeness.

Given this prior information I, and given the data $D\equiv \{d({t}_{1})...d({t}_{N})\}$, we are interested in the posterior probability of the frequency. Following the derivation given by Bretthorst (2001), the posterior distribution can be written as

Equation (3)

where the posterior distribution of the frequency is calculated by integrating over the joint posterior distribution of the amplitudes A and B as well as the standard deviation of the noise σ, which is taken to be Gaussian (an assumption that was later tested in the Hinode data and confirmed via a kurtosis excess analysis).

After factoring using Bayes' theorem and assuming logical independence of the frequency, amplitudes, and noise, the posterior distribution becomes

Equation (4)

To complete the posterior distribution, the form and amplitude of the evidence $P(D| I)$ must be determined by factoring it using Bayes' theorem, resulting in an expression that looks similar to the likelihood except integrated over the entire frequency range. This normalization ensures that the probabilities reported on the periodogram sum to 1.

The full posterior distribution reads

Equation (5)

Our initial assumption of the signal consisting of a single periodic function can be extended to say that the frequency and amplitudes are stationary as time progresses, allowing them to be assigned the uniform priors $P(f| I)=\tfrac{1}{{f}_{\max }-\,{f}_{\min }}=\tfrac{1}{F}$, $P(A| I)=\alpha $, and $P(B| I)=\beta .$ Note that the prior will be zero below fmin and above fmax.

The noise cannot be treated so simply, since there is no initial knowledge of its form. Instead, it is treated as a free parameter and taken to have a Jeffreys prior of $P(\sigma | I)=\tfrac{1}{\sigma }.$

Our posterior distribution is then reduced to

Equation (6)

From our knowledge about the variance of the noise, the likelihood is calculated as Gaussian. This gives the fully solvable posterior distribution

Equation (7)

where ${d}^{2}={{\rm{\Sigma }}}_{i=1}^{{N}_{R}}\,d{({t}_{i})}^{2}+{{\rm{\Sigma }}}_{j=1}^{{N}_{I}}\,{d}_{I}{({t}_{j}^{\prime })}^{2}$ and where R, I, C, and S are taken from Bretthorst (2001) and defined as

Equation (8)

Equation (9)

Equation (10)

and

Equation (11)

R(f) and I(f) take the role that is normally assumed by the real and imaginary parts of a weighted discrete Fourier transform over uniformly sampled data. C(f) and S(f) are the effective number of data items in the real and imaginary part of the measurement. Lastly, θ is used to ensure that the model functions are orthogonal despite having non-uniform sampling, and is defined as

Equation (12)

Trigonometric addition and double angle formulae further show that

Equation (13)

The integrals of the posterior distribution are solved by first completing the square and evaluating the Gaussian integrals of the amplitudes and a Gamma integral over σ.

After integrating over the amplitudes, the posterior distribution is

Equation (14)

with $h{(f)}^{2}\equiv \tfrac{R{(f)}^{2}}{C(f)}+\tfrac{I{(f)}^{2}}{S(f)}$.

Since d2 is independent of the frequency, it can be factored out of the numerator and denominator. Moreover, given that we are integrating over discrete frequencies, the integral in the numerator can be changed to a sum. This leaves us with a final, normalized posterior distribution of

Equation (15)

In the analysis of the half- and full-resolution XRT data (below), two simplifications to the posterior were made. The first is setting ${d}_{I}({t}_{j}^{\prime })=0$, since there is no imaginary component of the signal. The second arises from the assumed stationary nature of the signal amplitude (i.e., the signal does not decay), which allows us to set the apodizing function $Z({t}_{i})=1$. Note that neither of these changes the form of the posterior distribution, only aspects of the factors within.
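Under these simplifications (real-valued data, $Z({t}_{i})=1$), the posterior reduces to the standard single-sinusoid Bayesian periodogram of Bretthorst (2001). The following is a minimal sketch of that reduced form; the function name, the offset removal by mean subtraction, and the numerical details are our choices, not taken from the paper:

```python
import numpy as np

def bagls_periodogram(t, d, periods):
    """Single-sinusoid Bayesian periodogram (after Bretthorst 2001),
    assuming real-valued data (d_I = 0) and no apodizing (Z = 1)."""
    d = d - d.mean()                     # remove the constant offset
    N = len(d)
    d2 = np.sum(d ** 2)
    logp = np.empty(len(periods))
    for k, p in enumerate(periods):
        w = 2.0 * np.pi * t / p
        # Phase theta orthogonalizes the model functions despite uneven sampling
        theta = 0.5 * np.arctan2(np.sum(np.sin(2 * w)), np.sum(np.cos(2 * w)))
        c, s = np.cos(w - theta), np.sin(w - theta)
        R, I = np.sum(d * c), np.sum(d * s)    # projections of the data
        C, S = np.sum(c ** 2), np.sum(s ** 2)  # effective numbers of data items
        h2 = R ** 2 / C + I ** 2 / S           # sufficient statistic h(f)^2
        # Student-t form after marginalizing the amplitudes and the noise sigma
        logp[k] = -0.5 * np.log(C * S) + 0.5 * (2 - N) * np.log1p(-h2 / d2)
    prob = np.exp(logp - logp.max())
    return prob / prob.sum()                   # bins sum to 1
```

Applied to an unevenly sampled noisy sinusoid, this produces a single sharp peak at the period bin nearest the injected period, with the rest of the periodogram strongly suppressed.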

It should also be noted that the Bayesian method for generating periodograms can use prior information to place limits on the parameter being searched for (Gregory 2005). In a Fourier analysis, the upper and lower bounds on the periods to be searched are defined by the Nyquist limits, which are given by ${p}_{\min }\,=\,2\,{\rm{\Delta }}t$ and ${p}_{\max }\,=\,\tfrac{T}{2}$, where ${\rm{\Delta }}t$ is the average signal cadence and T is the total signal time. In this study, however, part of the Bayesian prior is using what was learned in earlier studies—in this case from McKenzie & Mullan (1997)—to set an upper bound on the periods to be searched at 100 s, reducing the amount of period-space that has to be searched by the algorithm. As such, the range of periods that will be searched is [pmin, 100] s, with no preference for any period within that range. This also allows for a finer, more precise search of the period, since the number of bins, set as $N=\tfrac{T}{2\,{\rm{\Delta }}t}-1$, is not reduced.
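The search grid described above can be written down directly; the bin count and limits follow the text, while the function name is ours:

```python
import numpy as np

def period_grid(t, p_max=100.0):
    """Periods to search: p_min = 2*dt from the Nyquist limit, p_max capped
    at 100 s by the prior, with N = T/(2*dt) - 1 bins as in the text."""
    T = t[-1] - t[0]                  # total observation time
    dt = np.mean(np.diff(t))          # average cadence
    n_bins = int(T / (2.0 * dt)) - 1
    return np.linspace(2.0 * dt, p_max, n_bins)
```

Because the bin count is kept at the full Nyquist value while the searched range is narrowed to [pmin, 100] s, the grid spacing is finer than a standard Fourier frequency grid over the same data.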

A further factor in the calculation of periodicities is the removal of secular trends from the data, so that only the residuals are analyzed. Such trends are not accounted for in the BaGLS algorithm, as they can vary greatly and in unknown ways across the light curves. The trend is removed by subtracting a running mean from each light curve, computed over spans of 31 time steps. This acts as a high-pass filter, attenuating periods longer than (average light curve cadence) × 31, which in this study is a minimum of 62 s.
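One plausible implementation of this detrending step is a centered moving average (the paper's "independent spans of 31 time steps" could alternatively mean non-overlapping block means; the reading below is an assumption):

```python
import numpy as np

def detrend(lc, span=31):
    """High-pass filter a light curve by subtracting a running mean over
    `span` samples, attenuating periods longer than span * cadence."""
    trend = np.convolve(lc, np.ones(span) / span, mode='same')
    return lc - trend
```

Note that `mode='same'` leaves edge bias in the first and last `span // 2` samples, where the averaging window is truncated.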

Moreover, no smoothing or windowing (such as a Hanning function) is applied to the data. These procedures are designed to suppress noise in a signal before processing and actually impede the operation of the BaGLS algorithm. While a smoothing/windowing function could be built into the prior, this adds an unnecessary level of complexity to the algorithm.

While this model has many benefits, it also has significant drawbacks, all of which revolve around the prior: the assumption that the input signal contains a single sinusoid. False positives can occur when there are transient brightenings (e.g., microflares), which are reported as phenomena with periods spanning the entire light curve and which can also suppress any genuine long-lived periodicities in the signal. High peaks will also occasionally arise simply from fluctuations in the noise. It is also unknown whether any detected periods are actually produced by sine curves or by some other periodic function.

There are, however, ways to mitigate these problems. Light curves with transient brightenings can either be corrected or simply dismissed as candidates to analyze. The shape of a periodogram as a whole can provide insight as to whether a peak is the result of noise or not. Models can be developed that use non-sinusoidal periodic priors to generate periodograms and then compared to the current algorithm (though this approach is not taken in this study).

One other weakness of the algorithm is that it only reports a single periodicity in a signal, treating any others as noise, essentially giving false negatives. While an algorithm could be built to search for multiple periodicities, such an approach is beyond the scope of an initial attempt to build a Bayesian method to search for periodic signals in coronal loops. Moreover, given the sparsity of the data in the present study—the XRT light curves analyzed herein have only at most 280 data points—restricting the model to only a single sine is a reasonable safeguard against overfitting the data.

The primary way the generated periodograms will be analyzed will be with the Bayesian credible interval statistic, supported by the peak probability height and the median absolute deviation of each periodogram.

The Bayesian credible interval is analogous to the better-known significance level, though it answers a different question. A significance level rejects, to some confidence, the null hypothesis that the data are merely noise. The credible interval, in contrast, asserts that the parameter of the model lies within a range of values with some probability. In this study, a 99.5% credible interval gives the set of period bins that have a 99.5% probability of containing the actual period of the signal.

This provides a Bayesian means of discriminating between peaks in the periodograms that reside in the noise and those that are actual detections of periodicity. A genuine signal, for example, might have a low probability peak but a small credible interval, indicating that the detected period fell on the border of two period bins. On the other hand, while it is possible for noise to generate a sharp peak in the probability, noise very rarely generates a correspondingly small credible interval (see Figure 17).

The credible interval is calculated by sorting every value on the periodogram (and corresponding period bin) from greatest to least. The probabilities are added until they exceed the predetermined threshold (e.g., 99.5%). The set of period bins that include all the points added together form the credible interval.
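The interval construction just described takes only a few lines; this sketch (names are ours) returns the indices of the bins that make up the interval:

```python
import numpy as np

def credible_interval(prob, level=0.995):
    """Smallest set of period bins whose summed probability exceeds `level`,
    built greedily from the highest-probability bins downward."""
    order = np.argsort(prob)[::-1]        # bins from most to least probable
    cum = np.cumsum(prob[order])
    n = np.searchsorted(cum, level) + 1   # bins needed to reach the level
    return np.sort(order[:n])             # bin indices forming the interval
```

A sharp detection yields an interval only one or two bins wide, while noise spreads the interval across much of the periodogram.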

In the present study, the median absolute deviation (MAD) is used to study the statistical dispersion of the non-peaked regions of the data. It is to the median as the standard deviation is to the mean and is calculated as $\mathrm{MAD}=\mathrm{median}(\mathrm{abs}({X}_{i}-\mathrm{median}({X}_{i})))$, where Xi are the values comprising the periodogram. The MAD score was chosen over the standard deviation to study the "grass" of the periodograms because it is a statistic more robust to outliers—in this case the peaks in the periodogram. It allows for a clearer understanding of the variations present in the periodogram and thus the level of noise that the BaGLS algorithm has been unable to suppress.
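The MAD statistic defined above, written as code:

```python
import numpy as np

def mad(x):
    """Median absolute deviation of the periodogram values: a dispersion
    measure robust to the outlying probability peaks."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x)))
```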

3. Comparison of the Fourier, Lomb–Scargle, and BaGLS Periodograms

With the BaGLS algorithm developed, the first test is to compare it to the Fourier power spectrum and the standard Lomb–Scargle periodogram. A sinusoidal signal with a period of 42 s and an S/N of 0.3 was generated for this test. Each time step is 2 s plus a random offset between 0 and 2 s to simulate non-uniform sampling.

As seen in Figure 1, the Fourier transform does poorly at finding a period in non-uniformly sampled data. The Lomb–Scargle periodogram finds the correct period, though it retains some noise and peaks below the 99.5% significance level. In contrast, the BaGLS posterior distribution finds the correct period while having a 99.5% credible interval of one period bin. As stated in Section 1, the BaGLS posterior does a far better job of suppressing the noise, essentially having only a single sharp peak in its periodogram.

Figure 1. A comparison of the Fourier power spectrum, Lomb–Scargle periodogram, and BaGLS posterior distribution. The vertical red line represents the actual period in the sinusoid. The dotted horizontal line in the Fourier power spectrum and Lomb–Scargle periodogram is the 99.5% significance level. The two short red lines in the BaGLS posterior distribution are the 99.5% credible interval. The significance levels are calculated as described in the text. The credible interval is found using the method outlined in Section 2.


The significance levels in Figure 1 are calculated using the method of Shimshoni (1971), which is an extension of the Fisher significance test in harmonic analysis that gives a ranking to peaks in a periodogram rather than a single cutoff level below which results are discarded. Shimshoni demonstrated that this method finds significance in all periods not rejected by a Fisher test as well as some that are.
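A comparable Lomb–Scargle check can be run with SciPy's `lombscargle` (which takes angular frequencies). The snippet below is a sketch with an assumed synthetic signal; for robustness we use S/N = 1 here rather than the harder S/N = 0.3 case shown in Figure 1:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.arange(300) * 2.0 + rng.uniform(0.0, 2.0, 300)   # uneven 2-4 s sampling
# Unit-amplitude sine, noise sigma chosen so S/N = (A/sqrt(2))/sigma = 1
d = np.sin(2 * np.pi * t / 42.0) + rng.normal(0.0, 1.0 / np.sqrt(2), 300)
d -= d.mean()

periods = np.linspace(10, 100, 400)
power = lombscargle(t, d, 2 * np.pi / periods)   # angular frequencies
best = periods[np.argmax(power)]                 # peak lands near 42 s
```

Even with uneven sampling, the Lomb–Scargle peak falls at the injected period; the BaGLS posterior goes further by suppressing the surrounding "grass" and attaching a probability to the peak.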

4. Synthetic Data Tests

To refine the BaGLS algorithm and better understand its output, extensive tests were done. At first, these were on fully simulated signals: sine curves with noise and secular trends added. These were complemented by "hound and hare" searches to uncover synthetic signals injected into half-resolution XRT data. In the end, criteria based on the 99.5% credible interval of each periodogram, supported by the probability peak and MAD score, were established to denote which light curves in the data have oscillations attributable to solar phenomena.

4.1. Fully Simulated Data

The first tests of the fully simulated data were designed to test what happens when two sinusoidal periodicities are present in a signal. In these cases, we shall deem a period to have been "detected" when the 99.5% credible interval corresponds to the periodicity with the highest amplitude.

The periods used to generate the fully simulated data were T = [15.7, 24.0, 33.3, 42.0, 56.5](s).

Each signal is generated as follows:

Equation (16)

where A and B are the amplitudes of the two sinusoidal signals, never exceeding 2, ω is the angular frequency defined by $\omega =2\pi /T$, poly is either a second- or third-order polynomial, and noise is Gaussian noise centered at zero. The S/N is given by ${\rm{S}}/{\rm{N}}=\tfrac{[(A\,\mathrm{or}\,B)/\sqrt{2}]}{\sigma }$, where $A\,\mathrm{or}\,B$ is the larger of the two amplitudes and σ is the standard deviation of the Gaussian distribution. The simulated cadence is 2 s plus a random offset of $[0.0,0.02,0.04,...,2.0]$ s, simulating uneven sampling.
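The recipe above can be sketched as follows; the polynomial trend coefficient and the default amplitudes are illustrative choices, not the paper's values:

```python
import numpy as np

def simulate(T1, T2=None, A=1.5, B=0.75, snr=0.3, n=300, seed=0):
    """Synthetic light curve: one or two sines plus a polynomial trend and
    Gaussian noise, on an uneven 2 s (+ random offset) cadence."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) * 2.0 + rng.uniform(0.0, 2.0, n)  # uneven sampling
    d = A * np.sin(2 * np.pi * t / T1)
    if T2 is not None:
        d += B * np.sin(2 * np.pi * t / T2)
    d += 1e-5 * t ** 2                                 # P2 secular trend (assumed scale)
    # Noise sigma from the S/N definition, using the larger amplitude
    sigma = (max(A, B if T2 is not None else 0.0) / np.sqrt(2)) / snr
    d += rng.normal(0.0, sigma, n)
    return t, d
```

Calling `simulate(42.0, T2=24.0)`, for instance, yields a two-period light curve of the kind analyzed in Figure 2.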

Figure 2 demonstrates some of the strengths and weaknesses of the BaGLS algorithm. At an S/N of 0.3, the algorithm still produces sharp peaks in the periodogram. Also notice that the 99.5% credible intervals in each case contain the inserted period and are at most four period bins wide, indicating a small (and correct) region containing the injected period. The fact that the peak in the periodogram of the third curve is only 58.5% probability shows why the credible interval is a useful statistic: it is robust against a slight smearing of any peaks in the periodogram.

Figure 2. A Bayesian analysis of three synthetic signals with S/N = 0.3. The first column contains the initial signal with the injected period, noise and type of secular trend (P2 and P3 are second- and third-order polynomials respectively). The second column contains the standard BaGLS periodogram. The third column is the same plot with the y-axis being shown on a log scale. Lastly, the second two rows have a signal with two periods injected. The period emphasized with asterisks was injected with twice the amplitude of the unemphasized period. The vertical red lines are the periodicity with the highest amplitude in each signal.


In contrast, Figure 2 also shows how the BaGLS algorithm can be fooled if two periodicities are present in the same light curve. In both the second and third signals, which each contain two periods, the periodogram only peaks at the period with the greater amplitude. The other period is essentially treated as noise.

The third row also demonstrates that multiple periods can cause the probability peak in the periodogram to shift away from either of the actual periods. The peak reported is two period bins away from the true signal, rather than at the closest period bin.

Keep in mind this is an example of what can go wrong, not what necessarily will. Table 1 is the result of analyzing the signals from Figure 2 with 1000 different noises, all randomly generated with S/N = 0.3, to give 1000 unique light curves for each combination of periods shown. What one sees is that the BaGLS algorithm does in fact produce probability peaks that, much more often than not, correspond to the injected period with the higher amplitude. This agrees with our assessment of the problems of analyzing a signal with multiple periodicities from Section 2 and reaffirms the need to carefully select the data to be analyzed.

Table 1.  Values Generated from Analyzing the Curves in Figure 2 with 1000 Different Unique Noises

S/N = 0.3; Trend: x²

Injected Signal Periods (s)             15.7    *24.0* + 33.3    42.0 + *59.5*
Detected Period Mean (s)                16.8    26.5             56.6
Detected Period Std. Dev. (s)           5.3     7.4              5.4
Detected Period Median (s)              15.8    24.3             58.9
Detected Period MAD (s)                 0.0     0.7              0.7
Detected Probability Mean (%)           90.8    62.2             40.5
Detected Probability Std. Dev. (%)      19.6    28.3             11.7
Detected Probability Median (%)         99.8    62.4             40.0
Detected Probability MAD (%)            0.2     25.8             7.1

Note. The values presented here were generated from analyzing the curves in Figure 2 with 1000 different unique noises (all with S/N = 0.3). They are the mean, standard deviation, median, and median absolute deviation of the detected periods and their respective probabilities. The period emphasized with asterisks was injected with twice the amplitude of the unemphasized period.


One point to note about Table 1 is that lower probability peaks tend to be seen at longer periods. To test this systematically, the BaGLS code was run over the full period range to be analyzed for oscillations in the solar corona. One thousand sine curves, each with different noises (all at S/N = 0.3), were generated for every period between [4, 100] s. For each period in that range, we counted the number of times the correct peak was detected (to within one period bin). The results are summarized in Figure 3. The overall linear trend seen in Figure 3 indicates that the BaGLS code finds the inserted period correctly more often than a random periodicity in the noise. The fact that the algorithm finds the correct period less reliably as the injected period gets longer is an indication that as the number of period cycles goes down in a sample, the code becomes less sensitive. This is an expected weakness, one shared by Fourier and wavelet techniques especially for low S/N.

Figure 3. A scatter plot showing the probability peaks of 1000 simulated signals over all the periods between 4 and 100 s, inclusive. The x- and y-axes are the injected period and the mode of the peak-probability period over the 1000 tests at each injected period, as detected by the BaGLS algorithm. The color indicates how many times the correct period was found. The light black line is abscissa = ordinate, the line on which we expect the points to fall if the BaGLS algorithm is working correctly.


4.2. Real Data Injected with Simulated Light Curves

The second test of the BaGLS algorithm was to extract a synthetic light curve injected into real data using a hound-and-hare search, where one of us (D.M.) inserted some number of synthetic light curves into a half-resolution Hinode XRT data set and the other (B.S.) searched for and isolated them using the BaGLS algorithm. This allowed for a test of the BaGLS algorithm on unknown periods and S/Ns.

The sample data set, Sample 1, was chosen from Hinode XRT Observation Program #129C: MHD Wave Heating (Thin Filters). It focuses on AR 10293, an active region from 2006 November 13 depicting coronal loops for the full set of the 280 images used in the movie. The sample goes from 10:01:20.352 to 10:10:48.541 UTC, with an average cadence of 2 s. The image resolution is 2'' per pixel. The filter used for this sample is the Al-mesh analysis filter.

4.2.1. Synthetic Test 1

Synthetic Test 1, the first injection into Sample 1, consisted of a single simulated light curve with a period of 42 s (Figure 4). A wavelet analysis was applied to the pixel pre-injection to check for any periodic signals already present in the data that could pollute the test. None were found, and this was confirmed by the BaGLS algorithm.

Figure 4. The top sequence of plots is the original light curve in (170, 175) and its respective BaGLS periodogram in standard and log scale. With no injected signal, the BaGLS algorithm detects a period of 7.6 s with an 8.1% probability of being a true signal. In contrast, when the simulated signal is injected (with no obvious difference in the light curve), the BaGLS code finds a period of 41.9 s (the closest period bin to 42 s) with a 99.6% probability of being a true signal and a 99.5% credible interval of one period bin.


To isolate this signal from the rest of the data set, which for this purpose can be treated as noise relative to the injected signal, we assumed that the periodogram of the simulated signal generated by the BaGLS code would have a probability peak higher than that of any other signal. The algorithm was then run over every pixel with a mean intensity greater than 1000 DN/s/pixel (focusing on the brighter parts of the active region). The probability peaks of each periodogram were then recorded and ranked. The periodogram with the highest probability was asserted to be generated from the synthetic light curve. This was confirmed by D.M.

After the discovery of the synthetic light curve by ranking the probability peaks of the periodograms, the periodograms were then ranked by their lowest median absolute deviation, looking for the periodogram that had the most suppression of the non-peaked regions. This method concurred with the probability ranking, with the synthetic light curve having a MAD nine orders of magnitude smaller than any other periodogram.

4.2.2. Synthetic Test 2

Three signals were injected for Synthetic Test 2. Light curves in Sample 1 were shuffled (by randomly swapping values in indices until each had been swapped once) to remove any periodicity (rather than simply finding a pixel with no periodicity seen with a wavelet, as was done in Synthetic Test 1). One sine curve with a varying period and S/N was then inserted into each of three selected pixels. The first two pixels were searched for with the MAD ranking method from Synthetic Test 1, chosen because of the nine-order-of-magnitude separation between the synthetic and non-synthetic light curves. It was assumed that the two periodograms with the lowest MAD scores were a result of the simulated light curves. The pixels were (x, y) = (83, 158) and (84, 170) with periods of 74 s and 64 s respectively (Figures 5 and 6). These were confirmed by D.M. to be two of the three synthetic pixels.

Figure 5. Light curve and periodogram of injected pixel at (83, 158).

Figure 6. Light curve and periodogram of an injected pixel at (84, 170).


The final pixel was found by assuming that if a synthetic signal was binned with a non-synthetic signal, the real data would drown out the injected light curve. To achieve this, the images of Synthetic Test 2 were binned 2 × 1 along the x-axis and then separately 1 × 2 along the y-axis (Figure 7).
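The 2 × 1 binning step can be sketched with a reshape-and-sum; the axis convention (time, y, x) and the summing (rather than averaging) of neighbors are assumptions here:

```python
import numpy as np

def bin_x_by_2(cube):
    """Sum adjacent pixel columns of an image cube shaped (time, y, x),
    dropping a trailing odd column. Swap the last two axes to bin in y."""
    nt, ny, nx = cube.shape
    nx2 = nx // 2
    return cube[..., :2 * nx2].reshape(nt, ny, nx2, 2).sum(axis=-1)
```

Running the periodogram search on both binned cubes and on the original then flags pixels whose periods vanish under either binning.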

Figure 7. A visual representation of the binning procedure used in Synthetic Test 2. To isolate the final synthetic pixel of Synthetic Test 2, each axis was separately binned by a factor of two. The resultant images were analyzed by the BaGLS code. If a period present in the original image did not show up in either of the two binnings (the intersection of the cyan and magenta lines), that pixel was flagged for further analysis.


A search was then devised looking for periods that existed in the original data set but not the binned ones. A pixel had to show up in the original search but in neither of the two previous searches that binned the data. This left us with 535 pixels out of 13,648 to search through.

To further isolate the synthetic light curve, its probability peak was assumed to be above 0.5, the ratio of the highest peak to the second highest peak to be greater than 3.5, and the ratio of the median to the MAD score of each periodogram to be greater than 3.5. These values were selected because they were the only three statistics shared by the previously discovered synthetic light curves. This brought the number of potential synthetic light curves down to 12.
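These three cuts can be expressed as a simple filter (a sketch; `passes_synthetic_cuts` is a hypothetical name, operating on a normalized periodogram `prob`):

```python
import numpy as np

def passes_synthetic_cuts(prob):
    # Three cuts shared by the recovered synthetic signals:
    # peak probability above 0.5, peak-to-second-peak ratio above 3.5,
    # and median-to-MAD ratio above 3.5.
    peak = prob.max()
    second = np.sort(prob)[-2]
    median = np.median(prob)
    mad = np.median(np.abs(prob - median))
    return bool(peak > 0.5
                and peak / second > 3.5
                and median / mad > 3.5)
```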

A last assumption was made, based on the knowledge that this pixel would be significantly weaker than the previous two. Of the 12 remaining pixels, the one with the synthetic data would have the lowest maximum power. That pixel was (x, y) = (165, 168), and was confirmed to be correct (Figure 8).

Figure 8. Light curve and periodogram of an injected pixel at [165, 168].

The trials needed to find this third light curve reflect the sensitivity limit of the BaGLS algorithm. The injected signal had an S/N of 0.2, compared to the minimum S/N of 0.4 for the other injected signals in Synthetic Tests 1 and 2. As was first observed in Section 4.1 and as will be described below, the algorithm's sensitivity limit is at best an S/N of ∼0.3. Thus a light curve with S/N 0.2 is very difficult to distinguish from random noise.

4.2.3. Re-analysis of Synthetic Tests 1 and 2

Unsatisfied with this ad hoc method of finding the synthetically injected periodic signals, we devised a different method to isolate the synthetic light curves from the others. Instead of assuming anything about the powers or MAD scores of the synthetic data, one can ask whether the periodicities in each light curve are maintained in time. The assumption is that the injected sine signal persists through the whole ten minutes with the same period, whereas any periodicity in the other light curves would not.

As such, each light curve was cut into four parts (∼150 s each, short enough to still find periods of ∼60 s). Given the original period of each pixel and the four new periods, one can ask whether the average of the four new periods stays within round-off error of the original period. Only the synthetic signals met this criterion.
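The stability check can be sketched as follows (a sketch only: `fft_period` is a hypothetical stand-in for the actual period search, and the 1% tolerance stands in for "round-off error"):

```python
import numpy as np

def fft_period(time, flux):
    # Stand-in period finder: the period of the strongest FFT peak.
    dt = time[1] - time[0]
    freqs = np.fft.rfftfreq(flux.size, dt)[1:]
    power = np.abs(np.fft.rfft(flux - flux.mean())[1:])
    return 1.0 / freqs[power.argmax()]

def period_is_stable(time, flux, original_period, nsplit=4, rtol=0.01):
    # Cut the light curve into nsplit pieces and test whether the
    # average of the piecewise periods reproduces the original one.
    sub = [fft_period(t, f)
           for t, f in zip(np.array_split(time, nsplit),
                           np.array_split(flux, nsplit))]
    return bool(np.isclose(np.mean(sub), original_period, rtol=rtol))
```

A persistent sine passes the check, while a signal whose period changes partway through the observation does not.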

4.2.4. Synthetic Test 3

The third test of synthetic signals injected into Sample 1 involved a number of simulated signals (unknown to B.S.) at a variety of periods and amplitudes, as a final test of the S/N sensitivity of the BaGLS algorithm. The methods examining the probability peaks, median absolute deviations, and periods of the split data were repeated, finding four synthetic pixels. The binning method was rejected as being too arbitrary. This left an unknown number of pixels for the method to uncover.

The periodograms were then ranked by the size of their 99.5% credible intervals. In contrast to ranking periodograms by noise suppression via MAD scores, ranking by credible interval measures the concentration of probability in each periodogram. While this did not aid in isolating the synthetic signals, it acted as a way to compare the synthetic signals to the other light curves in Sample 1 and to find those signals that should be carried forward into the wave heating analysis.

After the full list of injected signals was released to B.S., it was decided that periodograms with a credible interval of three period bins or fewer would be selected for the wave heating analysis. This cutoff was chosen because it included not only the four signals isolated with the techniques above but also a fifth injected sine signal, while still allowing for peaks that are slightly smeared in their respective periodograms.

Using these methods, a lower limit on the S/N that can be reliably detected was established for this data set. The injected periods and S/Ns as inserted by D.M. are displayed in Table 2. From the analysis, it was determined that the BaGLS algorithm can reliably detect sinusoidal periodicities with S/N > 0.354 if the periods are less than 42.0 s. This roughly agrees with the tests on fully simulated data in Section 4.1.

Table 2.  The Coordinates, Injected Periods, Detected Periods, S/N, Probability Peaks, and 99.5% Credible Intervals of the Sinusoidal Signals Injected into the Third Synthetic Data Test of Sample 1

X Y Injected Period (s) Detected Period (s) S/N Peak (%) 99.5% CI (bins)
82 130 75.0 76.9 0.35 16.706 93.0
64 126 64.0 64.3 0.35 39.349 6.0
127 102 42.0 41.9 0.35 96.328 3.0
78 135 31.4 31.4 0.49 100.0 1.0
75 158 21.0 20.9 0.71 100.0 1.0
186 168 16.0 16.0 0.71 100.0 1.0
62 172 11.0 11.1 1.06 100.0 1.0


Detections of actual periodicities in Sample 1 will be discussed in Section 5.

5. Data Selection/Signal Processing

Two data sets were chosen for the full wave heating analysis. The first, hereafter Data 1, was from XRT Observation Program #129C: "MHD Wave Heating (Thin Filters)," focusing on the same active region AR 10293 as Sample 1. There are 80 images with a cadence averaging 3 s and varying by up to 1 s, going from 11:00:24.763 to 11:04:21.782 UTC. The second data set, Data 2, is from the same observing run and comprises 223 images with a cadence averaging 6 s and varying by up to 4 s, going from 11:04:27.284 to 11:26:40.725 UTC. (The 3 s cadence run did not continue as a result of a data buffering issue on board Hinode.) Both Data 1 and Data 2 have an image resolution of 1'' per pixel (full resolution for XRT) and use the same filter, the Al-poly analysis filter.

The XRT data are conditioned before analysis via the SolarSoft IDL program xrt_prep (Freeland & Handy 1998; Kobelski et al. 2014), which includes steps to decompress the data, remove dark current, normalize the exposures, correct for alignment jitter, and calculate systematic uncertainties. Image drift was calibrated with procedures detailed in Yoshimura & McKenzie (2015). Lastly, a running mean was subtracted from every light curve analyzed, as described in Section 2 (Figure 9).

This simultaneously removed secular trends and acted as a high pass filter, attenuating periods greater than 186 s for Data 1 and 372 s for Data 2. As our analysis only studied periods up to 100 s, this attenuation did not limit our period searching capabilities.
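The detrending step can be sketched as below (a sketch assuming an odd boxcar window of ~61 samples, consistent with the quoted ~186 s cutoff at the 3 s cadence of Data 1; the exact window used by the authors is not stated):

```python
import numpy as np

def subtract_running_mean(flux, window=61):
    # Subtract an unsmoothed running (boxcar) mean.  Periods much
    # longer than window * cadence are attenuated, while shorter
    # periodicities pass through, so this acts as a high pass filter
    # that also removes secular trends.
    kernel = np.ones(window) / window
    trend = np.convolve(flux, kernel, mode="same")
    # Renormalize the edges, where only a partial window is available.
    norm = np.convolve(np.ones_like(flux), kernel, mode="same")
    return flux - trend / norm
```

Subtracting the mean (rather than dividing by it) keeps the result in the same intensity units as the original light curve.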

Figure 9. A demonstration of the running mean algorithm. It subtracts off secular trends while maintaining the higher-frequency periodicities in the signal. The lack of smoothing is intentional, as discussed in Section 2.

Signals were rejected if they contained microflares, which show up in a BaGLS periodogram as a period, despite the event being transient (Figure 10). This filter was based on the active region transient brightening detection method developed by Kobelski et al. (2014).

Figure 10. An example of how microflares can confuse the period searching analysis. There may be an underlying periodicity in the signal but it is considered as noise in both the BaGLS and wavelet spectra. The wavelet spectrum is explained in Section 5.1.

Lastly, both Data 1 and Data 2 had a portion of their images copied into the lower right corner of the data set, where 36 control pixels of various periods and amplitudes were inserted in the same way as in Synthetic Test 3 (Tables 3 and 4). Any synthetic signals detected by the BaGLS algorithm were then cross-referenced with a full record of the injected light curves to establish an S/N detection limit of the BaGLS algorithm in the respective data sets. The S/N is defined as the amplitude of the sine wave divided by the Gaussian noise at each point.
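Under that definition, a control light curve can be generated as in the following sketch (`inject_sine` is a hypothetical helper name; the authors' actual noise realizations and injection code are not specified):

```python
import numpy as np

rng = np.random.default_rng(7)

def inject_sine(time, noise_sigma, period, snr, rng=rng):
    # S/N is defined as the sine amplitude divided by the Gaussian
    # noise sigma at each point, so amplitude = snr * noise_sigma.
    amplitude = snr * noise_sigma
    return (amplitude * np.sin(2.0 * np.pi * time / period)
            + rng.normal(0.0, noise_sigma, size=time.size))
```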

Table 3.  The Coordinates, Injected Period, S/N, Detected Period, and Probability Peak of all Synthetic Pixels with a 99.5% Credible Interval Less Than Three Period Bins for Data 1

X Y Injected Period (s) S/N Detected Period (s) Probability
431 40 22.5 0.71 26.3 1.000
431 51 30.0 0.71 28.9 0.697
441 51 30.0 0.85 28.9 1.000
451 51 30.0 1.06 28.9 0.817
421 61 45.0 0.49 44.1 0.923
431 61 45.0 0.71 44.1 0.977
441 61 45.0 0.85 44.1 0.948
451 61 45.0 1.06 44.1 0.928
441 71 60.0 0.85 59.4 0.847
451 71 60.0 1.06 59.4 0.956
451 81 75.0 1.06 72.1 0.763

Note. In this data set, the BaGLS algorithm detected S/N > 0.71 if the periods are less than 45.0 s and S/N > 0.49 if the period approximates 45.0 s. Though the lower S/N limit is only half as good as that found in Sample 1 and Data 2, that is likely a result of having only a quarter as many data points.


Table 4.  The Coordinates, Injected Period, S/N, Detected Period, and Probability Peak of All Synthetic Pixels with a 99.5% Credible Interval Less Than Three Period Bins for Data 2

X Y Injected Period (s) S/N Detected Period (s) Probability
410 34 15.0 0.35 24.9 0.995
410 43 22.5 0.35 22.5 1.000
420 43 22.5 0.49 22.5 1.000
430 43 22.5 0.71 22.5 1.000
440 43 22.5 0.85 22.5 1.000
450 43 22.5 1.06 22.5 1.000
400 53 30.0 0.21 29.8 0.998
420 53 30.0 0.49 29.8 1.000
430 53 30.0 0.71 29.8 1.000
440 53 30.0 0.85 29.8 1.000
450 53 30.0 1.06 29.8 1.000
410 63 45.0 0.35 45.1 0.978
420 63 45.0 0.49 45.1 1.000
430 63 45.0 0.71 45.1 1.000
440 63 45.0 0.85 45.1 1.000
450 63 45.0 1.06 45.1 1.000
410 73 60.0 0.35 60.4 0.842
420 73 60.0 0.49 59.6 0.908
430 73 60.0 0.71 59.6 1.000
440 73 60.0 0.85 60.4 0.987
450 73 60.0 1.06 59.6 0.965
420 83 75.0 0.49 74.2 0.618
430 83 75.0 0.71 75.0 0.735
440 83 75.0 0.85 75.0 0.991
450 83 75.0 1.06 75.0 1.000

Note. In this data set, the BaGLS algorithm detected S/N > 0.35 if the periods are less than 60.0 s and S/N > 0.21 if the period approximates 30.0 s. It is worth noting that only one pixel with an injected signal close to the Nyquist limit (12 s period) was found. This loosely agrees with studies of synthetic data, which showed that the BaGLS algorithm tends to start consistently finding periods at approximately twice the Nyquist limit.


In addition, the detected injected light curve with the lowest mean intensity (measured in DN/s/pixel) was used as the lower intensity limit for what constituted a bright enough part of the active region for the BaGLS code to analyze. This is not a limitation of the BaGLS algorithm, but rather a choice made to limit the period searching to brighter parts of the active region to focus our search on periodicities in coronal loops. The minimum intensity analyzed was 620 DN/s/pixel for Data 1 and 579 DN/s/pixel for Data 2. This method was also used on Sample 1 to normalize it with the other two data sets, placing its minimum mean intensity to be analyzed at 2470 DN/s/pixel.

5.1. Wavelet Analysis

As a further way to analyze the light curves in Data 1 and Data 2, a wavelet analysis was introduced. The method used is that of Torrence & Compo (1998), which uses a mother wavelet (in this case a Morlet wavelet with ${\omega }_{0}=6$) to extract period information from a signal locally, mapping out any changes of the signal's period through time. This was not done during the synthetic testing so that the analysis would not be biased by non-Bayesian period searching techniques.

This technique is primarily used to cross-check the BaGLS algorithm and to further confirm periodicities in light curves. For every light curve, the period detected by the BaGLS code is analyzed by the wavelet code, returning how many period cycles are detected at that period at the 99.5% significance level.

Wavelets were also used to check for changing and multiple periodicities. A wavelet analysis was used rather than a Bayesian approach to provide a bridge between the methods used in this paper and more familiar forms of statistical analysis.

Considering that the wavelet analysis makes minimal assumptions about the shape of the signal, it is more susceptible to noise than the BaGLS algorithm; light curves that do not show period cycles at the BaGLS period are therefore not rejected outright but treated with more reservation in the analysis. This is supported by the fact that while there are wavelet spectra that do not have a 99.5% significance contour at the period specified by the BaGLS algorithm, there are no power spectra whose contours conflict with the period detected by the BaGLS code. Note that this wavelet analysis can only be performed on data with a nearly constant cadence; without this condition, the Torrence and Compo wavelet algorithm tends to break down. Some examples of spectra are detailed in Figures 11–13.
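A minimal Morlet power spectrum in the spirit of Torrence & Compo (1998) can be sketched as follows (a simplified sketch: it omits the cone of influence and the significance testing used in the actual analysis):

```python
import numpy as np

def morlet_power(flux, dt, periods, omega0=6.0):
    # Minimal Morlet wavelet power spectrum: multiply the FFT of the
    # signal by the (real, analytic) Fourier transform of the Morlet
    # wavelet at each scale and transform back.
    n = flux.size
    fhat = np.fft.fft(flux - flux.mean())
    ang = 2.0 * np.pi * np.fft.fftfreq(n, dt)  # angular frequencies
    # Conversion from Fourier period to wavelet scale for Morlet.
    fourier_factor = 4.0 * np.pi / (omega0 + np.sqrt(2.0 + omega0 ** 2))
    power = np.empty((len(periods), n))
    for i, p in enumerate(periods):
        s = p / fourier_factor
        # (ang > 0) is the Heaviside factor making the wavelet analytic.
        psi_hat = (np.pi ** -0.25 * np.sqrt(2.0 * np.pi * s / dt)
                   * np.exp(-0.5 * (s * ang - omega0) ** 2) * (ang > 0))
        power[i] = np.abs(np.fft.ifft(fhat * psi_hat)) ** 2
    return power
```

Each row of the returned array maps the power at one trial period as a function of time, which is what allows changing periodicities to be seen.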

Figure 11. One of the synthetic signals in Data 2. Notice that the wavelet spectrum does not show a consistent periodicity at the injected period, while the BaGLS algorithm has a probability peak at the injected period of 1.0 (to the double precision limit of the code). The red bars in the BaGLS periodogram indicate the 99.5% credible interval.

Figure 12. A detection in Data 2. Similar to Figure 11, the BaGLS algorithm asserts a detection with a 99.495% probability while the wavelet analysis only shows significance contours at the detected period for part of the light curve.

Figure 13. A light curve from Data 2 where the wavelet analysis and the BaGLS code do not agree with each other. The BaGLS code asserts a probability of 99.961% of the light curve having a period of 19.3 s, while the wavelet code shows no significance contours. As noted above, this does not reject the pixel, but forces a more careful treatment of any periodicity found in this light curve by the BaGLS algorithm.

5.2. Results of the BaGLS Period Searching

In the manner described in Section 2, the BaGLS algorithm has analyzed 21,865 light curves in Data 1, 27,460 light curves in Data 2, and 12,882 light curves from Sample 1. Figures 14–16 are maps showing the locations of the pixels selected for further study from Sample 1, Data 1, and Data 2 respectively. Tables 5–7 list the coordinates, periods, probability peaks, wavelet cycles at the 99.5% significance level, and 99.5% credible intervals of the pixels selected for further study from Sample 1, Data 1, and Data 2, respectively.

Figure 14. The pixels from Sample 1 containing light curves to be analyzed. The color of each pixel denotes the period of the light curve found by the BaGLS algorithm.

Figure 15. The pixels from Data 1 containing light curves to be analyzed. The color of each pixel denotes the period of the light curve found by the BaGLS algorithm.

Figure 16. The pixels from Data 2 containing light curves to be analyzed. The color of each pixel denotes the period of the light curve found by the BaGLS algorithm.

Table 5.  The Coordinates, Periods, Probability Peaks, and Wavelet Cycles at the 99.5% Significance Level and 99.5% Credible Intervals for the 11 pixels Selected for Further Study in Sample 1

X Y Period (s) Probability Cycles (99.5%) CI (99.5%)
146 118 5.5 0.997 0.0 1
69 135 11.1 0.999 1.4 1
61 136 6.2 0.998 0.0 1
113 117 40.5 0.824 1.4 3
94 122 42.6 0.926 3.5 3
65 127 43.3 0.621 2.3 3
103 130 37.7 0.876 2.2 3
71 141 40.5 0.670 5.3 3
181 149 45.4 0.720 3.0 3
84 153 39.8 0.801 1.9 3
101 179 39.1 0.851 2.4 3


Table 6.  The Coordinates, Periods, Probability Peaks, and Wavelet Cycles at the 99.5% Significance Level and 99.5% Credible Intervals for the 21 pixels Selected for Further Study in Data 1

X Y Period (s) Probability Cycles (99.5%) CI (99.5%)
176 238 6.0 0.998 0.0 1
235 257 16.2 0.999 5.1 1
180 259 28.9 0.998 2.2 1
207 261 28.9 0.998 0.6 1
226 269 26.3 0.996 0.0 1
168 273 28.9 0.999 1.1 1
212 293 6.0 0.997 0.0 1
142 299 23.8 0.995 1.1 1
148 325 8.5 0.998 0.0 1
138 328 28.9 0.998 0.0 1
140 335 6.0 1.000 0.0 1
153 340 31.4 1.000 2.4 1
188 348 13.6 0.996 1.1 1
393 83 26.3 0.993 2.3 2
117 300 31.4 0.987 2.2 2
269 232 56.8 0.735 0.0 3
307 234 44.1 0.499 0.6 3
238 239 49.2 0.953 0.6 3
239 268 46.6 0.837 0.8 3
162 342 54.3 0.859 1.2 3
110 367 54.3 0.783 0.0 3


Table 7.  The XY Coordinates, BaGLS Detected Period, Probability, and 99.5% Credible Interval, and the Number of Period Cycles Detected by the Wavelet Code at the 99.5% Significance Level of the 23 pixels in Data 2 Selected for Further Study

X Y Period (s) Probability Mean Prob. Count CI (99.5%) Mean CI Count Cycles (99.5%)
138 252 24.9 0.995 0.240 0 1.0 109.8 1000 1.2
161 266 36.2 0.998 0.229 0 1.0 109.8 1000 2.2
91 295 19.3 1.000 0.227 0 1.0 109.9 1000 0.0
140 305 37.0 0.999 0.223 0 1.0 109.9 1000 0.0
120 308 24.9 0.997 0.238 0 1.0 109.9 1000 1.7
214 336 13.6 0.998 0.250 0 1.0 109.8 1000 0.0
159 338 27.4 0.999 0.210 0 1.0 109.9 1000 2.4
175 348 27.4 0.995 0.232 0 1.0 109.2 1000 0.9
103 349 24.1 0.996 0.245 0 1.0 109.6 1000 4.2
330 355 34.6 0.995 0.236 0 1.0 109.9 1000 3.3
113 368 26.5 0.998 0.234 0 1.0 109.2 1000 0.0
143 386 34.6 0.999 0.261 0 1.0 109.7 1000 1.2
186 222 46.7 0.917 0.232 4 2.0 109.2 1000 0.8
171 238 52.4 0.737 0.218 2 2.0 109.8 1000 0.0
154 262 51.6 0.618 0.239 12 2.0 109.9 1000 0.9
192 277 50.0 0.727 0.251 7 2.0 109.7 1000 2.3
225 303 57.2 0.546 0.228 14 2.0 109.8 1000 1.2
209 258 59.6 0.911 0.207 0 3.0 109.9 1000 0.0
192 268 54.8 0.962 0.225 0 3.0 109.8 1000 0.0
114 285 51.6 0.921 0.237 1 3.0 109.4 1000 0.0
112 309 59.6 0.863 0.243 2 3.0 109.3 1000 2.2
132 314 58.8 0.913 0.223 1 3.0 109.8 1000 1.9
171 317 58.8 0.791 0.235 4 3.0 109.8 1000 3.9

Note. The mean columns give the average probability peak and average credible interval over the 1000 shuffled light curves; the count columns give the number of shuffles for which the shuffled value exceeded the real value.


As a final check against false positives, the full set of light curves analyzed in Data 2 was shuffled to remove any periodicities while maintaining the same statistical scatter. Periodograms of these shuffled light curves were then generated using the BaGLS algorithm, looking in particular for any credible intervals with widths less than or equal to three period bins that arose purely from the noise in the data. This process was repeated 1000 times for each light curve. No 99.5% credible intervals less than three period bins were found for any of the 27.5 million shuffled light curves, showing the credible interval to be a robust statistic against false positives. Table 7 is a full listing of the statistics generated by this method for those pixels in Data 2 that were selected for study.
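The logic of this Monte Carlo check can be sketched as below (a toy sketch: `toy_posterior`, a sharpened FFT power spectrum with an arbitrary sharpness constant, stands in for the actual BaGLS periodogram):

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_width_bins(prob, level=0.995):
    # Width, in period bins, of the smallest set of bins that
    # contains `level` of the total probability.
    order = np.argsort(prob)[::-1]
    return int(np.searchsorted(np.cumsum(prob[order]), level) + 1)

def toy_posterior(flux, sharpness=50.0):
    # Toy stand-in for a BaGLS periodogram: normalized FFT power,
    # exponentially sharpened to mimic a strongly peaked posterior.
    power = np.abs(np.fft.rfft(flux - flux.mean())[1:]) ** 2
    z = sharpness * power / power.sum()
    prob = np.exp(z - z.max())
    return prob / prob.sum()

# A "real" light curve: a sine plus Gaussian noise.
t = np.arange(223) * 6.0
flux = np.sin(2.0 * np.pi * t / 60.0) + 0.5 * rng.normal(size=t.size)
real_width = ci_width_bins(toy_posterior(flux))

# Shuffling destroys the periodicity while keeping the same scatter;
# count how often noise alone yields an interval as narrow as the real one.
false_positives = sum(
    ci_width_bins(toy_posterior(rng.permutation(flux))) <= real_width
    for _ in range(1000)
)
```

The point of the test is exactly this comparison: shuffled (noise-only) light curves essentially never produce a credible interval as narrow as a genuine detection.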

Figure 17 is a detailed look at this method for pixel [186, 222]. It contains a histogram of the probability peaks and 99.5% credible intervals for each of the 1000 shuffled light curves. It shows that while noise can produce probability peaks above the detected value, it can only rarely generate 99.5% credible intervals of three period bins or fewer.

Figure 17. Histograms of the probability peaks and 99.5% credible intervals of pixel [186, 222], one of the pixels listed in Table 7. The actual value is noted as the red line, while the green histogram bins are where the probabilities and credible intervals fall after being shuffled. Note that while noise can generate probability peaks greater than the peak of the unshuffled light curve, it does not generate 99.5% credible intervals with a width of three period bins or fewer.

6. Discussion

As noted in Section 5, a total of 55 light curves have been found in this study and determined suitable for further study of wave heating in the corona. These likely have S/N > 0.354 for Sample 1, S/N > 0.7070 for Data 1, and S/N > 0.3535 for Data 2. The discrepancy in S/N between the data sets is likely caused by Data 1 being a quarter of the length of Data 2, making the BaGLS algorithm roughly half as sensitive.

To estimate the actual amplitudes of each signal, a period-folding method similar to that of Kelley (1993) and Stellingwerf (1978) was developed using the period matching the probability peak of the BaGLS periodogram. This was not used to further refine periods, as the noise present in the light curves prevented a consistent recovery of the sinusoidal periods during testing. An example of the period folding is shown in Figure 18.
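The folding itself can be sketched as follows (a sketch; `fold_light_curve` is a hypothetical helper, with the number of phase bins equal to the detected period divided by the average cadence, and per-bin scatter as a simple stand-in for the propagated errors):

```python
import numpy as np

def fold_light_curve(time, flux, period, nbins):
    # Fold the light curve at the detected period and average the
    # points landing in each phase bin (assumes every bin receives
    # at least one point, which holds when nbins = period / cadence).
    phase = (time % period) / period
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    folded = np.array([flux[idx == b].mean() for b in range(nbins)])
    scatter = np.array([flux[idx == b].std() for b in range(nbins)])
    return folded, scatter

# Peak-to-valley amplitude of the folded curve:
#   amplitude = folded.max() - folded.min()
```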

Figure 18. The period folding technique applied to pixel [186, 222] from Data 2. The error bars were found by propagating the error of each data point during the folding. The number of bins is the detected period divided by the average cadence, a direct consequence of the period folding technique.

Amplitudes were obtained by coherently adding and then averaging the subsections of the folded light curve, and are taken as the peak-to-valley distance of the folded curve. The amplitude's error is measured as the standard deviation of the folded curve. From this, $\tfrac{{\rm{\Delta }}I}{I}$ was calculated using the folded amplitude and the mean intensity of the raw light curve. This was followed by an estimation of the plasma β using the "oscillating loop" model of Zaitsev & Stepanov (1989). Assuming that their model can be applied to soft X-rays (it was derived for hard X-rays), the modulation of the loop intensity is related to ${\beta }_{P}$ as

Equation (17)

One can also compare intensity to density with the relations

Equation (18): $I\propto {\rho }^{2}$

Equation (19): $\tfrac{{\rm{\Delta }}\rho }{\rho }=\tfrac{1}{2}\tfrac{{\rm{\Delta }}I}{I}$

where the temperature- and abundance-dependent constants making the relations into equalities presumably do not change over the observed timescales. These results are tabulated in Tables 8–10 for Sample 1, Data 1, and Data 2 and are in rough agreement with McKenzie & Mullan (1997), though the values above 100 DN/s/pixel are possible outliers in the data.

Table 8.  Coordinates, Period, Amplitude, $\tfrac{{\rm{\Delta }}I}{I}$, $\tfrac{{\rm{\Delta }}\rho }{\rho }$, and ${\beta }_{P}$ for Sample 1. The Errors on ${\beta }_{P}$ Are the Same as Those on $\tfrac{{\rm{\Delta }}I}{I}$

X Y Period (s) Amplitude (DN/s/pixel) $\tfrac{{\rm{\Delta }}I}{I}$ $\tfrac{{\rm{\Delta }}\rho }{\rho }$ ${\beta }_{P}$
113 117 40.5 179.9 ± 96.4 0.04 ± 0.02 0.02 ± 0.01 0.1 ± 0.1
146 118 5.5 5.0 ± 3.7 0.002 ± 0.001 0.001 ± 0.001 0.01 ± 0.004
94 122 42.6 162.5 ± 81.0 0.04 ± 0.02 0.02 ± 0.01 0.2 ± 0.1
65 127 43.3 169.7 ± 82.6 0.06 ± 0.03 0.03 ± 0.02 0.2 ± 0.1
103 130 37.7 228.2 ± 103.0 0.04 ± 0.02 0.02 ± 0.01 0.1 ± 0.1
69 135 11.1 42.6 ± 27.7 0.01 ± 0.01 0.01 ± 0.003 0.04 ± 0.02
61 136 6.2 29.1 ± 21.0 0.01 ± 0.003 0.003 ± 0.002 0.02 ± 0.01
71 141 40.5 254.1 ± 120.4 0.03 ± 0.02 0.02 ± 0.01 0.1 ± 0.1
181 149 45.4 122.4 ± 53.5 0.04 ± 0.02 0.02 ± 0.01 0.2 ± 0.1
84 153 39.8 257.4 ± 126.7 0.03 ± 0.01 0.01 ± 0.01 0.1 ± 0.1
101 179 39.1 128.9 ± 65.0 0.05 ± 0.03 0.02 ± 0.01 0.2 ± 0.1


Table 9.  Coordinates, Period, Amplitude, $\tfrac{{\rm{\Delta }}I}{I}$, $\tfrac{{\rm{\Delta }}\rho }{\rho }$, and ${\beta }_{P}$ for Data 1. The Errors on ${\beta }_{P}$ Are the Same as Those on $\tfrac{{\rm{\Delta }}I}{I}$

X Y Period (s) Amplitude (DN/s/pixel) $\tfrac{{\rm{\Delta }}I}{I}$ $\tfrac{{\rm{\Delta }}\rho }{\rho }$ ${\beta }_{P}$
176 238 6.0 35.0 ± 25.5 0.02 ± 0.02 0.01 ± 0.01 0.1 ± 0.1
235 257 16.2 20.2 ± 15.0 0.03 ± 0.02 0.02 ± 0.01 0.1 ± 0.1
180 259 28.9 91.8 ± 47.9 0.05 ± 0.02 0.02 ± 0.01 0.2 ± 0.1
207 261 28.9 125.6 ± 64.0 0.04 ± 0.02 0.02 ± 0.01 0.2 ± 0.1
226 269 26.3 79.1 ± 46.9 0.05 ± 0.03 0.02 ± 0.01 0.2 ± 0.1
168 273 28.9 71.9 ± 33.4 0.06 ± 0.03 0.03 ± 0.01 0.2 ± 0.1
212 293 6.0 33.6 ± 24.5 0.03 ± 0.02 0.01 ± 0.01 0.1 ± 0.1
142 299 23.8 65.4 ± 36.4 0.04 ± 0.02 0.02 ± 0.01 0.1 ± 0.1
148 325 8.5 7.3 ± 5.8 0.01 ± 0.005 0.003 ± 0.003 0.02 ± 0.02
138 328 28.9 54.7 ± 28.7 0.06 ± 0.03 0.03 ± 0.02 0.2 ± 0.1
140 335 6.0 13.0 ± 9.5 0.01 ± 0.01 0.007 ± 0.005 0.05 ± 0.04
153 340 31.4 38.9 ± 21.1 0.05 ± 0.03 0.03 ± 0.01 0.2 ± 0.1
188 348 13.6 22.1 ± 16.1 0.02 ± 0.01 0.01 ± 0.007 0.1 ± 0.05
393 83 26.3 51.8 ± 33.0 0.07 ± 0.05 0.04 ± 0.02 0.3 ± 0.2
117 300 31.4 31.1 ± 15.1 0.03 ± 0.01 0.01 ± 0.01 0.1 ± 0.1
269 232 56.8 243.7 ± 121.7 0.1 ± 0.07 0.06 ± 0.03 0.4 ± 0.3
307 234 44.1 77.6 ± 46.1 0.06 ± 0.04 0.03 ± 0.02 0.2 ± 0.1
238 239 49.2 86.7 ± 40.6 0.08 ± 0.04 0.04 ± 0.02 0.3 ± 0.2
239 268 46.6 63.3 ± 34.3 0.08 ± 0.04 0.04 ± 0.02 0.3 ± 0.2
162 342 54.3 72.4 ± 32.8 0.09 ± 0.04 0.04 ± 0.02 0.3 ± 0.2
110 367 54.3 58.0 ± 29.5 0.09 ± 0.05 0.04 ± 0.02 0.3 ± 0.2


Table 10.  Coordinates, Period, Amplitude, $\tfrac{{\rm{\Delta }}I}{I}$, $\tfrac{{\rm{\Delta }}\rho }{\rho }$, and ${\beta }_{P}$ for Data 2. The Errors on ${\beta }_{P}$ Are the Same as Those on $\tfrac{{\rm{\Delta }}I}{I}$

X Y Period (s) Amplitude (DN/s/pixel) $\tfrac{{\rm{\Delta }}I}{I}$ $\tfrac{{\rm{\Delta }}\rho }{\rho }$ ${\beta }_{P}$
138 252 24.9 10.6 ± 6.3 0.02 ± 0.01 0.01 ± 0.01 0.1 ± 0.05
161 266 36.2 27.1 ± 15.5 0.04 ± 0.03 0.02 ± 0.01 0.2 ± 0.1
91 295 19.3 7.8 ± 5.3 0.01 ± 0.01 0.01 ± 0.003 0.04 ± 0.03
140 305 37.0 16.6 ± 10.8 0.01 ± 0.01 0.01 ± 0.003 0.04 ± 0.03
120 308 24.9 9.5 ± 5.7 0.01 ± 0.01 0.005 ± 0.003 0.04 ± 0.02
214 336 13.6 2.3 ± 1.7 0.003 ± 0.002 0.002 ± 0.001 0.01 ± 0.01
159 338 27.4 16.4 ± 10.6 0.01 ± 0.01 0.01 ± 0.005 0.1 ± 0.04
175 348 27.4 4.6 ± 3.0 0.01 ± 0.004 0.002 ± 0.002 0.02 ± 0.02
103 349 24.1 20.2 ± 13.0 0.04 ± 0.03 0.02 ± 0.01 0.2 ± 0.1
330 355 34.6 11.7 ± 6.8 0.01 ± 0.01 0.01 ± 0.004 0.1 ± 0.03
113 368 26.5 9.0 ± 5.9 0.01 ± 0.01 0.01 ± 0.005 0.1 ± 0.04
143 386 34.6 7.7 ± 4.4 0.02 ± 0.01 0.01 ± 0.005 0.1 ± 0.04
186 222 46.7 16.3 ± 9.7 0.03 ± 0.02 0.01 ± 0.01 0.1 ± 0.1
171 238 52.4 29.1 ± 16.9 0.03 ± 0.02 0.01 ± 0.01 0.1 ± 0.1
154 262 51.6 13.5 ± 7.8 0.02 ± 0.01 0.01 ± 0.01 0.1 ± 0.05
192 277 50.0 19.0 ± 11.1 0.01 ± 0.01 0.01 ± 0.003 0.05 ± 0.03
225 303 57.2 26.6 ± 13.1 0.05 ± 0.02 0.02 ± 0.01 0.2 ± 0.1
209 258 59.6 60.6 ± 32.5 0.02 ± 0.01 0.01 ± 0.007 0.1 ± 0.05
192 268 54.8 42.7 ± 23.9 0.02 ± 0.01 0.01 ± 0.01 0.1 ± 0.04
114 285 51.6 18.7 ± 10.1 0.02 ± 0.01 0.01 ± 0.01 0.1 ± 0.05
112 309 59.6 42.4 ± 22.9 0.04 ± 0.02 0.02 ± 0.01 0.2 ± 0.1
132 314 58.8 30.3 ± 15.7 0.03 ± 0.02 0.02 ± 0.01 0.1 ± 0.1
171 317 58.8 112.9 ± 69.3 0.07 ± 0.04 0.03 ± 0.02 0.3 ± 0.2


Further work on this study would involve taking a larger sampling of XRT data with similar cadences over a range of temperatures. This could be achieved by using the wealth of high-cadence XRT data taken on the same day as (and two days after) the data presented herein, although one would have to be careful in selecting and calibrating the data, as the times at which they were taken with the different analysis filters are not strictly concurrent. One could then apply this same methodology to all of those data sets, obtaining ${\beta }_{P}$ through temperature-dependent methods and seeing whether the values yield similar or different results. This would be the starting point for a deeper study into the physics underlying coronal loop oscillations.

Modifications could also be made to the BaGLS algorithm itself. Currently, it searches only for the fundamental harmonic of sinusoidal signals, but there is no known reason why coronal loop oscillations have to be of this form. Algorithms could be designed to search for higher harmonics or completely different functional forms, such as a prior to detect a triangle, square, or sawtooth function. Such an approach might also be able to narrow down the functional form(s) of the coronal loops. Similar approaches could also be taken with the noise, assuming for example red noise instead of white noise. These methods could also be combined into searching for odds ratios (N. J. Cornish 2016, private communication) between different functional forms and noise. While this is mathematically and computationally more difficult, it raises the possibility of arriving more naturally at not only the period but also the amplitude and phase information (both of which BaGLS unceremoniously drops).

The authors thank Dana Longcope, Charles Kankelborg, and Neil Cornish for their insights into solar physics and data analysis, which were invaluable in sharpening the points made in this work. Hinode is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner and with NASA and STFC (UK) as international partners. Scientific operation of the Hinode mission is conducted by the Hinode science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (UK), NASA, ESA, and NSC (Norway). This work was partially supported by NASA under contract NNM07AB07C with the Smithsonian Astrophysical Observatory.
