Optical Variability of ICRF3 Quasars in the Pan-STARRS 3Pi Survey with Functional Principal Component Analysis


Published 2021 June 18 © 2021. The American Astronomical Society. All rights reserved.
Citation: C. T. Berghea et al. 2021 AJ 162 21. DOI: 10.3847/1538-3881/abfc51


Abstract

We make use of individual (epoch) detection data from the Pan-STARRS "3π" survey for 2863 optical ICRF3 counterparts in the five wavelength bands g, r, i, z, and y, published as part of Data Release 2. A dedicated method based on Functional Principal Component Analysis is developed for these sparse and irregularly sampled data. With certain regularization and normalization constraints, it allows us to obtain uniform and compatible estimates of the variability amplitudes and average magnitudes between the passbands and objects. We find that the starting assumption of affinity of the light curves for a given object at different wavelengths is violated for several percent of the sample. The distributions of rms variability amplitudes are strongly skewed toward small values, peaking at ∼0.1 mag with tails stretching to 2 mag. Statistically, the lowest variability is found for the r band and the largest for the reddest y band. A small "brighter-redder" effect is present, with amplitudes in y greater than amplitudes in g in 57% of the sample. The variability versus redshift dependence shows a strong decline with z toward redshift 3, which we interpret as the time dilation of the dominant time frequencies. The colors of radio-loud ICRF3 quasars are correlated with redshift in a complicated, wavy pattern governed by the emergence of the brightest emission lines within the five passbands.


1. Introduction

The crucially important tie between the radio and optical fundamental reference frames relies on a few thousand International Celestial Reference Frame (ICRF) sources with optical counterparts bright enough to be detected by the Gaia mission. The optical ICRF sources are not ideal for this work, with the point-like active galactic nucleus (AGN) images often perturbed by the substrate structures of host or neighboring galaxies (Makarov et al. 2012). A smaller fraction of reference frame sources resides in double systems or microlensed images. The intrinsic variability of AGNs is another complication, from both the astrophysical and technical points of view. The astrometric photocenter is determined by the combined flux from the substrate galaxy and the point-like AGN, which may vary in brightness and color. The resulting astrometric perturbations are complex and unpredictable, with magnitudes possibly dependent on redshift and AGN luminosity. Our overarching motivation is to address the emerging problem of the astrometric link between the two fundamental celestial reference frames, the radio-based ICRF3 and the optical Gaia mission frame (Brown & Gaia Collaboration 2016; Gaia Collaboration et al. 2016). Using the earlier versions of VLBI-determined and Gaia astrometric catalogs, a significant fraction (a few to several percent) of reference objects was found to have large radio-optical position offsets (Mignard et al. 2016; Makarov et al. 2017; Petrov & Kovalev 2017). Such offsets can perturb or completely derail global astrometric solutions that rely on the VLBI positions of a relatively small set of radio-loud AGNs to remove the rigid rotation and other sky-correlated errors (Berghea et al. 2016; Lindegren et al. 2018). This effect was confirmed with the more accurate Gaia Data Release 2 (DR2; Makarov et al. 2019; Plavin et al. 2019).
Within the ICRF3 sample, the offset is weakly dependent on redshift, except for the nearest AGNs, which may in fact be physically offset (dislodged) from the optically dominating centers of their host galaxies. At greater cosmological distances, relativistic jets dominating the radio photocenters may be shifted from the central engines by measurable amounts. Some of these interpretations can be indirectly tested using light curves and general variability estimates. We would ultimately like to know if the objects with significant position offsets also display greater optical variability. In this paper, we are looking for answers to these questions:

  • 1.  
    How variable are the ICRF3 quasars overall?
  • 2.  
    How do the amplitudes of variability compare between the five passbands of Pan-STARRS?
  • 3.  
    Does the degree of variability statistically depend on redshift?
  • 4.  
    What are the typical colors of ICRF3 quasars and are they dependent on redshift?
  • 5.  
    Do ICRF3 quasars appreciably change colors due to brightness variations?

Quasars are intrinsically variable sources in the optical continuum, which helps to distinguish them from stars in large photometric surveys (Van den Bergh et al. 1973; Hawkins 1983; Vanden Berk et al. 2004). In a large magnitude-limited sample, stars always have statistically smaller "amplitudes" of variability. The character of photometric variability of stars is also different from that of quasars and AGNs. While the stellar light curves often have a periodic or quasi-periodic character caused by internal pulsations or binary effects, or exhibit randomly placed short-term flares, quasars' fluxes appear to follow a stochastic, unpredictable pattern, which is mathematically closer to a random process. These properties make the definitions of both "variability amplitude" and "mean magnitude" model-dependent, to be inferred from fitted parameters of a chosen stochastic model. Many studies have described the degree of variability in terms of the empirically determined structure function SF(Δt), where Δt is the time difference between two measurements. The square of the first-order structure function within the simplest random process models can be interpreted as the sample estimate of the variance of the magnitude difference between two measurements separated by Δt in time (a more general definition of structure function can be found in, e.g., Kasliwal et al. 2015). It can be presented as

${\mathrm{SF}}^{2}({\rm{\Delta }}t)=2\,{\sigma }_{m}^{2}\,[1-\rho ({\rm{\Delta }}t)]+2\,{\sigma }_{n}^{2}\qquad (1)$

where ${\sigma }_{m}^{2}$ is the variance of the true magnitude, ${\sigma }_{n}^{2}$ is the variance of observational noise, and ρ(Δt) is the autocorrelation function, assuming that (1) the true magnitudes and noise are statistically independent; (2) the expectancy of noise is zero (no bias); (3) the noise is uncorrelated (no systematic errors); and (4) the process is stationary, i.e., the variance of true magnitude is independent of the true magnitude. It is common to further assume that the autocorrelation function equals 1 at zero time lag and smoothly declines to 0 at a certain characteristic time lag τ. Somewhat arbitrarily, this expectation leads to common attempts to model the structure function as a power law with a certain slope γ and characteristic lag τ, to be determined from observation. Certain models are mathematically consistent with this expected behavior of the autocorrelation function, most notably, the CARMA(1,0) continuous random process, which motivated its application in AGN light-curve analysis for interpolation and forecasting (e.g., Kelly et al. 2014; Moreno et al. 2019). These models are empirical rather than physical, as the physics behind quasar variability is likely to be complex and multivariate in nature. CARMA(p,q) models with a greater number of free model parameters may be closer to the physical reality (Kasliwal et al. 2017). Even within its limited scope and implicit assumptions, the three parameters in the above equation are highly dispersed among individual objects in a sample. To gain some confidence in the derived models, large samples of data are often utilized, such as spectroscopically confirmed quasars from the Sloan Digital Sky Survey (SDSS; Vanden Berk et al. 2004). The result may be distorted by trying to represent a diverse population of light curves with a single and simple fitting function and a set of parameters.
Alternatively, the structure function and random process models can be meaningfully estimated for individual objects given a very dense and continuous observational cadence, such as a Kepler mission light curve (Wehrle et al. 2013), but this severely limits the available sample.
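The empirical structure function described above can be estimated directly from an irregularly sampled light curve by averaging squared magnitude differences over all epoch pairs in bins of time lag. A minimal NumPy sketch (the light curve and lag bins below are synthetic illustrations, not ICRF3 data):

```python
import numpy as np

def structure_function(t, m, bins):
    """Empirical first-order structure function SF^2(dt): the mean squared
    magnitude difference over all epoch pairs, binned by time lag."""
    i, j = np.triu_indices(len(t), k=1)        # all unique epoch pairs
    lags = np.abs(t[j] - t[i])
    d2 = (m[j] - m[i]) ** 2
    which = np.digitize(lags, bins)            # lag-bin index for each pair
    return np.array([d2[which == b].mean() if np.any(which == b) else np.nan
                     for b in range(1, len(bins))])

# illustrative irregular cadence: white noise around a slow sinusoidal trend
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 1500.0, 120))     # epochs in days
m = 19.0 + 0.3 * np.sin(2 * np.pi * t / 900.0) + rng.normal(0.0, 0.05, t.size)
sf2 = structure_function(t, m, bins=np.array([0.0, 30.0, 100.0, 300.0, 1000.0]))
```

For a process of this kind, SF² levels off near 2σ_n² at short lags and grows with lag as the autocorrelation declines, mirroring Equation (1).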

In this paper, we attempt to find a compromise solution and to gain some insight into the optical properties of individual ICRF3 objects while making as few model assumptions as possible. The method of data analysis should be robust enough to process the sparse and nonuniform data sets for each individual object, rather than treat the data as a single statistical ensemble. We therefore develop a specialized Functional Principal Component Analysis (FPCA) approach, which is flexible and amenable to additional regularization constraints and a Bayesian prior. The use of FPCA in astronomy is gaining momentum (e.g., He et al. 2018), but this is, to our knowledge, the first application of this technique to survey-type AGN light curves.

2. The Sample

We analyze astrometric and photometric epoch data collected by Pan-STARRS DR2 (PS1-DR2) for a sample of 2863 ICRF3 radio sources. PS1 is a wide-field imaging system, with a 1.8 m telescope and a 7.7 deg2 field of view, located on the summit of Haleakala on the Hawaiian island of Maui. The 1.4 Gpixel camera consists of 60 CCDs with a pixel size of 0.256 arcsec (Onaka et al. 2008; Tonry & Onaka 2009). It uses five filters (gP1, rP1, iP1, zP1, and yP1, hereafter grizy), similar to the ones used by SDSS (York et al. 2000). The Pan-STARRS photometric system is described in Tonry et al. (2012). The largest survey PS1 performed was the 3π survey, covering the entire sky north of decl. −30°. The epoch data for the 3π survey used in this paper were made available to the public in 2019 January as part of the second data release (DR2).

We started by cross-matching all 4536 ICRF3 sources with the MeanObjectView PS1 catalog in the public archive at the Space Telescope Science Institute (STScI), using a 3'' cone search. This resulted in 3348 objects with at least one match, 133 of which had more than one counterpart. Color images were extracted for all these objects using the cutout server at STScI. We removed 194 sources with significant offsets (larger than 0.2 arcsec) between their PS1 and ICRF3 positions. We also removed sources that had nearby optical companions (within ≈0.5 arcsec) because they could affect the photometry. We used the MeanObjectView filter flag data and visual inspection to remove all sources that showed extended structure. This process left 2896 sources that were deemed acceptable for this study.

The epoch data were extracted from the archive for all these sources (from the Detection view in the PS1 archive). We finally removed 14 sources with fewer than 10 observations, while 19 more were missing from the epoch data. The latter all lie inside a specific PS1 observation stripe (decl. range 54.2° to 57.6°), due to errors in the processing of PS1 DR2 data (R. White 2020, private communication). The final sample contains 2863 ICRF3 sources. The epoch data have been cleaned of outliers using a Gaussian kernel with probability density <0.05, as described in detail elsewhere (Berghea et al. 2016). The purpose of this cleaning was to remove the rather frequent outliers generated by flukes and by real double or multiple sources within the search radius of 3''. Such outliers usually have discordant magnitude estimates and impact the light-curve analysis, in some cases resulting in complete failure.
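The density-based cleaning can be sketched as follows; the kernel density estimator, the normalization, and the exact use of the 0.05 threshold here are our illustrative reading of the cited procedure, not the published implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def clean_outliers(mags, cut=0.05):
    """Flag epoch magnitudes that fall in low-density regions of a Gaussian
    kernel density estimate of the magnitude sample (illustrative sketch)."""
    dens = gaussian_kde(mags)(mags)            # KDE evaluated at each point
    return dens / dens.max() >= cut            # keep points above density cut

# toy light curve: a tight cluster of magnitudes plus one gross outlier,
# e.g. a mismatched companion within the search radius
rng = np.random.default_rng(0)
mags = np.concatenate([rng.normal(19.5, 0.1, 60), [17.0]])
keep = clean_outliers(mags)
```

A single point several magnitudes away from the cluster gets a near-zero relative density and is flagged, while the cluster itself is retained.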

The epoch data for each source contain the IVS name of the associated radio source, ICRF3 coordinates (R.A. and decl. at J2000), PS1 measurements of the gP1, rP1, iP1, zP1, and yP1 magnitudes, individual astrometric determinations (epoch R.A. and decl.), associated formal errors, and other observational metadata. Not every object has observations in each of the five filters. The typical number of detections per object is 60, so that there are on average only 12 observations per filter, but the numbers are very nonuniformly distributed. The time span of the survey is about 4 yr. The observational cadences are sparse and nonuniform.

3. The Character of Light Curves

AGNs are intrinsically variable sources. The light curves are not fully self-correlated or periodic, and are believed to be better represented by stochastic, memory-less continuous processes, such as damped random walk or autoregressive models (Andrae et al. 2013). The spectral power density of variability is often assumed to approximately scale with frequency as f−2 (Collier & Peterson 2001). More recent evidence based on well-sampled Kepler mission and OGLE data suggests significantly steeper scaling dependencies, turning points, and intrinsic scatter between individual objects of the same class (Mushotzky et al. 2011; Zu et al. 2013; Kasliwal et al. 2015; Smith et al. 2018). The characteristic timescale of variations is about 1 month, but significant variations have been detected even on the intra-night timescales. PS1 observations span about 4 yr, but the observing cadence was irregular, with batches of epochs separated by long spans with no data at all (Figure 1). The degree of variability appears to be different among the sample sources too. To complicate the analysis even more, PS1 magnitudes were obtained in five different filters. The distribution of epochs is systematically different for each of the Pan-STARRS filters. A given object was never observed with two filters on the same night. Brightness variations of quasars are not always achromatic (Hawkins 1996), for a few physical reasons discussed below. Thus, we cannot generally assume that magnitudes follow the same light curve in each filter with only a constant shift between them.


Figure 1. A Pan-STARRS light curve for one of the fundamental reference frame quasars. Measurements with different filters are shown with different symbols: blue circles for g, cyan circles for r, filled magenta squares for i, open red squares for z, and open brown squares for y. Most of the formal magnitude errors are between 0.02 and 0.05 mag, with the exception of the y-filter measurements, where half of the errors are between 0.10 and 0.14 mag. Note the large gaps between observations due to observing seasons, possibly one faint outlier at the beginning of the survey, and the variable g − i color of the object. Also note that most observations are not simultaneous in different filters.


Let L(t) be a light curve of a given QSO as a function of time expressed in magnitudes. We assume that L(t) is a continuous and smooth (differentiable) function. This function is often modeled using a continuous autoregressive (CAR) process for survey-type light curves (e.g., Kelly et al. 2009), further developed into the more general and accommodating CARMA(p,q) model (Kelly et al. 2014; Simm et al. 2016; Kasliwal et al. 2017). In our case, there are too few data points in each of the five filters to use this method. The main results of the CAR fitting are the characteristic timescales and the level of the random noise component, which have statistical bearing on certain physical parameters of interest, e.g., the L/LEdd ratio. Given the sparsity of the data, we set more modest goals for this study: to estimate the degree of variability in a consistent and reproducible way for each ICRF object in each filter, as well as "average" magnitudes and possible chromatic deviations in the light curves.

4. Functional PCA

The basic idea of the FPCA approach is that the underlying light curve can be represented by a finite set of basis functions, and that the observed discrete series of magnitudes can be closely captured by a small number of orthogonal discrete functions called principal components. Each of the PC functions is completely defined by its projection onto the set of basis functions. Here we use the Fourier functions of time augmented with a linear trend as the basis in the space of continuous functions:

$L(t)={\alpha }_{1}\,\displaystyle \frac{t-{t}_{0}}{{\rm{\Delta }}t}+\displaystyle \sum _{k=1}^{K}\left[{\alpha }_{2k}\,\cos \frac{2\pi k(t-{t}_{0})}{{\rm{\Delta }}t}+{\alpha }_{2k+1}\,\sin \frac{2\pi k(t-{t}_{0})}{{\rm{\Delta }}t}\right]\qquad (2)$

where t0 is the midepoch of observations and Δt is the time span of observations. The presence of the linear trend term is justified by the limited time span of PS1 light curves (roughly, 4 yr) and the likely possibility that the longer-term light-curve variation will not be captured by the periodic Fourier terms. The limiting index K defines the highest frequency of the fitting functions. Technically, any discrete set of magnitudes can be exactly represented by this decomposition if K corresponds to the Nyquist frequency of the smallest time interval of consecutive observations, $\delta {t}_{\min }=\min ({t}_{i+1}-{t}_{i})$, i.e., $K=\mathrm{Ceil}({\rm{\Delta }}t/(2\,\delta {t}_{\min }))$. In practice, we restrict the model to fitting only variations on timescales of 2 weeks (14 days) or longer. If true variations on shorter timescales are present in the data, they are not captured by the fitting model, thus raising the residual errors. This is the first in a series of sacrifices we have to make to obtain a stably solvable problem.
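The basis above can be tabulated on the observed epochs as a design matrix: one trend column plus Fourier pairs with frequencies down to the 14-day cutoff. A sketch under the stated assumptions (the exact normalization of the trend column is our choice for illustration):

```python
import numpy as np

def design_matrix(t, min_timescale=14.0):
    """Columns of Eq. (2) evaluated at epochs t: a linear trend plus cos/sin
    Fourier pairs, with K set by the 14-day shortest fitted timescale."""
    t0 = 0.5 * (t.min() + t.max())             # midepoch
    dt = t.max() - t.min()                     # time span
    K = int(np.ceil(dt / (2.0 * min_timescale)))
    phase = (t - t0) / dt
    cols = [phase]                             # linear trend term
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * phase))
        cols.append(np.sin(2 * np.pi * k * phase))
    return np.column_stack(cols)               # shape (n, 2K + 1)

# toy irregular cadence spanning roughly 4 yr
t = np.sort(np.random.default_rng(2).uniform(0.0, 1400.0, 80))
Y = design_matrix(t)
```

With far fewer epochs than columns, this matrix cannot be inverted directly, which is exactly why the PCA step below is needed.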

We note that Equation (2) is already a step away from the idea of using basis functions, because while the Fourier terms are mutually orthogonal, the linear trend term is generally not. This has no practical consequence for the following processing, because the discretized functions will be orthogonalized in the following PCA.

Let us rewrite Equation (2) as

$L(t)=\displaystyle \sum _{i=1}^{2K+1}{\alpha }_{i}\,{Y}_{i}(t)\qquad (3)$

where Yi (t) are the Fourier terms and the linear trend, in arbitrary order. The measurements are taken at a set of discrete epochs tj , j = 1,...,n. The functions Yi (t) can be discretized at the same cadence, Yij = Yi (tj ), yielding a set of linear equations. These equations cannot be solved for the coefficients αi because the system is severely indeterminate. In order to further limit the number of unknowns, we perform PCA of the discretized functions Yij.

Let Y = (Y1, Y2, ..., Y2K+1) be the matrix composed of the column vectors Yi, which contain the values of the fitting function Yi at times tj, j = 1, ..., n. The size of the matrix is n × (2K + 1). If

${\boldsymbol{Y}}={\boldsymbol{U}}\,{\boldsymbol{\Sigma }}\,{{\boldsymbol{V}}}^{{\rm{T}}}\qquad (4)$

is the singular value decomposition (SVD) of the matrix, the diagonal of Σ includes the singular values σj in descending order, σ1 ≥ σ2 ≥ ... ≥ σn, and the leading n columns of U are the singular (basis) vectors spanning the space of vectors Y i. This decomposition allows us not only to decrease the number of fitting functions to the number of observations, but to further restrict the solution to a smaller number of the most significant functions. The first column of U is the most significant vector, which is most representative of the actual observational cadence, called the first principal component. The exact representation of the discretized functions Y i is achieved with n principal components, but it turns out that a sufficiently close representation is obtained with a smaller number of components corresponding to the largest σj. We chose to make a cut at j = nσ such that σj ≥ 0.1 σ1 for j = 1, 2, ..., nσ, and σj < 0.1 σ1 for j = nσ + 1, ..., n, i.e., we use only the components with singular values greater than or equal to one-tenth of the largest singular value.

The emerging problem can now be written as

$L(t)=\displaystyle \sum _{i=1}^{{n}_{\sigma }}{\beta }_{i}\,{U}_{i}(t)\qquad (5)$

where the continuous functions Ui are certain linear combinations of the much greater number of functions Yi defined by the SVD in Equation (4). When discretized on a particular cadence of times of observation, the overdetermined system of linear equations

${\boldsymbol{L}}={\boldsymbol{U}}\,{\boldsymbol{\beta }}\qquad (6)$

can be solved by the Least Squares method, but, using the orthonormality of PCs, the straightforward solution is obtained by projection

${\boldsymbol{\beta }}={{\boldsymbol{U}}}^{{\rm{T}}}\,{\boldsymbol{L}}.\qquad (7)$

Now recall that the vector L includes all observations of a given object in all five bands, and it describes only the variation of brightness around a constant magnitude. The constant magnitude and the amplitude of variation can be different for different filters. The next tier of this model is to introduce an affine transformation from the basic light curve L to a filter-specific light curve L (f):

${{\boldsymbol{L}}}^{(f)}={m}^{(f)}\,{\bf{1}}+{a}^{(f)}\,{{\boldsymbol{L}}}_{f}\qquad (8)$

where the vector L (f) includes all the observed magnitudes for a specific filter, and L f is the basic variation function L(t) computed at the times of observation in that filter. L (f) is the filter-specific observed light curve, while L f is the corresponding portion of the fitted light curve L in Equation (6). The fitting coefficients m(f) and a(f) (up to 10 for each object) are the main products of this technique. Equations (5) and (8), each being perfectly linear, have to be solved in combination, bringing up a nonlinear problem. Also observe that this problem is degenerate, because the coefficients βi can be multiplied by a number and a(f) divided by the same number producing the same fit. To resolve this degeneracy, and also to be able to compare the variability amplitudes between the objects, a standard normalization of the basic variability vector L is required. The natural choice is to set the norm of the vector to its rank:

$\parallel {\boldsymbol{L}}{\parallel }^{2}={{\boldsymbol{L}}}^{{\rm{T}}}\,{\boldsymbol{L}}=n.\qquad (9)$

This choice of normalization allows us to interpret the resulting m(f) as the estimated "long-term average" magnitude and a(f) as the estimated "characteristic amplitude" of variability in the same units.

The fitting procedure for each quasar begins with evaluating the original fitting functions Yi at the times of observation tj. The resulting n by 2K + 1 matrix provides the principal components U i per Equation (4), which define the functional principal components Ui (t). The observed magnitudes are not used at this step, so it needs to be performed only once. The subsequent optimization problem is solved iteratively, starting with initial estimates of m(f) and a(f) for each of the five filters. We use the median magnitudes and MAD values as initial estimates. Equation (8) is inverted in each iteration to obtain the segments L f, which are combined into L at the right cadence, and the coefficients βi are computed per Equation (7). To preserve the normalization of these coefficients, a quantity called the scale is computed,

${\rm{\Gamma }}={\left(n\,/\,\displaystyle \sum _{i}{\beta }_{i}^{2}\right)}^{1/2},\qquad (10)$

and applied, after which the forward loop of iteration begins with computing L (f) and solving Equation (8) by linear regression. The convergence of this method is normally fast, reflected in the scale numbers tending to unity and the updates to m(f) and a(f) tending to zero.
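The alternating iteration described in this section can be sketched end to end on synthetic two-filter data. The function `fpca_fit` below is a hypothetical, simplified reading of the scheme (inverse affine step, projection onto the principal components, renormalization by the scale, then per-filter linear regression), not the production pipeline:

```python
import numpy as np

def fpca_fit(U, mags, filt, n_iter=20):
    """Alternating fit of a shared normalized variation curve (Eqs. (5)-(7))
    and per-filter offset m(f) and amplitude a(f) (Eq. (8)); sketch only."""
    n = len(mags)
    bands = np.unique(filt)
    # initial estimates: median magnitude and MAD-based amplitude per filter
    m = {f: np.median(mags[filt == f]) for f in bands}
    a = {f: 1.4826 * np.median(np.abs(mags[filt == f] - m[f])) for f in bands}
    L, gamma = np.zeros(n), 1.0
    for _ in range(n_iter):
        # inverse step: strip the per-filter affine transform (Eq. (8))
        L_in = np.array([(mags[j] - m[filt[j]]) / max(a[filt[j]], 1e-6)
                         for j in range(n)])
        beta = U.T @ L_in                      # projection (Eq. (7))
        L = U @ beta
        gamma = np.sqrt(n / max(float(L @ L), 1e-12))
        L *= gamma                             # normalization ||L||^2 = n (Eq. (9))
        # forward step: per-filter linear regression for m(f) and a(f)
        for f in bands:
            sel = filt == f
            A = np.column_stack([np.ones(sel.sum()), L[sel]])
            m[f], a[f] = np.linalg.lstsq(A, mags[sel], rcond=None)[0]
    return m, a, L, gamma

# toy data: one shared curve seen through two "filters" with different
# offsets and amplitudes, sampled on an irregular 4 yr cadence
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 1400.0, 60))
phase = (t - t.mean()) / (t.max() - t.min())
curve = np.sin(4 * np.pi * phase)              # two cycles across the span
filt = np.where(np.arange(60) % 2 == 0, "g", "r")
mags = np.where(filt == "g", 19.0 + 0.5 * curve, 20.0 + 0.8 * curve)
# a generous Fourier-plus-trend basis, reduced to its principal components
Y = np.column_stack([phase] + [f(2 * np.pi * k * phase)
                               for k in range(1, 11) for f in (np.cos, np.sin)])
Uf, s, _ = np.linalg.svd(Y, full_matrices=False)
U = Uf[:, s >= 0.1 * s[0]]
m, a, L, gamma = fpca_fit(U, mags, filt)
```

On this noiseless affine-consistent toy, the iteration converges with the scale near unity and per-filter residuals close to zero.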

5. Regularization

One of the products of this FPCA technique is the continuous function L(t) representing the fitted underlying magnitude variation of the object. It can be easily reconstructed from the final set of βi , basis functions Yi (t), and matrices Σ and V . Due to the large number of basis functions involved in the fit, covering the entire range of observable frequencies, the estimated light curves can take large, unrealistic deviations in the gaps between observations, where the solution is not constrained by the data. To regularize the solution better, and to introduce a Bayesian prior into the fitting process, we chose to weight the set of basis Fourier functions by

${w}_{k}\propto 1/k,\qquad k=1,\,\ldots ,\,K.\qquad (11)$

This helps to obtain smoother and better-behaved light-curve solutions and to introduce the previously found inference that the spectral power density is a declining function of frequency (although an exact dependence is problematic for the wide range of timescales, from 2 weeks to several years, involved in our case). Even with this regularization, the fits tend to show unreasonable-looking wiggles between widely separated nodes. This is the direct consequence of the long gaps in the observing cadence of Pan-STARRS caused by observing seasons, which are unavoidable for any ground-based telescope. Unfortunately, the fits cannot be used to interpolate the light curves between the data points.

The fitted light curve L(t) is not used in this study to interpolate or predict the magnitudes between the points of observation, so the utility of this additional regularization may be questionable. However, experimenting with the data in the general sample, we found a small fraction of objects with one or two apparent outliers in one of the bands, which show a sudden change against a general trend confirmed by nearly simultaneous measurements in other bands. The available formal errors and quality flags do not allow us to objectively distinguish such occurrences, while a single deviant point may result in a poorly justified high-frequency component of the fitted function and bias the amplitude estimation in the unaffected bands. Therefore, this regularization helps to slightly reduce the rate of perturbed solutions. As a downside, the fitted light curves cannot be used to investigate the spectral power distribution of ICRF quasar variability.

Propagation of random measurement errors into the estimated parameter space is an important and nontrivial aspect of light-curve analysis. Our method has been specifically designed to work with the extremely sparse and irregular observational cadences of the Pan-STARRS general survey, obtained with five different filters. These five cadences are combined into a single light curve allowing free affine mapping between them. The underlying signal is assumed to be homomorphous, but each individual measurement is perturbed by a random uncorrelated error, which violates this assumption. As a result, a perfectly constant source processed with the FPCA method will show small rms variability amplitudes that are merely random coherences between the filter-specific data samples. It is a well-known fact that large gaps in light curves can give rise to magnified random errors in the estimated variability amplitudes, for example, in the traditional periodogram analysis. This happens because the weak constraint of the condition equations results in a wider dynamical range of eigenvalues and the emergence of low-significance principal components. The advantage of our method is that this harmful build-up of random error propagation via the least significant principal components with small eigenvalues is avoided by limiting the fit to only the most significant components, which are relatively well constrained. Still, the power spectrum density (PSD) of uncorrelated noise is flat, and the estimated amplitudes are affected by its presence in the data. The additional regularization of the solution by the weighting of the initial Fourier terms (Equation (11)) and the transformation from Fourier functions to functional principal components make it difficult to analytically estimate the remaining error propagation.

The best approach is to use Monte Carlo simulations, perturbing each data point by random numbers with the specified formal variances and zero mean and repeating the FPCA fitting and estimation multiple times. Even non-Gaussian distributions of measurement error can be tested and experimented with, but, unfortunately, Monte Carlo simulations are quite computationally costly. We performed limited experiments with a few selected objects. For the ICRF3 quasar IVS 0000−197, which is listed in Table 1, we performed Monte Carlo trials perturbing the initial data 300 times and recomputing the mean magnitudes "cv" and rms amplitudes "av" (the FPCA part does not have to be recomputed because it is defined by the cadence only). The formal Pan-STARRS errors for this object range from 0.025 to 0.199 mag, the greater values being mostly associated with the blue g and r passbands, where this red quasar is faint. The errors are comparable to the estimated parameters and, not surprisingly, affect the results. Using 1.5 × MAD as a robust proxy for the standard deviation, appropriate for the emerging non-Gaussian sample distributions, we obtain these estimates of the amplitude scatter in the grizy bands: 0.11, 0.09, 0.22, 0.20, and 0.23 mag. The blue passband amplitudes seem to be less dispersed; however, their sample distributions are steeper on the short side and have long tails stretching to values above 1 mag. The median amplitudes over 300 trials are 0.56, 0.54, 0.75, 0.52, and 0.52 mag, respectively. These results are not too far from the single-point estimates in Table 1, except for the g filter, where observation errors conspired to produce a result at the high end of the sample distribution.
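The Monte Carlo error propagation can be sketched generically; here `fit_amplitude` is a stand-in rms estimator rather than the full FPCA refit, and the 1.5 × MAD robust summary follows the text:

```python
import numpy as np

def mc_amplitude_scatter(fit_amplitude, mags, errs, n_trials=300, seed=0):
    """Perturb each epoch magnitude by its Gaussian formal error, refit the
    amplitude, and summarize the resulting sample with the median and the
    robust 1.5*MAD scatter (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    amps = np.array([fit_amplitude(mags + rng.normal(0.0, errs))
                     for _ in range(n_trials)])
    med = np.median(amps)
    scatter = 1.5 * np.median(np.abs(amps - med))
    return med, scatter

# stand-in amplitude estimator: plain rms about the mean
fit_amplitude = lambda m: float(np.std(m))
rng = np.random.default_rng(5)
mags = 19.5 + rng.normal(0.0, 0.2, 30)         # toy light curve, rms ~0.2 mag
errs = np.full(30, 0.05)                       # formal errors
med, scatter = mc_amplitude_scatter(fit_amplitude, mags, errs)
```

The median amplitude stays close to the unperturbed estimate, while the scatter quantifies how strongly the formal errors propagate into it.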

Table 1. Catalog of ICRF3 Variability Parameters

IVS ObjID | N  | Scale | MAD Resid | Band | cv     | av    | Median | 3×MAD | S/N
          |    |       | (mag)     |      | (mag)  | (mag) | (mag)  | (mag) |
(1)       | (2)| (3)   | (4)       | (5)  | (6)    | (7)   | (8)    | (9)   | (10)
0000-160  | 49 | 0.98  | 0.067     | r    | 21.215 | 0.104 | 21.248 | 0.169 | 0.8
0000-160  | 49 | 0.98  | 0.067     | i    | 20.549 | 0.243 | 20.552 | 0.345 | 2.8
0000-160  | 49 | 0.98  | 0.067     | z    | 20.348 | 0.193 | 20.348 | 0.312 | 1.6
0000-160  | 49 | 0.98  | 0.067     | y    | 19.779 | 0.326 | 19.779 | 0.526 | 1.8
0000-197  | 54 | 0.99  | 0.037     | g    | 20.006 | 1.121 | 19.985 | 1.256 | 32.6
0000-197  | 54 | 0.99  | 0.037     | r    | 19.782 | 0.472 | 19.727 | 0.633 | 13.3
0000-197  | 54 | 0.99  | 0.037     | i    | 19.469 | 0.421 | 19.484 | 0.394 | 9.6
0000-197  | 54 | 0.99  | 0.037     | z    | 19.070 | 0.599 | 19.067 | 0.660 | 13.0
0000-197  | 54 | 0.99  | 0.037     | y    | 18.876 | 0.449 | 18.876 | 0.475 | 4.6
0000-199  | 63 | 0.91  | 0.018     | g    | 19.168 | 0.052 | 19.187 | 0.110 | 2.3
0000-199  | 63 | 0.91  | 0.018     | r    | 19.149 | 0.029 | 19.142 | 0.063 | 1.2
Note. Column description: (1) IVS object name; (2) total number of observations used in all five filters; (3) scale Γ; (4) post-fit residual, MAD; (5) PS1 band; (6) FPCA-fitted mean magnitude; (7) FPCA-fitted amplitude of variability; (8) observed median magnitude; (9) observed dispersion, MAD times 3; and (10) signal-to-noise ratio.

Only a portion of this table is shown here to demonstrate its form and content. A machine-readable version of the full table is available.


AGN light-curve methods based on random process models also suffer from measurement errors in input data with large gaps and irregular cadence. These techniques are focused on the PSD parameters rather than the rms amplitude, because the former are believed to be related to the underlying physical processes in the accretion disks. In the elaborate CARMA(p,q) method of posterior random process fitting, for example, several free parameters are directly related to the autocorrelation function and the relative strengths of the Lorentzian PSD components (Kelly et al. 2014). The post-fit standardized residuals are used to select the degrees of the model, striving to achieve a Gaussian-looking distribution. This is suitable for better sampled and monochromatic light curves, but may not work for the Pan-STARRS data. More flexible and generalized CARMA-based models have been proposed, using Green's functions to infer the random process differential equation (Kasliwal et al. 2017) and the physical factors behind it, but they also require dense and uniform observational samples.

6. Results

In this study, we only processed the 2714 objects with at least two observations in each filter. The remaining 433 objects were either too faint or too poorly observed. In addition, a significant number of observations are faint and therefore have large errors. To identify these, we calculated the signal-to-noise ratio (S/N) as the FPCA-fitted amplitude of variability divided by the median error, and we find that 25% of all observations have S/N < 3. We decided not to remove these observations and thereby reduce the sample, as we found that the basic results do not change; instead, we provide the S/N for each observation.

The main product of the FPCA analysis is a table with the following determined parameters for each object and each filter: the nominal magnitude m(f), the variability amplitude a(f), the scale parameter Γ (see Equations (8) and (10)), as well as the median observed magnitude, the observed median absolute deviation (MAD) and the S/N as described above. These data are collected in Table 1, which is also available online in full form.

Figure 2 shows the histogram of the scale parameter Γ, one value per object. There is a sharp and well-defined core with a modal value close to unity, as expected, but a considerable fraction of objects deviate from this value, especially on the negative side. The tails of this distribution indicate that not all of the ICRF sources obey the main assumption of our model, namely, that the light curves in different filters differ only by constant offsets and constant multipliers (see Equation (8)). Indeed, the example shown in Figure 1 illustrates an obvious departure from this assumption, with the g − i color changing from approximately 0 at the beginning of the cadence to about +0.6 mag. Such perturbations result in poor or absent convergence of the estimated scale value. The comparability of the amplitudes is also degraded, as the model is not capable of capturing large color variations.

Figure 2. Distribution of derived scale parameters Γ.


The total number of objects with ∣Γ − 1∣ > 0.5 is 63. Some of these objects are simply too faint, such as IVS B2143−236, which was not detected in g and y and only barely detected in r. Some are double or lensed sources, e.g., IVS B2057+235, which has a reddish companion at 1.3 arcsec separation. A scale parameter deviating strongly from unity is a warning sign for problem objects that may not be suitable as radio-optical reference frame sources. They appear to include more than one optically luminous component with drastically different light curves, mostly at the blue end of the range.

Figure 3 shows the histograms of the variability amplitudes a(f) (in the Pan-STARRS bands gPS1, rPS1, iPS1, zPS1, and yPS1). The distribution peaks sharply at approximately 0.1 mag, with a slower decline extending into a shallow tail up to 2 mag and beyond. We quantify the distributions in three different ways. First, we determine the median of each distribution, which signifies the "typical" amplitude. We also fit a Rayleigh distribution with a free scale parameter ν to each of the five histograms. Finally, we determine the most frequent amplitude, which we call the commonest, at a resolution of 0.01 mag. The results are presented in Table 2. The dependence of variability amplitude on wavelength is convex, with the largest values observed for the reddest bands z and y, and the smallest for the mid-range bands r and i. The peculiar shape of the amplitude distributions is reflected in the commonest values being much smaller than the medians, which are, in their turn, systematically smaller than the scales of the Rayleigh fits. All quasars are intrinsically variable sources; however, the typical amplitudes are rather small. The greater degree of variability at the reddest wavelengths is not what one would expect from the intuitive assumption that the contribution of the host galaxy, which is constant, is greater in the red bands. The method of cumulants that we used to estimate the distribution scale is sensitive to significant departures from the assumed Rayleigh PDF, especially in the extent of the tail. But the more robust median estimates confirm that the ICRF objects tend to vary more in the reddest bands z and y. This result is also supported by comparing the amplitudes for the same object: 57% of the sample have a(y) > a(g), and 53% have a(z) > a(r), while a(i) is statistically the smallest. On the other hand, Pan-STARRS observations are systematically noisier in the reddest y band, and a more careful error budget analysis is needed to confirm this unexpected outcome.
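The three statistics can be sketched as follows. This is an illustrative implementation only: the Rayleigh scale here is the simple maximum-likelihood estimate, not the cumulant-based estimate used in the paper, and the function name and simulated amplitudes are hypothetical.

```python
import numpy as np

def amplitude_stats(a, bin_width=0.01):
    """Summary statistics of a variability-amplitude sample (sketch):
    the median, a Rayleigh scale (here the ML estimate, not the paper's
    cumulant method), and the 'commonest' value at 0.01 mag resolution."""
    a = np.asarray(a, dtype=float)
    median = np.median(a)
    # Rayleigh ML scale: nu^2 = mean(a^2) / 2
    nu_ml = np.sqrt(np.mean(a**2) / 2.0)
    # Commonest value: center of the most populated 0.01 mag histogram bin
    bins = np.arange(0.0, a.max() + bin_width, bin_width)
    counts, edges = np.histogram(a, bins=bins)
    commonest = edges[np.argmax(counts)] + bin_width / 2.0
    return median, nu_ml, commonest

# Simulated sample: for a true Rayleigh distribution the three statistics
# are ordered commonest (mode) < median < implied tail-sensitive scale.
rng = np.random.default_rng(1)
amps = rng.rayleigh(scale=0.3, size=5000)
med, nu, mode = amplitude_stats(amps)
```

For a pure Rayleigh sample the median exceeds the mode by a factor of about 1.18, whereas in Table 2 the medians exceed the commonest values by factors of 2–3, quantifying the departure of the observed distributions from the Rayleigh shape.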


Figure 3. Histogram of variability amplitudes a(f) of ICRF3 quasars in all five bands of Pan-STARRS.


Table 2. Variability Amplitude Distribution Parameters for Pan-STARRS Bands: The Median Value, the Scale ν of the Best-fitting Rayleigh Distribution, and the Commonest Value

Band    Median   Rayleigh ν   Commonest
        (mag)    (mag)        (mag)
gPS1    0.220    0.302        0.13
rPS1    0.208    0.284        0.09
iPS1    0.209    0.308        0.07
zPS1    0.217    0.319        0.07
yPS1    0.238    0.345        0.11


The intersection of our ICRF3–PS1 sample with the NED catalog includes 2862 objects. Counting objects with nonzero a(f) amplitudes (signifying valid determinations) and with observed redshifts z, the numbers are 1546, 1694, 1732, 1720, and 1627 for the g, r, i, z, and y bands, respectively. The redshifts are of heterogeneous origin, and here we use them only for large-number statistics. Figure 4 shows the binned median variability amplitude a(f) as a function of redshift. Quasars with redshifts around 0.5 are the most variable sources, while at redshifts greater than 0.7–1.0 a dramatic drop in amplitude is observed. This result is robust, as the same pattern, with small variations, is present in all PS1 bands. The median amplitude of quasars with redshifts around 3.0, for example, is only one-third of the peak value in the i band. Furthermore, the same characteristic behavior is seen in higher quantiles of the binned sample, e.g., the 0.75 quantile. As a tentative explanation of the decline, we offer the following consideration. If the power spectral density of the flux variation follows a monotonically declining power law, most of the variation should come from the lowest-frequency part of the range probed by our method, corresponding to a timescale of about 4 yr. The time dilation effect shifts the window of probed frequencies to higher values in the rest frame of the emitter by a factor of (1 + z). Thus, for a high-redshift quasar, we are measuring intrinsically higher-frequency parts of the spectral density distribution.
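The binning of Figure 4 and the time-dilation argument can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function names, bin edges, and toy data are hypothetical.

```python
import numpy as np

def binned_median(z, amp, edges):
    """Median variability amplitude in redshift bins (sketch of the
    Figure 4 construction; inputs are illustrative)."""
    idx = np.digitize(z, edges) - 1
    return np.array([np.median(amp[idx == k]) if np.any(idx == k) else np.nan
                     for k in range(len(edges) - 1)])

def rest_frame_timescale(t_obs_yr, z):
    """Time dilation: an observed timescale t corresponds to t / (1 + z)
    in the rest frame of the emitter."""
    return t_obs_yr / (1.0 + z)

# Toy sample showing the declining trend of amplitude with redshift.
z = np.array([0.2, 0.4, 1.5, 2.5, 3.5])
amp = np.array([0.30, 0.50, 0.20, 0.15, 0.10])
medians = binned_median(z, amp, edges=np.array([0.0, 1.0, 2.0, 3.0, 4.0]))

# The 4 yr observed window probes a 1 yr rest-frame timescale at z = 3.
print(rest_frame_timescale(4.0, 3.0))  # 1.0
```

The last line illustrates why, under a red-noise power spectrum, the measured amplitudes of the most distant quasars sample intrinsically less variable, higher-frequency parts of the spectral density.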


Figure 4. Binned median values of variability amplitude a(f) as a function of redshift z for the five Pan-STARRS bands.


The homogeneous method of average magnitude estimation allows us to investigate the dependence of observed colors on redshift. Figure 5 shows the entire collection of average colors g−r, r−z, i−z, and i−y for those ICRF3 counterparts that have measured redshifts in the NED. We observe complex, wavy patterns and distinct correlations, albeit with significant scatter. Similar patterns of observed color versus redshift have been seen in other studies (Richards et al. 2001; Wilhite et al. 2013). This is the direct result of bright emission lines and other spectral features being redshifted into and out of the photometric bands. In particular, the blue dip in the g−r relation at z ≃ 0.5 is caused by the λ3000 bump residing mostly in the g band, while the considerable scatter above the median betrays the presence of a significant fraction of intrinsically red objects. The reddening bump at z ≃ 1.2 is due to the superposition of the Mg II line and the λ3000 bump in the r band, etc. We note the close similarity of our results for g−r and i−z to the corresponding relations in Richards et al. (2001, their Figure 5), which stems from the closeness of the Pan-STARRS bands to those of the SDSS and also confirms the high quality of Pan-STARRS photometry and the consistency of our method. We also achieve a better representation of quasars with z > 2.2 and extend the coverage to the reddest y band.


Figure 5. Nominal colors of ICRF3 quasars vs. redshift.


7. Discussion

Our main results for a sample of radio-loud ICRF3 quasars spanning a large range of redshifts indicate that the statistical properties of these objects are similar to those of previously studied sets of spectroscopically selected objects, most notably from the SDSS. Previous studies utilizing the Pan-STARRS 3π survey mostly focused on refining QSO detection techniques and estimating the PSD slope of large samples (e.g., Morganson et al. 2014), while our study targets specific radio-loud ICRF3 sources. A typical variability amplitude for these radio-loud AGNs is modest, about 0.1 mag. The distribution of amplitudes, however, shows a powerful tail stretching toward 2 mag and beyond, indicating the presence of blazars and other highly variable types. From the photometric point of view, the ICRF3 sample is a complex mixture of various types, which leaves open the possibility of relating the photometric deviants to the observed astrometric position offsets.

Our information about the relative variability across the Pan-STARRS bands, presented in Table 2 with three different statistics, appears somewhat divergent from the commonly accepted "brighter-bluer" phenomenon found in SDSS photometry (e.g., Schmidt et al. 2012), in that we find a convex relation with increased variability at both ends of the wavelength range. This result may still be genuine, because our study extends consistently to the reddest z and y bands, which were barely probed before. However, the impact of random observational errors is also greater in the reddest bands, and the increase of variability toward the near-infrared needs to be verified with better-cadenced or more precise measurements.

The strong and confidently detected decline of variability with redshift (Figure 4) is perhaps the most surprising result. The likeliest explanation is the well-known time dilation effect in combination with a red-noise distribution of the variability power spectrum in the rest frame. Indeed, with a limited range of time frequencies and the regularization weight function in Equation (11), our analysis is most sensitive to variations with periods of around 1–4 yr in the observer's frame, which correspond to 0.25–1 yr in the rest frame for the most distant quasars in the sample. Their intrinsic variability on these shorter timescales is lower.

While the complicated dependencies of colors on redshift strongly confirm previous results and their interpretation (various spectral lines entering and leaving the relevant bands), we extend this knowledge to the reddest band, bordering the near-infrared. Invariably, the empirical relations look sharper at higher redshifts (Figure 5), and the significant one-sided scatter at low z confirms the admixture of intrinsically red sources in the ICRF3 sample. This is another possible correlation with the radio-optical position offsets.
