The Global Asteroseismology Project Proof of Concept: Asteroseismology of Massive Stars with Continuous Ground-based Observations

Massive (≳ 8 M ⊙) stars are the progenitors of many astrophysical systems, yet key aspects of their structure and evolution are poorly understood. Asteroseismology has the potential to solve these open puzzles; however, sampling both the short-period pulsations and long-period beat patterns of massive stars poses many observational challenges. Ground-based single-site observations require years or decades to discern the main oscillation modes. Multisite campaigns were able to shorten this time span, but have not been able to scale up to population studies on samples of objects. Space-based observations can achieve both continuous sampling and large numbers of objects; however, most lack the multiband data that are often necessary for mode identification and for removing model degeneracies. Here, we develop and test a new ground-based observational strategy for discerning and identifying the main oscillation modes of a massive star in a few months, in a way that can be scaled to large samples. We do so using the Las Cumbres Observatory—a unique facility consisting of robotic, homogeneous telescopes operating as a global network, overcoming most of the challenges of previous multisite efforts, but presenting new challenges which we tailor our strategy to address. This work serves as the proof of concept for the Global Asteroseismology Project, which aims to move massive star asteroseismology from single objects to bulk studies, unleashing its full potential in constraining stellar structure and evolution models. This work also demonstrates the ability of the Las Cumbres Observatory to perform multisite continuous observations for various science goals.


1. INTRODUCTION
Massive (≳ 8 M ⊙) stars, although rare, play a key role in many astrophysical processes. They have relatively short and energetic lives that often end in a supernova explosion, driving chemical and mechanical feedback to their host galaxy while leaving behind a neutron star or black hole. However, key aspects of the structure and evolution of massive stars, such as the degree and profile of their internal chemical mixing, their metallicity, and their rotation profile, are poorly constrained observationally.
One way forward is through asteroseismology, the study of the oscillations of stars and their connection to the internal stellar structure. Pulsating early-type OB stars can be divided into several categories (e.g., Bowman 2020; Aerts 2021; Kurtz 2022). The two main classes with oscillations primarily driven by cyclic variation in opacity (the so-called κ-mechanism) are slowly pulsating B-type (SPB) stars and β Cephei (hereafter, β Cep) type stars. SPB stars are intermediate mass stars, with a mass range of about 3 to 8 M ⊙, while β Cep stars, on which we focus here, have masses in the range of about 8 to 25 M ⊙.

Corresponding author: Noi Shitrit (noyshitrit@tauex.tau.ac.il)
β Cep stars oscillate in low-order p (pressure) and g (gravity) modes with periods between approximately 2 and 8 hours (Stankov & Handler 2005; Aerts et al. 2010; Bowman 2020). Some β Cep stars were found to have high-order g-modes with periods longer than a day (Handler et al. 2006, 2009). In general, they produce both short pulsation periods (of order hours to days) and long-term beats (of order months) arising from adjacent oscillation modes. Sampling these different time scales presents a significant observational challenge. For single-site campaigns, limited to night-time data collection, the observing time span required to discern the oscillation frequencies can be very long (e.g., 21 years for the β Cep star HD129929; Aerts et al. 2003).
Space-based telescopes naturally allow continuous observations, which reduce the required time span substantially. Missions such as MOST (Microvariability and Oscillations of Stars; Walker et al. 2003), CoRoT (Convection, Rotation, and Planetary Transits; Auvergne et al. 2009), BRITE (BRIght Target Explorer; Weiss et al. 2014), Kepler (Koch et al. 2010), K2 (Howell et al. 2014), and TESS (Transiting Exoplanet Survey Satellite; Ricker et al. 2014) produced high-cadence, long time-base, precise light curves which can be used for asteroseismology. Indeed, Pedersen et al. (2021), for example, used Kepler light curves spanning ∼1500 days of 26 SPB stars to study their internal mixing profiles. Another example is Handler et al. (2019), who identified at least 34 excited modes for the β Cep star PHL 346 from its TESS light curve.
Unfortunately, space telescopes are expensive and are often designed to address particular requirements of a specific science mission. For example, most missions so far did not provide color information, which is important for mode identification (i.e., assigning each mode its harmonic degree l and azimuthal order m; see e.g. Aerts et al. 2010 and Szewczuk & Daszyńska-Daszkiewicz 2015). This identification increases the extent of astrophysical information gained from the asteroseismic analysis, as it can lift model degeneracies.
For slowly rotating stars (rotating at ≲15% of their critical rotation velocity; Bowman 2020), mode identification can be performed using rotational splitting of the oscillation modes (e.g. Aerts et al. 2010). However, β Cep stars display a variety of rotation rates, with some having v sin(i) as high as a few hundred km s −1 (Handler et al. 2012), where i is the inclination angle of the rotation axis relative to the viewing angle. For such stars, mode identification can often only be done through multi-band measurements of their oscillations. Multi-band observations probe changes in both the temperature and the geometrical cross-section of pulsating stars. These changes are correlated with the spherical harmonic of the oscillation mode. Therefore, comparing the ratios between amplitudes and the differences between phases in different bands to the ones predicted by models can be used to identify the modes, even for fast rotators (e.g. Daszyńska-Daszkiewicz et al. 2002, 2015).
Multi-site ground-based observations can overcome both the day and night interruptions that affect single-site ground-based campaigns (see Aerts 2021 and references therein), and can provide color information, not usually available in space-based campaigns, for mode identification. Indeed, using the Whole Earth Telescope (WET) project (Nather et al. 1990), Winget et al. (1991) obtained over 264 hours of continuous observations of a pulsating pre-white-dwarf star, identifying over 100 pulsation modes, determining its mass, rotation period, and magnetic field, and constraining its compositional stratification. WET was a coordinated effort among existing telescopes worldwide to perform a few continuous observation campaigns per year of order 10-day durations. Handler et al. (2006) observed the β Cep star 12 (DD) Lacertae ("12 Lac") in a multi-site ground-based campaign spanning 750 hours of observations over 190 nights; they detected 23 sinusoidal signals and performed mode identification from their multi-band observations (which allowed them to derive the spherical degree l of the five strongest modes and constraints on l for the weaker ones). The number of modes they detected was larger than predicted by standard models, which might be due to a gap in our understanding of the chemical structure of 12 Lac.
These efforts demonstrate the potential of continuous ground-based observations. However, they required a complicated and challenging observational setup involving coordination between several groups of astronomers, observatories, and systems. In addition, differences between telescopes add challenges and uncertainties to the data analysis. Such methods, therefore, end up being difficult to scale up for studying large numbers of objects. While deep and impactful insights can be gained from single-object studies, the full potential of asteroseismology in constraining stellar structure and evolution models lies in sample studies (e.g., Pedersen et al. 2021).
Here, we propose and test a new method with a new facility that we claim can be used to perform population sample studies of massive stars, and can be applied to a variety of other science cases. We propose to use the Las Cumbres Observatory (Brown et al. 2013), which differs from previously used global networks in that it is fully robotic and homogeneous (i.e., sets of identical telescopes, instruments, and filters are installed around the globe). While this facility has been operational for almost 10 years, it has yet to be used extensively for continuous observations in general, and for massive star asteroseismology in particular. A main challenge posed by Las Cumbres is its multi-purpose nature. Since this facility cannot be dedicated to a single science program for extended periods of time, we propose to move from the weeks-scale continuous observation runs used previously to days-scale continuous observation runs executed roughly once a week. In this work we show for the first time that such observations are technically possible with the Las Cumbres Observatory, that it provides the photometric precision necessary for asteroseismic analysis of massive stars, that all of this can be done in multiple bands simultaneously, and that such an observing strategy can reproduce single-site decade-long campaigns in a few weeks to months.
The Las Cumbres Observatory consists of ten 0.4-meter (0.4 m) telescopes, fifteen 1-meter (1 m) telescopes, and two 2-meter (2 m) telescopes (all telescopes are equipped with standard imagers and multiple filters; Table 1). The telescopes of each size are identical, they are coordinated robotically, and they are distributed around the globe in the United States (Texas, Hawaii), Chile, Spain (Tenerife), South Africa, Israel, and Australia.
From our experience with the Las Cumbres Observatory, its scheduling constraints due to multiple competing programs, and weather constraints at its different sites, we posit that 48-hour continuous observing blocks can reasonably be executed regularly. Thus, in Section 3 we propose a new observational strategy, using 48-hour blocks of observations over multiple weeks, and show, through simulations, that it is effective for mapping massive-star oscillations. Then, in Section 4, we collect actual Las Cumbres Observatory data from different sites and telescope classes for two massive stars in five observational campaigns, to demonstrate that the Las Cumbres Observatory can indeed perform 48-hour continuous runs in practice. Next, in Section 5, we construct an automatic analysis pipeline to handle the large number of images that such observations produce, exploring the best defocusing amount for the various Las Cumbres Observatory telescope classes, and the best image reduction parameters needed to achieve milli-magnitude photometric accuracy (as required for asteroseismic studies). Finally, in Section 6 we test the quality of our observations and photometric extraction. We conclude and discuss our results in Section 7.
This paper is the first in a series describing the Global Asteroseismology Project (GAP) and its expected discoveries. The GAP was granted 9000 observing hours over three years (starting on 2023 August 1) on the 0.4 m telescopes of the Las Cumbres Observatory network, on the basis of the proof of concept presented here. The time was granted as a key project, which guarantees its long-term status, and with a high science priority to mitigate telescope oversubscription risks. Through GAP, we plan to collect multi-band high-cadence data of ∼ 20 β Cep stars to complement single-band TESS data. This will allow us to perform complete seismological studies, including mode identification, of a sample of massive stars, almost tripling the current sample. The second paper in the series will detail our target selection process (Shitrit et al., in preparation).

2. TARGETS
Since our goal is to test a new observational strategy, we choose two β Cep stars for which full asteroseismic analyses were already conducted (through both space-based and long-term ground-based observations) to demonstrate our proposed approach: HD129929 (Aerts et al. 2003) and HD180642 (Degroote et al. 2009a).

2.1. HD129929
HD129929 is a B3V β Cep star with a V-band magnitude of 8.1 and an estimated mass of 10 M ⊙ (Hill et al. 1974; Aerts et al. 2003). The first to detect its variability was Rufener (1981), with a detailed study of the variability later performed by Waelkens & Rufener (1983), identifying three main frequency modes. The more recent study by Aerts et al. (2003), who observed HD129929 over a period of 21.2 years with the Geneva multi-color photometer, detected a beating phenomenon with at least six different frequencies, including two frequency multiplets with an average spacing of ∼ 0.0121 c day −1 (cycles per day).
HD129929 is at right ascension 221.6073 deg and declination −37.2222 deg, and is therefore mainly visible from the southern hemisphere. We observed it from the Las Cumbres Observatory sites in Chile, South Africa, and Australia (there is a ∼ 25-minute observational gap between Chile and Australia for this position on the sky).
During the first two HD129929 observing campaigns, no 0.4 m telescopes were yet available on the network, and the 1 m telescopes were being transitioned from SBIG to Sinistro cameras (Table 1), with some telescopes still using SBIGs and some on Sinistros. In the third campaign, 0.4 m telescopes were available at all southern sites, all of which were equipped with the same camera type (SBIG).

2.2. HD180642
HD180642 is a B1.5 II-III β Cep star with a V-band magnitude of 8.29 and an estimated mass in the range 11.4 M ⊙ to 11.8 M ⊙ (Degroote et al. 2009b). Degroote et al. (2009b) combined observations of HD180642 from the CoRoT and Hipparcos space telescopes, and additional ground-based facilities across 18 years. They found 11 independent frequencies (and 22 three-resonance combinations), including the main frequency already detected in a previous study (5.4871 c day −1; Aerts 2000). HD180642 is at right ascension 289.3117 deg and declination 1.0594 deg, making it visible from all Las Cumbres Observatory sites. However, due to a temporary focus mechanism problem with the 0.4 m telescope in South Africa, and scheduling constraints, we observed HD180642 with the 0.4 m telescopes in Chile, Australia, Tenerife, and Hawaii.

3. SIMULATIONS
We create an artificial light curve using:

F(t) = Σ_{i=1}^{N} a_i sin[2π f_i (t − t_0) + φ_i],    (1)

where F(t) is the flux at time t, N is the number of excited modes, a_i, f_i, and φ_i are their amplitudes, frequencies, and phases (respectively), and t_0 is a global time shift. We use the frequencies, amplitudes, and phases as determined by Aerts et al. (2004) for HD129929 and the first 10 modes detected by Degroote et al. (2009b) for HD180642 in the V-band. For now, we set the time shift to t_0 = 0 (we will change this later for the Monte Carlo simulations in Section 3.2, and for comparison simulations in Section 6.2).
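To make Equation (1) concrete, an artificial light curve of this form can be generated in a few lines of NumPy. The three modes below are hypothetical placeholder values (loosely in the β Cep frequency range), not the published mode lists of either target, and the 3 PPT Gaussian noise matches the value quoted in Section 3.1:

```python
import numpy as np

def artificial_light_curve(t, modes, t0=0.0):
    """Evaluate Equation (1): a sum of sinusoids.

    t     : array of observation times (days)
    modes : list of (amplitude, frequency [c/day], phase [rad]) tuples
    t0    : global time shift (days)
    """
    t = np.asarray(t, dtype=float)
    flux = np.zeros_like(t)
    for a, f, phi in modes:
        flux += a * np.sin(2.0 * np.pi * f * (t - t0) + phi)
    return flux

# Hypothetical 3-mode example in arbitrary flux units (NOT the published values)
modes = [(15.0, 6.461, 0.3), (6.0, 6.978, 1.1), (3.5, 6.449, 2.0)]
t = np.arange(0.0, 2.0, 60.0 / 86400.0)   # one 48-hour run at 60 s cadence
rng = np.random.default_rng(0)
flux = artificial_light_curve(t, modes) + rng.normal(0.0, 3.0, t.size)  # 3 PPT noise
```

The full simulations then stitch several such runs together, with the randomized run offsets and weather gaps described below.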

3.1. Single Simulations
As stated above, we simulate observation campaigns of weekly "runs" of 48 hours each. We denote the total time span of the observations X, the number of hours observed continuously per run Y, and the sampling cadence (i.e., the time between consecutive images taken within each run) Z. The simulations include a randomized ±2-day offset at the start of each run (which is expected due to weather and scheduling constraints and can help remove aliases) and a randomly timed weather gap of two hours in every 48-hour run. We add random Gaussian-distributed noise with a standard deviation of 3 parts per thousand (PPT) to the flux values in all of our simulations. This is a typical relative photometry uncertainty value for Las Cumbres 1 m telescopes.
We extract the oscillation frequencies of the simulated data using a prewhitening process (see Aerts et al. 2010 for full details). We continue the prewhitening process as long as: (1) the signal to noise ratio (SNR) of the amplitude of the strongest mode is >4 in the periodogram (following Breger et al. 1993), and (2) the ratio of the amplitude of the strongest mode to that from the previous step is >1/3. The first condition is a significance threshold, and the second condition is a stopping criterion introduced to avoid false modes which also cross the significance threshold. Examples of the prewhitening process for the different targets can be seen in Figures B1 and B2 in Appendix B.1.

The results of the simulations are presented in Figures 1 and 2, and in Table 2. We find that 17 weekly 48-hour, 60-second cadence (i.e., X = 17 weeks, Y = 48 hours, and Z = 60 seconds) runs can recover the full frequency spectrum of HD129929, as observed by Aerts et al. (2003) in 21 years, here in under four months, to an accuracy of ∼ 10 −4 c day −1 in frequency and ∼ 10 −1 mmag in amplitude. For HD180642 (Figure 2), only 4 weeks (i.e., X = 4 weeks) of 48-hour observation blocks are required to recover the ten highest amplitude modes from the 18.8 year-long data set of Degroote et al. (2009b) to within ∼ 10 −3 c day −1 in frequency and ∼ 10 −1 PPT in amplitude.
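The prewhitening loop with the two stopping criteria described above can be sketched as follows. This is a simplified stand-in for the full procedure (see Aerts et al. 2010), using a Deeming-style discrete Fourier amplitude spectrum and the median of the spectrum as a crude noise estimate for the SNR test; the two-mode demo signal is synthetic:

```python
import numpy as np

def amplitude_spectrum(t, y, freqs):
    """Deeming-style amplitude spectrum for unevenly sampled data:
    A(f) = (2/N) |sum_j y_j exp(-2*pi*i*f*t_j)|."""
    return 2.0 * np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ y) / t.size

def prewhiten(t, y, freqs, snr_min=4.0, ratio_min=1.0 / 3.0, max_modes=20):
    """Iteratively extract the strongest sinusoid, stopping when the peak's
    SNR drops below snr_min or the successive-amplitude ratio drops below
    ratio_min (the two criteria of Section 3.1). A sketch, not the
    authors' pipeline."""
    resid = np.array(y, dtype=float)
    found, prev_amp = [], None
    for _ in range(max_modes):
        amp = amplitude_spectrum(t, resid, freqs)
        k = int(np.argmax(amp))
        if amp[k] / np.median(amp) <= snr_min:   # crude SNR estimate
            break
        if prev_amp is not None and amp[k] / prev_amp <= ratio_min:
            break
        f = freqs[k]  # least-squares fit and removal of this mode
        X = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
        resid -= X @ coef
        found.append((f, float(np.hypot(*coef))))
        prev_amp = amp[k]
    return found, resid

# Demo on synthetic data: two modes (amplitudes 10 and 5) plus unit noise
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 20.0, 1000))          # days
y = (10 * np.sin(2 * np.pi * 6.5 * t + 0.4)
     + 5 * np.sin(2 * np.pi * 7.3 * t + 1.2)
     + rng.normal(0.0, 1.0, t.size))
freqs = np.arange(0.05, 10.0, 0.005)               # c/day search grid
found, resid = prewhiten(t, y, freqs)
```

On this demo the loop recovers both injected modes and then stops, since the third candidate fails the amplitude-ratio criterion.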
The difference between the X values required for each target is due mainly to differences in the spacing between their oscillation frequencies. HD129929 has an average frequency spacing of ∼ 0.012 c day −1 for all detected frequencies (Aerts et al. 2003), while HD180642 has an average frequency spacing of ∼ 1.24 c day −1 for frequencies between 4.32 c day −1 and 25.9 c day −1 (Aerts et al. 2011). These two stars are roughly at opposite extremes of the GAP sample in terms of their frequency spacing (Shitrit et al. in preparation), and thus represent the boundaries of the X values that will be required for the various stars in the sample.

3.2. Monte Carlo Simulations
There could be combinations of X, Y, and Z values other than the ones presented in Section 3.1 that produce equally good results. In addition, the random timing of the 2-hour weather gap, the randomized ±2-day offset at the start of each run, and the timing of the start of the campaign (denoted by t_0 in Equation 1) can also influence the results. We do not conduct a full phase space study here, but we run four sets of 1000 simulations for HD129929 to check the effect of changing some of these parameters. In each simulation we choose a different realization for each of the randomized parameters, and repeat each set of 1000 simulations with a different value for Y (the length of the continuous observation block): 12, 24, 36, and 48 hours. We then perform the prewhitening process for each of the simulated light curves to extract its frequencies and amplitudes. A summary of the results is presented in Figure 3 and in Table B2 in Appendix B.2. Figure 3 shows the differences between the recovered and input frequencies and amplitudes for the first, third, and sixth modes (ordered by decreasing amplitude). The recovered frequencies and amplitudes are consistent with the input ones within the scatter of the simulations. As expected, the stronger modes are recovered more accurately and more consistently compared to the weaker ones. The quality of the results declines with decreasing observing run length (Y value), also as expected.
We conclude that the results presented in Section 3.1 are robust, and that observation runs shorter than 48 hours degrade the accuracy of the recovered frequencies and amplitudes for HD129929. However, each star is different and might require different observation run lengths, depending on the desired mode recovery accuracy. Simulations such as these can be used to estimate the uncertainty in oscillation mode recovery for various observing run lengths and expected oscillation spectra.

4. OBSERVATIONS
Having found that 48-hour runs can, in principle, be used as building blocks for an observation sequence capable of reproducing the oscillation spectrum of our example targets, we turn to test this idea in practice. Namely, we test whether 48-hour runs can indeed be scheduled and executed by the Las Cumbres network, and whether the resulting images provide the necessary photometric accuracy to discern the oscillations. To do this, we conduct five observational campaigns (in total) of our two targets.

Note (Table 2)-The "in" subscript stands for "input", and the "rec" subscript stands for "recovered". The recovered values are for one of the realizations performed for each target, shown in Figure 1 for HD129929 and in Figure 2 for HD180642. The SNR is calculated as in Breger et al. (1993), for HD129929 between 6 and 7 c day −1 and for HD180642 between 0 and 20 c day −1.
Four of these campaigns, which we number 0-3, are for HD129929. Campaign 0 was a short ∼13-hour test across two sites on the 1 m telescopes. Campaign 1 was the full ∼48-hour test across all three southern sites, also on the 1 m telescopes. Campaign 2 was a ∼24-hour test on the 0.4 m telescopes. Campaign 3 was a multi-band observation test on the 1 m telescopes. As mentioned, multi-band observations are important for mode identification. We wish to test this setup because switching between filters resets the telescope guiding and can thus affect the photometric precision. We cycled between four filters: B, V, R, and I, with three consecutive images taken per filter. The final campaign, Campaign 4, was a ∼48-hour test for our second target, HD180642, using the 0.4 m telescopes.
In total, we test two classes of telescopes (0.4 m and 1 m), for two different targets, in single and multi-band campaigns. Table 3 and Figure 4 summarize our observational campaigns.
Our targets are bright and can saturate the detectors (depending on the exposure time, airmass, and observing conditions). Short exposures could solve the saturation issue, but are inefficient given the readout times of the imagers (Table 1), and would push fainter reference stars below the detection threshold. Instead, we choose to defocus the telescopes. Defocusing, however, creates non-trivial point-spread functions (PSFs), which presents some challenges in analyzing the data (see Section 5).
The amount of defocusing should be large enough to prevent saturation of the detector, but not so large as to distort the PSF and lose stellar signal to the background sky and noise. Before each campaign, we performed exposure tests to determine the appropriate defocusing amount for each telescope class and target. We used 10-second exposures per image and took 3 images per defocusing value. We then analyzed the images to find the one with the lowest defocusing amount and stellar counts still in the linear regime of the detector (≲ 55,000 counts). We found that HD129929 required different amounts of defocusing for each telescope class, while HD180642 did not require any defocusing (see Table 3).

Images were initially processed by Las Cumbres Observatory. The pipeline in use during campaigns 0, 1, and 3 was based on the Observatory Reduction and Acquisition Control Data Reduction (ORAC-DR; Cavanagh et al. 2008; Jenness & Economou 2015) framework, which performs standard image processing, including bad-pixel masking, bias and dark subtraction, flat field correction, astrometric solution, source catalog production (including source identification and photometric extraction), zeropoint determination, and per-object airmass and barycentric time correction computation. Since the 2016A semester, Las Cumbres updated the standard pipeline to "Beautiful Algorithms to Normalize Zillions of Astronomical Images" (BANZAI; McCully et al. 2018), which was used for campaigns 2 and 4. BANZAI performs the same operations as ORAC-DR, with the exception of the barycentric time correction, and includes background subtraction from the overscan region (Volgenau & Boroson 2016). In addition, it uses a different algorithm for the astrometric solution (Astrometry.net; Lang et al. 2010) and for source extraction (Source Extractor, hereafter SE; Bertin & Arnouts 1996), and is Python-based (whereas ORAC-DR is written in Perl).

5. PRODUCING THE LIGHT CURVES
High-cadence, long-term observations like ours result in thousands of images. Therefore, we construct a general automatic data reduction pipeline to produce relative-photometry light curves from a stack of defocused images.
As mentioned, defocusing the telescopes creates a non-trivial PSF. Instead of the "natural" Gaussian-like PSF expected from observing a point-like source, the defocused PSF is doughnut shaped, due to the shape of the telescope's primary mirror. Most of the standard programs for analyzing astronomical images assume a Gaussian PSF. Therefore, defocused images require a few adjustments to the default settings of these programs.

5.1. Cosmic Ray Removal
Cosmic rays are a significant contaminant for the astrometric solution (Section 5.2), and they also produce false positives when performing source detection and extraction (Section 5.3). We therefore remove cosmic rays from the images using the Astro-SCRAPPY package. For example, for HD129929, for campaigns 0, 1, and 2, without cosmic ray removal, only 76.22% of the images were astrometrically solved, while cosmic ray removal increased this percentage to 98.98%. We used the default Astro-SCRAPPY parameters, except for sigclip, which we set to 4.0, and objlim, which we set to 1000.0 (see the Astro-SCRAPPY documentation for a full description of the parameters).
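The idea behind sigma-based cosmic-ray rejection (cf. the sigclip parameter) can be illustrated with a toy detector. The function below is a deliberately simplified stand-in for the Laplacian-edge-detection algorithm that Astro-SCRAPPY actually implements: it flags pixels that stand out from a 5×5 median-filtered version of the image and replaces them with the local median:

```python
import numpy as np
from scipy.ndimage import median_filter

def flag_cosmic_rays(image, nsigma=4.0):
    """Flag sharp spikes relative to a 5x5 median-filtered image and
    replace them with the local median. A toy illustration only, NOT
    the L.A.Cosmic algorithm used by Astro-SCRAPPY."""
    smooth = median_filter(image, size=5)
    resid = image - smooth
    # robust standard deviation of the residual via the MAD
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    mask = resid > nsigma * sigma
    return mask, np.where(mask, smooth, image)

# Demo: flat sky plus one synthetic "cosmic ray" hit
rng = np.random.default_rng(2)
img = rng.normal(100.0, 3.0, (64, 64))
img[10, 10] += 500.0
crmask, cleaned = flag_cosmic_rays(img)
```

A real defocused frame also contains extended doughnut PSFs, which is why a robust, morphology-aware algorithm such as Astro-SCRAPPY's is used in practice rather than this simple filter.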

5.2. Astrometric Solution
We use a local installation of the Astrometry.net (Lang et al. 2010) engine, and run the solve-field command on each image. We use Astrometry.net's built-in downsampling to deal with the doughnut-shaped PSFs caused by the defocusing. When using the downsampling option, Astrometry.net bins the images by n×n pixels before performing the source detection (with n provided by the user). After finding the best solution, Astrometry.net scales the solution back to the original image. We find this process to be crucial for solving defocused images. It helps smooth the dark-centered regions of the sources, making source recognition by Astrometry.net more robust and efficient. In principle, downsampling is also useful for removing cosmic rays; however, in our case it was not enough for most images, hence our use of Astro-SCRAPPY (see Section 5.1). We used a binning value of n = 9 for HD129929. Since we do not perform any defocusing for HD180642, no downsampling was needed in that case.
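The effect of this binning on a doughnut PSF can be reproduced with simple n×n block averaging (whether Astrometry.net averages or uses another statistic internally is an assumption here; the point is that binning fills in the dark center, making the source look point-like again):

```python
import numpy as np

def downsample(image, n):
    """Bin an image by n x n pixels (block mean), cropping any trailing
    rows/columns that do not fill a complete bin. Illustrates the kind
    of binning applied before source detection."""
    ny, nx = (image.shape[0] // n) * n, (image.shape[1] // n) * n
    return image[:ny, :nx].reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))

# Demo: a doughnut-shaped source whose dark center fills in after binning
yy, xx = np.mgrid[:45, :45]
r = np.hypot(yy - 22, xx - 22)
doughnut = np.where((r > 4) & (r < 10), 100.0, 0.0)
binned = downsample(doughnut, 9)   # n = 9, as used for HD129929
```

After binning, the central bin of the binned image is no longer dark, and the total flux is conserved up to the block-mean normalization.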

5.3. Source Extraction
After obtaining the astrometric solution for the images, we perform aperture photometry on all sources in each image using SE. SE has a very detailed configuration file and a parameters file for specifying the output. We use a few different configuration files for the different campaigns (see Table A1 in Appendix A). Among the differences between the configuration files are the pixel scale of the different cameras, the convolution filter, and the aperture diameter.
The SE algorithm assumes a Gaussian PSF for source detection. Therefore, the doughnut-shaped PSF (in the defocused images of HD129929) is sometimes detected by SE as two different Gaussian PSFs, and as a result, as two different sources. To avoid this, we convolve the images with a top-hat filter (using a 5 × 5 kernel). This step smooths the double-peaked PSF and removes many of the fictitious double sources.
To confirm that the relative number of counts between sources (for performing relative photometry) is conserved by the top-hat convolution, we create artificial images with two isolated sources (representing a target star and a reference star) and process them with SE, once with the top-hat convolution and once without. We then compare the ratio of counts between the sources in each case. We repeat the test for four different ratios of source fluxes. We find that the top-hat convolution conserves the relative counts to within 2-4 parts per million across the various flux ratios tested (Table 4); such differences have no notable effect on our analysis.
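The flux-ratio conservation test can be reproduced with synthetic data. The positions and fluxes below are arbitrary, SciPy's uniform_filter stands in for SE's top-hat convolution, and the aperture sum is a bare-bones version of SE's aperture photometry (no sky term):

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Two isolated synthetic "stars" with a 3:1 flux ratio (hypothetical field)
yy, xx = np.mgrid[:100, :100]
img = 3000.0 * np.exp(-((yy - 25) ** 2 + (xx - 25) ** 2) / 18.0)
img += 1000.0 * np.exp(-((yy - 75) ** 2 + (xx - 75) ** 2) / 18.0)

smoothed = uniform_filter(img, size=5)   # normalized 5x5 top-hat convolution

def aperture_sum(image, cy, cx, r=12):
    """Total counts inside a circular aperture (no sky term, for brevity)."""
    return image[(yy - cy) ** 2 + (xx - cx) ** 2 <= r * r].sum()

ratio_raw = aperture_sum(img, 25, 25) / aperture_sum(img, 75, 75)
ratio_smooth = aperture_sum(smoothed, 25, 25) / aperture_sum(smoothed, 75, 75)
```

Because convolution is linear and both sources share the same PSF, the two ratios agree essentially to machine precision, consistent with the parts-per-million level found in the real test.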
For HD180642, which we observed with no defocusing, we perform a convolution with a Gaussian filter using a 7 × 7 kernel.
We choose the best aperture size to use for each observing site and camera type by visually inspecting a selection of random images from that site and camera type.
The result of this stage is a catalog of sources with positions and instrumental aperture fluxes (aperture integrated, sky subtracted, total counts) per image.

5.4. Reference Stars
Performing relative photometry requires identifying a set of reference stars in all images. Having several reference stars helps reduce statistical uncertainties (and also systematic ones, in case one or more of the chosen reference stars are variable). However, massive stars are typically the brightest stars in the field of view, and there is usually only one other star detected at a similar signal to noise that can serve as a reference star. This is the case here, and we show that one reference star is enough to achieve the photometric precision needed to reveal the oscillations of our targets.
Stars are associated across images by their celestial coordinates, provided by the astrometric solution (Section 5.2). However, due to the defocused PSFs, the astrometric solutions and the source extraction centroids are sometimes shifted between images. We thus associate stars across images by requiring their celestial separation between images to be smaller than a pre-defined threshold: 5.4″ for campaigns 0, 1, and 3, 2.34″ for Campaign 2, and 1″ for Campaign 4. These values are much smaller than the distance to neighboring sources, thus avoiding any source confusion.
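A minimal version of this coordinate-based association, using a flat-sky small-angle separation with the usual cos(dec) factor (sufficient at arcsecond scales), might look like the following; the catalog entries in the demo are hypothetical, built around HD129929's position:

```python
import numpy as np

def match_source(cat_ra, cat_dec, ra0, dec0, max_sep_arcsec):
    """Return the index of the catalog source within max_sep_arcsec of
    (ra0, dec0) in degrees, or None if no source is close enough."""
    dra = (np.asarray(cat_ra, float) - ra0) * np.cos(np.radians(dec0))
    ddec = np.asarray(cat_dec, float) - dec0
    sep = 3600.0 * np.hypot(dra, ddec)   # degrees -> arcseconds
    k = int(np.argmin(sep))
    return k if sep[k] <= max_sep_arcsec else None

# Demo with HD129929's position and two hypothetical catalog entries:
# one 2" away (a shifted centroid of the target) and one 60" away
ra0, dec0 = 221.6073, -37.2222
cat_ra = [ra0, ra0 + 60.0 / 3600.0 / np.cos(np.radians(dec0))]
cat_dec = [dec0 + 2.0 / 3600.0, dec0]
```

With the 5.4″ threshold the shifted target centroid is matched; with a 1″ threshold it is not, illustrating why the threshold must accommodate the centroid shifts caused by defocusing.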
For HD129929, the reference star used is CD-36 9599, which appeared in all of the images (except one, which we omit). For HD180642, two candidate reference stars appeared in all images. The brighter of the two, HD180662, is a multiple system (van den Bos 1963). We therefore choose the second brightest candidate, HD180533, as the reference star. The details of the reference stars are presented in Table 5. There is no indication of variability for these stars. We tested the constancy of each reference star using a comparison star (see Table 5). For this test, we chose the nights that displayed the most stable conditions (for HD129929, from Campaign 0 in Chile, and for HD180642, from Campaign 4 in Hawaii). We find that the reference star CD-36 9599 shows constant flux to within 2.53 PPT, while the reference star HD180533 shows constant flux to within 13.73 PPT on the nights tested. These levels are of order a few percent of the oscillations measured for our target stars (see below), and hence we consider the reference star luminosities to be constant.

5.5. Relative Photometry
The relative flux between our target and the reference star in each image is simply:

f_rel = f_target / f_ref,    (2)

with the uncertainty calculated as:

Δf_rel = f_rel √[(Δf_target / f_target)² + (Δf_ref / f_ref)²].    (3)

Here, f_rel is the relative flux, and f_target and f_ref are the aperture fluxes of the target star and of the reference star (respectively). The Δ values are the uncertainties of each flux measurement from SE, considering only photon and detector noise:

Δf = √[Σ_{i∈A} (σ_i² + p_i / g_i)],    (4)

where A is the set of pixels inside an aperture, σ_i is the standard deviation of the noise estimated from the local background by SE, p_i is the background-subtracted counts, and g_i is the effective detector gain, all per pixel i in the aperture (see the SE documentation for more details).
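In code, Equations (2)-(4) amount to a flux ratio, standard error propagation, and a per-pixel noise sum; the function names below are ours, not the pipeline's:

```python
import numpy as np

def relative_flux(f_target, df_target, f_ref, df_ref):
    """Equations (2)-(3): flux ratio and its propagated uncertainty."""
    f_rel = f_target / f_ref
    df_rel = f_rel * np.hypot(df_target / f_target, df_ref / f_ref)
    return f_rel, df_rel

def aperture_flux_error(sigma, counts, gain):
    """Equation (4): background + photon noise summed over the aperture
    pixels. sigma and counts are per-pixel arrays; gain is the effective
    detector gain."""
    sigma = np.asarray(sigma, float)
    counts = np.asarray(counts, float)
    return float(np.sqrt(np.sum(sigma ** 2 + counts / gain)))
```

For example, a target with 2×10⁵ ± 500 counts and a reference with 10⁵ ± 400 counts gives a relative flux of 2.0 with an uncertainty of about 0.0094 (i.e., ∼4.7 PPT fractional error).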

5.6. Removing False Measurements
As a first filtering step of bad images, we remove all images in which:

• The maximum counts of the target star are over 50,000 or the minimum counts of the reference star are below 1,000 (including the background), to avoid images in which the target is too bright (in the non-linear regime of the detector) or the reference star is too faint (and suffers from low signal to noise).
• The error of the flux ratio (Equation 3) is larger than 0.01 PPT.
• The target or reference star is detected by SE as two sources (as indicated by a deblending flag value of 2), due to the defocused PSF.
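Taken together, these first-stage cuts amount to a boolean keep-mask over per-image measurements. The argument names below are hypothetical, not the pipeline's actual column names:

```python
import numpy as np

def first_stage_keep(target_max, ref_min, df_rel_ppt, deblend_flag):
    """Boolean mask implementing the three first-stage cuts listed above
    (hypothetical column names)."""
    keep = np.asarray(target_max) <= 50_000    # target in linear regime
    keep &= np.asarray(ref_min) >= 1_000       # reference bright enough
    keep &= np.asarray(df_rel_ppt) <= 0.01     # flux-ratio error cut (PPT)
    keep &= np.asarray(deblend_flag) != 2      # not split by SE deblending
    return keep
```

Applying it to three example images, the first passes all cuts while the second (saturated target) and third (faint reference) are dropped.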
As a second filtering step, we further wish to remove images taken under non-photometric conditions, such as uneven clouds or haze across the field of view. During photometric conditions we expect the counts of the reference star to evolve smoothly with time (due to gradual airmass and seeing changes, for example); departure from such smooth evolution can indicate non-photometric disturbances. We fit a piecewise second-order polynomial to the reference-star counts versus time, such that each polynomial is fit to data from the same night, site, and two-hour time span, using only data points that scatter less than a pre-determined threshold within 0.05-hour time bins. We then remove all images with a reference-star flux value farther than the pre-determined threshold from the polynomial. The threshold is 15 PPT for Campaigns 0, 1, and 2, and 250 PPT for Campaign 4. The number of images retained after each filtering step is listed in Table 6.
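The core of this second filtering step can be sketched as below, using synthetic data. For brevity, the sketch omits the grouping by night and site and the 0.05-hour-bin pre-screening that the actual pipeline performs:

```python
import numpy as np

def photometric_filter(t, f_ref, threshold_ppt, window_hr=2.0):
    """Fit a 2nd-order polynomial to the reference-star counts in each
    ~2-hour window and reject images whose counts lie farther than the
    threshold (in PPT of the window median) from the fit."""
    t_hr = (np.asarray(t, dtype=float) - np.min(t)) * 24.0
    f_ref = np.asarray(f_ref, dtype=float)
    keep = np.ones(t_hr.size, dtype=bool)
    windows = np.floor(t_hr / window_hr)
    for w in np.unique(windows):
        sel = windows == w
        if sel.sum() < 4:            # too few points for a quadratic fit
            continue
        coeffs = np.polyfit(t_hr[sel], f_ref[sel], 2)
        resid = f_ref[sel] - np.polyval(coeffs, t_hr[sel])
        keep[sel] = 1e3 * np.abs(resid) / np.median(f_ref[sel]) < threshold_ppt

    return keep

# Synthetic example: smooth airmass-like trend with one cloud-affected image
t = np.linspace(0.0, 0.1, 50)            # days (~2.4 hours)
f = 1e5 * (1.0 - 0.005 * t * 24.0)       # smoothly declining counts
f[10] *= 0.95                            # a -50 PPT dip, e.g. a passing cloud
keep = photometric_filter(t, f, threshold_ppt=15.0)
```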
We find an offset between the relative fluxes observed from different Las Cumbres sites. One possible explanation for this offset is an imperfect flat-field correction. To correct it, we subtract the median relative flux from the measured relative fluxes per site and night.
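This per-site, per-night median subtraction can be sketched as follows (the site codes and flux values are made up for illustration):

```python
import numpy as np

def remove_site_offsets(f_rel, site, night):
    """Subtract the median relative flux per (site, night) group, removing
    cross-site offsets such as those from imperfect flat-field correction."""
    f = np.asarray(f_rel, dtype=float).copy()
    keys = np.array([f"{s}|{n}" for s, n in zip(site, night)])
    for k in np.unique(keys):
        sel = keys == k
        f[sel] -= np.median(f[sel])
    return f

# Two sites observing the same night, each with a different baseline offset
f_corr = remove_site_offsets(
    [2.00, 2.02, 1.98, 2.11, 2.09, 2.13],
    ["cpt", "cpt", "cpt", "lsc", "lsc", "lsc"],
    [1, 1, 1, 1, 1, 1],
)
```

Note that this removes any real night-to-night baseline variation along with the instrumental offset, which is acceptable here since the oscillation periods of interest are much shorter than a night.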
Our final light curves are presented in Figures 5 and 6. The stellar oscillations are clearly seen in both the single-band and multi-band campaigns, though some scatter remains that was not removed by the filtering stages described above. The scattered points are not significantly correlated with detector temperature, sky brightness, photometric errors, or a specific telescope or site. However, since the scatter is larger in the 1 m data than in the 0.4 m data, we suspect that it is related to the non-trivial, time-dependent PSF caused by defocusing.

LIGHT CURVE ANALYSIS
These light curves serve as a test of a single 48-hour run. A full asteroseismic observation set will consist of several such runs; therefore, we do not expect to recover all of the oscillation frequencies of our targets with the current observations. However, we can perform some "sanity checks" on the data from our 48-hour runs (Campaigns 1 and 4).

Model Fitting
We fit the observations from Campaigns 1 and 4 to a sum of oscillations described by

$$f(t) = a\sum_{i=1}^{N} a_i \sin\left[2\pi f_i\,(t - t_0) + \phi_i\right],\qquad(5)$$

which is the same as Equation (1) but with a global scaling factor a.
We use emcee (Foreman-Mackey et al. 2013), a Python implementation of the affine-invariant Markov Chain Monte Carlo (MCMC) ensemble sampler (Goodman & Weare 2010), to fit Equation (5) to our data, with a and t_0 as free parameters, and a_i, f_i, ϕ_i, and N the known amplitudes, frequencies, phases, and total number of frequencies (respectively) of HD129929 (from Aerts et al. 2003) and HD180642 (from Degroote et al. 2009b). We assume here that the targets continue to pulsate with the same frequencies, phases, and relative amplitudes as when they were measured by those studies. This assumption is supported by the long-term monitoring of HD129929 by Aerts et al. (2004).
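Our fit uses emcee; as a simplified stand-in, the same two-parameter model can be fit by least squares, which is enough to illustrate the structure of Equation (5). The mode parameters below are hypothetical placeholders, not the published values:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical known mode parameters (the paper takes these from
# Aerts et al. 2003 and Degroote et al. 2009b)
freqs = np.array([6.98, 6.46])    # c/day
amps = np.array([15.0, 6.0])      # PPT
phases = np.array([0.3, 1.1])     # rad

def model(t, a, t0):
    """Equation (5): a global amplitude scale a applied to the known mode sum,
    with a common time shift t0."""
    arg = 2 * np.pi * freqs[:, None] * (t - t0)[None, :] + phases[:, None]
    return a * np.sum(amps[:, None] * np.sin(arg), axis=0)

# Synthetic 48-hour run with 3 PPT noise and known true parameters
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 2.0, 400))                  # days
y = model(t, 0.9, 0.02) + 3.0 * rng.standard_normal(t.size)

popt, pcov = curve_fit(model, t, y, p0=[1.0, 0.0])       # recovers a, t0
```

A least-squares fit only finds the local optimum; the MCMC approach used in the paper additionally yields the full posterior distributions shown in the corner plots.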
The fit results are presented in Figure 7, and their respective corner plots (generated with corner.py; Foreman-Mackey 2016) in Figure C3 (Appendix C). We find good fits to the data, indicating that our observations are consistent with the expected pulsation parameters and with the assumption that these parameters remain stable over decades, as suggested by Aerts et al. (2010) for β Cep stars.

Periodograms
We calculate periodograms8 for the data taken in Campaigns 1 and 4 and compare them to periodograms of artificial signals generated from Equation (1) with the known parameters of our targets.
We sample the artificial signal (using the best-fit value of a from the MCMC runs described in Section 6.1) at the same time stamps as the actual observations, assuming typical measurement errors (3 PPT9), and calculate its periodogram as well. We repeat this step 160 times, each with a different time shift t_0 between 0 and 80 days, in 0.5-day increments.
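The core of this comparison can be sketched with scipy's Lomb-Scargle implementation. The example below uses a synthetic single-mode signal with a hypothetical frequency, whereas the actual analysis uses the full published mode lists:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 2.0, 500))   # days, one 48-hour run
f_true = 6.98                              # c/day (hypothetical mode)
y = 10.0 * np.sin(2 * np.pi * f_true * t) + 3.0 * rng.standard_normal(t.size)

f_grid = np.linspace(0.5, 12.0, 4000)      # c/day
# scipy's lombscargle expects angular frequencies and mean-subtracted data
power = lombscargle(t, y - y.mean(), 2 * np.pi * f_grid)
f_peak = f_grid[np.argmax(power)]          # should land near f_true
```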
Our results are presented in Figure 8. All the observed periodograms are roughly consistent with the artificial ones. The observed periodogram from Campaign 1 shows peaks at additional frequencies, possibly due to the photometric scatter that our pipeline was unable to remove.

DISCUSSION AND CONCLUSIONS
We have shown that it is possible, in principle, to discern the oscillation frequencies of massive stars using a few weeks to months of observations, consisting of weekly high-cadence 48-hour runs.
We then showed that it is possible, in practice, to schedule and obtain such ∼48-hour consecutive cross-site observations with the Las Cumbres Observatory network, on both its 1 m and 0.4 m telescopes. We further demonstrated that the stellar oscillations of two different β Cep stars can be distinguished in these observations, and that they match the expected oscillation patterns known for these stars. The oscillations remain observable when switching bands, allowing such data to be obtained in multiple bands, which is crucial for mode identification.
As the 0.4 m telescopes require less defocusing than the 1 m telescopes for the same stars, they are less likely to produce photometric artifacts resulting from excessive defocusing. Thus, for bright enough targets, the 0.4 m telescopes are more likely to provide light curves with lower scatter. In principle, even the continuous observation runs do not have to be at as high a cadence as tested here; sampling is only needed above the Nyquist limit. However, it remains to be tested how non-continuous (but still well-sampled) observations can be obtained through the Las Cumbres scheduler in practice.
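For instance, assuming a hypothetical highest mode frequency of 8 cycles per day (typical of β Cep p modes), the Nyquist criterion implies a maximum spacing between samples of 90 minutes, far coarser than the 60-second cadence tested here:

```python
# Nyquist sampling requirement for a hypothetical highest mode frequency
f_max = 8.0                            # cycles/day (assumed, for illustration)
nyquist_rate = 2.0 * f_max             # minimum samples per day
max_cadence_min = 24.0 * 60.0 / nyquist_rate   # longest allowed gap, minutes
```

In practice one would sample well above this limit, since noise and mode beating make frequencies near the Nyquist limit hard to recover.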
This work serves as a proof of concept for using the Las Cumbres Observatory network to detect the oscillations of massive stars in multiple bands, in a way that can be scaled up to population studies of samples of objects. As such, it forms the basis for the Global Asteroseismology Project, which aims to obtain multi-band observations of ∼20 β Cep stars. Measuring and identifying the oscillation modes of a representative sample of massive stars will allow us to begin mapping the distribution of their internal properties through asteroseismic modeling (e.g. Briquet et al. 2009). This will bring us one step closer to solving some of the largest puzzles regarding massive-star structure and evolution.

Figure 1 .
Figure 1. Sequences of weekly 48-hour observation runs can recover the oscillation frequencies of the massive star HD129929, previously obtained during two decades of single-site observations, in 17 weeks. Top: Simulated signal of HD129929 (blue line; based on the modes detected by Aerts et al. 2004) and simulated sampling of it during 17 weekly 48-hour runs at 60-second cadence (red points; with a measurement uncertainty of 3 PPT). The simulations include ±2-day randomness at the start of each 48-hour run (which is expected due to weather and scheduling constraints and can help remove aliases), and a randomly timed two-hour weather gap in each 48-hour run, as can be seen in the bottom-right panel, which displays one such run. Bottom left: The difference between the frequencies (∆f) and amplitudes (∆A) recovered from our simulated data using a prewhitening process and the "true" frequencies and amplitudes input to the simulation. The input frequencies and amplitudes are recovered to within ∼10⁻⁴ c day⁻¹ and ∼10⁻¹ mmag, respectively. The prewhitening process used to extract the frequencies is presented in Figure B1 in Appendix B.1.

Figure 2 .
Figure 2. Same as Figure 1, but for the massive star HD180642. Here, only 4 sequences of weekly 48-hour consecutive observations recover the top 10 oscillation frequencies and amplitudes obtained through 18.8 years of single-site observations, to within ∼10⁻³ c day⁻¹ and ∼10⁻¹ PPT respectively. The prewhitening process used to extract the frequencies is presented in Figure B2 in Appendix B.1.

Figure 3 .
Figure 3. Differences between recovered and input frequencies (top) and amplitudes (bottom) in 1000 Monte Carlo simulations for each of 4 different observing-run lengths for HD129929. The average value ±1σ scatter of this difference from each set of 1000 simulations is shown for three out of the six oscillation modes (ordered in descending amplitude) for clarity. The recovered frequencies are consistent with the input ones to within the scatter of the simulations. Consistency and accuracy improve with increasing observing-run length and mode amplitude, as expected.

Figure 4 .
Figure 4. Timelines of our five observational campaigns. Colored bands denote when observations were obtained from each site.

Figure 5 .
Figure 5. Final light curves from Campaigns 0, 1, 2, and 4. The original data are plotted as semi-transparent points, and the data binned to 0.02-day (top panel), 0.003-day (middle panel), and 0.00833-day (bottom panel) bins are plotted as opaque points. The oscillations (and varying amplitudes due to mode beating) can be clearly seen in most cases, with varying intrinsic scatter between sites and nights.

Figure 6 .
Figure 6. Final light curves from Campaign 3 (the multi-band campaign). The oscillations are clearly seen, even when the camera switches filters after every third image. The original data are plotted as semi-transparent points, and the data binned to 0.0125-day bins are plotted as opaque points.

Figure 7 .
Figure 7. MCMC fit to the observations from Campaign 1 (HD129929; top) and Campaign 2 (HD180642; bottom). 1000 random model light curves drawn from the posteriors are plotted (black lines), but they appear as one thick line because the posterior distributions are narrow. Time is in hours from the beginning of each observation (HJD 2456777.91486 for HD129929 and HJD 2459420.5672 for HD180642).

Figure 8 .
Figure 8. Measured periodograms (red) for Campaigns 1 (top) and 4 (bottom). Grey vertical dashed lines and circles denote the frequencies (at their respective amplitudes) of HD129929 and HD180642 (from Aerts et al. 2004 and Degroote et al. 2009b, respectively). The semi-transparent blue lines denote periodograms of 160 simulated signals with varying time offsets (t_0 in Equation (5)). The opaque blue line denotes the periodogram of the simulated signal with the best-fit time offset from the MCMC fit (see Section 6.1).

Figure B1. Figure B2.
Figure B1. Lomb-Scargle periodograms after each prewhitening stage for the simulated light curve presented in Figure 1, with the highest-amplitude frequency at each stage marked by a dashed blue line. The last panel (from left to right and top to bottom) shows the residual periodogram after subtracting the highest-amplitude frequency at each step.

Figure C3 .
Figure C3. Corner plots of the two MCMC fits presented in Figure 7, for Campaigns 1 (left) and 4 (right).

Table 1 .
Properties of the imagers used in this work.

Table 2 .
Input frequencies (f) and amplitudes (A) for our test targets. The input values are taken in the V band from Aerts et al. 2004 for HD129929 (amplitudes in mmag) and from Degroote et al. 2009b for HD180642 (amplitudes in PPT).

Table 3 .
Summary of the five observation campaigns.

Table 4 .
Conservation of relative flux with top-hat convolution for various input flux ratios (between two artificial sources).

Table 5 .
Our targets together with their respective reference and comparison stars.

Table 6 .
Number of images retained after each filtering step.

Table B2 .
The average ± standard deviation of the differences between the recovered frequencies and amplitudes and the ones input to the simulations.