Integrating Sentinel-2 and PlanetScope Images with Drone-based Seagrass Data for Seagrass Percent Cover Mapping

Seagrass field data collection to train remote sensing images for seagrass percent cover mapping and to assess its accuracy can be laborious, costly, and time-consuming, especially for vast seagrass meadows with high density variations. There is also a potential discrepancy between seagrass data collected in the field, which usually cover a 0.25 m² or 1 m² ground area, and the spatial resolution of the remote sensing image used. PlanetScope at 3 m and Sentinel-2 at 10 m are the remote sensing images currently most frequently used to map seagrass, and there is a considerable information gap between field-collected seagrass data and their spatial resolution. The use of seagrass field data thus involves a generalization process and a set of assumptions to justify its integration with remote sensing images. An alternative is to use drone-based aerial imagery (hereafter drone data), which captures seagrass meadows at very high spatial resolution, to interpret seagrass percent cover at a level of precision similar to that of the remote sensing data used. This research assessed the integration of drone-based seagrass data with PlanetScope and Sentinel-2 images to map seagrass percent cover. Seagrass percent cover was interpreted from drone data for 9 m² and 100 m² ground sizes following the PlanetScope and Sentinel-2 grids, respectively. Stepwise, random forest, and support vector regression were employed to develop the seagrass percent cover mapping models. The accuracy assessment of the resulting seagrass percent cover maps involved the calculation of RMS error and 1:1 plots and their derivative analyses. Our results showed that an unparalleled benefit of using drone data is the possibility of obtaining SPC information that matches the spatial resolution of satellite imagery, which techniques such as photo-quadrat and photo-transect cannot match.
Drone data were successfully integrated with PlanetScope and Sentinel-2 images to produce a high-accuracy SPC map effectively and efficiently. There are, however, challenges in using drone data, mainly related to oceanographic and weather conditions and the difficulty of interpreting SPC at the species level.


Introduction
Seagrass meadows provide various ecosystem services that support the livelihoods of coastal communities and act as long-term carbon sequestration agents by removing CO2 from the atmosphere and storing it in their sediment [1][2]. Seagrass is also considered one of the most important nature-based solutions in the process of adaptation and mitigation to climate change [3]. However, information on the spatial distribution and extent of seagrass meadows across the earth is still lacking [3]. For instance, at the national or regional level, only Seychelles [4], some African countries [5], and Mediterranean countries [6] have been reported to have a complete seagrass map. Furthermore, information such as seagrass percent cover, which is considered the key parameter for seagrass monitoring efforts [7], is even more lacking. More importantly, seagrass percent cover can be used to estimate aboveground carbon stock [8] and sequestration [9] non-destructively.
Seagrass percent cover mapping is mainly conducted using spaceborne remote sensing data such as WorldView-2, IKONOS, and QuickBird [10][11], Sentinel-2 [12], and Landsat [13]. The challenge in seagrass percent cover mapping is collecting field data to train the model and assess the accuracy of the map. Seagrass data collected in the field using the photo-quadrat or photo-transect method rarely match the spatial resolution of the remote sensing image used, which could affect the success of the mapping; this is unavoidable and understandable given the various challenges of conducting seagrass surveys. To bridge this gap, assumptions are frequently used, and information aggregated from several measurements within the pixel size is used to represent the information of the corresponding pixel [14].
An alternative method to obtain seagrass percent cover is to use drone-based aerial imagery, which is now relatively cheap, can cover a large area, is less dangerous to the surveyor, captures seagrass at a very high spatial resolution, and allows seagrass percent cover to be interpreted directly. The seagrass percent cover seen from drone data matches the definition of seagrass percent cover, which is the horizontal projection of the area covered by seagrass per unit area, and the percent cover data can be obtained at a precision similar to the pixel size of the remote sensing image. However, the use of drones does have challenges, such as flying during strong wind and when strong sunglint occurs on the water's surface.
This research aims to assess the integration of seagrass percent cover data interpreted from drone data with Sentinel-2 and PlanetScope images, with the expectation that drone data can be used as an effective and efficient alternative to field surveys for seagrass percent cover mapping, if not a better one. PlanetScope and Sentinel-2 images have recently become commonly used to map seagrass due to their freely available data and decent temporal resolution. The drone that we used is the Phantom 4 Multispectral (P4M).
Seagrass beds in the western part of Pari Island, Kepulauan Seribu, Indonesia were selected to test this approach (Figure 1). The northern part of the study site is mainly dominated by high-density beds of long-bladed Enhalus acoroides, with Thalassia hemprichii found in between in several parts of the meadows. The substrate is soft, easily making the water turbid when stepped on, and the suspended material may take time to settle. Many epiphytes cover the seagrass blades, coral reefs can be found in the outer parts of the meadow, and the water can be categorized as medium turbidity. The southern parts have more carbonate sand substrate, and the water is much clearer with low turbidity. Enhalus acoroides dominates the area near the shoreline but gradually changes into Thalassia hemprichii-dominated meadows with much less epiphyte cover. Enhalus acoroides can be found again in deeper pools near the boundary with the coral reefs. These two distinct characteristics provide a good representation of seagrass condition variations, and hence a robust assessment of the approach.

High-resolution image acquisition
The Phantom 4 Multispectral (P4M) sensor was used in this study to produce high-resolution multispectral images. The P4M camera has six imaging sensors: one RGB sensor for visible light imaging and five multispectral sensors with a 2.08-megapixel resolution for each band. The P4M camera's field of view (FOV) is 62.7°, with a focal length of 5.74 mm and an aperture of f/2.2. The maximum image size is 1600 × 1300 pixels, and the sensor size is 4.87 × 3.96 mm [15]. Table 1 shows the band information for the P4M. During data collection, the P4M was mounted on its own aircraft, which has a takeoff weight of 1487 g and a maximum flight time of 27 minutes. It also carries a real-time kinematic (RTK) positioning module to improve accuracy up to ± 0.1 m. This UAV additionally carries an integrated sunlight detection sensor, providing calibration metadata for the images so that surveys conducted at different times throughout the day have consistently processed data results [15]. The UAV flight was carried out under clear sky conditions (little cloud and wind) from 08:00 to 10:00 am on October 29, 2022. The details of the flight settings and flight log records can be seen in Table 2 and Table 3.

High-resolution image processing
Processing of the drone photos was conducted using a workstation PC with an i7 8700 processor, 32 GB of DDR4 RAM, and an NVIDIA Quadro P600. Several steps were carried out to prepare ready-to-use high-resolution images: 1) aligning photos, 2) optimizing the camera, 3) building a dense cloud, 4) building a Digital Surface Model (DSM), and 5) building an orthomosaic. Each set of acquired images was aligned by estimating the camera position and orientation for each multi-camera system and generating a sparse point cloud consisting of the tie points. Then, a realignment was performed based on the sparse point clouds using the optimize camera function to refine the camera orientation and position. This stage was important to improve accuracy and avoid noise in the next step. To interpolate the sparse point cloud created in the previous step, the build dense cloud function was executed; this process calculates depth information for each camera and combines it into a single dense point cloud model. Finally, the dense point cloud model was converted to a DSM and combined using the build orthomosaic function to produce a ready-to-use high-resolution image. Although the P4M provides multispectral bands separately, this study only processed the RGB photos into a mosaic image, which was afterward used to calculate the seagrass percent cover.

Seagrass percent cover interpretation
Seagrass percent cover (SPC) was calculated from grids that were created based on the satellite pixel size. The grids were developed using the fishnet-grid tool that can be found in any GIS software. They were created following the spatial resolution of the PlanetScope and Sentinel-2 MSI images, which is 3 × 3 m and 10 × 10 m, respectively (hereafter the PlanetScope grid and Sentinel-2 grid). The resulting PlanetScope and Sentinel-2 grids (Figure 2) were used to mask the drone data. After the drone data were masked, isodata unsupervised classification was performed for each of the PlanetScope and Sentinel-2 grids. Isodata classification was employed due to its capability to provide high-accuracy and high-precision classification in a fast implementation, especially with a small number of classes [16][17], which here are seagrass and non-seagrass (mainly bare substrate). Ten classes with ten iterations were set when performing the isodata classification. The SPC in each grid was calculated from the isodata unsupervised classification results as the areal proportion of the seagrass classes to the total areal extent of the grid. To acquire the SPC in each grid, we calculate the total area of the seagrass classes in the PlanetScope and Sentinel-2 grids, divide it by the grid size extent, and multiply by 100% (Formula 1).
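The fishnet-grid step described above can be sketched in plain Python. This is a minimal illustration, assuming a projected coordinate system in metres; `make_fishnet` is a hypothetical helper, not the GIS tool the study used.

```python
def make_fishnet(xmin, ymin, xmax, ymax, cell):
    """Generate (xmin, ymin, xmax, ymax) bounds for each fishnet cell.

    The extent is tiled with square cells of side `cell` (in metres),
    mirroring the 3 m PlanetScope and 10 m Sentinel-2 pixel grids.
    """
    cells = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            cells.append((x, y, x + cell, y + cell))
            x += cell
        y += cell
    return cells

# A 30 m x 30 m extent tiled at the two satellite pixel sizes:
planetscope_grid = make_fishnet(0, 0, 30, 30, cell=3)   # 100 cells
sentinel2_grid = make_fishnet(0, 0, 30, 30, cell=10)    # 9 cells
```

Each cell's bounds can then be used to clip (mask) the drone mosaic before running the per-cell isodata classification.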

SPC = (Σ Area(Class x … Class y) / A) × 100% (Formula 1)

where Class x to Class y are the seagrass classes, whose areal extent depends on the isodata classification result, and A is the grid area extent, which depends on the satellite image pixel size [20].

Both images have visible and near-infrared (NIR) bands, suitable for benthic habitat mapping such as coral reefs, seagrass meadows, and macroalgae due to their water column penetration abilities. Since the acquired images are relatively clear, sunglint free, and recorded when the tide was relatively low, sunglint correction and water column correction were deemed unnecessary.
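Formula 1 reduces to a ratio of class areas; since every drone pixel covers the same area, the areal ratio equals the pixel-count ratio. The sketch below assumes the isodata result is summarized as a mapping from class label to pixel count within one grid cell (names are illustrative, not from the paper's code).

```python
def seagrass_percent_cover(class_pixel_counts, seagrass_classes):
    """Formula 1: seagrass class area / grid area * 100%.

    `class_pixel_counts` maps isodata class label -> pixel count inside
    one PlanetScope or Sentinel-2 grid cell.
    """
    total = sum(class_pixel_counts.values())
    seagrass = sum(class_pixel_counts.get(c, 0) for c in seagrass_classes)
    return 100.0 * seagrass / total

# e.g. a 3 m PlanetScope cell holds 60 x 60 = 3600 drone pixels (5 cm each);
# hypothetical counts with class 10 as bare substrate:
counts = {1: 1800, 2: 900, 10: 900}
cover = seagrass_percent_cover(counts, seagrass_classes=range(1, 6))  # 75.0
```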

SPC mapping model development
To develop the SPC mapping models from PlanetScope and Sentinel-2, we employed stepwise regression (SWR) [21], random forest regression (RFR) [22], and support vector regression (SVR) [23]. For the RFR, the square root of the number of features was used to randomly select candidate features at each split, and the number of trees was 500. For the SVR, the parameters followed the settings from [24]. For all regression models, all 8 bands of PlanetScope and 4 bands of Sentinel-2 were included as predictors.
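A scikit-learn sketch of the RFR and SVR setups is given below. The predictor and response arrays are synthetic placeholders (the study trains on satellite band reflectance against drone-derived SPC), and since the SVR parameters from [24] are not reproduced here, library defaults stand in.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in predictors: 8 "band" values per grid cell with a synthetic
# percent-cover response -- illustrative data only.
X = rng.random((120, 8))
y = 100 * X[:, 0]

# Random forest regression: 500 trees with sqrt(n_features) candidate
# features per split, matching the settings reported in the text.
rfr = RandomForestRegressor(n_estimators=500, max_features="sqrt",
                            random_state=0).fit(X, y)

# Support vector regression; defaults used as placeholders for [24].
svr = SVR().fit(X, y)

predicted = rfr.predict(X)
```

For Sentinel-2 the same pipeline applies with a 4-column predictor matrix.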

Accuracy assessment
The accuracy assessment of the resulting SPC maps includes the calculation of root mean squared error (RMSE) and a 1:1 plot between reference and predicted SPC, which is further analyzed to assess the overestimation and underestimation of the predicted SPC.
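The two accuracy measures can be sketched as follows; `bias_summary` is a hypothetical helper that counts the samples falling above and below the 1:1 line, the quantities read off such a plot.

```python
import math

def rmse(reference, predicted):
    """Root mean squared error between reference and predicted SPC."""
    n = len(reference)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

def bias_summary(reference, predicted):
    """Count over- and underestimated samples relative to the 1:1 line."""
    over = sum(p > r for r, p in zip(reference, predicted))
    under = sum(p < r for r, p in zip(reference, predicted))
    return over, under

# Illustrative SPC values (%), not data from this study:
ref = [80.0, 60.0, 40.0, 20.0]
pred = [78.0, 63.0, 45.0, 26.0]
error = rmse(ref, pred)                # about 4.3 percent cover
over, under = bias_summary(ref, pred)  # (3, 1): mostly overestimation
```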

Drone data mosaic result
We obtained a mosaicked image that records the study area in two parts (Figure 3). The southern image was created from 247 photos (mission 4), while the northern image was created from 266 photos and 283 photos from mission 6 and mission 7, respectively. The mosaicked image has a pixel size of 5 cm, covers approximately 21 hectares, and provides a very high spatial resolution image that gives a clear visualization of the variation of seagrass cover in the study area.

Seagrass percent cover data based on drone data
The SPC interpretation conducted in this research did not consider species variation, since it is challenging to differentiate between species with drone data; the SPC data here include all seagrass species present in the area. There are three scenarios in using drone data to map seagrass percent cover with the PlanetScope image. The isodata classification of drone data based on the PlanetScope grid resulted in ten spectral classes (Figure 4). The first five classes represent high to medium seagrass density, classes 6 to 8 represent low seagrass density, and class 9 is very low density seagrass. Class 10 is bare substrate. The percent cover for each grid was thus calculated from the percentage of seagrass in classes 1 to 5 (scenario 1), 1 to 8 (scenario 2), and 1 to 9 (scenario 3), respectively, producing three percent cover values (Table 5). For the Sentinel-2 grid, the number of classes in the isodata classification is four (Figure 4). Classes 1 to 3 represent seagrass at high and medium density and class 4 is bare substrate. As a result, there is only one percent cover value to be used for the Sentinel-2 image. The differences in the number of output classes in the isodata classification results based on the PlanetScope and Sentinel-2 grids are highly affected by the area extent, hence the image statistics, which are among the important variables in the isodata unsupervised classification algorithm [16].
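The three PlanetScope scenarios differ only in which isodata classes are counted as seagrass, which can be expressed as a cutoff class in a short sketch (pixel counts below are illustrative, not data from this study).

```python
def scenario_cover(class_pixel_counts, last_seagrass_class):
    """SPC when isodata classes 1..last_seagrass_class count as seagrass."""
    total = sum(class_pixel_counts.values())
    seagrass = sum(n for c, n in class_pixel_counts.items()
                   if c <= last_seagrass_class)
    return 100.0 * seagrass / total

# Hypothetical pixel counts for one PlanetScope grid cell:
counts = {1: 1000, 4: 800, 7: 600, 9: 700, 10: 500}

scenario1 = scenario_cover(counts, 5)  # classes 1-5: high/medium density
scenario2 = scenario_cover(counts, 8)  # classes 1-8: adds low density
scenario3 = scenario_cover(counts, 9)  # classes 1-9: adds very low density
```

The cover values grow monotonically from scenario 1 to scenario 3, mirroring the underestimation and overestimation the text describes.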

SPC mapping model and accuracy assessment
Statistically, all the regression models of the PlanetScope and Sentinel-2 images managed to model and map seagrass percent cover using percent cover data calculated from drone data, with accuracies higher than 67% (Table 6). The accuracy differences between regression models for the PlanetScope and Sentinel-2 images are within 3% and 4%, respectively. This indicates a stable model performance generated from the combination of input images and percent cover reference data used. The variations between the PlanetScope scenario results are apparent when assessing the spatial distribution of the seagrass percent cover. In scenario 1, where only high and medium density seagrass was included, the actual percent cover value is underestimated, and as a consequence, when used to train the image to develop a mapping model, the resulting map has underestimated values, especially for the very low seagrass percent cover pixels. As seen in Figure 5, pixels dominated by substrate are unclassified because their values are negative. Percent cover values from scenario 2 are the most suitable for seagrass percent cover mapping using PlanetScope. The percent cover spatial distribution resembles the condition of Pari Island's seagrass meadow in the field, where the northern part is dominated by high seagrass cover and the southern part by medium to low seagrass cover. In scenario 3, when the very low seagrass class was included, the actual percent cover value is overestimated, and when used to develop a mapping model, the resulting map overestimates the percent cover values. Figure 5 shows that in scenario 3 all pixels were categorized as medium to high seagrass percent cover, with some very high density seagrass pixels flagged red because the value is higher than 100%. The sand-dominated pixels were also categorized as medium to high seagrass cover.
The spatial distribution of seagrass percent cover from Sentinel-2 resembles scenario 2 of the PlanetScope result. The difference is that the Sentinel-2 results are slightly underestimated, due to the smaller number of isodata classes available to calculate the percent cover precisely. The underestimation shows in the sand-dominated pixels in the southern part of the island. Based on the 1:1 plot between reference and predicted seagrass percent cover, both the PlanetScope and Sentinel-2 images slightly overestimate the percent cover, generally for SPC lower than 70% (Figure 6). The overestimation mainly comes from the dark background substrate, turbid water, and shadowing from the surrounding taller seagrass, which make the lower density seagrass pixels appear darker and hence resemble higher seagrass density. Our results have shown that drone data can be used to obtain SPC data to be integrated with satellite images. This is a huge benefit since we can collect SPC effectively and efficiently. Utilizing drones to collect reference SPC can reduce survey expenses such as local staff fees and boat rental, since we only need a minimum of two drone operators. Moreover, a drone can cover a wider survey area in a short time, which means fieldwork can be completed faster. It is even more useful when the sampling location is in a remote area with unpredictable weather and oceanographic conditions. A drone can be battery-limited; however, having spare batteries still makes it possible to record several sampling locations in a day. Utilizing drones is also safer for surveyors, minimizing the threat of fatal accidents and venomous animals in seagrass habitat. Another unparalleled benefit is that drone data allow us to obtain SPC information at the same level of precision as the spatial resolution of widely available satellite images such as PlanetScope and Sentinel-2. Techniques such as photo-quadrat and photo-transect are not able to obtain SPC field data that match the spatial resolution of those images.

Cons.
Although drone technology provides various advantages, it also comes with several challenges. Strong wind can be a limitation to operating the drone safely; it affects battery consumption and the roughness of the water surface, which makes interpreting seagrass cover from the aerial image more difficult. Thus, drone data acquisition is only effective when sunglint is absent or minimal. Since most drones do not have a NIR band, it is also not possible to perform sunglint correction. Animals such as sea eagles can also attack the drone while it is working and cause damage. Another limitation of utilizing drone data is that the data processing effort requires a proper computer specification, as processing a huge number of high-quality photos can take a long time. When the water surface is rough, it is also difficult to mosaic the images, and the resulting image may be difficult to interpret because the seagrass may be distorted. It is also not very effective when the water is turbid and the bottom is not visible. Finally, it is difficult to interpret species variation, and thus obtaining SPC for each species in the field can be challenging. Our experience in this study suggests that drone data is only effective for obtaining information on the location of seagrass and SPC without considering species variation.

Conclusions
This research provided insight into how drone data can be integrated with spaceborne remote sensing images for seagrass percent cover mapping effectively and efficiently. PlanetScope and Sentinel-2 images were successfully integrated with drone-based SPC data to produce an SPC map with high accuracy. The unparalleled benefit of using drone data is the possibility of obtaining SPC information that matches the spatial resolution of the satellite image, which techniques such as photo-quadrat and photo-transect cannot match. Nevertheless, challenges in using drone data are also present, mainly related to oceanographic and weather conditions and the difficulty of interpreting SPC at the species level.

Figure 1. Study site locations and the PlanetScope and Sentinel-2 grids used to obtain seagrass percent cover data from drone data.

Figure 2. The PlanetScope and Sentinel-2 grids, and the resulting vector of the isodata unsupervised classification overlaid on the PlanetScope and Sentinel-2 grids.

Figure 3. True color composite of the aerial image taken from the P4M drone.

Figure 4. PlanetScope and Sentinel-2 grids overlaid on the corresponding isodata classification result of the drone data.

Figure 6. The 1:1 plot between predicted and reference seagrass percent cover.

Table 2. General flight settings.

Table 3. P4M flight log records for each mission.
Satellite images for seagrass percent cover mapping

The PlanetScope (SuperDove generation) image used in this research was acquired on October 29th, 2022, with 8 spectral bands and a spatial resolution of 3 m [18]. For the Sentinel-2 image, we used the 10 m spatial resolution bands (4 spectral bands) acquired from the European Space Agency (ESA) on September 11th, 2022. Both images are geometrically corrected to the local map projection and radiometrically corrected to surface reflectance (SR) [18][19]. The specific processing levels for the PlanetScope and Sentinel-2 images used in this research are Level-3B Analytic SR and Level-2A, respectively. The detailed specifications of the PlanetScope and Sentinel-2 MSI images used in this research can be seen in Table 4. Based on Table 4, utilizing PlanetScope and Sentinel-2 MSI images for SPC mapping has several advantages, especially providing detailed and moderate-resolution information for large areas, and providing images with acquisition dates similar or close to the field survey date.

Table 4. Specifications of the PlanetScope and Sentinel-2 images used in this study.

Table 5. Summary of seagrass percent cover calculated from drone data.

Table 6. Summary of the accuracy assessment of SPC mapping using the integration of PlanetScope and Sentinel-2 images and seagrass reference data calculated from drone data.