Research Article  Open Access
Zhongping Lee, Mingjia Shangguan, Rodrigo A. Garcia, Wendian Lai, Xiaomei Lu, Junwei Wang, Xiaolei Yan, "Confidence Measure of the Shallow-Water Bathymetry Map Obtained through the Fusion of Lidar and Multiband Image Data", Journal of Remote Sensing, vol. 2021, Article ID 9841804, 16 pages, 2021. https://doi.org/10.34133/2021/9841804
Confidence Measure of the Shallow-Water Bathymetry Map Obtained through the Fusion of Lidar and Multiband Image Data
Abstract
With the advancement of Lidar technology, the bottom depth (H) of optically shallow waters (OSW) can be measured accurately with an airborne or spaceborne Lidar system (H_Lidar hereafter), but this data product comes in a line format rather than the desired charts or maps, particularly when the Lidar system is on a satellite. Meanwhile, radiometric measurements from multiband imagers can also be used to infer bottom depth (H_img hereafter) of OSW with variable accuracy, though a map of bottom depth can be obtained. It is logical and advantageous to use the two data sources from collocated measurements to generate a more accurate bathymetry map of OSW, where usually image-specific empirical algorithms are developed and applied. Here, after an overview of both the empirical and semi-analytical algorithms for the estimation of H from multiband imagers, we emphasize that the uncertainty of H_img varies spatially, although it is straightforward to draw regressions between H_Lidar and radiometric data for the generation of H_img. Further, we present a prototype system to map the confidence of H_img pixel-wise, which has been lacking until today in the practices of passive remote sensing of bathymetry. We advocate the generation of a confidence measure in parallel with H_img, which is important and urgent for broad user communities.
1. Introduction
The ocean depth is a geophysical property that has puzzled humans for thousands of years. The answer not only satisfies curiosity but is also important for many aspects of human activities and scientific studies, including navigation, ecosystem management, sustainable economic development, and ocean dynamic modeling [1]. To determine ocean bathymetry, the ancient Greeks (circa 80 BCE) used the "line sounding" method and obtained depth measurements down to ~2000 m in the Mediterranean Sea, while James Clark Ross obtained a depth of 4893 m in the 1840s. All these measurements were based on ship surveys; hence, it is unsurprising that the Challenger expedition (1872–1876), an oceanic voyage explicitly targeting ocean bathymetry, obtained just 492 soundings of the Atlantic Ocean. Only after the invention of sonar during World War II did great achievements occur in ocean bathymetry or bottom topography. Still, because the ocean bottom is covered by a thick layer of water, which is generally opaque to electromagnetic radiation, we know much less about the seafloor than we do about the surface of the Moon or Mars [2].
Since the launch and operation of satellites, our capability to observe and measure ocean bathymetry has significantly improved, where sea surface altimetry has been successfully used to indirectly infer the bathymetry [3, 4]. This approach, however, cannot resolve small-scale variations and can only detect large seamounts that alter the earth's gravitational field and subsequently the sea surface altimetry. One direct and precise measurement of bathymetry from airborne or spaceborne platforms is Lidar (light detection and ranging) [5], which uses the time lapse between the emission and reception of photons interacting with the bottom (or a target) to calculate the distance the photons traveled. This time-based technique can be used to accurately calculate the bottom depth of clear oceanic waters up to about 40 m at present [6, 7]. We commonly term this technique active remote sensing of bottom depth and here use H_Lidar to represent the product obtained (see Table 1 for the major symbols and acronyms used in this article). Although Lidar is only feasible for relatively shallow and clear waters, due to the significance of such regions, many airborne Lidar systems have been developed specifically for bathymetry [5, 8]. One extremely exciting and valuable development is the ICESat-2 satellite system [9], which sends out a laser at 532 nm, has a vertical resolution of about 0.17 m for bathymetry, and can potentially provide H_Lidar for various nearshore regions of the world [7]. This Lidar system, due to its spaceborne nature, obtains measurements at discrete points along the lines dictated by the Lidar system and satellite orbit, not the desired bathymetry map.

A completely different approach to the optical measurement of bathymetry is based on radiative transfer, where a shallow bottom affects the radiance emerging from below the sea surface. A quick and simple example is the shallow vs. deep ends of a swimming pool, which appear as different colors to humans. Algorithms were developed in the 1970s to use radiometric data from multiband imagers to estimate the bottom depth of shallow clear waters [10–12]. This image-based remote sensing of the bottom depth (H_img hereafter) is commonly termed passive remote sensing. Although such estimates of depth are not precise, a significant advantage is that a map of H_img can be produced, especially with the launch and operation of more advanced sensors [1]. In recent decades, with an improved understanding of radiative transfer in optically shallow waters (OSW), more sophisticated algorithms based on radiative transfer were developed [13–19], resulting in the creation of more bathymetry maps from imagers [19–22]. For those algorithms based on the physics of radiative transfer, a priori depths are not required for the development of the algorithm, and H_img can be produced as long as the input reflectance spectrum is highly reliable and has an adequate spectral resolution. However, since both the water and bottom properties affect the reflectance spectrum, there are still various uncertainties in the H_img product derived from the reflectance spectrum.
In view of the availability of concurrent or collocated, highly accurate, satellite-based H_Lidar and high-spatial-resolution multiband imagers, it is logical to develop schemes to generate bathymetry maps of OSW through the fusion of the two data sources [23, 24]. Figure 1 shows an example of measurements captured by ICESat-2 (red dashed line) and the Landsat-8 Operational Land Imager (OLI) over the Great Bahama Bank, where it is desired to expand the ICESat-2 bathymetry to the entire shallow region covered by the Landsat-8 acquisition. As demonstrated in many studies, when collocated measurements of both H_Lidar and radiometric properties are available, the generation of H_img through explicit empirical regressions [25–28] or neural networks [29–31] becomes straightforward. However, these products [25, 32] lack a representation of the confidence or quality of the product at each pixel, although schemes to estimate the impact of radiometric noise on H_img have been developed [21, 33]. Usually, an averaged root mean square error or mean relative error of an algorithm is provided, but such measures of error represent the performance over a data pool and do not imply the same error or confidence at every pixel or location [34]. After the first demonstrations of passive remote sensing of bottom depth [10, 12], what limited the broad-scale application of the product was not a lack of algorithms but rather the lack of a pixel-wise confidence measure for such products. Brando et al. [35] suggested using the closure between the measured and modeled remote sensing reflectance to infer the quality of H_img. However, as this closure is an index for numerical solutions of a complex remote sensing function (see Section 2.3), a good closure does not necessarily indicate high confidence in the derived H_img. At present, the ability to generate a pixel-wise measure of the quality or confidence of H_img through theoretical modeling is lacking, despite the necessity of such a measure, as it would vary spatially even within clear waters.
In this work, after a review and demonstration of traditional passive schemes for bathymetry, we provide an original prototype system to objectively classify the confidence of H_img. We advocate the generation of such confidence products in parallel to remotely sensed bathymetry, as such a confidence measure would be essential to promote the use of H_img by the broader community.
2. Overview of Passive Remote Sensing Algorithms for Shallow-Water Bathymetry
Detailed descriptions of the derivation of H_Lidar from a Lidar system can be found in Guenther [5], where the key is to obtain precise measurements of the time lapses of photons reflected by a sea bottom. For a water body with a shallow bottom, the remote sensing reflectance (R_rs, in sr^{-1}), the ratio of the radiance (L_w) emerging from below the surface to the downwelling irradiance just above the surface, can be expressed as [36–38]

R_rs = R_rs^dp [1 - exp(-(K_d + K_u^C) H)] + (ρ/π) exp(-(K_d + K_u^B) H).    (1)

Here, R_rs^dp is the remote sensing reflectance of the same water body but with no impact from the bottom (i.e., optically deep); ρ is the bottom reflectance modified by the air-sea transmittance; K_d is the diffuse attenuation coefficient of downwelling irradiance, with K_u^C and K_u^B for the diffuse attenuation of upwelling photons generated by scattering in the water column and by bottom reflection, respectively. K_d, K_u^C, and K_u^B can be parameterized with the water's inherent optical properties [13, 36, 39]. Thus, while R_rs depends on the water optical properties and bottom reflectance, it is also a function of the bottom depth (H). Hence, various passive remote sensing algorithms have been developed to retrieve H from this spectral signal [11, 13, 21, 40]. These algorithms can be grouped into three approaches, two of which belong to the empirically based approach (EBA) and the other classed as a semi-analytical approach (SAA). The following briefly describes the essence of these three schemes.
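To make the two-term structure of this model concrete, the sketch below evaluates a reflectance of the form of Equation (1) at a single band. Every numerical value used (R_rs^dp, ρ, and the attenuation coefficients) is an illustrative assumption, not a value from this study.

```python
import math

def rrs_shallow(rrs_dp, rho, kd, ku_c, ku_b, depth):
    """Shallow-water reflectance in the form of Equation (1).

    rrs_dp : deep-water remote sensing reflectance (sr^-1)
    rho    : bottom reflectance (transmittance-adjusted, dimensionless)
    kd     : diffuse attenuation of downwelling irradiance (m^-1)
    ku_c   : attenuation of upwelling photons scattered in the column (m^-1)
    ku_b   : attenuation of upwelling photons reflected by the bottom (m^-1)
    depth  : bottom depth H (m)
    """
    column = rrs_dp * (1.0 - math.exp(-(kd + ku_c) * depth))
    bottom = (rho / math.pi) * math.exp(-(kd + ku_b) * depth)
    return column + bottom

# A bright sandy bottom (rho = 0.3, assumed) dominates the signal at 2 m,
# while the bottom term fades exponentially as the depth grows.
shallow = rrs_shallow(0.004, 0.3, 0.06, 0.07, 0.08, 2.0)
deep = rrs_shallow(0.004, 0.3, 0.06, 0.07, 0.08, 30.0)
```

In the limit of large depth the expression collapses to R_rs^dp, which is the optically deep case described above.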
2.1. Explicit Empirical Approach (EEA)
2.1.1. Multiband Value Algorithm (MBVA)
With collocated measurements of bottom depth and multiband radiometric data, Polcyn et al. [10] proposed the first empirical algorithm for H_img based on the difference between the shallow- and deep-water R_rs, with the algorithm further refined by Lyzenga [12]. Generally, H_img can be written as

H_img = h_0 + h_1 X.    (2)

Here, X is the logarithm of (L(λ) - L^dp(λ)) or (R_rs(λ) - R_rs^dp(λ)) and is calculated for a specific spectral band, while h_0 and h_1 are empirical coefficients tuned using collocated R_rs (or L) and H_Lidar. This algorithm can be improved with the use of additional bands:

H_img = h_0 + Σ_{i=1}^{N} h_i X_i.    (3)
N is the number of bands that are available and feasible from an imager; thus, there are N + 1 algorithm coefficients (h_i) to be tuned. Since Equations (2) and (3) are empirical, there is a potential that R_rs is smaller than R_rs^dp, e.g., over dark seagrass regions, which then causes an invalid mathematical calculation for X; this formula could be modified as [41]

H_img = h_0 + Σ_{i=1}^{N} h_i ln(R_rs(λ_i)).    (4)
Since the bottom depth is related directly to the value of R_rs, we term this empirical model for H_img a multiband value approach (MBVA). Comparing Equation (4) with Equations (2) and (3), they are essentially the same, except that the X (a difference between the shallow and deep regions in an image) in Equations (2) and (3) is replaced by ln(R_rs) in Equation (4).
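The tuning of the empirical coefficients is an ordinary least-squares fit. The sketch below shows the single-band case of Equation (2) with synthetic matchup pairs; the reflectance values, the deep-water offset (0.004 sr^-1), and the "true" coefficients are all hypothetical.

```python
import math

def fit_linear(xs, hs):
    """Least-squares fit of H = h0 + h1 * X, the single-band Equation (2)."""
    n = len(xs)
    mx = sum(xs) / n
    mh = sum(hs) / n
    h1 = sum((x - mx) * (h - mh) for x, h in zip(xs, hs)) / \
         sum((x - mx) ** 2 for x in xs)
    h0 = mh - h1 * mx
    return h0, h1

# Synthetic matchups: X = ln(Rrs - Rrs_dp) for hypothetical green-band values,
# with depths generated from an assumed (h0, h1) = (1.0, -4.0)
xs = [math.log(r - 0.004) for r in (0.030, 0.020, 0.012, 0.008, 0.006)]
hs = [1.0 - 4.0 * x for x in xs]
h0, h1 = fit_linear(xs, hs)
```

With noise-free synthetic data the fit recovers the assumed coefficients exactly; with real matchups the residual scatter is precisely the spatially varying uncertainty discussed later in the article.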
2.1.2. Two-Band Ratio Algorithm (TBRA)
Also recognizing that the difference between R_rs and R_rs^dp could be negative over dark bottom substrates, Stumpf et al. [25] developed an empirical approach from the ratio of two bands to estimate H_img:

H_img = m_1 ln(n R_rs(λ_1)) / ln(n R_rs(λ_2)) - m_0.    (5)
The three model coefficients (m_0, m_1, and n) are also tuned using pairs of collocated R_rs and H_Lidar. The value of n ranges 500–1500 and is usually fixed at 1000 [25, 42], while Traganos and Reinartz [34] indicated that other values of n work fine for a seagrass environment. As demonstrated in many studies [24, 25, 42], maps of bathymetry can be generated following this scheme. It is also possible to use the logarithm of the ratio of two bands for the empirical estimation of H_img [43]. However, Traganos et al. [41] found that its performance is worse than that of the formulation given by Equation (5); hence, its discussion is omitted here. Because this approach employs a ratio of R_rs at two bands, we term this scheme a two-band ratio algorithm (TBRA), although Equation (5) can be expanded to include more available bands. The algorithms following Equations (2)–(5) are data-based (empirical), where the algorithm relationships and coefficients are explicit, and the coefficients are driven by pairs of known R_rs and H_Lidar. Note that if R_rs from more bands is required, it places higher demands on atmospheric correction, especially at the longer wavelengths over optically shallow regions, where presently the OLI R_rs value in the red band is sometimes invalid.
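A minimal sketch of the TBRA workflow follows: compute the log-ratio term of Equation (5) with n held at 1000, then tune m_0 and m_1 by least squares. The band values and the coefficients recovered in the example are synthetic, chosen only to exercise the fit.

```python
import math

def tbra_ratio(rrs_1, rrs_2, n=1000.0):
    """Two-band ratio term of Equation (5), ln(n*Rrs(l1)) / ln(n*Rrs(l2))."""
    return math.log(n * rrs_1) / math.log(n * rrs_2)

def fit_tbra(ratios, depths):
    """Tune m0, m1 in H = m1 * ratio - m0 by least squares (n held fixed)."""
    k = len(ratios)
    mr = sum(ratios) / k
    md = sum(depths) / k
    m1 = sum((r - mr) * (d - md) for r, d in zip(ratios, depths)) / \
         sum((r - mr) ** 2 for r in ratios)
    m0 = m1 * mr - md
    return m0, m1

# Synthetic matchups generated from an assumed (m0, m1) = (50, 50)
ratios = [1.05, 1.10, 1.15, 1.20]
depths = [50.0 * r - 50.0 for r in ratios]
m0, m1 = fit_tbra(ratios, depths)
```

Because the ratio is of logarithms of positive quantities, this form avoids the negative-difference failure of the MBVA over dark substrates, which is exactly the motivation given above.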
2.2. Implicit Empirical Approach (IEA)
2.2.1. Estimation from R_rs
Different from the explicit empirical algorithms shown above, the machine learning approach (MLA, which in this manuscript collectively stands for neural networks, machine learning, and deep learning) is another data-based approach for the estimation of H_img from remote sensing measurements [29, 30, 44, 45]. Unlike the EEA, the algorithm relationships and coefficients of an MLA are hidden in the computer programming architecture (various layers and neurons), so it is not obvious how H_img varies with R_rs or spectral radiance. As there are no explicit equations or parameters for such an approach, conceptually the algorithm can be expressed as

H_img = MLA_Rrs(R_rs(λ_1), R_rs(λ_2), …, R_rs(λ_N)).    (7)

Here, N is the number of bands available and feasible for the estimation of H_img (e.g., usually Bands 1–4 for Landsat-8 OLI), although there are no specific restrictions on the N that can be used.
2.2.2. Estimation from Top-of-Atmosphere Reflectance
Given that machine learning is empirical, another way to utilize an MLA for H_img estimation is to bypass atmospheric correction [46], thereby estimating H_img and/or water properties directly from the top-of-atmosphere reflectance (ρ_toa) [47]. Like Equation (7), this scheme can conceptually be defined as

H_img = MLA_ρtoa(ρ_toa(λ_1), ρ_toa(λ_2), …, ρ_toa(λ_N)).    (8)

To implicitly account for the contribution of the atmosphere to ρ_toa, the bands used will range from the visible to the NIR-SWIR (e.g., Bands 1–7 for Landsat-8 OLI). Similar to the algorithms in Section 2.1, when sufficient pairs of ρ_toa and H_Lidar are available, an MLA_ρtoa can be developed for the remote sensing of H_img from ρ_toa. While an EEA like Equation (5) could be developed with 5–10 data points, an MLA requires much more data (usually hundreds or more) in the training phase. In addition, an MLA is much more complex in its computer architecture than the simple mathematical formulations presented in Equations (2)–(5).
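To make the "hidden coefficients" of an MLA tangible, the toy sketch below trains a one-hidden-layer tanh network by batch gradient descent on synthetic band-to-depth pairs. This is a generic illustration, not the configuration used in this study: the inputs, the linear "true" relation, and the scaling of depth into [0, 1] are all assumptions for the demo.

```python
import math
import random

def train_mlp(samples, targets, hidden=3, lr=0.5, epochs=5000, seed=0):
    """Toy 1-hidden-layer tanh MLP fitted by batch gradient descent."""
    rng = random.Random(seed)                  # deterministic initialization
    nin = len(samples[0])
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(nin)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    m = len(samples)
    for _ in range(epochs):
        gw1 = [[0.0] * nin for _ in range(hidden)]
        gb1, gw2, gb2 = [0.0] * hidden, [0.0] * hidden, 0.0
        for x, y in zip(samples, targets):
            h = [math.tanh(sum(w1[j][i] * x[i] for i in range(nin)) + b1[j])
                 for j in range(hidden)]
            err = sum(w2[j] * h[j] for j in range(hidden)) + b2 - y
            for j in range(hidden):
                gw2[j] += err * h[j]
                dz = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
                gb1[j] += dz
                for i in range(nin):
                    gw1[j][i] += dz * x[i]
            gb2 += err
        for j in range(hidden):                # mean-gradient update
            w2[j] -= lr * gw2[j] / m
            b1[j] -= lr * gb1[j] / m
            for i in range(nin):
                w1[j][i] -= lr * gw1[j][i] / m
        b2 -= lr * gb2 / m

    def predict(x):
        h = [math.tanh(sum(w1[j][i] * x[i] for i in range(nin)) + b1[j])
             for j in range(hidden)]
        return sum(w2[j] * h[j] for j in range(hidden)) + b2
    return predict

# Synthetic "matchups": two band values in [0, 1]; depth (scaled to [0, 1])
# follows an assumed linear band difference, purely for demonstration.
grid = [(a / 2.0, b / 2.0) for a in range(3) for b in range(3)]
depths = [0.5 + 0.4 * (x1 - x2) for x1, x2 in grid]
mla = train_mlp(grid, depths)
```

The trained weights play the role of the hidden "coefficients" described above: the input-output relationship is learned, but it is not expressible as a compact formula like Equations (2)–(5).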
2.3. Semianalytical Approach (SAA)
A completely different set of algorithms for H_img is based on the radiative transfer equation. After parameterizing the spectra of the inherent optical properties (IOPs) and bottom reflectance in Equation (1), an R_rs spectrum of shallow water can be simplified with five variables and expressed as [13]

R_rs(λ) = F(P, G, X, B, H).    (9)

Here, P and G represent the absorption coefficients of phytoplankton (a_ph) and detritus-gelbstoff (a_dg), X is the particle backscattering coefficient (b_bp), and B is the bottom reflectance, all set at a reference wavelength, such as 440 nm. The five variables can be solved numerically through spectral optimization (or minimization) by minimizing a cost function computed between the measured and modeled R_rs spectra, defined as

err = sqrt(Σ_λ (R_rs^mea(λ) - R_rs^mod(λ))^2) / Σ_λ R_rs^mea(λ).
Provided there is a sufficient number of spectral bands and that the R_rs spectrum is of high quality, H_img can be generated from image-based spectrometers without a priori data pairs of R_rs and H [35, 48, 49]. In addition, the bottom substrate class and water optical properties can also be generated from this process [14, 15, 19]. The SAA is extremely valuable for measurements that have R_rs only, but the retrievals depend on the quality of the R_rs spectrum and the number of spectral bands [21, 33, 35, 50]. For multiband imagery that has a limited number of spectral bands in the visible domain, such as Landsat-8 OLI and Sentinel-2 MSI, modifications of the SAA variables and processing are necessary [51]. Additionally, its computational load is significantly greater than that of an EBA because an SAA solves 4–5 variables simultaneously for each R_rs spectrum; fortunately, this demand can be met with greatly improved computer technology.
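The essence of the SAA, forward-modeling a spectrum and minimizing a spectral cost, can be sketched in a drastically reduced form: here the IOPs and bottom albedo are held fixed (all band values are assumed, illustrative numbers) so that depth is the only unknown, and the minimization is a brute-force grid search rather than the full 5-variable optimization of Equation (9).

```python
import math

# Assumed per-band attenuation (m^-1) and deep-water Rrs (sr^-1); illustrative
BANDS_KD = {443: 0.030, 482: 0.035, 561: 0.060, 655: 0.350}
RRS_DP = {443: 0.0050, 482: 0.0045, 561: 0.0030, 655: 0.0003}

def model_rrs(depth, rho):
    """Forward shallow-water Rrs per band (Equation (1) style, Ku ~ Kd)."""
    return {wl: RRS_DP[wl] * (1 - math.exp(-2 * kd * depth))
                + (rho / math.pi) * math.exp(-2 * kd * depth)
            for wl, kd in BANDS_KD.items()}

def cost(measured, modeled):
    """Spectral closure between measured and modeled Rrs (cf. the cost err)."""
    num = math.sqrt(sum((measured[w] - modeled[w]) ** 2 for w in measured))
    return num / sum(measured[w] for w in measured)

def invert_depth(measured, rho):
    """Grid-search the depth (0-25 m, 1 cm step) minimizing the cost."""
    best_h, best_c = 0.0, float("inf")
    for i in range(2501):
        h = i * 0.01
        c = cost(measured, model_rrs(h, rho))
        if c < best_c:
            best_h, best_c = h, c
    return best_h

# Recover the depth of a synthetic 4.2 m pixel over sand (rho assumed 0.2)
truth = model_rrs(4.2, 0.2)
```

Note that in the full SAA the residual cost at the minimum is the closure index discussed in the Introduction; as argued there, a small closure does not by itself guarantee a confident depth, since several variable combinations can fit a noisy spectrum comparably well.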
3. Data and Processing
Landsat-8 OLI and ICESat-2 measurements are used here to demonstrate the generation of H_img maps from collocated Lidar data and multiband imagers, along with a pixel-wise confidence score.
3.1. Landsat-8 Data
Landsat-8 is an extension of the earlier Landsat series [52] and was launched on February 11, 2013. Its OLI has seven bands in the ~440–2200 nm domain to measure the earth-atmosphere system. In particular, the spatial resolution of Landsat-8 OLI is 30 m, which resolves detailed features of coastal regions where the bottom depth can be highly heterogeneous. The Landsat-8 OLI Level-1 data processed by the Level-1 Product Generation System (LPGS) can be downloaded from the USGS website (https://glovis.usgs.gov/). The image data were processed using the SeaDAS package (v7.5) [53], and the atmospheric correction algorithm proposed by Bailey et al. [54] was adopted for the generation of R_rs. A low threshold of 0.0003 sr^{-1} is used for R_rs if the obtained R_rs is negative. Here, a few images over Florida Bay (24.76–24.89°N, 80.75–80.77°W; June 9 and October 15), the Great Bahama Bank (23.96–25.14°N, 76.80–76.93°W; March 7 and May 26), and the Great Barrier Reef (23.18–23.57°S, 151.68–151.93°E; August 8 and September 17) captured in 2019 were processed and utilized as examples.
3.2. ICESat-2 Data
ICESat-2 was launched on September 15, 2018. It is a follow-on mission to the Ice, Cloud and land Elevation Satellite (ICESat) and provides global altimetry and atmospheric measurements with an emphasis on surface elevation changes in polar regions [9]. The sole instrument onboard ICESat-2 is the Advanced Topographic Laser Altimeter System (ATLAS), a green (532 nm) wavelength, photon-counting laser altimeter with a 10 kHz pulse repetition rate [9, 55]. ATLAS uses photomultiplier tubes (PMTs) as detectors in photon-counting mode, so a single photon reflected back to the receiver triggers a detection within the ICESat-2 data acquisition system. This single-photon-sensitive detection technique used by ATLAS to measure photon time of flight provides the very high vertical resolution required to detect small temporal changes in polar ice elevations [56, 57], as well as the bottom depth of optically shallow waters [7].
3.3. Data Matchup and Statistical Measures
Matchup datasets between the Landsat-8 OLI and ICESat-2 measurements were organized with the following processing steps: the Landsat-8 OLI pixels of dense clouds were first discriminated and removed based on a threshold of the Rayleigh reflectance at the SWIR band (1238 nm) [58]. Meanwhile, pixels of low-quality R_rs were removed based on the standard Level-2 quality flags included in SeaDAS, which include ATMFAIL (atmospheric correction failure), LAND (land pixel), CLDICE (probable cloud or ice contamination), HILT (very high or saturated observed radiance), and HIGLINT (strong sun glint contamination).
The ICESat-2 bathymetry results presented in this work use geolocated photon data, contained in the ATL03 data product, which are segmented into granules that span about 1/14th of an orbit [59]. Both the OLI and ATL03 photon products include latitude and longitude information within the WGS84 coordinate reference system.
Considering that the variation of H (after tidal correction) is negligible within a short period, the time constraint for "concurrent" Landsat-8 OLI and ICESat-2 data is set as ±2 weeks, and the ICESat-2 H_Lidar product is adjusted to match the tidal stage of the Landsat-8 acquisition, where the classical tidal harmonic analysis model T_TIDE was applied to calculate tide information at the locations of interest [60]. To match the measurements between ICESat-2 H_Lidar and OLI R_rs, we first located the ICESat-2 track within the OLI image. For an ICESat-2 data point, a Landsat-8 pixel was first selected based on the closest distance. Since the spatial resolution of ICESat-2 along the orbit track is 0.7 m, while the footprint of OLI is 30 m, there are many ICESat-2 measurements within an OLI pixel. Therefore, for this OLI pixel, all ICESat-2 points within a radius of 15 m are used to calculate a mean H_Lidar value, which is considered the matchup for this OLI R_rs.
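The 15 m matchup averaging described above can be sketched as follows. The local equirectangular distance approximation and the tuple layout of the Lidar points are assumptions for this illustration; the study's actual geolocation handling is not specified here.

```python
import math

def matchup_mean(pixel_lon, pixel_lat, lidar_points, radius_m=15.0):
    """Mean Lidar depth of all points within radius_m of an OLI pixel centre.

    lidar_points: iterable of (lon, lat, depth_m) tuples. A local
    equirectangular approximation is adequate at the 15 m scale of a
    30 m OLI pixel.
    """
    r_earth = 6371000.0  # mean earth radius (m)
    depths = []
    for lon, lat, depth in lidar_points:
        dx = math.radians(lon - pixel_lon) * r_earth * math.cos(math.radians(pixel_lat))
        dy = math.radians(lat - pixel_lat) * r_earth
        if math.hypot(dx, dy) <= radius_m:
            depths.append(depth)
    if not depths:
        return None  # no matchup for this pixel
    return sum(depths) / len(depths)

# With 0.7 m along-track spacing, a few dozen ICESat-2 points can fall
# inside one pixel; here a tiny synthetic track is used.
track = [(0.0, 0.0001, 3.0), (0.00005, 0.0, 4.0), (0.0, 0.0002, 5.0)]
mean_depth = matchup_mean(0.0, 0.0, track)
```

The third synthetic point lies ~22 m from the pixel centre and is therefore excluded from the mean.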
To measure the deviation or error of the H_img product, in addition to the coefficient of determination (R²) between any two sets of data, the root mean square error (RMSE) and the mean absolute relative error (MARE) between H_img and H_Lidar are calculated:

RMSE = sqrt((1/N) Σ_{i=1}^{N} (H_img,i - H_Lidar,i)^2),

MARE = (1/N) Σ_{i=1}^{N} |H_img,i - H_Lidar,i| / H_Lidar,i,

where N is the total number of pairs used in the analyses. Note that the term "error," rather than "difference," is used in these analyses. This is because the uncertainty of H_Lidar is very low (a few centimeters); thus, H_Lidar can be considered the ground "truth," and any difference between H_img and H_Lidar most likely originates in the system used to produce H_img.
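The two statistics defined above are straightforward to compute; a minimal sketch (with the Lidar depths treated as truth, as in the text) is:

```python
import math

def rmse(h_img, h_lidar):
    """Root mean square error between image-derived and Lidar depths (m)."""
    n = len(h_img)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h_img, h_lidar)) / n)

def mare(h_img, h_lidar):
    """Mean absolute relative error, with the Lidar depth as the reference."""
    n = len(h_img)
    return sum(abs(a - b) / b for a, b in zip(h_img, h_lidar)) / n

# Two hypothetical matchup pairs, purely to exercise the formulas
err_rmse = rmse([2.0, 4.0], [1.0, 5.0])   # both residuals are 1 m
err_mare = mare([2.0, 4.0], [1.0, 5.0])   # relative errors 100% and 20%
```

Because MARE normalizes by the reference depth, shallow pixels dominate it, whereas RMSE is dominated by the largest absolute residuals; reporting both, as done here, covers both behaviors.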
4. Predictability of Empirical Schemes
For a robust empirical scheme, the first aspect is to check whether there are strong correlations between the input and the desired output, which is termed predictability here and measured by the coefficient of determination in linear regression (R²). A value of R² = 1 indicates 100% predictability or certainty. For the case of bathymetry, the output is the bottom depth, while the input is the spectral information (spectra of R_rs or ρ_toa here) or a value after mathematical transformation (e.g., X or the band ratio in Equations (2)–(5)). In the following, we use the compiled matchup datasets to show the different predictability of the abovementioned empirical schemes (TBRA, MBVA, MLA_Rrs, and MLA_ρtoa).
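For reference, the predictability measure used throughout this section is simply the squared Pearson correlation of the regression inputs and outputs; a minimal sketch:

```python
def r_squared(x, y):
    """Coefficient of determination of the linear regression of y on x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# A perfectly linear relation gives R^2 = 1 (100% predictability);
# a nearly uncorrelated pair gives a value close to 0.
perfect = r_squared([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
weak = r_squared([1.0, 2.0, 3.0, 4.0], [1.0, -1.0, 1.0, -1.0])
```
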
4.1. Predictability with Data from One Image
Many publications [24, 26, 41] have shown strong predictability (high R²) of the EEA (TBRA or MBVA) for the estimation of H_img from R_rs. Such high predictability, however, is not always the case [28] (also see Table 2). Figure 2 shows matchup measurements (>3500 pairs) over the Great Bahama Bank, an environment with generally clear water and shallow depths [43, 61, 62]. The OLI R_rs was obtained on March 7, 2019, where H_Lidar (obtained on March 16, 2019) ranges ~1.5–9.0 m after tide correction. The R² value between the ratio term (for OLI Band 2 and Band 3) and H_Lidar is ~0.36 (Figure 2(b)), which drops to ~0.20 when R_rs is changed to ρ_toa. These values indicate that such a ratio explains at most <40% of the variance for this dataset, although the radiometric measurements came from the same image and the matchup points (see the red dashed line in Figure 2(a)) span ~110 km. Most of the remaining variance (>60%) likely comes from the water column and bottom properties (i.e., assuming that uncertainties from sun-sensor geometry and atmospheric properties can be omitted), and these variations cannot be resolved from the ratio. These R² values are significantly lower than those reported in previous studies [24, 26, 63], indicating a high data or environmental dependence of the TBRA and its algorithm coefficients (m_0, m_1). The use of 20 or so data pairs to obtain a stable set of coefficients [63, 64] is likely a special case rather than a common situation. This also echoes the findings of earlier studies [25, 27, 65] that one set of empirical coefficients cannot satisfy all pixels, even for the same image, unless the threshold for acceptable uncertainty is relaxed.

The R² value increases to 0.88 (Figure 2(c)) if the MBVA (Equation (4)) is used to predict H_img for the same dataset, indicating significantly higher predictability of the MBVA for this dataset or environment. The results are even better (R² of 0.91), although not by much, if an MLA with 1 hidden layer and 5 neurons (for an R_rs spectrum containing four visible bands) is used (Figure 2(d) and Table 2). These results highlight the importance of using more bands [66], explicitly (MBVA) or implicitly (MLA), to account for the likely changes of water properties and bottom substrates across an image.
In an MLA, each neuron is similar to a free variable in a multivariate nonlinear regression; thus, more free variables tend to improve regressions. For the MLA with 1 hidden layer and 5 neurons (for R_rs), the number of free variables is equivalent to that of the MBVA; therefore, the results suggest an improved capability of the MLA to pick up hidden relationships between R_rs and H. This predictability is further improved with a deep learning architecture of 3 hidden layers and more neurons (see Table 2), indicating a great potential of machine learning for the empirical estimation of bottom depth. Further, the statistical measures are nearly the same (see Table 2) between using R_rs and using ρ_toa (1 hidden layer with 8 neurons, i.e., the number of spectral bands plus 1) as the input to an MLA. These results suggest that, through a nonlinear scheme like an MLA, it is feasible to bypass the atmospheric correction step and retrieve H_img directly from the top-of-atmosphere measurements [46].
4.2. Predictability with Data from Multiple Images
To further observe the impact of data on the predictability of using two bands or multiple bands, especially the tolerance of the MLA, a total of 5172 pairs of collocated Landsat-8 OLI and ICESat-2 data covering waters of the Bahamas (23.96–25.14°N, 76.80–76.93°W), Florida Bay (24.76–24.89°N, 80.75–80.77°W), and the Great Barrier Reef (23.18–23.57°S, 151.68–151.93°E) were compiled. For H_Lidar, corrected to match the tidal stage of the OLI measurements and spanning ~1.0–11.0 m, the TBRA, MBVA, and MLA produced R² values of 0.48, 0.84, and 0.91, respectively (see Table 3). MLA_ρtoa performs slightly better than MLA_Rrs across these multiple images with their various atmospheric properties, further supporting the concept of obtaining H_img from ρ_toa when an MLA is applicable. The improved predictability of the MBVA and MLA is echoed by the MARE and RMSE values (see Tables 2 and 3), which are calculated after the model coefficients are determined through tuning or training. For instance, the MARE value is ~10% with an RMSE of 0.54 m for the MBVA, and the MARE is ~8% with an RMSE of 0.52 m for the MLA. However, the TBRA has a MARE value of 27% with an RMSE of 1.25 m (see Table 3), about three times the values obtained using the MBVA and MLA. These evaluations indicate the improved predictability of using more bands, rather than information from just two bands, for the calculation of H_img.

Such a result is expected because, as shown by Equation (9), R_rs of shallow water is governed by at least 4–5 variables; thus, a ratio of R_rs at two bands cannot resolve all unknowns, unless some of them are nearly constant or covary with each other for a region of interest. However, even for this region in the Bahamas, as shown by Barnes et al. [61] and Garcia et al. [62], the depths, IOPs, and bottom substrates vary spatially. This is further evidenced by the spatial variation of K_d (see Figure 3) derived from HOPE for the matchup data in Figure 2, where K_d varied from ~0.03 to 0.09 m^{-1}, showing limited correlation (R² of 0.18, and an inverse relationship) with H_Lidar. For such wide variability in K_d (with no covariance with H), which plays a key role in the spectral variation of an R_rs spectrum, more bands and more free variables in an algorithm would improve the predictability. Note that K_d in Figure 3 is derived from HOPE with H fixed as H_Lidar (after tidal correction) and G fixed as 0.002 m^{-1} (see Section 5.2 for details), so the only variables in Equation (9) are P, X, and B; therefore, the resulting values (and then K_d) are more reliable after the reduction of variables.
5. Applicability and Confidence Measure
The ultimate goal of any algorithm is to apply it to new measurements, i.e., data not used in the tuning or training, in order to obtain the desired remotely sensed product. While an empirical algorithm for H_img can be easily developed from collocated imagery and H_Lidar data, the extent to which such an algorithm can be applied to new data is unknown. It has been demonstrated that the model coefficients (e.g., h_i and m_i in Equations (2)–(5)) developed from one image cannot be applied to another image if low uncertainties are the goal [26]. The scatter in the regression shown in Figure 2 indicates that these empirical coefficients may not be applicable even for locations within the same image, unless larger uncertainties are acceptable.
Conventionally, the applicability of an algorithm is assessed by evaluating its performance on an independent dataset, with the reported RMSE and/or MARE values as justification [25–27]. It is necessary to keep in mind that such averages, although informative, are dependent on the data pool and do not represent the error or uncertainty of each pixel [34]. Because different users have different tolerances for the uncertainty of H_img (e.g., a high accuracy requirement for navigation), the average error is insufficient to inform all users of H_img. It is necessary and important to provide a confidence measure for H_img products at each grid or pixel. The following addresses the confidence associated with both the EBA and SAA, with a first-ever attempt to provide a pixel-wise confidence measure for H_img.
5.1. Applicability of EBA and Measure of Confidence
5.1.1. Issues of the H_img Map from Landsat-8 OLI
Following the practices commonly presented in the literature [23, 25], an H_img map of 30 m resolution (see Figure 4) over the Great Bahama Bank was generated from a Landsat-8 OLI image (May 26, 2019) with a TBRA tuned using matchup data generated from this image and ICESat-2 bathymetry (May 25, 2019; the red dashed line in this map; a total of 1707 pairs of data). As shown in the literature [23, 25] and as desired, the discrete or line-type bathymetry product from ICESat-2 is now expanded to form a bathymetry map. Overall, for the western side of Andros Island, Great Bahama Bank, the bottom depth ranged from ~2.0 to 8.0 m. This is consistent with our general understanding and with depth retrievals from other observations and methods [61, 62, 67]. On average, the difference is ~28.0% when compared with that derived from MERIS [62], which is consistent with those reported in the literature [26, 28]. However, there are obviously erroneous outputs, where the bathymetry is ~15.0 m for the Tongue of the Ocean (TOTO), which is known to be ~2000 m deep. In other words, a "false positive" of a shallow bottom is derived from the TBRA (a similar "shallow bottom" for TOTO is also observed from the MBVA and MLA; results not shown here). Such false positives can also be found in Caballero and Stumpf [26] for waters around Dry Tortugas, Key West (see Fig. 6d of Caballero and Stumpf [26]). These false positives result from two inherent limitations of empirical algorithms for H_img:
(1) Empirical algorithms for H_img (e.g., Equations (2)–(8)) are developed using data from optically shallow waters, as only such data carry an optical signal from the bottom.
(2) By design, an empirical algorithm is data-driven; i.e., it can only be applied to measurements with similar characteristics as the training data.
When an EBA, such as the TBRA, is applied to multiband imagery, however, these two basic requirements or assumptions are hardly tested or evaluated a priori. In other words, an H_img map is generated by blindly assuming that the algorithm is applicable to every R_rs in the image, not just those used during the algorithm tuning. Consequently, an erroneous bottom depth over TOTO was generated (see Figure 4). For this image, we know TOTO is optically deep, so such false products can easily be ignored or masked out. However, we do not know a priori the optical properties of every location or pixel within an image; thus, it is not certain whether the environment is optically shallow or not. Therefore, it is not straightforward to mask out optically deep waters with empirical algorithms such as Equations (2)–(8). As such, the resultant H_img of deeper depths is usually masked out manually, and arbitrarily (e.g., [68]).
5.1.2. Criteria to Check the Applicability of an Empirical Algorithm for H_img
To confidently apply an EBA algorithm to a new spectrum, to the least, it is important and necessary to check if the from this target location meets the following two criteria:
(1) Criterion 1 (Cr1). If it is optically shallow; and
(2) Criterion 2 (Cr2). If there is an identical or similar spectrum in the training pool.
These two criteria are omitted or ignored in present practices related to EBA for H_{RS}, although an SAA can separate optically deep from optically shallow waters during data processing [20, 35].
Given that the EBA formulation for deriving depth cannot provide information on whether a pixel is optically deep or shallow, a neural network (NN_{OSW}) based on the Multilayer Perceptron (MLP) was developed to aid in the determination of Cr1. The MLP is a class of feedforward artificial neural networks (ANN) composed of one input layer and one or more hidden layers associated with one output layer. Since Landsat data are used here, the Rrs values of the four visible bands are the input, while the output is optically deep (assigned a value of 0) or shallow (assigned a value of 1). The number of hidden layers and the number of neurons in each layer were determined following the concept of minimum loss, a common approach for developing a deep learning system. Data used for the training came from known optically deep (Landsat measurements in Massachusetts Bay, Chesapeake Bay, and TOTO) and optically shallow (Great Bahama Bank, Florida Keys) environments. After many training attempts, two hidden layers with 32 and 16 neurons were found to provide the best performance for this separation. The Rectified Linear Unit (ReLU) is employed as the activation function of the hidden layers, which largely avoids vanishing gradients. Since this is a binary classification, the activation function of the output layer is a Sigmoid function. Training was considered complete when the loss function converged and the iteration stopped. Figure 5 shows an example of the OSW classification after applying NN_{OSW} to the Landsat-8 OLI image displayed in Figure 1. Although the deep vs. shallow separation may not be perfect at this stage, waters of TOTO are optically deep and clearly separated. Since all neural network-based algorithms are data-driven, we envision that this initial NN_{OSW} will be updated as more optically deep and shallow data are employed.
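The forward pass of the network described above (4 inputs, two hidden layers of 32 and 16 neurons with ReLU, one Sigmoid output) can be sketched as follows. This is only the inference structure with untrained random weights; the training loop (loss minimization over the deep/shallow matchups) is omitted, and the weight initialization is a placeholder.

```python
import math
import random

random.seed(0)  # deterministic placeholder weights

def relu(v):
    return [max(0.0, x) for x in v]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(x, w, b):
    # one fully connected layer: w is [n_out][n_in], b is [n_out]
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def make_layer(n_in, n_out):
    # untrained random weights; a real NN_OSW would learn these
    return ([[random.uniform(-0.5, 0.5) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

# 4 Rrs bands -> 32 -> 16 -> 1, as described in the text
w1, b1 = make_layer(4, 32)
w2, b2 = make_layer(32, 16)
w3, b3 = make_layer(16, 1)

def nn_osw(rrs):
    """Return a score in (0, 1): near 1 -> optically shallow, near 0 -> deep."""
    h1 = relu(dense(rrs, w1, b1))
    h2 = relu(dense(h1, w2, b2))
    return sigmoid(dense(h2, w3, b3)[0])
```

In practice a pixel would be labeled optically shallow when the score exceeds 0.5.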
Further, a similarity index (SIM_{Rrs}) is designed for Cr2, such that the higher the SIM_{Rrs} value, the higher the measure of similarity, and therefore the more likely the target spectrum was “learned” in the phase of algorithm development. In this effort, SIM_{Rrs} is defined and calculated as follows.
A target spectrum (Rrs^{T}) is evaluated against each spectrum in the training pool (Rrs^{i}):

MARD1_{Rrs} = (1/N) Σ_{λ} |Rrs^{T}(λ) − Rrs^{i}(λ)| / Rrs^{i}(λ) (12)

MARD2_{Rrs} = Σ_{λ} |Rrs^{T}(λ) − Rrs^{i}(λ)| / Σ_{λ} Rrs^{i}(λ) (13)
Here, MARD_{Rrs} represents a mean absolute relative difference between two spectra, with N the number of bands and i the index of the ith Rrs in the training pool. Equations (12) and (13) show two ways of quantifying MARD: a band with a small Rrs value plays a bigger role for MARD1_{Rrs}, while bands with large Rrs values play a larger role for MARD2_{Rrs}. Since there are many Rrs in the data pool (1707 pairs in this case), many MARD1_{Rrs} and MARD2_{Rrs} values will be obtained for a given target; the minimum of the combination of MARD1_{Rrs} and MARD2_{Rrs} is selected for the quantification of SIM_{Rrs}, calculated as

SIM_{Rrs} = 1 − min_{i}(0.5 × MARD1_{Rrs} + 0.5 × MARD2_{Rrs})
The use of 50% weights for both MARD1_{Rrs} and MARD2_{Rrs} is a compromise between the two ways of evaluating spectral differences.
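The similarity calculation can be sketched directly from the definitions above. This is a minimal reading of Equations (12) and (13) and the 50/50 weighting; the "1 − min" mapping is our interpretation of the text, chosen so that an identical spectrum in the pool yields SIM_{Rrs} = 1.

```python
def mard1(t, r):
    # Eq. (12): band-wise relative differences; bands with small Rrs weigh more
    return sum(abs(ti - ri) / ri for ti, ri in zip(t, r)) / len(t)

def mard2(t, r):
    # Eq. (13): differences normalized by total Rrs; large Rrs values weigh more
    return sum(abs(ti - ri) for ti, ri in zip(t, r)) / sum(r)

def sim_rrs(target, pool):
    """SIM_Rrs of a target spectrum against a training pool of spectra."""
    return 1.0 - min(0.5 * mard1(target, r) + 0.5 * mard2(target, r)
                     for r in pool)
```

For example, a target spectrum identical to one training spectrum gives SIM_{Rrs} = 1, while any spectral difference lowers the index.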
5.2. Confidence Score System for H_{RS}
We thus propose to use this similarity measure to gauge the likely quality of H_{RS}. For instance, if SIM_{Rrs} = 1 for an Rrs, there is an identical Rrs in the training pool. Further, we know the absolute relative error of H_{RS} (ARE_{H}) for each Rrs in the data pool (see Figure 6 for example); thus, the ARE_{H} expected for this Rrs is that of its identical counterpart, which can be found in the data pool. Therefore, based on the value of ARE_{H}, the confidence or quality of each H_{RS} can be classified as detailed below. Here, ARE_{H} is calculated as

ARE_{H} = |H_{RS} − H_{Lidar}| / H_{Lidar} × 100%
While a low SIM_{Rrs} value indicates low confidence in H_{RS} (i.e., the Rrs is likely outside the data range used in training), a high SIM_{Rrs} value does not automatically guarantee high confidence or high accuracy of H_{RS}. As shown in Figure 6, although the mean ARE_{H} is ~5% for the entire dataset (the R^2 value is 0.69 between H_{RS} and H_{Lidar}), this does not suggest it is 5% for each point. For some data points, the ARE_{H} could be as high as 20%. Thus, if a target Rrs matches an Rrs in the pool having an ARE_{H} of 20%, the H_{RS} of this target is expected to have a similar relative error from this algorithm.
Following the above indications, we designed a preliminary confidence score system (CSS) based on both SIM_{Rrs} and ARE_{H} in the data pool to classify the quality of the H_{RS} product. At this initial stage, the CSS coarsely classifies the confidence of H_{RS} into three classes, low, medium, or high, determined by a decision tree (see Figure 7 for details). With this tree, H_{RS} from an Rrs spectrum having both a high SIM_{Rrs} value and a low ARE_{H} value can be considered to have high confidence.
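A decision tree of this kind can be sketched as below. The 25% low-confidence cut follows the criterion mentioned in the evaluation that follows; the SIM_{Rrs} cut (0.95) and the 10% high-confidence cut are placeholders standing in for the actual thresholds of Figure 7, which are not reproduced here.

```python
def css_eba(sim, are_matched, sim_min=0.95, are_low=0.25, are_high=0.10):
    """Coarse three-class confidence for one H_RS pixel from an EBA.

    sim         : SIM_Rrs of the pixel's Rrs against the training pool
    are_matched : ARE_H (as a fraction) of the closest training spectrum
    sim_min, are_high are placeholder thresholds; are_low = 25% follows
    the low-confidence criterion stated in the text.
    """
    if sim < sim_min or are_matched >= are_low:
        return "low"      # spectrum not "learned", or its twin retrieves poorly
    if are_matched <= are_high:
        return "high"     # well-represented spectrum with a low known error
    return "medium"
```

Applied pixel by pixel, such a function yields a confidence map to accompany the H_{RS} map.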
Figure 8 shows a map of confidence for the H_{RS} product presented in Figure 4. Generally, for pixels not too far from the track of ICESat-2 and with depths of ~6.0–8.0 m, the confidence of H_{RS} is high, a result of similar characteristics in the data and the environment used in the training. For pixels near the coast of Andros Island and those in the Exumas region (east of TOTO), where the retrieved H_{RS} is generally under ~4.0 m, the confidence of H_{RS} is low. This is because the data pool used to develop the empirical algorithm has a depth range of ~5.0–8.0 m (see Figure 6); thus, this empirical algorithm did not “learn” the spectral characteristics of waters shallower than ~4.0 m or with very different bottom and/or water properties (see Figure 3 for the wide variation of Rrs). This low confidence is further confirmed using H_{Lidar} obtained on March 16, 2019 (the black line in Figure 8). After adjusting the tidal cycle to match the image time of May 26, 2019, and assuming no significant changes of bottom topography in the intervening ~70 days, it was found that ARE_{H} is generally around 40% or more (see Figure 9), above the 25% criterion of low confidence. The overall accuracy of classifying low-confidence pixels is 80.2%, while the accuracy of classifying medium- and high-confidence pixels is ~1%. This extremely low accuracy for medium- to high-confidence pixels is due to the low number of such data points (see Figure 3) and is thus statistically not significant. The less-than-perfect classification result comes, in part, from a few (~200) measurements where H_{Lidar} values are less than 4.0 m but the ARE_{H} values are under 10%; i.e., they belong to the high-confidence category. This excellent performance of TBRA for these pixels deserves further study, as the range of H_{Lidar} used in TBRA development was ~5.0–8.0 m (see Figure 6).
Nevertheless, these results (Figures 6, 8, and 9) highlight that, unlike Sonar- or Lidar-produced bathymetry, where the uncertainty in measurement is generally uniform, the uncertainty or confidence of the H_{RS} product is far from uniform [34]; thus, it is important and necessary to have a pixelwise measure of the quality of H_{RS}. Further, the >80% success rate suggests that the CSS does provide a good indication of the confidence of H_{RS}, although there is room for improvement.
Caballero and Stumpf [26] suggested the use of multiple acquisitions to measure the performance of an algorithm, with an assumption that bottom depth should remain the same (after tidal correction) for a short period of time. These multiple observations are useful and important [51], but they may not overcome systematic biases embedded in an empirical algorithm. One example is the “shallow bottom” of TOTO (Figure 4); such “shallow bottom” will repeat itself when similar empirical algorithms are applied to new multiband images.
5.3. Applicability of SAA
5.3.1. Example of H_{RS} from Landsat-8 OLI
SAA is not data-driven; its applicability depends on the Rrs spectrum itself, as well as on the bio-optical models and the simplified expression for Rrs [13, 21, 35, 50]. As articulated in many studies, SAA requires a highly accurate Rrs spectrum as input, as errors in Rrs will propagate into the retrieved IOPs and/or H_{RS} [21, 33, 35]. While empirical algorithms can overcome some systematic errors in Rrs in the tuning or training phase, SAA, at least in its present form, cannot. In addition, the number of wavelengths plays an important role in the retrieval of H_{RS} [50]. This is because, within an SAA, the IOPs and bottom properties are assumed to be independent variables, while empirical algorithms (especially MLA) may find, and remedy to some extent, some hidden relationships among them and therefore transfer systematic biases or relationships into the algorithm coefficients (explicitly or implicitly).
To demonstrate the retrieval of H_{RS} from Landsat-8 OLI data with an SAA, the default HOPE algorithm was applied to the data pairs shown in Figure 2(a) (the red line), a data pool with a wide range of bottom depths (H_{Lidar} is ~1.5–9.0 m) and dynamic water properties (absorption at 440 nm in a range of ~0.03–0.09 m^{-1}). Because Landsat-8 OLI has only four usable bands for shallow-water remote sensing (the 865 nm band carries nearly no information about the water column and bottom for most water bodies), Equation (9) is underdetermined. Considering that the spectral shapes of the phytoplankton and gelbstoff absorption coefficients in the 440–561 nm range are similar, and that neither makes a significant contribution to the total absorption at 561 nm, the phytoplankton absorption coefficient at 440 nm in Equation (9) was fixed as 0.002 m^{-1} in order to process Landsat-8 Rrs, and this modified version is termed HOPE_{LS8}. This fixed value of 0.002 m^{-1} simply reflects that, for waters in this region, it is close to the lowest value of this absorption coefficient at 440 nm [61]. Also, note that HOPE_{LS8} is certainly subject to refinement, but that is not the focus here.
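The mechanics of such an inversion can be illustrated with a toy one-parameter version. The forward model below uses the classical two-flow shallow-water form of Maritorena et al. [39] (deep-water reflectance plus an exponentially attenuated bottom term) as a simplified stand-in for the full HOPE expression of Equation (9), and it retrieves only H by brute-force search while holding the optical properties fixed; a real SAA such as HOPE_{LS8} retrieves the IOPs simultaneously with H by nonlinear optimization. All numerical values here are illustrative, not from the paper.

```python
import math

def rrs_model(H, kd, rrs_dp, rho):
    """Simplified shallow-water reflectance per band:
    rrs = rrs_deep*(1 - exp(-2*Kd*H)) + (rho/pi)*exp(-2*Kd*H)."""
    return [rd * (1.0 - math.exp(-2.0 * k * H))
            + (r / math.pi) * math.exp(-2.0 * k * H)
            for k, rd, r in zip(kd, rrs_dp, rho)]

def invert_depth(rrs_obs, kd, rrs_dp, rho, h_grid=None):
    """Return the H on a coarse grid minimizing the spectral residual."""
    if h_grid is None:
        h_grid = [0.1 * i for i in range(1, 301)]  # 0.1 .. 30.0 m
    def cost(H):
        mod = rrs_model(H, kd, rrs_dp, rho)
        return sum((m - o) ** 2 for m, o in zip(mod, rrs_obs))
    return min(h_grid, key=cost)
```

The residual minimized here is also the quantity used later as a first-order quality gate: a large remaining misfit after optimization signals questionable retrievals.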
Figure 10(a) compares the profile of H_{RS} from HOPE_{LS8} with H_{Lidar} from ICESat-2, which are two independent determinations; an R^2 value of 0.66 was obtained. Figure 10(a) also shows the profile of ARE_{H} for each pair, which ranges from 0.2 to 100%, with a median value of 17.7%. Although a generally consistent bathymetry pattern of H_{RS} from HOPE_{LS8} is obtained, these statistical measures do suggest that substantially more effort is required if high-confidence H_{RS} is to be retrieved by HOPE_{LS8} from such multiband Rrs.
There could be many sources contributing to the moderate performance of the retrieved H_{RS}. These include the sensor’s calibration, the atmospheric correction, the bio-optical models used in HOPE_{LS8}, and the number of available bands. It is not within the scope of this effort to address the impact of those elements or the refinement of HOPE_{LS8}, where algorithm improvement is constantly ongoing. Here, we focus on the necessity and development of a CSS to measure the pixelwise confidence of H_{RS} retrieved from an SAA (such as HOPE_{LS8}). Brando et al. [35] developed a system to classify the quality of H_{RS} into two categories (good or bad) based on the residual between the measured and modeled spectra (Equation (10)) and assumed that H_{RS} has high confidence when a low residual (i.e., good closure between the measured and modeled spectra) is obtained. However, this residual is determined by various components and many sources; the same residual can correspond to different retrieval results. For instance, when the bio-optical models are modified, a different H_{RS} would be retrieved, but the value of the residual can remain the same. Thus, as demonstrated in Figure 10(b) and previous studies [13, 35, 49], there is no relationship between the residual and ARE_{H} (R^2 is ~0.1); hence, the residual alone is insufficient to indicate the quality of H_{RS} when it is retrieved with an SAA. A small residual, by its definition, indicates only a high agreement between the measured and modeled spectra.
5.3.2. Confidence Score System for SAA-Derived H_{RS}
Following the CSS scheme for EBA, a prototype CSS for H_{RS} from HOPE (CSS_{HOPE}) was also developed and is presented in Figure 11. Since an SAA determines a set of solutions by minimizing the residual between the measured and modeled Rrs, a first-order decision could be based on the value of this residual [13, 35]. If the residual is higher than a threshold, the closure between the input and output spectra is not good enough; thus, the retrieved H_{RS} could be questionable [13, 35]. Here, we tentatively set this threshold as 0.02, as most residual values are found to be smaller than this (see Figure 10(b)) for the data pool shown in Figure 2. When there are no collocated and reliable H_{Lidar} data available, the maximum relative contribution from the bottom to the total Rrs is used as an indicator to gauge the confidence of the estimated H_{RS} [20, 35]. Too low a value (usually the threshold is 20% [20]) suggests a low contribution from the bottom and low confidence in the retrieved bottom properties [35]. Since there are matchups between H_{RS} and H_{Lidar} here, the CSS developed for EBA could be employed for pixels with residual values below the threshold, as illustrated in Figure 11. Thus, for H_{RS} derived from HOPE_{LS8}, a companion confidence score could also be produced.
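The gating logic just described can be sketched as two early exits followed by an EBA-style classification. The 0.02 residual threshold and the 20% bottom-contribution cut follow the text and [20]; the SIM_{Rrs} and ARE_{H} cuts inside the final step are placeholders for the Figure 11 decision tree, not values from the paper.

```python
def css_hope(residual, bottom_frac, sim, are_matched,
             resid_max=0.02, bottom_min=0.20):
    """Confidence class for one SAA-derived H_RS pixel.

    residual    : spectral closure value of Equation (10); 0.02 per the text
    bottom_frac : max relative bottom contribution to total Rrs; 20% per [20]
    sim, are_matched : inputs to the EBA-style step (placeholder cuts below)
    """
    if residual > resid_max:
        return "low"    # poor closure between measured and modeled spectra
    if bottom_frac < bottom_min:
        return "low"    # bottom barely contributes; effectively optically deep
    # pixels passing both gates fall through to the EBA-style classification
    if sim >= 0.95 and are_matched <= 0.10:
        return "high"
    return "medium" if are_matched < 0.25 else "low"
```

As with the EBA case, evaluating this function per pixel yields the companion confidence map (CSS_{HOPE}).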
As an example, Figure 12 shows a map of H_{RS} obtained from HOPE_{LS8} (Figure 12(a)) and its confidence map (CSS_{HOPE}, Figure 12(b)). Similar to the bathymetry map obtained from TBRA, the depth to the west of Andros Island obtained from HOPE_{LS8} also has a range of ~4.0–8.0 m, a pattern generally consistent with earlier observations. For waters of TOTO, the map shows a depth of 20 m, which is basically the upper boundary preselected within the HOPE_{LS8} system; in fact, the contribution from the bottom is negligible when processed with HOPE_{LS8}, so TOTO can be easily marked as optically deep water, as in Lee et al. [20] and Brando et al. [35]. More importantly, the pixelwise quality of H_{RS} (CSS_{HOPE}) shown in Figure 12(b) provides a clearer indication of the confidence of the H_{RS} product pixel by pixel. Similar to Figure 8, higher confidence of H_{RS} is found for pixels around the ICESat-2 track, and low confidence is found for locations near the coast. Evaluation using H_{Lidar} (March 16, 2019; the black dashed line in Figure 12(b)) indicates a success rate of ~99% in identifying low-confidence pixels. In addition, there are differences in the distributions of confidence between the two H_{RS} products (see Figures 4 and 12), a clear indication of the different performance of the different approaches for bathymetry. On the other hand, because H_{RS} from an SAA (e.g., HOPE_{LS8} here) is a determination independent of H_{Lidar}, SAA offers an opportunity to check the consistency between the two measurements, which is not possible with EBA.
6. Conclusions and Future Perspective
Through many decades of effort, there is no shortage of remote sensing products from multiband or hyperspectral imagers, but there is a shortage of remote sensing products accompanied by a confidence measure; this is especially true for the remote sensing of bathymetry. Compared to active measurements of bottom depth by Sonar or Lidar, the retrieved H_{RS} still faces difficulties in its applications by the broader community, where a key limiting factor until now has been the lack of pixelwise confidence for the H_{RS} product.
To fill this void, a prototype confidence score system (CSS) for H_{RS} is proposed for the first time, which at present classifies all pixels in an H_{RS} map of OSW into three categories (low, medium, and high) with a preliminary set of criteria. Since this CSS involves both the algorithm coefficients and the data used for the development of empirical algorithms, it is logical that not only the algorithm function and model coefficients be reported but also that the data pool used for the algorithm development be deposited in a common data portal. In the future, while it is always necessary and important to continue the refinement of these algorithms, it is also important, and urgent, to develop, revise, or refine such system(s) to measure the confidence of the resulting H_{RS} pixel by pixel. Specifically, this includes a refinement of the quality classes, thresholds, and settings of the criteria, as well as the desired statistical measures. Only H_{RS} products of high confidence from multiple images could then be merged to form a reliable map for the broad user communities. We call on the ocean color community to refine such schemes or to develop brand-new systems, so that a mature and widely endorsed system can be implemented to clearly measure the quality of H_{RS}, a critical parallel product of remotely sensed bathymetry. To reach this goal, it is also urgent and important to compile, by the community and for the community, an inclusive data pool of collocated or concurrent measurements of H and high-quality Rrs spectra over a wide range of depths and environments.
Data Availability
The satellite data used to support the findings of this study are publicly available. Landsat-8 data can be downloaded from the USGS website (https://glovis.usgs.gov/), while ICESat-2 data can be downloaded from the National Snow and Ice Data Center (NSIDC), where the geolocated photon data (ATL03) can be found online at https://nsidc.org/data/atl03.
Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Authors’ Contributions
ZL conceptualized the study and drafted and finalized the manuscript; MS helped in analyzing Lidar data; RG helped in Landsat-8 data processing and finalizing the manuscript; WL developed machine learning algorithms; XL helped in ICESat-2 data processing; JW processed Landsat-8 data; XY helped in data matching and empirical algorithms.
Acknowledgments
Financial support by the Chinese Ministry of Science and Technology through the National Key Research and Development Program of China (#2016YFC1400904 and #2016YFC1400905) and the National Natural Science Foundation of China (#41941008, #41890803, and #41830102), the Joint Polar Satellite System (JPSS) funding for the NOAA ocean color calibration and validation (Cal/Val) project, and the University of Massachusetts Boston is greatly appreciated. The authors would like to thank the NASA ICESat-2 team for providing the data used in this study. The ICESat-2 data are publicly available through the National Snow and Ice Data Center (NSIDC). The geolocated photon data (ATL03) are available online (https://nsidc.org/data/atl03), with descriptions in the cited reference of Neumann et al. [60].
References
 T. Kutser, J. Hedley, C. Giardino, C. Roelfsema, and V. E. Brando, “Remote sensing of shallow waters – a 50 year retrospective and future directions,” Remote Sensing of Environment, vol. 240, article 111619, 2020. View at: Publisher Site  Google Scholar
 J. Copley, “Just how little do we know about the ocean floor?” The Conversation, vol. 9, 2014. View at: Google Scholar
 D. Sandwell, “Bathymetry from space is now possible,” Eos, vol. 84, no. 5, pp. 37–44, 2003. View at: Publisher Site  Google Scholar
 D. T. Sandwell, W. H. F. Smith, S. Gille et al., “Bathymetry from space: rationale and requirements for a new, highresolution altimetric mission,” Comptes Rendus Geoscience, vol. 338, no. 1415, pp. 1049–1062, 2006. View at: Publisher Site  Google Scholar
 G. C. Guenther, Digital Elevation Model Technologies and Applications: The DEM Users Manual, D. F. Maune, Ed., vol. 2, Asprs Publications, 2007.
 C. W. Wright, C. Kranenburg, T. A. Battista, and C. Parrish, “Depth Calibration and Validation of the Experimental Advanced Airborne Research Lidar, EAARLB,” Journal of Coastal Research, vol. 76, pp. 4–17, 2016. View at: Publisher Site  Google Scholar
 C. E. Parrish, L. A. Magruder, A. L. Neuenschwander, N. ForfinskiSarkozi, M. Alonzo, and M. Jasinski, “Validation of ICESat2 ATLAS bathymetry and analysis of ATLAS’s bathymetric mapping performance,” Remote Sensing, vol. 11, no. 14, p. 1634, 2019. View at: Publisher Site  Google Scholar
 R. C. Hilldale and D. Raff, “Assessing the ability of airborne LiDAR to map river bathymetry,” Earth Surface Processes and Landforms, vol. 33, no. 5, pp. 773–783, 2008. View at: Publisher Site  Google Scholar
 T. Markus, T. Neumann, A. Martino et al., “The Ice, Cloud, and land Elevation Satellite2 (ICESat2): science requirements, concept, and implementation,” Remote Sensing of Environment, vol. 190, pp. 260–273, 2017. View at: Publisher Site  Google Scholar
 F. C. Polcyn, W. L. Brown, and I. J. Sattinger, The Measurement of Water Depth by RemoteSensing Techniques, University of Michigan, Ann Arbor, 1970.
 D. R. Lyzenga, “Passive remote sensing techniques for mapping water depth and bottom features,” Applied Optics, vol. 17, no. 3, pp. 379–383, 1978. View at: Publisher Site  Google Scholar
 D. R. Lyzenga, “Remote sensing of bottom reflectance and water attenuation parameters in shallow water using aircraft and Landsat data,” International Journal of Remote Sensing, vol. 2, pp. 71–82, 1981. View at: Publisher Site  Google Scholar
 Z. P. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters: 2. Deriving bottom depths and water properties by optimization,” Applied Optics, vol. 38, no. 18, pp. 3831–3843, 1999. View at: Publisher Site  Google Scholar
 R. Garcia, Z.P. Lee, and E. J. Hochberg, “Hyperspectral ShallowWater Remote Sensing with an Enhanced Benthic Classifier,” Remote Sensing, vol. 10, p. 147, 2018. View at: Publisher Site  Google Scholar
 J. D. Hedley and P. J. Mumby, “A remote sensing method for resolving depth and subpixel composition of aquatic benthos,” Limnology and Oceanography, vol. 48, no. 1, part2, pp. 480–488, 2003. View at: Publisher Site  Google Scholar
 J. Hedley, C. Roelfsema, and S. R. Phinn, “Efficient radiative transfer model inversion for remote sensing applications,” Remote Sensing of Environment, vol. 113, no. 11, pp. 2527–2532, 2009. View at: Publisher Site  Google Scholar
 W. M. Klonowski, P. R. Fearns, and M. J. Lynch, “Retrieving key benthic cover types and bathymetry from hyperspectral imagery,” Journal of Applied Remote Sensing, vol. 1, article 011505, 2007. View at: Publisher Site  Google Scholar
 J. Hedley, B. Russell, K. Randolph, and H. Dierssen, “A physicsbased method for the remote sensing of seagrasses,” Remote Sensing of Environment, vol. 174, pp. 134–147, 2016. View at: Publisher Site  Google Scholar
 C. D. Mobley, L. K. Sundman, C. O. Davis et al., “Interpretation of hyperspectral remotesensing imagery by spectrum matching and lookup tables,” Applied Optics, vol. 44, no. 17, pp. 3576–3592, 2005. View at: Publisher Site  Google Scholar
 Z. P. Lee, K. L. Carder, R. F. Chen, and T. G. Peacock, “Properties of the water column and bottom derived from Airborne Visible Infrared Imaging Spectrometer (AVIRIS) data,” Journal of Geophysical Research, vol. 106, no. C6, pp. 11639–11651, 2001. View at: Publisher Site  Google Scholar
 R. A. Garcia, P. R. Fearns, and L. I. McKinna, “Detecting trend and seasonal changes in bathymetry derived from HICO imagery: a case study of Shark Bay, Western Australia,” Remote Sensing of Environment, vol. 147, pp. 186–205, 2014. View at: Publisher Site  Google Scholar
 J. A. Goodman and S. L. Ustin, “Classification of benthic composition in a coral reef environment using spectral unmixing,” Journal of Applied Remote Sensing, vol. 1, article 011501, 2007. View at: Publisher Site  Google Scholar
 D. R. Lyzenga, “Shallowwater bathymetry using combined lidar and passive multispectral scanner data,” International Journal of Remote Sensing, vol. 6, pp. 115–125, 1985. View at: Publisher Site  Google Scholar
 Y. Ma, N. Xu, Z. Liu et al., “Satellitederived bathymetry using the ICESat2 lidar and Sentinel2 imagery datasets,” Remote Sensing of Environment, vol. 250, article 112047, 2020. View at: Publisher Site  Google Scholar
 R. P. Stumpf, K. Holderied, and M. Sinclair, “Determination of water depth with highresolution satellite imagery over variable bottom types,” Limnology and Oceanography, vol. 48, no. 1part2, pp. 547–556, 2003. View at: Publisher Site  Google Scholar
 I. Caballero and R. P. Stumpf, “Retrieval of nearshore bathymetry from Sentinel2A and 2B satellites in South Florida coastal waters,” Coastal and Shelf Science, vol. 226, article 106277, 2019. View at: Publisher Site  Google Scholar
 D. R. Lyzenga, N. P. Malinas, and F. J. Tanis, “Multispectral bathymetry using a simple physically based algorithm,” IEEE Transactions on Geoscience and Remote Sensing, vol. 44, no. 8, pp. 2251–2259, 2006. View at: Publisher Site  Google Scholar
 G. Casal, X. Monteys, J. Hedley, P. Harris, C. Cahalane, and T. McCarthy, “Assessment of empirical algorithms for bathymetry extraction using Sentinel2 data,” International Journal of Remote Sensing, vol. 40, no. 8, pp. 2855–2879, 2019. View at: Publisher Site  Google Scholar
 Z. P. Lee, M. R. Zhang, K. L. Carder, and L. O. Hall, “A neural network approach to deriving optical properties and depths of shallow waters,” in Proceedings, Ocean Optics XIV., Kona, HI, 1998. View at: Google Scholar
 S. Liu, L. Wang, H. Liu, H. Su, X. Li, and W. Zheng, “Deriving bathymetry from optical images with a localized neural network algorithm,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 9, pp. 5334–5342, 2018. View at: Publisher Site  Google Scholar
 A. Collin, S. Etienne, and E. Feunteun, “VHR coastal bathymetry using WorldView3: colour versus learner,” Remote Sensing Letters, vol. 8, no. 11, pp. 1072–1081, 2017. View at: Publisher Site  Google Scholar
 J. D. Hedley, C. Roelfsema, V. Brando et al., “Coral reef applications of Sentinel2: coverage, characteristics, bathymetry and benthic mapping with comparison to Landsat 8,” Remote Sensing of Environment, vol. 216, pp. 598–614, 2018. View at: Publisher Site  Google Scholar
 J. Hedley, C. Roelfsema, and S. Phinn, “Propagating uncertainty through a shallow water mapping algorithm based on radiative transfer model inversion,” in Proceedings of the Ocean Optics XX, Anchorage, AK, USA, 2010. View at: Google Scholar
 D. Traganos and P. Reinartz, “Mapping Mediterranean seagrasses with Sentinel2 imagery,” Marine Pollution Bulletin, vol. 134, pp. 197–209, 2018. View at: Publisher Site  Google Scholar
 V. E. Brando, J. M. Anstee, M. Wettle, A. G. Dekker, S. R. Phinn, and C. Roelfsema, “A physics based retrieval and quality assessment of bathymetry from suboptimal hyperspectral data,” Remote Sensing of Environment, vol. 113, no. 4, pp. 755–770, 2009. View at: Publisher Site  Google Scholar
 Z. P. Lee, K. L. Carder, C. D. Mobley, R. G. Steward, and J. S. Patch, “Hyperspectral remote sensing for shallow waters I A semianalytical model,” Applied Optics, vol. 37, no. 27, pp. 6329–6338, 1998. View at: Publisher Site  Google Scholar
 S. Maritorena, A. Morel, and B. Gentili, “Diffuse reflectance of oceanic shallow waters: influence of water depth and bottom albedo,” Limnology and Oceanography, vol. 39, no. 7, pp. 1689–1703, 1994. View at: Publisher Site  Google Scholar
 A. Albert and C. D. Mobley, “An analytical model for subsurface irradiance and remote sensing reflectance in deep and shallow case2 waters,” Optics Express, vol. 11, no. 22, pp. 2873–2890, 2003. View at: Publisher Site  Google Scholar
 H. R. Gordon, O. B. Brown, R. H. Evans et al., “A semianalytic radiance model of ocean color,” Journal of Geophysical Research, vol. 93, no. D9, article 10909, 1988. View at: Publisher Site  Google Scholar
 R. P. Stumpf, M. E. Culver, P. A. Tester et al., “Monitoring Karenia brevis blooms in the Gulf of Mexico using satellite ocean color imagery and other data,” Harmful Algae, vol. 2, no. 2, pp. 147–160, 2003. View at: Publisher Site  Google Scholar
 D. Traganos, D. Poursanidis, B. Aggarwal, N. Chrysoulakis, and P. Reinartz, “Estimating satellitederived bathymetry (SDB) with the Google Earth Engine and Sentinel2,” Remote Sensing, vol. 10, no. 6, p. 859, 2018. View at: Publisher Site  Google Scholar
 I. Caballero and R. P. Stumpf, “Towards routine mapping of shallow bathymetry in environments with variable turbidity: contribution of Sentinel2A/B satellites mission,” Remote Sensing, vol. 12, no. 3, p. 451, 2020. View at: Publisher Site  Google Scholar
 H. M. Dierssen, R. C. Zimmerman, R. A. Leathers, T. V. Downes, and C. O. Davis, “Ocean color remote sensing of seagrass and bathymetry in the Bahamas Banks by highresolution airborne imagery,” Limnology and Oceanography, vol. 48, no. 1part2, pp. 444–455, 2003. View at: Publisher Site  Google Scholar
 J. C. Sandidge and R. J. Holyer, “Coastal bathymetry from hyperspectral observations of water radiance,” Remote Sensing of Environment, vol. 65, no. 3, pp. 341–352, 1998. View at: Publisher Site  Google Scholar
 B. Ai, Z. Wen, Z. Wang et al., “Convolutional neural network to retrieve water depth in marine shallow water area from remote sensing images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 2888–2898, 2020. View at: Publisher Site  Google Scholar
 T. Kutser, I. Miller, and D. L. B. Jupp, “Mapping coral reef benthic substrates using hyperspectral spaceborne images and spectral libraries,” Coastal and Shelf Science, vol. 70, no. 3, pp. 449–460, 2006. View at: Publisher Site  Google Scholar
 R. Doerffer and H. Schiller, “The MERIS Case 2 water algorithm,” International Journal of Remote Sensing, vol. 28, pp. 517–535, 2007. View at: Publisher Site  Google Scholar
 B. B. Barnes, C. Hu, B. A. Schaeffer, Z. Lee, D. A. Palandro, and J. C. Lehrter, “MODISderived spatiotemporal water clarity patterns in optically shallow Florida Keys waters: a new approach to remove bottom contamination,” Remote Sensing of Environment, vol. 134, pp. 377–391, 2013. View at: Publisher Site  Google Scholar
 A. G. Dekker, S. R. Phinn, J. Anstee et al., “Intercomparison of shallow water bathymetry, hydrooptics, and benthos mapping techniques in Australian and Caribbean coastal environments,” Limnology and OceanographyMethods, vol. 9, no. 9, pp. 396–425, 2011. View at: Publisher Site  Google Scholar
 Z. P. Lee and K. L. Carder, “Effect of spectral band numbers on the retrieval of water column and bottom properties from ocean color data,” Applied Optics, vol. 41, no. 12, pp. 2191–2201, 2002. View at: Publisher Site  Google Scholar
 J. Wei, M. Wang, Z. Lee et al., “Shallow water bathymetry with multispectral satellite ocean color sensors: leveraging temporal variation in image data,” Remote Sensing of Environment, vol. 250, article 112035, 2020. View at: Publisher Site  Google Scholar
 D. P. Roy, M. A. Wulder, T. R. Loveland et al., “Landsat8: science and product vision for terrestrial global change research,” Remote Sensing of Environment, vol. 145, pp. 154–172, 2014. View at: Publisher Site  Google Scholar
 B. A. Franz, S. W. Bailey, N. Kuring, and P. J. Werdell, “Ocean color measurements with the Operational Land Imager on Landsat8: implementation and evaluation in SeaDAS,” Journal of Applied Remote Sensing, vol. 9, article 096070, 2015. View at: Publisher Site  Google Scholar
 S. W. Bailey, B. A. Franz, and P. J. Werdell, “Estimation of nearinfrared waterleaving reflectance for satellite ocean color data processing,” Optics Express, vol. 18, no. 7, pp. 7521–7527, 2010. View at: Publisher Site  Google Scholar
 L. Magruder and K. Brunt, “Performance analysis of airborne photon counting lidar data in preparation for the ICESat2 mission,” IEEE Transactions on Geoscience and Remote Sensing, vol. 56, no. 5, pp. 2911–2918, 2018. View at: Publisher Site  Google Scholar
 T. A. Neumann, A. J. Martino, T. Markus, S. Bae, M. R. Bock, and A. C. Brenner, “The Ice, Cloud, and Land Elevation Satellite – 2 mission: a global geolocated photon product derived from the Advanced Topographic Laser Altimeter System,” Remote Sensing of Environment, vol. 233, article 111325, 2019. View at: Publisher Site  Google Scholar
 S. C. Popescu, T. Zhou, R. Nelson et al., “Photon counting LiDAR: an adaptive ground and canopy height retrieval algorithm for ICESat2 data,” Remote Sensing of Environment, vol. 208, pp. 154–170, 2018. View at: Publisher Site  Google Scholar
 M. Wang and W. Shi, “Cloud Masking for Ocean Color Data Processing in the Coastal Regions,” IEEE Transactions on Geoscience and Remote Sensing, vol. 11, pp. 3196–3105, 2006. View at: Publisher Site  Google Scholar
 T. Neumann, A. Brenner, D. Hancock, J. Robbins, J. Saba, and K. Harbeck, ICE, CLOUD, and Land Elevation Satellite  2 (ICESat2) Project Algorithm Theoretical Basis Document (ATBD) for Global Geolocated Photons ATL03, NASA Goddard Space Flight Center, Greenbelt, Maryland, 2018.
 R. Pawlowicz, B. Beardsley, and S. Lentz, “Classical tidal harmonic analysis including error estimates in MATLAB using T_TIDE,” Computers & Geosciences, vol. 28, no. 8, pp. 929–937, 2002.
 B. B. Barnes, R. Garcia, C. Hu, and Z. Lee, “Multiband spectral matching inversion algorithm to derive water column properties in optically shallow waters: an optimization of parameterization,” Remote Sensing of Environment, vol. 204, pp. 424–438, 2018.
 R. Garcia, Z. Lee, B. Barnes, C. Hu, H. Dierssen, and E. Hochberg, “Benthic classification and IOP retrievals in shallow water environments using MERIS imagery,” Remote Sensing of Environment, vol. 249, article 112015, 2020.
 I. Caballero and R. P. Stumpf, “Atmospheric correction for satellite-derived bathymetry in the Caribbean waters: from a single image to multi-temporal approaches using Sentinel-2A/B,” Optics Express, vol. 28, no. 8, pp. 11742–11766, 2020.
 I. Caballero, R. P. Stumpf, and A. Meredith, “Preliminary assessment of turbidity and chlorophyll impact on bathymetry derived from Sentinel-2A and Sentinel-3A satellites in South Florida,” Remote Sensing, vol. 11, no. 6, p. 645, 2019.
 N. T. O'Neill and J. R. Miller, “On calibration of passive optical bathymetry through depth soundings: analysis and treatment of errors resulting from the spatial variation of environmental parameters,” International Journal of Remote Sensing, vol. 10, pp. 1481–1501, 1989.
 Y. Liu, D. Tang, R. Deng et al., “An adaptive blended algorithm approach for deriving bathymetry from multispectral imagery,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 801–817, 2021.
 Z.-P. Lee, C. Hu, B. Casey, S. L. Shang, H. Dierssen, and R. Arnone, “Global shallow-water bathymetry from satellite ocean color data,” Eos, Transactions American Geophysical Union, vol. 91, no. 46, pp. 429–430, 2010.
 S. M. Hamylton, J. D. Hedley, and R. J. Beaman, “Derivation of high-resolution bathymetry from multispectral satellite imagery: a comparison of empirical and optimisation methods through geographical error analysis,” Remote Sensing, vol. 7, no. 12, pp. 16257–16273, 2015.
Copyright
Copyright © 2021 Zhongping Lee et al. Exclusive Licensee Aerospace Information Research Institute, Chinese Academy of Sciences. Distributed under a Creative Commons Attribution License (CC BY 4.0).