(C) PLOS One. This story was originally published by PLOS One and is unaltered. Field performance of the GaugeCam image-based water level measurement system [1] ['François Birgand', 'Department Of Biological', 'Agricultural Engineering', 'Nc State University', 'Raleigh', 'Nc', 'United States Of America', 'Ken Chapman', 'Department Of Biological Systems Engineering', 'University Of Nebraska-Lincoln'] Date: 2022-08 Image-based stage and discharge measuring systems are among the most promising new non-contact technologies available for long-term hydrological monitoring. This article evaluates and reports the long-term performance of the GaugeCam ( www.gaugecam.org ) image-based stage measuring system in situ. For this we installed and evaluated the system over several months in a tidal marsh to obtain a good stratification of the measured stages. Our evaluation shows that the GaugeCam system was able to measure within about ±5 mm for a 90% confidence interval over a range of about 1 m in a tidal creek in a remote location of North Carolina, USA. Our results show that the GaugeCam system nearly performed to the desired design accuracy of ±3 mm around 70% of the time. The system uses a dedicated target background for calibration and geometrical perspective correction of images, as well as auto-correction to compensate for camera movement. The correction systems performed well overall, although our results show a ‘croissant-shaped’ mean error (-1 to +4 mm) varying with water stage. We attribute this to the small, yet present, ‘fish-eye’ effect embedded in images, for which our system did not entirely correct in the tested version, and which might affect all image-based water level measurement systems. Funding: This material is based upon work supported by the National Science Foundation under Grant Nr DGE-0750733 (to RE), the Environmental Protection Agency under Grant Nr EPA 2871 (to FB), and the USDA-NIFA multistate project S1063 (to FB). 
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Most articles on image-based stage measurements report a variety of techniques used to monitor stages but provide relatively few details on the systems' actual performance and uncertainties. This article reports the performance of the GaugeCam ( www.gaugecam.org ) image-based stage measuring system in the field over many weeks. This system was originally designed to match the performance of other water level measurement technologies commonly used in relatively small streams and rivers (< 10 m wide). This article expands on a previous report focusing on the sources and uncertainties of this system in the lab [ 36 ], evaluates the long-term performance, and calculates the uncertainties of the GaugeCam system in a remote field location. Image-based measurements of discharge and water level have recently received particular attention, although the idea is not new [ 1 – 11 ]. The main advantages compared to other techniques include access to verifiable data, as the human eye can read or verify each data point; non-contact sensing, as the camera is safely placed away from the water; and, in the case of short videos, access to the velocity field at the surface of the water. The latter is particularly important in cases or areas where establishing a stage-discharge rating curve is difficult, if possible at all. Research reported on image-based hydrological monitoring tends to focus either on Large Scale Particle Image Velocimetry (LS-PIV) to compute discharge [ 12 – 28 ], on discharge measurements using machine learning [ 29 ], on flood detection from surveillance cameras [ 30 – 33 ], or on the stage measurement itself [ 2 , 34 – 43 ], from which discharge is often calculated. Much of what we know in hydrology today is rooted in the ability to observe and record the rapid changes in water flow or discharge. 
To this day, most flow rates recorded at the thousands of hydrologic stations around the world are still calculated from water level, or stage, measurements combined with stage-discharge relationships established at each monitoring station. As such, the measurement of stages will always remain a central part of hydrological monitoring. Method GaugeCam design objectives, philosophy, and system constraints The GaugeCam system was designed to 1) measure water stages with a precision of ±3 mm in relatively small streams and rivers (<10 m wide), wetlands, and lakes, using images obtained from time lapse cameras; 2) be fully solar powered using a 50 W solar panel so it could be installed virtually anywhere in the world; 3) have images automatically streamed via the cellular phone network to a server; and 4) have images analyzed on the server, with calculated stages and corresponding images stored and available on any Internet client. Traditional (non image-based) water level measurement techniques (e.g., mechanical pulley and float based systems, pressure transducers immersed in water, pressure transducers in air in equilibrium with the water column or bubblers, radar systems, ultrasonic systems placed above or under the water, reviewed by [44, 45]) involve the interpretation of a raw signal (e.g., voltage, time, etc.) in the field. Real world stage values are thus calculated from the raw signal internally by the instruments via a calibration or rating system. However, all instruments and signals in the field tend to drift over time, making frequent in situ calibration of the instruments by qualified personnel necessary. A major advantage of image-based systems is that the raw signal in an image (the pixels) does not drift relative to the calibration system (i.e., the machine vision algorithm; details below). As a result, maintenance does not require highly qualified personnel because there is no need for on-site calibration. 
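The stage-discharge relationships mentioned above are commonly approximated by a power-law rating curve of the form Q = a(h − h0)^b, fitted from field gaugings. The sketch below is a minimal, hypothetical illustration; the coefficients are invented for the example and are not from this study.

```python
import numpy as np

def rating_curve(stage_m, a=4.2, h0=0.05, b=1.6):
    """Power-law rating curve Q = a * (h - h0)**b.

    stage_m: water stage in metres; a, h0, b are hypothetical
    station-specific coefficients fitted from gaugings.
    Returns discharge in m^3/s (zero below the cease-to-flow stage h0).
    """
    h = np.asarray(stage_m, dtype=float)
    return np.where(h > h0, a * np.clip(h - h0, 0, None) ** b, 0.0)

# Discharge for a few hypothetical stages (m)
stages = np.array([0.02, 0.30, 0.75])
print(rating_curve(stages))
```

Because Q depends so directly on h, small systematic stage errors propagate into discharge, which is why millimetre-level stage accuracy is worth pursuing.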
The timing for and necessity of maintenance can be detected by the user (i.e., seen on the images as a large camera move, an obstructed view, etc.), and every single measurement is visually verifiable. The translation of the raw signal into real world stages is not impacted by field conditions and can theoretically be done in the field (edge computing) or on a server (cloud computing). We chose the latter for the GaugeCam system. The design constraints required finding a time lapse camera with minimal power consumption, one that would be in sleeping mode most of the time and at each given time interval would wake up to take a picture, day and night, save it locally, and transfer it remotely. Another constraint on the camera was the ability to take images small enough to capture the desired Region of Interest (ROI) and the contextual scene while minimizing monthly cellular bandwidth consumption, yet still allowing high enough resolution and precision on the calculated stage values. All reported image-based water level measurement systems detect the waterline on an image thanks to the sharp contrast in the distribution of the pixel grey values above and below the water line. In many systems, the ROI where the waterline is sought corresponds to the gauge staff itself [2, 37, 38, 46–49]. This makes a lot of sense as gauge staffs are always present at hydrological stations. However, the graduations on gauge staffs tend to yield a rather heterogeneous distribution of pixel grey values above the water line, which can make the image analysis challenging, although measurement errors have been reported to be around ±1 cm in good conditions [49]. In a few reports, the goal is to obtain a rough estimate of flow/stage, and/or images do not have a specific target to detect water level [34, 50–53]. 
To give the highest potential for optimal accuracy, others use a dedicated target [[35–37, 54]; this article], i.e., a vertical white plane, installed in the field, at which the camera aims. GaugeCam measurement principles The details of the principles of the GaugeCam measurement and software (GRIME2) are available in Chapman et al. [55]. In short, on a white or light grey vertical target plane installed in a water body, water forms a very crisp water line. This is particularly visible in an image taken in the general horizontal direction perpendicular to the target. Because of the light absorption of water (even if it is transparent), the pixels corresponding to water have a dark shade, usually much darker than those above corresponding to the white background of the target. The sharp contrast in the pixel grey scale above and below the water line is used for automatic detection of the water line. The second principle of the GaugeCam system is the ability to automatically translate the location of the water line expressed in pixel coordinates into real world coordinates. For this, the GaugeCam system uses a set of eight bow-tie-shaped fiducials embedded on the target in two columns of four rows, with a blank column between the two columns of fiducials (Fig 1). The GaugeCam software automatically detects the centers of the fiducials, for which the real world coordinates are known. The fiducials are also used for automatic correction of image movements (details herein). Fig 1. Overall installation setup of the camera and the dedicated GaugeCam target background. A- Inside of the Lookout V camera; B- installation in the field of the camera with the illuminator installed on the side; C- both aiming at the dedicated background; D- installed on the opposite bank of the tidal creek. 
https://doi.org/10.1371/journal.pwat.0000032.g001 Field site, camera setup and maintenance The site chosen to evaluate the performance of the GaugeCam system in the field was a tidal marsh located in Carteret County, North Carolina (34°49’03.7“N 76°36’16.5”W), where the diurnal tidal amplitude was generally 0.7 m but reached 1.2 m. Thanks to the tides, all stages were relatively equally represented. The evaluation period spanned from 9 February 2012 to 24 July 2012, with a gap during the first three weeks of May 2012 because of power failure. The camera was a Lookout V wireless internet camera from Colorado Video (http://www.colorado-video.com) equipped with a CMOS (complementary metal oxide semiconductor) image sensor and a color filter to filter out the infrared (IR) wavelengths during the daytime (Fig 1A). The filter was mechanically removed at night to allow the IR to pass to the sensor and to adjust the focus, since IR and visible light have slightly different wavelengths. The camera was mounted on a heavy metal pole and fitted with an LED illuminator installed about 70 cm to the side to avoid direct glare on pictures at night (Fig 1B & 1C). The camera was installed on the right bank of the tidal creek, 1.6 m above ground and about 5.2 m from the GaugeCam target installed on the opposite left bank, facing at 240° to the Southwest (Fig 1D). The installation height was chosen to minimize chances of interference with the vegetation and wildlife, and to have the camera safely above the highest (wind driven) tides. A commercial company printed the GaugeCam target patterns on an exterior grade white plastic material and laminated it on a 0.95 cm (3/8") Plexiglas sheet of 1.25 × 1.1 m. This was done after assurance that the black patterns would be visible to the camera under infrared lighting at night. 
The GaugeCam target was carefully installed against 10×10 cm wooden posts driven into the stream bank and kept vertical thanks to wooden stakes driven at about a 45° angle into the bank and secured with lag bolts against the posts (Fig 1D). The GaugeCam target was installed such that it intercepted the vast majority of the lowest and highest stages. The water line served as the leveling system during installation, and it was visually determined that there was less than 1 mm difference in elevation between mirror bow-ties. The camera faced downward, yielding view angles from a horizontal plane varying from 8.1° at the top (real world coordinate 137.6 cm) to 21.0° at the bottom (real world coordinate 11 cm) of the target. A gauge staff was installed on one of the posts holding the GaugeCam target such that the real-world coordinates could be directly read on images. The images taken were 800×600 pixels and varied between 35 and 70 KB in size (night pictures took less memory because of the many black pixels). The camera was fitted with a 12 mm lens to flatten the images and limit the ‘fish-eye’ effect, which yielded a resolution of about 2.4 mm per pixel. The whole system was powered by a 12 V 8 Ah ‘brick’ battery recharged by a 50 W solar panel. The camera was able to send images wirelessly via GPRS using 2G technology (the camera was discontinued in 2016 as a result), and to physically store images on an SD card for backup purposes. The target was cleaned every other week (during other normal maintenance activities) using a brush to remove biological fouling that may have formed over the two-week periods. The camera protective glass was wiped at the same time, and the SD card was collected and replaced to keep a physical backup of the images. 
Calibration image and pixel to world coordinate calibration One of the weak points of image-based stage measurements is the inevitable (small) movements of the camera in the field associated with wind, temperature changes, camera maintenance, optical filter movement, etc. To correct for these, the GaugeCam system uses a ‘calibration image’ to which all other ‘working images’ are referred back after the water line has been detected and measured. It is on the calibration image that the translation between pixel and real world coordinates is performed. A calibration image is normally chosen from an image with the best contrast (Fig 2A), but calibration can be performed in less than ideal conditions if necessary (Fig 2B). On the calibration image, a template search is performed to find the pixel coordinates of the centers of the bow-ties in the image (green circles in Fig 2A & 2B). The real world coordinates of these bow-tie centers are known and used to calculate the perspective transform coefficients [56] and/or the linear interpolation association points [57], from which pixel to world coordinate transformations are performed, as illustrated by the yellow lines and labels in Fig 2A & 2B, and stored in a calibration file. The position of a water level search area (blue ROI in Fig 2A & 2B) is calculated and written to the calibration file. Finally, two ROIs around the two top bow-ties (framed in red in Fig 2A & 2B) are recorded in the calibration file. A template search for the bow-ties in these ROIs is performed at runtime on working images to determine how much the target has moved compared to the calibration image. An adjustment is made to the stage measurements to accommodate any such movement (further details in [55]). Fig 2. Pattern detection on calibration images (A&B) and water level measurement on working image (C). 
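The pixel-to-world translation described above can be sketched as a least-squares perspective (homography) fit through the fiducial correspondences. GRIME2 itself relies on OpenCV routines [56]; the snippet below is only an illustrative numpy reconstruction, and the fiducial coordinates are hypothetical.

```python
import numpy as np

def fit_homography(pix, world):
    """Least-squares fit of the 3x3 perspective (homography) matrix from
    fiducial correspondences: pix[i] = (u, v) pixel coordinates and
    world[i] = (x, y) real-world coordinates of a bow-tie centre."""
    A, b = [], []
    for (u, v), (x, y) in zip(pix, world):
        A.append([u, v, 1, 0, 0, 0, -u * x, -v * x]); b.append(x)
        A.append([0, 0, 0, u, v, 1, -u * y, -v * y]); b.append(y)
    h = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                        rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)  # fix scale with h33 = 1

def world_from_pixel(H, u, v):
    """Map a pixel coordinate to real-world coordinates (cm here)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Hypothetical check: simulate pixel positions of the eight bow-tie
# centres with a known transform H_true, then recover it from the
# correspondences alone.
H_true = np.array([[0.18,  0.01,    8.0],
                   [0.005, -0.21, 130.0],
                   [0.0,   0.0002,  1.0]])
pix = [(100, 90), (400, 95), (105, 210), (395, 215),
       (110, 330), (390, 335), (115, 450), (385, 455)]
world = [world_from_pixel(H_true, u, v) for u, v in pix]
H = fit_homography(pix, world)
u_wl, v_wl = 250, 280          # a detected waterline point, in pixels
print(world_from_pixel(H, u_wl, v_wl))
```

With eight fiducials the 8-parameter transform is over-determined, so small template-matching errors on individual bow-tie centres are averaged out by the least-squares fit.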
On a calibration image, the coordinates of the centers of the bow-ties are automatically detected (green circles; A&B), from which the real-world coordinates calibration grid can be calculated (yellow lines and labels; A&B). The blue ROI defines the area where the water line is normally searched, and the top two bow-ties framed in red are used to quantify the movement of ’working images’ compared to the calibration one. The calibration image is normally chosen from an image with the best contrast (A), but calibration can be performed in less than ideal conditions if necessary (B). C- points found to correspond to the water line (yellow circles); the line best fitting through these points (blue line); the point on the blue line whose coordinates are used to calculate the initial stage value, reported as Level in the top left part of the image; the thick red and the thin green segments representing the locations of the top fiducials of the calibration and the working images, respectively; the difference in stage between these segments is calculated, reported as Adjust, and used to calculate the final and recorded stage reported as Level (adj). https://doi.org/10.1371/journal.pwat.0000032.g002 Water level search The line finder algorithm operates within the blue ROI illustrated in Fig 2. The ROI is divided into vertical search lines, one for every column of pixels in the ROI. Five arguments are passed to the algorithm to control the way it performs the line search (further details in [55]). The five arguments and what they control are 1) the direction of search (whether the search should start in the water and move upward, or the opposite); 2) the edge direction (dark to light edge transition, or the opposite); 3) the edge magnitude (the minimum water line pixel brightness); 4) the edge kernel size (the size in vertical pixels of the edge search); and 5) the edge to select (which edge to keep among all the edges detected). 
After a set of edge points is found in each of the vertical columns of the search ROI, the OpenCV fitLine [58] method is called to fit a line through the points. If the line is at a reasonable angle relative to a typical water level line, the find is set as successful. If not enough points are found to perform a line fit, or the calculated line is not at a correct angle, the line search fails and is reported as ‘no line detected’. The point (in pixel coordinates) on the line at the half-way point between the left and right edges is passed to the pixel to world transform model. The vertical position of the chosen point is reported in world coordinates as the water level. Because of the fitLine approach in pixel coordinates, the coordinates of the chosen points do not have to be integers but can be real numbers, i.e., expressed in fractions of pixels. The resolution of the final number can thus theoretically be sub-pixel, or smaller than the pixel resolution (2.4 mm in this case; Fig 2C). Stage calculation illustration Following the processes described above, on a working image, points corresponding to the water line along the given number of vertical pixel columns in the search ROI are first detected (ten yellow circles at the water line in Fig 2C). A line is then fit through these points (blue line in Fig 2C), with an equation whose slope and intercept values are expressed in pixel coordinates. The stage value in pixel coordinates is calculated as the ordinate corresponding to the abscissa of the pixel at the half-way point between the left and right edges of the search ROI (red crossed square in Fig 2C). This pixel ordinate is then transformed into a stage value in real world coordinates, using the perspective transform coefficients established from the calibration image, and reported as Level: in the top left of an analyzed image, in cm, in Fig 2C. This analysis is thus done as if the fiducials in the working image were perfectly superimposed over those in the calibration image. 
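The column-wise edge search and line fit just described can be sketched as follows. This is a simplified stand-in (a synthetic grey-scale ROI, a single threshold argument, and numpy least squares in place of OpenCV's fitLine), not the GRIME2 implementation.

```python
import numpy as np

# Synthetic grey-scale ROI: a bright target above dark water, with the
# waterline at a slightly tilted, sub-pixel height (hypothetical values).
h, w = 60, 10
roi = np.full((h, w), 200.0)            # light target pixels
true_rows = 35.0 + 0.2 * np.arange(w)   # waterline row per column
for c in range(w):
    roi[int(round(true_rows[c])):, c] = 40.0   # dark water pixels

def find_waterline(roi, edge_magnitude=50):
    """Scan each pixel column for the first light-to-dark transition
    exceeding edge_magnitude, then least-squares fit a line through the
    edge points (mimicking the fitLine step). Returns the fractional
    pixel row of the line at the half-way column of the ROI."""
    cols, rows = [], []
    for c in range(roi.shape[1]):
        col = roi[:, c]
        jumps = col[:-1] - col[1:]      # positive jump = light above dark
        idx = np.where(jumps > edge_magnitude)[0]
        if idx.size:
            cols.append(c)
            rows.append(idx[0] + 0.5)   # edge lies between two pixels
    slope, intercept = np.polyfit(cols, rows, 1)
    mid = (roi.shape[1] - 1) / 2.0      # half-way column
    return slope * mid + intercept      # fractional (sub-pixel) row

row = find_waterline(roi)
print(round(row, 2))
```

The fractional pixel row returned here would then be passed through the pixel-to-world transform and corrected for camera movement before being recorded as the stage.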
In reality, this is rarely the case, as images always end up ‘moving’, or more appropriately, the camera's small movements cause the images to ‘move’. The location of the top two fiducials on the calibration image corresponds to the extremities of the thick red line at the top of the image in Fig 2C. The location of the top fiducials on the working image corresponds to the extremities of the green segment. It turns out that in Fig 2C, both lines appear superimposed. The difference in stage between these two segments is calculated as Adjust: in the top left corner of an image. The final and recorded stage is calculated as the stage value in Level minus the Adjust value and appears as Level (adj): in the top left corner of an image (Fig 2C). Reference visual measurements To evaluate the performance of the GaugeCam system, stages from 1,086 images staggered in time were visually read on the gauge staff visible on all pictures. This included pictures obtained at all times of the day and night, during early morning haziness, fog, winter conditions, and rain. However, this procedure is subject to the same potential errors as the automatic stage detection system, i.e., the variable interaction between the incident light and the meniscus affecting the appearance of the water edge in an image (details below). The resolution of 2.4 mm per pixel also limited the overall reading accuracy to about this resolution, although the brain finds ways to extrapolate. The 1 cm black ticks of the staff gauge also tend to appear a little smaller than the white intervals on a low resolution jpeg image. As a result, the visual readings are not flawless and have their own embedded uncertainty. Nonetheless, these readings were done keeping all these potential errors in mind and were used as references to evaluate the GaugeCam system. 
Comparison with other sensors Stages and discharge in the tidal creek were actually measured over a longer period of 13 months (March 2011 to November 2012), within which the visual readings were done for performance evaluation (February to July 2012). Over the 13 months, the stages in the tidal creek were monitored with several other sensors, including a vented pressure transducer referred to as ‘ISCO’ herein (ISCO 750 module associated with ISCO 6712, Lincoln, NE, USA; details in [59]), which was eventually replaced by a dual ultrasonic/pressure sensor referred to as ‘Sontek’ herein (Sontek IQ, Xylem Inc., Rye Brook, NY, USA). Stage was primarily calculated from the transit time of ultrasound between the sensor immersed at the bottom and the water surface, and was internally corroborated with the stage measured by the embedded pressure transducer. A third, non-vented pressure transducer referred to as ‘HOBO’ herein (HOBO U20 Water Level Logger, Onset Computer Corporation, Bourne, MA, USA) was also used. A total of 35,849, 7,099, and 14,068 data points were used for comparison between the GaugeCam system and, respectively, the ISCO, Sontek, and HOBO sensors. The GaugeCam system was originally used as a backup system and to verify data. Sources of and hypotheses for measurement uncertainties There are two main sources of uncertainties: the first, ‘natural’, source is associated with the impact of the variable interaction between the incident light and the meniscus on the appearance of the water edge in an image; the second is associated with the GaugeCam series of actions from image taking to deriving stage values. At the interface between the water and the vertical target, a horizontal meniscus is formed because surface tension forces raise the contact of the water with the target 2–3 mm above the actual stage [36]. 
Because of the curved shape of the meniscus, the incident light (direct sun and diffuse light, and the IR illuminator) reflected back to the camera may vary depending on the angle of the incident light itself, but also on the view angle, itself linked to the stage. During daytime, when diffuse and direct sunlight tend to bring light from a steep vertical angle, the meniscus may reflect bright light to the camera and exhibit pixels of very light grey, possibly compensating for the increased height due to the meniscus. Conversely, at night, when the source of light is the IR illuminator at a generally much flatter angle, the meniscus may tend to appear darker, possibly making the water edge appear higher on an image than it really is. Consequently, we hypothesized that because of the change in the incident light angle, the errors on nighttime and daytime pictures should be significantly different: the nighttime measurements should generally have a positive mean, i.e., overestimate the stage, while the daytime measurements should generally have a mean near zero. For this, daytime images were those taken between 8:30 and 17:45 (~local solar time), and nighttime images those taken between 19:45 and 4:30 the next day. Images from other times were also evaluated and are referred to as dawn/dusk. Uncertainties inherent to the GaugeCam system are numerous and include uncertainties due to the finding of the fiducial centers, the corrections for perspective, the ‘fish-eye’ effect and the pixel to real world coordinate matrix, the stage correction for camera movement compared to the calibration image, the water edge detection, and the coordinates of the central point of the line best fitting the water edge points. It would be futile, if possible at all, to study each separately to calculate an overall uncertainty. 
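The daytime/nighttime/dawn-dusk stratification above amounts to a simple time-of-day classification. A minimal sketch using the windows stated in the text (the function name is ours, for illustration only):

```python
from datetime import time

def light_class(t: time) -> str:
    """Classify an image timestamp using the study's windows:
    daytime 08:30-17:45, nighttime 19:45-04:30 (next day),
    everything else treated as dawn/dusk."""
    if time(8, 30) <= t <= time(17, 45):
        return "day"
    if t >= time(19, 45) or t <= time(4, 30):
        return "night"
    return "dawn/dusk"

for hhmm in [(12, 0), (22, 15), (3, 0), (18, 30), (5, 30)]:
    print(time(*hhmm), light_class(time(*hhmm)))
```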
Instead, it might be possible to detect the effect of uncertainties due to the GaugeCam system among the overall observed uncertainties. In particular, it became apparent that there was significant movement of the images at night. This was attributed to the mechanical removal of the anti-IR filter in front of the CMOS sensor, which created enough vibration for the images to move significantly and somewhat randomly at night. In addition, the night picture sizes tended to be smaller because of lower contrast. The errors on the coordinates of the top two bow-tie centers were thus likely higher than those for daylight pictures, possibly inducing greater errors during stage correction on night pictures. From these observations, we hypothesized that because the nighttime pictures had to be corrected more often than daytime pictures, owing to movement associated with the mechanical filter removal, the standard deviation of their errors should be larger than that of the daytime ones. Also, the correction for perspective is naturally higher at a steeper angle of view, i.e., for lower stage values in the study case [36]. We thus also hypothesized that because of image distortion, the distribution of errors (mean and standard deviation) for measurements at low stages (less than 30 cm) should be different from the one at high stages (greater than 70 cm). Statistical analysis All analyses were performed using the R software [60]. The objectives of the statistical analyses were to identify potential sources of uncertainties and to quantify them. Practical results sought included whether and how errors would differ between daylight and nighttime pictures, and between high and low stages, as a result of the perceived importance of the meniscus described above. Additionally, the analysis sought to quantify the errors expressed as ±X mm of the true stage, and whether and how these errors might be distributed. 
First, measurement errors were calculated as the difference between stage values measured with the GaugeCam system and those measured visually. As such, 1,086 error values were obtained. It was assumed that errors outside ±2 cm (due to the detection of the line on foul lines rather than on water, blurry pictures due to dew on the lens, or fog) were not representative of the capabilities of the GaugeCam system, and these were removed for the statistical analysis. In the end, 1,033 images were used for the statistical analysis. The time dependence among errors was checked and considered negligible (see S1 File). Data description and pre-processing Because of camera malfunction for three weeks in May 2012, no images were available over that period and the camera was brought in for repair. A 21-day gap thus appeared in the time series between the 529th and the 530th image. The camera maintenance required removal and re-installation, which inevitably caused the images to have ‘moved’ compared to the reference image before and after the 530th image. To measure the impact of this action on the performance of the system, an indicator variable was kept marking the ‘repair’ point, i.e., taking the value 1 after the repair and 0 up to the 529th observation. Statistical model A test of the normality of the data revealed that the empirical distribution was ‘heavy-tailed’. We posited a Student’s t-distribution to describe the distribution of errors (S2 Fig). Using x t to denote the (known) true water level at the t-th time point, we modeled the GaugeCam error Y t , given the indicators and x t , as Y t ~ T ν (μ t , σ t ), where T ν (μ t , σ t ) denotes the Student’s t distribution with location parameter μ t (the equivalent of the mean for the normal distribution), scale parameter σ t (the equivalent of the standard deviation for the normal distribution), and ν degrees of freedom (when ν→∞, T ν (μ, σ)→N(μ, σ)). 
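The preprocessing described above (compute the GaugeCam-minus-visual errors, discard those outside ±2 cm, then examine the tails) can be sketched on simulated data. All numbers below are hypothetical stand-ins for the real error series.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical errors (mm): a heavy-tailed core (scaled Student's t)
# plus ~40 gross outliers standing in for foul lines, dew on the lens,
# or fog.
errors = 2.0 * rng.standard_t(df=4, size=1086)
outliers = rng.choice(errors.size, size=40, replace=False)
errors[outliers] += rng.choice([-60.0, 60.0], size=40)

# Discard errors outside +/- 2 cm, as in the article, then summarise.
kept = errors[np.abs(errors) <= 20.0]
mean, sd = kept.mean(), kept.std(ddof=1)
z = (kept - mean) / sd
excess_kurtosis = np.mean(z ** 4) - 3.0  # > 0 => heavier tails than normal
print(kept.size, round(float(mean), 2), round(float(sd), 2),
      round(float(excess_kurtosis), 2))
```

A clearly positive excess kurtosis on the retained errors is the kind of diagnostic that motivates a Student's t model over a normal one.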
We first plotted the errors as a function of the reference (visual) water stage (blue = daylight, dark grey = nighttime, and cinereous = dusk/dawn times; Fig 3A). This revealed that the errors heavily depended upon the stage. We then fitted a regression model allowing the mean GaugeCam error to be a smooth function, using a linear combination of a set of B-splines. We considered 8 knots, which allowed approximately one knot per 10 cm range of the true water level. The fitted mean profile along with point-wise 95% confidence intervals (based on a normal distribution assumption for the error components of the regression model) are represented by the red line and ribbon, respectively, in Fig 3A. The differences between the errors and the smooth function of the mean GaugeCam errors (referred to as residuals in Fig 3) were then plotted before and after the repair (Fig 3B), and as a function of the time of the day (Fig 3C). Fig 3. Initial analysis of the GaugeCam errors and residuals. A- GaugeCam errors (on the Y-axis) vs. the reference water level (on the X-axis), along with a fitted smooth regression curve and pointwise 95% confidence intervals (red); the blue, dark grey, and cinereous dots represent the daytime, nighttime, and dawn/dusk observations, respectively. (B) Boxplots of the residuals (based on regression of GaugeCam errors on the true water level) before and after the changepoint, and (C) boxplots of the residuals for daytime, nighttime, and dawn and dusk times of the day. https://doi.org/10.1371/journal.pwat.0000032.g003 From these results, it appeared that 1) the mean of the errors heavily depended on the water stage, 2) it might depend on the before- and after-repair periods, but 3) the standard deviation might not, and 4) the mean of the errors might not depend much on the time of the day. 
Also, it appeared that 5) the equivalent of the standard deviation might be a bit larger for nighttime than for daytime pictures. This supported our initial hypotheses. We defined the indicators 1(t>repairpoint), 1(t∈daylight), 1(t∈nighttime) and 1(t∉daylight, t∉nighttime) for the repair point, daytime, nighttime and other (dawn/dusk) times. 1(t>repairpoint) = 1 for time points after the repair point and zero before it; 1(t∈daylight) = 1 if the t-th image was observed during daylight times (between 8:30 AM and 5:45 PM) and zero otherwise; 1(t∈nighttime) = 1 if the t-th image was observed during nighttime (between 7:45 PM and 4:30 AM the next day); and 1(t∉daylight, t∉nighttime) = 1 if the t-th image was observed at some other time of the day, i.e., during dawn and dusk, and zero otherwise. Mathematically, this can be translated into the equation below: μ t = α 1 1(t>repairpoint) + α 2 1(t∈daylight) + α 3 1(t∈nighttime) + f(x t ), where f() is a smooth function and accounts for a non linear dependence on the reference level x. We thus assumed that the location (the equivalent of the mean for the t distribution) of the distribution of errors, μ t , was dependent upon whether pictures were taken before or after repair (α 1 1(t>repairpoint)), during daytime (α 2 1(t∈daylight)), nighttime (α 3 1(t∈nighttime)) or dawn/dusk (no term, because implied by the previous two indicators), and upon a function of the water stage (f(x t )). We modeled f as a linear combination of K known cubic B-spline basis functions {B k (⋅), k = 1,…,K}; f(⋅) is a smooth function that can be written as f(x) = Σ k=1,…,K β k B k (x). We chose K = 8 as discussed above. We treated the regression coefficients {β k , k = 1,…,K} as unknown parameters and estimated them from the data. Because Σ k B k (x) = 1 for any x, we did not consider any further intercept term in the model. We assumed that the scale (the equivalent of the standard deviation for the t distribution) of the distribution of errors, σ t , was dependent upon the time at which the pictures were taken. The model parameters were derived using the maximum likelihood estimation (MLE) approach [61]. 
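The likelihood being maximized can be sketched as follows, assuming the model structure above. The data, the 3-column basis (standing in for the K = 8 cubic B-splines), and the parameter values are all toy placeholders; the study itself used R.

```python
import numpy as np
from math import lgamma, log, pi

def t_logpdf(y, mu, sigma, nu):
    """Log-density of the location-scale Student's t, T_nu(mu, sigma)."""
    z = (y - mu) / sigma
    return (lgamma((nu + 1) / 2) - lgamma(nu / 2)
            - 0.5 * log(nu * pi) - log(sigma)
            - (nu + 1) / 2 * np.log1p(z * z / nu))

def neg_log_lik(params, errors, post, day, night, basis):
    """NLL of the error model: location mu_t = a1*1(post-repair)
    + a2*1(day) + a3*1(night) + basis @ beta; the scale depends on the
    time of day; nu = t degrees of freedom."""
    a1, a2, a3, s_day, s_night, s_other, nu = params[:7]
    beta = np.asarray(params[7:])
    mu = a1 * post + a2 * day + a3 * night + basis @ beta
    sigma = np.where(day == 1, s_day, np.where(night == 1, s_night, s_other))
    return -sum(t_logpdf(y, m, s, nu)
                for y, m, s in zip(errors, mu, sigma))

# Toy hypothetical data: 6 errors (mm), the indicator variables, and a
# stand-in 3-column basis in place of the K = 8 cubic B-splines.
errors = np.array([1.2, -0.8, 0.5, 2.0, -0.3, 0.9])
post   = np.array([0, 0, 0, 1, 1, 1])
day    = np.array([1, 1, 0, 0, 0, 1])
night  = np.array([0, 0, 1, 1, 0, 0])
basis  = np.eye(6)[:, :3]
params = [0.5, 0.1, 0.8, 1.0, 1.5, 1.2, 4.0, 0.0, 0.0, 0.0]
nll = neg_log_lik(params, errors, post, day, night, basis)

# Sanity check: as nu -> infinity, T_nu(0, 1) tends to N(0, 1).
norm_lp = -0.5 * log(2 * pi) - 0.5   # N(0,1) log-density at y = 1
print(nll, t_logpdf(1.0, 0.0, 1.0, 1e7), norm_lp)
```

In practice the parameters (the α's, σ's, ν, and β's) would be obtained by minimizing this NLL with a numerical optimizer, with the σ's and ν constrained to be positive.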
Additional details on the MLE approach and how computational matrices were defined are provided in the supplementary information. [1] Url: https://journals.plos.org/water/article?id=10.1371/journal.pwat.0000032 Published and (C) by PLOS One under a Creative Commons Attribution (CC BY 4.0) license.