Open Access article under the CC BY 4.0 license. Published by De Gruyter, September 13, 2021

Individual tree detection using UAV-lidar and UAV-SfM data: A tutorial for beginners

  • Midhun Mohan, Rodrigo Vieira Leite, Eben North Broadbent, Wan Shafrina Wan Mohd Jaafar, Shruthi Srinivasan, Shaurya Bajaj, Ana Paula Dalla Corte, Cibele Hummel do Amaral, Gopika Gopan, Siti Nor Maizah Saad, Aisyah Marliza Muhmad Kamarulzaman, Gabriel Atticciati Prata, Emma Llewelyn, Daniel J. Johnson, Willie Doaemo, Stephanie Bohlman, Angelica Maria Almeyda Zambrano and Adrián Cardil
From the journal Open Geosciences

Abstract

Applications of unmanned aerial vehicles (UAVs) have proliferated in the last decade due to technological advancements on various fronts such as structure-from-motion (SfM), machine learning, and robotics. An important preliminary step in forest inventory and management is individual tree detection (ITD), which is required to calculate forest attributes such as stem volume, forest uniformity, and biomass. However, users may find it challenging to adopt UAVs and algorithms for their specific projects due to the plethora of information available. Herein, we provide a step-by-step tutorial for performing ITD using (i) low-cost UAV-derived imagery and (ii) UAV-based high-density lidar (light detection and ranging). Functions from open-source R packages were implemented to develop a canopy height model (CHM) and perform ITD utilizing the local maxima (LM) algorithm. ITD accuracy statistics were derived through manual visual interpretation of high-resolution imagery and through a field-data-based accuracy assessment. As the intended audience is beginners in remote sensing, we have adopted a very simple methodology and chosen study plots with relatively open canopies to demonstrate our proposed approach; the respective R codes and sample plot data are available as supplementary materials.

1 Introduction

Monitoring and quantifying canopy growth, the likelihood of plant disease, and changes within forest structure – especially at the tree level – on a timely basis is crucial for optimizing yields and for determining the response of forests to climate anomalies. Although traditional field-based methodologies provide us with detailed data, these tasks can be expensive, time-consuming, and labor-intensive, especially when monitoring and measuring large areas of forested landscape [1,2,3,4,5,6]. Consequently, there is a need to tap into state-of-the-art remote sensing methodologies, in particular unmanned aerial vehicles (UAVs).

The past decade has witnessed the proliferation of UAV applications using both optical and lidar (light detection and ranging) sensors in the forestry sector, owing to advancements in sensors, platforms, and software [7]. In this regard, individual tree detection (ITD) can be considered one of the most important UAV applications, as it can provide information on numerous forest structural attributes – such as tree height, crown width, diameter at breast height (dbh), aboveground biomass, forest uniformity, and wood quality [2,3,4,5,8]. For ITD, algorithms such as local maxima (LM), marker-controlled watershed segmentation (MCWS), template matching (TM), valley following (VF), scale-space (SS) theory, and Markov random fields (MRFs) have been employed previously and found applicable in different studies [9,10,11,12,13,14,15,16,17,18]. Similarly, ITD has been applied at various spatial, spectral, and temporal resolutions [19,20,21,22].

The wide array of algorithms and approaches for UAV-based ITD can be challenging for users of UAV data without a strong background in remote sensing and/or programming. Here, we provide a step-by-step, easy-to-implement tutorial on ITD applied to canopy height models (CHMs) derived from (i) a low-cost UAV (with RGB bands) and (ii) high-density lidar. For an overview of advanced UAV-based ITD applications and state-of-the-art ITD algorithms, please refer to [10,19,23,24,25,26,27]. For the purpose of this study, we made use of the lidR and rLiDAR packages in RStudio, built on the LM algorithm [9,28,29,30]; the open-source codes are included as a supplementary file, and each function used is described in detail in Section 3. R was chosen as the medium since it is one of the simplest and most widely used programming languages, and it is also open source. For getting started with R programming, please refer to refs [31,32].

Since it is uneconomical to acquire frequent field inventory data for small-scale studies, we present an option to compute tree detection accuracy statistics based on manual visual interpretation of high-resolution imagery, which has proved to be an efficient strategy for open canopy forests [2,5]. In addition, for the high-density lidar data, we performed an accuracy assessment using field data. In the same vein, given that the objective of the study is to help early-stage researchers and forest managers without technical remote sensing experience understand the benefits and implementation of ITD, a very simple-to-implement methodology is presented here, with study plots having relatively open canopies. Nevertheless, it should be noted that for denser canopy forests with high levels of crown overlap, the proposed approach might need to be modified; advanced users are referred to [5,33].

2 Data acquisition, processing, and download

2.1 UAV-SfM data

For this tutorial, we considered two study sites where point clouds were obtained using different techniques (see Figure 1). In the first study site, point clouds were built from a UAV carrying an RGB sensor through the stereo matching of multiple overlapping aerial images. This method is referred to as the SfM technique, and it simultaneously solves for scene geometry and for camera positions and orientations [23,24,25,26]. For this study site, trees were not georeferenced in the field. This is shown as a low-cost alternative that is useful for multiple applications and for regions where purchasing sensors or conducting extensive field data collection is not feasible, although limitations related to accuracy should be considered [25,34]. The site, approximately 700 ha, is located in the E.O. Siecke State Forest, East Texas, and is managed by the Texas A&M Forest Service.

Figure 1

The study sites and respective point clouds from the UAV surveys. (a) United States; (b) Texas; (c) Florida; UAV surveyed area of (d) study site 1 and (e) study site 2; example of the SfM point cloud for the (f) study site 1 in Texas and the (g) study site 2 in Florida.

At this site, we selected an area of 11.95 ha, dominated by the oldest slash pine (Pinus elliottii Engelm.) plantations. The aerial imagery was acquired in August 2020 using a DJI Mavic Pro quadcopter. The RGB imagery was collected using the Pix4Dcapture flight planning app (https://pix4d.com/) with the specifications described below (see Table 1). Pix4D Mapper was used for initial image processing, point cloud generation, and orthomosaic creation. For the point cloud densification, an image scale of ¼ and an optimal point density were used. The output point cloud was generated in the LAS format. The raster digital surface model (DSM) was created using the inverse distance weighting method.

Table 1

UAV-SfM parameters and related specifications

Parameters Specifications
Ground spatial resolution 3.47 cm/px
Sensor type 1/2.3″ (CMOS)
Sensor resolution 12.71 MP
Camera angle 60°
Flying altitude 91.4 m
Front overlap 80%
Side overlap 75%

2.2 UAV-lidar data

The second study site was surveyed by a UAV-lidar system and had trees precisely mapped in the field, representing a best-case scenario regarding point cloud generation and field validation. The site is the Ordway Swisher Forest Dynamics Plot (OSFDP) at the Ordway-Swisher Biological Station in Florida. The station is operated as a long-term research facility by the University of Florida and is part of the Forest Global Earth Observatory (ForestGEO) network (https://forestgeo.si.edu/). The plot has an area of 23.04 ha and was established and mapped from March 2019 to February 2020. All trees (dbh > 1 cm) were tagged and had their species, dbh, height, status (living or dead), crown light exposure, and position recorded. Tree mapping was done by first locating each tree within a 40 by 40 m quadrat and then measuring the azimuth (Suunto K-14 sighting compass) and the distance (metal tape) to the georeferenced quadrat center point. The quadrat center point geolocation was acquired in a real-time kinematic (RTK) survey using two survey-grade GPS receivers, a tape to measure antenna height above the monument, a tripod for the base, a bipod for the rover, a range pole, and a Topcon Data Collector. The dominant species were longleaf pine (P. palustris) and turkey oak (Quercus laevis Walter). The plot measurements are publicly available at the ForestGEO website upon request (https://forestgeo.si.edu/explore-data/ordway-swisher-termsconditionsrequest-forms) and subject to agreement with the use terms.

The lidar data were collected in June 2019 with the GatorEye Unmanned Flying Laboratory (http://www.gatoreye.org/). This system is composed of a DJI M 600 Pro hexacopter with a Phoenix Scout Ultra core, which has a STIM300 inertial measurement unit (IMU) coupled with a differential GNSS antenna and integrates a Velodyne Ultra Puck 32c, a 24 MP visual camera, and a Headwall Photonics Nano hyperspectral camera. The lidar dense point clouds (∼288 pts/m2) were acquired from three separate flights 80 m apart, following terrain at 80 m above ground level and at a ground speed of 10 m/s (see Table 2). Flightlines were post-processed using the GatorEye multi-scale post-processing (GMSPP) workflow (v. 229 detailed at http://www.gatoreye.org/).

Table 2

UAV-lidar parameters and related specifications

Parameters Specifications
No. of lasers in Lidar Sensor 32
Max. range of individual laser 220 m
Forward-backward FOV 40°
Side-to-side FOV 360°
Pulses per second 600,000
Returns per pulse Dual (strongest and last)

3 Workflow

Herein, we walk you step by step through the process of ITD from the 3D point clouds. This section is divided into three subsections: (i) CHM generation, (ii) tree detection, and (iii) accuracy assessment. Figures 2 and 3 show the overall workflow and process. Full-length open access codes and step-by-step information for accessing and downloading the data are shared as a pdf file (ITD_Tutorial_2021_RCodes; hereafter referred to as S1) in the supplementary section. We encourage first-time users to download the available sample UAV-SfM and UAV-lidar point cloud data for practice and to follow the R coding instructions in S1 alongside this section. The tutorial was built using the R packages rLiDAR 0.1.1 [29] and lidR 3.0.4 [35] and their dependencies. Any queries or concerns can be communicated directly to the authors using the Google Form (https://tinyurl.com/ITD-Tutorial-2021-Feedback), which we have created for troubleshooting.

Figure 2

Graphical illustration of ITD; (a) drone survey on a forest, (b) top view of the forest, (c) side view of the forest, and (d) top view of a treetop; herein, top view images resemble what we would observe in UAV-captured imagery.

Figure 3

Workflow diagram summarizing the steps for ITD and validation.

3.1 Canopy height model (CHM) generation

CHM refers to the distance between the ground level and the topmost point of the objects under consideration (treetops in our case) and gives the actual height of the objects (Figure 4). Traditionally, CHMs can be generated by (i) subtracting the digital terrain model (DTM; which gives a measure of the ground elevation) from the DSM (which represents features on the earth's surface) or (ii) height-normalizing the point cloud using the DTM elevation values and deriving the elevation model from the topmost point cloud returns.

Figure 4

DSM, DTM, and CHM generation.

We used the functions available in the lidR package [35] for CHM generation and, as examples for processing, selected at random one sample plot of 900 m² (30 m × 30 m) in each study site. The plot polygons are included in the data sources described in Sections 2.1 and 2.2 in “.shp” format; you can download them (as per the instructions provided in Section 2.3) and load them into RStudio.

  • Step 1 – Loading the data: Use the readLAS function to load the 3D point clouds. This function can read both “.las” and “.laz” files (lines 6 and 8 of S1). Specifications and standards of the “.las” format can be found at http://www.asprs.org/a/society/committees/standards/LAS_1_4_r13.pdf. For files too large to be loaded at once (for example, if you are loading data for the entire study area in one go), the readLAScatalog function can be used. This function acts as a representation of one or multiple .las files and works with several lidR functions for processing the point clouds without loading them into R (details on the LAScatalog can be found here: https://cran.r-project.org/web/packages/lidR/vignettes/lidR-LAScatalog-class.html).
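
A minimal sketch of this step (the file and folder names below are placeholders for the S1 sample data, not paths from the tutorial):

```r
# Load lidR and read the sample point clouds; file names are placeholders.
library(lidR)

las_sfm   <- readLAS("plot_sfm.las")    # UAV-SfM point cloud (Texas site)
las_lidar <- readLAS("plot_lidar.laz")  # UAV-lidar point cloud (Florida site)

# For acquisitions too large to load at once, index the files instead;
# the folder path is also a placeholder.
ctg <- readLAScatalog("path/to/las/files/")
```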

  • Step 2 – Clipping the area of interest: Use the clip_roi function to clip a point cloud. This function allows clipping the point cloud based on a given geometry, such as a shapefile. Clipping can help with the visualization of a region of interest (ROI) or with increasing processing efficiency. We clipped one of the example plots in each site as the ROI. In this example, we created a 3 m buffer (lines 14 and 15 of S1) around the plot before clipping (lines 18 and 19 of S1) in order to avoid edge effects in the subsequent processes.
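
A sketch of the buffering and clipping, assuming the point cloud from Step 1 is loaded as `las` and the plot polygon shapefile name is a placeholder:

```r
# Buffer the plot polygon by 3 m and clip the point cloud to it.
library(lidR)
library(sf)

plot_poly <- st_read("plot_boundary.shp")    # plot polygon (placeholder name)
plot_buff <- st_buffer(plot_poly, dist = 3)  # 3 m buffer against edge effects
las_roi   <- clip_roi(las, plot_buff)        # clipped point cloud
```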

  • Step 3 – Classifying ground points: Use the classify_ground function to classify the points that represent the ground. This function changes the “Classification” attribute of ground points to the value “2”, following the .las file formatting standards (see step 1). For lidar point clouds, there is also the option of using only the last returns in this process, as those are the most likely to come from the ground. We can assess the quality of the ground classification by plotting the point cloud by its classification values (Figure 3b; lines 31 and 32 of S1) or by directly assessing the generated DTM in the next step. If improvements are necessary, the parameters of the algorithm can be changed empirically or based on previous publications in similar types of forests. Furthermore, lidR provides other ground filtering algorithms, such as CSF [36]. The points classified as ground returns are used in the next step to generate the DTM.
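
As a sketch, ground classification with the CSF algorithm [36] looks like the following; the algorithm choice and default parameters shown here are illustrative and may need tuning for your terrain:

```r
# Classify ground returns (Classification attribute set to 2) using CSF.
library(lidR)

las_roi <- classify_ground(las_roi, algorithm = csf())

# Visual quality check: color the point cloud by its classification values
plot(las_roi, color = "Classification")
```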

  • Step 4 – Creating a DTM: Use the grid_terrain function for creating the DTM. This function creates a rasterized surface representing the terrain by interpolating the ground points and has several algorithms available. Herein, we used the k-nearest neighbor with inverse distance weighting (lines 37 and 38 of S1). The cell size of the DTM can be defined as a function of the point density [37], empirically, or based on previous works in similar types of forests. Given the low tree density in most of the area and the high density of the point clouds (especially for the lidar dataset), we defined the cell size to be 0.25 m. The quality and consistency should be assessed in this part on a case-by-case basis as the subsequent process relies on it. For instance, you can use the function plot_dtm3d as a reference to visually assess the consistency of the DTM (Figure 5).
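
A sketch of the DTM interpolation at the 0.25 m cell size used in this tutorial; the k and p values shown are lidR defaults, not parameters prescribed by the text:

```r
# Interpolate the ground points into a 0.25 m DTM using k-nearest-neighbour
# inverse distance weighting.
library(lidR)

dtm <- grid_terrain(las_roi, res = 0.25, algorithm = knnidw(k = 10, p = 2))

# Visually assess the consistency of the terrain surface
plot_dtm3d(dtm)
```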

  • Step 5 – Height normalization of the point cloud: Use the function normalize_height for normalizing the point cloud elevation values. This algorithm subtracts the DTM elevation (z value) from the elevation of all points. After this process, the lowest returns (i.e., ground returns) in the point cloud are set to ∼0, and the point cloud elevation values (z) will represent the true height of the objects (lines 43 and 44 of S1).
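
A minimal sketch of the height normalization, continuing from the objects of the previous steps:

```r
# Subtract the DTM elevations so that z becomes height above ground.
library(lidR)

las_norm <- normalize_height(las_roi, dtm)

# Ground returns should now sit at approximately 0 m
summary(las_norm$Z)
```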

  • Step 6 – Creating a CHM: Use the grid_canopy function for creating the CHM. This function creates a rasterized surface using the upper returns of the point cloud and has several algorithm options (refer to the lidR package description for more details; https://cran.r-project.org/web/packages/lidR/index.html). Herein, we used the p2r (point to raster) method, which retains the highest value within each user-defined grid cell and interpolates these values into a rasterized surface (lines 48 and 49 of S1).
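
A sketch of the CHM rasterization; the 0.25 m resolution mirrors the DTM cell size used earlier:

```r
# Rasterize the highest normalized returns into a CHM with the p2r algorithm.
library(lidR)

chm <- grid_canopy(las_norm, res = 0.25, algorithm = p2r())
plot(chm)  # quick visual check of the CHM
```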

Figure 5

Processing steps for point cloud height normalization using the SfM and lidar-derived point clouds from the study sites in Texas and Florida (a and b). Starting from the raw point cloud with elevation values (a1 and b1), the ground points are classified (a2 and b2). By subtracting the DTM from the point cloud elevation values (a3 and b3), a normalized point cloud with true height values is obtained (a4 and b4).

3.2 Individual tree detection (ITD)

One of the most well-known, effective, and simplest methods for ITD is the LM algorithm, which is the one adopted in this study. Within the LM algorithm, treetops are associated with the high-intensity local maxima of the imagery, and smoothing techniques and height thresholds can be further applied to remove spread-out tree branches and contorted snags, which might otherwise create spurious LM [1,2,28,29,38]. For our study, we used a smoothing window size (SWS) of 5 × 5 pixels, as advised for open canopies [1,2,5]. Additionally, a fixed tree window size (TWS) of 7 × 7 and 5 × 5 pixels was employed for the Texas and Florida sites, respectively, as the TWS defines the boundary within which the algorithm looks for treetops. For more information on optimizing SWS and TWS combinations to enhance tree detection accuracy based on canopy density, please refer to [5].

  • Step 7 – Smoothening the CHM: We used the function CHMsmoothing to smoothen the CHM (see Figure 6). This is an optional but commonly used process that can improve the treetop detection results, as it helps eliminate spurious LM (e.g., caused by branches) [1,2,28,29,38]. The function uses an image convolution kernel and allows the application of mean, median, or Gaussian filters. We used the “mean” option, as often applied, with a 5 × 5 window size. Note that the mean filter changes the pixel values, which may need to be taken into account when assessing the height of the objects (lines 53 and 54 of S1).
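
A one-line sketch of the smoothing with the tutorial's parameters (mean filter, 5 × 5 window), assuming the CHM from Step 6 is held in `chm`:

```r
# Smooth the CHM with a 5 x 5 mean convolution filter (rLiDAR).
library(rLiDAR)

chm_smooth <- CHMsmoothing(chm, filter = "mean", ws = 5)
```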

  • Step 8 – Detecting the treetops: The function FindTreesCHM, which employs an LM filter, was applied to detect the treetops and retrieve their heights (see Figure 6). The LM are found using a user-defined fixed window. It is also necessary to define a minimum height threshold to exclude non-tree features. We defined fixed windows of 7 × 7 (Texas) and 5 × 5 (Florida) grid cells for the treetop detection. Furthermore, a 3 m minimum height threshold was set to avoid shrubs and dead trees (lines 56 and 57 of S1).
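
A sketch of the detection call with the tutorial's thresholds (fws = 7 for the Texas site, 5 for Florida; 3 m minimum height):

```r
# Detect treetops on the smoothed CHM with an LM filter.
library(rLiDAR)

treetops <- FindTreesCHM(chm_smooth, fws = 7, minht = 3)
head(treetops)  # x and y coordinates and height of each detected treetop
```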

Figure 6

Data processing and CHM creation (a1 and b1); smoothened CHMs (a2 and b2); application of LM filter to detect the treetops (a3 and b3). The CHMs were generated from the SfM (a1–3) and lidar-derived point clouds (b1–3) obtained in UAV surveys in Texas and Florida sites, respectively.

LM filters are a well-known, widely used, simple, and effective method for ITD. Keep in mind, however, that this is a raster-based method and only allows the identification of trees in the upper canopy strata. For more complex forest environments, such as areas with steep slopes or dense canopies, alternative algorithms – such as TM, VF, or deep learning – and approaches, or their combinations, should be tested [19,39,40,41,42,43,44]. Furthermore, it is worth noting that most of the functions used here have optional parameters that may differ depending on the forest structure and must be tested based on inferences drawn from previous studies [2]. The R packages presented here also have several other interesting features – for filtering (see the lidR filter_poi function), processing large datasets (see the lidR catalog_apply function), delineating crowns (see the rLiDAR ForestCAS function), segmenting tree crowns (see the lidR segment_trees function), and building 3D forest representations (see the rLiDAR LiDARForestStand function) – that we encourage users to explore.

3.3 Accuracy assessment

For validation purposes, we compared the LM-based ITD results with manual interpretation tree counts made from the high-resolution imagery for the UAV-SfM dataset. This is the most common method used for regional-level studies, as the acquisition of concurrent field data is not economically viable for small-scale landowners. For the UAV-lidar data, we used the positions of the trees in the field as the reference for comparison. A total of 10 random plots of 900 m² (30 m × 30 m) in each site were considered for this study. The accuracy metrics calculated include true positives (TP, correct detections), false negatives (FN, omission errors), false positives (FP, commission errors), recall (r), precision (p), and F-score (F); Equations (1)–(3) are listed below, and for more information on the individual metrics, please refer to [45,46,47]. Recall gives a measure of the trees detected, precision gives a measure of the detections that are correct, and the F-score provides a measure of the test's accuracy as the harmonic mean of recall and precision.

(1) r = TP / ( TP + FN ) ,

(2) p = TP / ( TP + FP ) ,

(3) F = 2 r p / ( r + p ) .
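
As a quick worked example, the overall Texas-site counts from Table 3 (TP = 49, FP = 12, FN = 16) can be plugged into Equations (1)–(3):

```r
# Recall, precision, and F-score from detection counts, per Equations (1)-(3).
itd_metrics <- function(tp, fp, fn) {
  r <- tp / (tp + fn)       # Equation (1): recall
  p <- tp / (tp + fp)       # Equation (2): precision
  f <- 2 * r * p / (r + p)  # Equation (3): F-score
  c(recall = r, precision = p, fscore = f)
}

# Overall Texas-site counts from Table 3
round(itd_metrics(tp = 49, fp = 12, fn = 16), 2)  # 0.75, 0.80, 0.78
```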

Overall, the F-scores were approximately 0.8, and omission errors were higher than commission errors (see Table 3). In the Texas site, 49 of the 65 reference trees were correctly detected; the omission and commission errors in this site were 24.6 and 18.4%, respectively. In the Florida site, 70 of the 87 reference trees were correctly detected, with an omission error of 19.5% and a commission error of 18.3%. It should be borne in mind that tree detection accuracy is highly dependent on forest structure, topography, and canopy characteristics. Homogeneous forests, plantations, and woodlands will usually present higher accuracies, as the same window size works in a similar way throughout the area [2,5]. We highly recommend that users base their expected accuracies on previous work in the same type of forest. To improve the accuracy and F-score, it is possible to vary the smoothening and LM filter window sizes, as well as to check the consistency of the DTM.

Table 3

Accuracy assessment statistics for ITD using LiDAR and structure from motion (SfM)-derived point clouds

Source Subplots Detected Reference FP FN TP Recall Precision F-score
UAV-SfM P1 3 4 1 2 2 0.5 0.67 0.57
(Texas) P2 10 10 2 2 8 0.8 0.8 0.8
P3 1 1 0 0 1 1 1 1
P4 7 7 1 1 6 0.86 0.86 0.86
P5 4 6 1 3 3 0.5 0.75 0.6
P6 5 5 1 1 4 0.8 0.8 0.8
P7 7 9 1 3 6 0.67 0.86 0.75
P8 11 12 2 3 9 0.75 0.82 0.78
P9 9 8 2 1 7 0.88 0.78 0.82
P10 4 3 1 0 3 1 0.75 0.86
Overall 61 65 12 16 49 0.75 0.8 0.78
UAV-Lidar P1 10 11 2 3 8 0.73 0.8 0.76
(Florida) P2 8 6 2 0 6 1 0.75 0.86
P3 4 6 1 3 3 0.5 0.75 0.6
P4 8 8 2 2 6 0.75 0.75 0.75
P5 9 7 2 0 7 1 0.78 0.88
P6 9 9 2 2 7 0.78 0.78 0.78
P7 9 8 2 1 7 0.88 0.78 0.82
P8 13 13 2 2 11 0.85 0.85 0.85
P9 6 8 0 2 6 0.75 1 0.86
P10 10 11 1 2 9 0.82 0.9 0.86
Overall 86 87 16 17 70 0.8 0.81 0.8

4 Immediate applications and conclusion

As a tutorial-type short paper, this article is intended to introduce lay readers to the applicability of UAVs for ITD and its applications in the forest conservation, management, and policy analysis sectors. Our results underscore the potential of simple and easy-to-apply algorithms, in combination with drone-derived CHMs, for performing ITD in open canopies. By making 3D point cloud data freely available and including open-source R programming codes along with a description of the workflow implemented, we expect the task of ITD to become less daunting and highly accessible to a broad audience. Thus, the study contributes to expanding the use of point cloud data, not only among scientists and professionals in the remote sensing field, but also among those who are unaware of the possibilities and the multiple available resources for processing point cloud data. Furthermore, it can be a first step for early career remote sensing students or anyone starting out in point cloud data processing. After obtaining a strong understanding of the ITD methodology and having successfully executed the workflow presented here, users will be in a position to venture into various applications of ITD within the forestry, conservation, and management spectrums (see Figure 7). Nonetheless, it should be borne in mind that the proposed LM-based approach is only one among many approaches/technologies for ITD; LM was proposed for its simplicity relative to alternative methods rather than for producing especially accurate tree detection results. A list of references to related research articles and open-source R programming packages is also provided in the reference section [19,29,35,39,48,49,50,51,52,53,54,55,56,57] to encourage public participation.

Figure 7

(a) Species classification; (b) tree crown delineation; (c) fruit/yield estimation; (d) biomass estimation; (e) habitat structural assessment; (f) forest uniformity; (g) forest health/pest monitoring; and (h) disturbed forest and recovery tracking.

Acknowledgments

We acknowledge the Morobe Development Foundation, Papua New Guinea, for supporting the research endeavors. Additionally, we appreciate the help of the UN Volunteers, Omkar Khadamkar (for his support with graphical illustrations) and Esmaeel Adrah (for his GIS-related assistance), as well as Dr Carlos Alberto Silva of the University of Florida and Tek Kshetri. We thank the McIntire-Stennis program of the USDA for its support of the GatorEye program as well as of the TFS Data Collection initiative.

  1. Funding information: The publication costs related to the research are covered by the National University of Malaysia (UKM) research grants GGPM-2020-034 and GUP-2018-132.

  2. Author contributions: All the authors have made a substantial contribution towards the successful completion of this manuscript and have been involved in designing the study, drafting the manuscript, and engaging in critical discussion. M.M. and R.V.L. developed the original draft and foundational framework for the study. M.M., R.V.L., E.N.B., W.S.W.M.J., S.S., S.B., A.P.D.C., C.H.A., E.L., and G.G. contributed to the conceptualization, data processing, data analysis, visualization, validation, project supervision, and write up. E.N.B., S.S., D.J.J., S.N.M.S., A.M.M.K., A.M.A.Z., S.B., A.C., and G.A.P. helped with the interpretation, quality control, fieldwork, and revisions of the manuscript. M.M., R.V.L., E.N.B., S.S., S.B., W.D., and W.S.W.M.J. assisted with public relations, funding acquisition, end-user feedback acquisition, and conservation awareness endeavors. All authors have read and approved the final manuscript.

  3. Conflict of interest: The authors declare no conflict of interest.

  4. Data availability statement: The respective R codes and sample plot data are available as supplementary materials.

References

[1] Silva CA, Hudak AT, Vierling LA, Loudermilk EL, O’Brien JJ, Hiers JK, et al. Imputation of individual Longleaf pine (Pinus palustris mill.) tree attributes from field and LiDAR data. Can J Remote Sens/J Can Teledetect. 2016;42(5):554–73.10.1080/07038992.2016.1196582Search in Google Scholar

[2] Mohan M, Silva C, Klauberg C, Jat P, Catts G, Cardil A, et al. Individual tree detection from unmanned aerial vehicle (UAV) derived canopy height model in an open canopy mixed conifer forest. Forests. 2017;8(9):340.10.3390/f8090340Search in Google Scholar

[3] Wan Mohd Jaafar WS, Woodhouse I, Silva C, Omar H, Abdul Maulud K, Hudak A, et al. Improving individual tree crown delineation and attributes estimation of tropical forests using airborne LiDAR data. Forests. 2018;9(12):759. 10.3390/f9120759.

[4] Wan Mohd Jaafar WS, Woodhouse I, Silva CA, Omar H, Hudak AT. Modelling individual tree aboveground biomass using discrete return LiDAR in lowland dipterocarp forest of Malaysia. J Trop For Sci. 2017;29(4):465–84.

[5] Mohan M, Mendonça BAF de, Silva CA, Klauberg C, de Saboya Ribeiro AS, Araújo EJG de, et al. Optimizing individual tree detection accuracy and measuring forest uniformity in coconut (Cocos nucifera L.) plantations using airborne laser scanning. Ecol Model. 2019;409:108736. 10.1016/j.ecolmodel.2019.108736.

[6] Dalla Corte AP, Rex FE, Almeida DRA de, Sanquetta CR, Silva CA, Moura MM, et al. Measuring individual tree diameter and height using GatorEye high-density UAV-lidar in an integrated crop-livestock-forest system. Remote Sens. 2020;12(5):863. 10.3390/rs12050863.

[7] Gonzalez-Aguilera D, Rodriguez-Gonzalvez P. Drones – an open access journal. Drones. 2017;1:1. 10.3390/drones1010001.

[8] van Leeuwen M, Hilker T, Coops NC, Frazer G, Wulder MA, Newnham GJ, et al. Assessment of standing wood and fiber quality using ground and airborne laser scanning: a review. For Ecol Manag. 2011;261(9):1467–78. 10.1016/j.foreco.2011.01.032.

[9] Pinz A. A computer vision system for the recognition of trees in aerial photographs. Multisource Data Integr Remote Sens. 1991;3099:111–24.

[10] Ke Y, Quackenbush LJ. A review of methods for automatic individual tree-crown detection and delineation from passive remote sensing. Int J Remote Sens. 2011;32(17):4725–47. 10.1080/01431161.2010.494184.

[11] Rinnamang S, Sirirueang K, Supavetch S, Meunpong P. Estimation of aboveground biomass using aerial photogrammetry from unmanned aerial vehicle in teak (Tectona grandis) plantation in Thailand. Biodiversitas. 2020;21(6):2369–76. 10.13057/biodiv/d210605.

[12] Nagai S, Saitoh TM, Kajiwara K, Yoshitake S, Honda Y. Investigation of the potential of drone observations for detection of forest disturbance caused by heavy snow damage in a Japanese cedar (Cryptomeria japonica) forest. J Agric Meteorol. 2018;74(3):123–7. 10.2480/agrmet.D-17-00038.

[13] Saarinen N, Vastaranta M, Näsi R, Rosnell T, Hakala T, Honkavaara E, et al. Assessing biodiversity in boreal forests with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sens. 2018;10(2):338. 10.3390/rs10020338.

[14] Stupariu M-S, Pleșoianu A-I, Pătru-Stupariu I, Fürst C. A method for tree detection based on similarity with geometric shapes of 3D geospatial data. ISPRS Int J Geo-Inf. 2020;9(5):298. 10.3390/ijgi9050298.

[15] Huang H, Li X, Chen C. Individual tree crown detection and delineation from very-high-resolution UAV images based on bias field and marker-controlled watershed segmentation algorithms. IEEE J Sel Top Appl Earth Obs Remote Sens. 2018;11(7):2253–62. 10.1109/JSTARS.2018.2830410.

[16] Brandtberg T, Walter F. Automated delineation of individual tree crowns in high spatial resolution aerial images by multiple-scale analysis. Mach Vis Appl. 1998;11(2):64–73. 10.1007/s001380050091.

[17] Larsen M, Eriksson M, Descombes X, Perrin G, Brandtberg T, Gougeon FA. Comparison of six individual tree crown detection algorithms evaluated under varying forest conditions. Int J Remote Sens. 2011;32(20):5827–52. 10.1080/01431161.2010.507790.

[18] Descombes X, Pechersky E. Tree crown extraction using a three state Markov random field. Rocquencourt, France: INRIA; 2006.

[19] Ferreira MP, Almeida DRA de, Papa D de A, Minervino JBS, Veras HFP, Formighieri A, et al. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For Ecol Manag. 2020;475:118397. 10.1016/j.foreco.2020.118397.

[20] Guerra-Hernández J, González-Ferreiro E, Monleón V, Faias S, Tomé M, Díaz-Varela R. Use of multi-temporal UAV-derived imagery for estimating individual tree growth in Pinus pinea stands. Forests. 2017;8(8):300. 10.3390/f8080300.

[21] Johansen K, Raharjo T, McCabe M. Using multi-spectral UAV imagery to extract tree crop structural properties and assess pruning effects. Remote Sens. 2018;10(6):854. 10.3390/rs10060854.

[22] Nevalainen O, Honkavaara E, Tuominen S, Viljanen N, Hakala T, Yu X, et al. Individual tree detection and classification with UAV-based photogrammetric point clouds and hyperspectral imaging. Remote Sens. 2017;9(3):185. 10.3390/rs9030185.

[23] Haala N, Hastedt H, Wolf K, Ressl C, Baltrusch S. Digital photogrammetric camera evaluation – generation of digital elevation models. PFG Photogramm Fernerkund Geoinf. 2010;2010(1):99–115. 10.1127/1432-8364/2010/0043.

[24] Baltsavias E, Gruen A, Eisenbeiss H, Zhang L, Waser LT. High-quality image matching and automated generation of 3D tree models. Int J Remote Sens. 2008;29(5):1243–59. 10.1080/01431160701736513.

[25] Westoby MJ, Brasington J, Glasser NF, Hambrey MJ, Reynolds JM. "Structure-from-Motion" photogrammetry: a low-cost, effective tool for geoscience applications. Geomorphology. 2012;179:300–14. 10.1016/j.geomorph.2012.08.021.

[26] Iglhaut J, Cabo C, Puliti S, Piermattei L, O’Connor J, Rosette J. Structure from motion photogrammetry in forestry: a review. Curr For Rep. 2019;5(3):155–68. 10.1007/s40725-019-00094-3.

[27] Navarro A, Young M, Allan B, Carnell P, Macreadie P, Ierodiaconou D. The application of unmanned aerial vehicles (UAVs) to estimate above-ground biomass of mangrove ecosystems. Remote Sens Environ. 2020;242:111747. 10.1016/j.rse.2020.111747.

[28] Korpela I, Anttila P, Pitkänen J. The performance of a local maxima method for detecting individual tree tops in aerial photographs. Int J Remote Sens. 2006;27(6):1159–75. 10.1080/01431160500354070.

[29] Silva CA, Crookston NL, Hudak AT, Vierling LA. rLiDAR: an R package for reading, processing and visualizing LiDAR (light detection and ranging) data, version 0.1. Available online: https://cran.r-project.org/web/packages/rLiDAR/index.html (accessed on 15 January 2021).

[30] R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2015. Available online: http://www.R-project.org (accessed on 15 January 2021).

[31] Dalgaard P. Introductory statistics with R. 2nd edn. New York, NY: Springer; 2008. 10.1007/978-0-387-79054-1.

[32] Jones O, Maillardet R, Robinson A. Introduction to scientific programming and simulation using R. 2nd edn. Boca Raton, FL: CRC Press; 2014. 10.1201/b17079.

[33] Weinstein BG, Marconi S, Bohlman SA, Zare A, White EP. Cross-site learning in deep learning RGB tree crown detection. Ecol Inform. 2020;56:101061. 10.1016/j.ecoinf.2020.101061.

[34] Guerra-Hernández J, Cosenza DN, Rodriguez LCE, Silva M, Tomé M, Díaz-Varela RA, et al. Comparison of ALS- and UAV(SfM)-derived high-density point clouds for individual tree detection in Eucalyptus plantations. Int J Remote Sens. 2018;39(15–16):5211–35. 10.1080/01431161.2018.1486519.

[35] Roussel J-R, Auty D, Coops NC, Tompalski P, Goodbody TRH, Meador AS, et al. lidR: an R package for analysis of airborne laser scanning (ALS) data. Remote Sens Environ. 2020;251:112061. 10.1016/j.rse.2020.112061.

[36] Zhang W, Qi J, Wan P, Wang H, Xie D, Wang X, et al. An easy-to-use airborne LiDAR data filtering method based on cloth simulation. Remote Sens. 2016;8(6):501. 10.3390/rs8060501.

[37] Chen Q, Baldocchi D, Gong P, Kelly M. Isolating individual trees in a savanna woodland using small footprint lidar data. Photogramm Eng Remote Sens. 2006;72(8):923–32. 10.14358/PERS.72.8.923.

[38] Gebreslasie MT, Ahmed FB, Van Aardt JAN, Blakeway F. Individual tree detection based on variable and fixed window size local maxima filtering applied to IKONOS imagery for even-aged Eucalyptus plantation forests. Int J Remote Sens. 2011;32(15):4141–54. 10.1080/01431161003777205.

[39] Duncanson LI, Cook BD, Hurtt GC, Dubayah RO. An efficient, multi-layered crown delineation algorithm for mapping individual tree structure across multiple ecosystems. Remote Sens Environ. 2014;154:378–86. 10.1016/j.rse.2013.07.044.

[40] Ayrey E, Fraver S, Kershaw Jr JA, Kenefic LS, Hayes D, Weiskittel AR, et al. Layer stacking: a novel algorithm for individual forest tree segmentation from LiDAR point clouds. Can J Remote Sens. 2017;43(1):16–27. 10.1080/07038992.2017.1252907.

[41] Khosravipour A, Skidmore AK, Wang T, Isenburg M, Khoshelham K. Effect of slope on treetop detection using a LiDAR canopy height model. ISPRS J Photogramm Remote Sens. 2015;104:44–52. 10.1016/j.isprsjprs.2015.02.013.

[42] Falkowski MJ, Smith AMS, Gessler PE, Hudak AT, Vierling LA, Evans JS. The influence of conifer forest canopy cover on the accuracy of two individual tree measurement algorithms using lidar data. Can J Remote Sens. 2008;34:S338–50. 10.5589/m08-055.

[43] Dalponte M, Coomes DA. Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data. Methods Ecol Evol. 2016;7:1236–45. 10.1111/2041-210X.12575.

[44] Wang L, Gong P, Biging GS. Individual tree-crown delineation and treetop detection in high-spatial-resolution aerial imagery. Photogramm Eng Remote Sens. 2004;70:351–7. 10.14358/PERS.70.3.351.

[45] Li W, Guo Q, Jakubowski MK, Kelly M. A new method for segmenting individual trees from the lidar point cloud. Photogramm Eng Remote Sens. 2012;78(1):75–84. 10.14358/PERS.78.1.75.

[46] Goutte C, Gaussier E. A probabilistic interpretation of precision, recall and F-score, with implication for evaluation. In: Lecture notes in computer science. Berlin, Heidelberg: Springer; 2005. p. 345–59. 10.1007/978-3-540-31865-1_25.

[47] Sokolova M, Japkowicz N, Szpakowicz S. Beyond accuracy, F-score and ROC: a family of discriminant measures for performance evaluation. In: Lecture notes in computer science. Berlin, Heidelberg: Springer; 2006. p. 1015–21. 10.1007/11941439_114.

[48] Silva CA, Valbuena R, Pinagé ER, Mohan M, Almeida DRA, North Broadbent E, et al. ForestGapR: an R package for forest gap analysis from canopy height models. Methods Ecol Evol. 2019;10(8):1347–56. 10.1111/2041-210X.13211.

[49] Russell MB. Nine tips to improve your everyday forest data analysis. J For. 2020;118(6):636–43. 10.1093/jofore/fvaa034.

[50] Kamoske AG, Dahlin KM, Stark SC, Serbin SP. Leaf area density from airborne LiDAR: comparing sensors and resolutions in a temperate broadleaf forest ecosystem. For Ecol Manag. 2019;433:364–75. 10.1016/j.foreco.2018.11.017.

[51] Teimouri M, Doser JW, Finley AO. ForestFit: an R package for modeling plant size distributions. Environ Model Softw. 2020;131:104668. 10.1016/j.envsoft.2020.104668.

[52] Atkins JW, Bohrer G, Fahey RT, Hardiman BS, Morin TH, Stovall AEL, et al. Quantifying vegetation and canopy structural complexity from terrestrial LiDAR data using the forestr R package. Methods Ecol Evol. 2018;9(10):2057–66. 10.1111/2041-210X.13061.

[53] Féret JB, de Boissieu F. biodivMapR: an R package for α- and β-diversity mapping using remotely sensed images. Methods Ecol Evol. 2020;11(1):64–70. 10.1111/2041-210X.13310.

[54] Malevich SB, Guiterman CH, Margolis EQ. burnr: fire history analysis and graphics in R. Dendrochronologia. 2018;49:9–15. 10.1016/j.dendro.2018.02.005.

[55] Stanke H, Finley AO, Weed AS, Walters BF, Domke GM. rFIA: an R package for estimation of forest attributes with the US forest inventory and analysis database. Environ Model Softw. 2020;127:104664. 10.1016/j.envsoft.2020.104664.

[56] Réjou-Méchain M, Tanguy A, Piponiot C, Chave J, Hérault B. BIOMASS: an R package for estimating above-ground biomass and its uncertainty in tropical forests. Methods Ecol Evol. 2017;8(9):1163–7. 10.1111/2041-210X.12753.

[57] Zhou T, Popescu S. waveformlidar: an R package for waveform LiDAR processing and analysis. Remote Sens. 2019;11(21):2552. 10.3390/rs11212552.

Received: 2021-02-19
Revised: 2021-06-14
Accepted: 2021-08-09
Published Online: 2021-09-13

© 2021 Midhun Mohan et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
