Abstract
Background: Production of high-quality 3D models of trees has a practical application in arboriculture, allowing the assessment, measurement, and recording of aboveground structural features of individual trees. Commercially available hobby UAVs (uncrewed aerial vehicles) with integrated cameras, in combination with a low-cost photogrammetry software package, can produce high-definition 3D models of tree structures with low financial investment. Methods: Our study compared an established orbital flight strategy and 2 novel close-range flight strategies for collecting digital imagery to produce 3D models of features of interest (FOIs) in trees. We used 3 separate tests to establish whether these different flight strategies resulted in reliable, measurable, and complete models of FOIs in mature broadleaf trees. Results: Our study substantiated the findings of previous studies, in that a well-planned, automated orbital flight strategy can render full models of trees from which accurate measurements of dimensions can be obtained. A flight strategy in which clusters of images were collected from different angles whilst the UAV stayed in one location was trialled, and the resulting models were typically incomplete and of low quality. However, the second close-range strategy of hand-flown missions running parallel with the FOI yielded highly detailed, complete models from which observations of the FOI’s surface and accurate measurements of its dimensions could be obtained. Conclusions: We conclude that there are substantial opportunities for the use of low-cost UAVs to undertake visual tree assessments of the aerial parts of trees, subject to ease of access to the feature and the right choice of flight strategy.
INTRODUCTION
Background
Uncrewed aerial vehicles (UAVs) with mounted, integrated cameras capable of capturing high-definition stills and video are now an established tool in the inspection of structures such as power grids, dams, and buildings (Buffi et al. 2018; Seo et al. 2018). UAVs are a cost-effective means of inspecting areas which are otherwise difficult to access. The same technology has the potential to be used in the inspection of component parts of trees that are high above the ground and thus also difficult to access (Priestley 2017; Cannon et al. 2018).
As this technology has developed, the size and weight of drones have decreased while the sophistication of the cameras has increased, along with battery power and flight times (Anderson and Gaston 2013). Internationally, there is great variation in the degree to which the use of UAVs is regulated. Recent changes in drone regulations in the UK have dispensed with the differentiation between hobbyist flying and commercial flight. Low-weight UAVs (sub-250 g) face some of the lowest restrictions for operating in urban spaces in the UK, which opens their use to a wider variety of inspection tasks within areas where UAV use had previously been restricted (The UK Civil Aviation Authority 2019).
Visual tree inspections are mainly undertaken from the ground with the surveyor aided by binoculars or cameras with telephoto lenses to assist with inspecting the stem, crown, and branches of a tree (Dunster et al. 2017). Even with such visual aids, the views obtained are limited to the bottom and sides of sections of branches and stems. It is not easy to accurately measure and evaluate many aerial features of trees from the ground due to this limitation. Where a ground-based tree inspection identifies a defect or structural feature of interest (FOI) higher up in a tree, a climbing arborist may be brought in to climb the tree and inspect these FOIs to inform tree management recommendations. Specialist aerial inspections can provide invaluable information otherwise unavailable to a tree surveyor. However, such aerial inspections involve specialist training, equipment, and insurance, and can add significant expense and time to a tree assessment.
Although tree structure can be modelled successfully with terrestrial LiDAR (Côté et al. 2009), this does not allow for a view downwards onto aerial features of trees. Aerial photography can provide additional visual information to a tree assessment, such as damage to the upper sides of branches and identification of pests and pathogens: e.g., branch decay caused by massaria disease of plane (Splanchnonema platani), which is often only visible on the upper side of horizontal branches near their point of attachment to the tree’s main stem (Schmitt et al. 2014). However, it can be difficult to extract quantifiable information on the dimensions of features from images or video alone. The dimensions of the features of a tree are important to a tree surveyor, to assess the scale of a structural fault or other feature. For example, the length of branches, diameter of wounds, angle and form of branch junctions, and the size of any deadwood in the crown of a tree are all valuable metrics for assessing the significance of these features and for providing suitable recommendations for their management (Cox and Melarange 2017; Slater 2022).
Photogrammetry, the production of digital 3D models from overlapping images, can produce high-definition, scaled 3D models from which detailed observations and measurements can be made (Ganz et al. 2019). Photogrammetry is a means of increasing the useable data from the information collected during an aerial inspection by a UAV.
Photogrammetry works by obtaining images of the object from multiple positions. The photogrammetry software uses data from those images, such as the position and angle of the camera for each image, and the camera characteristics, such as focal length, to create point clouds. From the point cloud a 3D mesh is produced, and information from the input images is overlaid to produce a final digital 3D model. The quality and accuracy of the 3D model is dependent on the quality and accuracy of the data used to build it. UAVs and photogrammetry software are regularly used to assess solid, static structures such as buildings, dams, and bridges. The digital 3D models which they produce can be easily shared with people for inspection, to communicate ideas, and for planning purposes. However, the sometimes complex form and flexibility of tree stems and branches often make the capture of quality imagery of these structures by UAVs more challenging. Structures which sway in windy conditions can make capturing sharp, well-exposed images more difficult. In addition, every tree has a different form and grows in different surroundings. Some methods of capturing images with a UAV may be more achievable than others in certain scenarios; for example, an automated orbital flight around a tree may not be possible if the subject tree is growing close to other trees in a woodland setting or close to tall buildings in an urban environment.
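To illustrate the underlying process, the following is a minimal sketch of the key-point matching and triangulation step at the core of photogrammetry for a single pair of overlapping images, written in Python with OpenCV. It is not the workflow of any particular commercial package: the file names and the focal-length value in the camera matrix are illustrative placeholders, and a full photogrammetry pipeline repeats and refines this step across hundreds of images before densification and meshing.

```python
# Minimal two-view sketch of the matching/triangulation step that underlies
# structure-from-motion photogrammetry. Commercial packages repeat this across
# hundreds of overlapping images and then densify and mesh the result.
# File names and intrinsics below are illustrative placeholders.
import cv2
import numpy as np

img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical files
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and describe key points in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match corresponding key points between the overlapping images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 3. Recover the relative camera pose from the matches (intrinsics assumed
#    known, e.g., taken from the image EXIF metadata as the software does).
K = np.array([[3000.0, 0, img1.shape[1] / 2],
              [0, 3000.0, img1.shape[0] / 2],
              [0, 0, 1]])
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate the matched points into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2,
                              pts1.T.astype(np.float64),
                              pts2.T.astype(np.float64))
sparse_cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 sparse points
print(f"Sparse cloud of {len(sparse_cloud)} points from one image pair")
```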
Current Research Gaps
The value and accuracy of UAV surveys is recognised in associated literature (e.g., Koh and Wich 2012; Seo et al. 2018; Nitoslawski et al. 2021). Several studies have investigated the use of UAVs and photogrammetry as a method of obtaining accurate measurements from trees for varying purposes, such as understanding tree growth and behaviour (e.g., Scher et al. 2019; Moreira et al. 2021) and forest mensuration (e.g., Krause et al. 2019; Ramalho de Oliveira et al. 2021). Many of these studies focus on landscape scale photogrammetric models or models of single trees from which core measurement data can be obtained, such as diameter at breast height (DBH) and overall tree height. For example, Krause et al. (2019) have shown the successful use of photogrammetry to accurately measure tree heights in forest stands, whilst Nezami et al. (2020) showed how UAV technology could identify different tree species within forests through spectral analysis and the use of artificial intelligence (AI). However, such studies have typically involved the use of high-cost systems, often bespoke systems, and most have not focused upon individual assessments of open-grown or urban trees.
To take existing UAV technology and make it into a viable tool for an arborist carrying out a visual inspection of the aerial parts of a single tree, the use of low-cost equipment that provides sufficiently informative visual models is needed. In addition, it is important to establish the best way a UAV can be deployed in terms of accuracy and efficiency of model production, and to identify key limitations to this approach to aerial tree surveying.
Research Objectives
Our first research objective was to determine the accuracy of photogrammetric models that could be obtained from low-cost UAVs and any key limitations. For the photogrammetric models to be recommended for tree surveys, the accuracy and level of detail that can be achieved needs to be quantified and objectively determined to be at an acceptable level for the purposes of the survey.
Our second research objective was to investigate which UAV flight path might provide a more accurate photogrammetric model of an FOI in a tree. We trialled 3 different flight strategies for capturing images of trees and processing into 3D models with the aim of determining if UAVs and photogrammetric models can be successfully used in the assessment of tree FOIs and thus inform management recommendations for trees.
MATERIALS AND METHODS
This study was designed to make use of low-cost, easily accessible hardware and software so that the results can be utilised by practitioners without the need for specialist, expensive equipment. The UAV used for this study was a DJI Mini 2® (DJI, Shenzhen, China) equipped with an integrated 12-mp camera with manual exposure controls. It is a hobby aircraft which currently costs less than £600 (USD $736) (Figure 1). This UAV weighs less than 250 g, which places it in the lowest level of restrictions in the UK, and it does not require a professional qualification to be flown. The software used was the lite version of 3DF Zephyr (3Dflow SRL, Verona, Italy) photogrammetry software, costing approximately £150 (USD $184) at the time of writing (3Dflow SRL 2022). The software used for the assessment of the dense point clouds from the field study was CloudCompare (EDF R&D, Los Altos, CA, USA), which is an open-source program (EDF R&D 2022).
This study assessed the quality of information that could be obtained from UAV-derived photogrammetry models of FOIs set above eye level. To do this, a 2-step approach was devised. First, we carried out an accuracy trial to assess the accuracy and reliability of the photogrammetric models for isolated tree branches in an artificial setting. The outcome of the first trial then fed forward to a field trial, where FOIs on the stems, crowns, and canopies of 4 mature oak trees (Quercus robur L.) were recorded using the UAV-mounted camera to capture images for processing into digital 3D models using photogrammetry software.
Accuracy Trial
This first trial tested the accuracy of digital 3D models processed from UAV-obtained digital images captured under ideal conditions. Using photogrammetry software, 3D models were created of tree branches, allowing direct comparisons between measurements taken from the 3D photogrammetric models and the subject branches.
Ten samples of variously sized fallen branches from broadleaf tree species were collected locally. These samples were elevated from the ground using a timber gantry crane, ropes, and pulleys (Figure 2A). Two photographic scales were attached to each upright leg of the gantry crane to allow for the photogrammetric models to be scaled. The UAV was then flown around the subject branches, collecting images from multiple angles at 2-second intervals, with an orbital radius of approximately 1.5 m. The camera has a 24-mm equivalent lens and a fixed aperture of f/2.8. The ISO of all images was set at 200, and the exposure was adjusted to the available light by changing the shutter speed to provide well-exposed pictures. All the flights were undertaken during similar flight conditions: less than 10-mph (4.5 m/s) winds to reduce movement of the elevated branches, and with overcast skies to provide a more even light with fewer shadows.
Each set of photographs was then processed using 3DF Zephyr Lite Steam Edition (3Dflow SRL, Verona, Italy). The full version of this software was previously found to be one of the most accurate and precise photogrammetry software packages for estimating tree heights (Lipwoni et al. 2022). The lite version of this program uses the same algorithm but has fewer functions, such as export file options, and has a maximum limit of 500 photographs for processing. The program automatically detects the camera settings and calibrates the model based on the metadata contained within the digital image files. Key point identification, point stereo matching (the process of matching the corresponding pixels within the set), sparse point cloud generation, dense point cloud generation, and the rendering of the mesh and textured mesh are managed automatically by the program. The following settings were used for all photogrammetric models generated in our study:
Category: General
Preset: Deep/High detail
After processing, the textured mesh (hereafter referred to as ‘the model’ or ‘the photogrammetric model’) was manually cleaned to remove extraneous details of the surrounding area. The model was scaled using the photographic scales on the upright posts, and the up vector was manually defined using the axis tool (Figure 2A).
Using this method, 10 models (one for each suspended branch) were produced, and for each of these models, 5 measurements were taken using the ‘Quick Measurements’ tool within the software (Figure 2B). The total length of the modelled branch, 2 internal lengths, and 2 end diameters were recorded for each model. The corresponding measurements were taken of the original branches using a tape measure. The measurements from the models and branches were used to establish the extent of correlation between them.
Field Trial
Our second trial involved collecting photographic data from 4 mature open-grown oaks during their dormancy period when the trees were free of leaves. Trees grow with an infinite variation of forms and in various locations, sometimes on their own and sometimes near to other trees or structures. This can make successful collection of data by UAV challenging. The most commonly used flight strategy in similar studies of other structures (an automated orbital flight pattern) is not always practicable for trees due to the presence of surrounding structures which can obstruct the planned flight path. This study, developed from a study by Mokroš et al. (2018), trialled an automated orbital flight path as well as 2 hand-flown methods, with the intention of gaining quantitative data on the photogrammetric models obtained from each of these 3 flight strategies.
All flights were undertaken using the DJI Mini 2® and under closely similar climatic conditions: less than 10-mph (4.5 m/s) winds and bright but overcast skies. All flights were undertaken during winter and early spring while the trees were dormant. Before initiating each flight, the camera was manually set to ISO 200 and, using information from the histogram in the flight app, the shutter speed was changed to ensure that pictures were taken at the correct exposure. The output was periodically checked throughout each flight and the shutter speed altered where necessary.
Four veteran pedunculate oaks (Quercus robur L.) were selected for this field trial (Figure 3). These trees would be typical of mature parkland trees that may require aerial inspections due to their potential to develop structural weaknesses such as decay cavities or junctions with included bark. For each tree, a single feature of interest (FOI) was selected as the focus of the associated UAV flights. Measurements of each FOI, such as its elevation and length, were taken from the ground using a handheld laser measure (Leica DISTO D2, Leica Geosystems, Heerbrugg, Switzerland) or a measuring tape if the FOI was within reach of the tape. The UAV was flown 3 times for each tree, using 3 different flight strategies to capture images of the FOI. The 4 FOIs were:
T1 oak (11.5 m tall)—the tree’s main stem
T2 oak (17 m tall)—part of the tree’s crown
T3 oak (14 m tall)—co-dominant upright scaffold branch
T4 oak (20 m tall)—isolated lateral branch
The following 3 flight strategies were used during the field study.
Orbital Flight Strategy
The automated orbital flight strategy, based on the methods of Gatziolis et al. (2015) and Scher et al. (2019), involved 2 concentrically stacked orbits with heights staggered by between 2 and 3 m. The flight was limited to 2 concentric orbits to keep the image number below 500, due to the limitation of the associated software (Figure 4A). The orbits were as close to the outer edge of the trees’ canopies as could be achieved, at approximately 1 to 3 m from the canopy edge. The camera gimbal was angled to the approximate height of the FOI for each flight. For vertically aligned FOIs (e.g., a tree’s main stem) the gimbal was set to the height of the middle point of the FOI. The speed of flight was between 0.9 and 1.5 mph (0.4 and 0.7 m/s), depending on the radius of the orbit. The shutter mode was on ‘timed shot’ and set to take an image every 2 seconds.
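As an illustration of how such a stacked orbital mission can be laid out, the following is a minimal Python sketch that generates evenly spaced waypoints for 2 concentric orbits in a local east/north/up frame, with the camera always facing the orbit centre. The flights in this study were planned in the Litchi app rather than in code, and the radius, heights, and waypoint count below are illustrative placeholders.

```python
# Illustrative sketch of generating waypoints for 2 stacked orbits around a
# tree in a local east/north/up frame. The field flights were planned in the
# Litchi app; radius, heights, and waypoint count here are placeholders.
import math

def orbit_waypoints(radius_m, altitude_m, n_points, centre=(0.0, 0.0)):
    """Return (x, y, z, bearing_deg) waypoints on one circular orbit, with the
    bearing pointing towards the orbit centre (local-frame angle, not a
    compass heading)."""
    waypoints = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        x = centre[0] + radius_m * math.cos(theta)
        y = centre[1] + radius_m * math.sin(theta)
        bearing = math.degrees(math.atan2(centre[1] - y, centre[0] - x)) % 360
        waypoints.append((x, y, altitude_m, bearing))
    return waypoints

# Two concentric orbits staggered in height, roughly 1 to 3 m outside the
# canopy edge; a 2-second shutter interval at ~0.5 m/s keeps images below 500.
orbits = orbit_waypoints(10.0, 12.0, 24) + orbit_waypoints(10.0, 15.0, 24)
for x, y, z, brg in orbits[:3]:
    print(f"x={x:6.2f} m  y={y:6.2f} m  alt={z:4.1f} m  bearing={brg:5.1f} deg")
```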
Cluster Flight Strategy
The cluster flight strategy was considered useful where access to the FOI might be restricted by a dense canopy, foliage, or another obstruction which would prevent effective use of either the orbital or parallel flight strategy. The cluster flight strategy involved finding vantage points from where multiple images of the FOI could be captured at different angles (Figure 4B). In this test, either 8 or 9 vantage points were used around each FOI. The UAV was flown to each position, and, using only the yaw control (left and right turn) and the gimbal control (up and down) of the camera, multiple images were taken from each vantage point.
Parallel Flight Strategy
The parallel flight strategy involved manually flying the drone close to the tree along a flight path which repeatedly ran parallel to the FOI of each tree (Figure 4C). During the initial stages of this study, this flight strategy had produced good, detailed models in short trials of the equipment and software. The UAV flight mode was set to cine/tripod mode, which reduces the aircraft’s speed and slows both yaw and gimbal movement to reduce blurring in the images. Each parallel flight was undertaken in one continuous flight with the camera pointing towards the FOI. The distance between the drone and the FOI varied between 1 and 4 m depending on accessibility through the tree’s canopy by the UAV. The gimbal was operated manually to allow for changes of angle throughout the flight. The shutter mode was ‘timed shot’ and set to take an image every 2 seconds.
The automated orbital flights were planned and executed through the Litchi flight app (VC Technology Ltd., Islamabad, Pakistan), which is a third-party app which allows for automated flight planning. The manually flown missions were undertaken using the DJI Fly app (DJI, Shenzhen, China). The resulting UAV-derived images were then processed to create a photogrammetric model for each FOI and for each flight strategy.
A square, rigid target with known dimensions was attached to the stem of each tree before each flight at an approximate height of 1.5 m above ground to allow for the model to be scaled in postproduction.
The processing of the images followed the same automated steps within the program as in the first trial. The final model from each set of captured images was scaled within the program using the images captured of the square target attached to each tree’s stem. The up vector was manually defined using the axis tool.
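The scaling itself was carried out within 3DF Zephyr; as an illustration of the principle, the following is a minimal numpy sketch in which the ratio of the target’s known edge length to its length measured in the unscaled model rescales the whole point cloud into real-world units. The file name and the two lengths are placeholders.

```python
# Minimal sketch of the scaling principle used in postproduction: the ratio of
# the target's known edge length to its length measured in the unscaled model
# rescales the whole point cloud into metres. Values are placeholders.
import numpy as np

points = np.load("unscaled_points.npy")   # hypothetical N x 3 point cloud
target_edge_model = 0.137                 # edge length measured in model units
target_edge_real_m = 0.250                # known edge length of the square target

scale = target_edge_real_m / target_edge_model
points_scaled = points * scale            # uniform scaling about the origin
print(f"Scale factor applied: {scale:.4f}")
```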
Point Cloud Density Assessment
Point cloud density provides a good indication of the detail which can be achieved in a final model (Moyano et al. 2020) and quantifies the level of detail in a way which is difficult when assessing the textured mesh (3D model). The dense point cloud of the FOI from each model of each tree was exported as a PLY file using a trial version of the full 3DF Zephyr program. The 3 files of the FOI pertaining to each tree were opened together in CloudCompare, which includes analytical tools for the assessment of point clouds.
Final adjustments were made within this program to bring the scale and orientation of each of the point clouds into line with each other. Each set of point clouds was then interrogated using the ‘compute geometric features’ tool. Using this tool, the density of each point cloud was assessed by measuring the number of neighbours (n) within a radius (r) of 0.025 (equivalent to 25 mm in the models created) (Hobart et al. 2020). This process turns the dense point cloud into a 3D scalar field which visually represents the variation in density of each point cloud (Figure 5).
The point density (n), mean density, and standard deviation for each point cloud were computed at r = 0.025 within CloudCompare. The mean values were used to evaluate the point density and, therefore, the detail achieved in each model by each flight strategy. A histogram of the number of neighbours was produced for each model, giving a mean value and standard deviation. The coefficient of variation for each point cloud was derived as a percentage from the mean density (μ) and the standard deviation (σ) using the calculation CV = (σ / μ) × 100, allowing the percentage variation in point density to be compared between the models.
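These density and variation metrics were computed in CloudCompare; the following is a hedged Python sketch of an equivalent calculation, counting neighbours within r = 0.025 for every point of an exported FOI point cloud and summarising the result. The PLY file name is a placeholder, and the open3d and scipy libraries are assumed to be available.

```python
# Hedged sketch of the density metric computed in CloudCompare: for each point,
# count neighbours within r = 0.025 (25 mm at real scale), then summarise with
# the mean, standard deviation, and coefficient of variation. The PLY file name
# is a placeholder.
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

cloud = o3d.io.read_point_cloud("T1_parallel_foi.ply")   # hypothetical export
pts = np.asarray(cloud.points)

tree = cKDTree(pts)
r = 0.025  # neighbourhood radius in metres (25 mm)
# Number of neighbours within r for every point (excluding the point itself).
n_neighbours = np.array(tree.query_ball_point(pts, r, return_length=True)) - 1

mean_n = n_neighbours.mean()
sd_n = n_neighbours.std(ddof=1)
cv_percent = sd_n / mean_n * 100   # coefficient of variation, CV = sigma/mu x 100
print(f"mean density = {mean_n:.1f} n, SD = {sd_n:.1f}, CV = {cv_percent:.1f}%")
```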
The point cloud assessment within CloudCompare only assesses points which meet the criterion of having neighbours (n). The completeness of a model (for the purposes of this study, the formation of a visibly complete circumference in the transverse plane) cannot be assessed by a measure of point density. To assess model completeness, 3 corresponding cross sections were taken from the centre and each of the extremities of all 12 of the generated models of the FOIs, producing 36 cross sections for assessment (Mokroš et al. 2018); a sketch of this slicing step is given after the scoring scale below. To aid visual assessment, all the points were coloured white. To avoid bias in this assessment, a panel of 3 objective assessors was then asked to inspect, discuss, and come to a consensus on a score of completeness for each of the 36 cross sections. The mean score for each flight strategy was then used to compare the completeness of the models that had been generated (Pfeifer et al. 2021). The score for completeness for each cross section was based on the following scale:
Score 0 – No discernible point cloud—a blank image or only a few random points presented.
Score 1 – Incomplete point cloud—there were sections of the point cloud where the circumference of the branch/stem modelled could not be easily discerned.
Score 2 – Mostly complete point cloud—largely complete point cloud presented with enough points to allow measurement of the branch/stem model reliably.
Score 3 – Complete point cloud—point cloud considered to provide an entire circumference of the branch/stem modelled.
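As referenced above, the following is a minimal numpy sketch of extracting one transverse cross section from a scaled, axis-aligned point cloud as a thin horizontal slab. In this study the cross sections were cut manually in CloudCompare, so the file name, slice height, and slab thickness here are illustrative placeholders.

```python
# Illustrative sketch of extracting a transverse cross section from a scaled,
# axis-aligned point cloud for completeness scoring: keep points within a thin
# horizontal slab at a chosen height. Values are placeholders.
import numpy as np

pts = np.load("foi_points_scaled.npy")    # hypothetical N x 3 array, z = up
slice_height = 4.0                        # metres above the model origin
half_thickness = 0.01                     # gives a 20-mm-thick slab

mask = np.abs(pts[:, 2] - slice_height) <= half_thickness
cross_section = pts[mask, :2]             # project the slab to the transverse plane
print(f"{len(cross_section)} points in the cross section at {slice_height} m")
```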
Statistical Analysis
To compare results from direct measurements of the parameters of branches and FOIs with the results of the photogrammetric models, Pearson’s correlations were used. As data for point density and model completeness were not normally distributed, Kruskal-Wallis tests with post hoc Dunn’s tests were utilised to identify differences in the results of the 3 flight strategies.
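These tests are available in standard scientific Python libraries; the following hedged sketch shows how equivalent comparisons could be run with scipy (Pearson’s correlation, Kruskal-Wallis) and scikit-posthocs (Dunn’s test). The arrays are illustrative placeholder values, not the study data, and the scikit-posthocs package is assumed to be installed.

```python
# Hedged sketch of the statistical comparisons: Pearson's correlation for the
# paired measurements, and a Kruskal-Wallis test with a Dunn's post hoc test
# for the per-strategy point densities. All values below are placeholders.
import numpy as np
from scipy.stats import pearsonr, kruskal
import scikit_posthocs as sp

# Paired measurements (tape/laser vs. photogrammetric model), placeholder values.
field = np.array([1.66, 0.82, 0.45, 1.10, 0.29])
model = np.array([1.65, 0.83, 0.45, 1.09, 0.30])
r, p = pearsonr(field, model)
print(f"Pearson R = {r:.4f}, P = {p:.4g}")

# Mean point density per model, grouped by flight strategy (placeholder values).
orbital = [1.0, 1.3, 1.6, 1.9]
cluster = [4.6, 4.8, 5.1, 28.0]
parallel = [32.7, 34.1, 35.7, 15.4]
H, p = kruskal(orbital, cluster, parallel)
print(f"Kruskal-Wallis H = {H:.3f}, P = {p:.5f}")

# Dunn's post hoc pairwise comparisons (Bonferroni-adjusted p-values).
print(sp.posthoc_dunn([orbital, cluster, parallel], p_adjust="bonferroni"))
```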
RESULTS
Accuracy Trial
The 10 models of the branches were processed using 3DF Zephyr Lite Steam Edition, with between 102 and 205 images used to create each model. The software was able to incorporate all the images taken by the UAV into all 10 of the 3D photogrammetric models of the branches. Fifty pairs of corresponding measurements were taken from the 10 branch samples and the associated models. The chosen features on the branches ranged from 40 mm to 1.66 m in length. The measurements taken included total branch length, internal distances between identifiable surface details, and the branch end diameters. There was a very strong correlation between the standard measurements recorded from the physical branches and those taken from the photogrammetric models (R = 0.9999; P < 0.001). The mean difference between these 50 measurement pairs was 0.33% ± 0.03 standard deviation, ranging from a 4% overestimation to a 17.2% underestimation by the digital models.
Field Trial
In total, 12 flights were successfully conducted using the 3 differing flight strategies for image collection around FOIs exhibited by the 4 selected oak trees. The sets of images were processed using 3DF Zephyr Lite Steam Edition. The program provided information on the number of images captured and the subsequent number of images which were used in creating the photogrammetric model. The numbers of images which were successfully matched by the program for each model’s creation are provided in Table 1.
All the models created using the automated orbital flight strategy used over 99% of the images captured of the FOIs. Observation of the models showed good formation of the entire trees out to the fine branch structure, but with little to no fine surface detail. Three of the models created from the cluster flight strategy utilised only a small proportion of the captured images during processing. Consequently, 3 of the photogrammetric models using this flight strategy had significant sections of the FOI missing. Within the software it could be observed that the images used in these models came from between one and 3 closely located clusters out of the 8 to 9 clusters from the flights. Only the model for the FOI on T3 was formed from 100% of the images captured using the cluster flight strategy (Table 1). Two of the 4 models created using the parallel flight strategy (T1 parallel and T2 parallel) lost nearly half of the images captured during processing, and their models were formed from 54% and 57% of the captured images, respectively. T3 parallel and T4 parallel models utilised greater than 99% of the images from the sets (Table 1).
Correlation to FOI Measurements
Each of the FOIs in the 4 trees had 3 measurements taken from the ground. Ten of these measurements were taken using a handheld laser measure (Leica DISTO D2), and the other 2 were taken with a measuring tape, as the FOI of that tree was within reach. Corresponding measurements were taken from the photogrammetric models, and the results were grouped by flight strategy. All correlations for these paired measurements were strong for the orbital, cluster, and parallel flight strategies (R = 0.9982, 0.992, and 0.9981, respectively); however, 4 measurements could not be taken from the photogrammetric models generated by the cluster flight strategy due to missing sections.
Point Density of Models
The 3D photogrammetric models were represented as scalar fields of point density, along with corresponding histograms of the density values, to facilitate analysis. The orbital flight strategy produced point cloud densities of between 1.0 n and 1.9 n (neighbours within r = 0.025). The cluster flight strategy generated point cloud densities of between 4.6 n and 5.1 n for 3 of the models, but the T4 cluster model produced a point cloud with a density of 28 n. Of the point clouds from the parallel flight strategy, 3 had densities between 32.7 n and 35.7 n; however, the T3 parallel model had a lower mean density of 15.4 n. The coefficient of variation for the models ranged between 38.9% and 81.3%, with 2 outliers: T1 orbital, which had the lowest variability (11.4%), and T4 cluster, which had a variation of 125.2%. The mean density and standard deviation for each model, along with the coefficient of variation, are provided by flight strategy type in Table 2.
A Kruskal-Wallis test confirmed that the point cloud densities were significantly different for the 3 flight strategies (H2,12 = 9.269; P = 0.00971), and a post hoc Dunn’s test at 95% confidence identified that the median point cloud value for the orbital flight strategy was significantly lower than that of the parallel flight strategy.
Model Completeness
The scores determined by the panel of 3 reviewers for each cross section resulted in mean completeness scores of 2.75 for the orbital flight strategy, 1.08 for the cluster flight strategy, and 2.83 for the parallel flight strategy. A Kruskal-Wallis test confirmed that these scorings were significantly different (H2,36 = 11.208; P = 0.00368), and a post hoc Dunn’s test at 95% confidence found that the median panel scores for the cross sections produced by the cluster flight strategy were significantly lower than the median values for the cross sections generated by the other 2 flight strategies.
Model Appearance and Suitability
Figure 6 illustrates all models created in the field trial, and Figure 7 provides close-up views of the FOIs generated within each model. This visual comparison shows that the parallel flight strategy provided the most realistic and finely detailed models for use by an arboriculturist.
For the parallel flight strategy, inspection of the spread of cameras for the T1 model revealed that the images that were utilised were well spread around the FOI, allowing the associated software to render a visibly complete model (Figure 8A). In comparison, the camera map for the T3 model using the parallel flight strategy shows that images from one side of the FOI were entirely missing from the model (Figure 8B).
DISCUSSION
Of the 10 models created of suspended branches in the accuracy trial, all were formed from 100% of the images captured by the UAV. The results show that the measurements of surface detail from photogrammetric 3D models had a 99% agreement with traditional methods of measurement (i.e., tape measure and ruler). This demonstrates that the measurements obtained from photogrammetric models created with this equipment can be used with some confidence for the purposes of tree inspection and evaluation of tree features, fulfilling our first research objective.
In the field trial, the orbital flight strategy consistently resulted in a higher proportion of captured images being used in the arising model than in the other methods. As with similar studies (Gatziolis et al. 2015; Scher et al. 2019), the orbital flight strategy produced full 3D models of the subject trees with easily distinguishable branch structures from the stem and crown out to the finer branches. Such models would be of considerable value in site planning and design scenarios, as well as for long-term tree monitoring.
Only small numbers of images from the cluster flight image sets were utilised by the software for 3 of the 4 models (T1, T2, and T4). In those models, images from only one to 3 of the closely located clusters were utilised for rendering. As a result, these 3 photogrammetric models were visibly incomplete. The outlier from using the cluster flight strategy was the model of T3, which was formed from 100% of the images in the set. This outlier provides a useful insight as to why the 3 other models were incomplete. The FOI for the T3 model was an exposed branch which was free from other obstructions. It is likely that the 3 incomplete models arose due to an inability of the software to effectively match key points from the various groups of images. As the cluster flight strategy does not produce a set of sequential images, the image sets from these 3 flights did not contain sufficient overlapping points for the software to match up (Gatziolis et al. 2015). A commonality of the 3 incomplete models was that significant lengths of branch or stem lay between the various clusters, obscuring the views beyond. This lack of overlap between captured images was the reason so many images were rejected by the software when generating these models.
The parallel flight strategy had mixed results in terms of utilisation of the input images. The T4 model used nearly all of the input images and the T3 model used 100% of the images from the set, whereas the models for T1 and T2 lost nearly half of their images in the initial matching stage. However, this flight strategy still produced a well-formed model for T1 despite losing a substantial number of images in the matching stage. Image losses occurred for the same reason as for the lower-quality cluster flights: too few images were matched by the software because of discontinuity within the image sequences. For T2, this resulted in a 3D model with one side of the FOI having visibly lower detail than the other surface areas.
It is worth noting that all the photogrammetric models from the parallel flight strategy exhibited a high level of surface detail, including bark texture, the grain patterns of any exposed wood, and surface details of lichen. This level of detail would provide valuable diagnostic information for a tree surveyor and contributes to an assessment of a given feature in terms of its physiological and structural condition.
With regards to model quality, the models from the orbital strategy were consistently the lowest in point density. As the flights follow a path outside the canopy of each tree, the images are taken from further away from the subject than in the other strategies. This resulted in fewer pixels from each frame being matched and processed into the point cloud. Even the higher number of images produced using this strategy did not compensate for the greater distance between the subject and the camera. The effect of this low point density can be observed in the models obtained. Although each model from the orbital flight strategy formed a surface that would be adequate for measuring the dimensions of the FOI, there is a lack of fine detail (Figures 6 and 7). It would not be possible from these models to make detailed observations of the condition of the surface of the tree’s stem, crown, or branches.
The parallel flight strategy produced consistently higher density point clouds. The detail rendered from these point clouds into the final model was also far greater than the other methods (Figure 6), allowing for a meaningful observation of the FOI and details of its surface. This result answers our second research objective—that a parallel flight strategy, using this specific combination of technology, provided the best output for an arboriculturist.
Study Limitations
Point density varied widely across the FOI models, albeit at different scales of density. The association between this variability and the locations of the FOIs was non-significant, whereas there was a significant association with the flight strategies used. For example, the models of the FOI in T4 exhibited the highest variability, whereas the models of the FOI in T1 had some of the lowest variation in point cloud density. The variation in density may have resulted from a combination of factors associated with the technology used, site features, and human-led variation in the manually controlled flights. However, it is not possible to draw a conclusion beyond these suggestions without analysis of a greater number of models under far more tightly controlled conditions.
Despite low replication in the field trial, key statistical differences and obvious visual differences in the photogrammetric models were evident. However, extended trials with a higher number of replicates are likely to provide further insights into techniques and flight paths that reliably capture FOIs in sufficient detail for tree assessment purposes. For example, from the insight that continuity of overlapping images is needed for good rendering of a visual model with the equipment used in this study, the cluster flight strategy could be revised to achieve more complete models. We also recommend that future studies undertake the surveying of trees in urbanised areas, where additional obstructions such as buildings and aboveground utilities may require changes to flight strategies.
CONCLUSION
This study demonstrates that it is possible to produce digital 3D models of aerial FOIs of individual trees which can be inspected and measured using low-cost equipment and software. This method of data collection and processing has practical application in arboriculture, effectively extending a ground-based tree assessment to parts of the tree otherwise only accessible to view using rope and harness or mobile elevated work platforms. In addition, this technology automatically produces a record of inspection that can be repeatedly interrogated and compared with further models of the same FOIs over time.
Models of an FOI can be inspected three-dimensionally by practitioners within the software, allowing for a greater visual interrogation of the context and significance of a given feature than would be possible from still images or videos (Koh and Wich 2012). In addition, the ability to take measurements from the models means that collecting accurate dimensions is simple, repeatable, and can be undertaken by the inspector rather than by a climbing arborist under instruction from the ground (Cannon et al. 2018; Krause et al. 2019). The external measurements taken from the models can be used in biomechanical assessments of trees; for example, the slenderness ratios of branches can be calculated when considering the likelihood of their failure. The orientation and form of branches and branch junctions can also be recorded and used to assess the effect of loading from different directions. The use of this method can be expected to improve the analysis of the residual strength of tree sections when used in conjunction with other tree assessment methods, such as microdrilling or tomography, by providing accurate information on the external dimensions and form of the feature.
Where tree assessment leads to recommendations for remedial tree surgery, digital pruning of these photogrammetric models is possible. More accurate specifications and more predictable outcomes for high-value trees could be achieved by assessing the effect of proposed pruning on a tree’s form, and the extent of proposed pruning can be gauged easily using the model before any work is carried out on the tree. For example, proposed crown reduction work could be communicated very accurately in a visual form to arborists, municipal tree officers, and clients by sharing such digitally pruned models, enabling a clear conception for all involved about the proposal and potentially increasing the precision of the actual pruning carried out.
We conclude that, where the FOIs can be viewed from multiple points by the camera of a UAV, and subject to local UAV flight restrictions, the use of UAV-obtained photogrammetry can provide an immediate and cost-effective method of visually inspecting the aerial FOIs of trees and obtaining measurements that have comparable levels of accuracy to conventional measuring equipment.
ACKNOWLEDGMENTS
We thank Kerry Carpenter of Dorset Council for help with access to suitable trees for assessment, Alice Favre of Chettle Estate for access to a trial drone flying area, and Alison Turnock and Graham Cox for their support of the time needed to undertake this research. We also thank the 3 panellists in this study: Andrew Osborne, Richard Salt, and Harry Doe of Poole Council. Rachel Warner assisted with proofreading an early draft of this paper.
Footnotes
Conflicts of Interest:
The authors reported no conflicts of interest.
Copyright © 2023, International Society of Arboriculture. All rights reserved.