
Super-resolution reconstruction of a digital elevation model based on a deep residual network

  • Donglai Jiao, Dajiang Wang, Haiyang Lv and Yang Peng
From the journal Open Geosciences

Abstract

The digital elevation model (DEM) is an important source of basic data for geoscience applications. Because enhancing hardware performance is costly and entails a long development cycle, designing models and algorithms to improve the resolution of DEMs is of considerable significance. At present, there is little research on DEM super-resolution based on deep learning, and the reconstructed DEMs obtained by existing methods are inaccurate. Therefore, deepening the network layers is utilized to improve the accuracy of a reconstructed DEM. This paper designs a neural network model with 30 convolutional layers to learn the feature mapping relationship between a low- and high-resolution DEM. To avoid the problem of network degradation caused by increasing the number of convolutional layers, residual learning is introduced to accelerate the convergence speed of the model, thereby better realizing the DEM super-resolution process. The results show that DEM super-resolution based on a deep residual network is better than that obtained using a neural network with fewer convolutional layers, and the reconstructed DEM based on a deep residual network is remarkably improved in terms of the peak signal to noise ratio and visual effect.

1 Introduction

A digital elevation model (DEM) is a digital simulation of surface topography based on limited terrain elevation data and a digital expression of the land surface [1]. The DEM uses a simple data organization method, which can express the terrain directly and allows terrain factors to be interpreted efficiently. At present, there are two ways to obtain DEM data. The first is to collect DEM data directly with various measurement instruments, for example, by surveying and mapping the terrain with satellites, aircraft and total stations or by using LiDAR or InSAR interferometry [2,3,4,5]. The second is to design models or algorithms that can be applied to existing low-resolution DEMs to obtain high-resolution DEMs; methods based on digital topographic maps and professional software are also very common [6]. Obtaining a high-resolution DEM through the first method depends on more precise instruments, but developing high-precision instruments often requires considerable time and resources. This requirement makes the second method cost-effective and promising. Therefore, it is of great significance to design a method to improve the accuracy of DEMs efficiently and conveniently so that DEMs can be more widely applied.

The interpolation algorithm is an important kind of scaling conversion method. At present, the interpolation algorithms frequently used for DEMs primarily include linear interpolation, polynomial interpolation, spline interpolation and kriging interpolation [7,8,9,10]. Zhang et al. compared different interpolation methods based on ASTER-DEM data and found that they produce different results under different conditions; therefore, the selection of an interpolation method should depend on the actual situation [11]. The Hutchinson algorithm [12] can use contour lines and other information in digital topographic maps to interpolate a DEM and has been widely used in hydrological analysis and simulation. Inverse distance weighting is a suitable interpolation method for DEM generation from LiDAR data because LiDAR data have a high sampling density [13].

Although interpolation algorithms are simple and fast to implement, they often lead to fuzzy or jagged edges in the reconstructed DEM. Zhang found that multilevel wavelet analysis can enhance the high-frequency characteristics of images and thereby improve the resolution of a DEM [14], and that nonlocal similarity and neighborhood reconstruction can be combined for the same purpose [15]. Multisource data fusion can combine spatial data from different sources via different methods to generate thematic attribute data, which can further improve the geometric accuracy and quality of a DEM [16]. Karkee et al. used a new void-filling approach to effectively fuse optical and InSAR images to obtain a high-precision DEM [17]. Jhee et al. combined a learning-based super-resolution algorithm with a multiscale Kalman smoother, which has a tree-structured graph, to realize multiresolution and multiscale DEM modeling [18]. Tang et al. proposed an enhanced data fusion method based on a modified FILTERSIM that integrates high-resolution but sparsely sampled DEM data to generate high-resolution DEM data with higher accuracy [19]. Although multisource data fusion can fully extract the useful information in all kinds of data and realize the complementary advantages of multisource data, the accuracy of the reconstructed data cannot be effectively improved when the data sources are insufficient.

Image super-resolution reconstruction refers to the technology of transforming existing low-resolution (LR) images into high-resolution (HR) images by designing relevant models or algorithms based on image and signal processing [20]. Image super-resolution can further enhance the spatial resolution and visual quality of images without changing the imaging system, which is conducive to feature extraction and information recognition. In 1984, Tsai et al. first proposed the concept of image super-resolution [21]. After decades of development, image super-resolution has gradually formed a research system composed of interpolation-based, reconstruction-based and learning-based methods. For example, Nguyen introduced wavelet analysis into interpolation-based super-resolution and verified the effectiveness of the algorithm using one- and two-dimensional data [22]. Elad et al. proposed a hybrid method combining maximum likelihood (ML) and projection onto convex sets (POCS), which has better image restoration performance [23]. Zeyde et al. simplified and improved the K-SVD dictionary learning algorithm to increase its speed and accuracy with fewer data sets [24]. In recent years, with the rapid development of deep learning, a variety of neural network models have emerged. These models have strong feature learning abilities, which enable them to extract more useful information from data. Deep learning, ranging from the early supervised image super-resolution method based on the convolutional neural network (CNN) [25] to recent unsupervised methods based on generative adversarial networks [26], has been widely used in the field of image super-resolution. Compared with the other two classes of methods, learning-based image super-resolution achieves higher accuracy, and the texture details of the reconstructed image are richer and closer to the real image.

Therefore, the image super-resolution method is introduced to DEM super-resolution in this paper. A deep convolutional network was constructed to learn the feature mapping relationship between low-resolution and high-resolution DEMs, and the residual network was introduced to solve the degradation phenomenon caused by the deep network. This model could obtain more useful information from the DEM; thus, the detailed texture features of the reconstructed DEM can be better recovered, and the accuracy of the DEM can be effectively improved.

2 Related works

In this section, we review work on super-resolution reconstruction, which can be divided into image super-resolution based on the CNN and DEM super-resolution based on the CNN.

2.1 Image super-resolution based on the CNN

With the breakthrough of deep learning in the field of computer vision, researchers have attempted to construct neural network models trained end-to-end to effectively solve the problem of image super-resolution reconstruction. The CNN [27] is a representative network in the field of deep learning and has a wide range of applications, especially in image processing and analysis. The basic structure of a CNN is composed of input layers, convolutional layers, pooling layers, fully connected layers and output layers (Figure 1). In general, the convolutional and pooling layers alternate, and the network ends with one or more fully connected layers [28].

Figure 1 Basic structure of a CNN.

Compared with traditional image processing algorithms, the CNN avoids manual involvement in complex image preprocessing and can directly learn the feature relationships between images. Based on the CNN, Dong et al. proposed the super-resolution convolutional neural network (SRCNN) [29], a classic neural network for image super-resolution. This method is known as the pioneering work of deep learning applied to image super-resolution. The SRCNN directly learns the feature mapping relationship between low- and high-resolution images by building a three-layer convolutional network; based on this mapping, high-resolution images can be reconstructed from low-resolution images. First, the low-resolution images are upscaled to the target size via bicubic interpolation. Then, the interpolated images are input into the network, where they successively pass through the feature detection, nonlinear mapping and reconstruction layers; finally, the high-resolution image is output by the SRCNN (Figure 2).

Figure 2 Network structure of the SRCNN.

In the training process of a neural network model, increasing the number of network layers is often adopted to extract deeper features from the data. However, when the depth of the network is increased, exploding or vanishing gradients often occur, which hinders model training. Although this problem has been effectively controlled by standard initialization and intermediate normalization layers, network degradation remains the main factor affecting model performance. He et al. proposed the residual network [30], which combines residual learning with the CNN. Experimental results showed that the residual network can effectively solve the degradation problem of deep neural networks and improve the convergence speed of the model.

In this paper, the features of DEM data are extracted through the combination of a deep convolutional and residual network to effectively realize the transformation from low-resolution DEM data to high-resolution DEM data.

2.2 DEM super-resolution based on the CNN

DEM super-resolution produces a more detailed and precise spatial expression, which is analogous to generating a high-resolution image from a low-resolution image via image super-resolution [31]. DEM data reflect the surface information of three-dimensional objects, and the heights of the objects can be regarded as the gray values of an image. Therefore, the knowledge of image super-resolution can be applied to DEM super-resolution. The concept of DEM super-resolution was first proposed in Xu's work, where a nonlocal algorithm was applied to improve the resolution of a DEM by exploiting the idea that a single DEM contains multiple duplicate or similar parts [32]. Yue et al. integrated resolution enhancement, noise suppression and data hole filling into a general framework through Markov random field regularization and then used the complementary information between different DEMs to generate a high-resolution DEM [33]. In recent years, some researchers have also explored how to apply deep learning methods to DEM super-resolution reconstruction. Chen et al. [34] tried to improve the resolution of a DEM based on a CNN, and the results showed that this method could achieve a better effect than bicubic interpolation [35]. Xu et al. [36] proposed an image super-resolution algorithm for DEMs that could obtain a high-precision DEM from a small number of DEM samples. Although the CNN has been widely used in the field of image super-resolution, research on DEM super-resolution reconstruction based on the CNN is relatively scarce. Moreover, the existing methods, which use only a few convolutional layers, have shortcomings in recovering the details of a reconstructed DEM, and the texture of the reconstructed images is relatively fuzzy. Therefore, there is great potential for improving accuracy.

3 Methods

In this paper, we build a model based on a deep residual network to reconstruct DEM data with super-resolution. The specific implementation process is shown in Figure 3. First, the data are normalized before being input. After the data are input, bicubic interpolation is adopted to generate the low- and high-resolution DEM pairs that participate in model training. The residual of the input data is obtained after the original data pass through the 30 convolutional layers. Finally, the original input data and the residual are added to obtain the final output of the network, which is the reconstructed high-resolution DEM.

Figure 3 Process of DEM super-resolution based on a deep residual network.

In the existing work on DEM super-resolution reconstruction based on deep learning, the network structure of the model is composed of only a few simple convolutional layers, which often leads to blurred textures and details in the reconstructed images. Considering that a deep network can extract deep feature information from data, this paper constructs a model with 30 convolutional layers in which each convolutional layer is followed by a rectified linear unit (ReLU) [37]. Furthermore, our model also uses a larger receptive field to obtain contextual information. After each convolutional operation, the output is zero-padded so that the image size does not change. In this way, the deep network model can further extract feature information from the DEM data, which helps restore the texture of the reconstructed image.
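A minimal sketch of such a 30-layer convolutional body might look as follows. The per-layer filter count is not reported in this paper, so the 64 filters and the tf.keras-style interface (the experiments actually used TensorFlow 1.12) are assumptions made purely for illustration:

    import tensorflow as tf

    def conv_body(inputs, num_layers=30, filters=64):
        """Stack of 3 x 3 convolutions with ReLU and zero ('same') padding,
        so each DEM patch keeps its spatial size. The filter count per layer
        (64) is a placeholder assumption, not a value from the paper."""
        x = inputs
        for _ in range(num_layers - 1):
            x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        # The final layer maps the features back to a single elevation band,
        # i.e. the predicted high-frequency residual, with no activation.
        return tf.keras.layers.Conv2D(1, 3, padding="same")(x)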

Figure 4 Structure of a residual block.

In the image super-resolution reconstruction process, the low-frequency information contained in the input low-resolution image and the output high-resolution image is similar; therefore, the model only needs to learn the high-frequency residual between the low-resolution and high-resolution images. As such, a residual learning strategy is adopted for DEM super-resolution. In this paper, the residual block structure is introduced to optimize the model. The basic structure of a block of the residual network is shown in Figure 4, where x is the input data of the residual block, f(x) is the residual learned in this block and the expected output is H(x) = f(x) + x. When redundant layers exist in the model, the residual block can construct an identity mapping H(x) = x, so that these layers are effectively skipped during training and the input and output remain equal when data pass through them. Therefore, adding a residual structure to our 30-layer convolutional network can effectively speed up the convergence of the model and avoid the influence of network degradation on the accuracy of the reconstructed DEM.
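Continuing the sketch above (under the same assumed tf.keras-style API), global residual learning can be expressed by adding the bicubic-interpolated input back onto the predicted residual, so the network only has to learn f(x) and the effective output is H(x) = x + f(x):

    def build_model(patch_size=41):
        """Wrap the convolutional body with a global skip connection:
        the network predicts only the residual f(x); the input x is
        added back, so the output is H(x) = x + f(x)."""
        inputs = tf.keras.Input(shape=(patch_size, patch_size, 1))
        residual = conv_body(inputs)                           # f(x), from the sketch above
        outputs = tf.keras.layers.Add()([inputs, residual])    # H(x) = x + f(x)
        return tf.keras.Model(inputs, outputs)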

4 Experiments

4.1 Experimental data

At present, the Shuttle Radar Topography Mission (SRTM) DEM and the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Digital Elevation Model (ASTER GDEM) are open DEM data sources characterized by simple acquisition, wide application and coverage of most parts of the world [38]. The SRTM data are a global DEM jointly surveyed by NASA and the National Imagery and Mapping Agency (NIMA) of the United States Department of Defense, and it has undergone several revisions. The ASTER GDEM is based on the detailed observations of NASA's new-generation Earth observation satellite Terra [39]. At the same resolution, the accuracy of the SRTM DEM is higher than that of the ASTER GDEM. In more mountainous areas, the effect of terrain relief on the accuracy of the SRTM is more significant [40].

Figure 5 Overview of the study area.

In this paper, SRTM DEM data at a 30 m resolution covering an area of eastern China are selected as the experimental data. The DEM data are located between 29°N and 30°N and between 117°E and 118°E. The study area encompassing the experimental DEM data is shown in Figure 5. There are many mountains and gullies in this region, where the land surface is undulating and rugged, and the textures of the mountains and gullies are notably clear; thus, the DEM data in this area are well suited to judging and evaluating the reconstruction effect. Therefore, this paper selects the SRTM data at a 30 m resolution in this region as the experimental data for the related experiments.

4.2 Data preprocessing

The data set used in this paper is composed of 200 DEM images with a size of 256 × 256 pixels. Data preprocessing normalizes the DEM data, that is, the gray values of the DEM are rescaled to the range 0 to 1 to eliminate the influence of local perception characteristics. Before the DEM data are input into the model, MATLAB is used to interpolate the data and cut the larger images into smaller ones. The training data are generated by cutting the grayscale DEM images into patches of 41 × 41 pixels. The total data include the original-resolution data and the 2-fold, 3-fold and 4-fold resolution data, which are generated by scaling. In total, the data set contains 57,599 groups.
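The preprocessing can be sketched roughly as follows. OpenCV's bicubic resampling stands in for the MATLAB interpolation used in the paper, and per-tile min-max normalization is likewise an assumption; the function name is ours:

    import numpy as np
    import cv2  # bicubic resampling; a stand-in for the MATLAB interpolation used in the paper

    def make_training_pairs(dem, scale, patch=41, stride=41):
        """Normalize one 256 x 256 DEM tile to [0, 1], simulate a low-resolution
        version by bicubic down- and up-sampling, and cut both into 41 x 41 patches."""
        dem = dem.astype(np.float32)
        dem = (dem - dem.min()) / (dem.max() - dem.min() + 1e-8)  # normalize to [0, 1]
        h, w = dem.shape
        small = cv2.resize(dem, (w // scale, h // scale), interpolation=cv2.INTER_CUBIC)
        lr = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)  # LR input at the HR grid size
        pairs = []
        for i in range(0, h - patch + 1, stride):
            for j in range(0, w - patch + 1, stride):
                pairs.append((lr[i:i + patch, j:j + patch], dem[i:i + patch, j:j + patch]))
        return pairs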

4.3 Parameter setting

Our model consists of a 30-layer convolutional network (each layer uses 3 × 3 kernels). The output of each convolutional layer is zero-padded so that the size of the DEM image does not change. This paper selects the Adam [41] optimization algorithm to reduce the model loss and the exponential decay method to set the learning rate of the model. In each iteration of model training, 64 samples are selected from the training set, and the number of training epochs is set to 120. The learning rate of the model is 0.0001. The experimental code is run in Anaconda with Python 3.6 and TensorFlow 1.12. Training takes approximately 12 h on a single NVIDIA GTX 1060 GPU.
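Under the same assumptions as the earlier sketches (a current tf.keras API rather than the TensorFlow 1.12 actually used), the training setup described above could be written as follows; the exponential-decay parameters and the loss function are not reported in the paper, so the values marked as placeholders are assumptions:

    lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-4,   # learning rate reported in the paper
        decay_steps=10000,            # placeholder: decay schedule details are not given
        decay_rate=0.96)              # placeholder
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
                  loss="mse")         # loss function is not stated in the paper; MSE is assumed
    # 57,599 patch pairs, mini-batches of 64, 120 training epochs:
    # model.fit(lr_patches, hr_patches, batch_size=64, epochs=120)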

4.4 Comparison method of experimental results

Image super-resolution methods based on interpolation mainly include the nearest neighbor, bilinear and bicubic methods. Their main idea is that neighborhood pixels are used to calculate the pixels to be interpolated. Bicubic interpolation uses the pixel values of the 4 × 4 neighborhood around the interpolation point, and the calculated pixel values are weighted and assigned to the interpolation point [42]. Because the edges of an image reconstructed via bicubic interpolation are smooth and the visual effect is good, this paper chooses bicubic interpolation as the experimental comparison method.

In addition, a three-layer convolutional network is designed for DEM super-resolution based on the SRCNN. The comparison data set is composed of 200 DEM gray-scale images (each sized 256 × 256). Before the data are input into the three-layer convolutional network, the DEM with a 30 m resolution is also interpolated via bicubic interpolation. The low-resolution DEM data are selected as the input data, the original high-resolution DEM data are selected as the label data, and both are input into the three-layer convolutional network. The sizes of the convolutional kernels used by the three-layer network are 9 × 9, 1 × 1 and 5 × 5. At the end of model training, the results of DEM reconstruction are compared with those of the deep residual network to explore the influence of the number of network layers on the reconstruction quality. The DEM super-resolution process based on the CNN is shown in Figure 6.
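A sketch of this baseline with the 9 × 9, 1 × 1 and 5 × 5 kernels named in the text is given below; the filter counts of 64 and 32 follow the original SRCNN and are assumptions here, as is the tf.keras-style interface:

    def build_srcnn(image_size=256):
        """Three-layer SRCNN-style comparison network."""
        inputs = tf.keras.Input(shape=(image_size, image_size, 1))
        x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)  # feature detection
        x = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(x)       # nonlinear mapping
        outputs = tf.keras.layers.Conv2D(1, 5, padding="same")(x)                     # reconstruction
        return tf.keras.Model(inputs, outputs)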

Figure 6 Flow chart of DEM super-resolution based on the CNN.

4.5 Evaluation method of experimental results

At present, visual inspection and quantitative evaluation are the two main methods used to evaluate the quality of reconstructed images. Visual inspection compares the images reconstructed by different methods and judges their quality by eye. The peak signal to noise ratio (PSNR) [43] measures the pixel difference between the reconstructed high-resolution images and the real high-resolution images. The PSNR (formula (1)) represents the ratio of the maximum signal power to the noise power and can be used to measure the quality of a processed image. It is the most widely used standard for evaluating image quality.

$$\mathrm{PSNR}=10\log_{10}\frac{255^{2}}{\mathrm{MSE}}\tag{1}$$

MSE refers to the mean squared error between the original image and the reconstructed image. Although the pixel value of a common image is generally from 0 to 255, the gray values of DEM data are often considerably greater than 255. Therefore, it is necessary to improve the calculation formula of the PSNR. According to the definition of the PSNR, the maximum value (255) defined in the original formula can be changed to the difference between the maximum and minimum gray values in DEM data. Therefore, the calculation formula of the PSNR used in DEM super-resolution is shown in formula (2), where ΔS is the difference between the maximum and minimum gray values of the DEM.

$$\mathrm{PSNR}=10\log_{10}\frac{\Delta S^{2}}{\mathrm{MSE}}\tag{2}$$
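As a small illustration of formula (2), the adapted PSNR can be computed directly from the two elevation grids (a NumPy sketch; the function name is ours):

    import numpy as np

    def dem_psnr(original, reconstructed):
        """PSNR adapted to DEMs: the fixed peak value 255 is replaced by the
        elevation range (max minus min) of the original DEM, as in formula (2)."""
        mse = np.mean((original.astype(np.float64) - reconstructed) ** 2)
        delta_s = float(original.max() - original.min())
        return 10.0 * np.log10(delta_s ** 2 / mse)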

This paper also introduces the mean absolute error (MAE) [44] and root mean square error (RMSE) [45] to evaluate the quality of a reconstructed DEM. The MAE (formula (3)) reflects the actual prediction error well.

$$\mathrm{MAE}=\frac{1}{N}\sum_{i=1}^{N}\left|Y_{i}-y_{i}\right|\tag{3}$$

The RMSE (formula (4)) is a numerical accuracy index that is widely used to evaluate the accuracy of DEMs. It does not reflect the error of a single point but rather describes, in an overall sense, the dispersion between the terrain parameters and the true values.

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(H_{i}-h_{i}\right)^{2}}\tag{4}$$

where H_i is the value of the original DEM, h_i is the value of the reconstructed DEM and N is the number of sampling points in the model.
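Both indices can be computed directly from the original and reconstructed elevations (a NumPy sketch with hypothetical helper names, continuing the snippet above):

    def dem_mae(original, reconstructed):
        """Mean absolute error between original and reconstructed elevations (formula (3))."""
        return np.mean(np.abs(original - reconstructed))

    def dem_rmse(original, reconstructed):
        """Root mean square error over all sampling points (formula (4))."""
        return np.sqrt(np.mean((original - reconstructed) ** 2))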

5 Analysis of experimental results

5.1 Analysis of reconstructed results

The results of DEM super-resolution at different scale factors (2, 3 and 4) are compared (Table 1), and a scatter plot (Figure 7) is provided for a more intuitive understanding. For the same scale factor, the experimental results show that the PSNRs of the DEM reconstructed by the CNN are higher than those of bicubic interpolation, and the PSNRs of the DEM reconstructed by the deep residual network are greatly improved compared with both bicubic interpolation and the CNN. Therefore, the deep network structure can further improve the PSNRs of the reconstructed DEM.

Table 1

PSNRs of the reconstructed DEM based on three methods

Scale factor    Bicubic     CNN         Deep residual network
2               41.3131     42.2297     54.2460
3               37.1701     38.4767     46.4518
4               34.2970     35.1899     42.1277
Figure 7 PSNRs of the reconstructed DEM compared with the original DEM.

The results of DEM super-resolution based on bicubic interpolation are shown in Figure 8, the results based on the CNN are shown in Figure 9 and the results based on the deep residual network are shown in Figure 10. It can be seen that the quality of the reconstructed DEM worsens as the scale factor increases, and that the deep residual network recovers the details of the DEM better than the shallow convolutional network.

Figure 8 Results of DEM super-resolution based on bicubic interpolation. (a) Original DEM, (b) scale factor: 2, (c) scale factor: 3, (d) scale factor: 4.

Figure 9 Results of DEM super-resolution based on the CNN. (a) Original DEM, (b) scale factor: 2, (c) scale factor: 3, (d) scale factor: 4.

Figure 10 Results of DEM super-resolution based on the deep residual network. (a) Original DEM, (b) scale factor: 2, (c) scale factor: 3, (d) scale factor: 4.

5.2 Analysis of reconstructed quality for DEM

The DEMs reconstructed by bicubic interpolation (b), the CNN (c) and the deep residual network (d) when the scale factor is 3 are shown in Figure 11 (the original DEM is (a)). A comparison of the three methods shows that the details of the DEMs reconstructed by the CNN and the deep residual network are more abundant, the texture of the mountains is clearer and the trend of the mountains can be seen. In addition, the reconstructed DEMs are compared intuitively through cross-section analysis; the results show that the cross-section curve obtained from the deep residual network is closer to that of the original DEM data (Figure 12).

Figure 11 Details of the original DEM (a) and the reconstructed DEM based on bicubic interpolation (b), the CNN (c) and the deep residual network (d).

Figure 12 Cross-section analysis of the DEM super-resolution when the scale factor is 3.

Taking a scale factor of 3 as an example, the quality of the reconstructed DEM is evaluated by the MAE and RMSE (Table 2), the basic parameters (minimum, maximum and standard deviation) of the DEMs are shown in Table 3, and the frequency distributions of the reconstructed DEMs are shown as histograms (Figure 13). The data in Table 2 show that the MAE and RMSE of the DEM reconstructed by the deep residual network are far smaller than those of bicubic interpolation and the CNN. The histograms show that the results of our method are closer to the original DEM.

Table 2

MAE and RMSE based on different methods

Compared with original DEM    MAE       RMSE
Bicubic                       11.514    14.156
CNN                            9.471    12.012
Deep residual network          3.691     4.829
Table 3

Elevation differences based on different methods

DEM type                       Min [m]    Max [m]    SD [m]
Original DEM                   290        1,298      204.168
Bicubic DEM                    298        1,281      200.812
CNN DEM                        293        1,293      204.478
Deep residual network DEM      289        1,300      204.853
Figure 13 Histogram distribution of the reconstructed DEM.

In addition, we also extracted the mountain valley (Figure 14) and mountain ridge (Figure 15) regions in the reconstructed DEMs for comparison. The comparison of the valleys and ridges extracted from the reconstructed DEMs shows that the deep residual network acquires clearer details and textural features than the other methods, and these details and textures are closer to the original data.

Figure 14 Comparison of the extracted mountain valley.

Figure 15 Comparison of the extracted mountain ridge.

5.3 Extending our approach to other study areas

In this paper, the texture of the DEM data in the mountainous study area is very clear, so this type of area is well suited to DEM research and analysis; however, our method has not yet been applied to DEM data from urban, coastal or other areas. We have noticed that more and more scholars are using deep learning technology in related fields. For example, some scholars have proposed that a neural network model can be used to eliminate the positive vertical error in SRTM data in coastal regions [46], and a multiscale mapping approach based on a CNN can be used to deal with the complex features of urban topography and reconstruct high-resolution urban DEMs [47]. Therefore, we believe that our method, adapted to the characteristics of urban and coastal areas, may achieve good results.

For different types of data, such as LiDAR and bathymetric data, neural network models have also achieved good results owing to their strong feature extraction ability. In urban scene classification based on LiDAR point cloud data, the combination of a CNN and an RNN can effectively realize efficient semantic analysis of large-scale 3D point clouds [48], and the combination of Mask R-CNN and LiDAR has great potential for mapping anthropogenic and natural landscape features [49]. In the field of bathymetric surveying, deep learning methods are becoming more and more active; for example, some researchers obtained high-resolution water depth data by applying a convolutional neural network to remote sensing images and water depth data in shallow water areas [50]. It can be seen that neural network models can play an important role in feature extraction for different data, but there is no universal neural network model that can handle feature extraction for all types of data.

Therefore, in future work, we will pay more attention to the applicability of our method in other areas and further explore the mapping and feature extraction capabilities of deep residual networks for various kinds of geospatial and remote sensing data.

6 Conclusion

In this paper, a method of DEM super-resolution based on a deep residual network is proposed. Our method can effectively extract the feature mapping relationship between low- and high-resolution DEMs using a deep convolutional network. In addition, a residual structure is introduced into our model to accelerate its convergence and thus quickly and effectively realize the DEM super-resolution process.

The results show that the deep residual network substantially improves DEM super-resolution. Compared with a convolutional network with fewer layers, our method based on the deep residual network significantly improves the DEM's reconstructed details and recovered textures. It can be seen that the deep network structure is of considerable value for improving the resolution of DEMs.

Acknowledgments

We would like to thank Wei He for providing important feedback, suggestions and code contributions to the project. This research was funded by the National Natural Science Foundation of China, grant numbers 41471329 and 41101358.

Author contributions: Donglai Jiao: writing – original draft, writing – review and editing. Dajiang Wang: software, validation, methodology. Haiyang Lv: investigation, data curation. Yang Peng: project administration.

References

[1] Tang G. Progress of DEM and digital terrain analysis in China. Acta Geographica Sin. 2020;69(9):1305–25.

[2] Zhang Z. Future development and prospect of digital photogrammetry. Geomat World. 2004;2(3):1–5.

[3] Yi H, Wang C, Hu B, Ding W. Generation of digital elevation models as applied to polarimetric SAR interferometry. Eng Surveying Mapp. 2012;2:13–7.

[4] Jin K, Gong Z, Wang B, Tang Z. Key step analysis of extraction DEM based on LiDAR data. Eng Surveying Mapp. 2010;6:39–42.

[5] Zhao Z. Methods on high-accuracy DEM extraction from interferometric SAR in sophisticated terrain areas. Acta Geodaetica et Cartographica Sin. 2016;11:1385.

[6] Zhang CX, Yang QK, Duan JJ. Method for establishing high resolution digital elevation model. J Hydraulic Eng. 2006;37(8):1009–14.

[7] Liu XH, Hu H, Hu P. Accuracy assessment of LiDAR-derived digital elevation models based on approximation theory. Photogrammetric Eng Remote Sens. 2009;75(1):7062–79. doi:10.3390/rs70607062.

[8] Goodale C, Aber J, Ollinger S. Mapping monthly precipitation, temperature, and solar radiation for Ireland with polynomial regression and a digital elevation model. Clim Res. 1998;10(1):35–49. doi:10.3354/cr010035.

[9] Unser M, Aldroubi A, Eden M. Fast B-spline transforms for continuous image representation and interpolation. IEEE Trans Pattern Anal Mach Intell. 1991;13(3):277–85. doi:10.1109/34.75515.

[10] Kratzer JF, Hayes DB, Thompson BE. Methods for interpolating stream width, depth, and current velocity. Ecol Model. 2006;196(1–2):256–64. doi:10.1016/j.ecolmodel.2006.02.004.

[11] Zhang W, Mao S. DEM resampling methods comparison: an experimental study. Proceedings of the 2011 International Conference on Computers, Communications, Control and Automation (CCCA 2011), vol. 3; 2011. p. 629–31.

[12] Hutchinson MF, Stein JA, Stein JL, Xu TB. Locally adaptive gridding of noisy high resolution topographic data. In: Anderssen RS, Braddock RD, Newham LTH, editors. Nedlands: Univ Western Australia; 2009. p. 2493–9.

[13] Liu XY. Airborne LiDAR for DEM generation: some critical issues. Prog Phys Geography-Earth Environ. 2008;32(1):31–49. doi:10.1177/0309133308089496.

[14] Zhang Y. Research on DEM resolution determination and scaling conversion methods. Nanjing, China: Nanjing Normal University; 2014.

[15] Liang Z, Ge Y, Wang J. Review of geostatistical-based downscaling. Remote Sens Technol Application. 2015;30(1):1–7.

[16] Guo L, Cui T, Wang Y, Lu Y. Study for multi-resources geospatial data integration. Geomat World. 2007;1:64–8.

[17] Karkee M, Steward BL, Aziz SA. Improving quality of public domain digital elevation models through data fusion. Biosyst Eng. 2008;101(3):293–305. doi:10.1016/j.biosystemseng.2008.09.010.

[18] Jhee H, Cho HC, Kahng HK, Cheung S. Multiscale quadtree model fusion with super-resolution for blocky artefact removal. Remote Sens Lett. 2013;4(4–6):325–34. doi:10.1080/2150704X.2012.729869.

[19] Tang Y, Zhang J, Jing L, Li H. Digital elevation data fusion using multiple-point geostatistical simulation. IEEE J Sel Top Appl Earth Observations Remote Sens. 2015;8(10):4922–34. doi:10.1109/JSTARS.2015.2438299.

[20] Su H, Zhou J, Zhang Z. Survey of super-resolution image reconstruction methods. Acta Automatica Sin. 2013;39(8):1202–13. doi:10.3724/SP.J.1004.2013.01202.

[21] Huang TS, Tsai R. Multiframe image restoration and registration. Adv Comput Vis Image Process. 1984;12:317–39.

[22] Nguyen N, Milanfar P. A wavelet-based interpolation-restoration method for superresolution. Circuits Syst Signal Process. 2000;19(4):321–38. doi:10.1007/BF01200891.

[23] Elad M, Feuer A. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images. IEEE Trans Image Process. 2002;6(12):1646–58. doi:10.1109/83.650118.

[24] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations. International Conference on Curves and Surfaces; 2010.

[25] Dong C, Chen CL, He K, Tang X. Learning a deep convolutional network for image super-resolution. Cham: Springer International Publishing; 2014. doi:10.1007/978-3-319-10593-2_13.

[26] Pouyanfar S, Sadiq S, Yan YL, Tian HM, Tao YD, Reyes MP, et al. A survey on deep learning: algorithms, techniques, and applications. ACM Comput Surv. 2019;51(5):36. doi:10.1145/3234150.

[27] Xu Z, Guan K, Casler N, Peng B, Wang S. A 3D convolutional neural network method for land cover classification using LiDAR and multi-temporal Landsat imagery. ISPRS J Photogrammetry Remote Sens. 2018;144:423–34. doi:10.1016/j.isprsjprs.2018.08.005.

[28] Zhou F, Jin L, Dong J. Review of convolutional neural network. Chin J Computers. 2017;40(6):1229–51.

[29] Dong C, Loy CC, He K, Tang X. Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell. 2016;38(2):295–307. doi:10.1109/TPAMI.2015.2439281.

[30] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016. p. 770–8. doi:10.1109/CVPR.2016.90.

[31] Zhang H, Song Z, Yang J, Yang Q. Influence of DEM super-resolution reconstruction on terraced field slope extraction. Trans Chin Soc Agric Machinery. 2017;48:118.

[32] Xu Z, Wang X, Chen Z, Xiong D, Ding M, Hou W. Nonlocal similarity based DEM super resolution. ISPRS J Photogrammetry Remote Sens. 2015;110:48–54. doi:10.1016/j.isprsjprs.2015.10.009.

[33] Yue L, Shen H, Yuan Q, Zhang L. Fusion of multi-scale DEMs using a regularized super-resolution method. Int J Geographical Inf Sci. 2015;29(11–12):2095–120. doi:10.1080/13658816.2015.1063639.

[34] Chen Z, Wang X, Xu Z, Hou W. Convolutional neural network based DEM super resolution. ISPRS Int Arch Photogrammetry Remote Sens Spat Inf Sci. 2016;XLI-B3:247–50. doi:10.5194/isprs-archives-XLI-B3-247-2016.

[35] Rees WG. The accuracy of digital elevation models interpolated to higher resolutions. Int J Remote Sens. 2010;21(1):7–20. doi:10.1080/014311600210957.

[36] Xu Z, Chen Z, Yi W, Gui Q, Hou W, Ding M. Deep gradient prior network for DEM super-resolution: transfer learning from image to DEM. ISPRS J Photogrammetry Remote Sens. 2019;150:80–90. doi:10.1016/j.isprsjprs.2019.02.008.

[37] Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks. J Mach Learn Res. 2011;15:315–23.

[38] Xing Q, Li Z, Zhou J, Zhang P. Comparison of ASTER GDEM and SRTM DEM in deriving the thickness change of Small Dongkemadi Glacier on Qinghai-Tibetan Plateau. IEEE Int Geosci Remote Sens. 2011. p. 3171–4. doi:10.1109/IGARSS.2011.6049892.

[39] Jacobsen K, Passini R. Analysis of ASTER GDEM elevation models. Canadian Society of Geomatics; IEEE Int Geosci Remote Sens. 2013.

[40] Guth PL. Geomorphometric comparison of ASTER GDEM and SRTM. ISPRS Technical Commission IV Symposium: Geodatabases & Digital Mapping; 2010. p. 1–10.

[41] Kingma D, Ba J. Adam: a method for stochastic optimization. Computer Science. 2014.

[42] Hisham MB, Yaakob SN, Raof RAA, Nazren ABA, Wafi NM. An analysis of performance for commonly used interpolation method. Adv Sci Lett. 2017;23(4):5147–50. doi:10.1166/asl.2017.7329.

[43] Xie H. Super resolution of digital terrain map based on convolutional neural network. Geospatial Inf. 2020;18(1):28–31+8.

[44] Szypuła B. Quality assessment of DEM derived from topographic maps for geomorphometric purposes. Open Geosci. 2019;11(1):843–65. doi:10.1515/geo-2019-0066.

[45] Drouin A, Saint-Laurent D. Comparison of interpolation methods to produce high precision digital elevation models (DEM) for the representation of floodplain micro-topography. Hydrological Sci J. 2010;55(4):526–39. doi:10.1080/02626667.2010.481088.

[46] Kulp SA, Strauss BH. CoastalDEM: a global coastal digital elevation model improved from SRTM using a neural network. Remote Sens Environ. 2018;206:231–9. doi:10.1016/j.rse.2017.12.026.

[47] Jiang L, Hu Y, Xia X, Liang Q, Soltoggio A, Kabir SR. A multi-scale mapping approach based on a deep learning CNN model for reconstructing high-resolution urban DEMs. Water. 2020;12(5). doi:10.3390/w12051369.

[48] Zhang L, Zhang L. Deep learning-based classification and reconstruction of residential scenes from large-scale point clouds. IEEE Trans Geosci Remote Sens. 2018;56(4):1887–97. doi:10.1109/TGRS.2017.2769120.

[49] Maxwell AE, Pourmohammadi P, Poyner JD. Mapping the topographic features of mining-related valley fills using Mask R-CNN deep learning and digital elevation data. Remote Sens. 2020;12(3):547. doi:10.3390/rs12030547.

[50] Ai B, Wen Z, Wang Z, Wang R, Su D, Li C, et al. Convolutional neural network to retrieve water depth in marine shallow water area from remote sensing images. IEEE J Sel Top Appl Earth Observations Remote Sens. 2020;13:2888–98. doi:10.1109/JSTARS.2020.2993731.

Received: 2020-08-02
Revised: 2020-10-14
Accepted: 2020-10-19
Published Online: 2020-11-13

© 2020 Donglai Jiao et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
