Open Access. Published by De Gruyter Open Access, June 24, 2022. CC BY 4.0 license.

Data smoothing with applications to edge detection

Mohammad F. Al-Jamal, Ahmad Baniabedalruhman and Abedel-Karrem Alomari
From the journal Open Mathematics

Abstract

The aim of this paper is to present a new stable method for smoothing and differentiating noisy data defined on a bounded domain $\Omega \subset \mathbb{R}^N$ with $N \ge 1$. The proposed method stems from the smoothing properties of the classical diffusion equation; the smoothed data are obtained by solving a diffusion equation with the noisy data imposed as the initial condition. We analyze the stability and convergence of the proposed method and give optimal convergence rates. One of the main advantages of this method lies in multivariable problems, where some of the other approaches are not easily generalized. Moreover, this approach does not require strong smoothness assumptions on the underlying data, which makes it appealing for detecting data corners or edges. Numerical examples demonstrate the feasibility and robustness of the method even in the presence of a large amount of noise.

MSC 2010: 65F22; 47A52; 35K20; 65D19

1 Introduction

It is very common in many scientific applications, such as biological imaging, image processing, computer vision, and inverse source and identification problems, to have to differentiate functions specified by data. In many cases, the data are experimental, and so substantial noise or imprecision may be present. Since differentiation is an ill-posed problem [1], naive numerical differentiation techniques will amplify the noise, resulting in inaccurate and mostly useless results. A concrete example is given by the function

$f_\delta(x) = f(x) + \delta \sin(\omega x), \quad x \in (0,1),$

where $f_\delta$ is the noisy version of the true data $f$, and $\delta$ is a small number representing the noise level. The computed derivative is then

$\dfrac{df_\delta}{dx}(x) = \dfrac{df}{dx}(x) + \omega\delta \cos(\omega x), \quad x \in (0,1).$

Evidently, the maximum error in the data is $\delta$, which is small, while the maximum error in the computed derivative, $\omega\delta$, can be arbitrarily large, depending on the value of the frequency $\omega$.
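A minimal numerical illustration of this amplification effect (our own sketch, not part of the paper; the test function $f(x) = x^2$ and all parameter values are arbitrary choices):

```python
import numpy as np

delta, omega = 1e-3, 200.0          # noise level and noise frequency
x = np.linspace(0.0, 1.0, 2001)     # grid spacing h = 5e-4, so omega*h << 1
f_true = x**2
f_noisy = f_true + delta * np.sin(omega * x)

# Naive central differences amplify the perturbation by a factor ~ omega.
df_true = np.gradient(f_true, x)
df_noisy = np.gradient(f_noisy, x)

print(np.max(np.abs(f_noisy - f_true)))    # ~ delta = 1e-3
print(np.max(np.abs(df_noisy - df_true)))  # ~ omega * delta = 0.2
```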

The above example showcases the instability associated with the differentiation process: small errors in the data might induce large errors in the computed derivative. To overcome the instability of the differentiation operator, several regularization techniques have been developed. In [2], a perturbation-based method is used for single-variable functions. In [3], a mollification method is given. Smoothing splines are adopted in [4], and results for single-variable functions are given. Other methods, such as Lanczos' generalized derivative [5] and differentiation by integration using Jacobi polynomials [6], are known for single-variable functions only. For a summary of other popular methods, see [7]. Despite its importance in real-life applications, only a few methods dealing with higher-dimensional cases have been reported [8,9,10,11]. Aside from their limited applicability due to dimension restrictions, the aforementioned approaches are too slow to suit real-life applications, particularly with big data.

In this paper, the smoothing property of the classical diffusion equation is utilized to reconstruct the gradient $\nabla f(x)$ of a multivariable function $f(x)$ from given noise-corrupted data $f_\delta \in L^2(\Omega)$ satisfying

$\|f_\delta - f\|_{L^2(\Omega)} \le \delta,$

where $\Omega \subset \mathbb{R}^N$ is a bounded domain. For the time being, we shall assume that $f|_{\partial\Omega} = 0$; see the Appendix for the general case. More precisely, the proposed method amounts to taking the function $\nabla_x u$ as a stable approximation to $\nabla f$, where $u$ solves the diffusion equation:

(1) $u_t(x,t) = \Delta_x u(x,t), \quad x \in \Omega, \ t > 0,$

subject to the initial and boundary conditions:

(2) $u(x,0) = f_\delta(x), \ x \in \Omega; \qquad u(x,t) = 0, \ x \in \partial\Omega, \ t > 0,$

where $\Delta$ is the Laplacian operator. The time $t$ can be considered a design parameter, and its role is to control the smoothness of the desired approximation; in the context of regularization theory, $t$ represents the regularization parameter. The precise formulation of the suggested method, as well as the motivation behind it, will be given in more detail in the sequel.

Under reasonable assumptions on the true data $f$, we prove stability results and establish optimal convergence rates for the proposed method. In particular, we show that the method attains an optimal convergence rate of order $O(\delta^{2/3})$. Several examples and applications in computer vision demonstrating the validity and efficiency of the proposed approach are also provided.

The rest of this paper is composed of five sections. Section 2 recalls some preliminary and auxiliary results. In Section 3, we state the precise formulation of the proposed method and give the motivation behind it, and then we prove the main convergence results in Section 4. We provide several numerical experiments and applications in Section 5. A more detailed discussion of the general case is provided in the Appendix.

2 Preliminaries

First, we recall some definitions and results from the theory of Sobolev spaces. Readers may refer to [12,13] for more detailed discussions. Throughout the rest of the paper, we assume that $\Omega$ is a bounded domain in $\mathbb{R}^N$, $N \ge 1$, with sufficiently smooth boundary $\partial\Omega$.

We use the notation $L^2(\Omega)$ to denote the usual Lebesgue space of square-integrable functions, which is a Hilbert space with the inner product

$(u,v)_{L^2(\Omega)} = \int_\Omega uv, \quad u, v \in L^2(\Omega).$

The induced norm will be denoted by $\|\cdot\|_{L^2(\Omega)}$.

Let $\alpha = (\alpha_1, \ldots, \alpha_N)$ be a multi-index, and set $|\alpha| = \alpha_1 + \cdots + \alpha_N$. For a positive integer $m$, the Sobolev space $H^m(\Omega)$ is defined by

$H^m(\Omega) = \{\, u \in L^2(\Omega) : \partial^\alpha u \in L^2(\Omega) \text{ for all } |\alpha| \le m \,\},$

where the partial derivatives are taken in the distributional sense. It is a Hilbert space under the inner product

$(u,v)_{H^m(\Omega)} = \sum_{|\alpha| \le m} (\partial^\alpha u, \partial^\alpha v)_{L^2(\Omega)},$

and a Banach space under the corresponding induced norm. In particular, the norm on $H^1(\Omega)$ is given by

$\|u\|_{H^1(\Omega)}^2 = \|u\|_{L^2(\Omega)}^2 + \|\nabla u\|_{L^2(\Omega)}^2.$

The Hilbert space $H_0^1(\Omega)$ is defined as the closure of $C_0^\infty(\Omega)$ in $H^1(\Omega)$, where $C_0^\infty(\Omega)$ is the set of all functions that are infinitely differentiable on $\Omega$ and compactly supported in $\Omega$. We have the characterization

$H_0^1(\Omega) = \{\, u \in H^1(\Omega) : u|_{\partial\Omega} = 0 \,\},$

where the boundary value is understood in the trace sense.

By $H^{1/2}(\partial\Omega)$, we mean the space of all restrictions to $\partial\Omega$ of functions in $H^1(\Omega)$. It is a Hilbert space with the norm

$\|u\|_{H^{1/2}(\partial\Omega)} = \inf\{\, \|v\|_{H^1(\Omega)} : v \in H^1(\Omega), \ v|_{\partial\Omega} = u \,\}.$

From Sturm–Liouville theory [14], the eigenvalues of the eigenvalue problem

(3) $-\Delta\varphi = \lambda\varphi \ \text{ in } \Omega, \qquad \varphi = 0 \ \text{ on } \partial\Omega$

form a nondecreasing sequence of positive numbers $\{\lambda_n\}$ tending to infinity, and the corresponding (normalized) eigenfunctions $\{\varphi_n\} \subset H_0^1(\Omega) \cap H^2(\Omega)$ form an orthonormal basis for the space $L^2(\Omega)$. Moreover,

(4) $\|\nabla u\|_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} (u, \varphi_n)_{L^2(\Omega)}^2 \, \lambda_n, \quad u \in H_0^1(\Omega).$

Associated with the eigenvalue problem (3), the Hilbert space X β is defined as follows:

$X^\beta = \left\{\, u \in L^2(\Omega) : \|u\|_{X^\beta}^2 = \sum_{n=1}^{\infty} (u, \varphi_n)_{L^2(\Omega)}^2 \, \lambda_n^{2\beta} < \infty \,\right\}.$

The parameter $\beta$ characterizes the regularity properties of the space $X^\beta$. We have $X^0 = L^2(\Omega)$, $X^{1/2} = H_0^1(\Omega)$, $X^1 = H^2(\Omega) \cap H_0^1(\Omega)$, and $X^\beta \subset H^{2\beta}(\Omega)$; see [14].
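For a concrete illustration (a standard model case, not specific to this paper), take $\Omega = (0,\pi)$ with $N = 1$: then $\lambda_n = n^2$, $\varphi_n(x) = \sqrt{2/\pi}\,\sin(nx)$, and

$\|u\|_{X^\beta}^2 = \sum_{n=1}^{\infty} n^{4\beta}\, (u, \varphi_n)_{L^2(0,\pi)}^2,$

so membership in $X^\beta$ requires the sine coefficients of $u$ to decay essentially faster than $n^{-2\beta - 1/2}$.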

3 Motivation of the proposed algorithm

To motivate the proposed regularization approach, let $u$ be the solution of the parabolic initial-boundary value problem given by equations (1) and (2). Let us set $y_\delta(x) = u(x,t)$ for $x \in \Omega$, and think of $t$ as a parameter. Then, on separating variables, we have the following representation:

(5) $y_\delta(x) = \sum_{n=1}^{\infty} (f_\delta, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x).$

A standard result [14] is that $y_\delta \in H^2(\Omega) \cap H_0^1(\Omega)$ for $t > 0$. From the expansion given in (5), we deduce that

$(y_\delta, \varphi_n)_{L^2(\Omega)} = \exp(-\lambda_n t)\, (f_\delta, \varphi_n)_{L^2(\Omega)}, \quad n = 1, 2, \ldots,$

and so, due to the exponential term, the Fourier coefficients of $y_\delta$ decay faster than those of $f_\delta$. It is also evident that $y_\delta \approx f_\delta$ for small values of the parameter $t$. Thus, from this brief analysis, we infer that $y_\delta$ is an approximation of $f_\delta$ that is smoother than the input data $f_\delta$.

As a consequence of the aforementioned discussion, it is reasonable to take $y_\delta(x)$, $t > 0$, as smooth approximations to $f(x)$ for $x \in \Omega$. Therefore, we suggest the following numerical differentiation algorithm:

Step 1. Compute the smoothed version of the noisy data by

$y_\delta(x) = \sum_{n=1}^{\infty} (f_\delta, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega;$

Step 2. Take the gradient $\nabla y_\delta(x)$ as an approximation of $\nabla f(x)$.

A few remarks are in order. First, from expansion (5), we see that the parameter $t$ plays the role of a smoothing parameter; that is, excessively small or large values of $t$ result in undersmoothing or oversmoothing, respectively. Second, we point out that the gradient in the second step can be computed exactly or using an appropriate difference scheme, depending on the nature of the available data. A minimal one-dimensional sketch of both steps is given below.
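The following sketch of Steps 1 and 2 is our own illustration with hypothetical names; it sums the sine series directly (Section 5 uses the fast sine transform instead) and assumes data on $(0, L)$ with zero boundary values:

```python
import numpy as np

def diffusion_smooth_1d(f_noisy, L, t, n_modes=None):
    """Steps 1-2 on (0, L) with zero boundary data: expand the samples in
    the sine eigenbasis, damp mode n by exp(-lambda_n * t), and resum."""
    m = len(f_noisy) - 1                      # grid: x_i = i*h, i = 0..m
    h = L / m
    x = np.arange(m + 1) * h
    n_modes = n_modes or m - 1
    y = np.zeros(m + 1)                       # smoothed data (Step 1)
    dy = np.zeros(m + 1)                      # its derivative (Step 2)
    for n in range(1, n_modes + 1):
        lam = (n * np.pi / L) ** 2            # eigenvalue of -phi'' = lam*phi
        phi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
        dphi = np.sqrt(2 / L) * (n * np.pi / L) * np.cos(n * np.pi * x / L)
        c = h * np.sum(f_noisy * phi)         # (f_delta, phi_n) by trapezoid
        y += c * np.exp(-lam * t) * phi
        dy += c * np.exp(-lam * t) * dphi     # exact derivative of y
    return y, dy
```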

4 Stability and consistency analysis

Our goal next is to examine $\nabla y_\delta(x)$, computed by the proposed algorithm, as a stable approximation to $\nabla f(x)$. In particular, we study the stability and consistency of the aforementioned approach and prove a convergence result. Throughout the sequel, we assume $t > 0$, and for convenience, we also let

$y(x) = \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega,$

that is, $y$ is the solution of (1) and (2) corresponding to the noiseless data $f$.

We start off with the following stability result.

Lemma 4.1

We have

$\|\nabla y_\delta - \nabla y\|_{L^2(\Omega)} \le \dfrac{\delta}{2\sqrt{t}}.$

Proof

Set $g = f_\delta - f$ and $u = y_\delta - y$. Since $g \in L^2(\Omega)$, we have $u \in H_0^1(\Omega)$. Then, it follows from (4) and the uniqueness of the Fourier coefficients that

$\|\nabla u\|_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} \lambda_n \left( \sum_{m=1}^{\infty} (g, \varphi_m)_{L^2(\Omega)} \exp(-\lambda_m t)\, \varphi_m, \; \varphi_n \right)_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} (g, \varphi_n)_{L^2(\Omega)}^2 \exp(-2\lambda_n t)\, \lambda_n.$

By using the fact that $4t\lambda e^{-2\lambda t} \le 1$ for all $\lambda, t > 0$ (which follows from $s e^{-s} \le e^{-1}$) together with Parseval's identity, we see that

$\|\nabla u\|_{L^2(\Omega)}^2 \le \frac{1}{4t}\, \|g\|_{L^2(\Omega)}^2 \le \frac{\delta^2}{4t},$

which concludes the proof.□

Throughout the sequel, we shall assume that $\beta > 1/2$, and we set $\eta = \beta - 1/2$ if $\beta < 3/2$, and $\eta = 1$ if $\beta \ge 3/2$. We have the following consistency result:

Lemma 4.2

If $f \in X^\beta$, then

$\|\nabla y - \nabla f\|_{L^2(\Omega)} \le C_\beta\, t^\eta,$

where $C_\beta = 2 \max\{1, \lambda_1^{3/2-\beta}\}\, \|f\|_{X^\beta}$.

Proof

Set $u(x) = y(x) - f(x)$, so that $u \in H_0^1(\Omega)$. Moreover, from (4), the definition of $y$, and the uniqueness of the Fourier coefficients, we have

$\|\nabla u\|_{L^2(\Omega)}^2 = \sum_{n=1}^{\infty} \left[ (y, \varphi_n)_{L^2(\Omega)} - (f, \varphi_n)_{L^2(\Omega)} \right]^2 \lambda_n = \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)}^2 \left( 1 - \exp(-\lambda_n t) \right)^2 \lambda_n.$

The last equality, together with the fact that $1 - e^{-t} \le 2t/(1+t)$ for all $t \ge 0$, implies

$\|\nabla u\|_{L^2(\Omega)}^2 \le 4 \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)}^2 \left( \frac{\lambda_n t}{1 + \lambda_n t} \right)^2 \lambda_n = 4 \sum_{n=1}^{\infty} (f, \varphi_n)_{L^2(\Omega)}^2 \, \lambda_n^{2\beta} \, \frac{\lambda_n^{3-2\beta}\, t^2}{(1 + \lambda_n t)^2}.$

Now, it is an easy matter to show that

$\dfrac{t^2 \lambda^{3-2\beta}}{(1 + t\lambda)^2} \le \begin{cases} t^{2\beta-1}, & \beta < 3/2, \\ \lambda_1^{3-2\beta}\, t^2, & \beta \ge 3/2, \end{cases}$

for all $\lambda \ge \lambda_1$, from which we see

$\|\nabla u\|_{L^2(\Omega)}^2 \le 4 \max\{1, \lambda_1^{3-2\beta}\}\, \|f\|_{X^\beta}^2 \begin{cases} t^{2\beta-1}, & \beta < 3/2, \\ t^2, & \beta \ge 3/2, \end{cases}$

which ends the proof.□

Combining Lemmas 4.1, 4.2, and the triangle inequality, we obtain the main convergence result:

Theorem 4.1

Suppose that $f \in X^\beta$ for some $\beta > 1/2$. Then there exists a constant $K$, independent of $t$ and $\delta$, such that

$\|\nabla y_\delta - \nabla f\|_{L^2(\Omega)} \le K \left( t^\eta + \frac{\delta}{\sqrt{t}} \right).$

Remark 4.1

If we choose the parameter $t$ so that $t \sim \delta^\gamma$ for some $\gamma \in (0,2)$, then we obtain the convergence result

$\|\nabla y_\delta - \nabla f\|_{L^2(\Omega)} \to 0$

as $\delta \to 0$. The convergence rate is optimal when $\gamma = 2/(2\eta+1)$; in this case, we have

$\|\nabla y_\delta - \nabla f\|_{L^2(\Omega)} = O(\delta^{2\eta/(2\eta+1)}).$

We obtain the fastest convergence when $\beta \ge 3/2$, in which case $\eta = 1$ and

$\|\nabla y_\delta - \nabla f\|_{L^2(\Omega)} = O(\delta^{2/3}),$

provided that $t = C\delta^{2/3}$ for some positive constant $C$.
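To see where the optimal exponent comes from, note that the two terms in Theorem 4.1 balance when $t^\eta = \delta/\sqrt{t}$, that is, $t = \delta^{2/(2\eta+1)}$; both terms are then of order $\delta^{2\eta/(2\eta+1)}$, and any other choice of $\gamma$ makes one of the two terms decay more slowly.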

5 Numerical experiments

Since data acquisition yields only discrete data in many scientific applications, we assume $\Omega = (0, L)^N$ and that the noisy data, which we denote by $f^\delta$, are sampled at $n = (m+1)^N$ regular grid points $X = \{ x_i = (i_1 h, \ldots, i_N h) : i \in I \}$ using the formula

$f_i^\delta = f(x_i) + e_i, \quad i \in I = \{ (i_1, \ldots, i_N) : i_1, \ldots, i_N = 1, \ldots, m-1 \},$

where $e$ is Gaussian noise with mean 0, and $h = L/m$. Let us further assume that $f|_{\partial\Omega} = 0$ (refer to the Appendix for the general treatment). We designate the symbol $\delta$ to denote the relative noise level in the data measured in the $\ell^2$ norm, that is,

$\|f^\delta - f\| = \delta \|f\|,$

where $\|\cdot\|$ is the usual Euclidean norm. We shall assess the quality of the recovered gradient by the relative $\ell^2$ error given by

$\dfrac{\|\nabla y^\delta - \nabla f\|}{\|\nabla f\|}.$

Regarding the choice of the parameter $t$, we employ Morozov's discrepancy principle (e.g., [1]), which amounts to choosing $t$ such that

$\|y^\delta - f^\delta\| = \delta \|f\|.$
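The discrepancy $\|y^\delta(t) - f^\delta\|$ is monotonically increasing in $t$ (each Fourier term grows with $t$), so this equation can be solved by bisection. A sketch of such a parameter search follows (our own code, not from the paper; `smooth` stands for any implementation of (5), e.g. the DST routine sketched below, and since $\|f\|$ is unknown we substitute $\|f^\delta\|$, a standard approximation):

```python
import numpy as np

def choose_t_morozov(smooth, f_noisy, delta_rel, t_lo=1e-12, t_hi=1e2, iters=60):
    """Morozov's discrepancy principle: find t such that
    ||smooth(f_noisy, t) - f_noisy|| = delta_rel * ||f_noisy||,
    by bisection on a logarithmic scale in t."""
    target = delta_rel * np.linalg.norm(f_noisy)
    for _ in range(iters):
        t_mid = np.sqrt(t_lo * t_hi)       # geometric midpoint
        disc = np.linalg.norm(smooth(f_noisy, t_mid) - f_noisy)
        if disc < target:
            t_lo = t_mid                   # undersmoothing: increase t
        else:
            t_hi = t_mid                   # oversmoothing: decrease t
    return np.sqrt(t_lo * t_hi)
```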

For such a domain $\Omega$, the eigenvalues and eigenfunctions of (3) are

$\lambda_i = \left( \frac{i_1 \pi}{L} \right)^2 + \cdots + \left( \frac{i_N \pi}{L} \right)^2, \qquad \varphi_i(x) = \sin\!\left( \frac{i_1 \pi x_1}{L} \right) \cdots \sin\!\left( \frac{i_N \pi x_N}{L} \right).$

Therefore, the discrete sine transform $\hat{f^\delta}$ of the data $f^\delta$ is expressed as follows:

$\hat{f^\delta}_i = \left( \frac{2}{m} \right)^{N/2} \sum_{k \in I} f_k^\delta \, \varphi_i(x_k), \quad i \in I.$

Using the trapezoidal rule over the regular grid $X$, the smoothed data given by (5) can be computed as follows:

$y^\delta(x_j) = \sum_{i \in I} \left( \frac{2}{L} \right)^N \left( \int_\Omega f^\delta(x)\, \varphi_i(x)\, dx \right) e^{-\lambda_i t}\, \varphi_i(x_j) \approx \sum_{i \in I} \left( \frac{2}{m} \right)^N \sum_{k \in I} f_k^\delta\, \varphi_i(x_k)\, e^{-\lambda_i t}\, \varphi_i(x_j) = \left( \frac{2}{m} \right)^{N/2} \sum_{i \in I} \left( E_i\, \hat{f^\delta}_i \right) \varphi_i(x_j), \quad j \in I,$

where $E_i = e^{-\lambda_i t}$. Thus, the smoothed data can be computed by

$y^\delta = \widehat{\hat{f^\delta} \circ E},$

where the symbol $\circ$ denotes the elementwise multiplication operator, and the hat $\hat{\ }$ stands for the discrete sine transform.
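A compact sketch of this elementwise filtering (our own illustration with hypothetical names; it assumes interior samples of zero-boundary data on the cube $(0,L)^N$ and uses SciPy's orthonormal DST-I, which is its own inverse and matches the sine eigenbasis on the interior grid):

```python
import numpy as np
from scipy.fft import dstn, idstn

def diffusion_smooth(f_noisy, L, t):
    """Smooth samples of zero-boundary data on (0, L)^N: forward DST-I,
    damp mode i by E_i = exp(-lambda_i * t), then transform back.
    f_noisy holds the interior grid values, shape (m-1,) * N."""
    freqs = [np.arange(1, s + 1) * np.pi / L for s in f_noisy.shape]
    lam = sum(np.meshgrid(*[w**2 for w in freqs], indexing="ij"))
    f_hat = dstn(f_noisy, type=1, norm="ortho")       # sine coefficients
    return idstn(np.exp(-lam * t) * f_hat, type=1, norm="ortho")
```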

In the following examples, the derivatives of the smoothed data are computed via the central difference formula

$\frac{\partial y^\delta}{\partial x_k}(x_i) \approx \frac{ y^\delta_{i_1, \ldots, i_k+1, \ldots, i_N} - y^\delta_{i_1, \ldots, i_k-1, \ldots, i_N} }{2h}, \quad i \in I, \ k = 1, \ldots, N,$

and similarly for the true and noisy data.
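For instance, a hypothetical end-to-end use of the sketches above on one-dimensional data (`np.gradient` applies exactly this central-difference stencil in the interior, with one-sided differences at the boundary):

```python
import numpy as np

m, L, t = 256, 1.0, 1e-4
x = np.arange(1, m) * (L / m)                    # interior grid points
f_noisy = np.sin(np.pi * x) + 0.05 * np.random.randn(m - 1)

y_smooth = diffusion_smooth(f_noisy, L, t)       # smoothing step
dy = np.gradient(y_smooth, L / m)                # central differences inside
```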

Now, we examine examples in several space dimensions. We point out that the computations are extremely quick owing to the fast Fourier transform (FFT) routine, which is available in many computer algebra systems.

Example 5.1

The true data are given by the function

$f(x) = \exp(-3(2x-1)^2)\, \sin(5\pi x), \quad 0 \le x \le 1.$

For noise level $\delta = 20\%$ and sample size $n = 500$, the true, noisy, and smoothed data and their corresponding derivatives are shown in Figure 1. Error results are summarized in Table 1.

Figure 1: Data (left) and the corresponding derivative (right) for Example 5.1.

Table 1: $\ell^2$-relative errors in the estimated derivatives for Example 5.1 for several sample sizes $n$ and noise levels $\delta$. Numbers in parentheses represent the relative errors computed from the noisy data.

δ \ n 50 500 5,000
0.2 0.190 (0.809) 0.085 (9.084) 0.043 (89.13)
0.02 0.045 (0.080) 0.030 (0.908) 0.020 (8.913)
0.002 0.007 (0.008) 0.012 (0.090) 0.008 (0.891)

Remark 5.1

The results in Example 5.1 suggest a convergence rate of $O(\delta^{0.48})$. Since $f \in X^\beta$ for all $\beta < 1.25$, from Remark 4.1 we deduce the theoretical optimal order $O(\delta^{0.6})$. The deterioration in the convergence rate is mainly due to discretization errors and the nonoptimal choice of the smoothing parameter $t$.

Example 5.2

We consider the recovery of the gradient of the function

$f(x,y) = 5(6y-3)\exp\!\big( -(6x-3)^2 - (6y-3)^2 \big),$

where $(x,y) \in [0,1]^2$. For noise level $\delta = 10\%$ and sample size $n = 50 \times 50$, the noisy and smoothed data are depicted in Figure 2. The error results for the estimated gradients are summarized in Table 2.

Figure 2: True data surface along with the noisy data (left) and smoothed data (right) for Example 5.2.

Table 2: $\ell^2$-relative errors in the recovered gradients for Example 5.2 for several sample sizes $n$ and noise levels $\delta$. Numbers in parentheses represent the relative errors computed from the noisy data.

δ \ n 50 × 50 500 × 500 750 × 750
0.2 0.138 (1.671) 0.056 (16.64) 0.047 (24.97)
0.02 0.051 (0.167) 0.023 (1.664) 0.021 (2.497)
0.002 0.013 (0.016) 0.011 (0.166) 0.009 (0.249)

Example 5.3

The true data are given by the function

$f(x,y,z) = \sin\!\big( (x - x^2)(y - y^2) \big)\,(z - z^2),$

where $(x,y,z) \in [0,1]^3$. A slice contour plot of the noisy data and the corresponding smoothed data is shown in Figure 3. Relative errors in the estimated gradients are summarized in Table 3.

Figure 3: Sliced contours of noisy data (left) and smoothed data (right) for Example 5.3.

Table 3: $\ell^2$-relative errors in the recovered gradients for Example 5.3 for several sample sizes $n$ and noise levels $\delta$. Numbers in parentheses represent the relative errors computed from the noisy data.

δ \ n 50 × 50 × 50 100 × 100 × 100 150 × 150 × 150
0.2 0.061 (4.705) 0.038 (9.174) 0.038 (13.64)
0.02 0.030 (0.470) 0.021 (0.917) 0.020 (1.364)
0.002 0.015 (0.047) 0.011 (0.091) 0.009 (0.136)

Remark 5.2

In contrast to the naive method, the error results in Tables 1–3 show that the proposed method is stable with respect to the sample size $n$; that is, as the grid is refined, the error gets smaller.

We now present an application to edge detection (edge emphasis) in digital images.

Example 5.4

The image gradient is an essential ingredient in common image-processing and computer-vision applications, particularly in edge detection algorithms. Since the gradient is sensitive to noise, noisy images must be denoised or smoothed prior to gradient estimation. In common edge detection algorithms, such as the Canny method, a Gaussian filter is typically used for this purpose. In the following demonstration, we employ our method to smooth the noisy images, and the image gradient is then approximated using the central difference formula. Consequently, the magnitude of the image gradient at pixel $(x,y)$ is approximated by

$|\nabla f(x,y)| \approx \frac{1}{2} \sqrt{ [f(x+1,y) - f(x-1,y)]^2 + [f(x,y+1) - f(x,y-1)]^2 },$

where $f$ is the underlying image intensity function.
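This quantity is straightforward to compute with NumPy, whose `gradient` routine applies exactly this central-difference stencil (including the factor $1/2$) in the interior and one-sided differences at the image borders (a small sketch of ours):

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a 2-D intensity array."""
    gy, gx = np.gradient(img.astype(float))   # unit pixel spacing
    return np.hypot(gx, gy)
```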

In this demonstration, we consider three noisy images obtained by adding Poisson noise to the noise-free images. The original, noisy, and diffusion-smoothed images are shown in Figure 4. To assess the quality of the smoothed (denoised) images, we computed the peak signal-to-noise ratio (PSNR) of the noisy and the smoothed images, given by

$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\mathrm{MSE}} \ (\mathrm{dB}), \qquad \mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ f(i,j) - \tilde{f}(i,j) \right]^2,$

where $f \in \mathbb{R}^{M \times N}$ and $\tilde{f} \in \mathbb{R}^{M \times N}$ are the original and denoised images, respectively. The PSNR results are summarized in Table 4.
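For reference, the PSNR computation in these formulas amounts to a few lines (our sketch, assuming 8-bit images so the peak value is 255):

```python
import numpy as np

def psnr(original, denoised):
    """Peak signal-to-noise ratio in dB, per the formulas above."""
    mse = np.mean((original.astype(float) - denoised.astype(float)) ** 2)
    return 10.0 * np.log10(255.0**2 / mse)
```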

The gradient magnitudes of the noisy and smoothed images are plotted in Figure 5. For the sake of comparison, we also plot the edges of the noisy images obtained by the popular Canny method [15], which consists of five main steps: Gaussian filtering, measuring the intensity gradient, edge thinning, double thresholding to determine strong and weak edges, and edge tracking via blob analysis to extract weak edges.

From Table 4, we see that the PSNR values of the denoised images are significantly higher than those of the noisy images, which indicates that the images denoised by the proposed smoothing method have better quality than the noisy images. Moreover, as can be observed from Figures 4 and 5, the proposed technique significantly enhances the quality of the recovered edges, and the smoothing stage appears to have no bias toward any particular direction; this is a major advantage, since edge orientation is mostly unknown in real-life applications. It is evident from this experiment that diffusion smoothing gives plausible results and is well suited to this type of application.

Table 4

PSNR results for noisy and denoised images in Example 5.4

Image Noisy image Denoised image
House 15.129 21.530
Brain 18.092 25.216
Fish 15.568 21.164
Figure 4: Original images (left), noisy images (middle), and smoothed images (right).

Figure 5: Detected edges: from noisy images (left), by the Canny method (middle), and from smoothed images (right).

From the presented examples, we see that the proposed diffusion technique is very suitable for data smoothing and numerical differentiation, and it can excel in various scientific applications owing to its speed (in particular, when utilizing the fast Fourier transform), the theory supporting it, and the absence of dimension limitations: the method extends easily to problems in any dimension, in contrast to most existing methods, which are limited to one-dimensional domains.

6 Conclusion

The smoothing property of the diffusion equation is utilized to estimate the gradient of functions specified by noisy data. We proved stability and consistency results and carried out the convergence analysis for the proposed method. Several examples in one- and higher-dimensional domains demonstrate the feasibility and robustness of the proposed approach.

The main advantage of the method lies in multivariable problems, where some of the other smoothing approaches do not easily generalize. Moreover, the method gives decent results under mild assumptions on the true data, even in the presence of a large amount of noise, while these could be limitations for other methods. We believe the method can excel in various scientific applications due to its speed, relative ease of implementation, and supporting theory; we look forward to extending this approach and the theoretical results to the estimation of higher derivatives. Furthermore, appropriate a priori and a posteriori parameter choice strategies for choosing $t$ are under investigation.

Acknowledgements

The authors would like to thank the Scientific Research and Graduate Studies at Yarmouk University for supporting this paper, and they would like to express their sincere gratitude to the anonymous referees for their helpful comments, which helped us improve the quality of the manuscript.

Conflict of interest: The authors state no conflict of interest.

Appendix

If the true boundary data $h = f|_{\partial\Omega}$ are known approximately through a function $h_\varepsilon$ satisfying

$\|h_\varepsilon - h\|_{H^{1/2}(\partial\Omega)} \le \varepsilon,$

then the smoothed data can be defined as follows:

$y_{\delta,\varepsilon}(x) = W_\varepsilon(x) + \sum_{n=1}^{\infty} (f_\delta - W_\varepsilon, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega,$

where the function $W_\varepsilon$ satisfies (in the weak sense) the boundary-value problem

$-\Delta u + u = 0 \ \text{ in } \Omega, \qquad u = h_\varepsilon \ \text{ on } \partial\Omega.$

To derive stability and convergence results, we set

$y(x) = W(x) + \sum_{n=1}^{\infty} (f - W, \varphi_n)_{L^2(\Omega)} \exp(-\lambda_n t)\, \varphi_n(x), \quad x \in \Omega,$

where $W$ solves the boundary-value problem

$-\Delta u + u = 0 \ \text{ in } \Omega, \qquad u = h \ \text{ on } \partial\Omega.$

From the definition of $H^{1/2}(\partial\Omega)$ (e.g., [12]), we see that

$\|W_\varepsilon - W\|_{H^1(\Omega)} = \|h_\varepsilon - h\|_{H^{1/2}(\partial\Omega)} \le \varepsilon.$

Using the aforementioned inequality and arguments similar to those in the proofs of Lemmas 4.1 and 4.2, it is easy to conclude that

$\|\nabla y_{\delta,\varepsilon} - \nabla y\|_{L^2(\Omega)} \le \frac{\delta + \varepsilon}{2\sqrt{t}} + \varepsilon, \qquad \|\nabla y - \nabla f\|_{L^2(\Omega)} \le C_\beta\, t^\eta,$

where $C_\beta$ depends on $f$ and $W$, provided $f - W \in X^\beta$. The last inequalities together with the triangle inequality imply the main error result:

$\|\nabla y_{\delta,\varepsilon} - \nabla f\|_{L^2(\Omega)} \le K \left( \frac{\delta + \varepsilon}{\sqrt{t}} + \varepsilon + t^\eta \right).$

Moreover, if we let $\bar\varepsilon = \max\{\delta, \varepsilon\}$ and choose $t \sim \bar\varepsilon^{\,\gamma}$ for some $\gamma \in (0,2)$, then we obtain the convergence result

$\|\nabla y_{\delta,\varepsilon} - \nabla f\|_{L^2(\Omega)} \to 0$

as $\bar\varepsilon \to 0$. The convergence rate is optimal when $\gamma = 2/(2\eta+1)$, and in this case, we have

$\|\nabla y_{\delta,\varepsilon} - \nabla f\|_{L^2(\Omega)} = O(\bar\varepsilon^{\,2\eta/(2\eta+1)}).$

We obtain the fastest convergence when $f - W \in X^\beta$ for some $\beta \ge 3/2$, in which case $\eta = 1$ and

$\|\nabla y_{\delta,\varepsilon} - \nabla f\|_{L^2(\Omega)} = O(\bar\varepsilon^{\,2/3}),$

provided that $t = C\bar\varepsilon^{\,2/3}$ for some positive constant $C$.

References

[1] H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems, Kluwer Academic, Dordrecht, 1996, https://doi.org/10.1007/978-94-009-1740-8.

[2] L. Yang, A perturbation method for numerical differentiation, Appl. Math. Comput. 199 (2008), no. 1, 368–374, https://doi.org/10.1016/j.amc.2007.09.066.

[3] D. A. Murio, C. E. Mejia, and S. Zhan, Discrete mollification and automatic numerical differentiation, Comput. Math. Appl. 35 (1998), no. 5, 1–16, https://doi.org/10.1016/S0898-1221(98)00001-7.

[4] M. Hanke and O. Scherzer, Inverse problems light: numerical differentiation, Amer. Math. Monthly 108 (2001), no. 6, 512–521, https://doi.org/10.2307/2695705.

[5] C. W. Groetsch, Lanczos' generalized derivative, Amer. Math. Monthly 105 (1998), no. 4, 320–326, https://doi.org/10.2307/2589707.

[6] D. Y. Liu, O. Gibaru, and W. Perruquetti, Differentiation by integration with Jacobi polynomials, J. Comput. Appl. Math. 235 (2011), no. 9, 3015–3032, https://doi.org/10.1016/j.cam.2010.12.023.

[7] I. Knowles and R. J. Renka, Methods for numerical differentiation of noisy data, Electron. J. Differ. Equ. Conf. 21 (2014), 235–246.

[8] M. F. Al-Jamal, A. K. Alomari, and M. S. Gockenbach, Smoothing via elliptic operators with application to edge detection, Inverse Probl. Sci. Eng. 26 (2018), no. 5, 657–676, https://doi.org/10.1080/17415977.2017.1336552.

[9] Z. Meng, Z. Zhao, D. Mei, and Y. Zhou, Numerical differentiation for two-dimensional functions by a Fourier extension method, Inverse Probl. Sci. Eng. 28 (2020), no. 1, 126–143, https://doi.org/10.1080/17415977.2019.1661410.

[10] A. A. Yahya, J. Tan, B. Su, K. Liu, and A. N. Hadi, Image edge detection method based on anisotropic diffusion and total variation models, J. Eng. 2019 (2019), no. 2, 455–460, https://doi.org/10.1049/joe.2018.5345.

[11] C. Brito-Loeza, R. Legarda-Saenz, and A. Martin-Gonzalez, A fast algorithm for a total variation based phase demodulation model, Numer. Methods Partial Differential Equations 36 (2020), no. 3, 617–636, https://doi.org/10.1002/num.22444.

[12] K. Atkinson and W. Han, Theoretical Numerical Analysis: A Functional Analysis Framework, Springer-Verlag, New York, 2009, https://doi.org/10.1007/978-1-4419-0458-4.

[13] D. Braess, Finite Elements: Theory, Fast Solvers, and Applications in Solid Mechanics, Cambridge University Press, Cambridge, 2007, https://doi.org/10.1017/CBO9780511618635.

[14] R. C. McOwen, Partial Differential Equations: Methods and Applications, Prentice Hall, New Jersey, 2003.

[15] J. Canny, A computational approach to edge detection, IEEE Trans. Pattern Anal. Machine Intell. 8 (1986), no. 6, 679–698, https://doi.org/10.1109/TPAMI.1986.4767851.

Received: 2021-11-11
Revised: 2022-04-07
Accepted: 2022-04-09
Published Online: 2022-06-24

© 2022 Mohammad F. Al-Jamal et al., published by De Gruyter

This work is licensed under the Creative Commons Attribution 4.0 International License.
