Previous denoising algorithms were developed under the assumption that the raw image is regular, and many image details may also be smoothed away during denoising. These two disadvantages are remedied by the NLM filter developed by Buades *et al*. [14]. The basic idea of this algorithm is that an image usually contains a considerable amount of noise, so a noisy image *v* is defined as
$$v=u+n$$(1)

As per Equation (1), the image *v* is composed of the original noise-free image *u* and random noise *n*. At pixel *i*, the NLM filtering result *v̂*(*i*) is the weighted average of all pixels within the noisy image *v*.
$$\hat{v}(i)=\sum _{j}w(i,j)v(j)$$(2)

In Equation (2), the weighting factor *w*(*i, j*) depends on the similarity between pixels *i* and *j* and must satisfy the conditions 0 ≤ *w*(*i, j*) ≤ 1 and $\sum _{j}w(i,j)=1.$ The similarity between the pixels *i* and *j* is measured by the Gaussian weighted Euclidean distance *D*(*i, j*) between the pixel *i* with its neighborhood *N*_{i} and the pixel *j* with its neighborhood *N*_{j}. Note that each pixel *i* has its own independent weighting factors for the other pixels *j* within the image, calculated by Equation (3).
$$w(i,j)=\frac{1}{Z(i)}\mathrm{exp}\left(\frac{-{D}^{2}(i,j)}{{h}^{2}}\right)$$(3)
where *Z*(*i*) is the normalizing factor ensuring $\sum _{j}w(i,j)=1,$ defined by
$$Z(i)=\sum _{j}\mathrm{exp}\left(\frac{-{D}^{2}(i,j)}{{h}^{2}}\right)$$(4)

The filtering parameter *h* is a constant that controls the decay rate of the exponential function and thus determines the degree of filtering. A large value of *h* assigns very similar weights to all pixels *j*, so the image is blurred. Conversely, a small value of *h* assigns significant weight to only a few pixels *j*, so the noise cannot be attenuated sufficiently.
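The effect of *h* in Equations (3) and (4) can be illustrated with a short sketch (the distance values below are hypothetical, chosen only to show the decay behavior):

```python
import numpy as np

def nlm_weights(d2, h):
    """Equations (3)-(4): turn an array of squared patch distances d2
    into normalized weights; h controls the decay of the exponential."""
    w = np.exp(-d2 / h ** 2)
    return w / w.sum()  # division by Z(i) makes the weights sum to 1

d2 = np.array([0.0, 1.0, 4.0, 9.0])   # hypothetical squared distances
w_large = nlm_weights(d2, h=100.0)    # large h: nearly uniform weights -> blurring
w_small = nlm_weights(d2, h=0.5)      # small h: almost all weight on the closest patch
```

With `h=100` the four weights are nearly equal (≈0.25 each), while with `h=0.5` the weight of the closest patch dominates, matching the trade-off described above.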

The Gaussian weighted Euclidean distance *D*^{2}(*i, j*) in Equation (3) is defined by the following expression.
$${D}^{2}(i,j)={\left|v({N}_{i})-v({N}_{j})\right|}_{2,a}^{2}=\sum _{l}^{nl}{G}_{a}({x}_{l},{y}_{l}){\left[v({N}_{i}(l))-v({N}_{j}(l))\right]}^{2}$$(5)
where the operator $|\bullet {|}_{2,a}^{2}$ denotes the squared Gaussian weighted Euclidean distance, *N*_{i} represents a neighborhood centered at the pixel *i*, which is usually a square domain, *G*_{a} represents the Gaussian kernel with standard deviation *a*, and *l* indexes one of the *nl* elements within a neighborhood.

For a 2D image, the Gaussian kernel can be defined by,
$${G}_{a}(x,y)=\mathrm{exp}\left(-\frac{(x-{x}_{0}{)}^{2}+(y-{y}_{0}{)}^{2}}{2{a}^{2}}\right)$$(6)
where *x*_{0} and *y*_{0} represent the center of the Gaussian kernel, with *x* and *y* corresponding to the coordinates of the element *l* in Equation (5).
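The kernel of Equation (6) can be generated directly on a square neighborhood; a minimal sketch (taking the center (*x*_{0}, *y*_{0}) as the middle element of the neighborhood) is:

```python
import numpy as np

def gaussian_kernel(size, a):
    """Equation (6): 2D Gaussian kernel over a size x size neighborhood,
    centered at (x0, y0) = the middle element."""
    c = size // 2  # center index (x0, y0)
    x, y = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    return np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * a ** 2))
```

The kernel equals 1 at the center and decays toward the corners, so in Equation (5) the elements near the patch center contribute most to the distance.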

For three neighborhoods (marked in red, green, and blue, respectively) within a square image (Figure 1a), the weights of their centers with respect to all other elements within the image are calculated by Equation (3). For the red neighborhood in Figure 1a, the centers of all red neighborhoods in Figure 1b have larger weighting factors owing to their good similarity. Similarly, for the green neighborhood, the centers of all green neighborhoods in Figure 1c have larger weighting factors, and for the blue neighborhood, the centers of all blue neighborhoods in Figure 1d have larger weighting factors.

Figure 1: The weights schematic diagrams of Non-Local means algorithm

As shown in Figure 1 and Equation (5), for two neighborhoods with identical numerical distributions and structures, the squared Gaussian weighted Euclidean distance between them is equal to zero. Therefore, for the center of the red neighborhood in Figure 1a, the centers of all red neighborhoods in Figure 1b have identical weighting factors, which is obviously unreasonable. Hence, *D*^{2}(*i, j*) in Equation (3) is corrected to ${\overline{D}}^{2}$(*i, j*) following the idea of inverse distance weighted interpolation.
$${\overline{D}}^{2}(i,j)={D}^{2}(i,j)+|i-j{|}^{2}$$(7)
where |*i−j*| is the geometric distance between the pixels *i* and *j*; for pixel *i* with coordinates (*x*_{i}, *y*_{i}) and pixel *j* with coordinates (*x*_{j}, *y*_{j}), |*i−j*|^{2} = (*x*_{i}−*x*_{j})^{2} + (*y*_{i}−*y*_{j})^{2}.
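The correction of Equation (7) is a one-line addition to the patch distance; as a sketch, with pixel positions passed as (x, y) tuples:

```python
def corrected_d2(d2, i, j):
    """Equation (7): patch distance D^2(i, j) from Equation (5) plus the
    squared geometric distance between pixel positions i and j."""
    return d2 + (i[0] - j[0]) ** 2 + (i[1] - j[1]) ** 2
```

Two structurally identical patches at different positions now receive different (distance-penalized) weights, removing the ambiguity noted above.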

In addition, there is a defect in the neighborhood distance measurement of Equation (5). The center of the neighborhood has a much larger weighting factor than all other elements within the search region: when *i* = *j*, the distance is zero and the weight is maximal, so the contribution of adjacent elements to the similarity is reduced by this over-weighting. For example, if the center pixel is noisy while the surrounding pixels are less noisy, the large weighting factor produced by Equation (5) for the center is unfavorable for denoising. To solve this problem, we require 0 ≤ *w*(*i, i*) < 1 when *i* = *j*, so that the impact of the weighting factor at the center is mitigated. In numerical calculation, *w*(*i, i*) = 0.5 or *w*(*i, i*) = max(*w*(*i, j*), ∀*i* ≠ *j*); in the following calculations, *w*(*i, i*) = 0.5 is used.
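The center-weight fix amounts to overwriting the self-weight before normalization; a minimal sketch, assuming the unnormalized weights of Equation (3) are held in an array:

```python
import numpy as np

def weights_with_center_fix(d2, h, center, w_center=0.5):
    """Unnormalized weights from Equation (3); the self-weight exp(0) = 1
    at index `center` is replaced by w_center (0.5 in the text) before
    dividing by Z(i)."""
    w = np.exp(-d2 / h ** 2)
    w[center] = w_center
    return w / w.sum()
```

Replacing the self-weight before dividing by *Z*(*i*) keeps the normalization $\sum _{j}w(i,j)=1$ intact while preventing the center from dominating the average.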

In the original NLM filter, each pixel in the image is compared with all pixels, which results in massive computation and low efficiency. For an image with M×N pixels, M×N weighting factors must be calculated for each pixel, so altogether (M×N)^{2} factors are calculated in the filtering. Such an inefficient algorithm is difficult to apply in practice. More importantly, many irrelevant neighborhoods at large distances are assigned weighting factors, which impairs the filtering result.

Buades *et al*. [14] proposed limiting the region of search to a square area, *i.e*. a neighborhood of *S*×*S* pixels around the pixel being processed, so as to improve the efficiency of filtering. Thus, for a search region of *S*×*S* and an image with M×N pixels, the computational complexity is M×N×(*S*×*S*−1) (it is unnecessary to calculate the weighting factor for the center of the neighborhood itself). As a result, the computational complexity is greatly reduced compared with the original NLM algorithm.
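The pieces above can be assembled into a search-window NLM sketch. This is an illustrative implementation under assumed default parameters (patch size, window size, *h*, *a* are placeholders, not values from the text); it includes the geometric-distance correction of Equation (7) and the fixed self-weight *w*(*i, i*) = 0.5:

```python
import numpy as np

def nlm_denoise(v, patch=3, search=7, h=10.0, a=1.0):
    """Search-window NLM: patch x patch neighborhoods are compared only
    within a search x search window around each pixel, with the corrected
    distance of Equation (7) and self-weight w(i, i) = 0.5."""
    r, s = patch // 2, search // 2
    pad = np.pad(np.asarray(v, dtype=float), r + s, mode="reflect")
    # Gaussian kernel of Equation (6), built as an outer product
    g = np.exp(-(np.arange(patch) - r) ** 2 / (2 * a ** 2))
    G = np.outer(g, g)
    M, N = v.shape
    out = np.zeros((M, N))
    for i in range(M):
        for j in range(N):
            pi, pj = i + r + s, j + r + s          # position in padded image
            Ni = pad[pi - r:pi + r + 1, pj - r:pj + r + 1]
            acc, wsum = 0.0, 0.0
            for di in range(-s, s + 1):            # scan the search window
                for dj in range(-s, s + 1):
                    qi, qj = pi + di, pj + dj
                    Nj = pad[qi - r:qi + r + 1, qj - r:qj + r + 1]
                    # Equation (5) plus the |i - j|^2 term of Equation (7)
                    d2 = np.sum(G * (Ni - Nj) ** 2) + di ** 2 + dj ** 2
                    w = 0.5 if (di == 0 and dj == 0) else np.exp(-d2 / h ** 2)
                    acc += w * pad[qi, qj]
                    wsum += w
            out[i, j] = acc / wsum                 # normalization by Z(i)
    return out
```

The nested loops make the S×S-per-pixel cost explicit; a production implementation would vectorize or integrate the distances, but the structure above follows the equations directly.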
