As shown in the spectral image completion diagram, in practice the acquired spectral image suffers data loss, caused for example by the failure of a satellite's scan-line corrector, which yields a degraded spectral image whose observation quality is severely damaged. To address this, a completion method that recovers the latent high-quality spectral image from the degraded one is particularly important. Before presenting the specific completion method, we first give the general mathematical model of spectral-image degradation by data loss:

$$y=Hx+n$$ (34)

In the above equation, *x* denotes the original high-quality spectral image, *H* the sampling operator modeling the data-loss process, *y* the low-quality spectral image after partial data loss, and *n* the noise introduced during sampling.
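As a minimal sketch of this degradation model (all sizes are hypothetical, and a binary row mask stands in for the scan-line-corrector failure), the observation $y = Hx + n$ can be simulated as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a small spectral cube, h x w pixels, n bands.
h, w, n = 8, 8, 4
x = rng.random((h, w, n))            # original high-quality spectral image x

# H: sampling operator modeling data loss -- here a binary mask that
# zeroes out entire scan lines (rows), mimicking a scan-line failure.
mask = np.ones((h, w, n))
mask[::3, :, :] = 0.0                # every third scan line is lost

noise = 0.01 * rng.standard_normal((h, w, n))  # sampling noise n
y = mask * x + noise                 # degraded observation y = Hx + n
```

Here `mask` plays the role of the diagonal sampling operator *H*; any other loss pattern (random pixels, dead columns) fits the same model.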

In the spectral-image completion problem, the low-quality spectral image *y* with data loss and the sampling operator *H* are known, and the task is to recover the original high-quality spectral image *x*. Since image restoration is a typical ill-posed problem, it cannot be solved directly; prior knowledge about the spectral image must therefore be introduced during completion to constrain the solution space of the underdetermined problem. The analysis of spectral-image priors in the previous section showed that a spectral image contains a large number of non-local self-similar 3D image blocks, and that the matrix formed by stacking 3D blocks with similar features is potentially low-rank. Based on this, the low-rank property of the spectral image is introduced as prior knowledge into the completion process, yielding the following low-rank optimization model based on 3D image blocks:

$$\left\{\hat{x},\hat{L}_{i}\right\}=\underset{x,L_{i}}{\arg\min}\;\sum_{i}\left\|\tilde{R}_{i}x-L_{i}\right\|_{F}^{2}+\tau\sum_{i}\mathrm{Rank}\left(L_{i}\right),\quad \text{s.t.}\;\left\|y-Hx\right\|_{2}^{2}\le\varepsilon$$ (35)

Here, $\tilde{R}_{i}x=\left[R_{i_{1}}x,R_{i_{2}}x,\cdots,R_{i_{l}}x\right]\in\mathbb{R}^{s^{2}n\times l}$ denotes the matrix formed by the *l* 3D image blocks most similar to the *i*th 3D image block *x*_{i}. Each column of $\tilde{R}_{i}x$ is a 3D image block stretched into a vector of length *s*^{2}*n*, and $R_{i_{j}}$, 1 ≤ *j* ≤ *l*, is the extraction operator of one such block. Rank(*L*_{i}) denotes the rank of the matrix *L*_{i} to be recovered, i.e., the number of its non-zero singular values, and *τ* is the weight factor of the regularization term $\sum_{i}\mathrm{Rank}\left(L_{i}\right)$. Minimizing this regularizer imposes the minimum-rank constraint on each group of similar 3D image blocks, and *ε* controls the permitted recovery error.
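The construction of the grouped matrix $\tilde{R}_{i}x$ can be sketched as follows (hypothetical block size *s* and group size *l*; brute-force Euclidean matching stands in for whatever similarity search the method actually uses):

```python
import numpy as np

rng = np.random.default_rng(1)
h, w, n, s, l = 16, 16, 4, 4, 8      # hypothetical sizes: s x s x n blocks, l matches
x = rng.random((h, w, n))

# Extract every s x s x n block and stretch it into a column of length s^2 * n.
cols = []
for r in range(h - s + 1):
    for c in range(w - s + 1):
        cols.append(x[r:r + s, c:c + s, :].reshape(-1))
blocks = np.stack(cols, axis=1)      # each column is one stretched 3D block

# For a reference block i, gather the l most similar columns (smallest
# Euclidean distance) to form the matrix R~_i x of size (s^2 * n) x l.
i = 0
d = np.linalg.norm(blocks - blocks[:, [i]], axis=0)
idx = np.argsort(d)[:l]
R_i_x = blocks[:, idx]
```

The stacked columns are near-duplicates of one another, which is exactly why `R_i_x` is expected to be (approximately) low-rank.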

To solve this problem, the constrained optimization in the above equation can be transformed into the following unconstrained optimization problem by introducing a Lagrange multiplier:

$$\left\{\hat{x},\hat{L}_{i}\right\}=\underset{x,L_{i}}{\arg\min}\;\left\|y-Hx\right\|_{2}^{2}+\eta\sum_{i}\left\|\tilde{R}_{i}x-L_{i}\right\|_{F}^{2}+\lambda\sum_{i}\mathrm{Rank}\left(L_{i}\right)$$ (36)

where *λ* and *η* are two regularization weight factors obtained by normalizing against the data-fidelity term $\left\|y-Hx\right\|_{2}^{2}$. This yields a low-rank matrix recovery model based on self-similar 3D image blocks. However, since rank minimization is an NP-hard combinatorial optimization problem, it is difficult to solve directly. Candès et al. proved that, under certain conditions, the nuclear norm of a matrix is the tightest convex approximation of its rank. Based on this theory, the following spectral-image completion model using the nuclear-norm low-rank approximation is obtained:

$$\left\{\hat{x},\hat{L}_{i}\right\}=\underset{x,L_{i}}{\arg\min}\;\left\|y-Hx\right\|_{2}^{2}+\eta\sum_{i}\left\|\tilde{R}_{i}x-L_{i}\right\|_{F}^{2}+\lambda\sum_{i}\left\|L_{i}\right\|_{*}$$ (37)

where the nuclear norm $\left\|L_{i}\right\|_{*}$ is the sum of all singular values of the matrix *L*_{i}. The minimized objective contains two unknowns, *x* and {*L*_{i}}, which are solved by alternating optimization: fix one of the two variables (the low-rank matrices {*L*_{i}} or the spectral image *x*) and solve for the other; then use the refined variable to update the one fixed in the previous step; and repeat this cycle until convergence.
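A minimal illustration of this alternating scheme on a single matrix (a toy stand-in for one block group: the observation mask plays the role of *H*, soft-thresholding of singular values plays the role of the low-rank step, and the iteration count and threshold are illustrative assumptions):

```python
import numpy as np

def soft_threshold_svals(M, tau):
    """Replace M by its low-rank surrogate: shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(5)
X_true = rng.random((15, 4)) @ rng.random((4, 10))   # rank-4 ground truth
mask = rng.random((15, 10)) > 0.3                    # ~70% entries observed
Y = np.where(mask, X_true, 0.0)                      # degraded observation

x = Y.copy()
for _ in range(50):
    L = soft_threshold_svals(x, 0.05)  # fix x, update L (low-rank step)
    x = np.where(mask, Y, L)           # fix L, update x (data-fidelity step)
```

Observed entries are kept exactly, while missing entries are repeatedly re-estimated from the current low-rank surrogate, mirroring the alternation between equations (38) and (40).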

When the low-rank matrices {*L*_{i}} are fixed, the optimization model for solving the spectral image *x* can be expressed as:

$$x^{\left(k\right)}=\underset{x}{\arg\min}\;\left\|y-Hx\right\|_{2}^{2}+\eta\sum_{i}\left\|\tilde{R}_{i}x-L_{i}^{\left(k\right)}\right\|_{F}^{2}$$ (38)

Here, $L_{i}^{\left(k\right)}$ and $x^{\left(k\right)}$ denote the low-rank matrices and the restored spectral image obtained after the *k*th alternation. This quadratic minimization problem can be solved by direct differentiation, which yields the following closed-form solution:

$$x^{\left(k\right)}={\left({H}^{T}H+\eta\sum_{i}\tilde{R}_{i}^{T}\tilde{R}_{i}\right)}^{-1}\left({H}^{T}y+\eta\sum_{i}\tilde{R}_{i}^{T}L_{i}^{\left(k\right)}\right)$$ (39)

In practice, $H^{T}H+\eta\sum_{i}\tilde{R}_{i}^{T}\tilde{R}_{i}$ is a large matrix that is difficult to invert directly. To reduce the computational complexity, the conjugate gradient algorithm is usually used to evaluate the above equation.
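A sketch of that conjugate gradient solve follows. The CG routine is generic and matrix-free; the operator below is a toy diagonal stand-in for $H^{T}H+\eta\sum_{i}\tilde{R}_{i}^{T}\tilde{R}_{i}$ (for illustration, the block-aggregation term is approximated by $\eta I$, which is an assumption, not the paper's exact operator):

```python
import numpy as np

def conjugate_gradient(A_mv, b, tol=1e-8, max_iter=200):
    """Solve A x = b for symmetric positive definite A,
    given only the matrix-vector product A_mv."""
    x = np.zeros_like(b)
    r = b - A_mv(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A_mv(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy instance of equation (39): H a binary sampling mask (so H^T H is diagonal).
rng = np.random.default_rng(2)
m, eta = 50, 0.1
mask = (rng.random(m) > 0.3).astype(float)   # diagonal of H^T H
y = rng.random(m) * mask
L_agg = rng.random(m)                        # stands in for sum_i R~_i^T L_i^(k)

A_mv = lambda v: mask * v + eta * v          # (H^T H + eta * I) v
b = mask * y + eta * L_agg                   # H^T y + eta * sum_i R~_i^T L_i^(k)
x_k = conjugate_gradient(A_mv, b)
```

Only products with the system operator are needed, so the large matrix in equation (39) never has to be formed or inverted explicitly.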

When the spectral image *x*^{(k)} is fixed, the optimization objective function for solving the low rank matrix *L*_{i} is:

$$L_{i}^{\left(k+1\right)}=\underset{L_{i}}{\arg\min}\;\sum_{i}\left\|\tilde{R}_{i}x^{\left(k\right)}-L_{i}\right\|_{F}^{2}+\frac{\lambda}{\eta}\sum_{i}\left\|L_{i}\right\|_{*}$$ (40)

It is not difficult to see that this is a standard nuclear-norm proximal problem, which can be solved by the singular value thresholding (SVT) algorithm. The specific solution process is as follows:

$$\left\{\begin{array}{l}U^{\left(k+1\right)},\Sigma^{\left(k+1/2\right)},V^{\left(k+1\right)}=\mathrm{svd}\left(\tilde{R}_{i}x^{\left(k\right)}\right)\\ \Sigma^{\left(k+1\right)}=S_{\lambda/\left(2\eta\right)}\left(\Sigma^{\left(k+1/2\right)}\right)\\ L_{i}^{\left(k+1\right)}=U^{\left(k+1\right)}\Sigma^{\left(k+1\right)}V^{\left(k+1\right)T}\end{array}\right.$$ (41)

where $\mathrm{svd}\left(\tilde{R}_{i}x^{\left(k\right)}\right)$ denotes the singular value decomposition of the matrix $\tilde{R}_{i}x^{\left(k\right)}$, and $S_{\lambda/\left(2\eta\right)}\left(\Sigma^{\left(k+1/2\right)}\right)$ is the soft-thresholding function with threshold *λ*/(2*η*).
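One SVT update from equation (41) can be sketched as follows (the input matrix and the weights λ, η are synthetic stand-ins for $\tilde{R}_{i}x^{\left(k\right)}$ and the paper's tuned parameters):

```python
import numpy as np

def svt_step(M, tau):
    """One SVT update: soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)          # soft-thresholding S_tau
    return U @ np.diag(s_thr) @ Vt

rng = np.random.default_rng(3)
# Noisy low-rank matrix standing in for R~_i x^(k)
M = rng.random((20, 3)) @ rng.random((3, 10)) \
    + 0.01 * rng.standard_normal((20, 10))
lam, eta = 0.2, 0.5
L_i = svt_step(M, lam / (2 * eta))            # threshold lambda / (2 * eta)
```

Singular values below the threshold are zeroed out, which is what drives each block group toward a low-rank estimate.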

The nuclear-norm low-rank approximation model is convex and therefore easier to solve than the original problem; however, approximating the original non-convex optimization problem with a convex function is not accurate enough. In the sparse coding problem described earlier, research results show that the non-convex *l*_{p} (0 < *p* < 1) norm model approximates the original *l*_{0} norm optimization problem more accurately than the *l*_{1} norm model does, and the recovered result is sparser. Inspired by this, a non-convex low-rank approximation method is proposed: using $\left\|\alpha_{i}\right\|_{p}$ (0 < *p* < 1) as the non-convex approximation of $\left\|\alpha_{i}\right\|_{0}$ further improves the accuracy of the model. The resulting spectral-image completion model based on non-convex low-rank approximation can be formulated as:

$$\left\{\hat{x},\hat{L}_{i}\right\}=\underset{x,L_{i}}{\arg\min}\;\left\|y-Hx\right\|_{2}^{2}+\eta\sum_{i}\left\|\tilde{R}_{i}x-L_{i}\right\|_{F}^{2}+\lambda\sum_{i}\left\|L_{i}\right\|_{p}$$ (42)

where $\left\|L_{i}\right\|_{p}$ denotes the non-convex *p*-norm of the matrix *L*_{i}, computed from its singular values, and *α*_{j} is the *j*th singular value of *L*_{i}. This model is solved in the same way as above: the spectral image *x* and the low-rank matrices {*L*_{i}} are updated alternately. The objective for *x* with {*L*_{i}} fixed and its closed-form solution are as given above, while fixing *x* and solving for *L*_{i} gives:

$$L_{i}^{\left(k+1\right)}=\underset{L_{i}}{\arg\min}\;\sum_{i}\left\|\tilde{R}_{i}x^{\left(k\right)}-L_{i}\right\|_{F}^{2}+\frac{\lambda}{\eta}\sum_{i}\left\|L_{i}\right\|_{p}$$ (43)

Combining singular value decomposition with the generalized soft-thresholding (GST) algorithm proposed by Lei Zhang et al. for solving non-convex sparse coding, the optimization model can be solved by the following equation:

$$\left\{\begin{array}{l}U^{\left(k+1\right)},\Sigma^{\left(k+1/2\right)},V^{\left(k+1\right)}=\mathrm{svd}\left(\tilde{R}_{i}x^{\left(k\right)}\right)\\ \Sigma^{\left(k+1\right)}=T^{GST}\left(\Sigma^{\left(k+1/2\right)}\right)\\ L_{i}^{\left(k+1\right)}=U^{\left(k+1\right)}\Sigma^{\left(k+1\right)}V^{\left(k+1\right)T}\end{array}\right.$$ (44)
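A sketch of the GST step in equation (44): the scalar solver below implements generalized soft-thresholding for $\min_{x}\tfrac{1}{2}(x-\sigma)^{2}+\lambda|x|^{p}$ and applies it to each singular value. The matrix, λ, *p*, and the fixed-point iteration count are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np

def gst(sigma, lam, p, iters=10):
    """Generalized soft-thresholding: min_x 0.5*(x - sigma)^2 + lam*|x|^p."""
    # Threshold below which the minimizer is exactly zero.
    tau = (2 * lam * (1 - p)) ** (1 / (2 - p)) \
          + lam * p * (2 * lam * (1 - p)) ** ((p - 1) / (2 - p))
    if abs(sigma) <= tau:
        return 0.0
    x = abs(sigma)
    for _ in range(iters):                    # fixed-point iteration
        x = abs(sigma) - lam * p * x ** (p - 1)
    return np.sign(sigma) * x

rng = np.random.default_rng(4)
M = rng.random((12, 3)) @ rng.random((3, 8))  # stands in for R~_i x^(k)
U, s, Vt = np.linalg.svd(M, full_matrices=False)
lam_eff, p = 0.1, 0.5
s_gst = np.array([gst(v, lam_eff, p) for v in s])
L_i = U @ np.diag(s_gst) @ Vt
```

Unlike the plain soft threshold in equation (41), GST shrinks large singular values less aggressively, which is what makes the *p*-norm model a closer approximation to rank minimization.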