In photoacoustic tomography, one aims to recover the initial pressure distribution inside a tissue from measurements of the induced acoustic wave on the boundary of a region enclosing the tissue. In the limited view problem, the wave measurements are given only on a part of this boundary, whereas in the full view problem they are known on the whole boundary. For the full view problem, various fast and robust reconstruction methods exist. Applied directly to limited view data, however, these methods produce severe reconstruction artifacts. One approach for reducing such artifacts is to extend the limited view data to the whole boundary and then apply existing full view reconstruction methods. In this paper, we propose an operator learning approach for constructing an operator that yields an approximate extension of the limited view data. We study the behavior of a reconstruction formula applied to the limited view data extended by our approach, and we analyze the approximation errors of the method. We also present numerical results with the proposed extension approach that support our theoretical analysis.
Photoacoustic tomography (PAT) is an emerging non-invasive imaging technique. It is based on the photoacoustic effect and has great potential for biomedical applications, including preclinical research and clinical practice. Applications include tumor angiogenesis monitoring, blood oxygenation mapping, functional brain imaging, and skin melanoma detection [49, 31, 5, 47].
The principle of PAT is as follows. When short pulses of non-ionizing electromagnetic energy are delivered into a biological (semi-transparent) tissue, part of the electromagnetic energy is absorbed. The absorbed energy leads to a nonuniform thermoelastic expansion depending on the tissue structure. This gives rise to an initial acoustic pressure distribution, which in turn is the source of an acoustic pressure wave. These waves are detected by a measurement device on the boundary of the tissue. The mathematical task in PAT is to reconstruct the spatially varying initial pressure distribution from these measurements. The values of the initial pressure distribution inside the tissue allow one to judge the directly unseen structure of the tissue, for example, whether there are abnormal formations, such as a tumor, inside the investigated tissue.
Consider the part of the boundary of a region enclosing the tissue where the wave measurements are available. This part is called the observation boundary. If the tissue is fully enclosed by the observation boundary, one speaks of the full view problem. Otherwise, if some part of the tissue boundary is not accessible, one has the so-called limited view problem (LVP). The LVP frequently arises in practice, for example in breast imaging (see, e.g., [50, 27]).
The LVP can be approached using iterative reconstruction algorithms (see, e.g., [39, 37, 23, 52, 25, 19, 42]). Although these algorithms can provide accurate reconstructions, they are computationally expensive and time consuming. Approaches for the full view problem, such as time reversal [7, 24], Fourier domain algorithms [16, 29, 51], and explicit reconstruction formulas [10, 9, 30, 28, 35], are faster than iterative reconstructions and, in addition, robust and accurate. However, when they are applied directly to limited view data, one obtains severe reconstruction artifacts.
This suggests extending the limited view data to the whole boundary and then applying efficient full view algorithms to the extended data in order to reconstruct the initial pressure. Characterizations of the range of the forward operator, which maps the initial pressure distribution to the wave data on the whole boundary of the tissue, may be used for this purpose (see, e.g., [3, 11, 1, 27] and the references therein). This knowledge is expressed by so-called range conditions. In [40, 41], some of these conditions, the so-called moment conditions, were used for the extension of the limited view data.
The data extension process based on the moment conditions is unstable, and therefore mostly low frequencies of the limited view wave data can be extended. This instability is connected with the following issue. The observation boundary defines a so-called detection region, which, for typical measurement configurations, is the convex hull of the observation boundary. It is known (see, e.g., [26, 44, 27]) that if the support of the initial pressure is contained in this detection region, then stable recovery of the initial pressure from the limited view wave data is theoretically possible. However, the data extension based on the moment conditions does not use information about the support of the initial pressure and thus does not exploit this possible stable recovery.
In this paper, we propose a stable method for extending the limited view wave data that exploits the mentioned possible stable recovery. Our method is based on the observation that, in the case of stable recovery, there exists a continuous data extension operator that maps the limited view wave data to the unknown wave data on the unobservable part of the boundary. We formally define this operator in Section 3.1. However, this operator is not explicitly known. We therefore propose to construct an approximate data extension operator using an operator learning approach inspired by methods of statistical learning theory. We suggest an operator learning procedure that uses the projection onto the linear subspace spanned by the training inputs.
Given approximately extended limited view wave data, one can employ reconstruction methods for full view wave data, such as time reversal or methods based on explicit inversion formulas. As an example, we consider an explicit reconstruction formula for this purpose. We demonstrate that the resulting reconstruction algorithm corrects most limited view reconstruction artifacts while the computational time remains low. The steps involved in the proposed reconstruction approach are illustrated in Figure 1.
The rest of the paper is organized as follows. In Section 2, we present the mathematical background of PAT, give the explicit reconstruction formula that we use, and discuss the LVP. Our operator learning approach for extending the limited view wave data is given in Section 3. In Section 4, we analyze the approximation errors of our approach, both for the unknown wave data and for the corresponding reconstructions obtained by explicit reconstruction formulas. We present numerical results in Section 5. Finally, we give a conclusion and outlook in Section 6.
2 Mathematics of PAT
Let Ω ⊂ ℝ^d be a bounded domain with a smooth boundary ∂Ω, where d denotes the spatial dimension. Further, let C_c^∞(Ω) be the set of all smooth functions that are compactly supported in Ω. In PAT, one aims to recover an unknown function f ∈ C_c^∞(Ω) from the solution of the wave equation given on parts of the boundary of Ω. Let us specify this reconstruction problem mathematically.
2.1 Reconstruction Problem
Let u denote the solution of the following initial value problem for the wave equation:

∂_t² u(x, t) = Δ_x u(x, t), (x, t) ∈ ℝ^d × (0, ∞),
u(x, 0) = f(x), x ∈ ℝ^d,
∂_t u(x, 0) = 0, x ∈ ℝ^d.

Here ∂_t denotes differentiation with respect to the second variable t, and Δ_x is the Laplacian with respect to x. The reconstruction problem in PAT then consists in recovering the unknown function f from the corresponding wave boundary data
where the data are given on the observation boundary, a subset of ∂Ω. If the observation boundary is all of ∂Ω, then (2.1) is called the full view problem; otherwise, we have the limited view problem (LVP). In this paper, we are particularly interested in the limited view case, which we consider in more detail in Section 2.3.
We denote the unobservable part of the boundary as the complement of the observation boundary in ∂Ω. We also define the corresponding restrictions of the wave data u:
Let us note that in practice, the reconstruction problem (2.1) arises in PAT in spatial dimensions two and three. The three-dimensional problem appears when the so-called point-like detectors are used (see, for example, [49, 26, 12]). When one uses linear or circular integrating detectors, then the reconstruction problem (2.1) is considered in two spatial dimensions (see [6, 15, 38, 53]).
2.2 Explicit Inversion Formula
The reconstruction problem (2.1) can be approached by various solution techniques. Among these, explicit inversion formulas of the so-called back-projection type are particularly appealing. Their numerical realization typically yields reconstruction algorithms that are accurate and robust, and at the same time faster than iterative approaches.
An inversion formula consists of an explicitly given operator that recovers the function f from the data u. Such formulas are currently known only for special domains and only for full view data, i.e., u must be given on the whole boundary ∂Ω. In this paper, we consider the formula that was first derived in [48, 30, 6]. In addition to the data u, the formula also depends on the boundary of the domain and on the reconstruction point. The structure of the formula further depends on whether the spatial dimension d is even or odd.
If d is an even integer, then
Here is a constant, denotes the outward pointing unit normal to , and
is the differentiation operator with respect to . Further, ⟨·, ·⟩ and |·| denote the standard inner product and the corresponding Euclidean norm on ℝ^d, respectively.
In the case of odd dimension d, the formula is defined as follows:
with constant .
The formula has been introduced in  for dimension , and in  for dimension . In , it has been studied for the case when Ω is a ball in arbitrary dimension. Further, in [34, 17, 18], it has been shown that for any elliptical domain Ω, the formula exactly recovers any smooth function f with support in Ω from data . In , it was shown that the same result also holds for parabolic domains Ω with . The formula in arbitrary spatial dimension on certain quadric hypersurfaces, including the parabolic ones, has been analyzed in .
It should be noted that the formula can in fact be used for any convex bounded domain Ω. In that case, however, the formula does not recover the function f exactly and introduces an approximation error. The form of this error has been analyzed in [34, 17, 18]. Numerical experiments indicate that this error is rather small for domains that are well approximated by elliptic domains. This is also suggested by microlocal analysis.
The operator can be defined for functions , where is an open set with . Define the image of under the operator as . Then it is known (see, e.g., [26, 44, 27]) that is a closed subspace of , and therefore, we will treat as a Hilbert space with the scalar product of . Moreover, the operator is bounded, and it has the bounded inverse .
In the following, we will work with functions , and we will assume that the domain Ω is such that the formula gives exact recovery of the function f from its wave data , i.e. it holds that
As we already mentioned, this is, for example, the case for circular and elliptical domains. In such a situation, it can be shown that is a continuous extension of to .
2.3 Limited View Problem
In practice, the wave data u is frequently given only on a subset of the boundary (Figure 2). This subset, called the observation boundary, defines the so-called detection region (see, for example, [37, 26]). The detection region consists of the points x such that every line through x intersects the observation boundary. If the support of f is contained in the detection region, then the function f in (2.1) can be stably recovered from the limited view data. For example, if the observation boundary is a spherical or elliptical cap, then the detection region is its convex hull.
Let us mathematically specify the stable recovery of f. Let be an open set with . The stable recovery holds for , and it is formulated in the following theorem. Note that the space is identified with the set of all functions in that vanish outside of .
The operator is well defined and bounded. Moreover, it has bounded inverse , where denotes the range of . In particular, is closed.
It suffices to show the two-sided estimate
for some constants . The claims then follow by continuous extension.
To show the left-hand estimate, we decompose , where T is larger than the diameter of Ω. Since the operator is the sum of two Fourier integral operators of order zero (see ), we have for some constant . Moreover, the explicit formulas for (see, e.g., ) imply also that , which gives the left-hand side estimate in (2.4).
The right-hand side estimate can be found in [19, Theorem 3.4]. The required visibility condition is satisfied due to the assumption that . ∎
It is worth mentioning that, despite the boundedness of the inverse, no theoretically exact direct solution methods are available. Let us note that if the support condition is not satisfied, the visibility condition in [19, Theorem 3.4] is also not valid, and the inversion of the operator is severely ill-posed (see, e.g., [19, 44, 27]).
Denote . From the boundedness of the operator , we can deduce the boundedness of the operator . We will use this for the data extension operator below.
Recall that in order to give an exact reconstruction, the formula requires the full view wave data u given on the whole boundary (see (2.3)). In spite of the stable recoverability discussed above, applying the formula to limited view data leads to serious artifacts in the reconstruction; see, e.g., the numerical results of its application on finite parabolas. The reconstruction artifacts in the case of limited view data are also discussed in [50, 13, 45, 4, 14, 36].
At the same time, using the formula to reconstruct the function f is attractive from several points of view. For example, as we already pointed out, reconstruction by a numerical realization of the formula is faster than iterative reconstruction algorithms. Another point concerns software development: having an already tested and trusted implementation of the formula, it is tempting to extend it to the LVP.
Extending the limited view data u from the observable part of the boundary to the whole boundary may thus improve the reconstruction quality of the formula. In this paper, we propose to realize this extension using an operator learning approach, which we describe in the next section.
3 Data Extension Using Operator Learning Approach
The extension of the limited view data to the whole boundary can in principle be done by the extension operator that we define in the next subsection. This operator is, however, not explicitly known, and we propose an operator learning approach to construct an approximation of it in Section 3.2. In Section 3.3, we discuss computational aspects of the proposed learned approximation of the extension operator.
3.1 Extension Operator
Let us recall the notation: the observation boundary, the corresponding detection region defined in Section 2.3, the unobservable part of the boundary, and an open set compactly contained in the detection region. Further, recall that the restricted forward operators are defined in (2.2).
The operator that realizes the extension of the limited view data to the unobservable part of the boundary maps the observable wave data of a function to its unobservable wave data. It can be written as the composition of the inverse of the limited view forward operator with the forward operator restricted to the unobservable part of the boundary. Because of this representation and the assumptions on the observation boundary and on the support of f, the extension operator is linear and continuous as a composition of linear continuous operators. Recall that the continuity (or boundedness) of these operators is discussed in Section 2.3.
With the introduced extension operator, one could extend the limited view data to the whole boundary and then apply the formula to the extended data. In this way, the disadvantages of using the formula on limited view data are eliminated. However, the extension operator is not explicitly known.
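In a discretized setting, the composition described above becomes a matrix computation. The following sketch illustrates it under loudly simplified assumptions: `W_obs` and `W_unobs` are random stand-ins for the discretized forward operators mapping the initial pressure to the wave data on the observable and unobservable boundary parts; in PAT they would come from a numerical wave equation solver, and `W_obs` is well-conditioned precisely in the stable-recovery regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretization: f has m degrees of freedom. The matrices below are
# random stand-ins for the true discretized wave forward operators.
m = 40
W_obs = rng.standard_normal((60, m))    # data on the observation boundary
W_unobs = rng.standard_normal((25, m))  # data on the unobservable part

def extend(u_obs):
    """Extension operator: invert the observable forward map (stably, via
    least squares), then apply the unobservable forward map."""
    f_rec, *_ = np.linalg.lstsq(W_obs, u_obs, rcond=None)
    return W_unobs @ f_rec

# Consistency check: extending exact observable data reproduces the
# exact unobservable data.
f = rng.standard_normal(m)
u_unobs = extend(W_obs @ f)
assert np.allclose(u_unobs, W_unobs @ f)
```

The sketch also shows why the operator is not directly usable in practice: evaluating it requires inverting the forward map, which is exactly the expensive step the learned approximation below avoids.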
3.2 Proposed Learned Extension Operator
In this paper, we propose to construct an operator that approximates the extension operator. The role of the parameter n is described below. The approximate operator must satisfy two requirements. The first concerns the approximation quality: the approximate operator must be close to the exact extension operator. The second is related to the computational effort of its numerical evaluation: this evaluation must be fast, so that evaluating the formula on the extended limited view data remains computationally efficient.
Our construction of the approximate operator is inspired by the statistical learning approach. Consider n training functions. For each training function, we can determine the corresponding wave data on the observation boundary and on the unobservable part of the boundary. By the definition of the extension operator, the data on the unobservable part is the image of the data on the observation boundary under this operator. In the context of statistical learning, the set of these input-output pairs is called a training set. For future reference, we also define the set of training inputs.
How, then, can one construct (or, using the terminology of statistical learning, learn) an approximation of the extension operator from the training set? It should be noted that many statistical learning algorithms are designed for learning a small number of scalar-valued functions. These algorithms are not applicable in our case because the object that we need to learn is an operator. Recently, the development of statistical learning methods for vector-valued functions, and also for functions with values in function spaces, i.e., operators, has begun (see, e.g., [33, 2]). For good results, these methods require a priori knowledge of the dependence between the different components of the output given by the function to be learned. This knowledge is not readily available in our case. However, as we observe below, the linear structure of the extension operator that we want to learn allows us to employ a projection operator for the learning.
For any , define the linear subspace
and let be the orthogonal projection on in . Then we define the learned approximation as follows:
Note that , and therefore, the operator composition is well-defined, and
is bounded. Further, note that for all , .
3.3 Computation of Learned Approximation
How can the learned approximation be computed using the training set? First of all, observe that since the projection maps onto the span of the training inputs, it has the following representation:
where the coefficients can be determined from the orthogonality conditions. These conditions can be written as a system of linear equations for the coefficients:
Denote the matrix corresponding to the above linear system as , i.e. the elements of are
Further, denote the vector of unknowns as , and the right-hand side as , i.e.
The matrix is the Gram matrix of the training inputs, and it is invertible if the training inputs are linearly independent. Since the forward operator is invertible, the training inputs are linearly independent if the training functions are linearly independent, and in the following we assume that this is the case.
Note that the Gram matrix does not depend on the limited view wave data that we want to extend. Therefore, its inverse can be precomputed once the set of training inputs is given. This makes the determination of the coefficients very fast.
Finally, with the coefficients from (3.3), the approximation is calculated as follows:
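The computation in this subsection can be sketched in a discretized setting as follows. The arrays `G` and `H` are hypothetical stand-ins for the training wave data on the observation boundary and on the unobservable part; in practice they would be obtained by solving the wave equation for the training functions. The linear system for the coefficients corresponds to (3.4), and, as in the text, the inverse of the Gram matrix is precomputed once per training set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training limited view data g_1..g_n as columns of G (sampled on the
# observation boundary) and their known extensions h_1..h_n as columns
# of H (on the unobservable part). Both are illustrative stand-ins.
n, p, q = 5, 50, 20
G = rng.standard_normal((p, n))
H = rng.standard_normal((q, n))

M = G.T @ G               # Gram matrix of the training inputs, cf. (3.4)
M_inv = np.linalg.inv(M)  # precomputed once, independent of the data

def learned_extension(u_obs):
    """Project u_obs onto span{g_i}, then map the coefficients to the
    unobservable part: E_n u = sum_i c_i h_i."""
    c = M_inv @ (G.T @ u_obs)
    return H @ c

# On a training input itself the projection is the identity, so the
# learned operator reproduces the known extension exactly.
assert np.allclose(learned_extension(G[:, 2]), H[:, 2])
```

Precomputing `M_inv` reflects the design choice discussed above: the expensive part depends only on the training set, while each new data extension costs just two small matrix-vector products.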
4 Approximate Reconstructions and Their Error Analysis
To obtain an approximate reconstruction of f from the limited view data, we can now proceed as follows. First, we extend the limited view data to the whole boundary using the learned extension operator:
Then we apply the formula to this extended wave data in order to obtain an approximate reconstruction :
Note that applying the formula directly corresponds to extending the limited view data by zero on the unobservable part of the boundary. As we already discussed, the corresponding approximate reconstruction contains significant errors, and it is desirable to obtain better reconstructions of f. Additionally, one may desire that the reconstruction improves as n increases.
In the following theorem, we estimate the L²-error of the approximation of the unobservable data and of the approximation of f by the corresponding reconstruction. From the derived estimates, we see that the above aims can be achieved if the training functions are chosen appropriately.
Let a set of linearly independent training functions be given. Define the corresponding training limited view wave data, the linear subspace in (3.1), and the learned extension operator in (3.2). Consider a function f, its limited view wave data, and its approximation defined in (4.1). Then the following L²-error estimate for the unobservable data holds:
If, additionally, the domain Ω is such that (2.3) holds, then we have the following L²-error estimate for the reconstruction:
We first prove (4.2). From the definition of the operator , we have
From the properties of the projection operators, we also have
For an element , there are unique constants , , such that
and therefore, there exists an element such that . Using this fact, we can estimate
Since for , it follows that
As we see from Theorem 2, the L²-error estimates for our learning procedure depend on the minimal distance from the unknown function f to the linear subspace spanned by the training functions. This gives an indication for the choice of the training functions: one should choose them such that the unknown function f can be well approximated by their linear combinations.
Estimates (4.2) and (4.3) also allow us to state conditions for exact approximation by our learning procedure and for the convergence of the learned approximation as the number of training functions n goes to infinity. We present these conditions in the following two corollaries.
If , then the learned approximation and the reconstruction are exact, i.e.
If , then the learned approximation and the reconstruction converge respectively to and f as , i.e.
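The convergence condition of Corollary 2 can be illustrated with a small one-dimensional experiment: as the subspace spanned by indicator training functions is refined, the projection distance of a smooth function to the subspace, which is the factor appearing in estimate (4.2), decreases. The profile `f`, the grid, and the subinterval counts below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

x = np.linspace(0, 1, 512)
f = np.exp(-40 * (x - 0.4) ** 2)   # a smooth "unknown" pressure profile

def projection_error(n):
    """Relative distance of f to the span of indicators of n equal
    subintervals; the orthogonal projection is the piecewise mean."""
    err = f.copy()
    for k in range(n):
        lo, hi = k / n, (k + 1) / n
        idx = (x >= lo) & (x < hi) if k < n - 1 else (x >= lo)
        err[idx] -= f[idx].mean()   # subtract the projection on each cell
    return np.linalg.norm(err) / np.linalg.norm(f)

# Refining the partition shrinks the distance to the training subspace.
errors = [projection_error(n) for n in (4, 16, 64)]
assert errors[0] > errors[1] > errors[2]
```

This is exactly the mechanism behind the corollary: the error of the learned extension is controlled by this shrinking distance, not by a fixed data-dependent quantity.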
Comparing the error estimates (4.2) and (4.3) for the learned approximations with the error estimates (4.9) and (4.10) for the approximations using zero extension of the limited view wave data, one sees that these estimates differ in the following factors:
correspondingly for learned approximations with and approximation using zero extension.
The factors (4.11) can be seen as indicators of the expected approximation quality of the considered algorithms. For a fixed non-zero function f, the factor corresponding to zero extension is a fixed non-zero value, while the factor corresponding to the learned approximation can be zero, or can be made arbitrarily small; see Corollaries 1 and 2. Therefore, the approximation quality of the learned approximations is expected to be better than that of the approximations using zero extension of the data. This expectation is confirmed by the numerical results in the next section. In fact, one can show (see Remark 2 below) that the factor for the learned approximation is always less than or equal to the factor for zero extension, and that strict inequality holds under rather mild conditions on the function f and the training functions. These conditions can be expected to hold in practice.
5 Numerical Results
In this section, we present results of the numerical realization of the proposed operator learning approach.
We consider the spatial dimension d = 2, and we take the elliptical domain
with , . We use the following parametrization of the boundary:
and we assume that the unobservable part of the boundary is (see Figure 3)
Thus, approximately 19% of the angular values are missing.
We work with the function f presented in Figure 3 (a). Its numerical full view wave boundary data is given in Figure 3 (b), and we use the corresponding limited view wave boundary data. The observation boundary is discretized such that the distance between two consecutive points lies in the interval . We take the time step size 0.01.
We further assume that we know a rectangular region
that contains the support of f (Figure 4 (top and bottom)). We use this region K for defining the training functions. Namely, we consider partitions of the region K into squares determined by the width and the height of K (see Figure 4 (middle)). We then define each training function as the indicator function of the corresponding square. We take the number of training functions in the form n = n1 · n2, where n1 and n2 are the numbers of partitioning intervals along the two coordinate directions. We present numerical results for several values of n.
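A minimal sketch of such indicator training functions follows; the concrete coordinates of K and the partition numbers are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

# Rectangular region K = [x0, x0 + width] x [y0, y0 + height] partitioned
# into n = n1 * n2 cells; each training function is the indicator of a cell.
x0, y0, width, height = -0.5, -0.3, 1.0, 0.6   # illustrative geometry
n1, n2 = 8, 4

def training_function(i, j):
    """Indicator of cell (i, j) of the n1-by-n2 partition of K."""
    def f(x, y):
        in_x = (x0 + i * width / n1 <= x) & (x < x0 + (i + 1) * width / n1)
        in_y = (y0 + j * height / n2 <= y) & (y < y0 + (j + 1) * height / n2)
        return (in_x & in_y).astype(float)
    return f

# Each point of the interior of K lies in exactly one half-open cell,
# so the indicators sum to one there.
xs, ys = np.meshgrid(np.linspace(-0.45, 0.25, 7), np.linspace(-0.25, 0.25, 5))
total = sum(training_function(i, j)(xs, ys) for i in range(n1) for j in range(n2))
assert np.allclose(total, 1.0)
```

Because the cells are disjoint, these training functions are automatically linearly independent, which is the assumption needed for the invertibility of the Gram matrix in (3.4).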
Let us note that we use the rectangular region K for illustration purposes. If a region containing the support of f is not known, one may instead consider squares filling the whole detection region. Further, note that other types of basis functions can be used in a similar manner. Kaiser–Bessel functions, which are frequently used in computed tomography (see, e.g., [32, 46, 43]), would be another reasonable choice.
The extended limited view data obtained with the learned extension operator for the considered values of n are presented in Figure 5. We observe that as n increases, the extended data approaches the full view data u in Figure 3 (b). Note that the chosen training functions satisfy the condition of Corollary 2. Therefore, the convergence of the extended data to the full view data u is in agreement with our theoretical analysis.
The reconstructions using the extended data are presented in Figure 6 (second and third rows). For comparison purposes, we also present the reconstruction using the full view wave boundary data u and the reconstruction using the zero extended data (Figure 6 (first row)). We evaluate the reconstructions at the points of the discrete set
with . We also consider the discrete L²-error of a reconstruction, defined as follows:
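Since the display of the discrete error was lost in extraction, the following sketch shows one standard choice, the relative discrete L²-error on the evaluation grid; the exact normalization used in the paper is an assumption here.

```python
import numpy as np

def rel_l2_error(f_rec, f_true):
    """Relative discrete L2-error between a reconstruction and the ground
    truth, both sampled on the same grid."""
    return np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)

# Small illustrative check: ||(0,0,0,3)|| / ||(0,1,2,2)|| = 3 / 3 = 1.
f_true = np.array([0.0, 1.0, 2.0, 2.0])
f_rec = np.array([0.0, 1.0, 2.0, 5.0])
assert np.isclose(rel_l2_error(f_rec, f_true), 1.0)
```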
Let us discuss the reconstructions in Figure 6. First of all, as expected, one observes strong artifacts in the reconstruction from the zero extended data, especially outside the detection region. These artifacts are considerably corrected in the reconstructions from the learned extension, and as the number of training functions n increases, the artifacts become weaker, so that the reconstruction becomes very similar to the full view reconstruction. This observation is also reflected in the L²-errors presented in Figure 7. Note that the full view reconstruction differs from f due to the discretization error of the numerical realization of the formula. Thus, as for the extended data, the convergence of the reconstructions to f is in agreement with Corollary 2.
Finally, in Table 1, we present the calculation times for the parts involved in the proposed reconstruction approach. Our numerical experiments were performed with MATLAB version R2015b on a Lenovo E31 PC with four Intel(R) Xeon(R) 3.20 GHz CPUs. We see that the most time consuming part is the calculation of the Gram matrix, which is used for solving the system of linear equations (3.4). Here, the computation of the training wave data is the most expensive step. However, for a given set of training functions, the training data and the matrix have to be calculated only once, prior to the actual image reconstruction.
The calculation of the learned data extension is fast. In particular, for the largest considered number of training functions, the calculation time for the extension is close to the calculation time for the formula itself. Thus, our proposed operator learning approach fulfills the requirements stated at the beginning of Section 3.2, namely the closeness of the approximation and the fast evaluation.
6 Conclusion and Outlook
In this paper, we demonstrated that an approximate extension of the limited view data in PAT can be realized using an operator learning approach. Our numerical results show that the learned extension of the limited view data is possible with good approximation quality and low computational cost. Good approximation quality is achieved especially for the largest number of considered training functions. This makes the proposed learned data extension attractive for algorithms that are designed for full view data. As an example, we demonstrated the satisfactory performance of a reconstruction formula combined with the proposed learned data extension.
It could be interesting to study the behavior of the proposed learned data extension without knowledge of a rectangular region K containing the support of f. As we already noted, in this case one could consider partitions of the whole detection region. Other training functions, such as generalized Kaiser–Bessel functions (see, e.g., [32, 46, 43]), can also be tried.
It is appealing to compare the reconstruction quality and computation time of the proposed reconstruction approach with those of iterative reconstruction algorithms. Implementing the proposed learned extension of the limited view data in three spatial dimensions is another interesting direction for future research. In this case, the choice of generalized Kaiser–Bessel functions as training functions is particularly convenient because their wave data are known analytically. This makes the determination of the entries of the Gram matrix fast. Also, the system of linear equations (3.4) can be solved either using iterative methods, such as the conjugate gradient method, or by precomputing an approximate inverse of the Gram matrix.
Finally, it seems worthwhile to examine applications of the presented operator learning approach to limited data problems in other tomographic modalities, such as sparse angle or region of interest computed tomography.
Funding source: Austrian Science Fund
Award Identifier / Grant number: P 29514-N32
Funding statement: The authors gratefully acknowledge the support of the Tyrolean Science Fund (TWF). The second author gratefully acknowledges the support of the Austrian Science Fund (FWF), project P 29514-N32.
Sergiy Pereverzyev Jr. would like to thank Alessandro Verri, Vera Kurkova, Linh Nguyen, Jürgen Frikel, Xin Guo, Ding-Xuan Zhou, and members of Ding-Xuan Zhou’s group at the City University of Hong Kong for discussions concerning this work.
 M. Agranovsky, D. Finch and P. Kuchment, Range conditions for a spherical mean transform, Inverse Probl. Imaging 3 (2009), no. 3, 373–382. Search in Google Scholar
 M. A. Alvarez, L. Rosasco and N. D. Lawrence, Kernels for vector-valued functions: A review, Found. Trends Mach. Learn. 4 (2012), no. 3, 195–266. Search in Google Scholar
 G. Ambartsoumian and P. Kuchment, A range description for the planar circular Radon transform, SIAM J. Math. Anal. 38 (2006), no. 2, 681–692. Search in Google Scholar
 L. L. Barannyk, J. Frikel and L. V. Nguyen, On artifacts in limited data spherical Radon transform: Curved observation surface, Inverse Problems 32 (2016), no. 1, Article ID 015012. Search in Google Scholar
 P. Beard, Biomedical photoacoustic imaging, Interf. Focus 1 (2011), no. 4, 602–631. Search in Google Scholar
 P. Burgholzer, J. Bauer-Marschallinger, H. Grün, M. Haltmeier and G. Paltauf, Temporal back-projection algorithms for photoacoustic tomography with integrating line detectors, Inverse Problems 23 (2007), no. 6, S65–S80. Search in Google Scholar
 P. Burgholzer, G. J. Matt, M. Haltmeier and G. Paltauf, Exact and approximate imaging methods for photoacoustic tomography using an arbitrary detection surface, Phys. Rev. E 75 (2007), no. 4, Article ID 046706. Search in Google Scholar
 R. Courant and D. Hilbert, Methods of Mathematical Physics. Vol. II, Interscience, New York 1962. Search in Google Scholar
D. Finch, M. Haltmeier and R. Rakesh, Inversion of spherical means and the wave equation in even dimensions, SIAM J. Appl. Math. 68 (2007), no. 2, 392–412.
D. Finch, S. K. Patch and R. Rakesh, Determining a function from its mean values over a family of spheres, SIAM J. Math. Anal. 35 (2004), no. 5, 1213–1240.
D. Finch and R. Rakesh, The range of the spherical mean value operator for functions supported in a ball, Inverse Problems 22 (2006), no. 3, 923–938.
D. Finch and R. Rakesh, Recovering a function from its spherical mean values in two and three dimensions, Photoacoustic Imaging and Spectroscopy, CRC Press, Boca Raton (2009), 77–88.
J. Frikel and E. T. Quinto, Characterization and reduction of artifacts in limited angle tomography, Inverse Problems 29 (2013), no. 12, Article ID 125007.
J. Frikel and E. T. Quinto, Artifacts in incomplete data tomography with applications to photoacoustic tomography and sonar, SIAM J. Appl. Math. 75 (2015), no. 2, 703–725.
H. Grün, T. Berer, P. Burgholzer, R. Nuster and G. Paltauf, Three-dimensional photoacoustic imaging using fiber-based line detectors, J. Biomed. Opt. 15 (2010), no. 2, Article ID 021306.
M. Haltmeier, Frequency domain reconstruction for photo- and thermoacoustic tomography with line detectors, Math. Models Methods Appl. Sci. 19 (2009), no. 2, 283–306.
M. Haltmeier, Inversion of circular means and the wave equation on convex planar domains, Comput. Math. Appl. 65 (2013), no. 7, 1025–1036.
M. Haltmeier, Universal inversion formulas for recovering a function from spherical means, SIAM J. Math. Anal. 46 (2014), no. 1, 214–232.
M. Haltmeier and L. V. Nguyen, Analysis of iterative methods in photoacoustic tomography with variable sound speed, SIAM J. Imaging Sci. 10 (2017), no. 2, 751–781.
M. Haltmeier and S. Pereverzyev Jr., Recovering a function from circular means or wave data on the boundary of parabolic domains, SIAM J. Imaging Sci. 8 (2015), no. 1, 592–610.
M. Haltmeier and S. Pereverzyev Jr., The universal back-projection formula for spherical means and the wave equation on certain quadric hypersurfaces, J. Math. Anal. Appl. 429 (2015), no. 1, 366–382.
T. Hastie, R. Tibshirani and J. Friedman, The Elements of Statistical Learning, 2nd ed., Springer Ser. Statist., Springer, New York, 2009.
G. T. Herman, Fundamentals of Computerized Tomography. Image Reconstruction from Projections, 2nd ed., Adv. Pattern Recognit., Springer, Dordrecht, 2009.
Y. Hristova, P. Kuchment and L. Nguyen, Reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media, Inverse Problems 24 (2008), no. 5, Article ID 055006.
C. Huang, K. Wang, L. Nie, L. V. Wang and M. A. Anastasio, Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media, IEEE Trans. Med. Imag. 32 (2013), no. 6, 1097–1110.
P. Kuchment and L. Kunyansky, Mathematics of thermoacoustic tomography, European J. Appl. Math. 19 (2008), no. 2, 191–224.
P. Kuchment and L. Kunyansky, Mathematics of photoacoustic and thermoacoustic tomography, Handbook of Mathematical Methods in Imaging. Vol. 1, 2, 3, Springer, New York (2015), 1117–1167.
L. Kunyansky, Reconstruction of a function from its spherical (circular) means with the centers lying on the surface of certain polygons and polyhedra, Inverse Problems 27 (2011), no. 2, Article ID 025012.
L. A. Kunyansky, A series solution and a fast algorithm for the inversion of the spherical mean Radon transform, Inverse Problems 23 (2007), no. 6, S11–S20.
L. A. Kunyansky, Explicit inversion formulae for the spherical mean Radon transform, Inverse Problems 23 (2007), no. 1, 373–383.
C. Li and L. V. Wang, Photoacoustic tomography and sensing in biomedicine, Phys. Med. Biol. 54 (2009), no. 19, R59–R97.
S. Matej and R. M. Lewitt, Practical considerations for 3-D image reconstruction using spherically symmetric volume elements, IEEE Trans. Med. Imag. 15 (1996), no. 1, 68–78.
C. A. Micchelli and M. Pontil, On learning vector-valued functions, Neural Comput. 17 (2005), no. 1, 177–204.
F. Natterer, Photo-acoustic inversion in convex domains, Inverse Probl. Imaging 6 (2012), no. 2, 315–320.
L. V. Nguyen, On a reconstruction formula for spherical Radon transform: A microlocal analytic point of view, Anal. Math. Phys. 4 (2014), no. 3, 199–220.
L. V. Nguyen, On artifacts in limited data spherical Radon transform: Flat observation surfaces, SIAM J. Math. Anal. 47 (2015), no. 4, 2984–3004.
G. Paltauf, R. Nuster, M. Haltmeier and P. Burgholzer, Experimental evaluation of reconstruction algorithms for limited view photoacoustic tomography with line detectors, Inverse Problems 23 (2007), no. 6, S81–S94.
G. Paltauf, R. Nuster, M. Haltmeier and P. Burgholzer, Photoacoustic tomography using a Mach–Zehnder interferometer as an acoustic line detector, Appl. Opt. 46 (2007), no. 16, 3352–3358.
G. Paltauf, J. A. Viator, S. A. Prahl and S. L. Jacques, Iterative reconstruction algorithm for optoacoustic imaging, J. Acoust. Soc. Am. 112 (2002), no. 4, 1536–1544.
S. K. Patch, Thermoacoustic tomography – Consistency conditions and the partial scan problem, Phys. Med. Biol. 49 (2004), 2305–2315.
S. K. Patch, Photoacoustic and thermoacoustic tomography: Consistency conditions and the partial scan problem, Photoacoustic Imaging and Spectroscopy, CRC Press, Boca Raton (2009), 103–116.
A. Rosenthal, V. Ntziachristos and D. Razansky, Acoustic inversion in optoacoustic tomography: A review, Curr. Med. Imag. Rev. 9 (2013), no. 4, 318–336.
J. Schwab, S. Pereverzyev Jr. and M. Haltmeier, A Galerkin least squares approach for photoacoustic tomography, SIAM J. Numer. Anal. 56 (2018), no. 1, 160–184.
P. Stefanov and G. Uhlmann, Thermoacoustic tomography with variable sound speed, Inverse Problems 25 (2009), no. 7, Article ID 075011.
P. Stefanov and G. Uhlmann, Is a curved flight path in SAR better than a straight one?, SIAM J. Appl. Math. 73 (2013), no. 4, 1596–1612.
K. Wang, R. W. Schoonover, R. Su, A. Oraevsky and M. A. Anastasio, Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions, IEEE Trans. Med. Imag. 33 (2014), no. 5, 1180–1193.
J. Xia, J. Yao and L. V. Wang, Photoacoustic tomography: Principles and advances, Prog. Electromagn. Res. 147 (2014), 1–22.
M. Xu and L. V. Wang, Universal back-projection algorithm for photoacoustic computed tomography, Phys. Rev. E 71 (2005), no. 1, Article ID 016706.
M. Xu and L. V. Wang, Photoacoustic imaging in biomedicine, Rev. Sci. Instrum. 77 (2006), no. 4, Article ID 041101.
Y. Xu, L. V. Wang, G. Ambartsoumian and P. Kuchment, Reconstructions in limited-view thermoacoustic tomography, Med. Phys. 31 (2004), no. 4, 724–733.
Y. Xu, M. Xu and L. V. Wang, Exact frequency-domain reconstruction for thermoacoustic tomography – II: Cylindrical geometry, IEEE Trans. Med. Imag. 21 (2002), 829–833.
L. Yao and H. Jiang, Photoacoustic image reconstruction from few-detector and limited-angle data, Biomed. Opt. Express 2 (2011), no. 9, 2649–2654.
G. Zangerl, O. Scherzer and M. Haltmeier, Exact series reconstruction in photoacoustic tomography with circular integrating detectors, Commun. Math. Sci. 7 (2009), no. 3, 665–678.
© 2020 Walter de Gruyter GmbH, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 Public License.