Simultaneous Reconstruction of Speed of Sound and Nonlinearity Parameter in a Paraxial Model of Vibro-Acoustography in Frequency Domain

Abstract: In this paper, we consider the inverse problem of vibro-acoustography, a technique for enhancing ultrasound imaging by making use of nonlinear effects. It amounts to determining two spatially variable coefficients in a system of PDEs describing propagation of two directed sound beams and the wave resulting from their nonlinear interaction. To justify the use of Newton’s method for solving this inverse problem, on one hand, we verify well-definedness and differentiability of the forward operator corresponding to two versions of the PDE model; on the other hand, we consider an all-at-once formulation of the inverse problem and prove convergence of Newton’s method for its solution.


Introduction
Recently, several approaches for enhancing ultrasound by means of nonlinear effects have been proposed. In this paper, we consider vibro-acoustography, which was originally proposed in [4,5] to achieve enhanced resolution by high frequency waves while avoiding the drawbacks of scattering from small inclusions and of stronger attenuation at higher frequencies. The experiment for image acquisition basically consists of three parts. (i) Two ultrasound beams of high and slightly different frequencies ω 1 and ω 2 are excited at two parts Σ 1 , Σ 2 of an array of piezoelectric transducers (emitters). (ii) They interact nonlinearly in a focus region, thus exciting a wave that essentially propagates at the difference frequency ω 1 − ω 2 (which allows one to avoid strong scattering and attenuation). (iii) The latter is eventually measured by a receiver array Γ. We refer to, e.g., [11,20] for a graphical illustration.
The fact that the value of the nonlinearity parameter B/A governing the interaction depends on the tissue properties and thus varies in space, B/A = B/A(x), yields a means of imaging. Likewise, the speed of sound typically exhibits spatial variation, c = c(x). A modeling and simulation framework for this imaging technology has been devised in [17,18].
The aim of this paper is to study the inverse problem of identifying both B/A(x) and c(x) in an appropriate PDE model, in which these two quantities appear as coefficients. Some preliminary results on this parameter identification problem have been obtained in [11], however without modeling the excitation waves as sound beams. The latter (along with an analysis of this model with variable coefficients) is one of the main novel contributions of this paper. Another one is a convergence analysis of a Newton type method for solving the inverse problem.
The model we will use here can be derived from a general wave equation (1.1) for an acoustic velocity potential ϕ by means of the asymptotic ansatz

φ(t, x, ε) = ε(φ 1 (t, x) + φ 2 (t, x)) + ε 2 ψ(t, x),   (1.2)

with small ε > 0, a paraxial approximation (to take into account the fact that propagation is concentrated along a certain given axis), and a transformation to frequency domain. In (1.1), c is the speed of sound and B/A the nonlinearity parameter.
First of all, plugging (1.2) into (1.1) yields (1.3). Considering only terms up to order ε 2 and rearranging the resulting equation leads to an expression containing the second order (in space and time) wave operator ∂ 2 t − c 2 Δ as well as further nonlinear terms; separating contributions that are asymptotic to different powers of ε (namely ε 1 and ε 2 in our case) allows one to split (1.3) into separate equations for φ 1 , φ 2 and ψ. However, these ε asymptotics interact with another scaling that results from the geometry of wave propagation, as follows.
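The mechanism behind the ε 2 equation can be illustrated numerically: the quadratic interaction of the two primary waves contains a component at the difference frequency. A minimal sketch (our illustration, not part of the paper's model; the frequency values are assumed for demonstration only):

```python
import math

# Illustrative check: the quadratic nonlinearity acting on the sum of two
# primary waves produces a difference-frequency component, since
# cos(w1*t)*cos(w2*t) = 0.5*(cos((w1-w2)*t) + cos((w1+w2)*t)).
w1, w2 = 2.0 * math.pi * 1.00e6, 2.0 * math.pi * 0.95e6  # assumed close high frequencies
wd = w1 - w2                                              # difference frequency

def quadratic_interaction(t):
    """Product term arising at order eps^2 from (phi_1 + phi_2)^2."""
    return math.cos(w1 * t) * math.cos(w2 * t)

def low_and_high(t):
    """Sum of difference- and sum-frequency components."""
    return 0.5 * (math.cos(wd * t) + math.cos((w1 + w2) * t))

# The two expressions agree pointwise: the interaction excites a wave at
# the (much lower) difference frequency, as exploited in step (ii) above.
max_err = max(abs(quadratic_interaction(t * 1e-8) - low_and_high(t * 1e-8))
              for t in range(1000))
print(max_err < 1e-12)
```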

Paraxial Approximation
We assume that the direction of propagation is the x 1 -axis. Then, for the original variables (t, x) = (t, x 1 , x′), where x′ = (x 2 , . . . , x d ) (and marking functions of these variables by a check), we perform the paraxial change of variables (1.4) with Ť = Ť(x 1 , x′), T = T(z, y) chosen accordingly, where ε̃ ≪ 1, so ε̃ ≪ √ε̃. In general, ε̃ can be different from the ε that was introduced for the second order approximation in (1.2), but for the model to make sense, the relation ε ≤ ε̃ ≤ √ε should hold; cf. [20]. Concerning well-definedness of Ť, we note that it is actually the distance of x from the boundary {0} × Ωy in the Riemannian metric determined by 1/c 2 ; cf., e.g., [3, Section 7.2.3]. For the derivatives, this change of variables yields (1.5). Here ∇ x = (∂ x 1 , ∇ x′ ) and (∂ z , ∇ y ) are the nabla operators with respect to the original and paraxial variables, respectively; likewise for Δ x , etc. Moreover, in (1.4) and (1.5), we have used the fact that Ť, being a distance function in the metric determined by 1/c 2 , satisfies the eikonal equation c |∇ x Ť| = 1.
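The interpretation of Ť as a distance in the metric determined by 1/c 2 can be made concrete numerically: for a speed of sound varying only with depth x 1 , it reduces to a line integral of the slowness 1/c along the x 1 -axis. A minimal sketch (ours, with assumed sound speeds, not the paper's configuration):

```python
# Numerical sketch: travel time from the plane {x1 = 0} as the line
# integral of the slowness 1/c along the x1-axis (depth-dependent c).

def travel_time(c, x1, n=100000):
    """Midpoint-rule approximation of integral_0^{x1} dxi / c(xi)."""
    h = x1 / n
    return sum(h / c((i + 0.5) * h) for i in range(n))

# Sanity check: constant speed c0 gives the expected value x1 / c0.
c0 = 1500.0  # assumed reference speed in m/s
assert abs(travel_time(lambda x: c0, 0.03) - 0.03 / c0) < 1e-12

# Layered medium (assumed values): the travel time lies between the
# travel times of the fastest and slowest layers.
def c_layered(x):
    return 1400.0 if x < 0.015 else 1600.0

T = travel_time(c_layered, 0.03)
print(0.03 / 1600.0 < T < 0.03 / 1400.0)
```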
Altogether, using e ıω d t = e ıω d x 1 /c e ıω d τ , we end up with the transformation (1.6), in particular of the wave operator, and likewise for φ k , φ̌ k . Due to the boundary condition Ť(0, x′) = 0, x′ ∈ Ωy , we also have ∇ y T(0, x′) = 0 and Δ y T(0, x′) = 0, which allows us to neglect the terms containing ∇ y T and Δ y T. The outcome of the nonlinear term is more complicated and depends on the interplay between the asymptotic ansatz (1.2), governed by ε, and the paraxial approximation (1.4), governed by ε̃. Depending on the relation between ε and ε̃, this leads to several different models; see [20] for more details.

The Considered Models
The inverse problem of combined nonlinearity imaging and speed of sound reconstruction will be considered for one of these models, namely (1.7) with boundary conditions (1.8) and Ω = (0, L) × Ωy . The coefficients we are interested in are related to the speed of sound and the nonlinearity parameter. Due to the space dependence of c and B/A, these coefficients vary in space and thus depend on the paraxial variables: s = s(z, y), η = η(z, y). In (1.8), h k , k = 1, 2, models the excitation on part of the boundary, and the remaining boundary conditions are supposed to prevent spurious reflections on the boundary of the computational domain Ω.
Remark 1. To model excitation by an array of piezoelectric transducers, continuity of the normal velocity over the transducer-fluid interface would induce Neumann boundary conditions on the velocity potential. In our setting, these would read as −∂ z φ k (0, ⋅ ) = h k in Ωy . In an analysis analogous to the one we provide here, this could be tackled by differentiating φ k with respect to z and considering the corresponding initial value problems for ∂ z φ k . The Dirichlet conditions used here can be justified by surface force-pressure continuity F = [σ] ⋅ ν = pν together with the relation p k = ρ ∂ t ϕ k derived from momentum balance. We refer to, e.g., [6, equations (12), (13)] and the references therein for the proper choice of interface conditions in structure-acoustics coupling.
Well-posedness of the forward problem (1.7)-(1.8) will be studied in Section 2.
From the point of view of the outgoing wave described by ψ, the parameter ε̃ is not necessarily small, since propagation of sound is non-directed for the ψ field. Therefore, alternatively to (1.7), (1.8), we will consider (1.9) for ψ = P −1 ψ̌ , η = e −ıω d T P −1 η̌ (where P is the paraxial transform of variables; cf. (1.5)), with boundary conditions (1.10); see Section 2.2 for its well-posedness analysis. Due to (1.5), the coefficients in the boundary conditions (1.8), (1.10) are related to each other. Thus, with the typical choice σ = s, in the propagation direction z ‖ x 1 , the absorbing boundary condition in the original coordinates becomes a Neumann condition in paraxial coordinates. We wish to mention that the paraxial approximation is also made use of in the derivation of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation (see [23] and also, e.g., [21] for its analysis). Note, however, that, in our case, the quadratic nonlinearity is decoupled and appears as a source term for the ψ equation, whereas (1.11) is a nonlinear (more precisely, quasilinear) equation, whose expansion in frequency domain would lead to an infinite system of space-dependent PDEs, similarly to [9]. We consider F as an operator F : X s × X η → Y with the parameter space X := X s × X η and the data space Y yet to be specified.

The Inverse Problem
In Section 3, we will alternatively consider an all-at-once formulation of this problem that keeps the parameters and the state as simultaneous unknowns, thus avoiding the use of a parameter-to-state map S.
Identification of the nonlinearity coefficient η in time domain models of nonlinear acoustics has been studied, e.g., in [1,8,12,13,15,22], and in frequency domain in [14]; in particular, [15] also considers simultaneous identification of s and η. However, the physical background and therefore also the model differ from the ones we consider here.
A preliminary analysis of the inverse problem of ultrasound vibro-acoustography with models similar to those considered here, but still without the paraxial approximation, can be found in [11]. In [20], models using a paraxial approximation are derived in time and frequency domain, and the inverse problem of reconstructing η is studied.
The plan of this paper is as follows. Section 2 is devoted to the forward problem of solving the PDE model for given coefficients. We prove well-definedness and differentiability of the parameter-to-state map (1.13) for both options (1.7) and (1.9), in order to justify the application of Newton type methods for the inverse problem. These methods are discussed in Section 3, where we establish convergence of a frozen Newton method for reconstructing η and s in (1.9). We point to the fact that proving convergence of iterative regularization methods is notoriously difficult for inverse problems with boundary observations, as is typical in tomographic imaging. This is due to the fact that convergence criteria, such as the so-called tangential cone condition, can usually not be verified in the situation of restricted observations. We therefore work with a range invariance condition instead, which can indeed be rigorously verified here and allows us to conclude convergence.
An implementation and numerical experiments with the methods analyzed here are the subject of future work. Some numerical results on the simultaneous reconstruction of s and η in the Westervelt equation (thus related to this work, but using a time domain model with a single PDE rather than a system), likewise based on a Newton type iteration, can be found in [15].

Well-Posedness of the Forward Problem
In this section, we will prove well-definedness and differentiability of the parameter-to-state map for the systems (1.7), (1.8) and (1.9), (1.10), respectively.
In doing so, we put particular emphasis on monitoring the smoothness assumptions on the coefficients, which we aim to keep minimal, in view of the fact that, in practice, s and η tend to be only piecewise smooth, and also because the inverse problem becomes more ill-posed the higher the order of the norm on the preimage space is chosen.
To justify the use of Newton's method for solving the inverse problem (1.12), we will also prove differentiability of the parameter-to-state map (1.13).

Well-Posedness of Paraxial Wave Propagation with Variable Coefficients
Consider the PDEs (1.7) with boundary conditions (1.8). The equations take the form of a (perturbed) Schrödinger equation; hence well-known techniques for that equation can be adopted here (see, e.g., [16] and the references therein). Since we here require estimates that are explicit in terms of appropriate norms of s and η, we will first of all provide some energy estimates for solutions of the linear variable coefficient problems (2.1) and (2.2). Here we assume all coefficients s, σ 0 , σ L , σ, b to be real valued and positive. The following two identities will be used repeatedly. Moreover, for (2.2), we will assume a Poincaré-Friedrichs type estimate (2.3) on the domain Ωy to hold for some C 0 , C 1 > 0.
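A Poincaré-Friedrichs type estimate of this kind can be checked numerically in a simple one-dimensional setting, where the optimal constant on (0, 1) with homogeneous Dirichlet conditions is 1/π. A toy verification (ours, not the paper's domain Ωy or estimate (2.3)):

```python
import math

# Toy check of a Poincare-Friedrichs inequality: for u with
# u(0) = u(1) = 0 one has ||u||_{L2} <= C ||u'||_{L2} with C = 1/pi,
# attained by the first Dirichlet eigenfunction sin(pi*x).

def l2_norm(f, n=20000):
    """Midpoint-rule approximation of the L2(0,1) norm of f."""
    h = 1.0 / n
    return math.sqrt(sum(f((i + 0.5) * h) ** 2 * h for i in range(n)))

for k in (1, 2, 5):
    u = lambda x, k=k: math.sin(k * math.pi * x)
    du = lambda x, k=k: k * math.pi * math.cos(k * math.pi * x)
    ratio = l2_norm(u) / l2_norm(du)
    # ratio equals 1/(k*pi) <= 1/pi, so the inequality holds with C = 1/pi
    assert ratio <= 1.0 / math.pi + 1e-9
print("Poincare-Friedrichs constant 1/pi verified on sine modes")
```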
Proposition 1. Under the above assumptions on the coefficients, a solution of (2.1) exists, is unique and satisfies an energy estimate that is explicit in ω. If, in addition, (2.3) holds and relative smallness conditions on s, ∂ z s, ω are satisfied with c, λ as in (2.3), then a solution of (2.2) exists, is unique and satisfies a corresponding estimate with some constant C > 0 independent of ω.
Proof. As for standard evolutionary PDEs, the proof is based on a Galerkin approximation by eigenfunctions of the negative Laplacian, energy estimates, and taking weak limits; cf., e.g., [3]. We will here focus on the energy estimates, since the other steps of the proof are relatively straightforward for the linear problems under consideration.
Testing in this way first of all (2.1) with u and taking the real and imaginary parts, we obtain, for all z ∈ (0, L), the identities (2.5) and (2.6) with the rescaled coefficient s̃, where we have used Young's inequality.
Doing the same with (2.2) and integrating over (0, L), we obtain (2.7). We also differentiate (2.1) with respect to z and test with ∂ z u; analogously to the above, this yields (2.9). For the initial value problem (with respect to z) for the Schrödinger equation (2.1), we combine (2.6) + ρ ⋅ (2.9) and then use Young's inequality with factors 1/(2√ρ), √ρ/2 for the last term.
An application of Gronwall's inequality then yields an estimate into which we can insert the initial conditions and their differentiated version, substituting from the PDE. Using this together with the Cauchy-Schwarz and Young inequalities in (2.5) yields the asserted estimate, where we have chosen λ 2 such that ωλ 1 = ρ(ω/λ 1 + λ 2 /2) in order to make optimal use of η. To obtain an estimate for the two point boundary value problem (with respect to z) for the perturbed Schrödinger equation (2.2), we combine this with (2.7), where C 0 , C 1 are as in (2.3). In order to achieve the required coercivity, we invoke (2.4) and choose λ accordingly.
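The Gronwall step can be illustrated on a toy scalar differential inequality (our sketch; the coefficients a, b and the damping term are assumed values, not the paper's energy functional):

```python
import math

# Toy illustration of the Gronwall argument: if E'(z) <= a*E(z) + b on
# (0, L), then E(z) <= E(0)*exp(a*z) + (b/a)*(exp(a*z) - 1).

def euler_bounded_growth(E0, a, b, damping, L=1.0, n=200000):
    """Integrate E' = a*E + b - damping(z)*E (damping >= 0) by forward Euler."""
    h = L / n
    E = E0
    for i in range(n):
        z = i * h
        E += h * (a * E + b - damping(z) * E)
    return E

a, b, E0, L = 2.0, 0.5, 1.0, 1.0  # assumed toy values
gronwall_bound = E0 * math.exp(a * L) + (b / a) * (math.exp(a * L) - 1.0)

# Any nonnegative damping only decreases E, so the Gronwall bound holds.
E_end = euler_bounded_growth(E0, a, b, damping=lambda z: 0.3 * (1 + math.sin(z)))
print(E_end <= gronwall_bound)
```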
Remark 2. Since we will apply the part of Proposition 1 concerning (2.1) with large ω = ω k , k = 1, 2, we track the frequency dependence of the estimates explicitly there. On the other hand, the energy estimate in Proposition 1 for (2.2) will only be used for the (small) difference frequency ω = ω d , and therefore the restrictions on the size of ω made there do not compromise practical applicability in our context. A wavenumber-explicit estimate for (2.2) follows by application of the transformation (1.6) to the Helmholtz equation, without having to impose the smallness condition (2.4); see Proposition 2 in the next subsection.

Well-Posedness, Using the Helmholtz Equation for the Propagating Wave
Alternatively to the paraxial form of the PDE for ψ, we consider (2.13) for ψ = P −1 ψ̌ , η = e −ıω d T P −1 η̌ (cf. (1.5)), with boundary conditions (2.14), with P(Ω) ⊇ Ω = (0, L) × Ωy , where P is the paraxial transform defined by (1.4) and 1 Ω the extension-by-zero operator for a function defined on P −1 (Ω) to all of Ω. This allows us to use a possibly larger propagation domain for the ψ wave as compared to the one for the φ 1 , φ 2 waves, and to get rid of the smallness assumption (2.4).
Well-posedness and Gâteaux differentiability of the forward operator defined by the system (2.13), (2.14) follow analogously to Corollary 1 by combining the first part of Proposition 1 with known results on the Helmholtz equation with impedance boundary conditions; see, e.g., [19, Chapter 8]. For completeness, and to track the required coefficient regularity, we here provide the essential arguments.
We start with some energy estimates for the Helmholtz equation, which can be obtained by applying the testing strategy from [19, Chapter 8] (where the constant coefficient case with σ = s is considered) to our variable coefficient setting. Again, these results are in principle available in the literature (see, e.g., [7]), but we aim at making the required regularity of s visible for our purposes.
Testing the general Helmholtz equation (2.15) on a smooth domain Ω with impedance boundary conditions with u and taking real and imaginary parts yields (2.16). On a starlike domain Ω in two space dimensions d = 2 with x ⋅ ν ≥ 0 on ∂Ω, we can also use v(x) := x ⋅ ∇u(x) as a test function (provided it is contained in H 1 (Ω)) and consider the real part to arrive at (2.18). Here we have used the identities (2.17) as well as the divergence theorem and d = 2. This yields the following results in a low and a higher regularity regime of s, with correspondingly different frequency dependence of the constants in the estimates on u.
Proposition 2. Under the assumptions stated above (with the corresponding quantity vanishing a.e. on ∂Ω), a solution of (2.15) exists, is unique and satisfies the estimate (2.19). Here D 2 x u denotes the (weak) Hessian of u.
Proof. The well-posedness proof in the low regularity regime follows analogously to the one in the constant coefficient case [19, Chapter 8]. Since it can hardly be found in the literature for our setting (variable slowness, impedance boundary conditions), we provide it here. We rewrite (2.15) as (−Δ + id)u = F + Gu with the bounded and boundedly invertible operator (−Δ + id), understood in its weak form, tested with arbitrary v ∈ H 1 (Ω). This is equivalent to the fixed point identity u = Ku + b, where b ∈ H 1 (Ω) and the operator K is bounded from L 2 (Ω) to H 1 (Ω) and thus compact when considered as an operator from L 2 (Ω) into itself. We can therefore apply Fredholm's alternative to show bijectivity of I − K, then the Bounded Inverse Theorem on the Banach space L 2 (Ω), and finally lift the regularity of u to H 1 (Ω) by means of the fixed point identity u = Ku + b. In order to use Fredholm's alternative, we have to prove injectivity of I − K. To this end, assume that (I − K)w = 0 for some w ∈ L 2 (Ω), which, due to w = Kw, is automatically contained in H 1 (Ω). The second equation of (2.16) implies that w satisfies homogeneous Dirichlet, hence by the impedance condition also homogeneous Neumann, boundary conditions and can therefore be extended by zero to an H 2 (ℝ 3 ) function ŵ that then satisfies the homogeneous Helmholtz equation with ŝ := 1 Ω (s − 1) + 1 on all of ℝ 3 together with the Sommerfeld radiation conditions. From the results in [2, Section 8.3], we conclude that ŵ ≡ 0 and thus w ≡ 0. The higher order regularity result (2.19) follows from elliptic regularity and the fact that (2.20) holds. From (2.18) with (2.17), (2.20), applying the Cauchy-Schwarz inequality, we conclude an estimate to which we apply Young's inequality and the second equation in (2.16), with d(Ω) = sup x∈Ω |x|; hence, by another application of Young's inequality (with factors (1 − c Ω )/2 and 1/(2(1 − c Ω ))) together with the first identity in (2.16), this yields the claim, and again elliptic regularity yields (2.21).
Analogously to Corollary 1, we obtain well-definedness and differentiability of the parameter-to-state map.

Convergence of a Frozen Newton Method for the Inverse Problem
Based on the parameter-to-state map S defined in (1.13) and analyzed in Section 2, as well as the observation operator (1.14), we can write the inverse problem as (3.1) with y = p meas and the forward operator F = C ∘ S, and apply Newton's method to (3.1), using the fact that F′(s, η) = C ∘ S′(s, η) and relying on our verification of Gâteaux differentiability from Section 2. Alternatively, we here consider an all-at-once formulation (3.2) with y = (0, 0, 0, 0, 0, p meas ), based on the variational form of the system of initial-boundary value/boundary value problems (1.7), (1.8) for k ∈ {1, 2}, where we have used the abbreviation ⃗q = (s 1 , s 2 , η, φ 1 , φ 2 , ψ) and where ⃗w = (w 1 , w 2 , v, u 1 , u 2 , u) varies over some space of test functions. Note that we have introduced a second copy of the variable s in order to be able to verify the range invariance condition from [10] that allows us to prove convergence of a frozen Newton method. The additional constraint 0 = P(⃗q) := s 1 − s 2 , which will be imposed via a penalty term during the Newton type iteration (3.7), takes care of merging these two versions of s into one as the iteration proceeds.
The formal linearization of the all-at-once forward operator, with components corresponding to the six equations of the system, at some ⃗q 0 is given by (3.3). It is a bounded linear operator, and thus it is the Gâteaux derivative when considered, e.g., as a mapping between suitable L p -based function spaces, for some p ∈ [1, ∞]. Note that this setting slightly differs from the function space setting suggested for the reduced formulation by Corollary 1, but these differences are essential for the verification of convergence conditions for the frozen Newton method; see (3.5) below. Thus, we are here making use of the additional freedom in choosing the function spaces that we gain by using an all-at-once formulation (3.2) as compared to a reduced one (3.1). We can achieve the range invariance relation (3.4) by setting r(⃗q) := δ⃗q = (δs 1 , δs 2 , δη, δφ 1 , δφ 2 , δψ) as in (3.5). To this end, we have to choose φ 0 1 , φ 0 2 , ψ 0 such that the denominators in this expression are bounded away from zero. This is possible due to the fact that, in the all-at-once formulation, they need not be PDE solutions corresponding to the coefficients s 0 1 , s 0 2 , η 0 . Relying on this, we can also establish closeness of r to the identity in the sense that r(⃗q) − (⃗q − ⃗q 0 ) = (δs 1 − (s 1 − s 0 1 ), δs 2 − (s 2 − s 0 2 ), δη − (η − η 0 ), 0, 0, 0), whose nonzero components can be estimated, and similarly for ‖∂ z (δs 1 − (s 1 − s 0 1 ))‖ L p (L p ) . Based on the range invariance condition (3.4), we can rewrite the inverse problem of reconstructing (s, η) as a combination of an ill-posed linear and a well-posed nonlinear problem for the unknowns (r, ⃗q); here ⃗q 0 is fixed and U is a neighborhood of ⃗q 0 . For its regularized iterative solution, we consider the frozen Newton method (3.7), where y δ ≈ y is the noisy data, α n → 0 as n → ∞, and ℤ = L p (0, L; L p (Ωy)). To work in Hilbert spaces, we set p = 2.
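The structure of the frozen Newton iteration (3.7), i.e., a derivative frozen at the initial guess combined with Tikhonov regularization and α n → 0, can be sketched on a hypothetical scalar toy problem (our illustration; the operator F below is not the paper's forward operator):

```python
# Minimal sketch of a frozen Newton iteration on a toy scalar problem
# F(q) = q**3 = y: the derivative is frozen at the initial guess q0, and
# each step solves a Tikhonov-regularized linearized normal equation with
# geometrically decaying regularization parameters alpha_n -> 0.

def frozen_newton(F, dF_q0, y, q0, n_iter=300, alpha0=0.1, rho=0.7):
    """q_{n+1} = q_n + (K*K + alpha_n)^{-1} * K * (y - F(q_n)), K = F'(q0) frozen."""
    q, K, alpha = q0, dF_q0, alpha0
    for _ in range(n_iter):
        q += K * (y - F(q)) / (K * K + alpha)
        alpha *= rho  # alpha_n -> 0 as n -> infinity
    return q

q0 = 1.5  # assumed initial guess
q = frozen_newton(lambda q: q ** 3, dF_q0=3 * q0 ** 2, y=8.0, q0=q0)
print(abs(q - 2.0) < 1e-8)
```

Note that the Jacobian is evaluated only once, at q0; convergence then rests on the fixed derivative being a sufficiently good approximation near the solution, which is what the range invariance condition formalizes in the infinite-dimensional setting.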
In order to prove convergence of (3.7), we additionally need a condition on the compatibility of the linearization of the forward operator at ⃗q 0 with P; see [10, Lemma B.1]. Sufficient for this condition is linearized uniqueness with s 1 = s 2 , that is, triviality of the intersection of the nullspaces, see (3.8). For the model (1.7), (1.8) described by (3.3), this would likely require more than one excitation as well as observations at several frequencies, along with the corresponding extension of the dependency of s, in order to allow for range invariance (3.4). We here rigorously establish (3.8) for the alternative model (2.13), (2.14), using the Helmholtz equation for the outgoing wave ψ and making the simplifying but still very realistic assumption that propagation of the excitation waves described by φ 1 , φ 2 takes place in a homogeneous domain with known and constant speed of sound.
The unknown parameters are then the squared slowness š and the nonlinearity coefficient η, and the model reads as (3.9), where f is given by the excitation waves φ 1 , φ 2 , which can be precomputed under the assumption that s = s 0 = 1/c is known in the first equations of (2.13). Moreover, on the boundary ∂Ω, we assume š = s 0 2 = 1/c 2 to be known, and define the operators A, D, M accordingly. In our linearized uniqueness proof, we will assume that A, D and M are simultaneously diagonalizable. This holds true, e.g., in the case σ = 0, where M = id and D = bA. Note that, in the all-at-once setting considered here, we are not bound to the well-posedness theory of the underlying PDE problems and therefore have more freedom in choosing the coefficients in the boundary conditions.
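The simultaneous-diagonalizability assumption in the case σ = 0 can be checked on a small matrix example (our toy sketch; the matrix A below is an assumed discrete Laplacian, not the paper's operator):

```python
# Toy check: in the case sigma = 0, M = id and D = b*A, so A, D, M
# trivially commute and therefore share a common eigenbasis (A symmetric).
# We verify commutation A@D = D@A using plain nested lists.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n, b = 5, 2.5  # assumed size and coefficient
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]   # 1D discrete Laplacian
D = [[b * A[i][j] for j in range(n)] for i in range(n)]  # D = b*A
M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # M = id

AD, DA = matmul(A, D), matmul(D, A)
commute = all(abs(AD[i][j] - DA[i][j]) < 1e-12
              for i in range(n) for j in range(n))
print(commute)
```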
To obtain uniqueness, we will need observations on a set I of difference frequencies ω d and with at least two different sets of excitation frequencies; more precisely, we take I countable with at least one accumulation point. Correspondingly, we consider ψ = ψ(ω d , ω′) as a function of ω d , while the original coefficients š, η of course stay independent of (ω d , ω′). In order to satisfy range invariance, we will introduce an artificial dependence š = š(ω d , ω′), while keeping η independent of ω d and ω′.