


Open Access | Online ISSN 2191-950X | Volume 8, Issue 1

# Quantization of energy and weakly turbulent profiles of solutions to some damped second-order evolution equations

Marina Ghisi
/ Massimo Gobbino
/ Alain Haraux
Published Online: 2017-12-13 | DOI: https://doi.org/10.1515/anona-2017-0181

## Abstract

We consider a second-order equation with a linear “elastic” part and a nonlinear damping term depending on a power of the norm of the velocity. We investigate the asymptotic behavior of solutions, after rescaling them suitably in order to take into account the decay rate and bound their energy away from zero. We find a rather unexpected dichotomy phenomenon. Solutions with finitely many Fourier components are asymptotic to solutions of the linearized equation without damping and exhibit some sort of equipartition of the total energy among the components. Solutions with infinitely many Fourier components tend to zero weakly but not strongly. We show also that the limit of the energy of the solutions depends only on the number of their Fourier components. The proof of our results is inspired by the analysis of a simplified model, which we devise through an averaging procedure, and whose solutions exhibit the same asymptotic properties as the solutions to the original equation.

MSC 2010: 35B40; 35L70; 35B36

## 1 Introduction

Let H be a real Hilbert space, in which $|x|$ denotes the norm of an element $x\in H$, and $〈x,y〉$ denotes the scalar product of two elements x and y. Let A be a self-adjoint operator on H with dense domain $D\left(A\right)$. We assume that H admits a countable orthonormal basis made by eigenvectors of A corresponding to an increasing sequence of positive eigenvalues ${\lambda }_{k}^{2}$.

We consider the second-order evolution equation

${u}^{\prime \prime }\left(t\right)+{|{u}^{\prime }\left(t\right)|}^{2}{u}^{\prime }\left(t\right)+Au\left(t\right)=0,$(1.1)

with initial conditions

$u\left(0\right)={u}_{0}\in D\left({A}^{1/2}\right),{u}^{\prime }\left(0\right)={u}_{1}\in H.$(1.2)

All nonzero solutions to (1.1) decay to zero in the energy space $D\left({A}^{1/2}\right)×H$, with a decay rate proportional to ${t}^{-1/2}$ (see Proposition 3.1). This suggests the introduction and the investigation of the rescaled variable $v\left(t\right):=\sqrt{t}\cdot u\left(t\right)$.

The special structure of the damping term guarantees that for any linear subspace $F\subseteq D\left(A\right)$ such that $A\left(F\right)\subseteq F$, the space $F×F$ is positively invariant under the flow generated by (1.1). In particular, equation (1.1) possesses so-called finite-dimensional modes, namely, solutions for which both components of the initial state $\left({u}_{0},{u}_{1}\right)$ are finite combinations of the eigenvectors. Denoting by ${u}_{k}\left(t\right)$ and ${v}_{k}\left(t\right)$ the projections of $u\left(t\right)$ and $v\left(t\right)$ on the k-th eigenspace, we shall for simplicity call the quantity

$t\left({|{u}_{k}^{\prime }\left(t\right)|}^{2}+{\lambda }_{k}^{2}{|{u}_{k}\left(t\right)|}^{2}\right)$

the “energy of the k-th Fourier component of $v\left(t\right)$” (standing for total energy), while

$t\left({|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}\right)$

will be called the “energy of $v\left(t\right)$” (again meaning total energy). For t large, these quantities are easily seen to be equivalent to ${|{v}_{k}^{\prime }\left(t\right)|}^{2}+{\lambda }_{k}^{2}{|{v}_{k}\left(t\right)|}^{2}$ and ${|{v}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}v\left(t\right)|}^{2}$, respectively. Our main results, formally stated as Theorem 2.1 and Theorem 2.5, can be summed up as follows.

• The limit of the energy of $v\left(t\right)$ depends only on the number of Fourier components of $v\left(t\right)$ that are different from 0. In particular, the limit of the energy can take only countably many values.

• If $v\left(t\right)$ has only a finite number of Fourier components different from 0, then $v\left(t\right)$ is asymptotic in a strong sense to a suitable solution ${v}_{\mathrm{\infty }}\left(t\right)$ to the nondissipative linear equation

${v}^{\prime \prime }\left(t\right)+Av\left(t\right)=0.$(1.3)

Moreover, there is equipartition of the total energy in the limit, in the sense that all nonzero Fourier components of ${v}_{\mathrm{\infty }}\left(t\right)$ do have the same total energy.

• If $v\left(t\right)$ has infinitely many components different from 0, then $v\left(t\right)$ tends to 0 weakly in the energy space, but not strongly. Roughly speaking, the energy of $v\left(t\right)$ does not tend to 0, but in the limit there is again equipartition of the energy, now among infinitely many components, and this forces all components of $v\left(t\right)$ to vanish in the limit.

In other words, the Fourier components of rescaled solutions to (1.1) communicate with each other, and this can result in some sort of energy transfer from lower to higher frequencies, tending toward a uniform distribution of the energy among components. In the case of an infinite number of non-trivial Fourier components, the weak convergence to 0 implies non-compactness of the profile in the energy space. In particular, if A has compact resolvent, whenever the initial state $\left({u}_{0},{u}_{1}\right)$ belongs to $D\left(A\right)×D\left({A}^{1/2}\right)$ and has an infinite number of elementary modes, the norm of $\left(v\left(t\right),{v}^{\prime }\left(t\right)\right)$ in $D\left(A\right)×D\left({A}^{1/2}\right)$ is unbounded, a typical phenomenon usually called weak turbulence; cf., e.g., [1, 6] for other examples.
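The quantized limits $4j/\left(2j+1\right)$ can be probed numerically on finite-dimensional truncations of (1.1), which are genuine solutions thanks to the invariance discussed above. The following sketch is our illustration, not part of the paper; the integrator, initial data, and time horizon are arbitrary choices. It integrates the truncated system ${u}_{k}^{\prime \prime }+\left(\sum _{i}{|{u}_{i}^{\prime }|}^{2}\right){u}_{k}^{\prime }+{\lambda }_{k}^{2}{u}_{k}=0$ with a hand-rolled RK4 scheme and reports the rescaled energy $t\cdot E\left(t\right)$:

```python
def simulate(lams, u0, u1, t_end=1000.0, dt=0.02):
    """Integrate the truncated system u_k'' + (sum_i u_i'^2) u_k' + lam_k^2 u_k = 0
    with a plain RK4 scheme and return the rescaled energy t * E(t) at t = t_end."""
    n = len(lams)
    y = list(u0) + list(u1)  # state vector: positions, then velocities

    def f(s):
        u, w = s[:n], s[n:]
        damp = sum(wi * wi for wi in w)  # |u'(t)|^2
        return w + [-damp * w[k] - lams[k] ** 2 * u[k] for k in range(n)]

    for _ in range(int(t_end / dt)):
        k1 = f(y)
        k2 = f([yi + 0.5 * dt * ki for yi, ki in zip(y, k1)])
        k3 = f([yi + 0.5 * dt * ki for yi, ki in zip(y, k2)])
        k4 = f([yi + dt * ki for yi, ki in zip(y, k3)])
        y = [yi + dt * (a + 2 * b + 2 * c + d) / 6
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
    u, w = y[:n], y[n:]
    energy = sum(wi * wi for wi in w) + sum((lams[k] * u[k]) ** 2 for k in range(n))
    return t_end * energy

q1 = simulate([1.0], [1.0], [0.0])                  # j = 1: predicted limit 4/3
q2 = simulate([1.0, 2.0], [1.0, 0.5], [0.0, 0.0])   # j = 2: predicted limit 8/5
print(q1, q2)  # both should be close to their predicted limits
```

For $j=1$ the predicted limit is $4/3$, for $j=2$ it is $8/5$; as j grows the limits increase to 2, consistent with the infinite-dimensional case.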

Our abstract theory applies for example to wave equations with nonlinear nonlocal damping terms of the form

${u}_{tt}\left(t,x\right)+\left({\int }_{0}^{\mathrm{\ell }}{u}_{t}^{2}\left(t,x\right)𝑑x\right){u}_{t}\left(t,x\right)-{u}_{xx}\left(t,x\right)=0$(1.4)

in a bounded interval $\left(0,\mathrm{\ell }\right)$ of the real line with homogeneous Dirichlet boundary conditions. This is a toy model of the wave equation with local nonlinear damping

${u}_{tt}\left(t,x\right)+{u}_{t}^{3}\left(t,x\right)-{u}_{xx}\left(t,x\right)=0\mathit{ }t\ge 0,x\in \left(0,\mathrm{\ell }\right),$(1.5)

which in turn is the prototype of all wave equations with nonlinear dissipation of order higher than one at the origin. This more general problem was the motivation that led us to consider equations (1.4) and (1.1). It is quite easy to prove that all solutions to (1.5) decay at least as ${t}^{-1/2}$. Actually, the more general problem

${u}_{tt}\left(t,x\right)+g\left({u}_{t}\left(t,x\right)\right)-\mathrm{\Delta }u\left(t,x\right)=0$

in any bounded domain with homogeneous Dirichlet boundary conditions and g non-decreasing has been extensively studied under relevant assumptions on the behavior of g near the origin and some conditions on the growth of g at infinity, cf., e.g., [9, 2, 3, 8], in which reasonable energy estimates of the same form as those in the ODE case are obtained. However, the asymptotic behavior of solutions to the simple equation (1.5) is still a widely open problem since, unlike the ODE case, the optimality of this decay rate is unknown: there are neither examples of solutions to (1.5) whose decay rate is proportional to ${t}^{-1/2}$, nor examples of nonzero solutions that decay faster.

It is not clear whether our results shed some light on the local case or not. For sure, they confirm the complexity of the problem. In the case of (1.5), there are no simple invariant subspaces, and the interplay between components induced by the nonlinearity is more involved. Therefore, it is reasonable to guess that at most the infinite-dimensional behavior of (1.4) extends to (1.5), and this behavior is characterized by lack of an asymptotic profile and of strong convergence.

As a matter of fact, the problem of optimal decay rates is strongly related to regularity issues. It can be easily shown that the solutions to (1.5) with initial data in the energy space remain in the same space for all times, and their energy is bounded by the initial energy. But what about more regular solutions? Can one bound higher order Sobolev norms of solutions in terms of the corresponding norms of initial data? This is another open problem whose answer would imply partial results for decay rates, as explained in [5, 6], cf. also [10] for a partial optimality result in the case of boundary damping. However, the energy traveling toward higher frequencies might prevent the bounds on higher order norms from being true, or at least from being easy to prove.

This paper is organized as follows. In Section 2 we state our main results. In Section 3 we prove the basic energy estimate from above and from below for solutions to (1.1), we introduce Fourier components, and we interpret (1.1) as a system of infinitely many ordinary differential equations. In Section 4 we consider a simplified system, obtained from the original one by averaging some oscillating terms. Then we analyze the simplified system, and we discover that it is the gradient flow of a quadratically perturbed convex functional, whose solutions exhibit most of the features of the full system we started with, including the existence of a large class of solutions which die off weakly at infinity. In Section 5 we investigate the asymptotic behavior of solutions to scalar differential equations and inequalities involving fast oscillating terms. Section 6 is devoted to estimates on oscillating integrals. Finally, in Section 7 we put things together and we conclude the proof of our main results.

## 2 Statements

Let us consider equation (1.1) with initial data (1.2). If A is self-adjoint and nonnegative, it is quite standard that the problem admits a unique weak global solution

$u\in {C}^{1}\left(\left[0,+\mathrm{\infty }\right),H\right)\cap {C}^{0}\left(\left[0,+\mathrm{\infty }\right),D\left({A}^{1/2}\right)\right).$

Moreover, the classical energy

$E\left(t\right):={|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}$(2.1)

is of class ${C}^{1}$, and its time-derivative satisfies

${E}^{\prime }\left(t\right)=-2{|{u}^{\prime }\left(t\right)|}^{4}.$(2.2)

The following is the main result of this paper.

#### Theorem 2.1.

Let H be a Hilbert space, and let A be a linear operator on H with dense domain $D\mathit{}\mathrm{\left(}A\mathrm{\right)}$. Let us assume that there exist a countable orthonormal basis $\mathrm{\left\{}{e}_{k}\mathrm{\right\}}$ of H and an increasing sequence $\mathrm{\left\{}{\lambda }_{k}\mathrm{\right\}}$ of positive real numbers such that

$A{e}_{k}={\lambda }_{k}^{2}{e}_{k}$ for every $k\in ℕ$.

Let $u\mathit{}\mathrm{\left(}t\mathrm{\right)}$ be the solution to problem (1.1)–(1.2), let $\mathrm{\left\{}{u}_{\mathrm{0}\mathit{}k}\mathrm{\right\}}$ and $\mathrm{\left\{}{u}_{\mathrm{1}\mathit{}k}\mathrm{\right\}}$ denote the components of ${u}_{\mathrm{0}}$ and ${u}_{\mathrm{1}}$ with respect to the orthonormal basis, and let $\mathrm{\left\{}{u}_{k}\mathit{}\mathrm{\left(}t\mathrm{\right)}\mathrm{\right\}}$ denote the corresponding components of $u\mathit{}\mathrm{\left(}t\mathrm{\right)}$. Let us consider the set

$J:=\left\{k\in ℕ:{u}_{1k}^{2}+{u}_{0k}^{2}\ne 0\right\}.$(2.3)

Then the asymptotic behavior of $u\mathit{}\mathrm{\left(}t\mathrm{\right)}$ and its energy depends on J as follows.

• (1)

(Trivial solution) If $J=\mathrm{\varnothing }$ , then $u\left(t\right)=0$ for every $t\ge 0$ and, in particular,

$\underset{t\to +\mathrm{\infty }}{lim}t\left({|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}\right)=0.$

• (2)

(Finite-dimensional modes) If J is a finite set with j elements, then ${u}_{k}\left(t\right)=0$ for every $t\ge 0$ and every $k\notin J$ . In addition, for every $k\in J$ , there exists a real number ${\theta }_{k,\mathrm{\infty }}$ such that

$\underset{t\to +\mathrm{\infty }}{lim}\left(\sqrt{t}\cdot {u}_{k}\left(t\right)-\frac{2}{\sqrt{2j+1}}\cdot \frac{\mathrm{cos}\left({\lambda }_{k}t+{\theta }_{k,\mathrm{\infty }}\right)}{{\lambda }_{k}}\right)=0,$(2.4)$\underset{t\to +\mathrm{\infty }}{lim}\left(\sqrt{t}\cdot {u}_{k}^{\prime }\left(t\right)+\frac{2}{\sqrt{2j+1}}\cdot \mathrm{sin}\left({\lambda }_{k}t+{\theta }_{k,\mathrm{\infty }}\right)\right)=0$(2.5)

and, in particular,

$\underset{t\to +\mathrm{\infty }}{lim}t\left({|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}\right)=\frac{4j}{2j+1}.$

• (3)

(Infinite-dimensional modes) If J is infinite, then

$\underset{t\to +\mathrm{\infty }}{lim}t\left({|{u}_{k}^{\prime }\left(t\right)|}^{2}+{\lambda }_{k}^{2}{|{u}_{k}\left(t\right)|}^{2}\right)=0$ for every $k\in ℕ$,

but

$\underset{t\to +\mathrm{\infty }}{lim inf}t\left({|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}\right)>0,$(2.6)

and hence $\sqrt{t}\cdot \left(u\left(t\right),{u}^{\prime }\left(t\right)\right)$ converges to $\left(0,0\right)$ weakly but not strongly.

Let us comment on some aspects of Theorem 2.1 above.

#### Remark 2.2.

The result holds true also when H is a finite-dimensional Hilbert space, but in this case only the first two options apply.

#### Remark 2.3.

In the case of finite-dimensional modes, let us set

${v}_{\mathrm{\infty }}\left(t\right):=\frac{2}{\sqrt{2j+1}}\sum _{k\in J}\frac{\mathrm{cos}\left({\lambda }_{k}t+{\theta }_{k,\mathrm{\infty }}\right)}{{\lambda }_{k}}{e}_{k}.$

It can be verified that ${v}_{\mathrm{\infty }}\left(t\right)$ is a solution to the linear homogeneous equation without damping (1.3), and that (2.4) and (2.5) are equivalent to saying that ${v}_{\mathrm{\infty }}\left(t\right)$ is the asymptotic profile of $\sqrt{t}\cdot u\left(t\right)$, in the sense that

$\underset{t\to +\mathrm{\infty }}{lim}\left({|\sqrt{t}\cdot {u}^{\prime }\left(t\right)-{v}_{\mathrm{\infty }}^{\prime }\left(t\right)|}^{2}+{|\sqrt{t}\cdot u\left(t\right)-{v}_{\mathrm{\infty }}\left(t\right)|}^{2}\right)=0.$

#### Remark 2.4.

The assumptions of Theorem 2.1 imply, in particular, that all eigenvalues are simple. Things become more complex if multiplicities are allowed. Let us consider the simplest case where H is a space of dimension 2, and the operator A is the identity. In this case equation (1.1) reduces to a system of two ordinary differential equations of the form

$\left\{\begin{array}{cc}& \stackrel{¨}{u}+\left({\stackrel{˙}{u}}^{2}+{\stackrel{˙}{v}}^{2}\right)\stackrel{˙}{u}+u=0,\hfill \\ & \stackrel{¨}{v}+\left({\stackrel{˙}{u}}^{2}+{\stackrel{˙}{v}}^{2}\right)\stackrel{˙}{v}+v=0.\hfill \end{array}$

If $\left(v\left(0\right),{v}^{\prime }\left(0\right)\right)=c\left(u\left(0\right),{u}^{\prime }\left(0\right)\right)$ for some constant c, then $v\left(t\right)=cu\left(t\right)$ for every $t\ge 0$, hence there is no equipartition of the energy in the limit.
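This invariance is easy to observe numerically. In the sketch below (our code, with arbitrary initial data, not taken from the paper), the state starts on the line $\left(v,{v}^{\prime }\right)=c\left(u,{u}^{\prime }\right)$, and the deviation of $v\left(t\right)$ from $cu\left(t\right)$ stays at round-off level, because the vector field of the system maps that subspace into itself and each Runge–Kutta stage preserves it:

```python
def f(s):
    # the two coupled oscillators of Remark 2.4 (A = identity)
    u, du, v, dv = s
    damp = du * du + dv * dv
    return (du, -damp * du - u, dv, -damp * dv - v)

def rk4_step(s, dt):
    k1 = f(s)
    k2 = f(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)))
    k3 = f(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)))
    k4 = f(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt * (a + 2 * b + 2 * c + d) / 6
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

c = 2.0
state = (1.0, 0.0, c * 1.0, c * 0.0)  # (v(0), v'(0)) = c (u(0), u'(0))
dt, dev = 0.001, 0.0
for _ in range(20000):  # integrate up to t = 20
    state = rk4_step(state, dt)
    u, du, v, dv = state
    dev = max(dev, abs(v - c * u), abs(dv - c * du))
print(dev)  # stays at numerical round-off level: v(t) = c u(t)
```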

In our second result we consider again the case where J is infinite, and we improve (2.6) under a uniform gap condition on the eigenvalues (which is satisfied for our model problem (1.4)).

#### Theorem 2.5.

Let H, A, ${\lambda }_{k}$, $u\mathit{}\mathrm{\left(}t\mathrm{\right)}$ and J be as in Theorem 2.1. Let us assume in addition that J is infinite and

$\underset{k\in ℕ}{inf}\left({\lambda }_{k+1}-{\lambda }_{k}\right)>0.$(2.7)

Then it turns out that

$\underset{t\to +\mathrm{\infty }}{lim}t\left({|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}\right)=2.$

## 3 Basic energy estimates and reduction to ODEs

In this section we make the first steps toward the proof of Theorem 2.1. In particular, we prove a basic energy estimate, and we reduce the problem to a system of countably many ordinary differential equations.

#### Proposition 3.1 (Basic energy estimate).

Let H, A and $u\mathit{}\mathrm{\left(}t\mathrm{\right)}$ be as in Theorem 2.1. Assume that $\mathrm{\left(}{u}_{\mathrm{0}}\mathrm{,}{u}_{\mathrm{1}}\mathrm{\right)}\mathrm{\ne }\mathrm{\left(}\mathrm{0}\mathrm{,}\mathrm{0}\mathrm{\right)}$. Then there exist two positive constants ${M}_{\mathrm{1}}$ and ${M}_{\mathrm{2}}$ such that

$\frac{{M}_{1}}{1+t}\le {|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}\le \frac{{M}_{2}}{1+t}\text{ for every }t\ge 0.$(3.1)

#### Proof.

Let us consider the classical energy (2.1). From (2.2) it follows that

${E}^{\prime }\left(t\right)=-2{|{u}^{\prime }\left(t\right)|}^{4}\ge -2{E}^{2}\left(t\right).$

Integrating this differential inequality, we obtain the estimate from below in (3.1). Since ${E}^{\prime }\left(t\right)\le 0$ for every $t\ge 0$, we deduce also that

$E\left(t\right)\le E\left(0\right)\text{ for every }t\ge 0.$(3.2)

Let us consider now the modified energy

${F}_{\epsilon }\left(t\right):=E\left(t\right)+2\epsilon 〈u\left(t\right),{u}^{\prime }\left(t\right)〉E\left(t\right),$

where ε is a positive parameter. We claim that there exists ${\epsilon }_{0}>0$ such that, for every $\epsilon \in \left(0,{\epsilon }_{0}\right]$,

$\frac{1}{2}E\left(t\right)\le {F}_{\epsilon }\left(t\right)\le 2E\left(t\right)\text{ for every }t\ge 0$(3.3)

and

${F}_{\epsilon }^{\prime }\left(t\right)\le -\epsilon {E}^{2}\left(t\right)\text{ for every }t\ge 0.$(3.4)

If we prove these claims, then we set $\epsilon ={\epsilon }_{0}$, and from (3.4) and the estimate from above in (3.3), we deduce that

${F}_{{\epsilon }_{0}}^{\prime }\left(t\right)\le -\frac{{\epsilon }_{0}}{4}{F}_{{\epsilon }_{0}}^{2}\left(t\right)\text{ for every }t\ge 0.$

An integration of this differential inequality gives that

${F}_{{\epsilon }_{0}}\left(t\right)\le \frac{{k}_{1}}{1+t}\text{ for every }t\ge 0$

for a suitable constant ${k}_{1}$, and hence the estimate from below in (3.3) implies that

$E\left(t\right)\le 2{F}_{{\epsilon }_{0}}\left(t\right)\le \frac{2{k}_{1}}{1+t}\text{ for every }t\ge 0,$

which proves the estimate from above in (3.1).

So we only need to prove (3.3) and (3.4). The coerciveness of the operator A implies that

$|2〈{u}^{\prime }\left(t\right),u\left(t\right)〉|\le {|{u}^{\prime }\left(t\right)|}^{2}+{|u\left(t\right)|}^{2}\le {|{u}^{\prime }\left(t\right)|}^{2}+\frac{1}{{\lambda }_{1}^{2}}{|{A}^{1/2}u\left(t\right)|}^{2},$

and hence, from (3.2), we obtain

$|2〈{u}^{\prime }\left(t\right),u\left(t\right)〉|\le {k}_{2}\text{ for every }t\ge 0$(3.5)

for a suitable constant ${k}_{2}$ depending on the initial data. This guarantees that (3.3) holds true when ε is small enough.

As for (3.4), after some computations, we obtain that it is equivalent to

$\left(2-3\epsilon \right){|{u}^{\prime }\left(t\right)|}^{4}+\epsilon {|{A}^{1/2}u\left(t\right)|}^{4}-2\epsilon {|{u}^{\prime }\left(t\right)|}^{2}\cdot {|{A}^{1/2}u\left(t\right)|}^{2}$$+6\epsilon 〈{u}^{\prime }\left(t\right),u\left(t\right)〉\cdot {|{u}^{\prime }\left(t\right)|}^{4}+2\epsilon 〈{u}^{\prime }\left(t\right),u\left(t\right)〉\cdot {|{u}^{\prime }\left(t\right)|}^{2}\cdot {|{A}^{1/2}u\left(t\right)|}^{2}\ge 0.$(3.6)

Taking (3.5) into account, (3.6) holds true if we show that

$\left(2-3\epsilon -3\epsilon {k}_{2}\right){|{u}^{\prime }\left(t\right)|}^{4}+\epsilon {|{A}^{1/2}u\left(t\right)|}^{4}-\epsilon \left(2+{k}_{2}\right){|{u}^{\prime }\left(t\right)|}^{2}\cdot {|{A}^{1/2}u\left(t\right)|}^{2}\ge 0.$

The left-hand side is a quadratic form in the variables ${|{u}^{\prime }\left(t\right)|}^{2}$ and ${|{A}^{1/2}u\left(t\right)|}^{2}$, and it is nonnegative for all values of the variables, provided that

$4\left(2-3\epsilon -3\epsilon {k}_{2}\right)\epsilon \ge {\epsilon }^{2}{\left(2+{k}_{2}\right)}^{2},$

which is clearly true when ε is small enough. This completes the proof. ∎

Proposition 3.1 suggests that $u\left(t\right)$ decays as ${t}^{-1/2}$, and motivates the variable change

$v\left(t\right):=\sqrt{t+1}\cdot u\left(t\right).$

The energy of $v\left(t\right)$ is given by

${|{v}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}v\left(t\right)|}^{2}=\left(t+1\right){|{u}^{\prime }\left(t\right)|}^{2}+\frac{{|u\left(t\right)|}^{2}}{4\left(t+1\right)}+〈{u}^{\prime }\left(t\right),u\left(t\right)〉+\left(t+1\right){|{A}^{1/2}u\left(t\right)|}^{2}.$

We claim that there exist constants ${M}_{3}$ and ${M}_{4}$ such that

${M}_{3}\le {|{v}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}v\left(t\right)|}^{2}\le {M}_{4}\text{ for every }t\ge 0.$(3.7)

The upper estimate being quite clear, we just prove the lower bound. To this end, we start from the simple inequality

${|{v}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}v\left(t\right)|}^{2}\ge \left(t+1\right){|{u}^{\prime }\left(t\right)|}^{2}+\left[\frac{{\lambda }_{1}^{2}}{2}+\frac{1}{4\left(t+1\right)}\right]{|u\left(t\right)|}^{2}+〈{u}^{\prime }\left(t\right),u\left(t\right)〉+\frac{t+1}{2}{|{A}^{1/2}u\left(t\right)|}^{2}.$

On the other hand,

$\left(t+1\right){|{u}^{\prime }\left(t\right)|}^{2}+\left[\frac{{\lambda }_{1}^{2}}{2}+\frac{1}{4\left(t+1\right)}\right]{|u\left(t\right)|}^{2}+〈{u}^{\prime }\left(t\right),u\left(t\right)〉$

is obviously greater than or equal to

$\left(t+1\right){|{u}^{\prime }\left(t\right)|}^{2}+\frac{2{\lambda }_{1}^{2}+1}{4\left(t+1\right)}{|u\left(t\right)|}^{2}+〈{u}^{\prime }\left(t\right),u\left(t\right)〉.$

By decomposing this expression, we obtain the inequality

$\frac{t+1}{2{\lambda }_{1}^{2}+1}{|{u}^{\prime }\left(t\right)|}^{2}+\frac{2{\lambda }_{1}^{2}+1}{4\left(t+1\right)}{|u\left(t\right)|}^{2}+〈{u}^{\prime }\left(t\right),u\left(t\right)〉+\left(t+1\right)\left(1-\frac{1}{2{\lambda }_{1}^{2}+1}\right){|{u}^{\prime }\left(t\right)|}^{2}\ge \frac{2{\lambda }_{1}^{2}}{2{\lambda }_{1}^{2}+1}\left(t+1\right){|{u}^{\prime }\left(t\right)|}^{2},$

and we end up with

${|{v}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}v\left(t\right)|}^{2}\ge \mathrm{min}\left\{\frac{1}{2},\frac{2{\lambda }_{1}^{2}}{2{\lambda }_{1}^{2}+1}\right\}\left(t+1\right)\left({|{u}^{\prime }\left(t\right)|}^{2}+{|{A}^{1/2}u\left(t\right)|}^{2}\right),$

which proves the lower bound in (3.7) with

${M}_{3}=\mathrm{min}\left\{\frac{1}{2},\frac{2{\lambda }_{1}^{2}}{2{\lambda }_{1}^{2}+1}\right\}{M}_{1}.$
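The expansion of the energy of the rescaled variable $v\left(t\right)=\sqrt{t+1}\cdot u\left(t\right)$ used in this computation can be confirmed symbolically in the scalar case; the only nontrivial piece is ${|{v}^{\prime }\left(t\right)|}^{2}$, since the ${A}^{1/2}$ term just picks up the factor $t+1$. A sketch (ours, assuming sympy is available):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
u = sp.Function('u')
v = sp.sqrt(t + 1) * u(t)  # the rescaling v(t) = sqrt(t+1) * u(t)

# |v'|^2 should expand to (t+1)|u'|^2 + <u', u> + |u|^2 / (4(t+1))
lhs = sp.diff(v, t) ** 2
rhs = ((t + 1) * sp.diff(u(t), t) ** 2
       + u(t) * sp.diff(u(t), t)
       + u(t) ** 2 / (4 * (t + 1)))
print(sp.simplify(lhs - rhs))  # 0
```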

Starting from (1.1), after some computations, we can verify that $v\left(t\right)$ solves

${v}^{\prime \prime }\left(t\right)+\left({|{v}^{\prime }\left(t\right)|}^{2}-1\right)\frac{{v}^{\prime }\left(t\right)}{t+1}+Av\left(t\right)={g}_{1}\left(t\right)v\left(t\right)+{g}_{2}\left(t\right){v}^{\prime }\left(t\right),$(3.8)

where ${g}_{1}:\left[0,+\mathrm{\infty }\right)\to ℝ$ and ${g}_{2}:\left[0,+\mathrm{\infty }\right)\to ℝ$ are defined by

${g}_{1}\left(t\right):=-\frac{3}{4}\frac{1}{{\left(t+1\right)}^{2}}+\frac{1}{2}\frac{{|{v}^{\prime }\left(t\right)|}^{2}}{{\left(t+1\right)}^{2}}-\frac{1}{2}\frac{〈v\left(t\right),{v}^{\prime }\left(t\right)〉}{{\left(t+1\right)}^{3}}+\frac{1}{8}\frac{{|v\left(t\right)|}^{2}}{{\left(t+1\right)}^{4}},{g}_{2}\left(t\right):=\frac{〈v\left(t\right),{v}^{\prime }\left(t\right)〉}{{\left(t+1\right)}^{2}}-\frac{1}{4}\frac{{|v\left(t\right)|}^{2}}{{\left(t+1\right)}^{3}}.$

Due to (3.7), there exists a constant ${M}_{5}$ such that

$|{g}_{1}\left(t\right)|+|{g}_{2}\left(t\right)|\le \frac{{M}_{5}}{{\left(t+1\right)}^{2}}\text{ for every }t\ge 0.$(3.9)

In the sequel, we interpret ${g}_{1}\left(t\right)$ and ${g}_{2}\left(t\right)$ as time-dependent coefficients satisfying this estimate, rather than nonlinear terms.

Let now $\left\{{v}_{k}\left(t\right)\right\}$ denote the components of $v\left(t\right)$ with respect to the orthonormal basis. Then (3.8) can be rewritten as a system of countably many ordinary differential equations of the form

${v}_{k}^{\prime \prime }\left(t\right)+\left(\sum _{i=0}^{\mathrm{\infty }}{\left[{v}_{i}^{\prime }\left(t\right)\right]}^{2}-1\right)\frac{{v}_{k}^{\prime }\left(t\right)}{t+1}+{\lambda }_{k}^{2}{v}_{k}\left(t\right)={g}_{1}\left(t\right){v}_{k}\left(t\right)+{g}_{2}\left(t\right){v}_{k}^{\prime }\left(t\right).$(3.10)

Let us introduce polar coordinates ${r}_{k}\left(t\right)$ and ${\phi }_{k}\left(t\right)$ in such a way that

${v}_{k}\left(t\right)=\frac{1}{{\lambda }_{k}}{r}_{k}\left(t\right)\mathrm{cos}{\phi }_{k}\left(t\right),{v}_{k}^{\prime }\left(t\right)={r}_{k}\left(t\right)\mathrm{sin}{\phi }_{k}\left(t\right).$

In these new variables, every second-order equation in (3.10) is equivalent to a system of two first-order equations of the form (for the sake of shortness we do not write explicitly the dependence of ${r}_{k}$ and ${\phi }_{k}$ on t)

${r}_{k}^{\prime }=-\left(\sum _{i=0}^{\mathrm{\infty }}{r}_{i}^{2}{\mathrm{sin}}^{2}{\phi }_{i}-1\right)\frac{{r}_{k}{\mathrm{sin}}^{2}{\phi }_{k}}{t+1}+{\gamma }_{k}\left(t\right){r}_{k}\mathrm{sin}{\phi }_{k},$(3.11)${\phi }_{k}^{\prime }=-{\lambda }_{k}-\left(\sum _{i=0}^{\mathrm{\infty }}{r}_{i}^{2}{\mathrm{sin}}^{2}{\phi }_{i}-1\right)\frac{\mathrm{sin}{\phi }_{k}\mathrm{cos}{\phi }_{k}}{t+1}+{\gamma }_{k}\left(t\right)\mathrm{cos}{\phi }_{k},$(3.12)

where

${\gamma }_{k}\left(t\right):=\frac{{g}_{1}\left(t\right)}{{\lambda }_{k}}\mathrm{cos}{\phi }_{k}\left(t\right)+{g}_{2}\left(t\right)\mathrm{sin}{\phi }_{k}\left(t\right).$

In particular, since the eigenvalues are bounded from below, from (3.9) it follows that there exists a constant ${M}_{6}$ such that

$|{\gamma }_{k}\left(t\right)|\le \frac{{M}_{6}}{{\left(t+1\right)}^{2}}\text{ for every }t\ge 0\text{ and every }k\in ℕ.$(3.13)
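The equivalence between each second-order equation in (3.10) and the polar system (3.11)–(3.12) can be verified symbolically in the single-mode case, writing the right-hand side ${g}_{1}{v}_{k}+{g}_{2}{v}_{k}^{\prime }$ as ${\gamma }_{k}{r}_{k}$ with ${\gamma }_{k}=\left({g}_{1}/{\lambda }_{k}\right)\mathrm{cos}{\phi }_{k}+{g}_{2}\mathrm{sin}{\phi }_{k}$ and keeping ${\gamma }_{k}$ abstract. A sketch of ours, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.symbols('lambda', positive=True)
r = sp.Function('r')      # r_k
ph = sp.Function('phi')   # phi_k
g = sp.Function('gamma')  # gamma_k(t), kept abstract

v = r(t) * sp.cos(ph(t)) / lam          # v_k = r_k cos(phi_k) / lambda_k
w = r(t) * sp.sin(ph(t))                # claimed value of v_k'
D = r(t) ** 2 * sp.sin(ph(t)) ** 2 - 1  # single-mode damping factor

# right-hand sides of (3.11)-(3.12) in the single-mode case
dr = -D * r(t) * sp.sin(ph(t)) ** 2 / (t + 1) + g(t) * r(t) * sp.sin(ph(t))
dph = -lam - D * sp.sin(ph(t)) * sp.cos(ph(t)) / (t + 1) + g(t) * sp.cos(ph(t))
rules = {sp.Derivative(r(t), t): dr, sp.Derivative(ph(t), t): dph}

# consistency of the polar change: (v_k)' must equal v_k'
print(sp.simplify(sp.diff(v, t).subs(rules) - w))  # 0
# equation (3.10), with g1 v_k + g2 v_k' collected into gamma_k r_k
print(sp.simplify(sp.diff(w, t).subs(rules)
                  - (-D * w / (t + 1) - lam ** 2 * v + g(t) * r(t))))  # 0
```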

Finally, we perform one more variable change in order to get rid of $\left(t+1\right)$ in the denominators of equations (3.11)–(3.12). To this end, for every $k\in ℕ$, we set

${\rho }_{k}\left(t\right):={r}_{k}\left({e}^{t}-1\right),{\theta }_{k}\left(t\right):={\phi }_{k}\left({e}^{t}-1\right),$

and we realize that in these new variables system (3.11)–(3.12) reads as

${\rho }_{k}^{\prime }=-\left(\sum _{i=0}^{\mathrm{\infty }}{\rho }_{i}^{2}{\mathrm{sin}}^{2}{\theta }_{i}-1\right){\rho }_{k}{\mathrm{sin}}^{2}{\theta }_{k}+{\mathrm{\Gamma }}_{1,k}\left(t\right){\rho }_{k},$(3.14)${\theta }_{k}^{\prime }=-{\lambda }_{k}{e}^{t}-\left(\sum _{i=0}^{\mathrm{\infty }}{\rho }_{i}^{2}{\mathrm{sin}}^{2}{\theta }_{i}-1\right)\mathrm{sin}{\theta }_{k}\mathrm{cos}{\theta }_{k}+{\mathrm{\Gamma }}_{2,k}\left(t\right),$(3.15)

where

${\mathrm{\Gamma }}_{1,k}\left(t\right):={e}^{t}{\gamma }_{k}\left({e}^{t}-1\right)\mathrm{sin}{\theta }_{k}\left(t\right),{\mathrm{\Gamma }}_{2,k}\left(t\right):={e}^{t}{\gamma }_{k}\left({e}^{t}-1\right)\mathrm{cos}{\theta }_{k}\left(t\right),$

and so from (3.13) it follows, on replacing t by ${e}^{t}-1$, that there exists a constant ${M}_{7}$ such that

$|{\mathrm{\Gamma }}_{1,k}\left(t\right)|+|{\mathrm{\Gamma }}_{2,k}\left(t\right)|\le {M}_{7}{e}^{-t}\text{ for every }t\ge 0\text{ and every }k\in ℕ.$(3.16)

We observe that ${\rho }_{k}$ can be factored out in the right-hand side of (3.14), and hence either ${\rho }_{k}\left(t\right)=0$ for every $t\ge 0$, or ${\rho }_{k}\left(t\right)>0$ for every $t\ge 0$, where the second option applies if and only if k belongs to the set J defined in (2.3). We observe also that the sequence ${\rho }_{k}\left(t\right)$ is square-summable for every $t\ge 0$, and the square of its norm

$R\left(t\right):=\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{2}\left(t\right)=\sum _{k\in J}{\rho }_{k}^{2}\left(t\right)$(3.17)

satisfies

$R\left(t\right)=\left({|{v}^{\prime }\left({e}^{t}-1\right)|}^{2}+{|{A}^{1/2}v\left({e}^{t}-1\right)|}^{2}\right).$

In particular, from (3.7), it follows that

${M}_{3}\le R\left(t\right)\le {M}_{4}\text{ for every }t\ge 0$(3.18)

for every nontrivial solution.

Finally, we observe that in the new variables, Theorems 2.1 and 2.5 have been reduced to proving the following facts:

• (Finite-dimensional modes) If J is a nonempty finite set, then, for every $k\in J$, it turns out that

$\underset{t\to +\mathrm{\infty }}{lim}{\rho }_{k}\left(t\right)=\frac{2}{\sqrt{2j+1}},$(3.19)

and there exists a real number ${\theta }_{k,\mathrm{\infty }}$ such that

$\underset{t\to +\mathrm{\infty }}{lim}\left({\theta }_{k}\left(t\right)+{\lambda }_{k}{e}^{t}\right)={\theta }_{k,\mathrm{\infty }}.$(3.20)

• (Infinite-dimensional modes) If J is infinite, then

$\underset{t\to +\mathrm{\infty }}{lim}{\rho }_{k}\left(t\right)=0\text{ for every }k\in ℕ,$

and under the additional uniform gap assumption (2.7), it turns out that

$\underset{t\to +\mathrm{\infty }}{lim}R\left(t\right)=2.$(3.21)

## 4 Heuristics

In this section we make some drastic simplifications in equations (3.14)–(3.15). These non-rigorous steps lead to a simplified model, which is then analyzed rigorously in Theorem 4.1 below. The result is that solutions to the simplified model exhibit all the features stated in Theorems 2.1 and 2.5 for solutions to the full system. Since the derivation of the simplified model is not rigorous, we cannot exploit Theorem 4.1 in the study of (3.14)–(3.15). Nevertheless, the proof of Theorem 4.1 provides a short sketch, free of technicalities, of the ideas involved in the proof of the main results.

To begin with, in (3.14) and (3.15) we ignore the terms with ${\mathrm{\Gamma }}_{1,k}\left(t\right)$ and ${\mathrm{\Gamma }}_{2,k}\left(t\right)$. Indeed, these terms are integrable because of (3.16), and hence it is reasonable to expect that they have no influence on the asymptotic dynamics. Now let us consider (3.15), which seems to suggest that ${\theta }_{k}\left(t\right)\sim -{\lambda }_{k}{e}^{t}$. If this is true, then the trigonometric terms in (3.14) oscillate very quickly, and in turn this suggests that some homogenization effect takes place. Therefore, it seems reasonable to replace all those oscillating terms with their time-averages.

The time-averages can be easily computed to be

$\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{t}{\int }_{0}^{t}{\mathrm{sin}}^{2}\left(\lambda {e}^{s}\right)𝑑s=\frac{1}{2}$(4.1)$\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{t}{\int }_{0}^{t}{\mathrm{sin}}^{2}\left(\lambda {e}^{s}\right)\cdot {\mathrm{sin}}^{2}\left(\mu {e}^{s}\right)𝑑s=\frac{1}{4}$(4.2)$\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{t}{\int }_{0}^{t}{\mathrm{sin}}^{4}\left(\lambda {e}^{s}\right)𝑑s=\frac{3}{8}$(4.3)

A comparison of (4.1) and (4.2) reveals that the two oscillating functions in the integral (4.2) are in some sense independent when $\lambda \ne \mu$, while (4.3) shows that this independence fails when $\lambda =\mu$. This lack of independence plays a fundamental role in the sequel.
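The three averages (4.1)–(4.3) can be checked by direct numerical quadrature over a long window. In the sketch below (ours, not from the paper), the frequencies 3 and 5 and the window length are arbitrary choices; the residual errors decay like the reciprocal of the window length:

```python
import math

def time_average(f, T=7.0, n=400000):
    # midpoint rule for (1/T) * integral of f over [0, T]
    h = T / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h / T

lam, mu = 3.0, 5.0  # arbitrary distinct frequencies
a1 = time_average(lambda s: math.sin(lam * math.exp(s)) ** 2)
a2 = time_average(lambda s: math.sin(lam * math.exp(s)) ** 2
                            * math.sin(mu * math.exp(s)) ** 2)
a3 = time_average(lambda s: math.sin(lam * math.exp(s)) ** 4)
print(a1, a2, a3)  # approach 1/2, 1/4, 3/8 as the window grows
```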

After replacing all oscillating coefficients in (3.14) with their time-averages, we are left with the following system of autonomous ordinary differential equations:

${\rho }_{k}^{\prime }={\rho }_{k}\left(\frac{1}{2}-\frac{3}{8}{\rho }_{k}^{2}-\frac{1}{4}\sum _{i\ne k}{\rho }_{i}^{2}\right)={\rho }_{k}\left(\frac{1}{2}-\frac{1}{8}{\rho }_{k}^{2}-\frac{1}{4}\sum _{i=0}^{\mathrm{\infty }}{\rho }_{i}^{2}\right)$(4.4)

Quite magically, this system turns out to be the gradient flow of the functional

$\mathcal{ℱ}\left(\rho \right):=-\frac{1}{4}\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{2}+\frac{1}{16}{\left(\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{2}\right)}^{2}+\frac{1}{32}\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{4},$

where ρ belongs to the space of square-summable sequences of nonnegative real numbers ${\mathrm{\ell }}_{+}^{2}$. Since $\mathcal{ℱ}\left(\rho \right)$ is a continuous quadratic perturbation of a convex functional (the sum of the last two terms), its gradient flow generates a semigroup in ${\mathrm{\ell }}_{+}^{2}$. Solutions are expected to be asymptotic to stationary points of $\mathcal{ℱ}\left(\rho \right)$. In addition to the trivial stationary point with all components equal to 0, all remaining stationary points ρ are of the form

${\rho }_{k}=\frac{2}{\sqrt{2j+1}}\text{ if }k\in J,\qquad {\rho }_{k}=0\text{ if }k\notin J,$

for some finite subset $J\subseteq ℕ$ with j elements. Incidentally, it is not difficult to check that any such stationary point is the minimum point of the restriction of $\mathcal{ℱ}\left(\rho \right)$ to the subset

${W}_{J}:=\left\{\rho \in {\mathrm{\ell }}_{+}^{2}:{\rho }_{k}=0\text{ for every }k\notin J\right\}.$(4.5)
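The gradient-flow structure can be verified symbolically on a finite truncation, together with the fact that the constant value $2/\sqrt{2j+1}$ is stationary. A sketch of ours (the truncation size is arbitrary; sympy is assumed available):

```python
import sympy as sp

n = 4  # any finite truncation exhibits the structure
rho = sp.symbols('rho0:4', positive=True)
S2 = sum(x ** 2 for x in rho)

# the functional F restricted to n components
F = (-sp.Rational(1, 4) * S2
     + sp.Rational(1, 16) * S2 ** 2
     + sp.Rational(1, 32) * sum(x ** 4 for x in rho))

for x in rho:
    rhs = x * (sp.Rational(1, 2) - x ** 2 / 8 - S2 / 4)  # right-hand side of (4.4)
    assert sp.simplify(-sp.diff(F, x) - rhs) == 0

# the constant value 2/sqrt(2j+1) is stationary when j = n components are active
j = n
val = 2 / sp.sqrt(2 * j + 1)
assert sp.simplify(val * (sp.Rational(1, 2) - val ** 2 / 8 - j * val ** 2 / 4)) == 0
print("gradient-flow structure and stationary points verified")
```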

Now we show that the asymptotic behavior of solutions to the averaged system (4.4) corresponds to the results announced in our main theorems.

#### Theorem 4.1 (Asymptotics for solutions to the homogenized system).

Let $\mathrm{\left\{}{\rho }_{k}\mathit{}\mathrm{\left(}t\mathrm{\right)}\mathrm{\right\}}$ be a solution to system (4.4) in ${\mathrm{\ell }}_{\mathrm{+}}^{\mathrm{2}}$, and let $J\mathrm{:=}\mathrm{\left\{}k\mathrm{\in }\mathrm{N}\mathrm{:}{\rho }_{k}\mathit{}\mathrm{\left(}\mathrm{0}\mathrm{\right)}\mathrm{>}\mathrm{0}\mathrm{\right\}}$. Then the asymptotic behavior of the solution depends on J as follows.

• (1)

(Trivial null solution) If $J=\mathrm{\varnothing }$ , then ${\rho }_{k}\left(t\right)=0$ for every $k\in ℕ$ and every $t\ge 0$.

• (2)

(Finite-dimensional modes) If J is a finite set with j elements, then ${\rho }_{k}\left(t\right)=0$ for every $k\notin J$ and every $t\ge 0$ , and

$\underset{t\to +\mathrm{\infty }}{lim}{\rho }_{k}\left(t\right)=\frac{2}{\sqrt{2j+1}}\text{ for every }k\in J.$(4.6)

In other words, in this case the solution lives in the subspace ${W}_{J}$ defined by (4.5), and tends to the minimum point of the restriction of $\mathcal{ℱ}\left(\rho \right)$ to ${W}_{J}$.

• (3)

(Infinite-dimensional modes) If J is infinite, then

$\underset{t\to +\mathrm{\infty }}{lim}{\rho }_{k}\left(t\right)=0\text{ for every }k\in ℕ,$

but

$\underset{t\to +\mathrm{\infty }}{lim}\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{2}\left(t\right)=2$(4.7)

and, in particular, the solution tends to 0 weakly but not strongly.
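Statement (2) is easy to observe numerically, since the averaged system (4.4) is non-oscillatory and converges exponentially. A sketch of ours (initial data and step sizes are arbitrary choices), with $j=3$ positive components:

```python
def simulate_averaged(rho0, t_end=80.0, dt=0.01):
    """RK4 for the averaged system (4.4), truncated to finitely many components."""
    rho = list(rho0)

    def f(r):
        s2 = sum(x * x for x in r)
        return [x * (0.5 - x * x / 8 - s2 / 4) for x in r]

    for _ in range(int(t_end / dt)):
        k1 = f(rho)
        k2 = f([x + 0.5 * dt * k for x, k in zip(rho, k1)])
        k3 = f([x + 0.5 * dt * k for x, k in zip(rho, k2)])
        k4 = f([x + dt * k for x, k in zip(rho, k3)])
        rho = [x + dt * (a + 2 * b + 2 * c + d) / 6
               for x, a, b, c, d in zip(rho, k1, k2, k3, k4)]
    return rho

final = simulate_averaged([1.0, 0.7, 0.4])  # j = 3 positive components
print(final)                        # each component near 2/sqrt(7)
print(sum(x * x for x in final))    # near 4j/(2j+1) = 12/7
```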

#### Proof.

First of all, we observe that components with null initial datum remain null during the evolution, while components with positive initial datum remain positive for all subsequent times.

Then we introduce the total energy $R\left(t\right)$ of the solution, defined as in (3.17). Moreover, for every pair of indices h and k in J, we consider the ratio

${Q}_{h,k}\left(t\right):=\frac{{\rho }_{k}\left(t\right)}{{\rho }_{h}\left(t\right)},$(4.8)

which is well defined because the denominator never vanishes.

Simple calculations show that

${R}^{\prime }\left(t\right)=R\left(t\right)-\frac{1}{2}{R}^{2}\left(t\right)-\frac{1}{4}\sum _{k\in J}{\rho }_{k}^{4}\left(t\right)$(4.9)

and

${Q}_{h,k}^{\prime }\left(t\right)=\frac{1}{8}\left({\rho }_{h}^{2}\left(t\right)-{\rho }_{k}^{2}\left(t\right)\right){Q}_{h,k}\left(t\right).$(4.10)

Now we prove some basic estimates on the energy and the quotients, and then we distinguish the case where all components tend to 0, and the case where at least one component does not tend to 0.

**Non-optimal energy estimates.** We prove that

$\frac{4}{3}\le \underset{t\to +\mathrm{\infty }}{lim inf}R\left(t\right)\le \underset{t\to +\mathrm{\infty }}{lim sup}R\left(t\right)\le 2.$(4.11)

Indeed, plugging the trivial estimate

$0\le \sum _{k\in J}{\rho }_{k}^{4}\left(t\right)\le {\left(\sum _{k\in J}{\rho }_{k}^{2}\left(t\right)\right)}^{2}$

into (4.9), we obtain that

$R\left(t\right)-\frac{3}{4}{R}^{2}\left(t\right)\le {R}^{\prime }\left(t\right)\le R\left(t\right)-\frac{1}{2}{R}^{2}\left(t\right).$

Integrating the two differential inequalities, we deduce (4.11).

Uniform boundedness of quotients. We prove that for every $h\in J$, there exists a constant ${D}_{h}$ such that

${Q}_{h,k}\left(t\right)\le {D}_{h}\quad \text{for every }k\in J\text{ and every }t\ge 0.$(4.12)

We point out that ${D}_{h}$ is independent of k, and actually it can be defined as

${D}_{h}:=\mathrm{max}\left\{1,\mathrm{max}\left\{{Q}_{h,k}\left(0\right):k\in J\right\}\right\}.$(4.13)

Therefore, it is enough to remark that the solutions to (4.10) are decreasing as long as they are greater than 1, and observe that the inner maximum in (4.13) is well defined because for every fixed $h\in J$, it turns out that ${Q}_{h,k}\left(0\right)\to 0$ as $k\to +\mathrm{\infty }$ (because ${\rho }_{k}\left(0\right)\to 0$ as $k\to +\mathrm{\infty }$).

The case where all components vanish in the limit. Let us assume that

$\underset{t\to +\mathrm{\infty }}{lim}{\rho }_{k}\left(t\right)=0\quad \text{for every }k\in J.$(4.14)

In this case, we prove that J is infinite and (4.7) holds true.

Let us assume that J is finite. Then from (4.14) it follows that $R\left(t\right)\to 0$ as $t\to +\mathrm{\infty }$, which contradicts the estimate from below in (4.11). So J is infinite.

In order to prove (4.7), let us fix any index ${h}_{0}\in J$. From (4.12), we obtain that

$\sum _{k\in J}{\rho }_{k}^{4}\left(t\right)=\sum _{k\in J}{Q}_{{h}_{0},k}^{2}\left(t\right){\rho }_{{h}_{0}}^{2}\left(t\right)\cdot {\rho }_{k}^{2}\left(t\right)\le {D}_{{h}_{0}}^{2}\cdot {\rho }_{{h}_{0}}^{2}\left(t\right)\cdot \sum _{k\in J}{\rho }_{k}^{2}\left(t\right).$

Plugging this estimate into (4.9), we deduce that

$R\left(t\right)-\frac{1}{2}{R}^{2}\left(t\right)-\frac{1}{4}{D}_{{h}_{0}}^{2}\cdot {\rho }_{{h}_{0}}^{2}\left(t\right)\cdot R\left(t\right)\le {R}^{\prime }\left(t\right)\le R\left(t\right)-\frac{1}{2}{R}^{2}\left(t\right).$(4.15)

Since ${\rho }_{{h}_{0}}^{2}\left(t\right)\cdot R\left(t\right)\to 0$ as $t\to +\mathrm{\infty }$, these two differential inequalities imply (4.7) (we refer to Proposition 5.3 below for a more general result).

The case where at least one component does not vanish in the limit. Let us assume that there exists ${h}_{0}\in J$ such that

$\underset{t\to +\mathrm{\infty }}{lim sup}{\rho }_{{h}_{0}}\left(t\right)>0.$(4.16)

In this case, we prove that J is finite and (4.6) holds true.

From (4.16), there exist $\delta >0$ and arbitrarily large times at which ${\rho }_{{h}_{0}}\left(t\right)\ge \delta $. Since ${\rho }_{{h}_{0}}\left(t\right)$ is Lipschitz continuous (because its time-derivative is bounded), it remains greater than $\delta /2$ on intervals of fixed positive length around those times, and hence we deduce that

${\int }_{0}^{+\mathrm{\infty }}{\rho }_{{h}_{0}}^{2}\left(t\right)𝑑t=+\mathrm{\infty },$

and hence from equation (4.10) we conclude that (we refer to Proposition 5.4 below for a more general result)

$\underset{t\to +\mathrm{\infty }}{lim}{Q}_{{h}_{0},k}\left(t\right)=1\quad \text{for every }k\in J.$(4.17)

We are now ready to prove that J is finite. Let us assume on the contrary that this is not the case. Then, for every $n\in ℕ$, there exists a subset ${J}_{n}\subseteq J$ with n elements, and hence

$R\left(t\right)\ge \sum _{k\in {J}_{n}}{\rho }_{k}^{2}\left(t\right)=\sum _{k\in {J}_{n}}{Q}_{{h}_{0},k}^{2}\left(t\right){\rho }_{{h}_{0}}^{2}\left(t\right)={\rho }_{{h}_{0}}^{2}\left(t\right)\sum _{k\in {J}_{n}}{Q}_{{h}_{0},k}^{2}\left(t\right).$

When $t\to +\mathrm{\infty }$, the last sum tends to n because of (4.17), and hence

$\underset{t\to +\mathrm{\infty }}{lim sup}R\left(t\right)\ge n\cdot \underset{t\to +\mathrm{\infty }}{lim sup}{\rho }_{{h}_{0}}^{2}\left(t\right),$

which contradicts the estimate from above in (4.11) when n is large enough.

To finish the proof, we now observe that the vector ${\left({\rho }_{k}\left(t\right)\right)}_{k\in J}$ is a bounded solution of a first-order gradient system, so that (cf., e.g., [4, Example 2.2.5] or [7, Corollary 7.3.1]) its omega-limit set is made of stationary points only. But the only stationary point satisfying the condition of having all its limiting components positive and equal is the point with all components equal to the right-hand side of (4.6). ∎
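As a sanity check of the dichotomy just proved, one can integrate a finite truncation of the averaged system numerically. System (4.4) is not reproduced in this excerpt, so the sketch below assumes the gradient form ${\rho }_{k}^{\prime }={\rho }_{k}\left(\frac{1}{2}-\frac{1}{4}\sum _{j}{\rho }_{j}^{2}-\frac{1}{8}{\rho }_{k}^{2}\right)$; this specific form is our inference, chosen to be compatible with the limit values $4/3$ and 2 appearing in (4.11) and (4.7), and it predicts a limit energy $4j/\left(2j+1\right)$ for j active modes.

```python
# Illustrative numerical check (our own sketch, not code from the paper).
# We integrate a finite truncation of an averaged modal system of the ASSUMED form
#     rho_k' = rho_k * (1/2 - (1/4) * sum_j rho_j^2 - (1/8) * rho_k^2),
# a gradient system compatible with the energy bounds 4/3 and 2 in (4.11) and (4.7).

def simulate(rho0, dt=0.01, t_max=60.0):
    """Forward Euler integration of the assumed averaged system."""
    rho = list(rho0)
    for _ in range(int(t_max / dt)):
        r2 = sum(x * x for x in rho)  # total energy R(t)
        rho = [x + dt * x * (0.5 - 0.25 * r2 - 0.125 * x * x) for x in rho]
    return rho

final = simulate([0.5, 1.0])        # two active modes (j = 2)
energy = sum(x * x for x in final)
print(round(energy, 3))             # ≈ 1.6, i.e. 4j/(2j+1) with j = 2
print(round(abs(final[0] - final[1]), 3))  # ≈ 0.0: equipartition between the modes
```

With j = 2 the simulated energy approaches $8/5\in \left[4/3,2\right]$ and the two components equalize, in line with the finite-dimensional case of the theorem.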

## 5 Estimates for differential inequalities

In this section we investigate the asymptotic behavior of solutions to two scalar differential equations characterized by the presence of fast oscillating terms. Equations of this form are going to appear in the proof of our main results as the equations solved by the energy of the solution and by the ratio between two Fourier components.

Throughout the text, we shall meet oscillatory functions which are not absolutely integrable at infinity but have a convergent integral in a weaker sense.

#### Definition 5.1 (Semi-integrable function).

A function $f\in {C}^{0}\left(\left[{t}_{0},\mathrm{\infty }\right),ℝ\right)$ will be called semi-integrable on $\left[{t}_{0},\mathrm{\infty }\right)$ if the integral

$F\left(t\right):={\int }_{{t}_{0}}^{t}f\left(s\right)𝑑s$

converges to a finite limit as t tends to $+\mathrm{\infty }$. In this case, the limit will be denoted as ${\int }_{{t}_{0}}^{+\mathrm{\infty }}f\left(s\right)𝑑s$.

#### Remark 5.2.

A classical example of a function which is semi-integrable but not absolutely integrable in $\left[{t}_{0},+\mathrm{\infty }\right)$ for ${t}_{0}>0$ is

$f\left(t\right)=\frac{\mathrm{cos}\left(\omega t+\varphi \right)}{{t}^{\alpha }},$(5.1)

whenever $0<\alpha \le 1$. Another classical case (Fresnel’s integrals) is

$f\left(t\right)=\mathrm{cos}\left(\omega {t}^{2}+\varphi \right).$

In the second case the integrability comes from fast oscillations at infinity and the convergence of the integral appears immediately by the change of variable $s={t}^{2}$, which reduces us to (5.1) with $\alpha =1/2$. The semi-integrable functions that we shall handle are closer to $\mathrm{cos}\left(c{e}^{bt}\right)$ in $\left[0,+\mathrm{\infty }\right)$, in which case the integral can be reduced to (5.1) with $\alpha =1$, by the change of variable $s={e}^{bt}$.
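For the reader's convenience, the change of variable mentioned above can be written out explicitly (here $b>0$ and c are constants):

```latex
\int_{t_0}^{T}\cos\bigl(c\,e^{bt}\bigr)\,dt
\;=\;
\int_{e^{bt_0}}^{e^{bT}}\frac{\cos(c\,s)}{b\,s}\,ds
\qquad\bigl(s=e^{bt},\ dt=\tfrac{ds}{bs}\bigr),
```

so the integral converges as $T\to +\mathrm{\infty }$ by comparison with (5.1) with $\alpha =1$.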

The first equation we consider is actually a differential inequality which generalizes (4.15). It takes the form

$z\left(t\right)-\frac{{z}^{2}\left(t\right)}{{z}_{\mathrm{\infty }}}+{\psi }_{1}\left(t\right)-{\psi }_{2}\left(t\right)\le {z}^{\prime }\left(t\right)\le z\left(t\right)-\frac{{z}^{2}\left(t\right)}{{z}_{\mathrm{\infty }}}+{\psi }_{1}\left(t\right)+{\psi }_{2}\left(t\right).$(5.2)

When ${z}_{\mathrm{\infty }}$ is a positive constant, and ${\psi }_{1}\left(t\right)\equiv {\psi }_{2}\left(t\right)\equiv 0$, this inequality reduces to an ordinary differential equation, and it is easy to see that all its positive solutions tend to ${z}_{\mathrm{\infty }}$ as $t\to +\mathrm{\infty }$. In the following statement we show that the same conclusion is true under a more general assumption on ${\psi }_{1}\left(t\right)$ and ${\psi }_{2}\left(t\right)$.
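The unperturbed case can be checked by solving the equation explicitly. Assuming, consistently with the substitution $a\left(t\right)=z\left(t\right)/{z}_{\mathrm{\infty }}$ used in the proof below, that the inequality collapses to the logistic equation ${z}^{\prime }=z-{z}^{2}/{z}_{\mathrm{\infty }}$, one finds:

```latex
z'(t)=z(t)\Bigl(1-\frac{z(t)}{z_\infty}\Bigr)
\quad\Longrightarrow\quad
z(t)=\frac{z_\infty\,z(0)\,e^{t}}{z_\infty+z(0)\bigl(e^{t}-1\bigr)}
\;\longrightarrow\; z_\infty
\qquad(t\to+\infty)
```

for every initial datum $z\left(0\right)>0$.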

#### Proposition 5.3.

Let ${z}_{\mathrm{\infty }}$ be a positive constant, and let $z\mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{R}$ be a solution of class ${C}^{\mathrm{1}}$ to the differential inequality (5.2). Let us assume the following:

• (i)

The function ${\psi }_{1}:\left[0,+\mathrm{\infty }\right)\to ℝ$ is continuous and semi-integrable on $\left[0,+\mathrm{\infty }\right)$.

• (ii)

The function ${\psi }_{2}:\left[0,+\mathrm{\infty }\right)\to ℝ$ is continuous and satisfies

$\underset{t\to +\mathrm{\infty }}{lim}{\psi }_{2}\left(t\right)=0.$(5.3)

• (iii)

There exists a constant ${c}_{0}$ such that

$z\left(t\right)\ge {c}_{0}>0\quad \text{for every }t\ge 0.$(5.4)

Then it turns out that

$\underset{t\to +\mathrm{\infty }}{lim}z\left(t\right)={z}_{\mathrm{\infty }}.$(5.5)

#### Proof.

For every $t\ge 0$, let us set

$x\left(t\right):=z\left(t\right)-{z}_{\mathrm{\infty }},a\left(t\right):=1+\frac{x\left(t\right)}{{z}_{\mathrm{\infty }}}=\frac{z\left(t\right)}{{z}_{\mathrm{\infty }}}.$

Now (5.2) is equivalent to the two differential inequalities

${x}^{\prime }\left(t\right)\le -a\left(t\right)x\left(t\right)+{\psi }_{1}\left(t\right)+{\psi }_{2}\left(t\right),$(5.6)${x}^{\prime }\left(t\right)\ge -a\left(t\right)x\left(t\right)+{\psi }_{1}\left(t\right)-{\psi }_{2}\left(t\right).$(5.7)

Assumption (5.4) implies that

$a\left(t\right)\ge \frac{{c}_{0}}{{z}_{\mathrm{\infty }}}>0\quad \text{for every }t\ge 0,$(5.8)

and (5.5) is equivalent to

$\underset{t\to +\mathrm{\infty }}{lim}x\left(t\right)=0.$(5.9)

Let us set

$A\left(t\right):={\int }_{0}^{t}a\left(\tau \right)𝑑\tau ,$

and observe that (5.8) implies that $A\left(t\right)$ is increasing and

$\underset{t\to +\mathrm{\infty }}{lim}A\left(t\right)=+\mathrm{\infty }.$(5.10)

Let us concentrate on the differential inequality (5.6). Due to a well-known formula, every solution satisfies

$x\left(t\right)\le {e}^{-A\left(t\right)}x\left(0\right)+{e}^{-A\left(t\right)}{\int }_{0}^{t}{e}^{A\left(\tau \right)}{\psi }_{2}\left(\tau \right)𝑑\tau +{e}^{-A\left(t\right)}{\int }_{0}^{t}{e}^{A\left(\tau \right)}{\psi }_{1}\left(\tau \right)𝑑\tau .$

We claim that the three terms in the right-hand side tend to 0 as $t\to +\mathrm{\infty }$, and hence

$\underset{t\to +\mathrm{\infty }}{lim sup}x\left(t\right)\le 0.$(5.11)

This is clear for the first term because of (5.10). Since $A\left(t\right)$ is increasing and tends to $+\mathrm{\infty }$, we can apply de L’Hôpital’s rule to the second term. Taking (5.3) and (5.8) into account, we obtain that

$\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{{e}^{A\left(t\right)}}{\int }_{0}^{t}{e}^{A\left(\tau \right)}{\psi }_{2}\left(\tau \right)𝑑\tau =\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{a\left(t\right){e}^{A\left(t\right)}}\cdot {e}^{A\left(t\right)}{\psi }_{2}\left(t\right)=0.$

In order to estimate the third term, let us introduce the function

${\mathrm{\Psi }}_{1}\left(t\right):=-{\int }_{t}^{+\mathrm{\infty }}{\psi }_{1}\left(\tau \right)𝑑\tau ,$

so that ${\mathrm{\Psi }}_{1}^{\prime }\left(t\right)={\psi }_{1}\left(t\right)$. Due to the semi-integrability of ${\psi }_{1}\left(t\right)$, the function ${\mathrm{\Psi }}_{1}\left(t\right)$ is well defined and ${\mathrm{\Psi }}_{1}\left(t\right)\to 0$ as $t\to +\mathrm{\infty }$. Now an integration by parts gives that

${\int }_{0}^{t}{e}^{A\left(\tau \right)}{\psi }_{1}\left(\tau \right)𝑑\tau ={e}^{A\left(t\right)}{\mathrm{\Psi }}_{1}\left(t\right)-{\mathrm{\Psi }}_{1}\left(0\right)-{\int }_{0}^{t}a\left(\tau \right){e}^{A\left(\tau \right)}{\mathrm{\Psi }}_{1}\left(\tau \right)𝑑\tau .$

The first two terms tend to 0 when multiplied by ${e}^{-A\left(t\right)}$. As for the third term, we apply again de L’Hôpital’s rule and conclude that

$\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{{e}^{A\left(t\right)}}{\int }_{0}^{t}a\left(\tau \right){e}^{A\left(\tau \right)}{\mathrm{\Psi }}_{1}\left(\tau \right)𝑑\tau =\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{a\left(t\right){e}^{A\left(t\right)}}\cdot a\left(t\right){e}^{A\left(t\right)}{\mathrm{\Psi }}_{1}\left(t\right)=0.$

This completes the proof of (5.11).

In an analogous way, from (5.7), we deduce that

$\underset{t\to +\mathrm{\infty }}{lim inf}x\left(t\right)\ge 0.$(5.12)

From (5.11) and (5.12), we obtain (5.9), and this completes the proof. ∎

The second equation we consider is a generalization of (4.10). It takes the form

${z}^{\prime }\left(t\right)=\left(\alpha \left(t\right)+\gamma \left(t\right)\right)z\left(t\right)-\alpha \left(t\right)\left(1-\beta \left(t\right)\right){z}^{3}\left(t\right).$(5.13)

When $\alpha \left(t\right)\equiv 1$ and $\beta \left(t\right)\equiv \gamma \left(t\right)\equiv 0$, it is easy to see that all positive solutions tend to 1 as $t\to +\mathrm{\infty }$. In the following result we prove the same conclusion under more general assumptions on the coefficients.
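In this special case the explicit solution is immediate from the linear equation (5.17), specialized to $\alpha \equiv 1$ and $\beta \equiv \gamma \equiv 0$, via the substitution $x={z}^{-2}$ used in the proof below:

```latex
x'(t)=-2x(t)+2
\;\Longrightarrow\;
x(t)=1+\bigl(x(0)-1\bigr)e^{-2t},
\qquad
z(t)=\Bigl[1+\bigl(z(0)^{-2}-1\bigr)e^{-2t}\Bigr]^{-1/2}
\;\longrightarrow\;1 .
```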

#### Proposition 5.4.

Let $z\mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{\left(}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}$ be a positive solution of class ${C}^{\mathrm{1}}$ to the differential equation (5.13).

Let us assume the following:

• (i)

The function $\alpha :\left[0,+\mathrm{\infty }\right)\to \left(0,+\mathrm{\infty }\right)$ is bounded and of class ${C}^{1}$ , and it satisfies

${\int }_{0}^{+\mathrm{\infty }}\alpha \left(t\right)𝑑t=+\mathrm{\infty }.$(5.14)

• (ii)

There exists a constant ${L}_{0}$ such that

$|{\alpha }^{\prime }\left(t\right)|\le {L}_{0}\alpha \left(t\right)\quad \text{for every }t\ge 0.$(5.15)

• (iii)

The functions $\beta :\left[0,+\mathrm{\infty }\right)\to ℝ$ and $\gamma :\left[0,+\mathrm{\infty }\right)\to ℝ$ are bounded and semi-integrable.

Then it turns out that

$\underset{t\to +\mathrm{\infty }}{lim}z\left(t\right)=1.$(5.16)

#### Proof.

Equation (5.13) is a classical Bernoulli equation, and the usual variable change $x\left(t\right):={\left[z\left(t\right)\right]}^{-2}$ transforms it into the linear equation

${x}^{\prime }\left(t\right)=-2\left(\alpha \left(t\right)+\gamma \left(t\right)\right)x\left(t\right)+2\alpha \left(t\right)\left(1-\beta \left(t\right)\right).$(5.17)
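The substitution can be verified directly; we assume here that (5.13) has the Bernoulli form ${z}^{\prime }=\left(\alpha +\gamma \right)z-\alpha \left(1-\beta \right){z}^{3}$, which is exactly the form that the linear equation (5.17) dictates:

```latex
x(t)=z(t)^{-2}
\;\Longrightarrow\;
x'(t)=-2z^{-3}z'
=-2z^{-3}\Bigl[(\alpha+\gamma)z-\alpha(1-\beta)z^{3}\Bigr]
=-2(\alpha+\gamma)x+2\alpha(1-\beta).
```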

In the new setting, conclusion (5.16) is equivalent to proving that

$\underset{t\to +\mathrm{\infty }}{lim}x\left(t\right)=1.$(5.18)

In order to avoid plenty of factors 2, with a little abuse of notation, we replace $2\alpha \left(t\right)$, $2\beta \left(t\right)$, $2\gamma \left(t\right)$ with $\alpha \left(t\right)$, $\beta \left(t\right)$, $\gamma \left(t\right)$. This does not change the assumptions, but allows us to rewrite (5.17) in the simpler form

${x}^{\prime }\left(t\right)=-\left(\alpha \left(t\right)+\gamma \left(t\right)\right)x\left(t\right)+\alpha \left(t\right)\left(1-\beta \left(t\right)\right).$(5.19)

Now we introduce the function

$A\left(t\right):={\int }_{0}^{t}\alpha \left(\tau \right)𝑑\tau ,$

and observe that

$\underset{t\to +\mathrm{\infty }}{lim}A\left(t\right)=+\mathrm{\infty },$(5.20)

because of assumption (5.14). We also introduce the functions

$B\left(t\right):={\int }_{t}^{+\mathrm{\infty }}\beta \left(\tau \right)𝑑\tau ,C\left(t\right):={\int }_{0}^{t}\gamma \left(\tau \right)𝑑\tau ,$

which are well defined for every $t\ge 0$ as a consequence of assumption (iii), and satisfy

$\underset{t\to +\mathrm{\infty }}{lim}B\left(t\right)=0,$(5.21)$\underset{t\to +\mathrm{\infty }}{lim}C\left(t\right)=:{C}_{\mathrm{\infty }}\in ℝ.$(5.22)

Every solution to (5.19) is given by the well-known formula

$x\left(t\right)={e}^{-A\left(t\right)-C\left(t\right)}x\left(0\right)+{e}^{-A\left(t\right)-C\left(t\right)}{\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\alpha \left(\tau \right)𝑑\tau -{e}^{-A\left(t\right)-C\left(t\right)}{\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\alpha \left(\tau \right)\beta \left(\tau \right)𝑑\tau .$
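The formula can be verified by differentiation. With $A\left(t\right)$ and $C\left(t\right)$ denoting the primitives of $\alpha $ and $\gamma $ vanishing at $t=0$, multiplying (5.19) by the integrating factor ${e}^{A+C}$ gives:

```latex
\frac{d}{dt}\Bigl[e^{A(t)+C(t)}x(t)\Bigr]
=e^{A+C}\bigl[x'+(\alpha+\gamma)x\bigr]
=e^{A+C}\,\alpha\,(1-\beta),
```

and integrating from 0 to t, then multiplying by ${e}^{-A\left(t\right)-C\left(t\right)}$, yields the displayed expression once $\alpha \left(1-\beta \right)$ is split as $\alpha -\alpha \beta $.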

We claim that the first and third term tend to 0 as $t\to +\mathrm{\infty }$, while the second term tends to 1. This would complete the proof of (5.18).

The first term tends to 0 because of (5.20) and (5.22).

The second term can be rewritten as

${e}^{-C\left(t\right)}\cdot \frac{1}{{e}^{A\left(t\right)}}{\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\alpha \left(\tau \right)𝑑\tau .$

The factor ${e}^{-C\left(t\right)}$ tends to ${e}^{-{C}_{\mathrm{\infty }}}$. Since $A\left(t\right)$ is increasing and tends to $+\mathrm{\infty }$, we can apply de L’Hôpital’s rule to the second factor. We obtain that

$\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{{e}^{A\left(t\right)}}{\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\alpha \left(\tau \right)𝑑\tau =\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{\alpha \left(t\right){e}^{A\left(t\right)}}\cdot {e}^{A\left(t\right)+C\left(t\right)}\alpha \left(t\right)={e}^{{C}_{\mathrm{\infty }}},$

and this settles the second term.

In order to compute the limit of the third term, we integrate by parts. We obtain that

${\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\alpha \left(\tau \right)\beta \left(\tau \right)𝑑\tau ={e}^{A\left(t\right)+C\left(t\right)}\alpha \left(t\right)B\left(t\right)-\alpha \left(0\right)B\left(0\right)-{\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\left[\left(\alpha \left(\tau \right)+\gamma \left(\tau \right)\right)\alpha \left(\tau \right)+{\alpha }^{\prime }\left(\tau \right)\right]B\left(\tau \right)𝑑\tau .$

When we multiply by ${e}^{-A\left(t\right)-C\left(t\right)}$, the first two terms in the right-hand side tend to 0, because of (5.20)–(5.22) and the boundedness of the function $\alpha \left(t\right)$. Thanks to assumption (5.15), the absolute value of the last integral is less than or equal to

${\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\left(|\alpha \left(\tau \right)|+|\gamma \left(\tau \right)|+{L}_{0}\right)\alpha \left(\tau \right)|B\left(\tau \right)|𝑑\tau .$

Now we multiply by ${e}^{-A\left(t\right)-C\left(t\right)}$, we factor out ${e}^{-C\left(t\right)}$, and we compute the limit of the rest by exploiting de L’Hôpital’s rule, as we did before. From (5.20)–(5.22) and the boundedness of the functions $\alpha \left(t\right)$ and $\gamma \left(t\right)$, we conclude that

$\underset{t\to +\mathrm{\infty }}{lim}\frac{1}{{e}^{A\left(t\right)}}{\int }_{0}^{t}{e}^{A\left(\tau \right)+C\left(\tau \right)}\left(|\alpha \left(\tau \right)|+|\gamma \left(\tau \right)|+{L}_{0}\right)\alpha \left(\tau \right)|B\left(\tau \right)|𝑑\tau =\underset{t\to +\mathrm{\infty }}{lim}\frac{{e}^{A\left(t\right)+C\left(t\right)}\left(|\alpha \left(t\right)|+|\gamma \left(t\right)|+{L}_{0}\right)\alpha \left(t\right)|B\left(t\right)|}{\alpha \left(t\right){e}^{A\left(t\right)}}=0.$

This completes the proof of (5.18). ∎

In the third and last result of this section, we consider again equation (5.13). Let us assume for simplicity that $\alpha \left(t\right)\ge 0$ for every $t\ge 0$, and $\beta \left(t\right)\equiv \gamma \left(t\right)\equiv 0$. These assumptions do not guarantee that positive solutions tend to 1 as $t\to +\mathrm{\infty }$, but nevertheless they are enough to conclude that all solutions are bounded from above for $t\ge 0$ (because solutions are decreasing as long as they stay in the region $z\left(t\right)>1$). In the following result we prove a similar conclusion under more general assumptions on the coefficients.

#### Proposition 5.5.

Let $z\mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{\left(}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}$ be a positive solution of class ${C}^{\mathrm{1}}$ to the differential equation (5.13). Let us assume the following:

• (i)

The function $\alpha :\left[0,+\mathrm{\infty }\right)\to \left(0,+\mathrm{\infty }\right)$ is of class ${C}^{1}$.

• (ii)

The functions $\beta :\left[0,+\mathrm{\infty }\right)\to ℝ$ and $\gamma :\left[0,+\mathrm{\infty }\right)\to ℝ$ are continuous.

• (iii)

There exists a constant ${L}_{1}$ such that

$\mathrm{max}\left\{\alpha \left(t\right),|{\alpha }^{\prime }\left(t\right)|,|\beta \left(t\right)|,|\gamma \left(t\right)|\right\}\le {L}_{1}\quad \text{for every }t\ge 0.$(5.23)

• (iv)

There exists a constant ${L}_{2}$ such that

$|{\int }_{t}^{s}\beta \left(\tau \right)𝑑\tau |\le {L}_{2}{e}^{-t}\quad \text{and}\quad |{\int }_{t}^{s}\gamma \left(\tau \right)𝑑\tau |\le {L}_{2}{e}^{-t}\quad \text{for every }s\ge t\ge 0.$(5.24)

Let ${t}_{\mathrm{0}}\mathrm{\ge }\mathrm{0}$ be any nonnegative real number such that

${L}_{2}\left(1+9{L}_{1}+32{L}_{1}^{2}+32{L}_{1}^{3}\right){e}^{-{t}_{0}}<\mathrm{log}2.$(5.25)

Then the following implication holds true:

$z\left({t}_{0}\right)\le 1⟹\underset{t\ge {t}_{0}}{sup}z\left(t\right)\le 2.$

#### Proof.

Let us assume that $z\left({t}_{0}\right)\le 1$, and set

${t}_{2}:=\mathrm{sup}\left\{t\ge {t}_{0}:z\left(\tau \right)\le 2\text{ for every }\tau \in \left[{t}_{0},t\right]\right\}.$

If ${t}_{2}=+\mathrm{\infty }$, the result is proved. Let us assume by contradiction that this is not the case, and hence ${t}_{2}<+\mathrm{\infty }$. Due to the continuity of $z\left(t\right)$ and the maximality of ${t}_{2}$, it follows that

$z\left({t}_{2}\right)=2.$(5.26)

Let us set

${t}_{1}:=\mathrm{sup}\left\{t\in \left[{t}_{0},{t}_{2}\right]:z\left(t\right)\le 1\right\}.$

Then it turns out that ${t}_{0}\le {t}_{1}<{t}_{2}$ and, moreover,

$z\left({t}_{1}\right)=1$(5.27)

and

$1\le z\left(t\right)\le 2\quad \text{for every }t\in \left[{t}_{1},{t}_{2}\right].$(5.28)

Due to (5.23) and (5.28), from (5.13), we deduce that

$|{z}^{\prime }\left(t\right)|\le 8{L}_{1}\left(1+{L}_{1}\right)\quad \text{for every }t\in \left[{t}_{1},{t}_{2}\right].$(5.29)

Since $z\left(t\right)\ge 1$ in $\left[{t}_{1},{t}_{2}\right]$ and $\alpha \left(t\right)$ is positive, (5.13) implies also that

${z}^{\prime }\left(t\right)\le \left(\gamma \left(t\right)+\alpha \left(t\right)\beta \left(t\right){z}^{2}\left(t\right)\right)z\left(t\right)\quad \text{for every }t\in \left[{t}_{1},{t}_{2}\right],$

which we can integrate as a linear differential inequality. Taking (5.27) into account, we find that

$z\left(t\right)\le \mathrm{exp}\left({\int }_{{t}_{1}}^{t}\gamma \left(\tau \right)𝑑\tau +{\int }_{{t}_{1}}^{t}\alpha \left(\tau \right)\beta \left(\tau \right){z}^{2}\left(\tau \right)𝑑\tau \right)\quad \text{for every }t\in \left[{t}_{1},{t}_{2}\right].$

Now we claim that

${\int }_{{t}_{1}}^{{t}_{2}}\gamma \left(\tau \right)𝑑\tau +{\int }_{{t}_{1}}^{{t}_{2}}\alpha \left(\tau \right)\beta \left(\tau \right){z}^{2}\left(\tau \right)𝑑\tau <\mathrm{log}2.$(5.30)

This would imply that $z\left({t}_{2}\right)<2$, thus contradicting (5.26). Due to the second inequality in (5.24), we can estimate the first integral as

${\int }_{{t}_{1}}^{{t}_{2}}\gamma \left(\tau \right)𝑑\tau \le {L}_{2}{e}^{-{t}_{1}}\le {L}_{2}{e}^{-{t}_{0}}.$(5.31)

In order to estimate the second integral, we introduce the function

$B\left(t\right):={\int }_{t}^{+\mathrm{\infty }}\beta \left(\tau \right)𝑑\tau .$

This function is well defined because of the first inequality in (5.24) and, for the same reason, it satisfies

$|B\left(t\right)|\le {L}_{2}{e}^{-t}\quad \text{for every }t\ge 0.$(5.32)

Now an integration by parts gives that

${\int }_{{t}_{1}}^{{t}_{2}}\alpha \left(\tau \right)\beta \left(\tau \right){z}^{2}\left(\tau \right)𝑑\tau =\alpha \left({t}_{2}\right){z}^{2}\left({t}_{2}\right)B\left({t}_{2}\right)-\alpha \left({t}_{1}\right){z}^{2}\left({t}_{1}\right)B\left({t}_{1}\right)-{\int }_{{t}_{1}}^{{t}_{2}}B\left(\tau \right)\left({\alpha }^{\prime }\left(\tau \right){z}^{2}\left(\tau \right)+2\alpha \left(\tau \right)z\left(\tau \right){z}^{\prime }\left(\tau \right)\right)𝑑\tau .$

From (5.23), (5.26), (5.27) and (5.32), it follows that

$|\alpha \left({t}_{2}\right){z}^{2}\left({t}_{2}\right)B\left({t}_{2}\right)-\alpha \left({t}_{1}\right){z}^{2}\left({t}_{1}\right)B\left({t}_{1}\right)|\le {L}_{1}\cdot 4\cdot {L}_{2}{e}^{-{t}_{2}}+{L}_{1}\cdot 1\cdot {L}_{2}{e}^{-{t}_{1}}\le 5{L}_{1}{L}_{2}{e}^{-{t}_{0}}.$(5.33)

From (5.23), (5.28), (5.29) and (5.32), we have

$|B\left(\tau \right)\left({\alpha }^{\prime }\left(\tau \right){z}^{2}\left(\tau \right)+2\alpha \left(\tau \right)z\left(\tau \right){z}^{\prime }\left(\tau \right)\right)|\le {L}_{2}{e}^{-\tau }\left(4{L}_{1}+32{L}_{1}\left({L}_{1}^{2}+{L}_{1}\right)\right)\le 4{L}_{2}\left({L}_{1}+8{L}_{1}^{2}+8{L}_{1}^{3}\right){e}^{-\tau }$(5.34)

for every $\tau \in \left[{t}_{1},{t}_{2}\right]$. From (5.33) and (5.34), it follows that

${\int }_{{t}_{1}}^{{t}_{2}}\alpha \left(\tau \right)\beta \left(\tau \right){z}^{2}\left(\tau \right)𝑑\tau \le {L}_{2}\left(9{L}_{1}+32{L}_{1}^{2}+32{L}_{1}^{3}\right){e}^{-{t}_{0}}.$(5.35)

Adding (5.31) and (5.35), and taking assumption (5.25) into account, we obtain (5.30). This completes the proof. ∎

## 6 Estimates on oscillating integrals

In the three results of this section, we prove the convergence of some oscillating integrals and series of oscillating integrals. We need these estimates in the proof of our main result when we deal with the trigonometric terms of (3.14) and (3.15).

#### Lemma 6.1.

Let $\alpha \mathrm{>}\mathrm{0}$, let ${L}_{\mathrm{3}}\mathrm{\ge }\mathrm{0}$, and let $\psi \mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{R}$ be a function of class ${C}^{\mathrm{1}}$ such that

$|{\psi }^{\prime }\left(t\right)|\le {L}_{3}\quad \text{for every }t\ge 0.$

Then, for every $s\mathrm{\ge }t\mathrm{\ge }\mathrm{0}$, it turns out that

$|{\int }_{t}^{s}\mathrm{cos}\left(\alpha {e}^{\tau }+\psi \left(\tau \right)\right)𝑑\tau |\le \frac{3+{L}_{3}}{\alpha {e}^{t}}.$(6.1)

#### Proof.

We introduce the complex-valued functions

$g\left(t\right):=\mathrm{exp}\left(i\alpha {e}^{t}\right),f\left(t\right):=\mathrm{exp}\left(i\psi \left(t\right)\right),$

so that, clearly,

$|{\int }_{t}^{s}\mathrm{cos}\left(\alpha {e}^{\tau }+\psi \left(\tau \right)\right)𝑑\tau |\le |{\int }_{t}^{s}\mathrm{exp}\left(i\left[\alpha {e}^{\tau }+\psi \left(\tau \right)\right]\right)𝑑\tau |=|{\int }_{t}^{s}g\left(\tau \right)f\left(\tau \right)𝑑\tau |.$

Now we have

${\int }_{t}^{s}g\left(\tau \right)f\left(\tau \right)𝑑\tau ={\int }_{t}^{s}{g}^{\prime }\left(\tau \right)\frac{1}{i\alpha }{e}^{-\tau }f\left(\tau \right)𝑑\tau =\frac{1}{i\alpha }\left[g\left(s\right)f\left(s\right){e}^{-s}-g\left(t\right)f\left(t\right){e}^{-t}-{\int }_{t}^{s}g\left(\tau \right)\left({f}^{\prime }\left(\tau \right)-f\left(\tau \right)\right){e}^{-\tau }𝑑\tau \right],$

yielding the immediate estimate

$|{\int }_{t}^{s}g\left(\tau \right)f\left(\tau \right)𝑑\tau |\le \frac{3+{L}_{3}}{\alpha }{e}^{-t},$

which implies (6.1). ∎
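A quick numerical sanity check of (6.1) in the simplest case $\alpha =1$ and $\psi \equiv 0$ (so ${L}_{3}=0$ and the claimed bound is $3{e}^{-t}$); the helper function and quadrature parameters below are our own, not from the paper:

```python
# Numerical sanity check of the bound (6.1) with alpha = 1 and psi ≡ 0,
# so L3 = 0 and the claimed bound is 3 * e^{-t}. The helper below is a
# plain composite Simpson rule (our own construction).
import math

def oscillating_integral(t, s, n=200_000):
    """Composite Simpson approximation of the integral of cos(e^tau) over [t, s]."""
    h = (s - t) / n  # n must be even for Simpson's rule
    total = math.cos(math.exp(t)) + math.cos(math.exp(s))
    for i in range(1, n):
        total += (4 if i % 2 else 2) * math.cos(math.exp(t + i * h))
    return total * h / 3

value = oscillating_integral(2.0, 4.0)
print(abs(value) <= 3 * math.exp(-2.0))  # True: the oscillating integral stays below 3e^{-t}
```

The same check with larger t (for instance the interval [4, 6] against the bound $3{e}^{-4}$) illustrates how the fast oscillations make the tail of the integral small.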

Lemma 6.1 can also be viewed as a special case of the following result.

#### Lemma 6.2.

Let $g\mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{C}$ be a continuous function, and let $f\mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{C}$ be a function of class ${C}^{\mathrm{1}}$. Let us assume that there exist two constants ${L}_{\mathrm{4}}$ and ${L}_{\mathrm{5}}$ such that

$|{\int }_{t}^{s}g\left(\tau \right)𝑑\tau |\le {L}_{4}{e}^{-t}\quad \text{for every }s\ge t\ge 0,$(6.2)

$\mathrm{max}\left\{|f\left(t\right)|,|{f}^{\prime }\left(t\right)|\right\}\le {L}_{5}\quad \text{for every }t\ge 0.$

Then it turns out that

$|{\int }_{t}^{s}g\left(\tau \right)f\left(\tau \right)𝑑\tau |\le 3{L}_{4}{L}_{5}{e}^{-t}\quad \text{for every }s\ge t\ge 0.$(6.3)

#### Proof.

Let us introduce the function

$G\left(t\right):={\int }_{t}^{+\mathrm{\infty }}g\left(\tau \right)𝑑\tau .$

This function is well defined because of assumption (6.2) and, for the same reason, it satisfies

$|G\left(t\right)|\le {L}_{4}{e}^{-t}\quad \text{for every }t\ge 0.$

Integrating by parts the left-hand side of (6.3), we find that

${\int }_{t}^{s}g\left(\tau \right)f\left(\tau \right)𝑑\tau =G\left(s\right)f\left(s\right)-G\left(t\right)f\left(t\right)-{\int }_{t}^{s}G\left(\tau \right){f}^{\prime }\left(\tau \right)𝑑\tau .$

At this point, our assumptions imply that

$|{\int }_{t}^{s}g\left(\tau \right)f\left(\tau \right)𝑑\tau |\le |G\left(s\right)|\cdot |f\left(s\right)|+|G\left(t\right)|\cdot |f\left(t\right)|+{\int }_{t}^{s}|G\left(\tau \right)|\cdot |{f}^{\prime }\left(\tau \right)|𝑑\tau$$\le {L}_{4}{e}^{-s}\cdot {L}_{5}+{L}_{4}{e}^{-t}\cdot {L}_{5}+{\int }_{t}^{s}{L}_{4}{e}^{-\tau }\cdot {L}_{5}𝑑\tau$$\le 3{L}_{4}{L}_{5}{e}^{-t},$

which proves (6.3). ∎

The next lemma extends the previous estimates to some series of functions.

#### Lemma 6.3.

Let ${g}_{k}\mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{R}$ be a sequence of continuous functions, and let ${f}_{k}\mathrm{:}\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}\mathrm{\to }\mathrm{R}$ be a sequence of functions of class ${C}^{\mathrm{1}}$. Let us assume that the two series of functions

$\sum _{k=0}^{\mathrm{\infty }}{f}_{k}\left(t\right),\sum _{k=0}^{\mathrm{\infty }}{f}_{k}^{\prime }\left(t\right)$

are normally convergent on compact subsets of $\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}$, and that there exist three constants ${L}_{\mathrm{6}}$, ${L}_{\mathrm{7}}$, and ${L}_{\mathrm{8}}$ such that

$|{g}_{k}\left(t\right)|\le {L}_{6}\quad \text{for every }t\ge 0\text{ and every }k\in ℕ,$(6.4)

$|{\int }_{t}^{s}{g}_{k}\left(\tau \right)𝑑\tau |\le {L}_{7}{e}^{-t}\quad \text{for every }s\ge t\ge 0\text{ and every }k\in ℕ,$(6.5)

and

$\sum _{k=0}^{\mathrm{\infty }}\left(|{f}_{k}\left(t\right)|+|{f}_{k}^{\prime }\left(t\right)|\right)\le {L}_{8}\quad \text{for every }t\ge 0.$(6.6)

Then the series

$\sum _{k=0}^{\mathrm{\infty }}{g}_{k}\left(t\right){f}_{k}\left(t\right)$(6.7)

is normally convergent on compact subsets of $\mathrm{\left[}\mathrm{0}\mathrm{,}\mathrm{+}\mathrm{\infty }\mathrm{\right)}$, and it satisfies

$|{\int }_{t}^{s}\left(\sum _{k=0}^{\mathrm{\infty }}{g}_{k}\left(\tau \right){f}_{k}\left(\tau \right)\right)𝑑\tau |\le 3{L}_{7}{L}_{8}{e}^{-t}\quad \text{for every }s\ge t\ge 0.$(6.8)

#### Proof.

In analogy with the proof of Lemma 6.2, we introduce the functions

${G}_{k}\left(t\right):={\int }_{t}^{+\mathrm{\infty }}{g}_{k}\left(\tau \right)𝑑\tau .$

We observe that they are well defined because of assumption (6.5), and they satisfy

$|{G}_{k}\left(t\right)|\le {L}_{7}{e}^{-t}\quad \text{for every }t\ge 0\text{ and every }k\in ℕ.$(6.9)

From assumption (6.4), it follows that

$\underset{t\in K}{sup}|{g}_{k}\left(t\right){f}_{k}\left(t\right)|\le {L}_{6}\underset{t\in K}{sup}|{f}_{k}\left(t\right)|$

for every compact set $K\subseteq \left[0,+\mathrm{\infty }\right)$. As a consequence, the normal convergence in K of the series (6.7) follows from the normal convergence in K of the series with general term ${f}_{k}\left(t\right)$. Due to normal convergence, we can exchange series and integrals in the left-hand side of (6.8) and deduce that

$|{\int }_{t}^{s}\left(\sum _{k=0}^{\mathrm{\infty }}{g}_{k}\left(\tau \right){f}_{k}\left(\tau \right)\right)𝑑\tau |=|\sum _{k=0}^{\mathrm{\infty }}{\int }_{t}^{s}{g}_{k}\left(\tau \right){f}_{k}\left(\tau \right)𝑑\tau |\le \sum _{k=0}^{\mathrm{\infty }}|{\int }_{t}^{s}{g}_{k}\left(\tau \right){f}_{k}\left(\tau \right)𝑑\tau |.$

Now we integrate by parts each term of the series and exploit (6.9) in analogy with what we did before in the proof of Lemma 6.2. We obtain that

$|{\int }_{t}^{s}{g}_{k}\left(\tau \right){f}_{k}\left(\tau \right)𝑑\tau |\le {L}_{7}{e}^{-s}|{f}_{k}\left(s\right)|+{L}_{7}{e}^{-t}|{f}_{k}\left(t\right)|+{L}_{7}{\int }_{t}^{s}{e}^{-\tau }|{f}_{k}^{\prime }\left(\tau \right)|𝑑\tau$

for every $k\in ℕ$. When we sum over k, from (6.6) we deduce that

$\sum _{k=0}^{\mathrm{\infty }}{L}_{7}{e}^{-s}|{f}_{k}\left(s\right)|={L}_{7}{e}^{-s}\sum _{k=0}^{\mathrm{\infty }}|{f}_{k}\left(s\right)|\le {L}_{7}{L}_{8}{e}^{-t}$(6.10)

and, analogously,

$\sum _{k=0}^{\mathrm{\infty }}{L}_{7}{e}^{-t}|{f}_{k}\left(t\right)|\le {L}_{7}{L}_{8}{e}^{-t}.$(6.11)

As for the sum of integrals, we first observe that the normal convergence, on compact subsets of $\left[0,+\mathrm{\infty }\right)$, of the series with general term ${f}_{k}^{\prime }\left(t\right)$ implies an analogous convergence of the series

$\sum _{k=0}^{\mathrm{\infty }}{e}^{-\tau }|{f}_{k}^{\prime }\left(\tau \right)|.$

Therefore, we can exchange once again series and integrals. Taking (6.6) into account, this leads to

$\sum _{k=0}^{\mathrm{\infty }}{L}_{7}{\int }_{t}^{s}{e}^{-\tau }|{f}_{k}^{\prime }\left(\tau \right)|d\tau ={L}_{7}{\int }_{t}^{s}\left(\sum _{k=0}^{\mathrm{\infty }}{e}^{-\tau }|{f}_{k}^{\prime }\left(\tau \right)|\right)d\tau$$\le {L}_{7}{\int }_{t}^{s}{L}_{8}{e}^{-\tau }𝑑\tau$$\le {L}_{7}{L}_{8}{e}^{-t}.$(6.12)

At this point, (6.8) follows from (6.10)–(6.12). ∎

## 7.1 Equations for the energy and quotients

Preliminary estimates on components.

Let us consider the notations introduced in Section 3, where we reduced ourselves to proving (3.19) through (3.21). In this first paragraph we derive some k-independent estimates on ${\rho }_{k}\left(t\right)$ and ${\theta }_{k}\left(t\right)$ that are needed several times in the sequel. The constants ${M}_{8},\mathrm{\dots },{M}_{23}$ that we introduce hereafter depend on the solution (as do the constants ${M}_{1},\mathrm{\dots },{M}_{7}$ of Section 3), but they do not depend on k. First of all, from (3.17) and (3.18), it follows that

$\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{2}\left(t\right)\le {M}_{4}$

and, in particular, we find

(7.1)

and

$\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{2}\left(t\right){\mathrm{sin}}^{2}{\theta }_{k}\left(t\right)\le {M}_{4}.$(7.2)

From this estimate and (3.16), it follows that

(7.3)

This implies, in particular, that

and

(7.4)

Moreover, from (7.3), it follows that

(7.5)

Let us consider now the series

$\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{m}\left(t\right),\sum _{k=0}^{\mathrm{\infty }}{\left[{\rho }_{k}^{m}\left(t\right)\right]}^{\prime },$

where $m\ge 2$ is a fixed exponent (in the sequel we need only the cases $m=2$ and $m=4$). From the previous estimates, we have

$\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{m}\left(t\right)\le {M}_{12},\sum _{k=0}^{\mathrm{\infty }}|{\left[{\rho }_{k}^{m}\left(t\right)\right]}^{\prime }|\le {M}_{12},$(7.6)

where of course the constant ${M}_{12}$ depends also on m. Moreover, from (7.5) and the square-integrability of the sequence ${\rho }_{k}\left(0\right)$, it follows that both series are normally convergent on compact subsets of $\left[0,+\mathrm{\infty }\right)$.

We stress that we can not hope that these series are normally convergent in $\left[0,+\mathrm{\infty }\right)$, even when $m=2$. Indeed, normal convergence would imply uniform convergence, and hence the possibility to exchange the series and the limit as $t\to +\mathrm{\infty }$, while the conclusion of Theorem 2.1 says that this is not the case, at least when J is an infinite set.

Finally, plugging (3.16) and (7.2) into (3.15), after integration, we obtain that

${\theta }_{k}\left(t\right)=-{\lambda }_{k}{e}^{t}-{\psi }_{k}\left(t\right),$(7.7)

for a suitable function ${\psi }_{k}:\left[0,+\mathrm{\infty }\right)\to ℝ$ of class ${C}^{1}$ satisfying

$|{\psi }_{k}^{\prime }\left(t\right)|\le {M}_{13}\quad \text{for every }t\ge 0\text{ and every }k\in ℕ.$(7.8)

Estimates on trigonometric coefficients. For every $k\in ℕ$, we set

${a}_{k}\left(t\right):={\mathrm{sin}}^{2}{\theta }_{k}\left(t\right)-\frac{1}{2},{b}_{k}\left(t\right):={\mathrm{sin}}^{4}{\theta }_{k}\left(t\right)-\frac{3}{8}$

and, for every $k\ne h$,

${c}_{h,k}\left(t\right):={\mathrm{sin}}^{2}{\theta }_{h}\left(t\right){\mathrm{sin}}^{2}{\theta }_{k}\left(t\right)-\frac{1}{4}.$

These functions represent the corrections we have to take into account when we approximate the trigonometric functions with their time-average, as we did at the beginning of Section 4.

It is easy to see that

(7.9)

where the supremum is taken over all admissible indices or pairs of indices. Now we claim that

$|{\int }_{t}^{s}{a}_{k}\left(\tau \right)𝑑\tau |\le {M}_{14}{e}^{-t}$(7.10)$|{\int }_{t}^{s}{b}_{k}\left(\tau \right)𝑑\tau |\le {M}_{15}{e}^{-t}$(7.11)

and

(7.12)

In order to prove (7.10), we just observe that

${a}_{k}\left(t\right)=-\frac{1}{2}\mathrm{cos}\left(2{\theta }_{k}\left(t\right)\right),$

and hence, by (7.7),

${a}_{k}\left(t\right)=-\frac{1}{2}\mathrm{cos}\left(-2{\lambda }_{k}{e}^{t}-2{\psi }_{k}\left(t\right)\right)=-\frac{1}{2}\mathrm{cos}\left(2{\lambda }_{k}{e}^{t}+2{\psi }_{k}\left(t\right)\right).$

Thanks to (7.8), the assumptions of Lemma 6.1 are satisfied with $\alpha :=2{\lambda }_{k}$, ${L}_{3}:=2{M}_{13}$ and $\psi \left(t\right):={\psi }_{k}\left(t\right)$. Thus, we obtain that

$|{\int }_{t}^{s}{a}_{k}\left(\tau \right)𝑑\tau |\le \frac{3+2{M}_{13}}{2{\lambda }_{k}}{e}^{-t}\le {M}_{14}{e}^{-t},$

where in the last inequality we exploited the fact that all eigenvalues are larger than a fixed positive constant.
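The mechanism behind this estimate can be checked numerically: when the phase grows like $\alpha {e}^{\tau }$, the oscillations become so fast that the integral of the cosine decays like ${e}^{-t}$. The following sketch is a quadrature illustration, not part of the proof; the routine and all parameters are our own choices, and it tests only the special case $\psi \equiv 0$, where the constant quoted above reduces to $3/\alpha$.

```python
import math

def osc_integral(alpha, t, s, n=400_000):
    """Integrate cos(alpha * e^tau) over [t, s] by composite Simpson (n even)."""
    h = (s - t) / n
    f = lambda tau: math.cos(alpha * math.exp(tau))
    total = f(t) + f(s)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(t + i * h)
    return total * h / 3

alpha = 2.0
for t in (1.0, 2.0, 3.0):
    I = osc_integral(alpha, t, t + 2.0)
    # with psi == 0 the predicted bound is (3/alpha) * e^{-t}
    assert abs(I) <= 3.0 / alpha * math.exp(-t)
```

The substitution $u=\alpha {e}^{\tau }$ turns the integral into $\int \mathrm{cos}u/u\,du$ over $[\alpha {e}^{t},\alpha {e}^{s}]$, which integration by parts bounds by a multiple of $1/\left(\alpha {e}^{t}\right)$; the numerical check is consistent with that decay.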

The proof of (7.11) is analogous, just starting from the trigonometric identity

${b}_{k}\left(t\right)=-\frac{1}{2}\mathrm{cos}\left(2{\theta }_{k}\left(t\right)\right)+\frac{1}{8}\mathrm{cos}\left(4{\theta }_{k}\left(t\right)\right).$

Also the proof of (7.12) is analogous, but in this case the trigonometric identity is

${c}_{h,k}=-\frac{1}{4}\mathrm{cos}\left(2{\theta }_{h}\right)-\frac{1}{4}\mathrm{cos}\left(2{\theta }_{k}\right)+\frac{1}{8}\mathrm{cos}\left(2{\theta }_{h}+2{\theta }_{k}\right)+\frac{1}{8}\mathrm{cos}\left(2{\theta }_{h}-2{\theta }_{k}\right).$

All four terms can be treated through Lemma 6.1, but now in the last term the differences between eigenvalues are involved. As a consequence, for the last term, we obtain an estimate of the form

$|{\int }_{t}^{s}\mathrm{cos}\left(2{\theta }_{h}\left(\tau \right)-2{\theta }_{k}\left(\tau \right)\right)𝑑\tau |\le \frac{3+4{M}_{13}}{2|{\lambda }_{k}-{\lambda }_{h}|}{e}^{-t}.$

If we want this estimate to be uniform for $k\ne h$, we have to assume that the differences between eigenvalues are bounded away from 0, and this is exactly the point where assumption (2.7) comes into play in the proof of Theorem 2.5.

Equation for the energy. Let $R\left(t\right)$ be the total energy as defined in (3.17). We claim that $R\left(t\right)$ solves a differential equation of the form

${R}^{\prime }\left(t\right)=R\left(t\right)-\frac{1}{2}{R}^{2}\left(t\right)-\frac{1}{4}\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}^{4}\left(t\right)+{\mu }_{1}\left(t\right)+{\mu }_{2}\left(t\right),$(7.13)

where (for the sake of shortness, we do not write the explicit dependence on t in the right-hand sides)

${\mu }_{1}\left(t\right):=2\sum _{k=0}^{\mathrm{\infty }}\left({\mathrm{\Gamma }}_{1,k}{\rho }_{k}^{2}+{a}_{k}{\rho }_{k}^{2}-{b}_{k}{\rho }_{k}^{4}\right),{\mu }_{2}\left(t\right):=-2\sum _{k=0}^{\mathrm{\infty }}\left({\rho }_{k}^{2}\sum _{i\ne k}{c}_{i,k}{\rho }_{i}^{2}\right).$(7.14)

We also claim that ${\mu }_{1}\left(t\right)$ satisfies

$|{\int }_{t}^{s}{\mu }_{1}\left(\tau \right)𝑑\tau |\le {M}_{18}{e}^{-t}$ for every $s>t\ge 0$.(7.15)

The verification of (7.13) is a lengthy but elementary calculation, which starts by writing

${R}^{\prime }\left(t\right)=2\sum _{k=0}^{\mathrm{\infty }}{\rho }_{k}\left(t\right){\rho }_{k}^{\prime }\left(t\right),$

and by replacing ${\rho }_{k}^{\prime }\left(t\right)$ with the right-hand side of (3.14). The crucial point is that when computing the product

${\rho }_{k}^{2}{\mathrm{sin}}^{2}{\theta }_{k}\cdot \sum _{i=0}^{\mathrm{\infty }}{\rho }_{i}^{2}{\mathrm{sin}}^{2}{\theta }_{i},$

one has to isolate the term of the series with $i=k$. In this way, the product becomes

${\rho }_{k}^{4}{\mathrm{sin}}^{4}{\theta }_{k}+{\rho }_{k}^{2}\sum _{i\ne k}{\rho }_{i}^{2}{\mathrm{sin}}^{2}{\theta }_{i}{\mathrm{sin}}^{2}{\theta }_{k},$

and now one can express ${\mathrm{sin}}^{4}{\theta }_{k}$ in terms of ${b}_{k}$, and ${\mathrm{sin}}^{2}{\theta }_{i}{\mathrm{sin}}^{2}{\theta }_{k}$ in terms of ${c}_{i,k}$. The rest is straightforward algebra.
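The corrections ${a}_{k}$, ${b}_{k}$ and ${c}_{h,k}$ measure the deviation of the trigonometric terms from their time-averages $1/2$, $3/8$ and $1/4$. These averages can be verified numerically; the following sketch is a pure illustration (the Riemann-sum routine and the incommensurate test frequencies are arbitrary choices of ours, not part of the proof).

```python
import math

def time_average(f, T, n=500_000):
    """Approximate (1/T) * integral_0^T f via a left Riemann sum."""
    h = T / n
    return sum(f(i * h) for i in range(n)) * h / T

T = 2000 * math.pi  # many periods, so averages settle
assert abs(time_average(lambda x: math.sin(x) ** 2, T) - 1 / 2) < 1e-3
assert abs(time_average(lambda x: math.sin(x) ** 4, T) - 3 / 8) < 1e-3
# for incommensurate phases the product averages to (1/2)*(1/2) = 1/4
assert abs(time_average(lambda x: (math.sin(x) * math.sin(math.sqrt(2) * x)) ** 2, T)
           - 1 / 4) < 1e-2
```

This is exactly the heuristic behind the averaging procedure recalled at the beginning of Section 4: replacing ${\mathrm{sin}}^{2}$, ${\mathrm{sin}}^{4}$ and the mixed products by these constants leaves only the oscillating remainders ${a}_{k}$, ${b}_{k}$, ${c}_{h,k}$.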

The proof of (7.15) follows from several applications of Lemma 6.3 with different choices of ${f}_{k}\left(t\right)$ and ${g}_{k}\left(t\right)$.

• For the term ${\mathrm{\Gamma }}_{1,k}{\rho }_{k}^{2}$, we choose ${f}_{k}\left(t\right):={\rho }_{k}^{2}\left(t\right)$ and ${g}_{k}\left(t\right):={\mathrm{\Gamma }}_{1,k}\left(t\right)$. Indeed, the assumptions on ${f}_{k}\left(t\right)$ follow from (7.6) with $m=2$ and from the normal convergence of the same series on compact subsets of $\left[0,+\mathrm{\infty }\right)$, while the assumptions on ${g}_{k}\left(t\right)$ follow from (3.16).

• For the term ${a}_{k}{\rho }_{k}^{2}$, we choose ${f}_{k}\left(t\right):={\rho }_{k}^{2}\left(t\right)$ and ${g}_{k}\left(t\right):={a}_{k}\left(t\right)$. The assumptions on ${f}_{k}\left(t\right)$ are satisfied as before, while those on ${g}_{k}\left(t\right)$ follow from (7.9) and (7.10).

• For the term ${b}_{k}{\rho }_{k}^{4}$, we choose ${f}_{k}\left(t\right):={\rho }_{k}^{4}\left(t\right)$ and ${g}_{k}\left(t\right):={b}_{k}\left(t\right)$. Now we need the estimates for the series (7.6) with $m=4$ in order to verify the assumptions on ${f}_{k}\left(t\right)$, and (7.9) and (7.11) in order to provide the required estimates on ${g}_{k}\left(t\right)$.

Equation for quotients. For every pair of indices h and k in J, we consider the ratio ${Q}_{h,k}\left(t\right)$ introduced in (4.8). We recall that components with indices in J never vanish, and therefore the quotient is well defined and positive for every $t\ge 0$. After some lengthy calculations, we obtain

${Q}_{h,k}^{\prime }\left(t\right)={\alpha }_{h}\left(t\right){Q}_{h,k}\left(t\right)\left(1-{Q}_{h,k}^{2}\left(t\right)\right)+{\alpha }_{h}\left(t\right){\beta }_{h,k}\left(t\right){Q}_{h,k}^{3}\left(t\right)+{\gamma }_{h,k}\left(t\right){Q}_{h,k}\left(t\right),$(7.16)

where

${\alpha }_{h}\left(t\right):=\frac{1}{8}{\rho }_{h}^{2}\left(t\right),{\beta }_{h,k}\left(t\right):=8\left({c}_{h,k}\left(t\right)-{b}_{k}\left(t\right)\right),$${\gamma }_{h,k}\left(t\right):={a}_{k}-{a}_{h}+{\mathrm{\Gamma }}_{1,k}-{\mathrm{\Gamma }}_{1,h}+{\rho }_{h}^{2}\left({b}_{h}-{c}_{h,k}\right)+\sum _{i\notin \left\{h,k\right\}}{\rho }_{i}^{2}\left({c}_{i,h}-{c}_{i,k}\right).$

We observe that the first term of equation (7.16) is the same as in equation (4.10), which was derived by neglecting all the rest.

We claim that

(7.17)

where the supremum is taken over all admissible indices or pairs of indices, and that

$|{\int }_{t}^{s}{\beta }_{h,k}\left(\tau \right)𝑑\tau |\le {M}_{20}\left(1+\frac{1}{|{\lambda }_{k}-{\lambda }_{h}|}\right){e}^{-t},$(7.18)

$|{\int }_{t}^{s}{\gamma }_{h,k}\left(\tau \right)𝑑\tau |\le {M}_{21}\left(1+\frac{1}{|{\lambda }_{k}-{\lambda }_{h}|}+\underset{i\notin \left\{h,k\right\}}{sup}\left(\frac{1}{|{\lambda }_{i}-{\lambda }_{k}|}+\frac{1}{|{\lambda }_{i}-{\lambda }_{h}|}\right)\right){e}^{-t}$(7.19)

for every pair of admissible indices and every $s>t\ge 0$. We point out that the supremum in (7.19) is finite because the sequence of eigenvalues is increasing.

Estimate (7.17) follows from (7.1) and (7.4) in the case of ${\alpha }_{h}\left(t\right)$ and ${\alpha }_{h}^{\prime }\left(t\right)$, from (7.9) in the case of ${\beta }_{h,k}\left(t\right)$, and from (7.9), (3.16) and (3.18) in the case of ${\gamma }_{h,k}\left(t\right)$.

Estimate (7.18) follows from (7.11) and (7.12).

Finally, in order to verify (7.19), we consider the expression for ${\gamma }_{h,k}$, and we apply

• inequality (7.10) to the term ${a}_{k}-{a}_{h}$,

• inequality (3.16) to the term ${\mathrm{\Gamma }}_{1,k}-{\mathrm{\Gamma }}_{1,h}$,

• Lemma 6.2, (7.11) and (7.12) to the term ${\rho }_{h}^{2}\left({c}_{h,k}-{b}_{h}\right)$,

• Lemma 6.3 and (7.12) to the last term (the series).

## 7.2 Proof of Theorem 2.1

Key estimate for quotients.

We prove that if there exists ${h}_{0}\in J$ such that

$\underset{t\to +\mathrm{\infty }}{lim sup}{\rho }_{{h}_{0}}\left(t\right)>0,$(7.20)

then

$\underset{t\to +\mathrm{\infty }}{lim}{Q}_{{h}_{0},k}\left(t\right)=1$ for every $k\in J$.(7.21)

To begin with, we observe that ${\rho }_{{h}_{0}}\left(t\right)$ is Lipschitz continuous in $\left[0,+\mathrm{\infty }\right)$ because of (7.4), and hence (7.20) implies that

${\int }_{0}^{+\mathrm{\infty }}{\rho }_{{h}_{0}}^{2}\left(t\right)𝑑t=+\mathrm{\infty }.$(7.22)

Let us consider now the quotients ${Q}_{{h}_{0},k}\left(t\right)$ with $k\in J$. We claim that in this case, equation (7.16) fits in the framework of Proposition 5.4 with

$z\left(t\right):={Q}_{{h}_{0},k}\left(t\right),\alpha \left(t\right):={\alpha }_{{h}_{0}}\left(t\right),\beta \left(t\right):={\beta }_{{h}_{0},k}\left(t\right),\gamma \left(t\right):={\gamma }_{{h}_{0},k}\left(t\right).$

Indeed, assumption (5.14) is exactly (7.22), assumption (5.15) follows from (7.3), and the boundedness and semi-integrability of $\beta \left(t\right)$ and $\gamma \left(t\right)$ follow from (7.17)–(7.19). Thus, from Proposition 5.4, we obtain (7.21).

The case where J is infinite. In this case, we show that all components tend to 0, which establishes statement (3).

Let us assume that this is not the case. Then there exists ${h}_{0}\in J$ for which (7.20) holds true, and hence also (7.21) holds true. At this point, arguing exactly as in the corresponding point in the proof of Theorem 4.1, from (7.20) and (7.21), we deduce that the total energy is unbounded, thus contradicting the estimate from above in (3.18).

The case where J is finite. In this case, we prove that (3.19) is true. To begin with, we observe that there exists ${h}_{0}\in J$ for which (7.20) holds true, because otherwise the total energy would tend to 0, thus contradicting the estimate from below in (3.18). As a consequence, also (7.21) holds true and, in particular, the limit of ${\rho }_{k}\left(t\right)$ is the same for every $k\in J$, provided that this limit exists. At this point, (3.19) is equivalent to showing that

$\underset{t\to +\mathrm{\infty }}{lim}R\left(t\right)=\frac{4j}{2j+1},$(7.23)

where j denotes the number of elements of J.

To this end, we consider the equalities

$R\left(t\right)={\rho }_{{h}_{0}}^{2}\left(t\right)\sum _{k\in J}{Q}_{{h}_{0},k}^{2}\left(t\right),\sum _{k\in J}{\rho }_{k}^{4}\left(t\right)={\rho }_{{h}_{0}}^{4}\left(t\right)\sum _{k\in J}{Q}_{{h}_{0},k}^{4}\left(t\right).$

From these, we deduce that

$\sum _{k\in J}{\rho }_{k}^{4}\left(t\right)={R}^{2}\left(t\right)\cdot \left(\frac{1}{j}+q\left(t\right)\right),$

where

$q\left(t\right):=\left(\sum _{k\in J}{Q}_{{h}_{0},k}^{4}\left(t\right)\right)\cdot {\left(\sum _{k\in J}{Q}_{{h}_{0},k}^{2}\left(t\right)\right)}^{-2}-\frac{1}{j},$

hence, by (7.21),

$\underset{t\to +\mathrm{\infty }}{lim}q\left(t\right)=0.$(7.24)

Going back to (7.13), we find that $R\left(t\right)$ solves a differential equation of the form

${R}^{\prime }\left(t\right)=R\left(t\right)-\frac{2j+1}{4j}{R}^{2}\left(t\right)-\frac{1}{4}q\left(t\right){R}^{2}\left(t\right)+{\mu }_{1}\left(t\right)+{\mu }_{2}\left(t\right),$

where ${\mu }_{1}\left(t\right)$ and ${\mu }_{2}\left(t\right)$ are given by (7.14). This differential equation fits in the framework of Proposition 5.3 with

$z\left(t\right):=R\left(t\right),{z}_{\mathrm{\infty }}:=\frac{4j}{2j+1},{\psi }_{1}\left(t\right):={\mu }_{1}\left(t\right)+{\mu }_{2}\left(t\right),{\psi }_{2}\left(t\right):=|q\left(t\right)|\cdot {R}^{2}\left(t\right).$

Indeed, assumption (5.3) follows from (7.24), while assumption (5.4) follows from the estimate from below in (3.18). It remains to prove that ${\psi }_{1}\left(t\right)$ is semi-integrable on $\left[0,+\mathrm{\infty }\right)$. The semi-integrability of ${\mu }_{1}\left(t\right)$ is a consequence of (7.15), and the semi-integrability of ${\mu }_{2}\left(t\right)$ follows from a finite number of applications of Lemma 6.2 with $f\left(t\right):={\rho }_{k}^{2}\left(t\right){\rho }_{i}^{2}\left(t\right)$ and $g\left(t\right):={c}_{i,k}\left(t\right)$ (here it is essential that the set J is finite). The required assumptions on $f\left(t\right)$ and $g\left(t\right)$ follow from (7.1), (7.4) and (7.12).

At this point, Proposition 5.3 implies (7.23).
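The limit (7.23) can be sanity-checked on the unperturbed equation ${R}^{\prime }=R-\frac{2j+1}{4j}{R}^{2}$, obtained by dropping the terms $q\left(t\right){R}^{2}\left(t\right)$, ${\mu }_{1}\left(t\right)$ and ${\mu }_{2}\left(t\right)$, which vanish in the limit or are semi-integrable. The following forward-Euler sketch is only an illustration; the step size, time horizon and initial datum are arbitrary choices of ours.

```python
def limit_energy(j, R0=0.5, dt=1e-3, T=40.0):
    """Integrate R' = R - (2j+1)/(4j) * R^2 by forward Euler; return R(T)."""
    c = (2 * j + 1) / (4 * j)
    R = R0
    for _ in range(int(T / dt)):
        R += dt * (R - c * R * R)
    return R

for j in (1, 2, 5):
    # the positive equilibrium of the logistic equation is 1/c = 4j/(2j+1)
    assert abs(limit_energy(j) - 4 * j / (2 * j + 1)) < 1e-6
```

Note that $4j/\left(2j+1\right)\to 2$ as $j\to +\mathrm{\infty }$, which is consistent with the limit obtained in the proof of Theorem 2.5 below.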

Asymptotic behavior of the phase.

It remains to prove (3.20). Actually, we need this fact only in the case where J is finite, but the statement is true and the proof is the same in the general case.

Let us consider equation (3.15). From (3.16), we know that ${\mathrm{\Gamma }}_{2,k}$ is integrable in $\left[0,+\mathrm{\infty }\right)$. Therefore, (3.20) is equivalent to showing that the function

$\left(\sum _{i=0}^{\mathrm{\infty }}{\rho }_{i}^{2}\left(\tau \right){\mathrm{sin}}^{2}{\theta }_{i}\left(\tau \right)-1\right)\mathrm{sin}{\theta }_{k}\left(\tau \right)\mathrm{cos}{\theta }_{k}\left(\tau \right)$

is semi-integrable in $\left[0,+\mathrm{\infty }\right)$ for every $k\in J$. First of all, we write the function as

$\sum _{i\ne k}{\rho }_{i}^{2}{\mathrm{sin}}^{2}{\theta }_{i}\mathrm{sin}{\theta }_{k}\mathrm{cos}{\theta }_{k}+{\rho }_{k}^{2}{\mathrm{sin}}^{3}{\theta }_{k}\mathrm{cos}{\theta }_{k}-\mathrm{sin}{\theta }_{k}\mathrm{cos}{\theta }_{k}.$

All these oscillating functions can be treated as we did many times before, starting from the trigonometric identities

$\mathrm{sin}{\theta }_{k}\mathrm{cos}{\theta }_{k}=\frac{1}{2}\mathrm{sin}\left(2{\theta }_{k}\right),{\mathrm{sin}}^{3}{\theta }_{k}\mathrm{cos}{\theta }_{k}=\frac{1}{4}\mathrm{sin}\left(2{\theta }_{k}\right)-\frac{1}{8}\mathrm{sin}\left(4{\theta }_{k}\right)$

and

${\mathrm{sin}}^{2}{\theta }_{i}\mathrm{sin}{\theta }_{k}\mathrm{cos}{\theta }_{k}=\frac{1}{4}\mathrm{sin}\left(2{\theta }_{k}\right)+\frac{1}{8}\mathrm{sin}\left(2{\theta }_{i}-2{\theta }_{k}\right)-\frac{1}{8}\mathrm{sin}\left(2{\theta }_{i}+2{\theta }_{k}\right).$

Due to the relation $\mathrm{sin}x=\mathrm{cos}\left(x-\pi /2\right)$, we can conclude by exploiting the results of Section 6, as we did in the proofs of (7.10)–(7.12) and in the estimates of the coefficients of (7.16).
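The product-to-sum identities above are elementary, and can be double-checked numerically at random angles; the following sketch is a pure illustration and plays no role in the proof.

```python
import math
import random

random.seed(0)
for _ in range(1000):
    ti = random.uniform(-10.0, 10.0)
    tk = random.uniform(-10.0, 10.0)
    s, c = math.sin(tk), math.cos(tk)
    # sin t cos t = (1/2) sin(2t)
    assert abs(s * c - 0.5 * math.sin(2 * tk)) < 1e-12
    # sin^3 t cos t = (1/4) sin(2t) - (1/8) sin(4t)
    assert abs(s ** 3 * c - (math.sin(2 * tk) / 4 - math.sin(4 * tk) / 8)) < 1e-12
    # sin^2 ti sin tk cos tk
    #   = (1/4) sin(2tk) + (1/8) sin(2ti - 2tk) - (1/8) sin(2ti + 2tk)
    lhs = math.sin(ti) ** 2 * s * c
    rhs = (math.sin(2 * tk) / 4 + math.sin(2 * ti - 2 * tk) / 8
           - math.sin(2 * ti + 2 * tk) / 8)
    assert abs(lhs - rhs) < 1e-12
```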

## 7.3 Proof of Theorem 2.5

Let us consider again the differential equation (7.13) solved by $R\left(t\right)$. We prove that the uniform gap assumption (2.7) implies the semi-integrability of ${\mu }_{2}\left(t\right)$ and a uniform bound on the quotients, which allows us to show that the series of fourth powers is negligible in the limit. At this point, we can conclude by applying Proposition 5.3.

Estimate on ${\mu }_{2}\left(t\right)$. We show that

$|{\int }_{t}^{s}{\mu }_{2}\left(\tau \right)𝑑\tau |\le {M}_{22}{e}^{-t}$ for every $s>t\ge 0$.(7.25)

Since ${\mu }_{2}\left(t\right)$ involves a double series, this requires a double application of Lemma 6.3. First of all, we exploit the uniform gap assumption (2.7), and from (7.12), we deduce that

(7.26)

Now we set

${\delta }_{k}\left(t\right):=\sum _{i\ne k}{c}_{i,k}\left(t\right){\rho }_{i}^{2}\left(t\right),$

and we apply Lemma 6.3 with ${f}_{i}\left(t\right):={\rho }_{i}^{2}\left(t\right)$ and ${g}_{i}\left(t\right):={c}_{i,k}\left(t\right)$. The assumptions are satisfied due to (7.6), (7.9) and (7.26). We obtain that

(7.27)

Moreover, from (7.9) and (3.18), we obtain also that

(7.28)

Due to (7.27) and (7.28), we can apply Lemma 6.3 again with ${f}_{k}\left(t\right):={\rho }_{k}^{2}\left(t\right)$ and ${g}_{k}\left(t\right):={\delta }_{k}\left(t\right)$, and this completes the proof of (7.25).

Estimate on quotients. We claim that there exist ${t}_{0}\ge 0$ and ${h}_{0}\in J$ such that

${Q}_{{h}_{0},k}\left(t\right)\le 2$ for every $t\ge {t}_{0}$ and every $k\in J$.(7.29)

This estimate is trivial when $k={h}_{0}$, independently of ${t}_{0}$. Otherwise, we exploit equation (7.16), which fits in the framework of Proposition 5.5 with

$z\left(t\right):={Q}_{h,k}\left(t\right),\alpha \left(t\right):={\alpha }_{h}\left(t\right),\beta \left(t\right):={\beta }_{h,k}\left(t\right),\gamma \left(t\right):={\gamma }_{h,k}\left(t\right).$

Let us check the assumptions. Estimate (5.23) follows from (7.17). Estimates (5.24) follow from (7.18) and (7.19), and the constant ${L}_{2}$ is independent of h and k due to the uniform gap assumption (2.7). As a consequence, any ${t}_{0}\ge 0$ satisfying (5.25) is independent of h and k, and ensures that the following implication holds true for every h and k in J:

${Q}_{h,k}\left({t}_{0}\right)\le 1⟹\underset{t\ge {t}_{0}}{sup}{Q}_{h,k}\left(t\right)\le 2.$(7.30)

At this point, we choose any such ${t}_{0}$, and we fix the index (or one of the indices) ${h}_{0}\in J$ such that

${\rho }_{{h}_{0}}\left({t}_{0}\right)=\underset{k\in J}{max}{\rho }_{k}\left({t}_{0}\right).$

Such an index exists, even when J is infinite, because for every $t\ge 0$ it turns out that ${\rho }_{k}\left(t\right)\to 0$ as $k\to +\mathrm{\infty }$, due to the square-integrability of the sequence ${\rho }_{k}\left(t\right)$. This choice of ${h}_{0}$ implies that ${Q}_{{h}_{0},k}\left({t}_{0}\right)\le 1$ for every $k\in J$, and therefore, at this point, (7.29) follows from (7.30) with $h:={h}_{0}$.

Conclusion. To complete the proof, we now observe that

$\sum _{k\in J}{\rho }_{k}^{4}\left(t\right)=\sum _{k\in J}{Q}_{{h}_{0},k}^{2}\left(t\right){\rho }_{{h}_{0}}^{2}\left(t\right)\cdot {\rho }_{k}^{2}\left(t\right)\le 4{\rho }_{{h}_{0}}^{2}\left(t\right)\cdot \sum _{k\in J}{\rho }_{k}^{2}\left(t\right)$

for every $t\ge {t}_{0}$. Plugging this estimate into (7.13), we deduce that

$R\left(t\right)-\frac{1}{2}{R}^{2}\left(t\right)-{\rho }_{{h}_{0}}^{2}\left(t\right)R\left(t\right)+{\mu }_{1}\left(t\right)+{\mu }_{2}\left(t\right)\le {R}^{\prime }\left(t\right)\le R\left(t\right)-\frac{1}{2}{R}^{2}\left(t\right)+{\mu }_{1}\left(t\right)+{\mu }_{2}\left(t\right).$

We are now (up to a time-translation by ${t}_{0}$) in the framework of Proposition 5.3 with

$z\left(t\right):=R\left(t\right),{z}_{\mathrm{\infty }}:=2,{\psi }_{1}\left(t\right):={\mu }_{1}\left(t\right)+{\mu }_{2}\left(t\right),{\psi }_{2}\left(t\right):={\rho }_{{h}_{0}}^{2}\left(t\right)\cdot R\left(t\right).$

Indeed, the semi-integrability of ${\psi }_{1}$ follows from (7.15) and (7.25), assumption (5.3) follows from the boundedness of $R\left(t\right)$ and the fact that ${\rho }_{{h}_{0}}\left(t\right)\to 0$ as $t\to +\mathrm{\infty }$, and assumption (5.4) follows from the estimate from below in (3.18).

At this point, (3.21) is exactly the conclusion of Proposition 5.3. ∎

## Acknowledgements

The first two authors are members of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).

## References

• [1] J. Bourgain, On the growth in time of higher Sobolev norms of smooth solutions of Hamiltonian PDE, Int. Math. Res. Not. IMRN 6 (1996), 277–304.

• [2] A. Haraux, Comportement à l’infini pour une équation d’ondes non linéaire dissipative, C. R. Acad. Sci. Paris Sér. A 287 (1978), no. 7, 507–509.

• [3] A. Haraux, Semi-linear hyperbolic problems in bounded domains, Math. Rep. 3 (1987), no. 1, 1–281.

• [4] A. Haraux, Systèmes dynamiques dissipatifs et applications, Rech. Math. Appl. 17, Masson, Paris, 1991.

• [5] A. Haraux, ${L}^{p}$ estimates of solutions to some non-linear wave equations in one space dimension, Int. J. Math. Model. Numer. Optim. 1 (2009), no. 1–2, 146–154.

• [6] A. Haraux, Some simple problems for the next generations, preprint (2015), https://arxiv.org/abs/1512.06540.

• [7] A. Haraux and M. A. Jendoubi, The Convergence Problem for Dissipative Autonomous Systems: Classical Methods and Recent Advances, SpringerBriefs Math., Springer, Cham, 2015.

• [8] A. Haraux and E. Zuazua, Decay estimates for some semilinear damped hyperbolic problems, Arch. Ration. Mech. Anal. 100 (1988), no. 2, 191–206.

• [9] M. Nakao, Asymptotic stability of the bounded or almost periodic solution of the wave equation with nonlinear dissipative term, J. Math. Anal. Appl. 58 (1977), no. 2, 336–343.

• [10] J. Vancostenoble and P. Martinez, Optimality of energy estimates for the wave equation with nonlinear boundary velocity feedbacks, SIAM J. Control Optim. 39 (2000), no. 3, 776–797.

Revised: 2017-09-29

Accepted: 2017-09-30

Published Online: 2017-12-13

This project was partially supported by the PRA “Problemi di evoluzione: studio qualitativo e comportamento asintotico” of the University of Pisa.

Citation Information: Advances in Nonlinear Analysis, Volume 8, Issue 1, Pages 902–927, ISSN (Online) 2191-950X, ISSN (Print) 2191-9496,
