On the distribution of powered numbers

Abstract. Asymptotic formulae are established for the number of natural numbers m whose largest square-free divisor does not exceed m^ϑ, for any fixed positive parameter ϑ. Related counting functions are also considered.


Introduction
Motivated by questions about diophantine equations and the abc-conjecture, Mazur [1] proposed to smooth out the set of positive l-th powers in a multiplicative way, by what he named powered numbers. To introduce the latter, let k(m) denote the largest square-free divisor of the natural number m, and let i(m) = log m / log k(m). For all l ∈ N one has i(m^l) ≥ l. Mazur's powered numbers (relative to l) are the numbers m ∈ N with i(m) ≥ l. Note here that l need not be integral in this definition, but for l ∈ N the powered numbers (relative to l) contain the l-th powers. It is proposed in [1] to replace, within a given diophantine equation, an l-th power by the corresponding powered numbers, and to consider the resulting equation between powered numbers as the associated "rounded" diophantine equation.
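To make these definitions concrete, the radical k(m) and the exponent i(m) are easily computed by trial division. The following Python sketch is our illustration, not part of the paper; it checks i(m^l) ≥ l on a small range and exhibits a powered number relative to l = 3 that is not a cube.

```python
import math

def radical(m: int) -> int:
    """k(m): the largest square-free divisor (radical) of m, by trial division."""
    k, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            k *= d              # record the prime d once
            while m % d == 0:
                m //= d         # strip every copy of d
        d += 1
    if m > 1:
        k *= m                  # leftover prime factor of the cofactor
    return k

def i(m: int) -> float:
    """i(m) = log m / log k(m), for m >= 2."""
    return math.log(m) / math.log(radical(m))

# i(m**l) >= l, since k(m**l) = k(m) <= m (small slack guards float round-off)
assert all(i(m ** 3) >= 3 - 1e-9 for m in range(2, 50))

# a powered number relative to l = 3 that is not a perfect cube:
m = 2 ** 5 * 3 ** 4             # 2592 = 2^5 * 3^4, k(m) = 6
assert radical(m) == 6 and i(m) > 3
```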
In this note we analyse the distribution of powered numbers. We find it more appropriate to work with the real number ϑ = 1/l. The condition i(m) ≥ l is expressed equivalently as k(m) ≤ m^ϑ, and for any ϑ > 0 we define the set

A(ϑ) = {m ∈ N : k(m) ≤ m^ϑ}.

Thus, Mazur's powered numbers (relative to l) are exactly the elements of A(1/l). For analytic approaches to rounded diophantine problems it is indispensable to determine the density of the set A(ϑ). Our principal goal is to obtain asymptotic formulae for the number S_ϑ(x) of elements of A(ϑ) that do not exceed x, and for related counting functions.
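Membership in A(ϑ) and the counting function S_ϑ(x) can be tabulated by brute force. The sketch below is ours and makes no claim to efficiency; for ties such as k(m)² = m with ϑ = 1/2 it relies on `x ** 0.5` returning an exact square root, which holds at these sizes.

```python
def radical(m: int) -> int:
    """k(m): largest square-free divisor of m."""
    k, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            k *= d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        k *= m
    return k

def S(theta: float, x: int) -> int:
    """S_theta(x): the number of m <= x lying in A(theta), i.e. k(m) <= m**theta."""
    return sum(1 for m in range(1, x + 1) if radical(m) <= m ** theta)

# for theta = 1/2 the members up to 100 are 1, 4, 8, 9, 16, 25, 27, 32, 36,
# 48, 49, 54, 64, 72, 81, 96, 100 -- seventeen in all
assert S(0.5, 100) == 17
```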
It is not difficult to see that for any 0 < ϑ < 1 the counting function S_ϑ(x) obeys the inequalities

(1.1) x^ϑ ≪ S_ϑ(x) ≪ x^{ϑ+ε}

whenever ε is a given positive real number and x is large in terms of ε. It transpires that for ϑ = 1/l the powered numbers are not much denser than the l-th powers. A weaker version of (1.1) occurs in Mazur [1], who credits Granville with showing him an "easy" argument supposedly confirming the inequalities x^{ϑ−ε} ≪ S_ϑ(x) ≪ x^{ϑ+ε}. Mazur's article gives no indication of how this would go, but the simplest argument we know allows one to take ε = 0 in the lower bound. To substantiate this claim, fix a number ϑ ∈ (0, 1). Define l ∈ N by 1/l < ϑ ≤ 1/(l−1), and t ∈ R by l − t = 1/ϑ. Then t ∈ (0, 1]. Let W(x) be the number of natural numbers w that have a representation with n, m square-free and constrained to suitable intervals.
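The lower bound with ε = 0 can at least be checked mechanically: every l-th power n^l satisfies k(n^l) = k(n) ≤ n = (n^l)^{1/l}, so A(1/l) contains all l-th powers and S_{1/l}(x) ≥ ⌊x^{1/l}⌋. In the Python sketch below (ours), the membership test for ϑ = 1/l is carried out with the exact integer comparison k(m)^l ≤ m, avoiding floating-point ties.

```python
def radical(m: int) -> int:
    """k(m): largest square-free divisor of m."""
    k, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            k *= d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        k *= m
    return k

def S_inv(l: int, x: int) -> int:
    """S_{1/l}(x), via the exact test k(m)**l <= m, equivalent to k(m) <= m**(1/l)."""
    return sum(1 for m in range(1, x + 1) if radical(m) ** l <= m)

for l in (2, 3, 4):
    x = 3000
    n_powers = 1
    while (n_powers + 1) ** l <= x:   # n_powers ends as floor(x**(1/l))
        n_powers += 1
    assert S_inv(l, x) >= n_powers    # the l-th powers alone give floor(x**(1/l)) members
```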
For the upper bound in (1.1) one may refer to authorities like Tenenbaum [3, Theorem II.1.15]. However, the shortest argument available to us uses Rankin's trick within the following chain of obvious inequalities:

A finer analysis of the counting function S_ϑ(x) is currently an unwelcome lacuna in the literature. As a partial remedy we establish an asymptotic formula that should be sufficient for many applications. Let 1 ≤ y ≤ x. Following Robert and Tenenbaum [2] we write N(x, y) for the number of m ∈ N with m ≤ x and k(m) ≤ y. The numbers counted here are their nuclear numbers. It seems natural to expect that the trivial upper bound S_ϑ(x) ≤ N(x, x^ϑ) should not be very wasteful. However, it was conjectured by Erdős (1962) and proved by de Bruijn and van Lint (1963) that the estimate (1.5) holds. For a quantitative version of this estimate, see [2, Théorème 4.4]. These results suggest that the order of magnitude of S_ϑ(x) is somewhat smaller than that of N(x, x^ϑ), and this is indeed the case. Before we make this precise, we recall an estimate for N(x, y). We are primarily interested in the case y = x^ϑ with ϑ > 0, but we work in the wider range y > exp((log x)^{2/3}). The multiplicative function ψ defined in (1.6) and the function F : [0, ∞) → [0, ∞) are featured in the uniform asymptotic formula

(1.7) N(x, y) = (1 + o(1)) y F(log(x/y)) (y > exp((log x)^{2/3}), x → +∞)

that is contained in [2, Proposition 10.1]. As we shall see momentarily, for each pair ϑ, x with 0 < ϑ < 1 and x ≥ 2, there is exactly one real number α = α_ϑ(x) > 0 satisfying (1.8). We are now in a position to state our first result.
Theorem 1. Let 0 < ϑ < 1 be fixed. Then, for x ≥ 27, one has (1.9). As x → ∞, one also has (1.10).

This result calls for several comments. First we take y = x^ϑ in (1.7) and substitute the resulting equation into (1.10) to infer the relation (1.11). This is an analogue of (1.5), in a quantitative form of strength comparable to [2, Théorème 4.4].
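At a numerical level, the comparison between S_ϑ(x) and the nuclear count N(x, x^ϑ) is also easy to observe, although nothing asymptotic can be seen at small ranges. The following brute-force sketch (ours, not part of the paper) merely confirms the trivial bound S_ϑ(x) ≤ N(x, x^ϑ) mentioned above.

```python
def radical(m: int) -> int:
    """k(m): largest square-free divisor of m."""
    k, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            k *= d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        k *= m
    return k

def N(x: int, y: float) -> int:
    """Number of nuclear numbers m <= x with k(m) <= y."""
    return sum(1 for m in range(1, x + 1) if radical(m) <= y)

def S(theta: float, x: int) -> int:
    """S_theta(x): members of A(theta) not exceeding x."""
    return sum(1 for m in range(1, x + 1) if radical(m) <= m ** theta)

theta = 0.5
for x in (100, 1000, 5000):
    # m <= x and k(m) <= m**theta imply k(m) <= x**theta
    assert S(theta, x) <= N(x, x ** theta)
```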
Our second comment concerns the implicitly defined function α_ϑ(x). It originates in the Dirichlet series G(s) given in (1.12), which converges absolutely in Re s > 0, and therefore has no zeros in this half-plane. For real numbers σ > 0 we have G(σ) > 0. We may then define g(σ) = log G(σ). Note that g extends to a holomorphic function on the right half-plane, and one computes the logarithmic derivative of G(s) from the Euler product representation (1.12).
Note that the sum on the right-hand side here coincides with the sum in (1.8).
On differentiating again, it transpires that the real function g′ : (0, ∞) → R is increasing. Considering σ → 0 and σ → ∞, one finds that its range is the open interval (−∞, 0). We conclude that for a given v > 0 there is a unique positive number σ_v with g′(σ_v) = −v. In [2, Lemme 6.6] it is shown that (1.13) holds. Here we choose v = (1 − ϑ) log x and then have α_ϑ(x) = σ_v. In particular, we see that (1.9) and (1.13) imply (1.10). Thus, it only remains to prove (1.9).

Our last comment concerns the actual size of S_ϑ(x). This requires some more information on the function F. From [2, (2.12)] we have the asymptotic relation (1.14). By inserting (1.14) into (1.10), we deduce that there exists some positive number β(x; ϑ) > 0 with the property that for any fixed ϑ ∈ (0, 1) one has (1.15). Note that (1.15) yields another proof of (1.1).
We now turn to local estimates for S_ϑ(x). In our case, this amounts to comparing the respective behaviour of S_ϑ(zx) and S_ϑ(x) uniformly for large x, when z is in some sense sufficiently close to 1. Such estimates are often obtained with the saddle-point method, and we follow this route here, too. In a suitable range for z, the ratio S_ϑ(zx)/S_ϑ(x) may be approximated by a simple function of z.
Theorem 2. Let 0 < ϑ < 1. Then for x large, we have

Finally, we consider the counting function for a variation of the powered numbers. For given ϑ ∈ (0, 1) and Θ ∈ R, we consider the number S_ϑ,Θ(x) of natural numbers n ≤ x with k(n) ≤ n^ϑ (log n)^Θ. Note that S_ϑ(x) = S_ϑ,0(x). The set of integers n with k(n) ≤ n^ϑ (log n)^Θ plays a prominent role in a forthcoming paper, and we therefore provide an estimate for S_ϑ,Θ(x). It turns out that the conditions k(n) ≤ n^ϑ and k(n) ≤ n^ϑ (log n)^Θ are relatively close, and that the ratio S_ϑ,Θ(x)/S_ϑ(x) is roughly of size (log x)^Θ.

Theorem 3. Let 0 < ϑ < 1 and Θ ∈ R be fixed. Then for x large, one has

Proof of Theorem 1
In this section we derive Theorem 1. Before we embark on the main argument, we fix some notation and recall a pivotal result concerned with the distribution of square-free numbers. This involves the function ψ(m) as defined in (1.6), the Möbius function µ(m), and, for a parameter 0 ≤ γ ≤ 1/2 at our disposal, the product

One then has the estimate ([2, (10.1)])

(2.1)

that holds uniformly relative to the square-free number k and the real parameters z, γ in the ranges z ≥ 1, 0 ≤ γ ≤ 1/2.

The first steps of our argument follow the pattern laid out in [2, Sect. 10]. Unique factorisation shows that for all natural numbers n there exists exactly one pair of coprime natural numbers l, m with µ(l)² = 1 and n = l m k(m). Note that the two conditions (l, m) = 1 and µ(l)² = 1 are equivalent to the single condition µ(l k(m))² = 1. Further, one has k(n) = l k(m). With ϑ ∈ (0, 1) now fixed, it follows that S_ϑ(x) equals the number of (l, m) ∈ N² satisfying the conditions

These last three conditions we recast more compactly as

From now on, the number κ = ϑ/(1 − ϑ) features prominently, and we also put y = x^ϑ. Note that

Hence, we consider the ranges m ≤ x/y and x/y < m ≤ x separately. By (2.2) and (2.3), this leads to the decomposition

We apply (2.1) with k = k(m) to both inner sums and obtain

where

It turns out that R_1 and R_2 are small. In order to couch their estimation, as well as the analysis of other error terms that arise later, under the umbrella of a single treatment, we choose parameters γ and σ with 0 < γ < σ ≤ 1/2 and introduce the series

The conditions σ > γ > 0 ensure convergence. It is routine to show that

In fact, by Rankin's trick,

For m ≤ x/y one has m^κ ≤ y, and it follows that

The appearance of k(m) in the summation conditions on the right-hand sides of (2.5) and (2.6) is a nuisance, and we proceed by removing these. If the condition k(m) ≤ m^κ is removed from the sum in (2.5), one imports an error no larger than

Here, the last inequality is obtained by the argument that
completed the estimation of R_1. Similarly, if the condition m k(m) ≤ x is removed from the summation condition in (2.6), then the resulting error does not exceed

Collecting together, we deduce from (2.5) and (2.6) the asymptotic relations

and by (2.4), we infer that

Note that the sum on the right is a partial sum of a convergent series. If one completes the sum, then it is immediate that the error thus imported is bounded by R, and hence by E. We have now reached the provisional expansion

(2.9)

It remains to estimate E. In its definition (2.7), we encounter a sum over a multiplicative function, and so

As on an earlier occasion, we write v = (1 − ϑ) log x = log(x/y), and then choose σ = σ_v and γ = σ_v − 1/log y. By (1.12) one has

Now, on recalling (1.13),

while one also has

On collecting together, this shows that

From [2, (2.11)] we deduce that

and hence

With the choice of y and v, one has σ_v = α_ϑ(x). Moreover, log y and v have the order of magnitude log x, so that the last inequality now reads

Our final task is to compare our estimate for E with the size of the sum on the right of (2.9).
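The unique factorisation n = l m k(m) that opens this section is straightforward to verify by machine. In the sketch below (our illustration), l collects the primes dividing n exactly once and m takes p^{a−1} from each prime power p^a ∥ n with a ≥ 2; the code checks that this pair satisfies all stated conditions and that the map n ↦ (l, m) is injective.

```python
def factor(n: int) -> dict:
    """Prime factorisation of n as {prime: exponent}, by trial division."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def radical(n: int) -> int:
    """k(n): product of the distinct prime factors of n."""
    r = 1
    for p in factor(n):
        r *= p
    return r

def decompose(n: int):
    """The unique pair (l, m): l square-free, (l, m) = 1, n = l * m * k(m)."""
    l = m = 1
    for p, a in factor(n).items():
        if a == 1:
            l *= p                  # primes dividing n exactly once go into l
        else:
            m *= p ** (a - 1)       # higher prime powers contribute to m
    return l, m

seen = set()
for n in range(1, 2001):
    l, m = decompose(n)
    assert n == l * m * radical(m)                    # the factorisation itself
    assert radical(l * radical(m)) == l * radical(m)  # l k(m) is square-free
    assert radical(n) == l * radical(m)               # k(n) = l k(m)
    seen.add((l, m))
assert len(seen) == 2000                              # the map is injective
```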
Recall that in view of (1.1) and (1.11), S_ϑ(x) and N(x, x^ϑ) are of comparable size. In order to mimic the estimate (1.11), we introduce the function H_ϑ(x), so that (2.9) now reads S_ϑ(x) = x^ϑ H_ϑ(x) + O(E). Our aim is to estimate H_ϑ(x) by the saddle-point method, and to describe more precisely the behaviour of H_ϑ(x)/F((1 − ϑ) log x) as x → +∞.
After a linear change of variable in t, we arrive at

Recall again that v = (1 − ϑ) log x, and that α_ϑ(x) = σ_v. We take σ = ϑ + (1 − ϑ)σ_v. For large x one then has 0 < σ < 1, and the previous formula for H_ϑ(x) becomes

After truncation, we have

Moreover, following the proof of [2, Théorème 8.6], we set

and recall [2, Lemme 8.5], asserting that for some c > 0 we have

It now follows that

Setting

we have

where

By Taylor expansion, we infer

where

Now, still following the pattern of the proof of [2, Théorème 8.6], one is led to

We omit the details. From [2, Théorème 8.6] we import the relation

and the lemma follows.
We may now complete the proof of Theorem 1. Using the lemma and (1.13), we obtain

so that the estimate (2.10) implies

We then have

We may now replace H_ϑ(x) by the estimate from Lemma 1. Since

x^{−ϑα_ϑ(x)} (log x)^{9/4} (log log x)^{3/4} ≪ (log log x)/(log x),

this yields (1.9). As remarked earlier, (1.10) follows from (1.13) and (1.9). The proof of Theorem 1 is complete.

Proof of Theorem 2
Subject to the hypotheses of Theorem 2, when x is large, one has log zx ≍ log x. Hence, Theorem 1 implies that

holds uniformly for |log z| ≪ log log x. We recall [2, Proposition 8.7]. This asserts that, uniformly for |t| ≤ v^{3/4}(log v)^{1/4}, one has

Using this estimate with v = (1 − ϑ) log x and t = (1 − ϑ) log z, one finds

Moreover, in the ranges for x and z considered here, one has

which yields an admissible error term.
It may be worth pointing out that the above argument actually proves a little more. A close inspection of the proof of Theorem 2 shows that the estimate holds uniformly in the range z > 0, x > 27, |log z| ≤ (log x)^{3/4}(log log x)^{1/4}.

Proof of Theorem 3
Before proving Theorem 3, we briefly sketch the main steps. We first choose a suitable real number U = U(x) such that log U = (log x)(1 + o(1)), and count the integers n not exceeding x such that k(n) ≤ n^ϑ (log U)^Θ. The first step is to show that the number of these integers is essentially S_ϑ(x) multiplied by (log x)^Θ. The second step is to prove that the number of these integers is close to S_ϑ,Θ(x).
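The counting function S_ϑ,Θ(x) appearing in this argument can be tabulated by brute force. The following Python sketch is ours; it starts at n = 2 to avoid log 1 = 0, and checks that Θ = 0 recovers S_ϑ(x) while Θ > 0 weakens the condition for every n ≥ 3.

```python
import math

def radical(m: int) -> int:
    """k(m): largest square-free divisor of m."""
    k, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            k *= d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        k *= m
    return k

def S_mod(theta: float, Theta: float, x: int) -> int:
    """Number of 2 <= n <= x with k(n) <= n**theta * (log n)**Theta."""
    return sum(1 for n in range(2, x + 1)
               if radical(n) <= n ** theta * math.log(n) ** Theta)

# Theta = 0 recovers S_theta(x) (n = 1 is excluded on both sides here)
assert S_mod(0.5, 0.0, 1000) == sum(
    1 for n in range(2, 1001) if radical(n) <= n ** 0.5)
# for Theta > 0 the condition is weaker for every n >= 3
assert S_mod(0.5, 1.0, 1000) >= S_mod(0.5, 0.0, 1000)
```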
In light of this description, for any x ≥ 1 and any z > 0, we let B(x, z) denote the number of natural numbers n ≤ x with k(n) ≤ n^ϑ z.

Theorem 4. Let 0 < ϑ < 1 be fixed. Then for x large, one has

uniformly for z > 0 with |log z| ≪ log log x.
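With B(x, z) denoting the number of n ≤ x with k(n) ≤ n^ϑ z, a brute-force implementation (ours, for illustration only) makes the normalisation in Theorem 4 transparent: z = 1 recovers S_ϑ(x), and B(x, z) is nondecreasing in z.

```python
def radical(m: int) -> int:
    """k(m): largest square-free divisor of m."""
    k, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            k *= d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        k *= m
    return k

def B(theta: float, x: int, z: float) -> int:
    """Number of n <= x with k(n) <= n**theta * z."""
    return sum(1 for n in range(1, x + 1) if radical(n) <= n ** theta * z)

theta = 0.5
S_x = sum(1 for n in range(1, 1001) if radical(n) <= n ** theta)
assert B(theta, 1000, 1.0) == S_x                  # z = 1 gives S_theta(x)
assert B(theta, 1000, 2.0) >= B(theta, 1000, 1.0)  # monotone in z
```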
Proof. We follow very closely the proof of Theorem 1, which corresponds to the case z = 1. We redefine the meaning of y, now set to y = x^ϑ z, and keep the notation κ and E. Note that in the remainder of this proof, the error term E is to be interpreted with the current choices of x and y.
Finally we estimate the term S_ϑ(xz^{−κ−1}) by Theorem 2. Inserting this in the estimate for B(x, z), and noticing that in the main term the exponent (1 + κ) − (1 + κ)ϑ of z equals 1, gives the expected result.
We may now complete the proof of Theorem 3. It is sufficient to prove the result for Θ ≠ 0, since in the case Θ = 0 one has S_ϑ,Θ(x) = S_ϑ(x). We now fix U = U(x) as described at the beginning of this section.

First consider the case Θ > 0. Any integer n counted by S_ϑ,Θ(x) satisfies k(n) ≤ n^ϑ (log x)^Θ, whence S_ϑ,Θ(x) ≤ B(x, (log x)^Θ). A lower bound is obtained by noticing that the set of integers U < n ≤ x counted in S_ϑ,Θ(x) contains the integers with k(n) ≤ n^ϑ (log U)^Θ. These deliberations yield the inequalities

B(x, (log U)^Θ) − B(U, (log U)^Θ) ≤ S_ϑ,Θ(x) ≤ B(x, (log x)^Θ).

Now, using Theorem 4 to estimate B(x, (log U)^Θ) and B(x, (log x)^Θ), and then replacing (log U)^Θ by (log x)^Θ at the price of an admissible error term, one obtains the main term in the estimate for S_ϑ,Θ(x). For the remaining term, Theorem 4 and the definition of U imply that

B(U, (log U)^Θ) ≪ (log U)^Θ S_ϑ(U) ≪ (log x)^Θ S_ϑ(U).
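For Θ > 0 the sandwich B(x, (log U)^Θ) − B(U, (log U)^Θ) ≤ S_ϑ,Θ(x) ≤ B(x, (log x)^Θ) is exact for every x and every choice of U < x, not merely asymptotic, so it can be verified directly. The Python sketch below is ours; the sample values ϑ = 1/2, Θ = 1, U = 50, x = 500 are arbitrary, and both counts start at n = 2 to avoid log 1 = 0.

```python
import math

def radical(m: int) -> int:
    """k(m): largest square-free divisor of m."""
    k, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            k *= d
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        k *= m
    return k

def B(theta: float, x: int, z: float) -> int:
    """Number of 2 <= n <= x with k(n) <= n**theta * z."""
    return sum(1 for n in range(2, x + 1) if radical(n) <= n ** theta * z)

def S_mod(theta: float, Theta: float, x: int) -> int:
    """Number of 2 <= n <= x with k(n) <= n**theta * (log n)**Theta."""
    return sum(1 for n in range(2, x + 1)
               if radical(n) <= n ** theta * math.log(n) ** Theta)

theta, Theta, U, x = 0.5, 1.0, 50, 500
w = math.log(U) ** Theta
assert B(theta, x, w) - B(theta, U, w) <= S_mod(theta, Theta, x)
assert S_mod(theta, Theta, x) <= B(theta, x, math.log(x) ** Theta)
```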

and since the sum ∑_p 1/(p log p) converges,