$\begingroup$

Let $\mu$ denote the Möbius function, and define the functions $\mu_{-}$ and $\mu_{+}$ by $\mu_{-}(n):=\frac{\mu(n)^{2}-\mu(n)}{2}$ and $\mu_{+}(n):=\frac{\mu(n)^{2}+\mu(n)}{2}$. Let $M_{-}$ be the summatory function of $\mu_{-}$ and $M_{+}$ the summatory function of $\mu_{+}$. One has $M(x)=M_{+}(x)-M_{-}(x)$, where $M$ is the summatory function of $\mu$, and $\hat{M}(x):=M_{+}(x)+M_{-}(x)$ is the number of squarefree integers below $x$; moreover $\hat{M}(x)=\frac{1}{\zeta(2)}x+O(\sqrt{x})$. Let $\alpha$ and $\beta$ be such that $M_{-}(x)=\frac{1}{2\zeta(2)}x+O(x^{\alpha})$ and $M_{+}(x)=\frac{1}{2\zeta(2)}x+O(x^{\beta})$. If one can take $\alpha=\beta$, then both exponents can be taken equal to $\frac{1}{2}$, and thus one would have $M(x)\ll\sqrt{x}$, which would imply RH. What are the best known (unconditional) error terms for $M_{+}$ and $M_{-}$? Is there any evidence, aside from RH, that they should be the same up to a bounded multiplicative factor?
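(As a sanity check, the identities above are easy to verify numerically. Here is a pure-Python sketch, with ad-hoc names, that sieves $\mu$ up to $X$ and checks $M(X)=M_{+}(X)-M_{-}(X)$ and $\hat{M}(X)=M_{+}(X)+M_{-}(X)$.)

```python
# Sketch: sieve the Möbius function and check the two identities numerically.
# All names (mobius_sieve, M_plus, ...) are ad hoc, not standard notation.
def mobius_sieve(limit):
    """Return a table mu[0..limit] of Möbius values."""
    mu = [1] * (limit + 1)
    mu[0] = 0
    sieve = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if sieve[p]:  # p is prime
            for m in range(p, limit + 1, p):
                sieve[m] = (m == p)
                mu[m] = -mu[m]          # one more prime factor: flip the sign
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0               # p^2 divides m: not squarefree
    return mu

X = 10_000
mu = mobius_sieve(X)
M_plus = sum(1 for n in range(1, X + 1) if mu[n] == 1)    # M_+(X)
M_minus = sum(1 for n in range(1, X + 1) if mu[n] == -1)  # M_-(X)
M = sum(mu[1:])                                           # M(X)
Q = sum(1 for n in range(1, X + 1) if mu[n] != 0)         # \hat{M}(X): squarefree count

assert M == M_plus - M_minus   # M = M_+ - M_-
assert Q == M_plus + M_minus   # \hat{M} = M_+ + M_-
print(M_plus, M_minus, M, Q)
```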
Thanks in advance.

$\endgroup$
  • $\begingroup$ You can read asymptotics for $M_{+}$ and $M_{-}$ directly from those for $M$ and $\hat{M}$. The error terms for $M_{+}$ and $M_{-}$ are therefore $\frac{1}{2} M(x) + O(x^{1/2})$ and $-\frac{1}{2} M(x) + O(x^{1/2})$, respectively. Hence, the error terms have the same order of magnitude (unless $M(x) \ll \sqrt{x}$, which contradicts Gonek's conjecture). The best unconditional bound on $M(x)$ was given by Ivić and states that $$M(x) = O\left(x \exp\left(-c_{1} \log^{3/5} x (\log \log x)^{-1/5}\right)\right).$$ See the paper by Nathan Ng here. $\endgroup$
    – Jeremy Rouse
    Aug 5, 2014 at 17:51
  • $\begingroup$ @JeremyRouse: I think your comment would make a fine answer. $\endgroup$
    – GH from MO
    Aug 5, 2014 at 19:10
  • $\begingroup$ All right. I'll turn it into an answer. $\endgroup$ Aug 5, 2014 at 19:49
  • $\begingroup$ The link in Jeremy's comment is broken, the correct link is in the answer. $\endgroup$
    – David Roberts
    Mar 29, 2022 at 1:13

1 Answer

$\begingroup$

As I say in the comment, the asymptotics for $M_{+}$ and $M_{-}$ follow directly from those for $M$ and $\hat{M}$. Therefore $M_{+}(x) = \frac{1}{2 \zeta(2)} x + \frac{1}{2} M(x) + O(x^{1/2})$ and $M_{-}(x) = \frac{1}{2 \zeta(2)} x - \frac{1}{2} M(x) + O(x^{1/2})$. These error terms will have the same order of magnitude unless $M(x)$ is small.
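(To spell out the "directly": solving the two defining identities $M = M_{+} - M_{-}$ and $\hat{M} = M_{+} + M_{-}$ for $M_{\pm}$ gives $$M_{\pm}(x) = \frac{\hat{M}(x) \pm M(x)}{2} = \frac{1}{2\zeta(2)} x \pm \frac{1}{2} M(x) + O(x^{1/2}),$$ using $\hat{M}(x) = \frac{1}{\zeta(2)} x + O(\sqrt{x})$.)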

If $M(x) \ll \sqrt{x}$ for all $x$, then Gonek's conjecture mentioned in the paper of Ng is false. However, there will probably be infinitely many $x$ for which $M(x)$ is small, and for these $x$ it will be less clear what the size of the error terms of $M_{+}$ and $M_{-}$ will be. (Corollary 1 of Ng's paper, which is very conditional, implies among other things that the logarithmic density of $x$ for which $M(x) \leq \epsilon \sqrt{x}$ tends to zero as $\epsilon \to 0$.)

Finally, the best unconditional bound on $M(x)$ was given by Ivić in his book "The Riemann Zeta-Function" and states that $$ M(x) = O\left(x \exp\left(-c_{1} (\log^{3/5} x) (\log \log x)^{-1/5}\right)\right). $$ (The best conditional bound was given by Soundararajan in 2009.)

$\endgroup$
