MAT 338 Day 42 2011

2385 days ago by kcrisman

Hooray!  We made it.  You did a great job making it through the whole arc of number theory that is accessible at the undergraduate level.

 

And we really did see a lot of the problems out there, and even solved many of them.   Here are some that we did not see all the way through, though we were able to prove some things about them.

  • Solving higher-degree polynomial congruences, like $x^3\equiv a\pmod{n}$.
  • Finding integer points on hard curves like the Pell (hyperbola) equation $x^2-ny^2=1$.
  • Writing a number not just in terms of a sum of squares, but a sum of cubes, or a sum like $x^2+7y^2$.
  • The Prime Number Theorem, and finding ever better approximations to $\pi(x)$.

It's this last one that I want to focus on today.  Recall Gauss' estimate for $\pi(x)$: the logarithmic integral function $Li(x)$.

@interact
def _(n=(1000,(1000,10^6))):
    P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$')
    P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$')
    show(P)
       


It wasn't too bad.  But we were hoping we could get a little closer. So, among several other things, we tried $$Li(x)-\frac{1}{2}Li(\sqrt{x})\; .$$  And this was indeed better.

@interact
def _(n=(1000,(1000,10^6))):
    P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$')
    P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$')
    P += plot(lambda x: Li(x)-.5*Li(sqrt(x)),n-1000,n,color='red',legend_label='$Li(x)-\\frac{1}{2}Li(\sqrt{x})$')
    show(P)
       


So one might think one could keep adding and subtracting terms of the form $$\frac{1}{n}Li(x^{1/n})$$ to get even closer, continuing the pattern begun above.

As it turns out, that is not quite the right pattern.  In fact, the minus sign comes from $\mu(2)=-1$, not from $(-1)^{2+1}$, as is usually the case in alternating series like this!
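For reference, here is a minimal plain-Python sketch of the Moebius function (the worksheet itself uses Sage's built-in moebius; this helper is my own), showing that the signs $\mu(j)$ are exactly the coefficients in the pattern:

```python
def moebius(n):
    """Return mu(n): 0 if a squared prime divides n,
    else (-1)^(number of distinct prime factors)."""
    if n == 1:
        return 1
    count = 0
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:   # p^2 divided the original n
                return 0
            count += 1
        else:
            p += 1
    if n > 1:                # leftover prime factor
        count += 1
    return (-1) ** count

# Coefficients mu(j)/j of Li(x^(1/j)) in the corrected estimate:
coeffs = [(j, moebius(j)) for j in range(1, 7)]
```

Note that $\mu(4)=0$, so the $Li(x^{1/4})$ term simply drops out, which $(-1)^{j+1}$ could never produce.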

@interact
def _(n=(1000,(1000,10^6)),k=(3,[1..10])):
    P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$')
    P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$')
    F = lambda x: sum([Li(x^(1/j))*moebius(j)/j for j in [1..k]])
    P += plot(lambda x: Li(x)-.5*Li(sqrt(x)),n-1000,n,color='red',legend_label='$Li(x)-\\frac{1}{2}Li(\sqrt{x})$')
    P += plot(F,n-1000,n,color='blue',legend_label='$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%k)
    show(P)
       


However, it should be just as plain that this approximation doesn't really add a lot beyond $k=3$.  In fact, at $x=1000000$, just going through $k=3$ gets you within one of $\sum_{j=1}^\infty\frac{\mu(j)}{j}Li(x^{1/j})$.  So this is not enough to get a computable, exact formula for $\pi(x)$.

Questions this might raise:

  • So where does this Moebius $\mu$ come from anyway?  
  • What else is there to the error $$\left|\pi(x)-Li(x)\right|$$ anyway?
  • What does this have to do with winning a million dollars?
  • Are there connections with things other than just $\pi(x)$?

We will answer these questions in this last lecture.

@interact
def _(k=(3,[2..11])):
    F = lambda x: sum([Li(x^(1/j))*moebius(j)/j for j in [1..k]])
    T = [['$i$','$\pi(i)$','$Li(i)$','$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%k,'$\pi(i)-Li(i)$','$\pi(i)-\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%k]]
    for i in [100000,200000..1000000]:
        T.append([i,prime_pi(i),Li(i).n(digits=7),F(i).n(digits=7),(prime_pi(i)-Li(i)).n(digits=4),(prime_pi(i)-F(i)).n(digits=4)])
    html.table(T,header=True)
       


This table shows the errors in Gauss' estimate and in our new one, for every hundred thousand up to a million.  Clearly Gauss' estimate is not exact, but the new one is not always right on either.  

After the PNT was proved, mathematicians wanted to get a better handle on the error in the PNT.  In particular, the Swedish mathematician Helge von Koch made a very interesting contribution in 1901.

Conjecture: The error in the PNT is less than $$\frac{1}{8\pi}\sqrt{x}\ln(x)\; .$$

This seems to work, broadly speaking.

@interact
def _(n=(1000,(1000,10^6))):
    P = plot(prime_pi,n-1000,n,color='black',legend_label='$\pi(x)$')
    P += plot(Li,n-1000,n,color='green',legend_label='$Li(x)$')
    P += plot(lambda x: Li(x)-1/(8*pi)*sqrt(x)*log(x),n-1000,n,color='blue',linestyle='--',legend_label="Von Koch error estimate")
    show(P)
       


Given this data, the conjecture seems plausible, perhaps even improvable (though remember that $Li$ and $\pi$ switch places infinitely often!).  Of course, a conjecture is not a theorem.  He did have one, though.

Theorem: The error estimate above is equivalent to saying that $\zeta(s)$ equals zero precisely where Riemann thought it would be zero in 1859.

This may seem odd.  After all, $\zeta$ is just about reciprocals of all numbers, and can't directly measure primes.  But in fact, the original proofs of the PNT also used the $\zeta$ function in essential ways.  So von Koch was just formalizing the exact estimate it could give us on the error.

 

Indeed, Riemann was after bigger fish.  He didn't just want an error term.  He wanted an exact formula for $\pi(x)$, one that could be computed - by hand, or by machine, if such a machine came along - as close as one pleased.  And this is where $\zeta(s)$ becomes important, because of the Euler product formula:  $$\sum_{n=1}^{\infty} \frac{1}{n^s}=\prod_{p}\frac{1}{1-p^{-s}}$$
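As a quick sanity check on the Euler product (a standalone plain-Python snippet, not one of the worksheet's Sage cells), we can truncate both sides at $s=2$, where the common value should be $\zeta(2)=\pi^2/6$:

```python
import math

def is_prime(n):
    """Trial-division primality test, fine at this scale."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

s = 2

# Left side: partial sum of zeta(2).
lhs = sum(1 / n**s for n in range(1, 100001))

# Right side: Euler product over primes below 1000.
rhs = 1.0
for p in range(2, 1000):
    if is_prime(p):
        rhs *= 1 / (1 - p**(-s))
```

Both truncations land within about $10^{-3}$ of $\pi^2/6 \approx 1.6449$, which is as close as the cutoffs allow.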

Somehow $\zeta$ does encode everything we want to know about prime numbers.  And Riemann's paper, "On the Number of Primes Less Than a Given Magnitude", is the place where this magic really does happen, and seeing just how it happens is our goal to close the course.

 

We'll begin by plotting $\zeta$, to see what's going on.

plot(zeta,-10,10,ymax=10,ymin=-1) 
       

As you can see, $\zeta(s)$ doesn't seem to hit zero very often.  Maybe for negative $s$...

Wait a minute!  What is this plot?  Shouldn't $\zeta$ diverge if you put negative numbers in for $s$?  After all, then we'd get things like $$\sum_{n=1}^\infty n$$ for $s=-1$, and somehow I don't think that converges.

G = graphics_array([complex_plot(zeta, (-20,20), (-20,20)),complex_plot(lambda z: z, (-3,3),(-3,3))])
G.show(figsize=[8,8])
       

In fact, it turns out that we can evaluate $\zeta(s)$ for nearly any complex number $s$ we desire.  The graphic above color-codes where each complex number lands by matching it to the color in the second graphic.

The important point isn't the picture itself, but that there is a picture.  Yes, $\zeta$ can be defined for (nearly) any complex number as input.

One way to see this is by looking at each term $\frac{1}{n^s}$ in $\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}$.   If we let $s=\sigma+it$ (a long-standing convention, instead of $x+iy$), we can rewrite $$n^{-s}=e^{-s\ln(n)}=e^{-(\sigma+it)\ln(n)}=e^{-\sigma\ln(n)}e^{-it\ln(n)}=n^{-\sigma}\left(\cos(t\ln(n))-i\sin(t\ln(n))\right)$$ where the last step comes from something you may remember from calculus, and that is very easy to prove with Taylor series - that $$e^{ix}=\cos(x)+i\sin(x)\; .$$
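Here is a quick numerical check of that rewrite with Python's cmath (a standalone illustration; the particular values of $\sigma$, $t$, and $n$ are arbitrary):

```python
import cmath, math

n = 7
sigma, t = 0.5, 14.0
s = complex(sigma, t)

# n^(-s) computed directly as e^(-s ln n) ...
direct = cmath.exp(-s * math.log(n))

# ... and via the rewrite n^(-sigma) * (cos(t ln n) - i sin(t ln n)).
rewritten = n**(-sigma) * complex(math.cos(t * math.log(n)),
                                  -math.sin(t * math.log(n)))
```

The two agree to machine precision, exactly as the algebra above predicts.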

So at least if $\sigma>1$, since $\cos$ and $\sin$ always have absolute value at most one, we still have the same convergence properties as for the real series, if we take the real and imaginary parts separately - $$\sum_{n=1}^\infty\frac{\cos(t\ln(n))}{n^\sigma}-i\sum_{n=1}^\infty\frac{\sin(t\ln(n))}{n^\sigma}$$

That doesn't explain the part of the complex plane on the left of the picture above, and all I will say is that it is possible to do this, and Riemann did it. (In fact, Riemann also is largely responsible for much of advanced complex analysis.)

Let's get a sense for what the $\zeta$ function looks like.  First, a three-dimensional plot of its absolute value for $\sigma$ between 0 and 1 (which will turn out to be all that is important for our purposes).

plot3d(lambda x,y: abs(zeta(x+i*y)),(0,1),(-20,20),plot_points=100)+plot3d(0,(0,1),(-20,20),color='green',alpha=.5) 
       

To get a better idea of what happens, we look at the absolute value of $\zeta$ for different inputs.  Here, we look at $\zeta(\sigma+it)$, where $\sigma$ is the real part, chosen by you, and then we plot $t$ out as far as requested.   Opposite that is the line which we are viewing on the complex plane.

@interact
def _(sig=slider(.01, .99, .01, 0.5, label='$\sigma$'),end=slider(2,100,1,40,label='end of $t$')):
    p = plot(lambda t: abs(zeta(sig+t*i)), -end,end,rgbcolor=hue(0.7))
    q = complex_plot(zeta,(0,1),(-end,end),aspect_ratio=1/end)+line([(sig,-end),(sig,end)],linestyle='--')
    show(graphics_array([p,q]),figsize=[10,6])
       


You'll notice that the only places the function has absolute value zero (which means the only places it hits zero) are when $\sigma=1/2$.

Another (very famous) image is that of the parametric graph of each vertical line in the complex plane as mapped to the complex plane.  You can think of this as where an infinitely thin slice of the complex plane is 'wrapped' to.

@interact
def _(sig=slider(.01, .99, .01, 0.5, label='$\sigma$')):
    end = 30
    p = parametric_plot((lambda t: zeta(sig+t*i).real(),lambda t: zeta(sig+t*i).imag()), (0,end),rgbcolor=hue(0.7),plot_points=300)
    q = complex_plot(zeta,(0,1),(0,end),aspect_ratio=1/end)+line([(sig,0),(sig,end)],linestyle='--')
    show(graphics_array([p,q]),figsize=[10,6])
       


The reason this image is so famous is because the only time it seems to hit the origin at all is precisely at $\sigma=1/2$ - and there, it hits it lots of times.  Everywhere else it just misses, somehow.  

That is not one hundred percent true, because $\zeta$ is also zero at each negative even integer, but those zeros are well understood; the ones described here are the mysterious part.  And so we have the

 

Riemann Hypothesis:

$$\text{ All the zeros of }\zeta(s)=\zeta(\sigma+it)\text{ where }t\neq 0\text{ are on }\sigma=1/2\; .$$

 

The importance of this problem is evidenced by it having been selected as one of the seven Millennium Prize problems by the Clay Math Institute (each holding a million-dollar award), as well as having no fewer than three recent popular books devoted to it (including the best one from our mathematical perspective, Prime Obsession by John Derbyshire).

The rest of our time is dedicated to seeing why this might be related to the distribution of prime numbers, such as von Koch's result showing the RH is equivalent to a bound on the error $\left|\pi(x)-Li(x)\right|$.

 

We'll pursue this in three steps.

  • Our first step is to see the connection between $\pi(x)$ and $\mu(n)$.  
  • Then we'll see the connection between these and $\zeta$.
  • Finally, we'll see how the zeros of $\zeta$ come into play. 

 

def J(x):
    end = floor(log(x)/log(2))
    out = 0
    for j in [1..end]:
        out += 1/j*prime_pi(x^(1/j))
    return out
       
L1 = [(n,J(n)) for n in [1..20]]
L2 = [(n,J(n)) for n in [1..150]]
graphics_array([plot_step_function(L1),plot_step_function(L2)]).show(figsize=[10,5])
       

The function above is one Riemann called $f$, but which we (following Edwards and Derbyshire) will call $J(x)$.  It is very similar to $\pi(x)$ in its definition, so it's not surprising that it looks similar.

$$J(x)=\pi(x)+\frac{1}{2}\pi(\sqrt{x})+\frac{1}{3}\pi(\sqrt[3]{x})+\frac{1}{4}\pi(\sqrt[4]{x})+\cdots=\sum_{n=1}^\infty \frac{1}{n}\pi\left(x^{1/n}\right)$$

This looks like an infinite sum, but for any given $x$ it is actually finite.  For instance, we can see on the graph that $$J(20)=\pi(20)+\frac{1}{2}\pi(\sqrt{20})+\frac{1}{3}\pi(\sqrt[3]{20})+\frac{1}{4}\pi(\sqrt[4]{20})=8+\frac{2}{2}+\frac{1}{3}+\frac{1}{4}=9\frac{7}{12}$$ because $\sqrt[5]{20}\approx 1.8$ and $\pi(\sqrt[5]{20})\approx\pi(1.8)=0$, so the sum ends there.
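This finite computation is easy to replicate outside Sage; here is a plain-Python sketch (the helper names are mine, not the worksheet's), using exact fractions so that $9\frac{7}{12}$ comes out on the nose:

```python
from fractions import Fraction
import math

def prime_pi(x):
    """pi(x): count primes up to x by trial division (fine for small x)."""
    x = int(x)
    return sum(1 for n in range(2, x + 1)
               if all(n % d for d in range(2, math.isqrt(n) + 1)))

def J(x):
    """J(x) = sum over n of (1/n) * pi(x^(1/n)); stops once x^(1/n) < 2."""
    total, n = Fraction(0), 1
    while x ** (1.0 / n) >= 2:
        total += Fraction(1, n) * prime_pi(x ** (1.0 / n))
        n += 1
    return total
```

Running `J(20)` gives the exact fraction $115/12 = 9\frac{7}{12}$, matching the graph.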

 

Okay, so we have this new function.  Yet another arithmetic function.   So what?

Ah, but what have we been doing to all our arithmetic functions to see what they can do, to get formulas for them?  We've been Moebius inverting them, naturally!  In this case, Moebius inversion could be really great, since it would give us something about the thing being added - namely, $\pi(x)$.

The only thing standing in our way is that $$J(x)=\sum_{n=1}^\infty \frac{1}{n}\pi\left(x^{1/n}\right)$$ is not a sum over divisors.  But it turns out that, just as the sum over divisors $\sum_{d\mid n}\frac{1}{d}$ has the infinite-series analogue $\sum_{n=1}^\infty \frac{1}{n}$, Moebius inversion also has an infinite-series analogue.

Fact

If $\sum_{n=1}^\infty f(x/n)$ and $\sum_{n=1}^\infty g(x/n)$ both converge absolutely, then $$g(x)=\sum_{n=1}^\infty f(x/n)\Longleftrightarrow f(x)=\sum_{n=1}^\infty \mu(n)g(x/n)\; .$$

For us, we've just defined $g=J$ with $f(x/n)=\frac{1}{n}\pi\left(x^{1/n}\right)$, and so we get the very important result that $$\pi(x)=\sum_{n=1}^\infty \mu(n)\frac{J(x^{1/n})}{n}=J(x)-\frac{1}{2}J(\sqrt{x})-\frac{1}{3}J(\sqrt[3]{x})-\frac{1}{5}J(\sqrt[5]{x})+\frac{1}{6}J(\sqrt[6]{x})+\cdots$$

(If that last use of Moebius inversion looked a little sketchy, it does to me too, but I cannot find a single source where it's complained about that $f(x/n)=\frac{1}{n}\pi\left(x^{1/n}\right)$ is really a function of $x$ and $n$, not just $x/n$.  In any case, the result is correct, via a somewhat different explanation of this version of inversion in a footnote in Edwards.)
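Whatever the right justification, the inversion itself checks out numerically.  Here is a standalone plain-Python check with exact fractions (the helpers are my own, not the worksheet's Sage code):

```python
from fractions import Fraction
import math

def prime_pi(x):
    """pi(x) by trial division (fine at this scale)."""
    x = int(x)
    return sum(1 for n in range(2, x + 1)
               if all(n % d for d in range(2, math.isqrt(n) + 1)))

def moebius(n):
    """mu(n) via trial factorization."""
    if n == 1:
        return 1
    count = 0
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            count += 1
        else:
            p += 1
    return (-1) ** (count + (1 if n > 1 else 0))

def J(x):
    total, n = Fraction(0), 1
    while x ** (1.0 / n) >= 2:
        total += Fraction(1, n) * prime_pi(x ** (1.0 / n))
        n += 1
    return total

def pi_from_J(x):
    """Recover pi(x) = sum over n of mu(n)/n * J(x^(1/n))."""
    total, n = Fraction(0), 1
    while x ** (1.0 / n) >= 2:
        total += Fraction(moebius(n), n) * J(x ** (1.0 / n))
        n += 1
    return total
```

For example, `pi_from_J(100)` returns exactly $25 = \pi(100)$: the fractional pieces of the various $J$ values cancel perfectly.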

 

Now, this looks just as hopeless as before.  How is $J$ going to help us calculate $\pi$, if we can only calculate $J$ in terms of $\pi$ anyway?  

Ah, but here is where Riemann "turns the Golden Key", as Derbyshire puts it.  

  • We will now use the Euler product for $\zeta$ to connect $\zeta$ to $J$.
  • Then we will see how the zeros of $\zeta$ give us an exact formula for $J$.
  • Then we will finally plug $J$ back into the Moebius-inverted formula for $\pi$, giving an exact formula for $\pi$!
import mpmath
var('y')
L = lcalc.zeros_in_interval(10,150,0.1)
n = 100
P = plot(prime_pi,n-50,n,color='black',legend_label='$\pi(x)$')
P += plot(Li,n-50,n,color='green',legend_label='$Li(x)$')
G = lambda x: sum([mpmath.li(x^(1/j))*moebius(j)/j for j in [1..3]])
P += plot(G,n-50,n,color='red',legend_label='$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%3)
F = lambda x: sum([(mpmath.li(x^(1/j))-log(2)+numerical_integral(1/(y*(y^2-1)*log(y)),x^(1/j),oo)[0])*moebius(j)/j for j in [1..3]])-sum([(mpmath.ei(log(x)*((0.5+l[0]*i)/j))+mpmath.ei(log(x)*((0.5-l[0]*i)/j))).real for l in L for j in [1..3]])
P += plot(F,n-50,n,color='blue',legend_label='Really good estimate',plot_points=50)
show(P)
       

We can see above that this has the potential to be a very good approximation, even given that I did limited calculations here.  The most interesting thing is the gentle waves you should see; this is quite different from the other types of approximations we had, and seems to have the potential to mimic the more abrupt nature of the actual $\pi(x)$ function much better in the long run.

 

So let's see what this is.  First, let's connect $J$ and $\zeta$.

Recall the Euler product for $\zeta$ again:  $$\zeta(s)=\prod_{p}\frac{1}{1-p^{-s}}$$

The trick to getting information about primes out of this, as well as connecting to $J$, is to take the logarithm of the whole thing.   This will turn the product into a sum  - something we can work with much more easily: $$\ln(\zeta(s))=\sum_{p}\ln\left(\frac{1}{1-p^{-s}}\right)=\sum_{p}-\ln\left(1-p^{-s}\right)$$

If we just had the fraction, we could use the geometric series to turn this into a sum (in fact, that is where it came from), but we got rid of it.  

Question: So what can we do with $-\ln()$ of some sum, not a product?

Answer: We can use its Taylor series!  $$-\ln(1-x)=\sum_{k=1}^\infty \frac{x^k}{k}$$

Plug it in: $$\ln(\zeta(s))=\sum_{p}\sum_{k=1}^\infty \frac{(p^{-s})^k}{k}$$
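Before going on, it is worth checking numerically that nothing was lost in the rearranging: at $s=2$ the double sum really does converge to $\ln(\zeta(2))$ (a standalone plain-Python sketch, truncating both sides):

```python
import math

def primes_up_to(N):
    """Sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for p in range(2, math.isqrt(N) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, ok in enumerate(sieve) if ok]

s = 2

# ln(zeta(2)), with zeta(2) as a long partial sum.
zeta_s = sum(1 / n**s for n in range(1, 200001))

# The double sum over primes p and powers k of (p^-s)^k / k.
double_sum = sum((p**-s)**k / k
                 for p in primes_up_to(1000)
                 for k in range(1, 60))
```

The two agree to within the truncation error of about $10^{-4}$.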

Question: I don't see anything very useful here.  Can I get anything good out of this?

Answer: Yes, in two big steps.

First, we rewrite $\frac{(p^{-s})^k}{k}$ as an integral, so that we could add it up more easily.  

  • Standard Calculus II improper integral work shows that $$\frac{(p^{-s})^k}{k}=\frac{s}{k}\int_{p^k}^\infty x^{-s-1}dx$$ 
  • That means $$\ln(\zeta(s))=\sum_{p}\sum_{k=1}^\infty \frac{(p^{-s})^k}{k}=\sum_{p}\sum_{k=1}^\infty\frac{s}{k}\int_{p^k}^\infty x^{-s-1}dx=s\sum_{p}\sum_{k=1}^\infty\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx$$ 
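One can test the first bullet numerically with a crude midpoint rule (plain Python; the sample values $s=2$, $p=3$, $k=2$ are my arbitrary choice):

```python
# Check (p^(-s))^k / k = (s/k) * integral from p^k to infinity of x^(-s-1) dx.
s, p, k = 2, 3, 2
a = p**k                                  # lower endpoint p^k = 9

lhs = (p**(-s))**k / k                    # 3^(-4) / 2

# Midpoint rule out to a cutoff; the tail beyond 2000 is ~1e-7 for s = 2.
step, cutoff = 0.01, 2000.0
integral = 0.0
x = float(a)
while x < cutoff:
    integral += (x + step / 2) ** (-s - 1) * step
    x += step
rhs = (s / k) * integral
```

Both sides come out to $1/162 \approx 0.00617$, as the antiderivative computation promises.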

Next, it would be nice to get this as one big integral.  But of what function, and with what endpoints?

  • We could unify these integrals from $p^k$ to $\infty$ somewhat artificially, by writing $$\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx=\int_1^{p^k}\frac{1}{k}\cdot 0\cdot x^{-s-1}\; dx+\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx$$  That would give an integral from $1$ to $\infty$, though it would be defined with a piecewise integrand.  
  • Hmm, what function would I get if I added up all those piecewise integrands?  
    • Well, it would be a function which added $\frac{1}{1}x^{-s-1}$ when $x$ reached a prime $p$ - so it would include $$\pi(x)x^{-s-1}\ldots$$
    • It would add $\frac{1}{2}x^{-s-1}$ when it reached a square of a prime $p^2$, which is the same as adding it when $\sqrt{x}$ hits a prime - so it would include $$\frac{1}{2}\pi(\sqrt{x})x^{-s-1}\ldots$$
    • And it adds $\frac{1}{3}x^{-s-1}$ when $x$ reaches a cube of a prime, which is when $\sqrt[3]{x}$ hits a prime - which adds $$\frac{1}{3}\pi(\sqrt[3]{x})x^{-s-1}$$
  • In short, adding up all these piecewise integrands seems to give a big integrand $$\left(\pi(x)+\frac{1}{2}\pi(\sqrt{x})+\frac{1}{3}\pi(\sqrt[3]{x})+\cdots\right)x^{-s-1}$$
  • But this is $J(x)$, of course (multiplied by $x^{-s-1}$)!

Hence $$\ln(\zeta(s))=s\sum_{p}\sum_{k=1}^\infty\int_{p^k}^\infty \frac{1}{k}x^{-s-1}dx=s\int_1^\infty J(x)x^{-s-1}dx$$  This completes our first of the final goals - connecting $\zeta$ and $J$.

L = lcalc.zeros_in_interval(10,100,0.1)
[l[0] for l in L]
       
[14.1347251, 21.0220396, 25.0108576, 30.4248761, 32.9350616, 37.5861782,
40.9187190, 43.3270733, 48.0051509, 49.7738325, 52.9703215, 56.4462477,
59.3470440, 60.8317785, 65.1125441, 67.0798105, 69.5464017, 72.0671577,
75.7046907, 77.1448401, 79.3373750, 82.9103808, 84.7354930, 87.4252746,
88.8091112, 92.4918993, 94.6513440, 95.8706342, 98.8311942]

We see all the zeros for $\sigma=1/2$ between $0$ and $100$; there are 29 of them.

Our next goal is to see how this connection $$\ln(\zeta(s))=s\int_1^\infty J(x)x^{-s-1}dx$$ relates to the zeros of the $\zeta$ function (and hence the Riemann Hypothesis).

 

We will do this by analogy - albeit a very powerful one, which Euler used to prove $\zeta(2)=\frac{\pi^2}{6}$ and which, correctly done, does yield the right answer.

Recall basic algebra.  The Fundamental Theorem of Algebra states that every polynomial factors over the complex numbers.  For instance, $$f(x)=5x^3-5x=5(x-0)(x-1)(x+1)\; .$$  So we could then say that $$\ln(f(x))=\ln(5)+\ln(x-0)+\ln(x-1)+\ln(x+1)$$ Then if it turned out that $\ln(f(x))$ was useful to us for some other reason, it would be reasonable to say that we can get information about that other material from adding up information about the zeros of $f$ (and the constant $5$), because of the addition of $\ln(x-r)$ for all the roots $r$.
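In plain Python, the analogy on this example looks like the following (nothing here is specific to $\zeta$; it is just the log-of-a-factorization idea):

```python
import math

x = 3.0
f = 5 * x**3 - 5 * x                      # f(3) = 120

# log of the function versus the sum of logs over the constant and the roots
log_f = math.log(f)
sum_of_logs = (math.log(5) + math.log(x - 0)
               + math.log(x - 1) + math.log(x + 1))
```

The two values agree to machine precision, so information about $\ln(f)$ really is the sum of contributions from the roots of $f$ (plus the constant).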

You can't really do this with arbitrary functions, of course, and $\zeta$ doesn't work - for instance, because $\zeta(1)$ diverges badly, no matter how you define the complex version of $\zeta$.  

But it happens that $\zeta$ is very close to a function you can do that to, $(s-1)\zeta(s)$.   Applying the idea above to $(s-1)\zeta(s)$ (and doing lots of relatively hard complex integrals, or some other formal business with difficult convergence considerations) allows us to essentially invert $$\ln(\zeta(s))=s\int_1^\infty J(x)x^{-s-1}dx$$ to $$J(x)=Li(x)-\sum_{\rho}Li(x^\rho)-\ln(2)+\int_x^\infty\frac{dt}{t(t^2-1)\ln(t)}$$

 

It is hard to overstate the importance of this formula.  Each piece comes from something inside $\zeta$ itself, inverted in this special way.

  • $Li(x)$ comes from the fact that we needed $(s-1)\zeta(s)$ to apply this inversion, not just $\zeta(s)$. 
    • In fact, we can directly see this particular inversion, as it's true that $$s\int_1^\infty Li(x)x^{-s-1}dx=-\ln(s-1)$$ so one can see that $s-1$ and $Li$ seem to correspond.  (Unfortunately, Maxima, which does integration in Sage, cannot do this one so nicely yet.)
  • Each $Li(x^\rho)$ comes from each of the zeros of $\zeta$ on the line $\sigma=1/2$ in the complex plane.
  • The $\ln(2)$ comes from the constant when you do the factoring, like the $5$ in the example.
  • The integral comes from the zeros of $\zeta$ at $-2n$ I mentioned briefly above.

 

To give you a sense of how complicated this formula $$J(x)=Li(x)-\sum_{\rho}Li(x^\rho)-\ln(2)+\int_x^\infty\frac{dt}{t(t^2-1)\ln(t)}$$ really is, here is a plot of something related.  

import mpmath
parametric_plot((lambda t: mpmath.ei(log(20)*(0.5+i*RR(t))).real,lambda t: mpmath.ei(log(20)*(0.5+i*RR(t))).imag), (0,14.1),rgbcolor=hue(0.7),plot_points=300)+point((mpmath.ei(log(20)*(0.5+i*14.1)).real,mpmath.ei(log(20)*(0.5+i*14.1)).imag),color='red',size=20)
       

This is the plot of $$Li(20^{1/2+it})$$ up through the first zero of $\zeta$ above the real axis.    It's beautiful, but also forbidding.  After all, if it takes that much twisting and turning to get to $Li$ of the first zero, what is in store if we have to add up over all infinitely many of them to calculate $J(20)$?

 

Now we are finally ready to see Riemann's result, by plugging this formula for $J$ into the Moebius-inverted formula for $\pi$ given by $$\pi(x)=J(x)-\frac{1}{2}J(\sqrt{x})-\frac{1}{3}J(\sqrt[3]{x})-\frac{1}{5}J(\sqrt[5]{x})+\frac{1}{6}J(\sqrt[6]{x})+\cdots$$ Riemann did not prove it fully rigorously, and indeed one of the provers of the PNT mentioned that it took decades to prove all the statements Riemann made in this one paper, just so he could prove the PNT.  Nonetheless, it is certainly Riemann's formula for $\pi(x)$, and an amazing one:

$$\pi(x)=\sum_{n=1}^\infty \frac{\mu(n)}{n}\left[ Li(x^{1/n})-\sum_{\rho}\left(Li(x^{\rho/n})+Li(x^{\bar{\rho}/n})\right)+\int_{x^{1/n}}^\infty\frac{dt}{t(t^2-1)\ln(t)}\right]$$

Two points:

  • Here, $\rho$ is a zero above the real axis, and $\bar{\rho}$ is the corresponding one below the real axis.  
  • If you're wondering where $\ln(2)$ went, it went to 0 because $\sum_{n=1}^\infty \frac{\mu(n)}{n}=0$, though this is very hard to prove (in fact, it is a consequence of the PNT).
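As a quick illustration of that last claim (a plain-Python sketch; the sieve is my own helper, not the worksheet's code), the partial sums of $\sum \mu(n)/n$ do drift toward zero, if slowly:

```python
def moebius_sieve(N):
    """mu(1..N) via a linear (Euler) sieve."""
    mu = [1] * (N + 1)
    is_comp = [False] * (N + 1)
    primes = []
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p
                break
            mu[i * p] = -mu[i]
    return mu

N = 100000
mu = moebius_sieve(N)
partial = sum(mu[n] / n for n in range(1, N + 1))
```

By $N=10^5$ the partial sum is already tiny, though of course no finite computation proves the limit is $0$.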

 

Now let's see it in action.

import mpmath
var('y')
L = lcalc.zeros_in_interval(10,50,0.1)
@interact
def _(n=(100,(60,10^3))):
    P = plot(prime_pi,n-50,n,color='black',legend_label='$\pi(x)$')
    P += plot(Li,n-50,n,color='green',legend_label='$Li(x)$')
    G = lambda x: sum([mpmath.li(x^(1/j))*moebius(j)/j for j in [1..3]])
    P += plot(G,n-50,n,color='red',legend_label='$\sum_{j=1}^{%s}\\frac{\mu(j)}{j}Li(x^{1/j})$'%3)
    F = lambda x: sum([(mpmath.li(x^(1/j))-log(2)+numerical_integral(1/(y*(y^2-1)*log(y)),x^(1/j),oo)[0])*moebius(j)/j for j in [1..3]])-sum([(mpmath.ei(log(x)*((0.5+l[0]*i)/j))+mpmath.ei(log(x)*((0.5-l[0]*i)/j))).real for l in L for j in [1..3]])
    P += plot(F,n-50,n,color='blue',legend_label='Really good estimate',plot_points=50)
    show(P)
       


And this graphic shows just how good it can get.  Again, notice the waviness, which allows it to approximate $\pi(x)$ not just once per 'step' of the function, but along the steps.

 

We can also just check out some numerical values.

import mpmath
var('y')
L = lcalc.zeros_in_interval(10,300,0.1)
F = lambda x: sum([(mpmath.li(x^(1/j))-log(2)+numerical_integral(1/(y*(y^2-1)*log(y)),x^(1/j),oo)[0])*moebius(j)/j for j in [1..3]])-sum([(mpmath.ei(log(x)*((0.5+l[0]*i)/j))+mpmath.ei(log(x)*((0.5-l[0]*i)/j))).real for l in L for j in [1..3]])
F(300); prime_pi(300); Li(300.); Li(300.)-1/2*Li(sqrt(300.))-1/3*Li((300.)^(1/3))
       
mpf('62.169581705460772')
62
67.2884511014
62.1320949397

This is truly only the beginning of research in modern number theory.  

For instance, research in finding points on curves leads to more complicated series like $\zeta$, called $L$-functions.  There is a version of the Riemann Hypothesis for them, too!  

 

And it gives truly interesting, strange, and beautiful results - like the following result from the last year or two, with which we will end the course.

 

Let $r_{12}(n)$ denote the number of ways to write $n$ as a sum of twelve squares, just as $r(n)$ denoted the number of ways to write $n$ as a sum of two squares.  Here, order and sign both matter, so $(1,2)$ and $(2,1)$ and $(-2,1)$ are all different.

 

Theorem:

As $p$ ranges over the set of all prime numbers, the fractions $$\frac{r_{12}(p)-8(p^5+1)}{32p^{5/2}}$$ are distributed, in the long run, precisely according to the density $$\frac{2}{\pi}\sqrt{1-t^2}\; .$$  

def dist(v, b, left=float(0), right=float(pi)):
    """
    We divide the interval between left (default: 0) and right (default: pi)
    up into b bins.  For each number in v (which must lie between left and
    right), we find which bin it lies in and add this to a counter.  This
    function then returns the bins and the number of elements of v that lie
    in each one.

    ALGORITHM: To find the index of the bin that a given number x lies in,
    we multiply x by b/length and take the floor.
    """
    length = right - left
    normalize = float(b/length)
    d = dict([(i,0) for i in range(b)])
    for x in v:
        n = int(normalize*(float(x)-left))
        d[n] += 1
    return d, len(v)

def graph(d, b, num=5000, left=float(0), right=float(pi)):
    s = Graphics()
    left = float(left); right = float(right)
    length = right - left
    w = length/b
    k = 0
    for i, n in d.iteritems():
        k += n
        # ith bin has n objects in it.
        s += polygon([(w*i+left,0), (w*(i+1)+left,0),
                      (w*(i+1)+left, n/(num*w)), (w*i+left, n/(num*w))],
                     rgbcolor=(0,0,0.5))
    return s

def sin2():
    PI = float(pi)
    return plot(lambda x: (2/PI)*math.sin(x)^2, 0, pi,
                plot_points=200, rgbcolor=(0.3,0.1,0.1), thickness=2)

def sqrt2():
    PI = float(pi)
    return plot(lambda x: (2/PI)*math.sqrt(1-x^2), -1, 1,
                plot_points=200, rgbcolor=(0.3,0.1,0.1), thickness=2)

delta = delta_qexp(10^5)
       
@interact
def delta_dist(b=(20,(10..150)), number=(500,1000,..,delta.prec())):
    D = delta[:number]
    w = [float(D[p])/(2*float(p)^(5.5)) for p in prime_range(number + 1)]
    d, total_number_of_points = dist(w,b,float(-1),float(1))
    show(graph(d, b, total_number_of_points,-1,1) + sqrt2(), frame=True, gridlines=True)
       


Amazing!

 

This is based on the graphic below, kindly created for me by William Stein, the founder of Sage, whose research is directly related to such things.  

@interact
def delta_dist(b=(20,(10..150)), number=(500,1000,..,delta.prec()), verbose=False):
    D = delta[:number]
    if verbose: print "got delta"
    w = [float(D[p])/(2*float(p)^(5.5)) for p in prime_range(number + 1)]
    if verbose: print "normalized"
    v = [acos(x) for x in w]
    if verbose: print "arccos"
    d, total_number_of_points = dist(v,b)
    if verbose: print "distributed"
    show(graph(d, b, total_number_of_points) + sin2(), frame=True, gridlines=True)
       
