MAT 338 Day 37 2011


You'll notice that as we near the end of the course, we will be seeing more results, giving fewer full proofs, and expecting fewer calculations.  Don't worry!  Our goal, after all, is (as always) to see connections with the rest of mathematics.  We are really beginning to enter the most cutting-edge questions, and of necessity there is less that I can ask of students to reinforce it.  Enjoy the ride!

For today, our theme will be several of the many contributions of the great Russian mathematician Chebyshev (Чебышёв).  He made fundamental advances in this type of number theory as well as in statistics!  

One of his observations was that our familiar categories of primes - the classes $4k+1$ and $4k+3$ - don't always seem to have the same size.  

@interact
def _(n=7):
    # in Python 2, map(None, A, B) pairs up the two lists, padding the shorter one with None
    L = map(None,[p for p in prime_range(n+1) if p%4==1],[p for p in prime_range(n+1) if p%4==3])
    L = [['',l[1]] if l[0] is None else l for l in L]
    T = [['$p\equiv 1\\text{ mod }(4)$','$p\equiv 3\\text{ mod }(4)$']]
    html.table(T+L,header=True)
       


@interact
def _(k=100):
    p1 = 0
    p3 = 0
    for i in prime_range(k):
        if i%4==1:
            p1 += 1
        if i%4==3:
            p3 += 1
    html("Up to $k=%s$, there are"%k)
    html("%s primes $p\equiv 1\\text{ mod }(4)$ and "%p1)
    html("%s primes $p\equiv 3\\text{ mod }(4)$."%p3)
       


Do you detect the bias Chebyshev did?

Of course, for this question to make sense, we need to make sure this "prime race" won't suddenly run out of gas.  We know there are infinitely many primes, but what about each type of prime?

Proposition:  There are infinitely many primes congruent to 3 modulo 4.

Proposition: There are infinitely many primes congruent to 1 modulo 4.

Proof (of first proposition):

  • By contradiction: Let $p_1,p_2,\ldots,p_k$ be the (finite) set of primes congruent to 3 modulo 4.   
  • Let $$m=4p_1 p_2\cdots p_k - 1\; ,$$ the product of all these guys with four, then subtract one.  
  • What are the prime divisors of this number?
    • Clearly none of the $p_i$ can be a prime divisor, since $m$ is congruent to $-1$ modulo each of the $p_i$.
    • Yet $m$ is not even, so it's not just a power of 2.
    • But if $m$ were a product only of primes congruent to 1 modulo 4, then it would have to be 1 modulo 4 itself (since any product of numbers congruent to 1 is again congruent to 1).
    • This is false, since $m\equiv 3\text{ mod }(4)$, so there must be another prime congruent to 3 modulo 4 which divides it.
  • This contradicts our assumption of having the full set of such primes. (A quick numerical illustration of the construction follows.)
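
To see the construction in action (just an illustration, not part of the proof), pretend for a moment that $3$, $7$, and $11$ were the complete list of primes congruent to 3 modulo 4:

m = 4*3*7*11 - 1                     # the number m from the proof, built from the pretend 'complete' list
print("%s = %s"%(m, factor(m)))      # 923 = 13 * 71
print([p%4 for p, e in factor(m)])   # at least one prime factor (here 71) is 3 mod 4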

Proof (of the second proposition):  

  • Again, suppose there are finitely many primes $p_i$ which are congruent to 1 modulo 4.  
  • Let's form the modified product $$m=(2p_1 p_2\ldots p_k)^2+1\; .$$  
  • What are its prime divisors?
    • It is again clear that $m$ is odd and that it is not divisible by any of the $p_i$, for the same reasons as before.  
    • We can't directly use the same argument to show that one of the primes $p$ which divides $m$ is 1 modulo 4, because both $3^2$ and $1^2$ are congruent to 1 modulo 4, like $m$.  
    • However, $m=x^2+1\equiv 0\text{ mod }(p)$ for any prime divisor $p$ of $m$, where $x=2p_1 p_2\cdots p_k$.
    • So $-1$ is a quadratic residue modulo $p$!  
    • This can only happen if $p\equiv 1\text{ mod }(4)$, and such a $p$ cannot be one of the $p_i$.
  • This contradicts the assumption that we already had all such primes. (Again, a quick numerical illustration follows.)
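
Again purely as an illustration, pretend that $5$ and $13$ were the complete list of primes congruent to 1 modulo 4:

m = (2*5*13)^2 + 1                   # the number m from the second proof
print("%s = %s"%(m, factor(m)))      # here m = 16901 happens to be a prime itself
print([p%4 for p, e in factor(m)])   # every prime factor is 1 mod 4, and none of them is 5 or 13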

Notice that the first proof is quite elementary, and almost like the proof of the infinitude of primes, while the second one really does need something equivalent to the idea of a square root of $-1$ existing modulo some primes but not modulo others.  

Now, it looks like the $4k+3$ ones will always stay ahead, from what we've seen.  But that's not quite right.

def primes_up_to_k(k):
    p1 = 0
    p3 = 0
    for i in prime_range(k):
        if i%4==1:
            p1 += 1
        if i%4==3:
            p3 += 1
    html("Up to $k=%s$, there are"%k)
    html("%s primes $p\equiv 1\\text{ mod }(4)$ and "%p1)
    html("%s primes $p\equiv 3\\text{ mod }(4)$."%p3)
       
primes_up_to_k(26860); primes_up_to_k(26862); primes_up_to_k(26864); primes_up_to_k(26880) 
       
Up to $k=26860$, there are
1472 primes $p\equiv 1\text{ mod }(4)$ and
1472 primes $p\equiv 3\text{ mod }(4)$.
Up to $k=26862$, there are
1473 primes $p\equiv 1\text{ mod }(4)$ and
1472 primes $p\equiv 3\text{ mod }(4)$.
Up to $k=26864$, there are
1473 primes $p\equiv 1\text{ mod }(4)$ and
1473 primes $p\equiv 3\text{ mod }(4)$.
Up to $k=26880$, there are
1473 primes $p\equiv 1\text{ mod }(4)$ and
1474 primes $p\equiv 3\text{ mod }(4)$.

There are other places as well, and it can be fun to look for them.  The next such place comes a little past six hundred thousand, where the lead again lasts only a little while; after that, the next one is past twelve million.
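
If you want to hunt for these places yourself, here is a minimal sketch (the cutoff of $30000$ is chosen just to keep it fast) which prints every prime at which the count of $4k+1$ primes is strictly ahead:

p1 = 0
p3 = 0
for p in prime_range(3, 30000):   # start at 3, since 2 belongs to neither class
    if p%4 == 1:
        p1 += 1
    else:
        p3 += 1
    if p1 > p3:
        print("at %s the count is %s to %s in favor of 4k+1"%(p, p1, p3))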

Indeed, there is a theorem that there are infinitely many places where this will happen, and even that 

Fact: No matter how far out you go, there exists a place $x$ beyond it where the $4k+1$ team is ahead by at least $$\frac{1}{2}\frac{\sqrt{x}}{\ln(x)}\ln(\ln(\ln(x)))\; .$$

That this fact is highly nontrivial is seen in the following interact.  Even though we can see the difference surge to become positive a few times, it seems hopeless to ever reach even the extremely slow $\ln(\ln(\ln(x)))$.  But it does.

@interact
def _(k=26862):
    L = []
    p1 = 0
    p3 = 0
    for i in prime_range(k):
        if i%4==1:
            p1 += 1
            L.append([i,p1-p3])
        if i%4==3:
            p3 += 1
            L.append([i,p1-p3])
    P = plot(1/2*sqrt(x)/ln(x)*ln(ln(ln(x))),(x,10,k+10))
    P += plot_step_function(L)
    show(P)
       


We can check out other races, too.  What is the pattern here?

@interact
def _(k=100):
    p1,p3,p5,p7 = 0,0,0,0
    L1 = []
    L3 = []
    L5 = []
    L7 = []
    for i in prime_range(k):
        if i%8==1:
            p1 += 1
            L1.append([i,p1])
        elif i%8==3:
            p3 += 1
            L3.append([i,p3])
        elif i%8==5:
            p5 += 1
            L5.append([i,p5])
        elif i%8==7:
            p7 += 1
            L7.append([i,p7])
    L1.append([k,p1])
    L3.append([k,p3])
    L5.append([k,p5])
    L7.append([k,p7])
    P = Graphics()
    P += plot_step_function(L1,color='red',legend_label='1 mod (8)')
    P += plot_step_function(L3,color='green',legend_label='3 mod (8)')
    P += plot_step_function(L5,color='blue',legend_label='5 mod (8)')
    P += plot_step_function(L7,color='orange',legend_label='7 mod (8)')
    show(P,xmin=max(0,k-1000),ymin=max(0,L1[-1][1]-100))
       


It turns out there are several types of theorems/conjectures one can make about such races.  The key is to recognize that the 'slow' ones are the residue classes $[a]$ modulo $n$ that contain perfect squares; in our case, every odd square is congruent to $1$ modulo $4$ (and indeed modulo $8$), so $4k+1$ and $8k+1$ are the slow teams (see the quick check after the list below).

  • The residue classes with perfect squares do get ahead in the race infinitely often.
  • But if you 'count right' (and assume something else technical but important), the proportion of the time they are ahead in the race is very small.
  • Nonetheless, $$\lim_{x\to\infty}\frac{\text{Number of }p\equiv a\text{ mod }(n)\text{ less than }x}{\text{Number of }p\equiv b\text{ mod }(n)\text{ less than }x}=1\; \text{ for any }a,b\text{ each coprime to }n\; ,$$ so they can't get too far away from each other.
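
Here is the quick check promised above - a look at which residue classes contain the odd perfect squares:

for n in [4, 8]:
    classes = sorted(set([k^2 % n for k in range(1, n, 2)]))
    print("odd squares modulo %s all lie in the class(es) %s"%(n, classes))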

 

Let's now look at the best prime race of all - of $\pi(x)$ against its approximations! Since we saw that $x/\ln(x)$ didn't seem to be as good an approximation, we'll leave it out for now.

@interact
def _(n=100):
    P = plot(prime_pi,3,n,color='black',legend_label='$\pi(x)$')
    P += plot(Li,3,n,color='green',legend_label='$Li(x)$')
    show(P,xmin=max(n-1000,0),ymin=prime_pi(max(n-1000,0)))
       


It seems pretty clear that $Li(x)$, even if it's a good approximation, will not ever be less than the actual count of primes.  But the English mathematician Littlewood (who is also responsible for some of the first results in the prime races above) proved the following analogous result:

Fact:  For any number $x$, there is an $x'>x$ such that $$Li(x')<\pi(x')\; .$$

Fact: The first time this happens is no higher than $$10^{10^{10^{1000}}}\; .$$

This bound originally had a $34$ in the last spot, but that version required a special assumption (the Riemann Hypothesis); the bound above needs no such assumption.  In fact, today we know that

Fact: The first time this happens is no higher than $$1.4\times 10^{316}\; .$$

 

This sounds terrible, but it is actually good news.  After all, if $\pi$ beats $Li$ only once in a long while, then $Li$ must indeed be a great approximation.  And indeed this is true.

Prime Number Theorem: If $\pi(x)$ is the number of primes $p\leq x$, then $$\lim_{x\to\infty}\frac{\pi(x)}{x/\log(x)}=1\, .$$  Equivalently, $$\lim_{x\to\infty}\frac{\pi(x)}{Li(x)}=1\; .$$
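
The first ratio approaches $1$ quite slowly; as a quick numerical illustration (nothing more), here are both ratios for a few values of $x$ (called N below so as not to clobber the symbolic variable x):

for N in [10^4, 10^5, 10^6, 10^7]:
    r1 = (prime_pi(N)/(N/log(N))).n(digits=5)
    r2 = (prime_pi(N)/Li(N)).n(digits=5)
    print("x = %s: pi(x)/(x/log(x)) = %s, pi(x)/Li(x) = %s"%(N, r1, r2))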

This was proved, about 100 years after Gauss' initial investigations, by the French mathematician Jacques Hadamard and the Belgian mathematician Charles-Jean de la Vallée Poussin.  They made good use of the analytic methods we are slowly approaching.

Here are some of the many possible better approximations to $\pi(x)$ that come out of this sort of thinking.  Later, we'll see more of this.

@interact
def _(n=100):
    P = plot(prime_pi,3,n,color='black',legend_label='$\pi(x)$')
    P += plot(Li,3,n,color='green',legend_label='$Li(x)$')
    P += plot(lambda x: Li(x)-sqrt(prime_pi(x)),3,n,color='orange',legend_label='$Li(x)-\sqrt{\pi(x)}$')
    P += plot(lambda x: Li(x)-.5*Li(sqrt(x)),3,n,color='red',legend_label='$Li(x)-\\frac{1}{2}Li(\sqrt{x})$')
    P += plot(lambda x: Li(x)-sqrt(x)/ln(x),3,n,color='purple',legend_label='$Li(x)-\sqrt{x}/\ln(x)$')
    show(P,xmin=max(n-1000,0),ymin=prime_pi(max(n-1000,0)))
       


Chebyshev made a number of contributions to these questions.  The first was to prove a conjecture known (even today!) as Bertrand's Postulate.

Fact: For any integer $n\geq 2$, there is a prime between $n$ and $2n$.

@interact
def _(n=25):
    # next_prime(n) is the smallest prime larger than n; by Bertrand's Postulate it is less than 2n
    html("$%s$ is a prime between $%s$ and $%s$"%(next_prime(n),n,2*n))
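
The cell above only displays a single prime; here is a minimal brute-force check (certainly not a proof) that next_prime(n) really is less than $2n$ for every $n$ from $2$ to $5000$:

exceptions = [n for n in range(2, 5001) if next_prime(n) >= 2*n]
print("counterexamples for n from 2 to 5000: %s"%exceptions)   # should print an empty list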
       


More immediately germane to our task of looking at $\pi(x)$ and its value, Chebyshev proved the first substantial result on the way to the Prime Number Theorem, validating Legendre's intuition.

Theorem: It is true both that $\pi(x)$ is $O(\frac{x}{\ln(x)})$ and that $\frac{x}{\ln(x)}$ is $O(\pi(x))$.

See the exercises to see that this is not the same as the Prime Number Theorem!

We can at least show the gist of a small piece of this:

Proposition: For big enough $x$,  $$\pi(x)<2\frac{x}{\ln(x)}\; .$$

Proof Sketch:

It's not hard to verify this for $x<1000$:

plot(prime_pi,1,1000)+plot(2*x/ln(x),2,1000,color='black')   # start the second plot at x=2 to avoid the singularity of x/ln(x) at x=1
       

Now we'll proceed by induction, in an unusual way - namely, assume it is true for $n$, and prove it is true for $2n$!  This needs a little massaging for odd numbers, but is a legitimate induction method.  

  • So first assume that $\pi(n)<2\frac{n}{\ln(n)}$.  Now what?  
  • We now look at the product of all the primes between $n$ and $2n$, which we write as $$P=\prod_{n<p<2n} p\, .$$  
    • On the one hand, each of these primes $p$ is greater than $n$, and there are $\pi(2n)-\pi(n)$ of them.
    • So $$n^{\pi(2n)-\pi(n)}<P\, .$$  
    • On the other hand, each of these primes is greater than $n$ but they are all in the list of numbers from $n$ to $2n$, so...
    • Their product divides $$\frac{(2n)\cdot (2n-1)\cdot (2n-2)\cdots (n+1)}{n\cdot (n-1)\cdot (n-2)\cdots 1}$$
    • That is to say $$P \mid \frac{(2n)\cdot (2n-1)\cdot (2n-2)\cdots (n+1)}{n\cdot (n-1)\cdot (n-2)\cdots 1}=\frac{(2n)!}{n!n!}\, .$$ 
  • In particular, $$P\leq \frac{(2n)!}{n!n!}\, .$$  
  • But this is the number of ways to choose $n$ things from a collection of $2n$ things...
    • Which is certainly less than the number of ways to pick any old collection out of $2n$ things.
    • Which is $2^{2n}$ (because you either pick it or you don't).  
  • That leads to the estimate $$n^{\pi(2n)-\pi(n)}<P\leq \frac{(2n)!}{n!n!}<2^{2n}\, ;$$ (this chain of inequalities is spot-checked numerically right after the proof sketch)
  • Take $\ln$ of both ends to get $$(\pi(2n)-\pi(n))\ln(n)<2n\ln(2)$$ 
  • Now divide out and isolate to get $$\pi(2n)<\frac{2n\ln(2)}{\ln(n)}+\pi(n)<\frac{2n\ln(2)}{\ln(n)}+2\frac{n}{\ln(n)}=(\ln(2)+1)\frac{2n}{\ln(n)}\, .$$  
  • Then recall that $n>1000$ (since we already verified for lower $n$):
    • Compare $2\ln(n)$ and $(\ln(2)+1)(\ln(2)+\ln(n))$ for $n=1000$ - the first one is bigger.
    • Further, the derivatives are $2/n$ and $(\ln(2)+1)/n$, so the derivative of the first one is bigger too.
    • So we divide through a bit and get $\frac{\ln(2)+1}{\ln(n)}<\frac{2}{\ln(2)+\ln(n)}=\frac{2}{\ln(2n)}$.
  • Now we can put it all together to see that $$\pi(2n)<(\ln(2)+1)\frac{2n}{\ln(n)}<2\frac{2n}{\ln(2n)}\, ,$$ which is exactly what the proposition would predict.
  • To rescue this for $2n+1$, we need another calculus comparison.
    • First, from above we have $$\pi(2n+1)\leq \pi(2n)+1<\frac{2n\ln(2)}{\ln(n)}+\pi(n)+1<\frac{2n\ln(2)}{\ln(n)}+2\frac{n}{\ln(n)}+1\; .$$
    • Since $2\frac{2n+1}{\ln(2n+1)}>\frac{4n}{\ln(2n+1)}$, it will suffice then to show $$(2+2\ln(2))\frac{n}{\ln(n)}+1<\frac{4n}{\ln(2n+1)}\; .$$
    • Since $n>1000$, $$(2+2\ln(2))\frac{n}{\ln(n)}+1<3.387\frac{n}{\ln(n)}+1<3.394\frac{n}{\ln(n)}$$ so it suffices to show $$3.394\frac{n}{\ln(n)}<\frac{4n}{\ln(2n+1)}\; .$$
    • This is the same as $$\frac{3.394}{4}<\frac{\ln(n)}{\ln(2n+1)}$$
    • The right hand side is bigger than the left hand side at $n=1000$.
    • The derivative of the right is positive.
    • So we have the required result that $$\pi(2n+1)<2\frac{2n+1}{\ln(2n+1)}$$
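
Here is the promised spot-check of the central chain of inequalities, $n^{\pi(2n)-\pi(n)}<\frac{(2n)!}{n!n!}<2^{2n}$, for a few values of $n$ (a numerical illustration only, of course):

for n in [10, 100, 1000]:
    lower = n^(prime_pi(2*n) - prime_pi(n))   # n to the power of the number of primes between n and 2n
    middle = binomial(2*n, n)                 # the central binomial coefficient (2n)!/(n!n!)
    upper = 2^(2*n)
    print("n = %s: the chain holds: %s"%(n, lower < middle < upper))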

 

We end with a substantial piece of a real proof in the direction of the Prime Number Theorem.  It is dense, but just like the last few proofs, requires nothing beyond calculus and a willingness to allow a lot of algebraic and integral manipulation for the purposes of estimation.

Functions: First, we'll review the main function. Think of the prime counting function $\pi$ as a so-called `step' function, where every time you hit a new prime you add 1.

@interact
def _(n=100):
    show(plot(prime_pi,1,n))
       


But now, instead of adding 1 each time, let's add $\log(p)$ (that is, the natural logarithm - we'll use that for the rest of this section) each time we hit a prime $p$. Of course, this will get bigger as $p$ gets bigger.

def theta(x):
    return sum(math.log(p) for p in prime_range(1,floor(x)+1))

@interact
def _(n=100):
    show(plot(theta,1,n))
       


We call this Chebyshev's theta function, and we write $$\Theta(x)=\sum_{p\leq x}\log(p)\, .$$ It turns out the Prime Number Theorem implies the following limit (in fact, the two statements are equivalent): $$\lim_{x\to\infty}\frac{\Theta(x)}{x}=1\, .$$ Here is a plot of both ratios, $\pi(x)\log(x)/x$ and $\Theta(x)/x$, along with the constant function 1.

def pnt(n):
    return prime_pi(n)*log(n)/n

def thox(n):
    return theta(n)/n

@interact
def _(end=100000):
    show(plot(1,(1,end),color='black')+plot(pnt,(1,end),color='red')+plot(thox,(1,end)))
       


So we will prove this implication - that if the Prime Number Theorem is true, it is also true that $\Theta(x)/x$ approaches $1$.

An Integral Formula: In order to prove this, we will first need a formula telling us about $\Theta(x)$. It turns out that integrals will help!  (We will check the resulting formula numerically once we have it.)

  • Let $m=\lfloor x\rfloor$, the greatest integer less than or equal to $x$. 
  • Let $a(n)$ be the function such that $a(n)=1$ if $n$ is prime and $a(n)=0$ otherwise; another way to say this is $$a(n)=\pi(n)-\pi(n-1)\; .$$ 
  • Then it should be clear that $$\pi(x)=\sum_{n=1}^{m}a(n)\text{ and }\Theta(x)=\sum_{n=1}^{m}a(n)\log(n)\, .$$
  • By rearranging the sum in different ways (and using $\log(1)=0$), we see that $$\Theta(x)=\sum_{1\leq n\leq x}a(n)\log(n)=\sum_{n=1}^{m}a(n)\log(n)=$$ $$\sum_{n=2}^{m}[\pi(n)-\pi(n-1)]\log(n)=\sum_{n=2}^{m}\pi(n)\log(n)-\sum_{n=1}^{m-1}\pi(n)\log(n+1)$$ 
  • This can be combined with just two leftover terms, the second of which is 0 (since $\pi(1)=0$): $$\sum_{n=2}^{m-1}\pi(n)[\log(n)-\log(n+1)]+\pi(m)\log(m)-\pi(1)\log(2)\, .$$ 
  • Now, $\log(n+1)-\log(n)=\int_{n}^{n+1}\frac{dt}{t}$; using this and a few other sleights of hand of integrals, rearrangement, and that $\pi(x)=\pi(m)$ is constant on $[m,x]$, we get $$-\sum_{n=2}^{m-1}\Big[\pi(n)\int_{n}^{n+1}\frac{dt}{t}\Big]+\pi(m)\log(m)$$ $$=-\sum_{n=2}^{m-1}\Big[\pi(n)\int_{n}^{n+1}\frac{dt}{t}\Big]+\pi(m)\log(m)-\pi(x)\log(x)+\pi(x)\log(x)$$ $$=-\int_{2}^{m}\frac{\pi(t)dt}{t}+\pi(x)\log(x)-\int_{m}^{x}\frac{\pi(t)dt}{t} =\pi(x)\log(x)-\int_{2}^{x}\frac{\pi(t)dt}{t}\, .$$
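
As promised, here is a quick numerical sanity check of this formula, at $x=100.5$, using the theta function defined earlier in this worksheet and Sage's numerical_integral; the two printed numbers should agree up to the (small) error of the numerical integration:

x0 = 100.5
lhs = theta(x0)                     # Theta(x), summed directly over the primes up to x
rhs = prime_pi(x0)*log(x0) - numerical_integral(lambda t: prime_pi(t)/t, 2, x0)[0]
print("%s versus %s"%(lhs, rhs))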

The Final Step: Now we have a formula for $\Theta$ which will allow us to prove something. To recap, we have: $$\frac{\Theta(x)}{x}=\frac{\pi(x)\log(x)}{x}-\frac{\int_{2}^{x}\frac{\pi(t)}{t}dt}{x}\, ,$$ so, given that the Prime Number Theorem says that $\lim_{x\to\infty}$ of the fraction with $\pi(x)$ in it is 1, proving that $\lim_{x\to\infty}$ of $\frac{\Theta(x)}{x}$ is also 1 is equivalent to proving $$\lim_{x\to\infty}\frac{1}{x}\int_{2}^{x}\frac{\pi(t)}{t}dt=0\, .$$ But since the PNT essentially says that in the limit $\frac{\pi(t)}{t}\sim \frac{1}{\log(t)}$, we can write that $$\frac{1}{x}\int_{2}^{x}\frac{\pi(t)}{t}dt\sim \frac{1}{x}\int_{2}^{x}\frac{dt}{\log(t)}\, .$$

Now look at the following graph. An upper sum for the integral of $1/\log(t)$ between 2 and 9 is the area of the two rectangles shown, one with area $\frac{1}{\log(2)}(\sqrt{9}-2)$ and the other with area $\frac{1}{\log(\sqrt{9})}(9-\sqrt{9})$, where of course $\sqrt{9}=3$.

@interact
def _(top=(16,[n^2 for n in [2..10]])):
    f(x)=1/ln(x)
    P=plot(f,1,top+1)
    P+=line([(2,0),(2,f(2)),(math.sqrt(top),f(2)),(math.sqrt(top),0)],rgbcolor='black')
    P+=line([(math.sqrt(top),f(math.sqrt(top))),(top,f(math.sqrt(top))),(top,0)],rgbcolor='black')
    P.show(ymax=2)
       


So in general, the overestimate of $\int_2^x dt/\ln(t)$ will be $$\frac{1}{\log(2)}(\sqrt{x}-2)+\frac{1}{\log(\sqrt{x})}(x-\sqrt{x})$$  and we want the limit as $x\to\infty$ of $\frac{1}{x}$ times that quantity.  This can be simplified a lot with logarithmic identities:

$$\frac{1}{x}\left(\frac{1}{\log(2)}(\sqrt{x}-2)+\frac{1}{\log(\sqrt{x})}(x-\sqrt{x})\right)=\frac{1}{\log(2)x^{1/2}}-\frac{2}{x\log(2)}+\frac{1}{\log(\sqrt{x})}-\frac{1}{\log(\sqrt{x})x^{1/2}}$$ $$=\frac{1}{\log(2)x^{1/2}}-\frac{2}{x\log(2)}+\frac{2}{\log(x)}-\frac{2}{\log(x)x^{1/2}}$$
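
Sage can also confirm symbolically that this last expression goes to $0$ as $x\to\infty$ (a quick check, nothing more):

var('x')
upper_over_x = 1/(log(2)*sqrt(x)) - 2/(x*log(2)) + 2/log(x) - 2/(log(x)*sqrt(x))
print(limit(upper_over_x, x=oo))   # prints 0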

@interact
def _(top=(16,[n^2 for n in [2..10]])):
    f(x)=1/ln(x)
    P=plot(f,1,top+1)
    P+=line([(2,0),(2,f(2)),(math.sqrt(top),f(2)),(math.sqrt(top),0)],rgbcolor='black')
    P+=line([(math.sqrt(top),f(math.sqrt(top))),(top,f(math.sqrt(top))),(top,0)],rgbcolor='black')
    P+=line([(2,0),(2,f(2)),(2+(math.sqrt(top)-2)/top,f(2)),(2+(math.sqrt(top)-2)/top,0)],rgbcolor='red')
    P+=line([(math.sqrt(top),f(math.sqrt(top))),(math.sqrt(top)+(top-math.sqrt(top))/top,f(math.sqrt(top))),(math.sqrt(top)+(top-math.sqrt(top))/top,0)],rgbcolor='red')
    P.show(ymax=2)
       


This clearly goes to zero as $x\to\infty$, and the picture seems to indicate this as well.  Hence we have proved that the limit of $\frac{\Theta(x)}{x}$ is the same as that of $\frac{\pi(x)}{x/\ln(x)}$, which is what we desired!


 

Homework:

  1. Show that $\sigma(n)$ is $O(n^2)$ (compare to the sum of all integers).
  2. Use the formula for the sum of the first $n$ perfect squares (from Transitions/Discrete or Calculus; it was probably in both for you) and the previous exercise to show that the average value of $\sigma(n)$ is Big Oh of $n^2$.  (This can be loosey-goosey.)
  3. Find a formula for the average value of the $u$ and $N$ functions we discussed earlier (up through $n$).
  4. Finish the details of the proof that $\tau$ is $O(\sqrt[3]{x})$.
  5. Find absolute bounds for $\phi(n)$ (formulas in terms of $n$).
  6. Show that $\tau(n)$ is not $O(1)$.  (Hint: that means there is no constant $C$ such that $\tau(n)\leq C$ always.)
  7. Why would it not contradict our theorem that $\frac{1}{n}\sum_{k=1}^n\tau(k)=O(\ln(n))$ to say that $\tau(n)$ is not $O(\ln(n))$?
  8. Show that $\tau(n)$ is not $O(\ln(n))$.  (Hint: look at numbers of the form $6^k$, and compare $\tau$ of these to any given multiple of the natural logarithm using calculus.)
  9. Come up with two functions $f(x)$ and $g(x)$ that both go to infinity as $x\to\infty$, such that $f(x)$ is always ahead of $g(x)$, but $f$ and $g$ are asymptotic.
  10. Come up with two functions $f(x)$ and $g(x)$ that both go to infinity as $x\to\infty$, but that switch the lead infinitely often and $f$ and $g$ are asymptotic.
  11. Show that the two limits in the prime number theorem are really equivalent.  That is, show that if $\lim \pi(x)/Li(x)=1$, then the other limit is 1, and vice versa.
  12. Find an arbitrarily long sequence of consecutive composite numbers.  (Hint: factorials.)
  13. Come up with two functions $f(x)$ and $g(x)$ such that $f(x)$ is $O(g(x))$ and $g(x)$ is $O(f(x))$, but are not asymptotic.
  14. Use the piece of Chebyshev's theorem that we proved to show that $\lim_{x\to\infty}\pi(x)/x=0$.
  15. Verify that if $2\ln(n)$ is greater than $(\ln(2)+1)(\ln(2)+\ln(n))$ at $n=1000$ and its derivative is bigger too, then $\frac{\ln(2)+1}{\ln(n)}<\frac{2}{\ln(2)+\ln(n)}=\frac{2}{\ln(2n)}$ for $n>1000$.
  16. Verify that the derivative of $\frac{\ln(n)}{\ln(2n+1)}$ is positive for $n>1000$.