I want to close out this section by covering a variant we've actually already seen a few times. But it deserves to be foregrounded better.
Huh. So what variant is that?
Well, first, let's start with our baseline.
$$f_k(n,j)=\rec{\frac{1}{k}-f_{k+1}(\frac{n}{j},2) + f_k (n, j+1)}$$
Okay.
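By the way, if you'd like to experiment alongside me, here is a minimal Python sketch of that baseline. I'm assuming the boxed recursion bottoms out at $0$ once $j$ exceeds $n$, and I've unrolled the recursion over $j$ into a plain sum; `f(k, n)` computes $f_k(n,2)$, and the helper checks it against a direct count over prime powers.

```python
from functools import lru_cache

# Sketch of the baseline recursion. Assumption: the recursion returns 0
# once j exceeds n, and only floor(n) ever matters, so n stays an integer.
# The recursion over j is unrolled into a sum; f(k, n) computes f_k(n, 2).
@lru_cache(maxsize=None)
def f(k, n):
    return sum(1 / k - f(k + 1, n // j) for j in range(2, n + 1))

def prime_power_sum(n):
    """Directly sum 1/k over all prime powers p**k <= n."""
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    total = 0.0
    for p in filter(is_prime, range(2, n + 1)):
        pk, k = p, 1
        while pk <= n:
            total += 1 / k
            pk, k = pk * p, k + 1
    return total

print(f(1, 100))             # ~28.5333...
print(prime_power_sum(100))  # the same value, computed the slow, honest way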
Now, look back at our prime counting function definition. What would be the consequence of rewriting it like this?
$$f_k(n,j)=\rec{\rr{j^0} \cdot (\frac{1}{k}-f_{k+1}(\frac{n}{j},2)) + f_k (n, j+1)}$$Well, nothing would change, of course. Because any non-zero number raised to the $0$th power is $1$.
Good. Now let's launch our first experiment. Let's replace the exponent $\rr{0}$ with a different value, namely $1$.
$$g_k(n,j)=\rec{\rr{j^1} \cdot (\frac{1}{k}-g_{k+1}(\frac{n}{j},2)) + g_k (n, j+1)}$$What do you suppose the impact of that will be?
I'm not sure.
Well, let's graph that definition then.
Okay. Now it appears the function is still counting weighted prime powers, but each term is scaled by the prime power in question. Actually, now that I think about it: if you set the higher prime-power terms aside, the function is summing the primes up to $n$.
That's exactly right. In fact,
$$ g_1(n,2)=\sum_{p^k \le n} \frac{1}{k} \rr{ \cdot p^k } $$As usual, you can see how this works in the following animation.
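And if you'd rather verify that identity numerically than trust the animation, the same sketch works with the scaling factor in place. Here `g(k, n, z)` computes $g_k(n,2)$ under the same assumed base case as before, with $z=1$ for this first experiment:

```python
from functools import lru_cache

# Same sketch as before, now with the scaling factor j**z in front.
@lru_cache(maxsize=None)
def g(k, n, z):
    return sum(j ** z * (1 / k - g(k + 1, n // j, z)) for j in range(2, n + 1))

def weighted_sum(n, z):
    """Directly sum (1/k) * (p**k)**z over all prime powers p**k <= n."""
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    total = 0.0
    for p in filter(is_prime, range(2, n + 1)):
        pk, k = p, 1
        while pk <= n:
            total += pk ** z / k
            pk, k = pk * p, k + 1
    return total

print(g(1, 100, 1))         # the recursion...
print(weighted_sum(100, 1)) # ...matches the direct sum of p**k / k
```

The agreement is exact at every finite $n$, not just in the limit.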
That's interesting. How far can this idea be extended?
Well, let's try another experiment. Now let's make the scaling parameter be $\rr{j^{-1}}$. So we have
$$g_k(n,j)=\rec{ \rr{j^{-1}} \cdot (\frac{1}{k}-g_{k+1}(\frac{n}{j},2)) + g_k (n, j+1)}$$How do you suppose this alteration will affect things?
Well, after the last example, I imagine this time the function will grow much more slowly.
That is indeed reasonable speculation. In fact, the graph looks like this:
And growing much more slowly is putting it very, very, unbelievably mildly.
I see that at, say, $x=1000$, the function equals about $2.5$, and by $x=100000$, it's only up to about $3.0$. Is this function converging to some particular value here?
Well you might ask.
This is actually a perfect example of a situation where eye-balling a function and testing a few seemingly large numbers can be misleading. This function is actually right on a mathematical knife edge, so to speak. As a matter of fact, the case of $\rr{j^{-(1+\epsilon)}}$ does converge, if $\epsilon$ is any positive real number, no matter how small.
But this case doesn't?
No. In fact, this function is approximately $\gamma + \log \log x$, where $\gamma \approx 0.5772156649015329$ is the Euler-Mascheroni constant.
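Here is a quick numerical look at that knife edge, using the same sketch as before. The last column uses $z=-1.25$, one arbitrary sample exponent past the edge, so you can watch it level off while $z=-1$ keeps pace with $\gamma + \log \log n$:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def g(k, n, z):
    return sum(j ** z * (1 / k - g(k + 1, n // j, z)) for j in range(2, n + 1))

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# Columns: n, the z = -1 truncation, gamma + log log n, the z = -1.25 truncation.
for n in (10 ** 3, 10 ** 4, 10 ** 5):
    print(n, g(1, n, -1), GAMMA + math.log(math.log(n)), g(1, n, -1.25))
```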
And you're suggesting that that grows slowly.
Well, let me try to provide some perspective.
$\log \log 1000 = 1.932...$.
$\log \log 1000000000 = 3.031...$ .
And $\log \log 10^{1000} = 7.742...$.
Keep in mind, that last number is a $1$ followed by a thousand zeros, which really is an incomprehensibly large number.
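If you want to check those values yourself, note that $10^{1000}$ overflows ordinary floating point, but $\log \log 10^{e} = \log(e \cdot \log 10)$ sidesteps the huge number entirely:

```python
import math

# log log (10**e) == log(e * log 10), so we never form the huge number itself.
for e in (3, 9, 1000):
    print(f"log log 10**{e} = {math.log(e * math.log(10)):.4f}")
```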
Yeah. That's mind-boggling.
Indeed.
At any rate, all of this is to say that
$$ g_1(n,2)=\sum_{p^k \le n} \frac{1}{k} \rr{ \cdot \frac{1}{p^k} } $$And the corresponding animation looks roughly like you'd expect.
Okay.
Let me show one last example. I mentioned that there are powers where the function converges. Let's look at one of those quickly.
Sure.
So now let's make the scaling parameter be $\rr{j^{-2}}$. If we do that, we have $$g_k(n,j)=\rec{ \rr{j^{-2}} \cdot (\frac{1}{k}-g_{k+1}(\frac{n}{j},2)) + g_k (n, j+1)}$$And the graph looks like this:
And this does look like it's converging, to some number near $0.498...$
You would find, if you were careful with calculations, that the function was converging to $0.4977003024707452...$, which happens to be $\displaystyle \log \frac{\pi^2}{6}$.
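You can watch the truncated sum close in on that limit with the same sketch as before:

```python
import math
from functools import lru_cache

# Same sketch as before; z = -2 this time.
@lru_cache(maxsize=None)
def g(k, n, z):
    return sum(j ** z * (1 / k - g(k + 1, n // j, z)) for j in range(2, n + 1))

print(g(1, 10 ** 5, -2))           # ~0.49770, still creeping upward slowly
print(math.log(math.pi ** 2 / 6))  # 0.4977003024707452...
```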
Really? That strikes me as an awfully strange value for this function to converge to.
Well, if it does strike you that way, you're in luck, because an enormous amount of popular math exposition exists covering this specific identity. It is, in fact, an identity nearly always connected to the genius of Leonhard Euler. I won't explain it here, but any good reference on the Basel problem should get you started.
Anyway, this means that
$$ g_1(n,2)=\sum_{p^k \le n} \frac{1}{k} \rr{ \cdot \frac{1}{p^{2 k}} } $$Okay. I suppose the pattern is fairly clear here.
Good. Let me close out with the corresponding animation, which behaves in a predictable fashion.
It should be. All this leads to what should by now be a pretty obvious generalization.
$$g_k(n,j)=\rec{\rr{j^z} \cdot (\frac{1}{k}-g_{k+1}(\frac{n}{j},2)) + g_k (n, j+1)}$$Okay, I follow that.
And that function then equals
$$ g_1(n,2)=\sum_{p^k \le n} \frac{1}{k} \rr{ \cdot p^{k z} } $$Right, right.
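And here is the general pattern checked numerically with an arbitrary sample exponent, $z=-0.7$; at any finite $n$ the two sides are identically equal, so they should agree to floating-point precision:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def g(k, n, z):
    return sum(j ** z * (1 / k - g(k + 1, n // j, z)) for j in range(2, n + 1))

def weighted_sum(n, z):
    """Directly sum (1/k) * p**(k*z) over all prime powers p**k <= n."""
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    total = 0.0
    for p in filter(is_prime, range(2, n + 1)):
        pk, k = p, 1
        while pk <= n:
            total += pk ** z / k
            pk, k = pk * p, k + 1
    return total

z = -0.7  # an arbitrary sample exponent
print(g(1, 500, z), weighted_sum(500, z))
```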
So that works even if $z$ is some general complex number, you're saying?
It does indeed. And actually, it's significant that it does. Lurking behind the scenes in this statement is one of the most notorious, perplexing problems in all of math.
How so?
Well, this is a bit tricky to summarize. In our discussion just now, where did $\displaystyle \lim_{n \to \infty} g_1(n,2)$ converge?
I think we saw that $z$ had to be less than $-1$.
Right. Well, truthfully, it's a touch more complicated than that once we introduce complex numbers. If we do, $\displaystyle \lim_{n \to \infty} g_1(n,2)$ converges as long as the real part of $z$ is less than $-1$.
Okay, I'll go along with that. And so I guess the limit doesn't make sense otherwise, right?
Well, if you've never had exposure to complex analysis, you might be surprised to learn that there are well-known techniques to assign meaningful values to the limit even when the real part of $z \ge -1$. But this does broach topics in complex analysis, so I'll just hand-wave past them here. Any good text covering analytic continuation will get at the heart of the idea. This is a big, important, rich idea.
Oh, huh.
Right; the ideas do get more challenging. Anyway, if you do analytically continue $\displaystyle \lim_{n \to \infty} g_1(n,2)$, you will find that, for certain special values of $z$, that limit is $-\infty$.
And what's so special about that, exactly?
Well, the values of $z$ that have a limit of $-\infty$ are, more or less, special amplitude/frequency pairs. You can think of them that way. And importantly, they are amplitude/frequency pairs that can be used to actually describe the distribution of prime numbers.
Well, given that we are looking at sums of the primes, I guess that shouldn't catch us off guard, exactly. Okay, that sounds fascinating enough, but where does this notorious problem show up?
Suppose you look at the range where the real part of $z$ is between $-1$ and $0$.
Okay.
If you do that, you'll quickly find a limitless number of spikes that go to $-\infty$. And you will notice they all seem to have a real part of $-\frac{1}{2}$.
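If you want coordinates for a few of those spikes, here is a sketch using the third-party mpmath library; its `zetazero` function returns the spike locations in the $s=-z$ variable, which is the zeta-function connection from our first discussion that I'll reiterate below:

```python
# Requires the third-party mpmath library (pip install mpmath).
# zetazero(n) returns the n-th nontrivial zero of the Riemann zeta
# function, which is the n-th spike location in the s = -z variable.
from mpmath import zetazero

for n in range(1, 6):
    s = zetazero(n)
    print(n, s, "-> z =", -s)  # real part of z is -1/2 every time
```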
Well, do they just seem to behave that way, or do they actually behave that way?
No one knows. That's at the heart of the Riemann Hypothesis.
Ah.
Right. We'll actually see one reason that hypothesis is useful in the very next discussion, as we turn to questions about approximation, so we aren't done with it.
Anyway, we've already covered the connection to the Riemann zeta function starting in our first discussion, but to reiterate it... If you unroll $g_1(n,2)$ from above and apply a change of variable $s=-z$, then you have $$g_1(n,2)=\sum_{ {j_1}=2}^n { {j_1}^{-s}}- \frac{1}{2} \sum_{ {j_1}=2}^n \sum_{ {j_2}=2}^{\lfloor \frac{n}{ {j_1}} \rfloor} ({j_1} \cdot {j_2})^{-s}+ \frac{1}{3} \sum_{ {j_1}=2}^n \sum_{ {j_2}=2}^{\lfloor \frac{n}{ {j_1}} \rfloor} \sum_{ {j_3}=2}^{\lfloor \frac{n}{ {j_1} \cdot {j_2}} \rfloor} ( {j_1} \cdot {j_2} \cdot {j_3})^{-s}- ...$$
If the real part of $s$ is greater than $1$, then a limit exists for $g_1(n,2)$ as $n$ goes to infinity. And the limit of the entire expression ends up being $$\log \zeta(s) = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} (\zeta(s)-1)^k$$
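As a quick sanity check of that last series, take $s=2$, where $\zeta(2)=\pi^2/6$ is available in closed form and $|\zeta(2)-1|<1$ guarantees convergence:

```python
import math

# Sanity check of the series at s = 2, where zeta(2) = pi**2 / 6.
zeta_2 = math.pi ** 2 / 6
series = sum((-1) ** (k + 1) / k * (zeta_2 - 1) ** k for k in range(1, 60))
print(series)            # ~0.4977003024707452
print(math.log(zeta_2))  # log(pi**2 / 6), matching the z = -2 limit earlier
```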
Just as before; this isn't anything new.
Exactly. I'll return to those special amplitude/frequency pairs in a later discussion, but that will have to wait for a second series of discussions. For now, this is a good point to wind down, I think.