Everyone talks about the Riemann zeta function like it's tricky to construct and requires thinking about the sophisticated machinery of meromorphic continuation. It's not and it doesn't. It's much simpler than that.
Let ζ(-p) = 1^p + 2^p + 3^p + … be the Riemann zeta function. This series converges only when the real part of p is < -1. How do we make sense of it for general powers p?
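To make the convergence claim concrete, here's a quick numerical check (a sketch of my own, not part of the thread; stdlib only). For p = -2 the partial sums settle near π²/6 ≈ 1.6449 (the Basel problem), while at the boundary case p = -1 (the harmonic series) they just keep growing.

```python
import math

def partial_sum(p, n_terms):
    """Partial sum 1^p + 2^p + ... + n_terms^p."""
    return sum(n ** p for n in range(1, n_terms + 1))

# Real part of p below -1: the series converges (here, toward pi^2/6).
print(partial_sum(-2, 100_000), math.pi ** 2 / 6)

# p = -1: the harmonic series diverges (partial sums grow like log n).
print(partial_sum(-1, 100_000))
```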
Well, let's consider a more general problem. Things are often simpler in greater abstraction, at least to my mindset. Suppose you want to make sense of f(x) + f(x + 1) + f(x + 2) + … in general, even though this may not converge. What can you do?
Let's give this a nice name as a function: S(x) = f(x) + f(x + 1) + f(x + 2) + ….

And note that S(a) - S(b) should be [f(a) - f(b)] + [f(a + 1) - f(b + 1)] + [f(a + 2) - f(b + 2)] + …, which may converge even if neither S(a) nor S(b) converge in themselves.
Put another way: the derivative series S'(x) = f'(x) + f'(x + 1) + f'(x + 2) + … may well converge for all x, even if S(x) itself doesn't.
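A concrete instance of this (my own illustration, not the thread's): take f(x) = 1/x. The series 1/x + 1/(x + 1) + … diverges for every x, but its term-by-term derivative -1/x² - 1/(x + 1)² - … converges absolutely.

```python
def S_partial(x, n_terms):
    """Partial sum of f(x) + f(x+1) + ... for f(t) = 1/t (diverges)."""
    return sum(1.0 / (x + k) for k in range(n_terms))

def S_prime_partial(x, n_terms):
    """Partial sum of the term-by-term derivative, f'(t) = -1/t^2 (converges)."""
    return sum(-1.0 / (x + k) ** 2 for k in range(n_terms))

# Doubling the number of terms barely moves the derivative series...
print(S_prime_partial(1.0, 10_000), S_prime_partial(1.0, 20_000))
# ...but the original series just keeps growing.
print(S_partial(1.0, 10_000), S_partial(1.0, 20_000))
```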

This derivative tells us a lot! Integrating it recovers everything about what S should be, except for an unknown +C term.
In the same way, we could take 2nd or higher derivatives. So even if the first derivative doesn't make it converge, so long as SOME higher derivative converges, we can recover f(x) + f(x + 1) + f(x + 2) + … by repeated integration, once we know which +Cs to pick at each step.
In other words, so long as some number of differentiations makes it converge, we know what f(x) + f(x + 1) + f(x + 2) + … should be, up to a polynomial term.

But how do we figure out what the +Cs in the integrations should be taken to be?
Well, let's make one more observation about the average values of these series: Suppose g is the n-th derivative of f. What should the average value of g(x) + g(x + 1) + g(x + 2) + … be on an arbitrary unit interval [b, b + 1]? I.e., what should the integral of this series from b to b + 1 be?
Integrating term by term, this would be [G(b + 1) - G(b)] + [G(b + 2) - G(b + 1)] + [G(b + 3) - G(b + 2)] + …, where G is the (n - 1)-th derivative of f. And this last series telescopes to just -G(b).
[Technically, it sums to -G(b) + G(b + ∞) in a limiting sense, but if it were true that G(x) + G(x + 1) + G(x + 2) + … converged, then it would have to be true that G approaches 0 in the limit anyway.]
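A quick numerical check of this telescoping (my own sketch; the choice g = -1/t², whose antiderivative G = 1/t vanishes at infinity, is mine):

```python
def G(t):
    return 1.0 / t           # an antiderivative that vanishes at infinity

def g(t):
    return -1.0 / t ** 2     # g = G'

b = 2.0
# Integral over [b, b+1] of the series g(x) + g(x+1) + ..., term by term:
total = sum(G(b + k + 1) - G(b + k) for k in range(100_000))
print(total, -G(b))  # both should be near -0.5
```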
So there you have it. To figure out what f(x) + f(x + 1) + f(x + 2) + … should be, differentiate enough times that you get a straightforward convergent series, then integrate back up,
choosing the +C at each stage so that each n-th derivative series is given an average value on any unit interval [b, b + 1] of -(the (n - 1)-th derivative of f at b).
One subtlety is that, in the last integration to get f(x) + f(x + 1) + f(x + 2) + …, this will require us to evaluate the -1-th derivative of f; that is, we can only carry out this technique after we've already settled on a choice of antiderivative for f itself. But that's fine.
Our technique will give the standard answer for f(x) + f(x + 1) + f(x + 2) + … whenever this series converges and the antiderivative F of f is chosen with asymptotic value F(+∞) = 0.
More generally, let us say a function is "pseudopolynomial" if it or some higher derivative of it approaches 0 as its inputs get large.
What our above reasoning implicitly shows is that our technique for defining F'(x) + F'(x + 1) + F'(x + 2) + … gives the unique pseudopolynomial function whose average value on each [b, b + 1] is -F(b) [with this existing if and only if F itself is pseudopolynomial].
Coming back now to the Riemann zeta function, we originally wanted to evaluate 1^p + 2^p + 3^p + ….

So let f(x) = x^p.
Now we need to pick an antiderivative for f. Specifically, an antiderivative which approaches 0 at +∞ in the cases where f(x) + f(x + 1) + f(x + 2) + … is already convergent, and which extends this in a clean pattern more generally. The natural choice is x^(p + 1)/(p + 1).
Having done so, we can use the technique above. Sufficiently many differentiations will indeed make this into something convergent, since each differentiation brings the power down by 1 and we get convergence once the real component of the power is < -1.
In this way, we can evaluate x^p + (x + 1)^p + (x + 2)^p + … for general p [this is called the Hurwitz zeta function, ζ(-p, x)] and by using the specific value x = 1, we get the Riemann zeta function.
Well, there's one hitch: when p = -1, our antiderivative x^(p + 1)/(p + 1) involves a division by zero. And this is why the zeta function has a pole at the power -1 (i.e., at the harmonic series).

At every other power, everything works out smoothly, to some finite number.
That's it. It's that simple. This defines the zeta function on arbitrary inputs. No need to think about meromorphic continuation. Just differentiation till you hit convergence, and integration back up with the right choice of constants to get the right average values.
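The whole recipe can be checked end to end with exact rational arithmetic (a sketch of my own, not code from the thread; helper names like unit_average are mine). For a nonnegative integer power p, enough differentiations make the series identically 0, and integrating back up with the prescribed average-value conditions yields a polynomial; evaluating it at x = 1 gives ζ(-p), e.g. the famous ζ(-1) = -1/12.

```python
from fractions import Fraction
from math import comb, factorial

# Polynomials as coefficient lists [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...

def antider(poly):
    """Antiderivative with constant term 0."""
    return [Fraction(0)] + [Fraction(c) / (i + 1) for i, c in enumerate(poly)]

def shift(poly):
    """Coefficients of poly(x + 1), via the binomial theorem."""
    out = [Fraction(0)] * len(poly)
    for i, c in enumerate(poly):
        for j in range(i + 1):
            out[j] += c * comb(i, j)
    return out

def unit_average(poly):
    """Coefficients (in b) of the integral of poly from b to b + 1."""
    A = antider(poly)
    return [s - a for s, a in zip(shift(A), A)]

def zeta_neg(p):
    """zeta(-p) for integer p >= 0: differentiate until the series vanishes
    identically, then integrate back up with the right +C at each stage."""
    def f_deriv(m):
        # m-th derivative of x^p, with m = -1 meaning the chosen
        # antiderivative F(x) = x^(p+1)/(p+1)
        if m == -1:
            return [Fraction(0)] * (p + 1) + [Fraction(1, p + 1)]
        return [Fraction(0)] * (p - m) + [Fraction(factorial(p), factorial(p - m))]

    S = [Fraction(0)]  # the series of (p+2)-th derivatives: identically zero
    for m in range(p + 1, -1, -1):
        P = antider(S)
        # Choose +C so the average of S_m on [b, b+1] is -f^(m-1)(b):
        target = [-c for c in f_deriv(m - 1)]
        got = unit_average(P)
        size = max(len(target), len(got))
        diff = [(target[i] if i < len(target) else Fraction(0)) -
                (got[i] if i < len(got) else Fraction(0)) for i in range(size)]
        assert all(c == 0 for c in diff[1:]), "mismatch must be a pure constant"
        S = [P[0] + diff[0]] + P[1:]
    return sum(S)  # evaluate the polynomial at x = 1

for p in range(4):
    # zeta(0) = -1/2, zeta(-1) = -1/12, zeta(-2) = 0, zeta(-3) = 1/120
    print(f"zeta({-p}) =", zeta_neg(p))
```

Note the starting point: we differentiate p + 2 times, not just until the series converges, so that the previous derivative of f also vanishes identically and the average-value conditions stay consistent all the way up.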
We can use this technique to generalize other recurrences too.
For example, the question is often posed of how to generalize the factorial to non-integer arguments.

Well, the defining property of the factorial is that x! = x * (x - 1)!. In other words, log(x!) = log(x) + log((x - 1)!).
In other words, the average value of the derivative of the logarithm of the factorial on any unit interval [x - 1, x] should be log(x).
Note that, with two differentiations, log(x) turns into -1/x^2, which does indeed have the property that -1/x^2 - 1/(x + 1)^2 - 1/(x + 2)^2 - … converges. [Note also that this is just another instance of the Hurwitz zeta function!]
So our technique will find the unique pseudopolynomial fit for the derivative of the log of the factorial. Given this and the starting value 0! = 1, we've defined the factorial itself, now generalized to arbitrary inputs. And we even see its relation to the zeta functions!
For what it's worth, this generalized factorial is what people often call the "gamma function", but this different name (and also the slight re-indexing people use for it) is a dumb hidebound thing that you need pay no attention to. Just call it the factorial like usual.
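For a quick sanity check (my own sketch): Python's standard library exposes this generalized factorial, with exactly the re-indexing just mentioned, as math.gamma, so x! is math.gamma(x + 1).

```python
import math

def fac(x):
    """The generalized factorial: x! = gamma(x + 1)."""
    return math.gamma(x + 1)

# The defining recurrence x! = x * (x - 1)! holds at non-integer arguments...
print(fac(0.5), 0.5 * fac(-0.5))

# ...it matches the usual factorial on integers...
print(fac(5))  # 120.0

# ...and the classic value (1/2)! = sqrt(pi)/2 comes out.
print(fac(0.5), math.sqrt(math.pi) / 2)
```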
That's it from me for now. Twitter was the wrong medium for this, and in particular, my Twitter audience was the wrong audience for this, but I decided to write it up anyway.