Given some function on some interval, can we approximate it by a sum of sines and cosines? Actually, since computing with complex exponentials is easier, we will tackle a more general problem: let $f$ be complex valued, and approximate it by the functions $A e^{i(2\pi\nu x + \phi)}$, where $A$ and $\phi$ are real numbers. We can always make a finite interval into $[0,1]$ by a change of variable, so we deal only with functions on $[0,1]$. When we have to deal with extensions of this interval, we assume that the functions are periodic with period one -- we tile the whole real line with copies of the unit interval.
What amplitude and phase bring $A e^{i(2\pi\nu x + \phi)}$ closest to $f$ in some sense? The obvious distance between two functions $f$ and $g$ at some point $x$ is $|f(x) - g(x)|$. If we then take this difference at all points and add it up, we get an integral
$$\int_0^1 \left| f(x) - g(x) \right| dx.$$
Applying this to the question at hand, we get
$$\int_0^1 \left| f(x) - A e^{i(2\pi\nu x + \phi)} \right| dx,$$
and to minimize with respect to $A$ and $\phi$ we take the derivatives $\partial/\partial A$ and $\partial/\partial\phi$. It turns out to be equivalent, and significantly less work, if we take instead the square of the distance:
$$\int_0^1 \left| f(x) - A e^{i(2\pi\nu x + \phi)} \right|^2 dx.$$
We find the extrema from:
$$\frac{\partial}{\partial A} \int_0^1 \left| f(x) - A e^{i(2\pi\nu x + \phi)} \right|^2 dx = 0, \qquad \frac{\partial}{\partial\phi} \int_0^1 \left| f(x) - A e^{i(2\pi\nu x + \phi)} \right|^2 dx = 0,$$
which together yield
$$A e^{i\phi} = \int_0^1 f(x)\, e^{-2\pi i\nu x}\,dx.$$
We call $A e^{i\phi}$ the Fourier component of $f$ at frequency $\nu$, and usually denote it $\hat f(\nu)$.
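As a quick numerical sanity check of this formula, we can approximate the defining integral by a Riemann sum; the test function and grid size below are arbitrary illustrative choices, not anything from the text.

```python
import numpy as np

# Riemann-sum approximation of f_hat(nu) = int_0^1 f(x) e^{-2 pi i nu x} dx.
# The test function f and the grid size n are illustrative choices.
def fourier_coefficient(f, nu, n=4096):
    x = np.arange(n) / n
    return np.mean(f(x) * np.exp(-2j * np.pi * nu * x))

# Build f out of two known modes, so its Fourier components can be read off.
f = lambda x: 3.0 * np.exp(2j * np.pi * 2 * x) + 0.5 * np.exp(-2j * np.pi * 5 * x)

c2 = fourier_coefficient(f, 2)    # amplitude of the nu = 2 mode: 3
cm5 = fourier_coefficient(f, -5)  # amplitude of the nu = -5 mode: 0.5
c1 = fourier_coefficient(f, 1)    # no nu = 1 mode present: 0
print(c2, cm5, c1)
```

Because the sample points divide $[0,1]$ evenly, the Riemann sum of each complex exponential over whole periods is exactly zero, and the recovered amplitudes are correct to floating-point accuracy.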
It is not at all obvious that we can actually reconstruct the function from these. It's possible that two different approximating functions might approximate the same feature, or that none of our functions will catch some features of the function. To resolve the first question, consider the overlap of two approximating functions of different frequencies $\nu_1$ and $\nu_2$, for $\nu_1 \neq \nu_2$:
$$\int_0^1 e^{2\pi i\nu_1 x}\, \overline{e^{2\pi i\nu_2 x}}\,dx = \int_0^1 e^{2\pi i(\nu_1 - \nu_2)x}\,dx = \frac{e^{2\pi i(\nu_1 - \nu_2)} - 1}{2\pi i(\nu_1 - \nu_2)},$$
which equals 0 iff $\nu_1 - \nu_2$ is an integer. To avoid overlap, we need to work only with integer-separated frequencies.
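A short numerical check of the overlap integral; the particular frequencies below are illustrative assumptions.

```python
import numpy as np

# Overlap integral int_0^1 e^{2 pi i (nu1 - nu2) x} dx via the midpoint rule.
# The frequencies are illustrative; only their difference matters.
def overlap(nu1, nu2, n=100000):
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(2j * np.pi * (nu1 - nu2) * x))

o_int = overlap(3, 1)     # integer separation: vanishes
o_frac = overlap(3, 1.5)  # non-integer separation: |value| = 2/(3 pi)
print(abs(o_int), abs(o_frac))
```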
The simplest choice is to choose the frequencies as the integers themselves (or $1/L$ times the integers if our function is defined on $[0,L]$ instead of $[0,1]$). There are two reasons for this: first, trying to make the transition later to infinite domains will otherwise be a nightmare; and second, Fourier tools are ubiquitously used to analyze transport equations (cousins to Laplace's equation $\nabla^2 u = 0$) and data believed to behave according to such equations. For such an equation in one dimension ($u_{xx} = 0$), if we take a constant solution $u = c$, we would like it to have a simple Fourier series, namely one term. Unless we take the integers for our frequencies, the constant function (given by $\nu = 0$) won't be among our approximating functions.
This becomes clearer if considered in this way: the natural "home" of sine, cosine, and the complex exponential is not a section of the real line, but a circle. If we took non-integer frequencies, we would enter precisely the same realm as non-integer powers in complex analysis: for rational frequencies, things will eventually even out, but it will take some work. For irrational frequencies, we wind around forever without closing up.
Next, do the Fourier frequencies catch all the features of the function? The short answer is no, but they can catch enough. We need to introduce a more abstract viewpoint to make sense of this.
Closeness is a geometric idea, so in order to discuss closeness of functions, we need to develop some geometric structure. The proper setting turns out to be a vector space.
Consider $N$ point particles, each of mass $m$, connected by springs. Let $x_i$ represent the distance between the $i$th and the $(i+1)$th mass. Then we can write down a vector $\mathbf{x} = (x_1, x_2, \ldots)$ for which the normal vector addition and scalar multiplication have a sensible physical meaning.
We can define several sensible norms on this system: the total displacement from leftmost to rightmost particle,
$$\|\mathbf{x}\|_1 = \sum_i |x_i|,$$
or the square root of the potential energy of the system (up to a constant factor),
$$\|\mathbf{x}\|_2 = \left( \sum_i |x_i|^2 \right)^{1/2}.$$
Here we take the square root so that it satisfies $\|a\mathbf{x}\| = |a|\,\|\mathbf{x}\|$ for a scalar $a$. We could keep taking powers:
$$\|\mathbf{x}\|_p = \left( \sum_i |x_i|^p \right)^{1/p}$$
for $p$ some integer. Norms with very large $p$ are rare in practice, except for $p = \infty$, which we define as
$$\|\mathbf{x}\|_\infty = \max_i |x_i|.$$
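These norms are easy to compute directly; the displacement vector below is an arbitrary toy example.

```python
import numpy as np

# The p-norms above for a toy displacement vector (values are arbitrary).
x = np.array([1.0, -2.0, 3.0])

def p_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

total = p_norm(x, 1)          # total displacement: 6
energy_root = p_norm(x, 2)    # square root of the (unit-spring) energy
biggest = np.max(np.abs(x))   # the p -> infinity norm
print(total, energy_root, biggest, p_norm(x, 50))  # p = 50 is already close to the max
```

The last line illustrates why $\|\cdot\|_\infty$ is the natural limit: as $p$ grows, the largest component dominates the sum.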
The inner product also has a sensible interpretation:
$$\langle \mathbf{x}, \mathbf{y} \rangle = \sum_i x_i \bar y_i$$
represents the overlap of the two vectors. This should be obvious for the inner product of the vector with the standard basis vectors $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$, $\ldots$. It is perhaps not so obvious if we take vectors other than the standard basis vectors, but we can always make a change of basis to reduce one of our vectors to such a form, and the interpretation remains valid.
Now take the limit as $N \to \infty$. After some hemming and hawing, our vector becomes a function. At the moment we say nothing about smoothness. In all our definitions above, the sums become integrals:
$$\|f\|_p = \left( \int_0^1 |f(x)|^p\,dx \right)^{1/p}, \qquad \langle f, g \rangle = \int_0^1 f(x)\, \overline{g(x)}\,dx.$$
We recognize $\|f - g\|_2^2$ as the measure we used in the introduction to find our approximation.
As a first criterion for what we must have to make the Fourier series work, consider that we need every term in the integral representation of $\hat f$ to be finite. Else we cannot pull the derivative through the integral. Recall that
$$\hat f(\nu) = \int_0^1 f(x)\, e^{-2\pi i\nu x}\,dx,$$
so the criterion is
$$\int_0^1 |f(x)|\,dx < \infty,$$
since $\left| e^{-2\pi i\nu x} \right| = 1$ for all real $\nu$ and $x$. We call $L^p$ the spaces of functions with finite $\|\cdot\|_p$ ($L$ stands for Lebesgue integrable). On finite domains, all we need is $L^2$, since by the Schwarz inequality
$$\int_0^1 |f| \cdot 1\,dx \le \|f\|_2\, \|1\|_2 = \|f\|_2,$$
that is, $L^2 \subseteq L^1$.
Since our Fourier transform is a change of basis, it ought to preserve the "length" of our vector. It turns out that
$$\sum_{\nu=-\infty}^{\infty} \left| \hat f(\nu) \right|^2 \le \|f\|_2^2,$$
and the equality holds when $f \in L^2$ (called Bessel's inequality and Parseval's identity, respectively). The proof is unrevealing, and can be found in most standard analysis textbooks.
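For a trigonometric polynomial the identity can be checked numerically; the discrete Fourier transform stands in for the change of basis here, and the particular function is an illustrative choice.

```python
import numpy as np

# Parseval check for a trigonometric polynomial: the sum of squared Fourier
# components equals the integral of |f|^2. The function is an arbitrary choice.
n = 1024
x = np.arange(n) / n
f = 2.0 * np.exp(2j * np.pi * x) + 1j * np.exp(-2j * np.pi * 3 * x)

coeffs = np.fft.fft(f) / n          # discrete stand-in for f_hat(nu)
sum_sq = np.sum(np.abs(coeffs) ** 2)
norm_sq = np.mean(np.abs(f) ** 2)   # Riemann sum for int_0^1 |f|^2 dx
print(sum_sq, norm_sq)              # both |2|^2 + |i|^2 = 5
```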
It seems unlikely that an approximation by smooth functions like complex exponentials can reproduce points and jags in the original function. As it turns out, it can, if the points aren't too pointy. How do we measure "pointiness?"
Consider the family of functions $g_\alpha(x) = |x|^\alpha$ for $0 < \alpha \le 1$. The first few of these have an ever-sharper point at the origin as $\alpha$ decreases.
At $x = 0$, these functions are obviously not differentiable; that is,
$$\lim_{h\to 0} \frac{g_\alpha(h) - g_\alpha(0)}{h}$$
does not exist. However, take
$$\lim_{h\to 0} \frac{g_\alpha(x+h) - g_\alpha(x)}{|h|^\beta},$$
where $0 < \beta \le 1$. If we put in a sufficiently small $\beta$, then the limit may exist.
The $g_\alpha$'s only misbehave at $x = 0$. There we have
$$\lim_{h\to 0} \frac{|h|^\alpha}{|h|^\beta},$$
which exists when $\beta \le \alpha$. To see more clearly what this means, consider a small box drawn around a piece of the graph.
How much must we shrink the box's width to halve its height? If the function is differentiable, the answer (eventually, for a sufficiently small box) is to halve the horizontal dimension of the box as well, since all differentiable functions can be well approximated as linear on sufficiently small regions. For our $g_\alpha$'s at the origin, halving the height requires shrinking the width by the factor $2^{1/\alpha}$, faster and faster as $1/\alpha$ increases.
More generally, a function is said to be $\alpha$-continuous if $\alpha$ is the largest exponent in the denominator for which the limit above exists on the whole interval under consideration. $\alpha = 0$ corresponds to a discontinuous function -- we can't make the limit exist. Differentiable functions have $\alpha = 1$. The $g_\alpha$'s provide a family of examples illustrating $\alpha$'s throughout the admissible range.
Note that $\alpha$ depends on the domain you are considering. $g_{1/2}(x) = |x|^{1/2}$ has $\alpha = \frac12$ on the interval $[0,1]$, but $\alpha = 1$ on $[1,2]$.
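A sketch of this probing procedure, assuming the family $g_\alpha(x) = |x|^\alpha$ discussed above: the difference quotient at the origin with $|h|^\beta$ in the denominator stays bounded only for $\beta \le \alpha$. (The probe exponents and step sizes are illustrative.)

```python
# Probing the Holder exponent of g(x) = x ** 0.5 at the origin: the ratio
# |g(h) - g(0)| / h**beta stays bounded only for beta <= 1/2.
def ratio(beta, h):
    return (h ** 0.5 - 0.0) / h ** beta

for beta in (0.4, 0.5, 0.6):
    print(beta, [ratio(beta, 10.0 ** -k) for k in (2, 4, 6)])
# beta = 0.4: ratios shrink; beta = 0.5: exactly 1; beta = 0.6: ratios blow up
```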
When is the series
$$\sum_{\nu=-\infty}^{\infty} \hat f(\nu)\, e^{2\pi i\nu x}$$
equal to $f(x)$? And, worse, do its partial sums converge to $f$? We are all familiar with power series, where Taylor's remainder theorem tells us exactly how bad life can be. Can we find a similar tool for Fourier series?
First we show
$$\lim_{r\to1^-} \sum_{\nu=-\infty}^{\infty} r^{|\nu|}\, \hat f(\nu)\, e^{2\pi i\nu x} = f(x)$$
for all continuous functions on $[0,1]$. The proof is easy, but not obvious. Consider the function
$$P_r(x) = \sum_{\nu=-\infty}^{\infty} r^{|\nu|}\, e^{2\pi i\nu x}$$
for $0 \le r < 1$, called the Poisson kernel. The Poisson kernel has four useful properties:
1. $\displaystyle\int_0^1 P_r(x)\,dx = 1$;
2. $P_r(x) = \dfrac{1 - r^2}{1 - 2r\cos(2\pi x) + r^2}$;
3. $P_r(x) > 0$;
4. for any fixed $0 < \delta < \frac12$, $\displaystyle\int_{\delta\le|x|\le\frac12} P_r(x)\,dx \to 0$ as $r \to 1^-$.
We prove each of these in turn.
Proof of 1. If we can show that the integral of the sum of absolute values is finite, then we can apply Fubini's theorem to switch the order of the sum and the integral. Once we have, it immediately becomes
$$\int_0^1 P_r(x)\,dx = \sum_{\nu=-\infty}^{\infty} r^{|\nu|} \int_0^1 e^{2\pi i\nu x}\,dx.$$
The integrals vanish for all $\nu \neq 0$, and we are left with $1$. To see that the integral is finite, consider
$$\int_0^1 \sum_{\nu=-\infty}^{\infty} \left| r^{|\nu|}\, e^{2\pi i\nu x} \right| dx = \sum_{\nu=-\infty}^{\infty} r^{|\nu|} = \frac{1+r}{1-r} < \infty.$$
Proof of 2. We rewrite the sum in $P_r$ as two sums, one over the terms with $\nu \ge 0$ and one over $\nu < 0$:
$$P_r(x) = \sum_{\nu=0}^{\infty} \left( r e^{2\pi ix} \right)^{\nu} + \sum_{\nu=1}^{\infty} \left( r e^{-2\pi ix} \right)^{\nu} = \frac{1}{1 - re^{2\pi ix}} + \frac{re^{-2\pi ix}}{1 - re^{-2\pi ix}} = \frac{1 - r^2}{1 - 2r\cos(2\pi x) + r^2}.$$
Proof of 3. We can rewrite $P_r$ as
$$P_r(x) = \frac{1 - r^2}{\left| 1 - re^{2\pi ix} \right|^2},$$
which is obviously positive.
Proof of 4. Observe that
$$P_r(x)\, \chi_{\{\delta \le |x| \le \frac12\}}(x) \le \frac{1 - r^2}{1 - 2r\cos(2\pi\delta) + r^2},$$
where $\chi_A$ is the characteristic function of the set $A$; the denominator is smallest at $|x| = \delta$ since $\cos(2\pi x)$ decreases as $|x|$ grows toward $\frac12$. Then
$$\int_{\delta\le|x|\le\frac12} P_r(x)\,dx \le \frac{1 - r^2}{1 - 2r\cos(2\pi\delta) + r^2} \longrightarrow \frac{0}{2 - 2\cos(2\pi\delta)} = 0 \quad \text{as } r \to 1^-.$$
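All four properties can be checked numerically against the defining sum; the value of $r$, the grid, and the truncation point below are arbitrary.

```python
import numpy as np

# Numerical check of the Poisson kernel's properties; r, the grid, and the
# truncation of the defining sum are arbitrary choices.
r = 0.9
n = 2048
x = np.arange(n) / n

closed = (1 - r**2) / (1 - 2 * r * np.cos(2 * np.pi * x) + r**2)   # property 2
truncated = sum(r ** abs(nu) * np.exp(2j * np.pi * nu * x)
                for nu in range(-400, 401)).real                    # defining sum

gap = np.max(np.abs(closed - truncated))
print(gap)                 # the neglected tail is O(r^400): negligible
print(closed.min() > 0)    # property 3: positivity
print(np.mean(closed))     # property 1: the integral over [0, 1] is 1
```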
Now we will prove that
$$\lim_{r\to1^-} \sum_{\nu=-\infty}^{\infty} r^{|\nu|}\, \hat f(\nu)\, e^{2\pi i\nu x} = f(x)$$
for all continuous $f$. All continuous functions on a finite interval are $L^1$, and it is easy to show that our sum, putting in the definition of $\hat f(\nu)$, is finite. Thus we can freely use Fubini's theorem:
$$\sum_\nu r^{|\nu|}\, \hat f(\nu)\, e^{2\pi i\nu x} = \int_0^1 f(t) \sum_\nu r^{|\nu|}\, e^{2\pi i\nu(x-t)}\,dt = \int_0^1 f(t)\, P_r(x-t)\,dt = (P_r * f)(x),$$
where $*$ represents convolution. To show that the series equals $f$ at a point $x$, we consider $\left| (P_r * f)(x) - f(x) \right|$, which by property 1 we can rewrite as the integral
$$\left| \int_{-1/2}^{1/2} \left( f(x-t) - f(x) \right) P_r(t)\,dt \right|.$$
Let $\epsilon > 0$. Then there exists $\delta > 0$ such that $\left| f(x-t) - f(x) \right| < \epsilon$ whenever $|t| < \delta$ (since $f$ is continuous). Also, $\left| f(x-t) - f(x) \right| \le 2\|f\|_\infty$ from the triangle inequality. Then
$$\left| \int_{-1/2}^{1/2} \left( f(x-t) - f(x) \right) P_r(t)\,dt \right| \le \epsilon \int_{|t|<\delta} P_r(t)\,dt + 2\|f\|_\infty \int_{\delta\le|t|\le\frac12} P_r(t)\,dt \le \epsilon + 2\|f\|_\infty \int_{\delta\le|t|\le\frac12} P_r(t)\,dt.$$
Continuous functions are bounded, so $\|f\|_\infty < \infty$. When we take the limit $r \to 1^-$, the second term goes away (property 4), and since $\epsilon$ was arbitrary our result is proved.
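A numerical illustration of this convergence, under the assumption that we may compute the convolution by weighting Fourier coefficients with $r^{|\nu|}$; the test function, with a corner at $x = \frac12$, and the values of $r$ are illustrative.

```python
import numpy as np

# Abel means (P_r * f)(x), computed by damping Fourier coefficients by r^|nu|,
# approach f as r -> 1. The corner function f and the r values are illustrative.
n = 4096
x = np.arange(n) / n
f = np.abs(x - 0.5)

fhat = np.fft.fft(f) / n
nu = np.fft.fftfreq(n, d=1.0 / n)   # integer frequencies, positive and negative

def abel_mean(r):
    return np.real(np.fft.ifft(fhat * r ** np.abs(nu)) * n)

errors = [np.max(np.abs(abel_mean(r) - f)) for r in (0.9, 0.99, 0.999)]
print(errors)  # decreasing toward 0
```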
This seems wonderfully easy, but nowhere have we made any mention of partial sums, which are all that we can actually work with except in very special cases. The pathological behavior of the partial Fourier sums will occupy us for the rest of this article.
The partial sums of Fourier series are difficult. We can establish a result for functions which are sufficiently smooth, but for overly rough creatures, there is no way to determine whether the series will behave itself.
As usual, we limit our attention to $L^2$ functions, since we can't even begin to work with anything else. If we take one of the $g_\alpha$'s defined before, we can hope to get some grasp of what happens. We approximate the function by a sine of low frequency, but this doesn't catch the $g_\alpha$'s point. Indeed, in the neighborhood of the point, the sine is flat, and the function's behavior is essentially unchanged.
Now we have to fit a higher frequency in. Try twice the last frequency. We subtract off the approximated part of the function, and again we are left with a point, just a narrower one. At every additional frequency, our point gets narrower and narrower, and we get less of it with each frequency doubling. The trick is to find at what $\alpha$ our series cannot match the point fast enough to converge.
The answer turns out to be $\alpha = \frac12$, which we now set out to prove.
This problem turns out to be equivalent to finding for what $\alpha$ we have
$$\sum_{\nu=-\infty}^{\infty} \left| \hat f(\nu) \right| < \infty.$$
This isn't obvious. How are they related? Our heuristic discussion above provides the proper insight. We divide the series into sections with $2^{n-1} \le |\nu| < 2^n$ (corresponding to all modes between frequency doublings). We have to take $|\nu|$ since we were talking before about sines, and we must catch both complex exponentials corresponding to the sine.
Since we are concerned with $\alpha$-Hölder continuity, we rewrite the Fourier coefficients in the following manner. Let $f_h(x) = f(x+h) - f(x)$. Then
$$\hat{f_h}(\nu) = \int_0^1 \left( f(x+h) - f(x) \right) e^{-2\pi i\nu x}\,dx = \left( e^{2\pi i\nu h} - 1 \right) \hat f(\nu),$$
since $f$ is assumed periodic, so
$$\hat f(\nu) = \frac{\hat{f_h}(\nu)}{e^{2\pi i\nu h} - 1},$$
where $f_h$ is the appropriate difference of our function. Now let $h = \frac{1}{3\cdot 2^{n-1}}$ (since it is arbitrary). The denominator turns out to be well behaved: since $2^{n-1} \le |\nu| < 2^n$, we find that $|\nu h|$ lies between $\frac13$ and $\frac23$, so the exponential rotates over the left half of the unit circle in this range, and doesn't go near 1; in fact $\left| e^{2\pi i\nu h} - 1 \right| \ge \sqrt3$. Now consider the power contained in such a range of frequencies:
$$\sum_{2^{n-1}\le|\nu|<2^n} \left| \hat f(\nu) \right|^2 \le \frac13 \sum_{2^{n-1}\le|\nu|<2^n} \left| \hat{f_h}(\nu) \right|^2 \le \frac13 \left\| f_h \right\|_2^2 \le C h^{2\alpha} = C \left( \frac{1}{3\cdot 2^{n-1}} \right)^{2\alpha} \le C\, 2^{-2n\alpha}$$
for some constant $C$, the middle step by Bessel's inequality, where we have absorbed the Hölder norm of $f$ into $C$, since for our function it is finite. Now consider the sum
$$\sum_{n=1}^{\infty} C\, 2^{-2n\alpha}.$$
If $\alpha > 0$, then we have a summable series, which tells us that $\left( \hat f(\nu) \right)$ is a square summable series. Then we have, by the Schwarz inequality applied to each section,
$$\sum_{\nu\ne0} \left| \hat f(\nu) \right| = \sum_{n=1}^{\infty} \sum_{2^{n-1}\le|\nu|<2^n} \left| \hat f(\nu) \right| \le \sum_{n=1}^{\infty} \left( \sum_{2^{n-1}\le|\nu|<2^n} 1 \right)^{1/2} \left( \sum_{2^{n-1}\le|\nu|<2^n} \left| \hat f(\nu) \right|^2 \right)^{1/2}.$$
We just showed that the second square root is at most $\sqrt{C}\,2^{-n\alpha}$; the first one is at most $2^{n/2}$, so the $n$th summand is at most $\sqrt{C}\,2^{n(\frac12 - \alpha)}$, and the sum over $n$ is finite only if $\alpha > \frac12$. We must have $\alpha > \frac12$, which sets our lower limit on $\alpha$. This is not to say series won't converge with lower $\alpha$; such series are merely less predictable.
$C[0,1]$ with the uniform norm is a complete space (the uniform limit of continuous functions is continuous -- a standard theorem found in most analysis books). Summable series have Cauchy partial sums, so the series must converge to something. We will show in the next section that it converges to $f$.
Before we move on, we note that since
$$\sum_\nu \left| \hat f(\nu)\, e^{2\pi i\nu x} \right| = \sum_\nu \left| \hat f(\nu) \right|$$
is absolutely summable, uniformly in $x$, we can invoke the Weierstrass M-test to say that the sum (as a function of $x$) is continuous.
The proper tool to analyze partial Fourier sums is a cousin to the Poisson kernel we used earlier: the Dirichlet kernel, defined as
$$D_N(x) = \sum_{\nu=-N}^{N} e^{2\pi i\nu x}.$$
The first few Dirichlet kernels look like rapid oscillations beneath an ever-taller central peak at the integers.
The Dirichlet kernel has two properties of great importance:
$$D_N(x) = \frac{\sin\!\left( (2N+1)\pi x \right)}{\sin(\pi x)}$$
for $x$ not an integer, and $D_N(x) = 2N+1$ if $x$ is an integer; and
$$\lim_{N\to\infty} \int_{-1/2}^{1/2} D_N(t)\, f(t)\,dt = f(0)$$
for periodic $\alpha$-continuous $f$.
The proofs are:
Proof of 1. We take
$$D_N(x) = \sum_{\nu=-N}^{N} e^{2\pi i\nu x} = 2\,\mathrm{Re} \sum_{\nu=0}^{N} e^{2\pi i\nu x} - 1.$$
Now consider instead
$$\sum_{\nu=0}^{N} e^{2\pi i\nu x} = \frac{e^{2\pi i(N+1)x} - 1}{e^{2\pi ix} - 1} = \frac{e^{\pi i(N+1)x} \sin\!\left( (N+1)\pi x \right)}{e^{\pi ix} \sin(\pi x)} = e^{\pi iNx}\, \frac{\sin\!\left( (N+1)\pi x \right)}{\sin(\pi x)},$$
using $e^{i\theta} - 1 = 2i\, e^{i\theta/2}\sin(\theta/2)$ on the numerator and the denominator. Now we take the real part:
$$\mathrm{Re} \sum_{\nu=0}^{N} e^{2\pi i\nu x} = \frac{\cos(\pi Nx)\, \sin\!\left( (N+1)\pi x \right)}{\sin(\pi x)}.$$
We apply the identity
$$2\cos A \sin B = \sin(A+B) - \sin(A-B).$$
Combined with our formula for $D_N$ we get
$$D_N(x) = \frac{\sin\!\left( (2N+1)\pi x \right) + \sin(\pi x)}{\sin(\pi x)} - 1 = \frac{\sin\!\left( (2N+1)\pi x \right)}{\sin(\pi x)}$$
for $x$ not an integer. If $x$ is an integer, then $D_N(x) = 2N+1$ from simple inspection.
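The closed form can be compared with the defining sum numerically; $N$ and the grid below are arbitrary choices.

```python
import numpy as np

# Dirichlet kernel: the defining sum against the closed form; N and the grid
# are arbitrary (the grid avoids the integers, where the quotient reads 0/0).
N = 10
x = np.arange(1, 2000) / 2000.0

direct = sum(np.exp(2j * np.pi * nu * x) for nu in range(-N, N + 1)).real
closed = np.sin((2 * N + 1) * np.pi * x) / np.sin(np.pi * x)
gap = np.max(np.abs(direct - closed))

d0 = sum(np.exp(2j * np.pi * nu * 0.0) for nu in range(-N, N + 1)).real
print(gap, d0)  # gap is floating-point small; d0 = 2N + 1 = 21
```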
Proof of 2. First of all,
$$\int_{-1/2}^{1/2} D_N(t)\,dt = 1$$
for all $N$, since all terms in the sum vanish under integration except the $\nu = 0$ one. Now let $f$ be $\alpha$-continuous with $\alpha > 0$, and let $\epsilon > 0$. Writing $g(t) = \dfrac{f(t) - f(0)}{\sin(\pi t)}$, the $\alpha$-continuity of $f$ means $g$ grows no faster than $|t|^{\alpha - 1}$ near $0$, so $g$ is integrable, and there exists some $\delta$ such that $\int_{|t|<\delta} |g(t)|\,dt < \epsilon$.
It is enough to consider this identity at $x = 0$ --- by a change of variable, we can always bring any other point to 0. Now take
$$\int_{-1/2}^{1/2} D_N(t)\, f(t)\,dt - f(0) = \int_{-1/2}^{1/2} \left( f(t) - f(0) \right) D_N(t)\,dt = \left( \int_{|t|<\delta} + \int_{\delta\le|t|\le\frac12} \right) g(t)\, \sin\!\left( (2N+1)\pi t \right) dt.$$
In the second term, $\sin(\pi t)$ is bounded away from zero, so $g$ is bounded there, and just as for the Poisson kernel (but now with $N \to \infty$ instead of $r \to 1$), the term vanishes in the limit: the ever-faster oscillation of $\sin((2N+1)\pi t)$ cancels it out (this is the Riemann--Lebesgue lemma). Thus we have
$$\left| \int_{-1/2}^{1/2} D_N(t)\, f(t)\,dt - f(0) \right| \le \int_{|t|<\delta} |g(t)|\,dt + o(1) < \epsilon + o(1),$$
and since $\epsilon$ was arbitrary, the result follows.
Our result that
$$\sum_{\nu=-\infty}^{\infty} \left| \hat f(\nu) \right| < \infty$$
lets us rearrange integrals and infinite sums. Now consider the partial sums
$$S_N f(x) = \sum_{|\nu|\le N} \hat f(\nu)\, e^{2\pi i\nu x} = \int_0^1 f(t) \sum_{|\nu|\le N} e^{2\pi i\nu(x-t)}\,dt = \int_0^1 f(t)\, D_N(x-t)\,dt \longrightarrow f(x)$$
by property 2 of the Dirichlet kernel. Finally we have the convergence as desired.
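As a closing numerical illustration (the corner function and the cutoffs $N$ are illustrative choices), the partial sums of a function with absolutely summable coefficients converge uniformly:

```python
import numpy as np

# Partial sums S_N f for a corner function: with absolutely summable
# coefficients the convergence is uniform. Function and cutoffs are illustrative.
n = 4096
x = np.arange(n) / n
f = np.abs(x - 0.5)

fhat = np.fft.fft(f) / n
nu = np.fft.fftfreq(n, d=1.0 / n)

def partial_sum(N):
    return np.real(np.fft.ifft(fhat * (np.abs(nu) <= N)) * n)

errors = [np.max(np.abs(partial_sum(N) - f)) for N in (4, 16, 64, 256)]
print(errors)  # shrinking roughly like 1/N
```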
From here, the reader might find it interesting to explore the Gibbs phenomenon from the point of view of the Dirichlet kernel, the effect of the Fourier series on derivatives, and the transition to the Fourier transform.