Before looking at this module, hopefully you have become fully
convinced of the fact that any periodic function,
$f\left(t\right)$, can be represented as a sum of complex sinusoids. If you
are not, then try looking back at eigen-stuff in a nutshell or eigenfunctions of LTI
systems. We have shown that we can represent a signal
as the sum of exponentials through the Fourier Series equations below:
$$f\left(t\right)=\sum _{n=-\infty}^{\infty}{c}_{n}{e}^{i{\omega}_{0}nt}$$
$${c}_{n}=\frac{1}{T}{\int}_{0}^{T}f\left(t\right){e}^{-\left(i{\omega}_{0}nt\right)}dt$$
Joseph
Fourier insisted that these equations were true,
but could not prove it. Lagrange publicly ridiculed
Fourier, and said that only continuous functions can be
represented by Equation 1 (indeed he
proved that Equation 1 holds for
continuous-time functions). However, we know now that
the real truth lies in between Fourier and Lagrange's
positions.
Understanding the Truth
Formulating our question mathematically, let
$${f}_{N}\left(t\right)=\sum _{n=-N}^{N}{c}_{n}{e}^{i{\omega}_{0}nt}$$
where
${c}_{n}$ equals the Fourier coefficients of
$f\left(t\right)$ (see Equation 2).
${f}_{N}\left(t\right)$ is a "partial reconstruction" of
$f\left(t\right)$
using the first
$2N+1$
Fourier coefficients.
${f}_{N}\left(t\right)$ approximates $f\left(t\right)$, with the approximation getting better and better as
$N$ gets large. Therefore, we can
think of the set
$\left\{{f}_{N}\left(t\right):N=0,1,\dots \right\}$
as a sequence of functions, each one
approximating
$f\left(t\right)$ better than the one before.
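The improvement with growing $N$ is easy to see numerically. The sketch below (in Python; the square-wave test signal, sample counts, and truncation orders are my own illustrative choices, not part of the original module) builds the partial reconstructions ${f}_{N}\left(t\right)$ and measures how far each one is from $f\left(t\right)$:

```python
import numpy as np

T = 2 * np.pi            # period, so w0 = 2*pi/T = 1
w0 = 2 * np.pi / T
t = np.linspace(0, T, 4000, endpoint=False)
f = np.where(t < T / 2, 1.0, -1.0)   # example signal: a square wave

def c(n):
    # c_n = (1/T) * integral_0^T f(t) e^{-i w0 n t} dt, approximated as a sample mean
    return np.mean(f * np.exp(-1j * w0 * n * t))

def partial_reconstruction(N):
    # f_N(t) = sum_{n=-N}^{N} c_n e^{i w0 n t}
    return sum(c(n) * np.exp(1j * w0 * n * t) for n in range(-N, N + 1)).real

# RMS error of the sequence f_1, f_5, f_25: each member approximates f better
rms = [np.sqrt(np.mean((f - partial_reconstruction(N)) ** 2)) for N in (1, 5, 25)]
```

Each additional pair of coefficients removes energy from the error, so the RMS error decreases monotonically as $N$ grows.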
The question is, does this sequence converge to
$f\left(t\right)$? Does
${f}_{N}\left(t\right)\to f\left(t\right)$
as
$N\to \infty $? We will try to answer this question by thinking
about convergence in two different ways:
Looking at the energy of the error signal:
$${e}_{N}\left(t\right)=f\left(t\right)-{f}_{N}\left(t\right)$$
Looking at
$\underset{N\to \infty}{\mathrm{limit}}{f}_{N}\left(t\right)$
at each point and comparing to
$f\left(t\right)$.
Approach #1
Let
${e}_{N}\left(t\right)$
be the difference (i.e. error) between the signal
$f\left(t\right)$ and its partial reconstruction
${f}_{N}\left(t\right)$:
$${e}_{N}\left(t\right)=f\left(t\right)-{f}_{N}\left(t\right)$$
If
$f\left(t\right)\in {L}^{2}\left(\left[0,T\right]\right)$ (i.e. has finite energy over one period), then the energy of the error approaches zero:
$$\underset{N\to \infty}{\mathrm{limit}}{\int}_{0}^{T}{\left|{e}_{N}\left(t\right)\right|}^{2}dt=0$$
We can prove this equation using Parseval's relation:
$$\underset{N\to \infty}{\mathrm{limit}}{\int}_{0}^{T}{\left|f\left(t\right)-{f}_{N}\left(t\right)\right|}^{2}dt=\underset{N\to \infty}{\mathrm{limit}}\sum _{n=-\infty}^{\infty}{\left|{\mathcal{F}}_{n}\left(f\left(t\right)\right)-{\mathcal{F}}_{n}\left({f}_{N}\left(t\right)\right)\right|}^{2}=\underset{N\to \infty}{\mathrm{limit}}\sum _{\left|n\right|>N}{\left|{c}_{n}\right|}^{2}=0$$
where the last sum before zero is the tail of the
Fourier Series, which approaches zero because
$f\left(t\right)\in {L}^{2}\left(\left[0,T\right]\right)$.
Since physical systems respond to energy, the
Fourier Series provides an adequate representation for all
$f\left(t\right)\in {L}^{2}\left(\left[0,T\right]\right)$, that is, for all signals with finite energy over one period.
Approach #2
The fact that
${e}_{N}\to 0$ says nothing about
$f\left(t\right)$
and
$\underset{N\to \infty}{\mathrm{limit}}{f}_{N}\left(t\right)$
being equal at a given point. Take the
two functions graphed below for example:
Given these two functions,
$f\left(t\right)$ and
$g\left(t\right)$, which differ only at a finite set of isolated points, we can see that
$f\left(t\right)\ne g\left(t\right)$ as functions, but
$${\int}_{0}^{T}{\left|f\left(t\right)-g\left(t\right)\right|}^{2}dt=0$$
From this we can see the following relationships:
$$\mathrm{energy\; convergence}\ne \mathrm{pointwise\; convergence}$$
$$\mathrm{pointwise\; convergence}\Rightarrow \mathrm{convergence\; in\;}{L}^{2}\left(\left[0,T\right]\right)$$
However, the reverse of the above statement does not hold true.
It turns out that if
$f\left(t\right)$
has a discontinuity (as can be seen in figure
of
$g\left(t\right)$ above) at
${t}_{0}$, then
$$f\left({t}_{0}\right)\ne \underset{N\to \infty}{\mathrm{limit}}{f}_{N}\left({t}_{0}\right)$$
But as long as
$f\left(t\right)$
meets some other fairly mild conditions, then
$$f\left({t}^{\prime}\right)=\underset{N\to \infty}{\mathrm{limit}}{f}_{N}\left({t}^{\prime}\right)$$
if
$f\left(t\right)$
is continuous at
$t={t}^{\prime}$.
These conditions are known as the Dirichlet Conditions.
Dirichlet Conditions
Named after the German mathematician Peter Gustav Lejeune Dirichlet, the
Dirichlet conditions are the sufficient conditions
to guarantee existence and energy convergence
of the Fourier Series.
The Weak Dirichlet Condition for the Fourier Series
For the Fourier Series to exist, the Fourier coefficients
must be finite. The Weak Dirichlet Condition
guarantees this. It essentially says that the
integral of the absolute value of the signal must be
finite.
Theorem 1
The coefficients of the Fourier Series are finite if the Weak Dirichlet Condition is satisfied, that is, if
$${\int}_{0}^{T}\left|f\left(t\right)\right|dt<\infty $$
This holds because each coefficient is bounded by that integral:
$$\left|{c}_{n}\right|\le \frac{1}{T}{\int}_{0}^{T}\left|f\left(t\right){e}^{-\left(i{\omega}_{0}nt\right)}\right|dt=\frac{1}{T}{\int}_{0}^{T}\left|f\left(t\right)\right|dt<\infty $$
If we have the function:
$$\forall t,0<t\le T:\left(f\left(t\right)=\frac{1}{t}\right)$$
then you should note that this function
fails the above condition because:
$${\int}_{0}^{T}\left|\frac{1}{t}\right|dt=\infty $$
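The divergence is easy to observe numerically: since ${\int}_{\epsilon}^{T}\left|1/t\right|dt=\mathrm{ln}\left(T/\epsilon \right)$, the truncated integral grows without bound as $\epsilon \to 0$. A quick Python sketch (the grid sizes and cutoffs here are my own choices):

```python
import numpy as np

T = 1.0

def truncated_integral(eps, samples=100000):
    # trapezoidal approximation of the integral of |1/t| from eps to T;
    # the exact value is log(T / eps), which diverges as eps -> 0
    t = np.geomspace(eps, T, samples)   # log-spaced grid handles the 1/t blow-up
    y = 1.0 / t
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

# pushing the lower limit toward 0 adds roughly log(100) = 4.6 per step,
# so the full integral from 0 to T is infinite: the condition fails
vals = [truncated_integral(10.0 ** -k) for k in (1, 3, 5)]
```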
The Strong Dirichlet Conditions for the Fourier Series
For the Fourier Series to exist, the
following two conditions must be satisfied (along with the Weak
Dirichlet Condition):
In one period,
$f\left(t\right)$ has only a finite number of minima and maxima.
In one period,
$f\left(t\right)$ has only a finite number of discontinuities and
each one is finite.
These are what we refer to as the Strong
Dirichlet Conditions. In theory we can think of
signals that violate these conditions,
$\mathrm{sin}\left(\mathrm{log}t\right)$
for instance. However, it is not possible to create a signal
that violates these conditions in a lab. Therefore, any
real-world signal will have a Fourier representation.
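To see why $\mathrm{sin}\left(\mathrm{log}t\right)$ violates the first strong condition, note that its extrema sit at $t={e}^{-\left(\pi /2+k\pi \right)}$ for $k=0,1,2,\dots $, which pile up near $t=0$, so any interval reaching down toward zero contains infinitely many of them. A small numerical check (Python; the sampling scheme is my own illustrative choice):

```python
import numpy as np

def count_extrema(eps, samples=200000):
    # count local extrema of sin(log t) on (eps, 1] by sign changes of the slope
    t = np.geomspace(eps, 1.0, samples)   # uniform in log t, where sin(log t) oscillates
    y = np.sin(np.log(t))
    dy = np.diff(y)
    return int(np.sum(np.sign(dy[1:]) != np.sign(dy[:-1])))

# the closer the left endpoint gets to 0, the more extrema the interval contains
counts = [count_extrema(10.0 ** -k) for k in (2, 6, 12)]
```

The count keeps growing as the interval approaches $t=0$, so no finite bound on the number of minima and maxima exists.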
Example 1
Let us assume we have the following function and equality:
$${f}^{\prime}\left(t\right)=\underset{N\to \infty}{\mathrm{limit}}{f}_{N}\left(t\right)$$
If
$f\left(t\right)$ meets all three conditions of the Strong Dirichlet
Conditions, then
$$f\left(\tau \right)={f}^{\prime}\left(\tau \right)$$
at every $\tau $ at which
$f\left(t\right)$ is continuous. And where
$f\left(t\right)$ is discontinuous,
${f}^{\prime}\left(t\right)$ is the average of the values
on the right and left.
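This midpoint behavior can be observed directly. The sketch below (Python; the sawtooth signal and truncation order are my own illustrative choices) uses $f\left(t\right)=t$ on $\left[0,T\right)$, which jumps from $T$ back down to $0$ at each period boundary: the partial reconstructions at the jump land on the average $\left(0+T\right)/2$, while at a point of continuity they converge to $f\left(t\right)$ itself:

```python
import numpy as np

T = 2 * np.pi
w0 = 2 * np.pi / T
t = np.linspace(0, T, 100000, endpoint=False)
f = t.copy()   # sawtooth: f(t) = t on [0, T), discontinuous at t = 0 (jump from T to 0)

def c(n):
    # sample-mean approximation of the Fourier coefficient c_n
    return np.mean(f * np.exp(-1j * w0 * n * t))

def f_N(t0, N):
    # partial reconstruction f_N evaluated at a single point t0
    return sum(c(n) * np.exp(1j * w0 * n * t0) for n in range(-N, N + 1)).real

at_jump = f_N(0.0, 200)           # at the discontinuity: expect the average (0 + T)/2 = pi
at_smooth = f_N(np.pi / 2, 200)   # at a point of continuity: expect f(pi/2) = pi/2
```

Neither one-sided value ($0$ or $2\pi $) is reproduced at the jump; the series splits the difference, exactly as the statement above describes.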
Note:
The functions that fail the strong Dirichlet conditions are pretty
pathological - as engineers, we are not too interested in
them.