When working with signals, it is often helpful to break a signal up into smaller, more manageable parts. By now you have likely been exposed to the concept of eigenvectors and their use in decomposing a signal into one of its possible bases. Doing so simplifies our calculations on signals and systems through the eigenfunctions of LTI systems.
Now we would like to look at an alternative way to represent signals: through the use of orthonormal bases. We can think of an orthonormal basis as a set of building blocks from which we construct functions, building up the signal/vector as a weighted sum of basis elements.
The complex sinusoids $\frac{1}{\sqrt{T}}{e}^{i{\omega}_{0}nt}$, where ${\omega}_{0}=\frac{2\pi }{T}$, for all $-\infty <n<\infty $ form an orthonormal basis for ${L}^{2}\left(\left[0,T\right]\right)$.
In our Fourier series equation, $f\left(t\right)=\sum _{n=-\infty}^{\infty}{c}_{n}{e}^{i{\omega}_{0}nt}$, the $\left\{{c}_{n}\right\}$ are just another representation of $f\left(t\right)$.
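The orthonormality claim above can be checked numerically. The sketch below (assuming NumPy is available; the choice $T=2$ is arbitrary, made only for the demonstration) approximates the ${L}^{2}\left(\left[0,T\right]\right)$ inner product $\langle f,g\rangle =\int_{0}^{T}f\left(t\right)\overline{g\left(t\right)}dt$ with a Riemann sum and confirms that two distinct harmonics are orthogonal while each has unit norm.

```python
import numpy as np

# Arbitrary period for the demonstration.
T = 2.0
w0 = 2 * np.pi / T

# Uniform grid on [0, T) for a Riemann-sum approximation of the integral.
N = 10_000
dt = T / N
t = np.arange(N) * dt

def basis(n):
    """The normalized complex sinusoid (1/sqrt(T)) * exp(i*w0*n*t)."""
    return np.exp(1j * w0 * n * t) / np.sqrt(T)

def inner(f, g):
    """Approximate L^2([0,T]) inner product <f, g> = integral f * conj(g) dt."""
    return np.sum(f * np.conj(g)) * dt

print(abs(inner(basis(3), basis(3))))  # unit norm: value is ~1
print(abs(inner(basis(3), basis(5))))  # orthogonality: value is ~0
```

For complex exponentials sampled uniformly over a full period, the Riemann sum is essentially exact, so the results match the analytic inner products to machine precision.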
Recall our definition of a basis: A set of vectors $\left\{{b}_{i}\right\}$ in a vector space $S$ is a basis if

1. the ${b}_{i}$ are linearly independent, and
2. the ${b}_{i}$ span $S$; that is, every $x\in S$ can be written as a weighted sum $x=\sum _{i}{\alpha}_{i}{b}_{i}$ for some scalars ${\alpha}_{i}$.
Condition 2 in the above definition says we can decompose any vector in terms of the $\left\{{b}_{i}\right\}$. Condition 1 ensures that the decomposition is unique (think about this at home).
Let us look at a simple example in ${\mathbb{R}}^{2}$, where we have the following vector: $$x=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ Standard Basis: $\left\{{e}_{0},{e}_{1}\right\}=\left\{{\left(1,0\right)}^{T},{\left(0,1\right)}^{T}\right\}$ $$x={e}_{0}+2{e}_{1}$$ Alternate Basis: $\left\{{h}_{0},{h}_{1}\right\}=\left\{{\left(1,1\right)}^{T},{\left(1,-1\right)}^{T}\right\}$ $$x=\frac{3}{2}{h}_{0}+\frac{-1}{2}{h}_{1}$$
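Both decompositions of $x$ above can be verified with a few lines of code (a minimal sketch, assuming NumPy):

```python
import numpy as np

x = np.array([1.0, 2.0])

# Standard basis: x = e0 + 2*e1.
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(x, e0 + 2 * e1)

# Alternate basis: x = (3/2)*h0 + (-1/2)*h1.
h0, h1 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
assert np.allclose(x, 1.5 * h0 - 0.5 * h1)
```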
In general, given a basis $\left\{{b}_{0},{b}_{1}\right\}$ and a vector $x\in {\mathbb{R}}^{2}$, how do we find the ${\alpha}_{0}$ and ${\alpha}_{1}$ such that $$x={\alpha}_{0}{b}_{0}+{\alpha}_{1}{b}_{1}$$
Now let us address the question posed above about finding the ${\alpha}_{i}$'s in general for ${\mathbb{R}}^{2}$. We start by rewriting the expansion $x={\alpha}_{0}{b}_{0}+{\alpha}_{1}{b}_{1}$ so that we can stack our ${b}_{i}$'s as columns in a 2×2 matrix: $$x=\left(\begin{array}{cc}{b}_{0}& {b}_{1}\end{array}\right)\left(\begin{array}{c}{\alpha}_{0}\\ {\alpha}_{1}\end{array}\right)$$
Here is a simple example, which shows a little more detail about the above equations.
To make notation simpler, we define the following two items from the above equations: the basis matrix $B=\left(\begin{array}{cc}{b}_{0}& {b}_{1}\end{array}\right)$, whose columns are the basis vectors, and the coefficient vector $\alpha ={\left({\alpha}_{0},{\alpha}_{1}\right)}^{T}$, so that $$x=B\alpha $$
Given the standard basis, $\left\{\left(\begin{array}{c}1\\ 0\end{array}\right),\left(\begin{array}{c}0\\ 1\end{array}\right)\right\}$, we have the following basis matrix: $$B=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)$$
To get the ${\alpha}_{i}$'s, we solve the matrix equation $x=B\alpha $ for the coefficient vector: $$\alpha ={B}^{-1}x$$
Let us look at the standard basis first and try to calculate $\alpha $ from it. $$B=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)=I$$ where $I$ is the identity matrix. In order to solve for $\alpha $, let us first find the inverse of $B$ (which is trivial in this case): $${B}^{-1}=\left(\begin{array}{cc}1& 0\\ 0& 1\end{array}\right)$$ Therefore we get $$\alpha ={B}^{-1}x=x$$
Let us look at an ever-so-slightly more complicated basis, $\left\{\left(\begin{array}{c}1\\ 1\end{array}\right),\left(\begin{array}{c}1\\ -1\end{array}\right)\right\}=\left\{{h}_{0},{h}_{1}\right\}$. Then our basis matrix and inverse basis matrix become: $$B=\left(\begin{array}{cc}1& 1\\ 1& -1\end{array}\right)$$ $${B}^{-1}=\left(\begin{array}{cc}\frac{1}{2}& \frac{1}{2}\\ \frac{1}{2}& \frac{-1}{2}\end{array}\right)$$ and for this example it is given that $$x=\left(\begin{array}{c}3\\ 2\end{array}\right)$$ Now we solve for $\alpha $ $$\alpha ={B}^{-1}x=\left(\begin{array}{cc}\frac{1}{2}& \frac{1}{2}\\ \frac{1}{2}& \frac{-1}{2}\end{array}\right)\left(\begin{array}{c}3\\ 2\end{array}\right)=\left(\begin{array}{c}2.5\\ 0.5\end{array}\right)$$ and we get $$x=2.5{h}_{0}+0.5{h}_{1}$$
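This hand computation can be reproduced in a few lines (a sketch assuming NumPy): stack the basis vectors as columns of $B$, invert, and multiply.

```python
import numpy as np

# Basis vectors from the example above, stacked as the columns of B.
h0 = np.array([1.0, 1.0])
h1 = np.array([1.0, -1.0])
B = np.column_stack([h0, h1])

x = np.array([3.0, 2.0])

# alpha = B^{-1} x, as in the derivation.
alpha = np.linalg.inv(B) @ x
print(alpha)  # matches the hand computation: (2.5, 0.5)

# Reconstruct x as the weighted sum of basis vectors.
assert np.allclose(alpha[0] * h0 + alpha[1] * h1, x)
```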
Now we are given the following basis matrix and $x$: $$\left\{{b}_{0},{b}_{1}\right\}=\left\{\left(\begin{array}{c}1\\ 2\end{array}\right),\left(\begin{array}{c}3\\ 0\end{array}\right)\right\}$$ $$x=\left(\begin{array}{c}3\\ 2\end{array}\right)$$ For this problem, make a sketch of the bases and then represent $x$ in terms of ${b}_{0}$ and ${b}_{1}$.
In order to represent $x$ in terms of ${b}_{0}$ and ${b}_{1}$, we follow the same steps we used in the above example. Stacking ${b}_{0}$ and ${b}_{1}$ as columns gives $$B=\left(\begin{array}{cc}1& 3\\ 2& 0\end{array}\right)$$ $${B}^{-1}=\left(\begin{array}{cc}0& \frac{1}{2}\\ \frac{1}{3}& \frac{-1}{6}\end{array}\right)$$ $$\alpha ={B}^{-1}x=\left(\begin{array}{c}1\\ \frac{2}{3}\end{array}\right)$$ And now we can write $x$ in terms of ${b}_{0}$ and ${b}_{1}$: $$x={b}_{0}+\frac{2}{3}{b}_{1}$$ Substituting in the known values of ${b}_{0}$ and ${b}_{1}$ verifies the result.
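As a check on the exercise, the same steps can be carried out numerically (a sketch assuming NumPy). Here `np.linalg.solve` is used in place of forming ${B}^{-1}$ explicitly, which is the numerically preferred way to solve $B\alpha =x$:

```python
import numpy as np

# Exercise data: basis vectors as columns of B.
b0 = np.array([1.0, 2.0])
b1 = np.array([3.0, 0.0])
B = np.column_stack([b0, b1])
x = np.array([3.0, 2.0])

# Solve B @ alpha = x directly rather than inverting B.
alpha = np.linalg.solve(B, x)
print(alpha)  # approximately (1, 2/3)

# Verify: x should equal alpha_0*b0 + alpha_1*b1.
assert np.allclose(alpha[0] * b0 + alpha[1] * b1, x)
```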
We can also extend all these ideas past just ${\mathbb{R}}^{2}$ and look at them in ${\mathbb{R}}^{n}$ and ${\mathbb{C}}^{n}$. This procedure extends naturally to higher ($>2$) dimensions. Given a basis $\left\{{b}_{0},{b}_{1},\dots ,{b}_{n-1}\right\}$ for ${\mathbb{R}}^{n}$, we want to find $\left\{{\alpha}_{0},{\alpha}_{1},\dots ,{\alpha}_{n-1}\right\}$ such that $$x=\sum _{i=0}^{n-1}{\alpha}_{i}{b}_{i}$$
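The general procedure in ${\mathbb{R}}^{n}$ is the same as in the 2-D examples: stack the $n$ basis vectors as the columns of an $n\times n$ matrix $B$ and solve $B\alpha =x$. A minimal sketch (assuming NumPy; the random basis here is only an illustrative example and is almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# An example basis for R^n: n random vectors, stacked as columns of B.
basis_vectors = [rng.standard_normal(n) for _ in range(n)]
B = np.column_stack(basis_vectors)

x = rng.standard_normal(n)

# Find the coefficients by solving B @ alpha = x.
alpha = np.linalg.solve(B, x)

# x is recovered as the weighted sum of the basis vectors.
recon = sum(a * b for a, b in zip(alpha, basis_vectors))
assert np.allclose(recon, x)
```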