When working with signals, it is often helpful to break a signal into smaller, more manageable parts. Hopefully by now you have been exposed to the concept of eigenvectors and their use in decomposing a signal into one of its possible bases. By doing this, we are able to simplify our calculations of signals and systems through the eigenfunctions of LTI systems.
Now we would like to look at an alternative way to represent signals: through the use of orthonormal bases. We can think of an orthonormal basis as a set of building blocks we use to construct functions. We will build up the signal/vector as a weighted sum of basis elements.
The complex sinusoids $\frac{1}{\sqrt{T}} e^{j \omega_0 n t}$ for all $n \in \mathbb{Z}$ form an orthonormal basis for $L^2([0,T))$. In our Fourier series equation, $f(t) = \sum_{n=-\infty}^{\infty} c_n e^{j \omega_0 n t}$, the coefficients $\{c_n\}$ are just another representation of $f(t)$. For signals/vectors in a Hilbert space, the expansion coefficients are easy to find.
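As a quick numerical illustration (a minimal sketch, not part of the original derivation), the NumPy snippet below expands a vector in an orthonormal basis for $\mathbb{R}^2$. Because the basis is orthonormal, each expansion coefficient is simply an inner product with a basis element; the particular basis and vector are chosen here only for illustration.

```python
import numpy as np

# Orthonormal basis for R^2: normalized versions of (1,1) and (1,-1).
b0 = np.array([1.0, 1.0]) / np.sqrt(2)
b1 = np.array([1.0, -1.0]) / np.sqrt(2)

x = np.array([1.0, 2.0])

# For an orthonormal basis, each expansion coefficient is just an
# inner product: alpha_i = <x, b_i>.  No matrix inverse is needed.
alpha0 = np.dot(x, b0)
alpha1 = np.dot(x, b1)

# Reconstruct x as a weighted sum of the basis elements.
x_rebuilt = alpha0 * b0 + alpha1 * b1
print(alpha0, alpha1)             # 2.1213...  -0.7071...
print(np.allclose(x, x_rebuilt))  # True
```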
Recall our definition of a basis: A set of vectors $\{b_i\}$ in a vector space $S$ is a basis if

1. the $b_i$ are linearly independent, and
2. the $b_i$ span $S$. That is, we can find coefficients $\{\alpha_i\}$ (scalars) such that
$$x = \sum_i \alpha_i b_i \qquad (1)$$

where $x$ is a vector in $S$, $\alpha_i$ is a scalar in $\mathbb{C}$, and $b_i$ is a vector in $S$.
Condition 2 in the above definition says we can decompose any vector in terms of the $\{b_i\}$. Condition 1 ensures that the decomposition is unique (think about this at home). The coefficients $\{\alpha_i\}$ provide an alternate representation of $x$.
Let us look at a simple example in $\mathbb{R}^2$, where we have the following vector:
$$x = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$
With the standard basis $\{e_0, e_1\} = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}$ we can write $x = e_0 + 2 e_1$, while with the alternate basis $\{h_0, h_1\} = \left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\}$ we can write $x = \frac{3}{2} h_0 - \frac{1}{2} h_1$.

In general, given a basis $\{b_0, b_1\}$ and a vector $x \in \mathbb{R}^2$, how do we find the coefficients $\alpha_0$ and $\alpha_1$ such that
$$x = \alpha_0 b_0 + \alpha_1 b_1 \qquad (2)$$
Finding the Coefficients
Now let us address the question posed above about finding the $\alpha_i$'s in general for $\mathbb{R}^2$. We start by rewriting Equation 2 so that we can stack our $b_i$'s as columns in a $2 \times 2$ matrix:
$$x = \sum_{i=0}^{1} \alpha_i b_i \qquad (3)$$
$$x = \begin{pmatrix} b_0 & b_1 \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \end{pmatrix} \qquad (4)$$
Here is a simple example, which shows a little more detail about the above equations:
$$\begin{pmatrix} x[0] \\ x[1] \end{pmatrix} = \alpha_0 \begin{pmatrix} b_0[0] \\ b_0[1] \end{pmatrix} + \alpha_1 \begin{pmatrix} b_1[0] \\ b_1[1] \end{pmatrix} = \begin{pmatrix} \alpha_0 b_0[0] + \alpha_1 b_1[0] \\ \alpha_0 b_0[1] + \alpha_1 b_1[1] \end{pmatrix} \qquad (5)$$
$$\begin{pmatrix} x[0] \\ x[1] \end{pmatrix} = \begin{pmatrix} b_0[0] & b_1[0] \\ b_0[1] & b_1[1] \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \alpha_1 \end{pmatrix} \qquad (6)$$
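This stacking idea is easy to check numerically. Below is a short NumPy sketch (the particular $b_0$, $b_1$, and $\alpha$ values are chosen here only for illustration) verifying that the weighted sum of basis elements equals the matrix-vector product.

```python
import numpy as np

# Two (non-orthonormal in general) basis vectors for R^2.
b0 = np.array([1.0, 1.0])
b1 = np.array([1.0, -1.0])

# Stack the basis vectors as the columns of a 2x2 basis matrix.
B = np.column_stack((b0, b1))

# A coefficient vector (alpha_0, alpha_1), chosen for illustration.
alpha = np.array([1.5, -0.5])

# The weighted sum of basis elements equals the matrix-vector product.
x_sum = alpha[0] * b0 + alpha[1] * b1
x_mat = B @ alpha
print(np.allclose(x_sum, x_mat))  # True
print(x_mat)                      # [1. 2.]
```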
Simplifying our Equation
To make notation simpler, we define the following two items from the above equations:

Basis matrix: $B = \begin{pmatrix} b_0 & b_1 \end{pmatrix}$, whose columns are the basis vectors.

Coefficient vector: $\alpha = \begin{pmatrix} \alpha_0 \\ \alpha_1 \end{pmatrix}$

This gives us the following, concise equation:
$$x = B \alpha \qquad (7)$$
which is equivalent to $x = \sum_{i=0}^{1} \alpha_i b_i$.
Given the standard basis, $\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}$, we have the following basis matrix:
$$B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
To get the $\alpha_i$'s, we solve for the coefficient vector in Equation 7:
$$\alpha = B^{-1} x \qquad (8)$$
where $B^{-1}$ is the inverse matrix of $B$.
Let us look at the standard basis first and try to calculate $\alpha$ from it:
$$B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I$$
where $I$ is the identity matrix. In order to solve for $\alpha$, let us find the inverse of $B$ first (which is obviously very trivial in this case):
$$B^{-1} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Therefore we get
$$\alpha = B^{-1} x = x$$
Let us look at an ever-so-slightly more complicated basis, $\left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ -1 \end{pmatrix} \right\} = \{h_0, h_1\}$. Then our basis matrix and inverse basis matrix become:
$$B = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad B^{-1} = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix}$$
and for this example it is given that
$$x = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$$
Now we solve for $\alpha$:
$$\alpha = B^{-1} x = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{pmatrix} \begin{pmatrix} 3 \\ 2 \end{pmatrix} = \begin{pmatrix} 2.5 \\ 0.5 \end{pmatrix}$$
and we get
$$x = 2.5 h_0 + 0.5 h_1$$
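As a sanity check, this small NumPy sketch reproduces the computation above, assuming the example values $h_0 = (1, 1)^T$, $h_1 = (1, -1)^T$, and $x = (3, 2)^T$:

```python
import numpy as np

# Basis {h0, h1} = {(1, 1), (1, -1)} stacked as columns, and the given x.
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])
x = np.array([3.0, 2.0])

# alpha = B^{-1} x.  (In practice np.linalg.solve is preferred over
# forming the inverse explicitly, but both give the same result here.)
alpha = np.linalg.inv(B) @ x
print(alpha)                      # [2.5 0.5]
print(np.allclose(B @ alpha, x))  # True
```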
Now we are given the following basis and $x$:
$$\{b_0, b_1\} = \left\{ \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \begin{pmatrix} 3 \\ 0 \end{pmatrix} \right\}, \qquad x = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$$
For this problem, make a sketch of the bases and then represent $x$ in terms of $b_0$ and $b_1$.

In order to represent $x$ in terms of $b_0$ and $b_1$, we will follow the same steps we used in the example above:
$$B = \begin{pmatrix} 1 & 3 \\ 2 & 0 \end{pmatrix}, \qquad B^{-1} = \begin{pmatrix} 0 & \frac{1}{2} \\ \frac{1}{3} & -\frac{1}{6} \end{pmatrix}, \qquad \alpha = B^{-1} x = \begin{pmatrix} 1 \\ \frac{2}{3} \end{pmatrix}$$
And now we can write $x$ in terms of $b_0$ and $b_1$:
$$x = b_0 + \frac{2}{3} b_1$$
And we can easily substitute in our known values of $b_0$ and $b_1$ to verify our results.
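The same verification can be done numerically. The NumPy sketch below solves $B \alpha = x$ for the exercise values directly, rather than forming $B^{-1}$ by hand:

```python
import numpy as np

# Exercise basis vectors as the columns of B, and the given vector x.
B = np.array([[1.0, 3.0],
              [2.0, 0.0]])
x = np.array([3.0, 2.0])

# Solve B alpha = x directly instead of inverting B.
alpha = np.linalg.solve(B, x)
print(alpha)                      # [1.         0.66666667]

# Substitute back: x = 1*b0 + (2/3)*b1.
print(np.allclose(B @ alpha, x))  # True
```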
A change of basis simply looks at $x$ from a "different perspective": $B^{-1}$ transforms $x$ from the standard basis to our new basis, $\{b_0, b_1\}$. Notice that this is a totally mechanical procedure.
Extending the Dimension and Space
We can also extend all these ideas past just $\mathbb{R}^2$ and look at them in $\mathbb{R}^n$ and $\mathbb{C}^n$. This procedure extends naturally to higher ($> 2$) dimensions. Given a basis $\{b_0, b_1, \ldots, b_{n-1}\}$ for $\mathbb{R}^n$, we want to find $\{\alpha_0, \alpha_1, \ldots, \alpha_{n-1}\}$ such that
$$x = \sum_{i=0}^{n-1} \alpha_i b_i$$
Again, we will set up a basis matrix
$$B = \begin{pmatrix} b_0 & b_1 & b_2 & \cdots & b_{n-1} \end{pmatrix}$$
where the columns equal the basis vectors, and it will always be an $n \times n$ matrix (although the above matrix does not appear to be square since we left terms in vector notation). We can then proceed to rewrite Equation 7:
$$x = \begin{pmatrix} b_0 & \cdots & b_{n-1} \end{pmatrix} \begin{pmatrix} \alpha_0 \\ \vdots \\ \alpha_{n-1} \end{pmatrix} = B \alpha$$
and
$$\alpha = B^{-1} x$$
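Nothing changes computationally in higher dimensions: we still stack the basis vectors as columns and solve $B \alpha = x$. The NumPy sketch below demonstrates this for $n = 4$, with a randomly generated basis chosen only for illustration:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)

# Random n x n basis matrix; its columns are (almost surely) linearly
# independent, so they form a basis for R^n.
B = rng.standard_normal((n, n))
x = rng.standard_normal(n)

# Exactly the same mechanics as the 2x2 case: alpha = B^{-1} x.
alpha = np.linalg.solve(B, x)

# Reconstruct x as the weighted sum of the basis columns.
x_rebuilt = B @ alpha
print(np.allclose(x, x_rebuilt))  # True
```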