This brief tutorial on key terms in linear algebra is not meant to provide deep insight into the subject. Rather, it offers background to those trying to understand eigenvectors and eigenfunctions, which play a big role in deriving several important ideas in Signals and Systems. These concepts will provide a foundation for signal decomposition and lead up to the derivation of the Fourier Series.

A set of vectors $\left\{{x}_{1},{x}_{2},\dots ,{x}_{k}\right\}$, with each ${x}_{i}\in {\mathbb{C}}^{n}$, is linearly independent if none of them can be written as a linear combination of the others.

Definition 1: Linearly Independent

For a given set of vectors,
$\left\{{x}_{1},{x}_{2},\dots ,{x}_{n}\right\}$, they are linearly independent if
$${c}_{1}{x}_{1}+{c}_{2}{x}_{2}+\dots +{c}_{n}{x}_{n}=0$$
only when
${c}_{1}={c}_{2}=\dots ={c}_{n}=0$

Example 1

We are given the following two vectors:
$${x}_{1}=\left(\begin{array}{c}3\\ 2\end{array}\right)$$
$${x}_{2}=\left(\begin{array}{c}-6\\ -4\end{array}\right)$$
These are **not linearly independent**, as the following relationship shows; by inspection, it violates the definition of linear independence stated above.
$$({x}_{2}=-2{x}_{1})\Rightarrow (2{x}_{1}+{x}_{2}=0)$$
Another way to check a set of vectors for independence is to
graph them. Looking at these two vectors
geometrically (as in Figure 1), one can again
see that they are **not**
linearly independent.
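The dependence in this example can also be verified numerically. The sketch below uses NumPy (an assumption; the original text uses no code) and relies on the fact that vectors are linearly independent exactly when the matrix having them as columns has full column rank.

```python
import numpy as np

# Example 1 vectors, stacked as the columns of a matrix
x1 = np.array([3, 2])
x2 = np.array([-6, -4])
A = np.column_stack([x1, x2])

# The columns are linearly independent iff the matrix has full column rank.
rank = np.linalg.matrix_rank(A)
print(rank)                          # 1, not 2 -> the vectors are dependent
print(np.array_equal(x2, -2 * x1))   # True: x2 = -2*x1, as claimed above
```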

Example 2

We are given the following two vectors:
$${x}_{1}=\left(\begin{array}{c}3\\ 2\end{array}\right)$$
$${x}_{2}=\left(\begin{array}{c}1\\ 2\end{array}\right)$$
These are **linearly independent** since
$${c}_{1}{x}_{1}=-({c}_{2}{x}_{2})$$
only when
${c}_{1}={c}_{2}=0$. By the definition above, these vectors are therefore linearly independent. Again, we
could also graph these two vectors (see Figure 2) to check for linear independence.

Exercise 1

Are $\left\{{x}_{1},{x}_{2},{x}_{3}\right\}$ linearly independent? $${x}_{1}=\left(\begin{array}{c}3\\ 2\end{array}\right)$$ $${x}_{2}=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ $${x}_{3}=\left(\begin{array}{c}-1\\ 0\end{array}\right)$$

Solution

With a little trial and error, we discover the following
relationship:
$${x}_{1}-{x}_{2}+2{x}_{3}=0$$
Thus we have found a linear combination of these three
vectors that equals zero without setting the coefficients
equal to zero. Therefore, these vectors are **not
linearly independent**!
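Rather than trial and error, the coefficients can be computed directly: a dependence relation is a nonzero vector $c$ in the null space of the matrix whose columns are ${x}_{1},{x}_{2},{x}_{3}$. A sketch using NumPy's SVD (an assumption, not part of the original text):

```python
import numpy as np

# Stack the three vectors from Exercise 1 as columns
A = np.column_stack([[3, 2], [1, 2], [-1, 0]])

# A nontrivial null-space vector c satisfies A @ c = 0.
# The right-singular vectors beyond the rank span the null space.
_, s, vt = np.linalg.svd(A)
c = vt[-1]           # a null-space direction
c = c / c[0]         # scale so the first coefficient is 1
print(np.round(c, 6))        # [ 1. -1.  2.]  ->  x1 - x2 + 2*x3 = 0
print(np.allclose(A @ c, 0)) # True
```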

As the two examples above show, the independence of vectors can often be determined easily from a graph. However, this may not be as easy when we are given three or more vectors. Can you easily tell from Figure 3 whether or not these vectors are independent? Probably not, which is why the method used in the above solution becomes important.

Hint:

A set of $m$ vectors in
${\mathbb{C}}^{n}$ cannot be linearly independent if
$m>n$.

Definition 2: Span

The span of a set of vectors
$\left\{{x}_{1},{x}_{2},\dots ,{x}_{k}\right\}$
is the set of vectors that can be written as a linear
combination of
$\left\{{x}_{1},{x}_{2},\dots ,{x}_{k}\right\}$
$$\mathrm{span}\left(\left\{{x}_{1},\dots ,{x}_{k}\right\}\right)=\left\{{\alpha}_{1}{x}_{1}+{\alpha}_{2}{x}_{2}+\dots +{\alpha}_{k}{x}_{k}\mid {\alpha}_{i}\in \mathbb{C}\right\}$$

Example 3

Given the vector
$${x}_{1}=\left(\begin{array}{c}3\\ 2\end{array}\right)$$
the span of
${x}_{1}$
is a **line**.

Example 4

Given the vectors $${x}_{1}=\left(\begin{array}{c}3\\ 2\end{array}\right)$$ $${x}_{2}=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ the span of these vectors is ${\mathbb{C}}^{2}$.
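The dimension of a span equals the rank of the matrix whose columns are the given vectors, so both span examples can be checked numerically. A small sketch, again assuming NumPy (and working over real entries, which suffices here):

```python
import numpy as np

# Example 3: one vector spans a line; Example 4: two independent vectors span all of C^2.
x1 = np.array([3, 2])
x2 = np.array([1, 2])

r1 = np.linalg.matrix_rank(np.column_stack([x1]))       # dimension of span({x1})
r2 = np.linalg.matrix_rank(np.column_stack([x1, x2]))   # dimension of span({x1, x2})
print(r1)  # 1 -> a line
print(r2)  # 2 -> the whole 2-dimensional space
```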

Definition 3: Basis

A basis for
${\mathbb{C}}^{n}$
is a set of vectors that: (1) spans
${\mathbb{C}}^{n}$
**and** (2) is linearly independent.

Example 5

We are given the following vector $${e}_{i}=\left(\begin{array}{c}0\\ \vdots \\ 0\\ 1\\ 0\\ \vdots \\ 0\end{array}\right)$$ where the $1$ is always in the $i$th place and the remaining values are zero. Then the set $$\left\{{e}_{1},{e}_{2},\dots ,{e}_{n}\right\}$$ is a basis for ${\mathbb{C}}^{n}$.
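The vectors ${e}_{i}$ are simply the columns of the identity matrix, so this basis is easy to construct and check in code. A sketch assuming NumPy, with $n=4$ chosen arbitrarily for illustration:

```python
import numpy as np

n = 4
E = np.eye(n)      # column i is the standard basis vector e_{i+1}
print(E[:, 0])     # [1. 0. 0. 0.] -> the 1 sits in the first place

# Full rank means the n columns are independent and span the n-dimensional space,
# so they satisfy both conditions in the definition of a basis.
print(np.linalg.matrix_rank(E))  # n
```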

Note:

$\left\{{e}_{1},{e}_{2},\dots ,{e}_{n}\right\}$
is called the standard basis.

Example 6

$${h}_{1}=\left(\begin{array}{c}1\\ 1\end{array}\right)$$ $${h}_{2}=\left(\begin{array}{c}1\\ -1\end{array}\right)$$ $\left\{{h}_{1},{h}_{2}\right\}$ is a basis for ${\mathbb{C}}^{2}$.

If
$\left\{{b}_{1},\dots ,{b}_{n}\right\}$ is a basis for
${\mathbb{C}}^{n}$, then we can express **any**
$x\in {\mathbb{C}}^{n}$ as a linear combination of the
${b}_{i}$'s:
$$x={\alpha}_{1}{b}_{1}+{\alpha}_{2}{b}_{2}+\dots +{\alpha}_{n}{b}_{n},\quad {\alpha}_{i}\in \mathbb{C}$$

Example 7

Given the following vector, $$x=\left(\begin{array}{c}1\\ 2\end{array}\right)$$ writing $x$ in terms of $\left\{{e}_{1},{e}_{2}\right\}$ gives us $$x={e}_{1}+2{e}_{2}$$

Exercise 2

Try and write $x$ in terms of $\left\{{h}_{1},{h}_{2}\right\}$ (defined in the previous example).

Solution

$$x=\frac{3}{2}{h}_{1}+\frac{-1}{2}{h}_{2}$$
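These coordinates need not be found by inspection: stacking the basis vectors as the columns of a matrix $H$ and solving $H\alpha = x$ yields them directly. A sketch assuming NumPy:

```python
import numpy as np

# Columns are the basis vectors h1, h2 from Example 6
H = np.column_stack([[1, 1], [1, -1]])
x = np.array([1, 2])

# Solve H @ alpha = x for the coordinates of x in the {h1, h2} basis
alpha = np.linalg.solve(H, x)
print(alpha)   # [ 1.5 -0.5]  ->  x = (3/2) h1 - (1/2) h2
```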

In the two basis examples above, $x$ is the same vector in both cases, but it can be expressed in many different ways (we give only two out of infinitely many possibilities). This idea of a basis can be taken even further by extending it to function spaces.

Note:

As mentioned in the introduction, these concepts of linear
algebra will help prepare you to understand the Fourier Series, which
tells us that we can express periodic functions,
$f\left(t\right)$,
in terms of their basis functions,
${e}^{i{\omega}_{0}nt}$.
See also: the Khan Academy lecture on the basis of a subspace.