
# Introduction

A great many applications in signal processing, image processing, and beyond involve determining the presence and location of a target signal within some other signal. A radar system, for example, searches for copies of a transmitted radar pulse in order to determine the presence of and distance to reflective objects such as buildings or aircraft. A communication system searches for copies of waveforms representing digital 0s and 1s in order to receive a message.

Two key mathematical tools that contribute to these applications are inner products and the Cauchy-Schwarz inequality. As is shown in the module on the Cauchy-Schwarz inequality, the expression $\left|\left\langle \frac{x}{\|x\|}, \frac{y}{\|y\|} \right\rangle\right|$ attains its upper bound, which is 1, when $y = ax$ for some scalar $a$ in a real or complex field. The lower bound, which is 0, is attained when $x$ and $y$ are orthogonal. Intuitively, this means that the expression is maximized when the vectors $x$ and $y$ have the same shape or pattern and minimized when $x$ and $y$ are very different. A pair of vectors with similar but unequal shapes or patterns will produce relatively large values of the expression less than 1, and a pair of vectors with very different but not orthogonal shapes or patterns will produce relatively small values of the expression greater than 0. Thus, the above expression carries with it a notion of the degree to which two signals are “alike”: for the standard inner products, it is the magnitude of the normalized correlation between the signals.

This concept can be extremely useful. For instance, consider a situation in which we wish to determine which signal, if any, from a set $X$ of signals most resembles a particular signal $y$. In order to accomplish this, we might evaluate the above expression for every signal $x \in X$, choosing the one that results in the maximum, provided that the maximum is above some threshold of “likeness”. This is the idea behind the matched filter detector, which compares a set of signals against a target signal using the above expression in order to determine which is most like the target signal.

# Signal Comparison

The simplest variant of the matched filter detector scheme would be to find the member signal in a set $X$ of signals that most closely matches a target signal $y$. Thus, for every $x \in X$ we wish to evaluate

$$m(x, y) = \left\langle \frac{x}{\|x\|}, \frac{y}{\|y\|} \right\rangle \tag{1}$$

in order to compare every member of $X$ to the target signal $y$. Since the member of $X$ which most closely matches the target signal $y$ is desired, ultimately we wish to evaluate

$$x_m = \underset{x \in X}{\operatorname{argmax}} \left\langle \frac{x}{\|x\|}, \frac{y}{\|y\|} \right\rangle. \tag{2}$$

Note that the target signal does not technically need to be normalized to produce a maximum, but normalization gives the desirable property that $m(x, y)$ is bounded to $[0, 1]$.

The element $x_m \in X$ that produces the maximum value of $m(x, y)$ is not necessarily unique, so there may be more than one matching signal in $X$. Additionally, the signal $x_m \in X$ producing the maximum value of $m(x, y)$ may not produce a very large value of $m(x, y)$ and thus not be very much like the target signal $y$. Hence, another matched filter scheme might identify the argument producing the maximum but only above a certain threshold, returning no matching signals in $X$ if the maximum is below the threshold. There also may be a signal $x \in X$ that produces a large value of $m(x, y)$ and thus has a high degree of “likeness” to $y$ but does not produce the maximum value of $m(x, y)$. Thus, yet another matched filter scheme might identify all signals in $X$ producing local maxima that are above a certain threshold.
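The thresholded scheme above can be sketched in a few lines of numpy. The signals, the threshold value, and the helper names (`likeness`, `best_match`) are hypothetical choices for illustration, and we take the absolute value of the inner product so that the score lies in $[0, 1]$ for real or complex signals.

```python
import numpy as np

def likeness(x, y):
    """m(x, y): magnitude of the inner product of the normalized signals."""
    return abs(np.vdot(x / np.linalg.norm(x), y / np.linalg.norm(y)))

def best_match(X, y, threshold=0.0):
    """Return the signal in X most like y, or None if no score
    reaches the threshold (the thresholded variant described above)."""
    scores = [likeness(x, y) for x in X]
    best = int(np.argmax(scores))
    return X[best] if scores[best] >= threshold else None

# Hypothetical data: the second candidate is a scaled copy of y,
# so its likeness score attains the upper bound of 1.
y = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
X = [np.array([1.0, -1.0, 1.0, -1.0, 1.0]),   # orthogonal to y
     np.array([0.0, 2.0, 4.0, 2.0, 0.0])]     # y scaled by 2
match = best_match(X, y, threshold=0.9)
```

Because a scaled copy has exactly the same shape, `best_match` selects the second candidate; if the threshold were raised above 1, no signal would qualify and `None` would be returned.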

Example 1

For example, consider the target signal given in Figure 1 and the set of two signals given in Figure 2. By inspection, it is clear that the signal $g_2$ is most like the target signal $f$. However, to reach that conclusion mathematically, we use the matched filter detector with the $L^2$ inner product. If we were to actually make the necessary computations, we would first normalize each signal and then compute the necessary inner products in order to compare the signals in $X$ with the target signal $f$. We would notice that the absolute value of the inner product for $g_2$ with $f$ when normalized is greater than the absolute value of the inner product of $g_1$ with $f$ when normalized, mathematically stated as

$$g_2 = \underset{x \in \{g_1, g_2\}}{\operatorname{argmax}} \left\langle \frac{x}{\|x\|}, \frac{f}{\|f\|} \right\rangle. \tag{3}$$
Figure 1 (Template Signal): We wish to find a match for this target signal in the set of signals below.

Figure 2 (Candidate Signals): We wish to find a match for the above target signal in this set of signals.

# Pattern Detection

A somewhat more involved matched filter detector scheme would involve attempting to match a time limited target signal $y = f$ to a set of time shifted and windowed versions of a single signal $X = \{wS_tg \mid t \in \mathbb{R}\}$ indexed by $\mathbb{R}$. The windowing function is given by $w(t) = u(t - t_1) - u(t - t_2)$ where $[t_1, t_2]$ is the interval to which $f$ is time limited. This scheme could be used to find portions of $g$ that have the same shape as $f$. If the absolute value of the inner product of the normalized versions of $f$ and $wS_tg$ is large, which is the absolute value of the normalized correlation for standard inner products, then $g$ has a high degree of “likeness” to $f$ on the interval to which $f$ is time limited but left shifted by $t$. Of course, if $f$ is not time limited, it means that the entire signal has a high degree of “likeness” to $f$ left shifted by $t$.

Thus, in order to determine the most likely locations of a signal with the same shape as the target signal $f$ in a signal $g$, we wish to compute

$$t_m = \underset{t \in \mathbb{R}}{\operatorname{argmax}} \left\langle \frac{f}{\|f\|}, \frac{wS_tg}{\|wS_tg\|} \right\rangle \tag{4}$$

to provide the desired shift. Assuming the inner product space examined is $L^2(\mathbb{R})$ (similar results hold for $L^2(\mathbb{R}[a,b))$, $l^2(\mathbb{Z})$, and $l^2(\mathbb{Z}[a,b))$), this produces

$$t_m = \underset{t \in \mathbb{R}}{\operatorname{argmax}} \frac{1}{\|f\| \, \|wS_tg\|} \int_{-\infty}^{\infty} f(\tau)\, w(\tau)\, \overline{g(\tau - t)}\, d\tau. \tag{5}$$

Since $f$ and $w$ are time limited to the same interval,

$$t_m = \underset{t \in \mathbb{R}}{\operatorname{argmax}} \frac{1}{\|f\| \, \|wS_tg\|} \int_{t_1}^{t_2} f(\tau)\, \overline{g(\tau - t)}\, d\tau. \tag{6}$$

Making the substitution $h(t) = \overline{g(-t)}$,

$$t_m = \underset{t \in \mathbb{R}}{\operatorname{argmax}} \frac{1}{\|f\| \, \|wS_tg\|} \int_{t_1}^{t_2} f(\tau)\, h(t - \tau)\, d\tau. \tag{7}$$

Noting that this expression contains a convolution operation, we can write

$$t_m = \underset{t \in \mathbb{R}}{\operatorname{argmax}} \frac{(f * h)(t)}{\|f\| \, \|wS_tg\|}, \tag{8}$$

where $h$ is the conjugate of the time reversed version of $g$, defined by $h(t) = \overline{g(-t)}$.

In the special case in which the target signal $f$ is not time limited, $w$ has unit value on the entire real line. Thus, the norm can be evaluated as $\|wS_tg\| = \|S_tg\| = \|g\| = \|h\|$. Therefore, the expression reduces to $t_m = \underset{t \in \mathbb{R}}{\operatorname{argmax}} \frac{(f * h)(t)}{\|f\| \, \|h\|}$ where $h(t) = \overline{g(-t)}$. The function $(f \star g)(t) = \frac{(f * h)(t)}{\|f\| \, \|h\|}$ is known as the normalized cross-correlation of $f$ and $g$.
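This special case is straightforward to compute for discrete signals. Below is a minimal numpy sketch, with hypothetical signals and a hypothetical function name, that forms $h$ by time reversing and conjugating $g$, convolves it with $f$, and divides by the fixed norms:

```python
import numpy as np

def normalized_cross_correlation(f, g):
    """Compute (f * h)(t) / (||f|| ||h||) with h(t) = conj(g(-t)),
    realized as a discrete linear convolution."""
    h = np.conj(g[::-1])                       # time reverse and conjugate g
    return np.convolve(f, h) / (np.linalg.norm(f) * np.linalg.norm(h))

# Hypothetical signals: g is a scaled copy of f, so the correlation
# attains its maximum possible value of 1 at zero lag.
f = np.array([1.0, 2.0, 1.0])
g = 3.0 * f
ncc = normalized_cross_correlation(f, g)
```

The peak of `ncc` occurs at index `len(f) - 1`, which corresponds to zero relative shift between the two signals.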

Hence, this matched filter scheme can be implemented as a convolution. Therefore, it may be expedient to implement it in the frequency domain. Similar results hold for the $L^2(\mathbb{R}[a,b))$, $l^2(\mathbb{Z})$, and $l^2(\mathbb{Z}[a,b])$ spaces. It is especially useful to implement the $l^2(\mathbb{Z}[a,b])$ cases in the frequency domain, as the power of the Fast Fourier Transform algorithm can be leveraged to quickly perform the computations in a computer program. In the $L^2(\mathbb{R}[a,b))$ and $l^2(\mathbb{Z}[a,b])$ cases, care must be taken to zero pad the signal if wrap-around effects are not desired. Similar results also hold for spaces on higher dimensional intervals with the same inner products.
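As a sketch of the frequency-domain implementation, the following numpy code (with hypothetical signals) zero pads both transforms to the full linear-convolution length so that no wrap-around occurs, and checks the result against a direct convolution:

```python
import numpy as np

def fft_cross_correlation(f, g):
    """Cross-correlate f with g via the FFT. Padding both transforms to
    len(f) + len(g) - 1 makes the circular convolution equal the linear
    one, avoiding the wrap-around effects noted above."""
    n = len(f) + len(g) - 1
    F = np.fft.fft(f, n)                        # fft(x, n) zero pads to length n
    H = np.fft.fft(np.conj(g[::-1]), n)         # transform of h(t) = conj(g(-t))
    return np.real(np.fft.ifft(F * H))          # real inputs give a real output

# Hypothetical data: the FFT result should match the direct convolution.
f = np.array([1.0, 2.0, 1.0])
g = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
direct = np.convolve(f, np.conj(g[::-1]))
via_fft = fft_cross_correlation(f, g)
```

For long signals the FFT route is far cheaper than direct convolution; the zero padding step is exactly the precaution described in the paragraph above.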

Of course, there is not necessarily exactly one instance of a target signal in a given signal. There could be one instance, more than one instance, or no instance of a target signal. Therefore, it is often more practical to identify all shifts corresponding to local maxima that are above a certain threshold.
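Picking out all shifts whose matched filter output is a local maximum above a threshold can be sketched as follows; the output values and the threshold here are hypothetical:

```python
import numpy as np

def peaks_above(m, threshold):
    """Indices of local maxima of m that exceed the threshold,
    one candidate shift per suspected instance of the target."""
    interior = np.arange(1, len(m) - 1)
    is_peak = (m[interior] > m[interior - 1]) & (m[interior] >= m[interior + 1])
    return interior[is_peak & (m[interior] >= threshold)]

# Hypothetical matched filter output: two strong peaks and one weak one.
m = np.array([0.1, 0.9, 0.2, 0.4, 0.1, 0.0, 0.95, 0.3])
shifts = peaks_above(m, threshold=0.8)
```

The weak local maximum at index 3 is correctly rejected by the threshold, while the two strong peaks are both reported, covering the case of multiple instances of the target.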

Example 2

The signal in Figure 4 contains an instance of the template signal seen in Figure 3 beginning at time $t = s_1$, as shown by the plot in Figure 5. Therefore,

$$s_1 = \underset{t \in \mathbb{R}}{\operatorname{argmax}} \left\langle \frac{f}{\|f\|}, \frac{wS_tg}{\|wS_tg\|} \right\rangle. \tag{9}$$
Figure 3 (Pattern Signal): This function shows the pattern we are looking for in the signal below, which occurs at time $t = s_1$.

Figure 4 (Longer Signal): This signal contains an instance of the above signal starting at time $t = s_1$.

Figure 5 (Absolute Value of Output): This signal shows a sketch of the absolute value of the matched filter output for the interval shown. Note that this was just an "eyeball approximation" sketch. Observe the pronounced peak at time $t = s_1$.

# Image Detection

Matched filtering is used in image processing to detect a template image within a reference image. This has real-world applications in verifying fingerprints for security or in verifying someone's photo. As a simple example, we can turn to the ever-popular "Where's Waldo?" books (known as "Where's Wally?" in the UK!), where the reader is tasked with finding the specific face of Waldo/Wally in a confusing background rife with look-alikes. If we are given the template head and a reference image, we can run a two dimensional convolution of the template image across the reference image to obtain a convolution map, where the height of the map at each location is determined by the degree of correlation, higher being more correlated. Finding our target then becomes a matter of determining the location where the map is highest. In the field of image processing, this matched filter-based process is known as template matching.

Figure 6: Example of "Where's Waldo?" picture. Our matched filter detector can be implemented to find a possible match for Waldo.

Given the template image and the reference picture, we could easily develop a program to find the closest resemblance to the image of Waldo's head in the larger picture. We would simply implement our same matched filter algorithm: take the inner products at each shift and see how large our resulting answers are. This idea was implemented on this same picture for a Signals and Systems project at Rice University.
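A brute-force version of this template matching idea can be sketched in numpy as follows. The image, the template, and the function name are hypothetical, and each patch and the template are normalized before taking the inner product, exactly as in the one-dimensional scheme:

```python
import numpy as np

def template_match(image, template):
    """Slide the template over the image, record the normalized inner
    product at every offset, and return the (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template / np.linalg.norm(template)
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            patch = image[r:r + th, c:c + tw]
            scores[r, c] = np.sum(patch * t) / np.linalg.norm(patch)
    return np.unravel_index(np.argmax(scores), scores.shape)

# Hypothetical image: a flat background with an exact copy of the
# template placed at row 1, column 2.
template = np.array([[1.0, 2.0], [2.0, 1.0]])
image = np.full((4, 5), 0.1)
image[1:3, 2:4] = template
location = template_match(image, template)
```

The double loop is quadratic in the image size; production systems would compute the same map with FFT-based convolution as described in the previous section.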

Exercise 1: Pros and Cons

What are the advantages of the matched filter algorithm to image detection? What are the drawbacks of this method?

Solution

This algorithm is very simple and thus easy to code. However, it is susceptible to certain types of distortion - for example, it would be difficult to find Waldo if his face were rotated, flipped, larger or smaller than expected, or distorted in some other way.

# Communications Systems

Matched filter detectors are also commonly used in communications systems. In fact, they are the optimal detectors in additive white Gaussian noise. Signals in the real world are often distorted by the environment around them, so there is a constant struggle to develop ways to receive a distorted signal and then filter it in some way to determine what the original signal was. Matched filters provide one way to compare a received signal with two possible original ("template") signals and determine which one is the closest match to the received signal.

For example, below we have a simplified example of Frequency Shift Keying (FSK), where we have the following coding for '1' and '0':

Figure 7: Frequency Shift Keying for '1' and '0'.

Based on the above coding, we can create digital signals based on 0's and 1's by putting together the above two "codes" in an infinite number of ways. For this example we will transmit a basic 3-bit number, 101, which is displayed in Figure 8:

Figure 8: The bit stream "101" coded with the above FSK.

Now, the signal pictured above represents our original signal that will be transmitted over some communication system, which will inevitably pass through the "communications channel," the part of the system that will distort and alter our signal. As long as the noise is not too great, our matched filter should keep us from having to worry about these changes to our transmitted signal. Once this signal has been received, we will pass the noisy signal through a simple system, similar to the simplified version shown in Figure 9:

Figure 9: Block diagram of matched filter detector.

Figure 9 basically shows that our noisy signal will be passed in (we will assume that it passes in one "bit" at a time) and split between two matched filter detectors. Each one will compare the noisy, received signal to one of the two codes we defined for '1' and '0'. Then the two output values will be compared, and whichever is higher (i.e. whichever FSK code the noisy signal most resembles) determines the value that the receiver takes. For example, the first bit sent through will be a '1', so the upper level of the block diagram will have a higher value, thus denoting that a '1' was sent by the signal, even though the signal may appear very noisy and distorted.
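The per-bit decision in Figure 9 can be sketched as follows. The two FSK codes, the noise level, and the sample count are hypothetical choices made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FSK codes: a low and a high frequency, one bit period each.
t = np.linspace(0.0, 1.0, 100, endpoint=False)
code_1 = np.sin(2 * np.pi * 2 * t)      # template for '1'
code_0 = np.sin(2 * np.pi * 5 * t)      # template for '0'

def detect_bit(received):
    """Compare one received bit against both templates and keep
    whichever matched filter output is higher."""
    m1 = abs(np.dot(received, code_1)) / np.linalg.norm(code_1)
    m0 = abs(np.dot(received, code_0)) / np.linalg.norm(code_0)
    return 1 if m1 > m0 else 0

# Transmit "101" with mild additive noise on the channel.
bits = [1, 0, 1]
received = [(code_1 if b else code_0) + 0.3 * rng.standard_normal(t.size)
            for b in bits]
decoded = [detect_bit(r) for r in received]
```

Because the two sinusoids are orthogonal over the bit period, the correct branch produces a much larger output than the other, and the noisy bits are recovered reliably at this noise level.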

The interactive example below supposes that our transmitter sends 1000 bits, plotting how many of those bits are received and interpreted correctly as either 1s or 0s, and also keeps a tally of how many are accidentally misinterpreted. You can play around with the distance between the energy of the "1" and the "0" (discriminability), the degree of noise present in the channel, and the location of the criterion (threshold) to get a feel for the basics of signal detection theory.

Example 3

Let's use a matched filter to find the "0" bits in a simple signal.

Let's use the signal $s_1(t)$ from Example 1 to represent the bits. $s_1(t)$ represents 0, while $-s_1(t)$ represents 1.

$$0 \Rightarrow (b = 1) \Rightarrow (s_1(t) = s(t)) \quad \text{for } 0 \le t \le T$$

$$1 \Rightarrow (b = -1) \Rightarrow (s_2(t) = -s(t)) \quad \text{for } 0 \le t \le T$$

$$X(t) = \sum_{i=-P}^{P} b_i \, s(t - iT) \tag{10}$$

The matched filter output clearly shows the location of the "0" bits.
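A discrete sketch of this example, with a hypothetical pulse shape and bit pattern, builds $X(t)$ from equation 10, correlates it with the pulse, and reads off the sign of the matched filter output once per bit period:

```python
import numpy as np

# Hypothetical pulse s(t) over one bit period (T = 4 samples).
s = np.array([1.0, 2.0, 2.0, 1.0])
T = len(s)

# Bit 0 -> b = 1, bit 1 -> b = -1, as in the example above.
bits = [0, 1, 1, 0, 1]
b = np.array([1 if bit == 0 else -1 for bit in bits])
X = np.concatenate([bi * s for bi in b])     # X(t) = sum_i b_i s(t - iT)

# Matched filter: correlate X with the pulse, then sample every T samples.
mf = np.correlate(X, s, mode="full")         # zero lag sits at index T - 1
samples = mf[T - 1::T][:len(bits)]           # one output value per bit period
decoded = [0 if v > 0 else 1 for v in samples]
```

Positive samples mark the "0" bits and negative samples the "1" bits, which is exactly the pattern of peaks described in the example.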

One of the first, and most intriguing, applications of the matched filter concept was radar. A known electromagnetic signal is sent out by a transmitter at a target and reflected off of the target back to the sender with a time delay proportional to the distance between target and sender. This scaled, time-shifted signal is then convolved with the original template signal, and the time at which the output of this convolution is highest is noted.
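This delay estimation can be sketched as follows. The pulse shape, sample rate, attenuation, and delay are hypothetical, and the estimated lag is converted to distance by halving the round-trip time:

```python
import numpy as np

c = 3.0e8                                  # speed of light, m/s
fs = 1.0e6                                 # hypothetical sample rate, Hz

# Hypothetical transmitted pulse and a received echo delayed by 50 samples.
pulse = np.array([1.0, 1.0, -1.0, 1.0, -1.0])
delay = 50
received = np.zeros(200)
received[delay:delay + len(pulse)] = 0.4 * pulse   # attenuated echo

# Convolve with the time-reversed pulse (the matched filter) and
# find the lag at which the output is largest.
mf = np.convolve(received, pulse[::-1])
lag = int(np.argmax(mf)) - (len(pulse) - 1)
distance = (lag / fs) * c / 2.0            # halve the round-trip time
```

With these numbers the recovered lag of 50 samples corresponds to a 50 microsecond round trip, i.e. a target 7.5 km away.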

This technology proved vital in the 1940s for the powers that possessed it. A short set of videos below shows the basics of how the technology works, its applications, and its impact in World War II.