import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
import IPython.display as ipd
We can modify our difference equations to include feedback terms; that is, we can add terms that come from past outputs instead of past inputs. In particular, the general form of a single delay feedback filter is
Ex) $ y[n] = x[n] + \alpha y[n-T] $
This gives rise to an interesting effect that we didn't have with the finite impulse response (convolutional) filters before; that is, we can get instabilities where certain frequencies grow without bound. Below is some code to demonstrate this effect when $\alpha = 1$ and $T = 10$:
def apply_one_lag_filter(x, T, alpha):
    ## Make the output a bit longer than the input so the feedback can ring past the end
    y = np.zeros(len(x)+T-1)
    for n in range(len(y)):
        if n < len(x):
            y[n] += x[n]          # Feed-forward term: the current input
        if n >= T:
            y[n] += alpha*y[n-T]  # Feedback term: a scaled past output
    return y
np.random.seed(0)
T = 10
k = 1
w = 2*np.pi*k/T
ns = np.arange(200)
x = np.cos(w*ns)
alpha = 1
y = apply_one_lag_filter(x, T, alpha)
plt.plot(y)
To see why this happens analytically, we can use the z-transform idea: separate the terms that depend on $y$ from the terms that depend on $x$, and look at what the filter does to a complex sinusoid. Starting again with the equation
$ y[n] = x[n] + \alpha y[n-T] $
Assume that the input is $x[n] = e^{j \omega n}$; then the output is a sinusoid at the same frequency, $y[n] = A e^{j (\omega n + \phi)}$
$ y[n] - \alpha y[n-T] = x[n] $
$y[n-T] = A e^{j (\omega (n-T) + \phi)} = A e^{j (\omega n + \phi)} e^{-j \omega T} = y[n] e^{-j \omega T}$
Therefore, this equation can be written as
$ y[n](1 - \alpha e^{-j \omega T}) = x[n] $
which can be re-arranged into
$ y[n] = x[n]\frac{1}{(1 - \alpha e^{-j \omega T})} = x[n] \frac{1}{(1 - \alpha z^{-T})} $, where $z = e^{j \omega}$
NOTE: $ \frac{1}{(1 - e^{-j \omega T})} = \frac{e^{j \omega T /2}}{j} \frac{1}{2 \sin(\omega T /2 )} $
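To see where this comes from, factor $e^{-j \omega T/2}$ out of the denominator:

$1 - e^{-j \omega T} = e^{-j \omega T/2} \left( e^{j \omega T/2} - e^{-j \omega T/2} \right) = e^{-j \omega T/2} \cdot 2 j \sin(\omega T / 2)$

Taking the reciprocal gives the expression above, so the magnitude of the transfer function (with $\alpha = 1$) is $\frac{1}{2 |\sin(\omega T/2)|}$, which is what the code below plots.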
plt.figure(figsize=(8, 4))
T = 10
w = np.linspace(0, np.pi-0.1, 1000)
f = w / (2*np.pi)
plt.plot(f, np.abs(1/(2*np.sin(w*T/2))))
plt.xlabel("Cycles / sample")
plt.ylabel("Transfer Function Magnitude")
Let's solve for the values of $\omega$ that make the denominator 0 when $\alpha = 1$. We need $e^{-j \omega T} = 1$, which happens when

$\omega T = 2\pi k$ for integer $k$

Therefore, if $\omega = 2 \pi k /T$ and $\alpha = 1$, then the transfer function blows up.
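As a quick numerical check (a minimal sketch reusing apply_one_lag_filter from above; the frequencies here are just illustrative choices), we can compare a cosine at one of these resonant frequencies with a cosine at a frequency halfway between two poles. The resonant one grows without bound, while the other stays bounded:

T = 10
alpha = 1
ns = np.arange(200)
w_res = 2*np.pi*1/T    # k = 1: lands exactly on a pole
w_off = 2*np.pi*1.5/T  # halfway between two poles
y_res = apply_one_lag_filter(np.cos(w_res*ns), T, alpha)
y_off = apply_one_lag_filter(np.cos(w_off*ns), T, alpha)
print(np.max(np.abs(y_res)), np.max(np.abs(y_off)))  # The resonant output is far larger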
Even if we send random noise through such a filter, we will get a signal that grows without bound, because it will contain some of these unstable frequencies. This is precisely what happens when a microphone "feeds back" into a speaker
sr = 44100
T = 100
x = np.random.randn(5*sr)
alpha = 1
y = apply_one_lag_filter(x, T, alpha)
plt.plot(y)
ipd.Audio(y, rate=sr)
A "pole" is the term for such a root that leads to a zero denominator of the transfer function of $x$. If we make $0 < \alpha < 1$, then the poles move insie of the circle in the complex plane, but they still pull up the points on the unit circle near them. For example, if $\alpha = 0.9$, then plugging in $z = e^{i \omega T}$ yields a magnitude of $\frac{1}{1 - 0.9} = 10$. If $\alpha = 0.99$, then it will yield a magnitude of $\frac{1}{1 - 0.99} = 100$.
T = 10
pix = np.linspace(-1.5, 1.5, 512)
re, im = np.meshgrid(pix, pix)
z = re + 1j*im
alpha = 0.9
mags = np.abs(1/(1-alpha*z**(-T)))
plt.imshow(mags, extent=(pix[0], pix[-1], pix[-1], pix[0]), vmin=0, vmax=10)
plt.colorbar()
t = np.linspace(0, 2*np.pi)
plt.plot(np.cos(t), np.sin(t), linestyle='--')
plt.title("$| 1/(1 - \\alpha z^{-T})|$")
plt.xlabel("Real")
plt.ylabel("Imaginary")
Note also the connection to Karplus-Strong, which we saw in homework 2. If we fill $x$ with just a little bit of random noise at the beginning and we pick some $0 < \alpha < 1$, then we get a plucked string sound. The closer $\alpha$ is to 1, the longer the sound rings out
sr = 44100
T = 100
alpha = 0.95
x = np.zeros(sr)
x[0:T] = np.random.randn(T)
y = apply_one_lag_filter(x, T, alpha)
plt.plot(y[0:sr//32])
ipd.Audio(y, rate=sr)
Finally, it's worth explaining where the term "infinite impulse response" comes from. If $|\alpha| < 1$, then we can recognize the expression
$ y[n] = x[n]\frac{1}{(1 - \alpha e^{-j \omega T})} $
as the result of an infinite geometric series:
$ y[n] = x[n] \left( 1 + \alpha e^{-j \omega T} + \alpha^2 e^{-2j \omega T} + \alpha^3 e^{-3j \omega T} + ... \right) $
$y[n] = x[n] + \alpha x[n -T] + \alpha^2 x[n - 2T] + \alpha^3 x[n - 3T] + ... $
Written as an impulse response, we're saying
$h[0] = 1, h[T] = \alpha, h[2T] = \alpha^2, h[3T] = \alpha^3, ...$, and $h[n] = 0$ everywhere else
Below is an example showing numerically what happens when we send an impulse through a filter with $T = 10$ and $\alpha = 0.9$
x = np.zeros(200)
x[0] = 1
T = 10
alpha = 0.9
plt.plot(apply_one_lag_filter(x, T, alpha))
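The plot shows spikes of height $\alpha^k$ at times $kT$ and zeros everywhere else. As a quick check (a minimal sketch reusing x, T, and alpha from the cell above), we can build that geometric-series kernel explicitly and confirm it matches the feedback filter's output:

y = apply_one_lag_filter(x, T, alpha)
## Build the explicit impulse response: h[k*T] = alpha^k, zero elsewhere
h = np.zeros(len(y))
h[0::T] = alpha**np.arange(len(h[0::T]))
print(np.allclose(y, h))  # True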
General Form of Feedback Filters
$a[0]y[n] = (b[0]x[n] + b[1]x[n-1] + b[2]x[n-2] + ...) - ( a[1]y[n-1] + a[2]y[n-2] + a[3]y[n-3] + ... )$
$a[0]y[n] + a[1]y[n-1] + a[2]y[n-2] + a[3]y[n-3] + ... = b[0]x[n] + b[1]x[n-1] + b[2]x[n-2] + ... $
Treating a delay by $k$ samples as multiplication by $z^{-k}$ (just as $y[n-T] = y[n] e^{-j \omega T} = y[n] z^{-T}$ above), this becomes

$a[0]y[n] + a[1]y[n]z^{-1} + a[2]y[n]z^{-2} + ... = b[0]x[n] + b[1]x[n]z^{-1} + b[2]x[n]z^{-2} + ...$
$y[n](a[0] + a[1]z^{-1} + a[2]z^{-2} + ...) = x[n](b[0] + b[1]z^{-1} + b[2]z^{-2} + ...)$
$y[n] = x[n] \frac{(b[0] + b[1]z^{-1} + b[2]z^{-2} + ...)}{(a[0] + a[1]z^{-1} + a[2]z^{-2} + ...)}$
Ex) $y[n] = x[n] - ( -\alpha y[n-T])$
which has the transfer function $\frac{1}{(1 - \alpha z^{-T})}$, matching what we derived before
Ex) $ y[n] = x[n] - (-y[n-2] -y[n-3] + y[n-5]) $
$y[n] = x[n] \frac{1}{1 - z^{-2} - z^{-3} + z^{-5}}$
$y[n] = x[n] \frac{1}{(1 - z^{-2})(1 - z^{-3})}$
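Multiplying the factors back out confirms this factorization:

$(1 - z^{-2})(1 - z^{-3}) = 1 - z^{-3} - z^{-2} + z^{-5}$

which matches the denominator above.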
Below is code that evaluates a filter in this general form

$y[n] = x[n] \frac{(b[0] + b[1]z^{-1} + b[2]z^{-2} + ...)}{(a[0] + a[1]z^{-1} + a[2]z^{-2} + ...)}$

directly in the time domain, along with helpers to evaluate its transfer function at arbitrary points $z$ in the complex plane:
def eval_filter(b, a, x):
    assert(a[0] != 0)
    y = np.zeros(len(x))
    for n in range(len(y)):
        ## Loop through feed-forward (convolutional) lag terms in x
        for i in range(len(b)):
            if i <= n:
                y[n] += b[i]*x[n-i]
        ## Loop through the feedback terms (lags of y)
        for i in range(1, len(a)):
            if i <= n:
                y[n] -= a[i]*y[n-i]
        y[n] /= a[0]
    return y
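As a sanity check, scipy's signal.lfilter implements this same difference equation, so the two should agree on any example (a minimal sketch; the coefficients and input here are just arbitrary stable choices):

np.random.seed(0)
x = np.random.randn(100)
b = [1, 0.5]      # feed-forward coefficients
a = [1, 0, -0.9]  # feedback coefficients (stable: poles inside the unit circle)
print(np.allclose(eval_filter(b, a, x), signal.lfilter(b, a, x)))  # True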
def get_transfer_function(b, a, z, eps=1e-2):
    num = complex(0, 0)
    for i in range(len(b)):
        num += b[i]*(z**(-i))
    denom = complex(0, 0)
    for i in range(len(a)):
        denom += a[i]*(z**(-i))
    if np.abs(denom) < eps:
        ## Avoid dividing by (nearly) zero when z is very close to a pole
        denom = 1
    return num/denom
def get_transfer_function_semicircle(b, a, n=1000, eps=1e-2):
    ws = np.linspace(0, np.pi, n)
    return ws, [get_transfer_function(b, a, np.exp(1j*w), eps) for w in ws]
T = 10
alpha = 1
a = np.zeros(T+1)
a[0] = 1
a[T] = -alpha
b = [1]
k = 1
w = 2*np.pi*k/T
ns = np.arange(200)
x = np.cos(w*ns)
y = eval_filter(b, a, x)
plt.subplot(211)
plt.plot(y)
plt.subplot(212)
zs, H = get_transfer_function_semicircle(b, a)
plt.plot(zs, np.abs(H))
def plot_complex_plane(b, a, rg=1.5, res=512, vmin=0, vmax=10):
    pix = np.linspace(-rg, rg, res)
    re, im = np.meshgrid(pix, pix)
    z = re + 1j*im
    mags = np.zeros(z.shape)
    for i in range(len(z)):
        for j in range(len(z[i])):
            mags[i, j] = np.abs(get_transfer_function(b, a, z[i, j]))
    plt.imshow(mags, extent=(pix[0], pix[-1], pix[-1], pix[0]), vmin=vmin, vmax=vmax)
    plt.colorbar()
    t = np.linspace(0, 2*np.pi)
    plt.plot(np.cos(t), np.sin(t), linestyle='--')
    plt.xlabel("Real")
    plt.ylabel("Imaginary")
b = [1]
alpha = 0.1
a = [1, 0, -1, -1, 0, 1]
plot_complex_plane(b, a)
zeros, poles, _ = signal.tf2zpk(b, a)
for z in zeros:
plt.scatter(np.real(z), np.imag(z), c='C1', s=10)
for p in poles:
plt.scatter(np.real(p), np.imag(p), marker='x', c='C1', s=50)
Feedback Filter Design with Butterworth Filters
We can design a lowpass feedback filter using something called a "Butterworth filter." The poles are just inside the unit circle near $z = 1$, so they pull us up at low frequencies. The zeros are at $z = -1$ (angular frequency $\pi$), so they pull us down at high frequencies
order = 5 # How many terms in the past I need to look in the feedback filter
w = 0.1 # cycles/sample
b, a = signal.butter(order, w*2, 'lowpass', output='ba')
plot_complex_plane(b, a)
zeros, poles, _ = signal.tf2zpk(b, a)
for z in zeros:
plt.scatter(np.real(z), np.imag(z), c='C1', s=10)
for p in poles:
plt.scatter(np.real(p), np.imag(p), marker='x', c='C1', s=50)
Let's compare this to the finite impulse response filters we designed with the DFT before. Below we're making an impulse response with 40 terms
## DFT-based design
hann_window = lambda N: (0.5*(1 - np.cos(2*np.pi*np.arange(N)/N))).astype(np.float32)
N = 41
k = int(w*N) # (cycles/sample) * (samples / window) = cycles/window
H = np.zeros(N//2+1)
H[0:k+1] = 1
h = np.fft.irfft(H)
h = np.fft.fftshift(h)
h *= hann_window(h.size)
plt.plot(h)
def z_transform_feedforward(h, z):
    ret = 0
    for n in range(len(h)):
        ret += h[n]*(z**(-n))
    return ret
def z_semicircle_feedforward(h, n_samples=10000):
    ws = np.linspace(0, np.pi, n_samples)
    ret = np.zeros(n_samples, dtype=complex)
    for i, w in enumerate(ws):
        ret[i] = z_transform_feedforward(h, np.exp(1j*w))
    return ws, ret
When we compare the transfer functions, we see that we get a lot more bang for the buck with our feedback filter: it only needs to look 5 samples into the past to get similar performance to a convolutional kernel that needs to look 40 samples into the past
plt.figure(figsize=(8, 2.5))
ws, H = get_transfer_function_semicircle(b, a)
plt.plot(ws/(2*np.pi), 10*np.log10(np.abs(H)))
ws, H = z_semicircle_feedforward(h)
plt.plot(ws/(2*np.pi), 10*np.log10(np.abs(H)))
plt.axvline(w, linestyle='--')
plt.ylim(-75, 1)
plt.xlabel("Cycles per sample")
plt.legend([f"Feedback, order={order}", f"Feed forward, N = {N}"])
If the filter is high enough order, there are so many zeros and poles that it's best not to evaluate the transfer function directly, for reasons of numerical stability, but rather to factor it into something called "second order sections" (sos). Below is an example doing this with a Butterworth filter of order 40:
order = 40 # How many terms in the past I need to look
w = 0.1 # cycles/sample
sos = signal.butter(order, w*2, 'lowpass', output='sos')
def plot_zeros_poles(sos):
    t = np.linspace(0, 2*np.pi, 100)
    plt.plot(np.cos(t), np.sin(t), linestyle='--')
    zeros, poles, _ = signal.sos2zpk(sos)
    for z in zeros:
        if np.abs(z) > 0:
            plt.scatter(np.real(z), np.imag(z), c='C1', s=50)
    for p in poles:
        if np.abs(p) > 0:
            plt.scatter(np.real(p), np.imag(p), marker='x', c='C1', s=50)
    plt.axis('equal')
plot_zeros_poles(sos)
The scipy version of my "get_transfer_function_semicircle" for second order sections is called "signal.freqz_sos":
plt.figure(figsize=(8, 2.5))
ws, H = signal.freqz_sos(sos, 10000)
plt.plot(ws/(2*np.pi), 10*np.log10(np.abs(H)))
plt.axvline(w, linestyle='--')
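Note that freqz_sos only evaluates the transfer function; to actually run the filter on a signal, scipy provides signal.sosfilt, which applies the second order sections in cascade. A minimal sketch (the noise input here is just for illustration):

sr = 44100
np.random.seed(0)
x = np.random.randn(sr)      # one second of noise
y = signal.sosfilt(sos, x)   # apply the cascade of second order sections
ipd.Audio(y, rate=sr)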
Here's an example using this technique to design an order 5 bandpass filter. Note that there are actually twice as many poles as there would be for a lowpass filter of the same "order," because we get a bandpass filter by applying a lowpass filter first and then applying a highpass filter
order = 5 # How many terms in the past I need to look
w1 = 0.1 # cycles/sample
w2 = 0.2 # cycles/sample
sos = signal.butter(order, [w1*2, w2*2], 'bandpass', output='sos')
plot_zeros_poles(sos)
plt.figure(figsize=(8, 2.5))
ws, H = signal.freqz_sos(sos, 10000)
plt.plot(ws/(2*np.pi), 10*np.log10(np.abs(H)))
plt.axvline(w1, linestyle='--')
plt.axvline(w2, linestyle='--')
plt.ylim([-200, 11])
Audio Examples
Finally, let's look at some audio examples of lowpass filters. Here is an example where we cut out high frequencies before decimating a square wave to avoid aliasing. The original sounds really bad because of aliasing
sr = 44100
t = np.arange(sr*4)/sr
f = 440
x = np.sign(np.cos(2*np.pi*f*t))
fac = 2
ipd.Audio(x[0::fac], rate=sr)
But if we filter it first, it sounds a little better
order = 5 # How many terms in the past I need to look in the feedback filter
w = 0.5/fac # cycles/sample
b, a = signal.butter(order, w*2, 'lowpass', output='ba')
y = eval_filter(b, a, x)
ipd.Audio(y[0::fac], rate=sr)
We can also use a lowpass filter as a rudimentary denoiser. Here is the original noisy signal
import librosa
x, sr = librosa.load("realdft.mp3", sr=44100)
x += 0.1*np.random.randn(x.size)
ipd.Audio(x, rate=sr)
And here is the lowpass version
order = 5 # How many terms in the past I need to look in the feedback filter
w = 0.1 # cycles/sample
b, a = signal.butter(order, w*2, 'lowpass', output='ba')
y = eval_filter(b, a, x)
ipd.Audio(y, rate=sr)