
ThinkDSP

This notebook contains solutions to exercises in Chapter 10: Signals and Systems

Copyright 2015 Allen Downey

License: Creative Commons Attribution 4.0 International

# Get thinkdsp.py
import os

if not os.path.exists('thinkdsp.py'):
    !wget https://github.com/AllenDowney/ThinkDSP/raw/master/code/thinkdsp.py
import numpy as np
import matplotlib.pyplot as plt

from thinkdsp import decorate

Exercise 1

In this chapter I describe convolution as the sum of shifted, scaled copies of a signal. Strictly speaking, this operation is linear convolution, which does not assume that the signal is periodic.

But when we multiply the DFT of the signal by the transfer function, that operation corresponds to circular convolution, which assumes that the signal is periodic. As a result, you might notice that the output contains an extra note at the beginning, which wraps around from the end.

Fortunately, there is a standard solution to this problem. If you add enough zeros to the end of the signal before computing the DFT, you can avoid wrap-around and compute a linear convolution.

Modify the example in chap10soln.ipynb and confirm that zero-padding eliminates the extra note at the beginning of the output.
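To see the wrap-around problem and the zero-padding fix concretely, here is a minimal NumPy sketch (the short arrays x and h are made-up examples, not data from this notebook):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 0.5])

# Circular convolution: multiply same-length DFTs with no padding.
circular = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))).real

# Linear convolution: zero-pad both signals to len(x) + len(h) - 1 first.
n = len(x) + len(h) - 1
linear = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real

print(circular)  # [3.  2.5 4.  5.5]   first element includes wrap-around
print(linear)    # [1.  2.5 4.  5.5 2.]  matches np.convolve(x, h)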

Solution: I'll truncate both signals to $2^{16}$ elements, then zero-pad them to $2^{17}$. Using powers of two makes the FFT algorithm most efficient.
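As a rough illustration (array lengths chosen arbitrarily), here is a quick timing comparison between a power-of-two FFT and an FFT whose length is prime; the power-of-two case is usually noticeably faster:

import numpy as np
from timeit import timeit

x_pow2 = np.random.standard_normal(2**16)   # length 65536
x_prime = np.random.standard_normal(65537)  # prime length, one sample longer

print(timeit(lambda: np.fft.fft(x_pow2), number=100))
print(timeit(lambda: np.fft.fft(x_prime), number=100))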

Here's the impulse response:

if not os.path.exists('180960__kleeb__gunshot.wav'):
    !wget https://github.com/AllenDowney/ThinkDSP/raw/master/code/180960__kleeb__gunshot.wav
from thinkdsp import read_wave

response = read_wave('180960__kleeb__gunshot.wav')

start = 0.12
response = response.segment(start=start)
response.shift(-start)

response.truncate(2**16)
response.zero_pad(2**17)

response.normalize()
response.plot()
decorate(xlabel='Time (s)')
Image in a Jupyter notebook

And its spectrum:

transfer = response.make_spectrum()
transfer.plot()
decorate(xlabel='Frequency (Hz)', ylabel='Amplitude')
Image in a Jupyter notebook

Here's the signal:

if not os.path.exists('92002__jcveliz__violin-origional.wav'):
    !wget https://github.com/AllenDowney/ThinkDSP/raw/master/code/92002__jcveliz__violin-origional.wav
violin = read_wave('92002__jcveliz__violin-origional.wav')

start = 0.11
violin = violin.segment(start=start)
violin.shift(-start)

violin.truncate(2**16)
violin.zero_pad(2**17)

violin.normalize()
violin.plot()
decorate(xlabel='Time (s)')
Image in a Jupyter notebook

And its spectrum:

spectrum = violin.make_spectrum()

Now we can multiply the DFT of the signal by the transfer function, and convert back to a wave:

output = (spectrum * transfer).make_wave()
output.normalize()

The result doesn't look like it wraps around:

output.plot()
Image in a Jupyter notebook

And we don't hear the extra note at the beginning:

output.make_audio()

We should get the same results from np.convolve and scipy.signal.fftconvolve.
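As a quick sanity check with made-up random inputs, the two functions agree to floating-point precision:

import numpy as np
import scipy.signal

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(100)

direct = np.convolve(x, h)             # direct summation
fast = scipy.signal.fftconvolve(x, h)  # FFT-based, same output length n + m - 1
print(np.allclose(direct, fast))       # True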

First I'll get rid of the zero padding:

response.truncate(2**16)
response.plot()
Image in a Jupyter notebook
violin.truncate(2**16)
violin.plot()
Image in a Jupyter notebook

Now we can compare to np.convolve:

output2 = violin.convolve(response)

The results are similar:

output2.plot()
Image in a Jupyter notebook

And sound the same:

output2.make_audio()

But the results are not exactly the same length: the spectral product has $2^{17} = 131072$ samples, while the linear convolution of two length-$2^{16}$ signals has $2 \cdot 2^{16} - 1 = 131071$:

len(output), len(output2)
(131072, 131071)

scipy.signal.fftconvolve does the same thing, but as the name suggests, it uses the FFT, so it is substantially faster:

from thinkdsp import Wave
import scipy.signal

ys = scipy.signal.fftconvolve(violin.ys, response.ys)
output3 = Wave(ys, framerate=violin.framerate)

The results look the same:

output3.plot()
Image in a Jupyter notebook

And sound the same:

output3.make_audio()

And within floating-point error, they are the same (max_diff returns the largest absolute difference between corresponding samples):

output2.max_diff(output3)
1.7763568394002505e-14

Exercise 2

The Open AIR library provides a "centralized... on-line resource for anyone interested in auralization and acoustical impulse response data" (http://www.openairlib.net). Browse their collection of impulse response data and download one that sounds interesting. Find a short recording that has the same sample rate as the impulse response you downloaded.

Simulate the sound of your recording in the space where the impulse response was measured, computed two ways: by convolving the recording with the impulse response and by computing the filter that corresponds to the impulse response and multiplying by the DFT of the recording.
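In outline, the two ways look like this (a sketch, assuming recording and response are thinkdsp Wave objects already truncated to the same length):

# Way 1: convolve in the time domain
output1 = recording.convolve(response)

# Way 2: multiply by the transfer function in the frequency domain
transfer = response.make_spectrum()
output2 = (recording.make_spectrum() * transfer).make_wave()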

Solution: I downloaded the impulse response of the Lady Chapel at St Albans Cathedral: http://www.openairlib.net/auralizationdb/content/lady-chapel-st-albans-cathedral

Thanks to Audiolab, University of York: Marcin Gorzel, Gavin Kearney, Aglaia Foteinou, Sorrel Hoare, Simon Shelley.

if not os.path.exists('stalbans_a_mono.wav'):
    !wget https://github.com/AllenDowney/ThinkDSP/raw/master/code/stalbans_a_mono.wav
response = read_wave('stalbans_a_mono.wav')

start = 0
duration = 5
response = response.segment(duration=duration)
response.shift(-start)

response.normalize()
response.plot()
decorate(xlabel='Time (s)')
Image in a Jupyter notebook

Here's what it sounds like:

response.make_audio()

The DFT of the impulse response is the transfer function:

transfer = response.make_spectrum()
transfer.plot()
decorate(xlabel='Frequency (Hz)', ylabel='Amplitude')
Image in a Jupyter notebook

Here's the transfer function on a log-log scale:

transfer.plot()
decorate(xlabel='Frequency (Hz)', ylabel='Amplitude',
         xscale='log', yscale='log')
Image in a Jupyter notebook

Now we can simulate what a recording would sound like if it were played in the same room and recorded in the same way. Here's the trumpet recording we have used before:

if not os.path.exists('170255__dublie__trumpet.wav'): !wget https://github.com/AllenDowney/ThinkDSP/raw/master/code/170255__dublie__trumpet.wav
wave = read_wave('170255__dublie__trumpet.wav')

start = 0.0
wave = wave.segment(start=start)
wave.shift(-start)

wave.truncate(len(response))
wave.normalize()
wave.plot()
decorate(xlabel='Time (s)')
Image in a Jupyter notebook

Here's what it sounds like before transformation:

wave.make_audio()

Now we compute the DFT of the trumpet recording.

spectrum = wave.make_spectrum()

I trimmed the trumpet recording to the same length as the impulse response, so the two spectra have the same number of components and the same frequencies:

len(spectrum.hs), len(transfer.hs)
(110251, 110251)
spectrum.fs
array([0.00000e+00, 2.00000e-01, 4.00000e-01, ..., 2.20496e+04, 2.20498e+04, 2.20500e+04])
transfer.fs
array([0.00000e+00, 2.00000e-01, 4.00000e-01, ..., 2.20496e+04, 2.20498e+04, 2.20500e+04])

Now we can multiply in the frequency domain and then transform back to the time domain.

output = (spectrum * transfer).make_wave()
output.normalize()

Here's a comparison of the original and transformed recordings:

wave.plot()
Image in a Jupyter notebook
output.plot()
Image in a Jupyter notebook

And here's what it sounds like:

output.make_audio()

Now that we recognize this operation as convolution, we can compute it using the convolve method:

convolved2 = wave.convolve(response)
convolved2.normalize()
convolved2.make_audio()