
ThinkDSP

This notebook contains solutions to exercises in Chapter 1: Sounds and Signals

Copyright 2015 Allen Downey

License: Creative Commons Attribution 4.0 International

# Get thinkdsp.py

import os

if not os.path.exists('thinkdsp.py'):
    !wget https://github.com/AllenDowney/ThinkDSP/raw/master/code/thinkdsp.py

Exercise 1

Go to http://freesound.org and download a sound sample that includes music, speech, or other sounds that have a well-defined pitch. Select a roughly half-second segment where the pitch is constant. Compute and plot the spectrum of the segment you selected. What connection can you make between the timbre of the sound and the harmonic structure you see in the spectrum?

Use high_pass, low_pass, and band_stop to filter out some of the harmonics. Then convert the spectrum back to a wave and listen to it. How does the sound relate to the changes you made in the spectrum?

Solution

I chose this recording (or synthesis?) of a trumpet section: http://www.freesound.org/people/Dublie/sounds/170255/

As always, thanks to the people who contributed these recordings!

if not os.path.exists('170255__dublie__trumpet.wav'):
    !wget https://github.com/AllenDowney/ThinkDSP/raw/master/code/170255__dublie__trumpet.wav
from thinkdsp import read_wave

wave = read_wave('170255__dublie__trumpet.wav')
wave.normalize()
wave.make_audio()

Here's what the whole wave looks like:

wave.plot()

By trial and error, I selected a segment with a constant pitch (although I believe it is a chord played by at least two horns).

segment = wave.segment(start=1.1, duration=0.3)
segment.make_audio()

Here's what the segment looks like:

segment.plot()

And here's an even shorter segment so you can see the waveform (note that a segment keeps the original time stamps, so start is still an absolute time):

segment.segment(start=1.1, duration=0.005).plot()

Here's what the spectrum looks like:

spectrum = segment.make_spectrum()
spectrum.plot(high=7000)

It has lots of frequency components. Let's zoom in on the fundamental and dominant frequencies:

spectrum = segment.make_spectrum()
spectrum.plot(high=1000)

peaks returns a list of (amplitude, frequency) pairs for the highest points in the spectrum, sorted in descending order of amplitude:

spectrum.peaks()[:30]
[(620.4417258380604, 870.0),
 (504.0431989726268, 866.6666666666667),
 (438.97048697396684, 506.6666666666667),
 (387.76854063931785, 346.6666666666667),
 (365.1860713882272, 520.0),
 (343.86168948117404, 843.3333333333334),
 (336.15317388630996, 350.0),
 (321.2246582837962, 840.0),
 (306.4203033088299, 1226.6666666666667),
 (285.12868197300406, 700.0),
 (274.10338786058963, 510.0),
 (262.84411351625783, 340.0),
 (258.9390175932505, 416.6666666666667),
 (258.23677366979797, 523.3333333333334),
 (230.80255875889878, 1040.0),
 (229.6907151984611, 1216.6666666666667),
 (226.60369867476524, 253.33333333333334),
 (211.4725336173951, 436.6666666666667),
 (206.6415361440749, 1910.0),
 (203.67607185473378, 1013.3333333333334),
 (199.5263275810788, 2440.0),
 (186.75243436443085, 526.6666666666667),
 (185.8754158628943, 703.3333333333334),
 (183.2606673135278, 503.33333333333337),
 (183.25438342093636, 1736.6666666666667),
 (177.71707154069557, 686.6666666666667),
 (175.4786945878676, 433.33333333333337),
 (174.53541534265207, 1010.0),
 (172.80411755155416, 500.0),
 (169.5971538893914, 1223.3333333333335)]

The dominant peak is at 870 Hz. It's not easy to dig out the fundamental, but the peaks at 507, 347, and 253 Hz are all close to multiples of 85 Hz (roughly the 6th, 4th, and 3rd harmonics), so we can infer a fundamental at roughly 85 Hz, with harmonics at 170, 255, 340, 425, and 510 Hz.

85 Hz is close to F2, at about 87.3 Hz. The pitch we perceive is usually the fundamental, even when it is not dominant. When you listen to this segment, what pitch(es) do you perceive?
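As a quick sanity check (my addition, not part of the original solution), we can print how close the top peaks are to integer multiples of the candidate fundamental:

# candidate fundamental, inferred from the peaks above (an assumption)
f0 = 85

# peaks() yields (amplitude, frequency) pairs in descending order of amplitude
for amp, freq in spectrum.peaks()[:10]:
    print(f'{freq:8.1f} Hz is {freq / f0:5.2f} times f0')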

Next we can filter out the high frequencies:

spectrum.low_pass(2000)

And here's what it sounds like:

spectrum.make_wave().make_audio()
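The exercise also mentions high_pass and band_stop. Here's a sketch of how you might try them (my addition); it starts from fresh copies of the spectrum, since the filter methods modify the Spectrum in place:

# attenuate everything below 1000 Hz, leaving only the upper harmonics
spectrum2 = segment.make_spectrum()
spectrum2.high_pass(cutoff=1000)
spectrum2.make_wave().make_audio()

# notch out the harmonics between 300 and 600 Hz
spectrum3 = segment.make_spectrum()
spectrum3.band_stop(low_cutoff=300, high_cutoff=600)
spectrum3.make_wave().make_audio()

Removing the low harmonics should make the sound thinner; notching out a band in the middle changes the timbre while leaving the perceived pitch mostly intact.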

The following interaction allows you to select a segment and apply different filters (fixed keeps the wave argument constant while the sliders control the other parameters). If you set the cutoff to 3400 Hz, you can simulate what the sample would sound like over an old (not digital) phone line.

from thinkdsp import decorate
from IPython.display import display

def filter_wave(wave, start, duration, cutoff):
    """Selects a segment from the wave and filters it.

    Plots the spectrum and displays an Audio widget.

    wave: Wave object
    start: time in s
    duration: time in s
    cutoff: frequency in Hz
    """
    segment = wave.segment(start, duration)
    spectrum = segment.make_spectrum()

    spectrum.plot(high=5000, color='0.7')
    spectrum.low_pass(cutoff)
    spectrum.plot(high=5000, color='#045a8d')
    decorate(xlabel='Frequency (Hz)')

    audio = spectrum.make_wave().make_audio()
    display(audio)
from ipywidgets import interact, fixed

interact(filter_wave, wave=fixed(wave),
         start=(0, 5, 0.1), duration=(0, 5, 0.1),
         cutoff=(0, 5000, 100));

Exercise 2

Synthesize a compound signal by creating SinSignal and CosSignal objects and adding them up. Evaluate the signal to get a Wave, and listen to it. Compute its Spectrum and plot it. What happens if you add frequency components that are not multiples of the fundamental?

Solution

Here are some arbitrary components I chose. Together they make an interesting waveform!

from thinkdsp import SinSignal

signal = (SinSignal(freq=400, amp=1.0) +
          SinSignal(freq=600, amp=0.5) +
          SinSignal(freq=800, amp=0.25))
signal.plot()

We can use the signal to make a wave:

wave2 = signal.make_wave(duration=1)
wave2.apodize()    # taper the ends of the wave to avoid clicks

And here's what it sounds like:

wave2.make_audio()

The components are all multiples of 200 Hz, so they blend into a coherent-sounding tone. Notice that the 200 Hz fundamental itself is not among the components; even so, we tend to hear the pitch of the missing fundamental.

Here's what the spectrum looks like:

spectrum = wave2.make_spectrum()
spectrum.plot(high=2000)
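As a side experiment (my addition, not part of the original solution), you can add the missing 200 Hz fundamental and compare by ear; the pitch should not change, but the tone gets fuller:

# add the (previously absent) 200 Hz fundamental and listen for the difference
signal_with_f0 = signal + SinSignal(freq=200, amp=1.0)
signal_with_f0.make_wave(duration=1).make_audio()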

If we add a component that is not a multiple of 200 Hz, we hear it as a distinct pitch.

signal += SinSignal(freq=450)
signal.make_wave().make_audio()
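To see why, here's a sketch (my addition) that plots the spectrum of the modified signal; the 450 Hz component falls between the harmonics of 200 Hz:

# plot the spectrum of the modified signal; the 450 Hz peak
# falls between the 400 and 600 Hz harmonics (wave4 is a name I introduce here)
wave4 = signal.make_wave(duration=1)
wave4.make_spectrum().plot(high=2000)

CosSignal, which the exercise also mentions, works the same way in a mixture; a cosine differs from a sine only in phase, so swapping one for the other leaves the magnitude spectrum unchanged.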

Exercise 3

Write a function called stretch that takes a Wave and a stretch factor and speeds up or slows down the wave by modifying ts and framerate. Hint: it should only take two lines of code.

Solution

I'll use the trumpet example again:

wave3 = read_wave('170255__dublie__trumpet.wav')
wave3.normalize()
wave3.make_audio()

Here's my implementation of stretch:

def stretch(wave, factor):
    wave.ts *= factor
    wave.framerate /= factor

And here's what it sounds like if we speed it up by a factor of 2.

stretch(wave3, 0.5)
wave3.make_audio()

Here's what it looks like (to confirm that the ts got updated correctly).

wave3.plot()
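One design note: stretch modifies the wave in place. If you want to keep the original intact, here's a non-destructive variant (my sketch, using Wave.copy; the name stretched is mine, not ThinkDSP's):

def stretched(wave, factor):
    """Like stretch, but returns a new Wave and leaves the original alone."""
    new = wave.copy()
    new.ts = new.ts * factor
    new.framerate = new.framerate / factor
    return new

# speed up by another factor of 2 without modifying wave3
stretched(wave3, 0.5).make_audio()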

I think it sounds better sped up. In fact, I wonder whether we are playing the original at the right speed.