
ThinkDSP

by Allen Downey (think-dsp.com)

This notebook contains examples and demos for a SciPy 2015 talk.

import thinkdsp
from thinkdsp import decorate
import numpy as np

A Signal represents a function that can be evaluated at any point in time.

cos_sig = thinkdsp.CosSignal(freq=440)

A cosine signal at 440 Hz has a period of 2.3 ms.
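The period is just the reciprocal of the frequency, which we can check directly:

```python
# The period of a sinusoid is the reciprocal of its frequency.
freq = 440                # Hz
period = 1 / freq         # seconds
print(period * 1000)      # about 2.27 ms, which rounds to 2.3 ms
```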

cos_sig.plot()
decorate(xlabel='time (s)')
Image in a Jupyter notebook

make_wave samples the signal at equally spaced time steps.

wave = cos_sig.make_wave(duration=0.5, framerate=11025)
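Sampling like this can be sketched with plain NumPy (the frame count below assumes simple truncation; thinkdsp's exact rounding may differ):

```python
import numpy as np

duration = 0.5
framerate = 11025

# Sample times at equal steps of 1/framerate.
n = int(duration * framerate)        # 5512 frames
ts = np.arange(n) / framerate
ys = np.cos(2 * np.pi * 440 * ts)    # evaluate the 440 Hz cosine at each time
print(len(ys))                       # 5512
```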

make_audio creates a widget that plays the Wave.

wave.apodize()
wave.make_audio()

make_spectrum returns a Spectrum object.

spectrum = wave.make_spectrum()

A cosine wave contains only one frequency component (no harmonics).

spectrum.plot()
decorate(xlabel='frequency (Hz)')
Image in a Jupyter notebook
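We can confirm this with a plain NumPy sketch, independent of thinkdsp: with one second of samples, rfft bin k corresponds to k Hz, and all the energy lands in the 440 Hz bin.

```python
import numpy as np

framerate = 11025
ts = np.arange(framerate) / framerate   # one second of samples
ys = np.cos(2 * np.pi * 440 * ts)

# With a one-second window, rfft bin k corresponds to k Hz.
amps = np.abs(np.fft.rfft(ys))
print(np.argmax(amps))                  # 440
```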

A sawtooth signal has a more complex harmonic structure.

saw_sig = thinkdsp.SawtoothSignal(freq=440)
saw_sig.plot()
Image in a Jupyter notebook

Here's what it sounds like:

saw_wave = saw_sig.make_wave(duration=0.5)
saw_wave.make_audio()

And here's what the spectrum looks like:

saw_wave.make_spectrum().plot()
Image in a Jupyter notebook
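The shape of that spectrum follows from the Fourier series of a sawtooth: harmonics at every integer multiple of the fundamental, with amplitudes falling off as 1/k. A minimal NumPy sketch (not thinkdsp's implementation) builds a sawtooth from that series and checks the harmonic ratios:

```python
import numpy as np

framerate = 11025
freq = 440
ts = np.arange(framerate) / framerate   # one second of samples

# Partial Fourier sum of a sawtooth: harmonics at k*freq with
# amplitudes proportional to 1/k (stopping below the Nyquist
# frequency, framerate/2, to avoid aliasing).
ys = sum(np.sin(2 * np.pi * k * freq * ts) / k for k in range(1, 13))

amps = np.abs(np.fft.rfft(ys))
print(np.argmax(amps))                   # 440: the fundamental dominates
print(amps[880] / amps[440])             # the second harmonic is half as strong
```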

Here's a short violin performance from jcveliz on freesound.org:

violin = thinkdsp.read_wave('92002__jcveliz__violin-origional.wav')
violin.make_audio()

The spectrogram shows the spectrum over time:

spectrogram = violin.make_spectrogram(seg_length=1024)
spectrogram.plot(high=5000)
Image in a Jupyter notebook

We can select a segment where the pitch is constant:

start = 1.2
duration = 0.6
segment = violin.segment(start, duration)

And compute the spectrum of the segment:

spectrum = segment.make_spectrum()
spectrum.plot()
Image in a Jupyter notebook

The dominant peak, which is also the fundamental, is at 438.3 Hz: a slightly flat A4, about 7 cents below 440 Hz.

spectrum.peaks()[:5]
[(2052.3878454763044, 438.33333333333337),
 (1504.1231272792363, 876.6666666666667),
 (1313.4058092162186, 878.3333333333334),
 (1024.7130064064418, 2193.3333333333335),
 (809.7623839848649, 2195.0)]
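The "7 cents" figure comes from the standard definition: a cent is 1/100 of an equal-tempered semitone, so 1200 cents per octave.

```python
import math

# Deviation in cents between the measured peak and A4 (440 Hz).
measured = 438.3
a4 = 440.0
cents = 1200 * math.log2(measured / a4)
print(round(cents, 1))    # -6.7, i.e. about 7 cents flat
```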

As an aside, you can use the spectrogram to help extract the Parsons code and then identify the song.

Parsons code: DUUDDUURDR

Send it off to http://www.musipedia.org
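Extracting a Parsons code is mechanical once you have a pitch per note: each note is Up, Down, or Repeat relative to the one before it. This is a hypothetical sketch; the helper name and the pitch values are made up for illustration, not read from the actual spectrogram.

```python
# Hypothetical helper: derive a Parsons code (U = up, D = down,
# R = repeat) from a sequence of note pitches in Hz.
def parsons_code(pitches, tol=1.0):
    code = []
    for prev, curr in zip(pitches, pitches[1:]):
        if abs(curr - prev) <= tol:
            code.append('R')      # same pitch as the previous note
        elif curr > prev:
            code.append('U')      # pitch went up
        else:
            code.append('D')      # pitch went down
    return ''.join(code)

# Made-up pitch sequence chosen to reproduce the code above.
print(parsons_code([500, 490, 520, 540, 520, 500, 520, 540, 540, 500, 500]))
# DUUDDUURDR
```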

A chirp is a signal whose frequency varies continuously over time (like a trombone).

import math

PI2 = 2 * math.pi

class SawtoothChirp(thinkdsp.Chirp):
    """Represents a sawtooth signal with varying frequency."""

    def _evaluate(self, ts, freqs):
        """Helper function that evaluates the signal.

        ts: float array of times
        freqs: float array of frequencies during each interval
        """
        dts = np.diff(ts)
        dps = PI2 * freqs * dts
        phases = np.cumsum(dps)
        phases = np.insert(phases, 0, 0)
        cycles = phases / PI2
        frac, _ = np.modf(cycles)
        ys = thinkdsp.normalize(thinkdsp.unbias(frac), self.amp)
        return ys

Here's what it looks like:

signal = SawtoothChirp(start=220, end=880)
wave = signal.make_wave(duration=2, framerate=10000)
segment = wave.segment(duration=0.06)
segment.plot()
Image in a Jupyter notebook

Here's the spectrogram.

spectrogram = wave.make_spectrogram(1024)
spectrogram.plot()
decorate(xlabel='Time (s)', ylabel='Frequency (Hz)')
Image in a Jupyter notebook

What do you think it sounds like?

wave.apodize()
wave.make_audio()

Up next is one of the coolest examples in Think DSP. It uses LTI system theory to characterize the acoustics of a recording space and simulate the effect this space would have on the sound of a violin performance.

I'll start with a recording of a gunshot:

response = thinkdsp.read_wave('180960__kleeb__gunshot.wav')
start = 0.12
response = response.segment(start=start)
response.shift(-start)
response.normalize()
response.plot()
decorate(xlabel='Time (s)', ylabel='Amplitude')
Image in a Jupyter notebook

If you play this recording, you can hear the initial shot and several seconds of echoes.

response.make_audio()

This wave records the "impulse response" of the room where the gun was fired.
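For a linear, time-invariant (LTI) system, the impulse response completely characterizes the system: the output for any input is the input convolved with the impulse response. A toy NumPy example (not the actual room acoustics) shows the idea with an "echo" at lag 3:

```python
import numpy as np

# Toy impulse response: a direct sound plus an echo at lag 3,
# half as loud.
response = np.array([1.0, 0.0, 0.0, 0.5])

# A unit impulse comes out as the impulse response itself...
impulse = np.array([1.0, 0.0, 0.0, 0.0])
print(np.convolve(impulse, response))

# ...and any other input comes out with the same echo added.
signal = np.array([1.0, 2.0, 3.0])
print(np.convolve(signal, response))   # [1.  2.  3.  0.5 1.  1.5]
```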

Now let's load a recording of a violin performance:

wave = thinkdsp.read_wave('92002__jcveliz__violin-origional.wav')
start = 0.11
wave = wave.segment(start=start)
wave.shift(-start)
wave.truncate(len(response))
wave.normalize()
wave.plot()
decorate(xlabel='Time (s)', ylabel='Amplitude')
Image in a Jupyter notebook

And listen to it:

wave.make_audio()

Now we can figure out what the violin would sound like if it were played in the room where the gun was fired. All we have to do is convolve the two waves:

output = wave.convolve(response)
output.normalize()

Here's what it looks like:

wave.plot(label='original')
output.plot(label='convolved')
decorate(xlabel='Time (s)', ylabel='Amplitude')
Image in a Jupyter notebook

And here's what it sounds like:

output.make_audio()

If you think this example is black magic, you are not alone. But there is a good reason why this works, and I do my best to explain it in Chapter 9. So stay tuned.

I'd like to thank jcveliz and kleeb for making these recordings available from freesound.org.