ThinkDSP
This notebook contains code examples from Chapter 5: Autocorrelation
Copyright 2015 Allen Downey
To investigate serial correlation of signals, let's start with a sine wave at 440 Hz.
I'll make two waves with different phase offsets.
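As a sketch of this setup with plain NumPy (the book builds these with its thinkdsp module; the framerate and the 1-radian offset here are assumed example values):

```python
import numpy as np

framerate = 10000                  # assumed sample rate
ts = np.arange(5000) / framerate   # 0.5 s of samples

# two 440 Hz sine waves; the second is offset by 1 radian (example value)
wave1 = np.sin(2 * np.pi * 440 * ts)
wave2 = np.sin(2 * np.pi * 440 * ts + 1.0)
```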
The two waves appear correlated: when one is high, the other is usually high, too.
We can use np.corrcoef to compute the correlation matrix.
The diagonal elements are the correlations of the waves with themselves, which is why they are 1. The off-diagonal elements are the correlations between the two waves. In this case, 0.54 indicates that there is a moderate correlation between these waves.
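Here is how that computation might look with plain NumPy; over a whole number of cycles, the off-diagonal entry comes out near cos(1) ≈ 0.54 for a 1-radian offset (sample rate and offset are assumed example values):

```python
import numpy as np

framerate = 10000                  # assumed sample rate
ts = np.arange(5000) / framerate   # 0.5 s: a whole number of 440 Hz cycles

wave1 = np.sin(2 * np.pi * 440 * ts)
wave2 = np.sin(2 * np.pi * 440 * ts + 1.0)   # 1 radian offset (example)

# 2x2 correlation matrix: ones on the diagonal,
# corr(wave1, wave2) off the diagonal
m = np.corrcoef(wave1, wave2)
```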
The correlation matrix is more interesting when there are more than two waves. With only two waves, there is really only one number in the matrix we care about.
Wave provides corr, which computes the correlation between waves:
To investigate the relationship between phase offset and correlation, I'll make an interactive function that computes correlation for each offset:
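A minimal sketch of such a function, using NumPy directly rather than the book's Wave objects (the defaults are assumed example values):

```python
import numpy as np

def corr_for_offset(offset, freq=440, framerate=10000, n=5000):
    """Correlation between a sine wave and a phase-shifted copy (sketch)."""
    ts = np.arange(n) / framerate
    wave1 = np.sin(2 * np.pi * freq * ts)
    wave2 = np.sin(2 * np.pi * freq * ts + offset)
    return np.corrcoef(wave1, wave2)[0, 1]
```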
The following interaction plots waves with different phase offsets and prints their correlations:
Finally, we can plot correlation as a function of offset:
That curve is a cosine.
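We can check this numerically: over a whole number of cycles, the correlation between a sine wave and a copy shifted by offset traces cos(offset) (a sketch with assumed parameter values):

```python
import numpy as np

framerate = 10000
ts = np.arange(5000) / framerate   # 0.5 s: a whole number of 440 Hz cycles
wave1 = np.sin(2 * np.pi * 440 * ts)

offsets = np.linspace(0, 2 * np.pi, 21)
corrs = [np.corrcoef(wave1, np.sin(2 * np.pi * 440 * ts + offset))[0, 1]
         for offset in offsets]

# corrs follows np.cos(offsets)
```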
Next we'll compute serial correlations for different kinds of noise.
We expect uncorrelated noise to be... well... uncorrelated.
As expected, the serial correlation is near 0.
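A sketch of a serial_corr function with NumPy, applied to uniform white noise (the seed and sample count are arbitrary):

```python
import numpy as np

def serial_corr(ys, lag=1):
    """Correlation between a sequence and itself shifted by lag (sketch)."""
    n = len(ys)
    return np.corrcoef(ys[lag:], ys[:n - lag])[0, 1]

rng = np.random.default_rng(17)
white = rng.uniform(-1, 1, 10000)   # uncorrelated noise
# serial_corr(white) is near 0
```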
In Brownian noise, each value is the sum of the previous value and a random "step", so we expect a strong serial correlation:
In fact, the correlation is near 1.
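Brownian noise can be sketched as the cumulative sum of random steps, which makes the strong serial correlation easy to verify (seed and length are arbitrary):

```python
import numpy as np

def serial_corr(ys, lag=1):
    n = len(ys)
    return np.corrcoef(ys[lag:], ys[:n - lag])[0, 1]

rng = np.random.default_rng(17)
steps = rng.uniform(-1, 1, 10000)
brownian = np.cumsum(steps)   # each value = previous value + random step
# serial_corr(brownian) is near 1
```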
Since pink noise is between white and Brownian, we expect an intermediate correlation.
And we get one.
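One way to sketch pink noise with plain NumPy is spectral shaping: scale a white spectrum by f^(−β/2). This is an assumed stand-in, not the book's PinkNoise implementation:

```python
import numpy as np

def make_pink(beta, n=10000, seed=17):
    """Pink-ish noise via spectral shaping (assumed implementation)."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.normal(size=n))
    fs = np.fft.rfftfreq(n)
    fs[0] = 1.0                       # avoid dividing by zero at DC
    spectrum *= fs ** (-beta / 2)
    return np.fft.irfft(spectrum, n)

def serial_corr(ys, lag=1):
    n = len(ys)
    return np.corrcoef(ys[lag:], ys[:n - lag])[0, 1]

pink = make_pink(beta=1.0)
# serial_corr(pink) lands between white (~0) and Brownian (~1)
```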
Now we can plot serial correlation as a function of the pink noise parameter β.
The autocorrelation function calls serial_corr with different values of lag.
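A sketch of how such a function might be structured, demonstrated on a periodic signal where the correlation returns to 1 at the period (the signal and maximum lag are example values):

```python
import numpy as np

def serial_corr(ys, lag=1):
    n = len(ys)
    return np.corrcoef(ys[lag:], ys[:n - lag])[0, 1]

def autocorr(ys, max_lag=100):
    """Serial correlation for each lag from 1 to max_lag (sketch)."""
    lags = np.arange(1, max_lag + 1)
    corrs = [serial_corr(ys, lag) for lag in lags]
    return lags, corrs

ys = np.sin(2 * np.pi * np.arange(1000) / 50)   # period = 50 samples
lags, corrs = autocorr(ys, max_lag=60)
```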
Now we can plot autocorrelation for pink noise with various values of β.
For low values of β, the autocorrelation function drops off quickly. As β increases, pink noise shows more long-range dependence.
Now let's investigate using autocorrelation for pitch tracking. I'll load a recording of someone singing a chirp:
The spectrum tells us what frequencies are present, but for chirps, the frequency components are blurred over a range:
The spectrogram gives a better picture of how the components vary over time:
We can see the fundamental frequency clearly, starting near 500 Hz and dropping. Some of the harmonics are also visible.
To track the fundamental frequency, we can take a short window:
The spectrum shows a clear peak near 400 Hz, but we can't get a very accurate estimate of frequency, partly because the peak is blurry, and partly because even if it were a perfect spike, the frequency resolution is not very good.
Each element of the spectrum spans a range of 100 Hz, so we can't get an accurate estimate of the fundamental frequency.
For signals that are at least approximately periodic, we can do better by estimating the length of the period.
The following function plots the segment, and a shifted version of the segment, and computes the correlation between them:
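The correlation part of that function can be sketched like this (the book's version also plots the two segments; the test signal here is an example):

```python
import numpy as np

def shifted_corr(ys, shift):
    """Correlation between a segment and a copy shifted by `shift` samples
    (a sketch; the plotting from the book's version is omitted)."""
    n = len(ys)
    return np.corrcoef(ys[:n - shift], ys[shift:])[0, 1]

ys = np.sin(2 * np.pi * np.arange(2000) / 100)   # period = 100 samples
# correlation peaks near 1 when shift equals the period,
# and is near -1 at half the period
```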
With a small shift the segments are still moderately correlated. As the shift increases, the correlation falls for a while, then rises again, peaking when the shift equals the period of the signal.
You can use the following interaction to search for the shift that maximizes correlation:
The autocorr function automates this process by computing the correlation for each possible lag, up to half the length of the wave.
The following figure shows this autocorrelation as a function of lag:
The first peak (other than 0) is near lag=100.
We can use argmax to find the index of that peak:
We can convert from an index to a time in seconds:
Given the period in seconds, we can compute frequency:
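The arithmetic can be sketched like this; the framerate and peak lag are assumed example values, not the recording's actual numbers:

```python
framerate = 10000   # assumed sample rate (the recording's differs)
lag = 100           # index of the first autocorrelation peak (example value)

period = lag / framerate   # seconds per cycle
frequency = 1 / period     # cycles per second (Hz)
# with these example values, frequency is 100.0 Hz
```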
This should be a better estimate of the fundamental frequency. We can approximate the resolution of this estimate by computing how much we would be off by if the index were off by 1:
The range is less than 10 Hz.
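With assumed example values, that computation looks like this; plugging in the recording's actual framerate and peak lag yields the under-10 Hz figure:

```python
framerate = 10000   # assumed sample rate
lag = 100           # example peak index

f_high = framerate / (lag - 1)   # frequency if the peak index were one lower
f_low = framerate / (lag + 1)    # frequency if it were one higher
resolution = f_high - f_low      # about 2 Hz with these example values
```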
The function I wrote to compute autocorrelations is slow; np.correlate is much faster.
np.correlate computes correlations for positive and negative lags, so lag=0 is in the middle. For our purposes, we only care about positive lags.
Also, np.correlate doesn't correct for the fact that the number of overlapping elements changes as the lag increases.
The following code selects the second half of the results and corrects for the length of the overlap:
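A sketch of those two corrections, applied to a synthetic periodic signal (the signal is an example; the slicing and overlap correction follow the steps described above):

```python
import numpy as np

ys = np.sin(2 * np.pi * np.arange(1000) / 100)   # period = 100 samples

corrs = np.correlate(ys, ys, mode='same')   # lags centered on zero
n = len(corrs)
half = corrs[n // 2:]                        # keep only non-negative lags

# divide by the number of overlapping elements at each lag,
# then normalize so the correlation at lag 0 is 1
lengths = np.arange(n, n // 2, -1)
half /= lengths
half /= half[0]
```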
Now the result is similar to what we computed before.
If we plot the results computed by NumPy and my implementation, they are visually similar. They are not quite identical because my version and theirs are normalized differently.
The difference between the NumPy implementation and mine is less than 0.02 over most of the range.