```python
# what is this line all about? Answer in Lecture 4
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import Image
```
The SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms. Some of the topics that SciPy covers are special functions, numerical integration, ordinary differential equations (ODEs), Fourier transforms, linear algebra, sparse matrices, optimization, interpolation, and statistics.
To demonstrate the typical usage of special functions we will look in more detail at the Bessel functions:
```python
# The scipy.special module includes a large number of Bessel functions.
# Here we will use the functions jn and yn, which are the Bessel functions
# of the first and second kind and real-valued order. We also include the
# functions jn_zeros and yn_zeros that give the zeros of the functions jn
# and yn.
from scipy.special import jn, yn, jn_zeros, yn_zeros
```
```python
n = 0    # order
x = 0.0

# Bessel function of first kind
print("J_%d(%f) = %f" % (n, x, jn(n, x)))

x = 1.0

# Bessel function of second kind
print("Y_%d(%f) = %f" % (n, x, yn(n, x)))
```
Numerical evaluation of an integral is called numerical quadrature, or simply quadrature. SciPy provides a series of functions for different kinds of quadrature, for example quad, dblquad and tplquad for single, double and triple integrals, respectively.
The quad function takes a large number of optional arguments, which can be used to fine-tune the behaviour of the function (try help(quad) for details).
The basic usage is as follows:
```python
from scipy.integrate import quad

# define a simple function for the integrand
def f(x):
    return x

x_lower = 0  # the lower limit of x
x_upper = 1  # the upper limit of x

val, abserr = quad(f, x_lower, x_upper)

print("integral value =", val, ", absolute error =", abserr)
```
integral value = 0.5 , absolute error = 5.55111512313e-15
If we need to pass extra arguments to the integrand function we can use the args keyword argument:
```python
def integrand(x, n):
    """ Bessel function of first kind and order n. """
    return jn(n, x)

x_lower = 0   # the lower limit of x
x_upper = 10  # the upper limit of x

val, abserr = quad(integrand, x_lower, x_upper, args=(3,))

print(val, abserr)
```
For simple functions we can use a lambda function (name-less function) instead of explicitly defining a function for the integrand:
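For instance, the integral of a Gaussian over the whole real line can be written with an inline lambda (a sketch; the Gaussian integrand here is a stand-in example):

```python
import numpy as np
from scipy.integrate import quad

# integrate exp(-x^2) over the entire real axis, using a lambda as the integrand
val, abserr = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)

print(val)  # the analytical result is sqrt(pi) ~ 1.77245
```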
Note how we had to pass lambda functions for the limits for the y integration, since these in general can be functions of x.
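A sketch of the kind of dblquad call the text describes, using a hypothetical two-dimensional Gaussian integrand (note that for dblquad the inner variable comes first in the integrand's signature):

```python
import numpy as np
from scipy.integrate import dblquad

def integrand(y, x):
    # inner integration variable (y) comes first
    return np.exp(-x**2 - y**2)

# the limits of the inner (y) integration are passed as functions of x,
# since in general they can depend on x
val, abserr = dblquad(integrand, 0, 10, lambda x: 0, lambda x: 10)

print(val)  # ~ pi/4, one quadrant of the full-plane Gaussian integral
```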
Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs: an API based on the function odeint, and an object-oriented API based on the class ode. Usually odeint is easier to get started with, but the ode class offers a finer level of control.

Here we will use the odeint function. For more information about the class ode, try help(ode). It does pretty much the same thing as odeint, but in an object-oriented fashion.
To use odeint, first import it from the scipy.integrate module
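Concretely:

```python
from scipy.integrate import odeint
```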
A system of ODEs is usually formulated in standard form before it is attacked numerically. The standard form is

dy/dt = f(y, t)

where y = [y1(t), y2(t), ...] is a vector of the unknown functions and f is some function that gives the derivatives of the functions yi(t). To solve an ODE we need to know the function f and an initial condition, y(0).
Note that higher-order ODEs can always be written in this form by introducing new variables for the intermediate derivatives.
Once we have defined the Python function f and array y_0 (that is f and y(0) in the mathematical formulation), we can use the odeint function as:
```python
y_t = odeint(f, y_0, t)
```
where t is an array with time coordinates for which to solve the ODE problem. y_t is an array with one row for each point in time in t, where each column corresponds to a solution y_i(t) at that point in time.
We will see how we can implement f and y_0 in Python code in the examples below.
To make the Python code simpler to follow, let's introduce new variable names and the vector notation: x = [θ1, θ2, pθ1, pθ2].
```python
from numpy import cos, sin

g = 9.82
L = 0.5
m = 0.1

def dx(x, t):
    """ The right-hand side of the pendulum ODE """
    x1, x2, x3, x4 = x[0], x[1], x[2], x[3]

    dx1 = 6.0/(m*L**2) * (2 * x3 - 3 * cos(x1-x2) * x4)/(16 - 9 * cos(x1-x2)**2)
    dx2 = 6.0/(m*L**2) * (8 * x4 - 3 * cos(x1-x2) * x3)/(16 - 9 * cos(x1-x2)**2)
    dx3 = -0.5 * m * L**2 * ( dx1 * dx2 * sin(x1-x2) + 3 * (g/L) * sin(x1))
    dx4 = -0.5 * m * L**2 * (-dx1 * dx2 * sin(x1-x2) + (g/L) * sin(x2))

    return [dx1, dx2, dx3, dx4]
```
```python
# choose an initial state
x0 = [pi/4, pi/2, 0, 0]

# time coordinates to solve the ODE for: from 0 to 10 seconds
t = linspace(0, 10, 250)

# solve the ODE problem
x = odeint(dx, x0, t)
```
```python
# plot the angles as a function of time
fig, axes = plt.subplots(1, 2, figsize=(12, 4))

axes[0].plot(t, x[:, 0], 'r', label="theta1")
axes[0].plot(t, x[:, 1], 'b', label="theta2")

x1 = + L * sin(x[:, 0])
y1 = - L * cos(x[:, 0])
x2 = x1 + L * sin(x[:, 1])
y2 = y1 - L * cos(x[:, 1])

axes[1].plot(x1, y1, 'r', label="pendulum1")
axes[1].plot(x2, y2, 'b', label="pendulum2")
axes[1].set_ylim([-1, 0])
axes[1].set_xlim([1, -1]);
```
Simple animation of the pendulum motion. We will see how to make better animations in Lecture 4.
ODE problems are important in computational physics, so we will look at one more example: the damped harmonic oscillation. This problem is well described on the wiki page: http://en.wikipedia.org/wiki/Damping
The equation of motion for the damped oscillator is

d²x/dt² + 2ζω0 dx/dt + ω0² x = 0

where x is the position of the oscillator, ω0 is the frequency, and ζ is the damping ratio. To write this second-order ODE in standard form we introduce p = dx/dt:

dp/dt = −2ζω0 p − ω0² x
dx/dt = p
In the implementation of this example we will add extra arguments to the RHS function for the ODE, rather than using global variables as we did in the previous example. As a consequence of the extra arguments to the RHS, we need to pass a keyword argument args to the odeint function:
```python
def dy(y, t, zeta, w0):
    """ The right-hand side of the damped oscillator ODE """
    x, p = y[0], y[1]

    dx = p
    dp = -2 * zeta * w0 * p - w0**2 * x

    return [dx, dp]
```
```python
# initial state:
y0 = [1.0, 0.0]

# time coordinates to solve the ODE for
t = linspace(0, 10, 1000)
w0 = 2 * pi * 1.0
```
```python
# solve the ODE problem for four different values of the damping ratio
y1 = odeint(dy, y0, t, args=(0.0, w0))  # undamped
y2 = odeint(dy, y0, t, args=(0.2, w0))  # under damped
y3 = odeint(dy, y0, t, args=(1.0, w0))  # critical damping
y4 = odeint(dy, y0, t, args=(5.0, w0))  # over damped
```
Fourier transforms are one of the universal tools in computational physics, which appear over and over again in different contexts. SciPy provides functions for accessing the classic FFTPACK library from NetLib, which is an efficient and well tested FFT library written in FORTRAN. The SciPy API has a few additional convenience functions, but overall the API is closely related to the original FORTRAN library.
To use the fftpack module in a python program, include it using:
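For example, importing the two functions used below:

```python
from scipy.fftpack import fft, fftfreq
```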
To demonstrate how to do a fast Fourier transform with SciPy, let's look at the FFT of the solution to the damped oscillator from the previous section:
```python
N = len(t)
dt = t[1] - t[0]

# calculate the fast fourier transform
# y2 is the solution to the under-damped oscillator from the previous section
F = fft(y2[:, 0])

# calculate the frequencies for the components in F
w = fftfreq(N, dt)
```
Since the signal is real, the spectrum is symmetric. We therefore only need to plot the part that corresponds to the positive frequencies. To extract that part of w and F we can use some of the indexing tricks for NumPy arrays that we saw in Lecture 2:
```python
# select only indices for elements that correspond to positive frequencies
indices = where(w > 0)

w_pos = w[indices]
F_pos = F[indices]
```
As expected, we now see a peak in the spectrum that is centered around 1, which is the frequency we used in the damped oscillator example.
The linear algebra module contains a lot of matrix-related functions, including linear equation solving, eigenvalue solvers, matrix functions (for example matrix exponentiation), a number of different decompositions (SVD, LU, Cholesky), and more.
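As a minimal sketch of the equation-solving part (the matrix A and vector b here are hypothetical):

```python
import numpy as np
from scipy.linalg import solve

A = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([1.0, 2.0])

# solve the linear equation system A x = b
x = solve(A, b)

print(np.allclose(A.dot(x), b))  # True: x satisfies the system
```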
The eigenvector corresponding to the nth eigenvalue (stored in evals[n]) is the nth column in evecs, i.e., evecs[:,n]. To verify this, let's try multiplying the eigenvectors with the matrix and compare to the product of the eigenvector and the eigenvalue:
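A minimal version of this check, using a hypothetical 2×2 matrix:

```python
import numpy as np
from scipy.linalg import eig

A = np.array([[1.0, 2.0], [3.0, 4.0]])
evals, evecs = eig(A)

n = 1
# A v_n should equal lambda_n v_n for the nth eigenpair
print(np.allclose(A.dot(evecs[:, n]), evals[n] * evecs[:, n]))  # True
```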
There are also more specialized eigensolvers, like the eigh for Hermitian matrices.
```python
# norms of various orders
norm(A, ord=2), norm(A, ord=Inf)
```
Sparse matrices are often useful in numerical simulations dealing with large systems, if the problem can be described in matrix form where the matrices or vectors mostly contain zeros. SciPy has good support for sparse matrices, with basic linear algebra operations (such as equation solving, eigenvalue calculations, etc.).
There are many possible strategies for storing sparse matrices in an efficient way. Some of the most common are the so-called coordinate form (COO), list-of-lists (LIL) form, and compressed-sparse column CSC (and row, CSR). Each format has some advantages and disadvantages. Most computational algorithms (equation solving, matrix-matrix multiplication, etc.) can be efficiently implemented using CSR or CSC formats, but they are not so intuitive and not so easy to initialize. So often a sparse matrix is initially created in COO or LIL format (where we can efficiently add elements to the sparse matrix data), and then converted to CSC or CSR before being used in real calculations.
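A sketch of this workflow, building a hypothetical small matrix in LIL format and converting to CSR before computing with it:

```python
import numpy as np
from scipy.sparse import lil_matrix

# build incrementally in LIL format, which supports efficient element assignment
M = lil_matrix((4, 4))
M[0, 0] = 1
M[1, 1] = 3
M[2, 2] = 5
M[3, 3] = 7
M[1, 3] = 2

# convert to CSR before doing real computations
A = M.tocsr()

# sparse matrix-vector product
print(A.dot(np.ones(4)))
```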
We can use the fmin_bfgs function to find the minima of a function:
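The code cell itself appears to have been lost; the following is a sketch consistent with the output below, assuming the objective f(x) = 4x³ + (x − 2)² + x⁴ (an assumption reconstructed from the printed minima, not confirmed by the source):

```python
from scipy import optimize

def f(x):
    # assumed objective with one global and one local minimum
    return 4*x**3 + (x - 2)**2 + x**4

# starting points on either side pick out the two different minima
x_min = optimize.fmin_bfgs(f, -4)   # global minimum near x = -2.67
x_loc = optimize.fmin_bfgs(f, 0.5)  # local minimum near x = 0.47
```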
Optimization terminated successfully.
Current function value: -3.506641
Function evaluations: 30
Gradient evaluations: 10
Optimization terminated successfully.
Current function value: 2.804988
Function evaluations: 15
Gradient evaluations: 5
We can also use the brent or fminbound functions. They have slightly different syntax and use different algorithms.
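A sketch of both calls, minimizing a hypothetical quadratic:

```python
from scipy import optimize

# brent: scalar minimization, optionally from a bracketing interval
x_min = optimize.brent(lambda x: (x - 2)**2)

# fminbound: scalar minimization restricted to given bounds
x_minb = optimize.fminbound(lambda x: (x - 2)**2, 0, 10)

print(x_min, x_minb)  # both close to 2
```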
Finding a solution to a function
To find the root for a function of the form f(x)=0 we can use the fsolve function. It requires an initial guess:
```python
omega_c = 3.0

def f(omega):
    # a transcendental equation: resonance frequencies of a low-Q SQUID
    # terminated microwave resonator
    return tan(2 * pi * omega) - omega_c / omega
```
```python
fig, ax = plt.subplots(figsize=(10, 4))

x = linspace(0, 3, 1000)
y = f(x)

mask = where(abs(y) > 50)
x[mask] = y[mask] = NaN  # get rid of the vertical lines where the function flips sign

ax.plot(x, y)
ax.plot([0, 3], [0, 0], 'k')
ax.set_ylim(-5, 5);
```
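With an initial guess below the first asymptote, fsolve picks out the corresponding root (a self-contained sketch of the function above, with explicit numpy imports):

```python
import numpy as np
from scipy.optimize import fsolve

omega_c = 3.0

def f(omega):
    return np.tan(2 * np.pi * omega) - omega_c / omega

# fsolve requires an initial guess; here we start at omega = 0.1
omega_root = fsolve(f, 0.1)[0]

print(omega_root)  # one of the resonance frequencies
```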
/Users/rob/miniconda/envs/py27-spl/lib/python2.7/site-packages/IPython/kernel/__main__.py:4: RuntimeWarning: divide by zero encountered in divide
Interpolation is simple and convenient in SciPy: the interp1d function, when given arrays describing X and Y data, returns an object that behaves like a function, which can be called for an arbitrary value of x (in the range covered by X) and returns the corresponding interpolated y value:
```python
n = arange(0, 10)
x = linspace(0, 9, 100)

y_meas = f(n) + 0.1 * randn(len(n))  # simulate measurement with noise
y_real = f(x)

linear_interpolation = interp1d(n, y_meas)
y_interp1 = linear_interpolation(x)

cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
```
```python
# create a (discrete) random variable with Poissonian distribution
X = stats.poisson(3.5)  # photon distribution for a coherent state with n = 3.5 photons
```
```python
n = arange(0, 15)

fig, axes = plt.subplots(3, 1, sharex=True)

# plot the probability mass function (PMF)
axes[0].step(n, X.pmf(n))

# plot the cumulative distribution function (CDF)
axes[1].step(n, X.cdf(n))

# plot histogram of 1000 random realizations of the stochastic variable X
axes[2].hist(X.rvs(size=1000));
```
```python
# create a (continuous) random variable with normal distribution
Y = stats.norm()
```
```python
x = linspace(-5, 5, 100)

fig, axes = plt.subplots(3, 1, sharex=True)

# plot the probability distribution function (PDF)
axes[0].plot(x, Y.pdf(x))

# plot the cumulative distribution function (CDF)
axes[1].plot(x, Y.cdf(x))

# plot histogram of 1000 random realizations of the stochastic variable Y
axes[2].hist(Y.rvs(size=1000), bins=50);
```
```python
X.mean(), X.std(), X.var()  # Poisson distribution
```
(3.5, 1.8708286933869707, 3.5)
```python
Y.mean(), Y.std(), Y.var()  # normal distribution
```
(0.0, 1.0, 1.0)
Test if two sets of (independent) random data come from the same distribution:
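One way to do this is Student's t-test for independent samples, stats.ttest_ind (the normal samples here are hypothetical):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, size=1000)
b = rng.normal(loc=0.0, size=1000)  # same distribution as a
c = rng.normal(loc=1.0, size=1000)  # shifted distribution

t_ab, p_ab = stats.ttest_ind(a, b)  # large p-value: cannot reject same-distribution hypothesis
t_ac, p_ac = stats.ttest_ind(a, c)  # tiny p-value: the distributions clearly differ

print(p_ac < 0.01)  # True
```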