
Monte Carlo Study of Ferromagnetism using an Ising Model

The goal of this project (over Exercises 1-8) is to create a statistical model simulating the evolution of magnetism as a function of material temperature.

Since the emergence of magnetism is attributed to the combined contribution of a great many small atomic magnetic dipoles, a statistical method is to be utilised, drawing on:

  • Monte Carlo methods

  • Random number generation

  • Ferromagnetism

  • Ising Model

The subject of this project is statistical in nature, and hence a basic understanding of Monte Carlo methods and random number algorithms will be necessary.

Monte Carlo Methods

Numerical computations which utilise random numbers are called Monte Carlo methods after the famous casino. The obvious applications of such methods are in stochastic physics: e.g., statistical thermodynamics. However, there are other, less obvious, applications including the evaluation of multi-dimensional integrals.

This method was popularised by physicists such as Stanislaw Ulam, Enrico Fermi, John von Neumann, and Nicholas Metropolis, among others. A famous early use was by Enrico Fermi, who in the 1930s used a random method to calculate the properties of the recently discovered neutron. Of course, these early simulations were greatly restricted by the limited computational power available at that time.

Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers previously used for statistical sampling.

Figure 1: The famous casino in Monte Carlo, Monaco, gives its name to the class of algorithms which utilise randomness to make statistical predictions.

Creating Random Numbers

No numerical algorithm can generate a truly random sequence of numbers. However, there exist algorithms which generate repeating sequences of $N_{max}$ (say) integers which are, to a fairly good approximation, randomly distributed in the range 0 to $N_{max} - 1$. Here, $N_{max}$ is a (hopefully) large integer. This type of sequence is termed pseudo-random.

The most well-known algorithm for generating pseudo-random sequences of integers is the so-called linear congruential method. The formula linking the $n^{th}$ and $(n+1)^{th}$ integers in the sequence is

$$X_{n+1} = (A X_n + C)\ \mathrm{MOD}\ N_{max}$$

where $A$, $C$, and $N_{max}$ are positive integer constants. The first number in the sequence, the so-called "seed" value, is selected by the user.

As an example, calculate a list of numbers using $A = 7$, $C = 2$, and $N_{max} = 10$.

# Generate pseudo-random numbers
I = 1
A = 7; C = 2; M = 10
for i in range(8):
    In = (A * I + C) % M
    print(In)
    I = In
9
5
7
1
9
5
7
1

A typical sequence of numbers generated by this formula is

$$X = 9, 5, 7, 1, 9, 5, \ldots$$

Evidently, the above choice of values for $A$, $C$, and $N_{max}$ is not a particularly good one, since the sequence repeats after only four iterations. However, if $A$, $C$, and $N_{max}$ are properly chosen then the sequence is of maximal length (i.e., of length $N_{max}$), and approximately randomly distributed in the range 0 to $N_{max} - 1$.
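To see what "maximal length" means in practice, here is a minimal sketch (the function name and the parameter choices are our own) that counts how many steps the generator takes before the seed value reappears:

# Count the steps before a linear congruential sequence repeats.
# The count <= Nmax guard stops the search if the seed never recurs.
def lcg_period(A, C, Nmax, seed=1):
    X = (A * seed + C) % Nmax
    count = 1
    while X != seed and count <= Nmax:
        X = (A * X + C) % Nmax
        count += 1
    return count

print(lcg_period(7, 2, 10))   # 4: the short cycle seen above
print(lcg_period(5, 3, 16))   # 16: a maximal-length choice of A, C and Nmax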

Testing for randomness

As a general rule, before implementing a random-number generator in your programs, you should check its range and that it produces numbers that "appear" random. This can be attempted either using a graphical display of your random numbers or, more robustly, by performing a mathematical analysis.

With the visual method, since your brain is quite refined at recognising patterns, it can often spot whether there is one in your random numbers. For instance, separate your random numbers into pairs $(x, y) = (X_i, X_{i+1})$ and analyse them visually using a plot(x,y).

# Generate pseudo-random numbers into pairs
# and compare visually with a plot
import matplotlib.pyplot as plt

Xn = 1
x = []
y = []
A = 16807; C = 0; Nmax = 2147483647   # 2^31 - 1
#A = 7; C = 2; Nmax = 10

# Run a loop to calculate the pairs
for i in range(10000):
    N = (1.0*A*Xn + C) % Nmax
    if i % 2 == 0:
        x.append(N/Nmax)
    else:
        y.append(N/Nmax)
    Xn = N

plt.plot(x, y, '.')
[<matplotlib.lines.Line2D at 0x7fb8cbcfb6a0>]
Image in a Jupyter notebook

Another visual method is to plot a histogram. If we observe a flat line at 1.0, subject to random fluctuations, we can confirm there is no bias in the random distribution. Look up the format of the hist function and plot:

  • a normalised probability of your list of random numbers from 0 to 1

  • in 100 bins

  • draw a red line to show the probability = 1

The more bins/samples we take the smaller the fluctuations about the average value.

N = x + y
plt.hist(N, 100, normed=True)
plt.plot([0, 1.], [1., 1.], 'r-', lw=3)
plt.xlabel("x", fontsize=20)
plt.ylabel("sample probability", fontsize=20)
<matplotlib.text.Text at 0x7fb8ca9c0588>
Image in a Jupyter notebook

If your list is truly random, you should observe that every value of $x$ is (roughly) as likely to be chosen as every other value.

Most computer languages and simulation packages have good built-in random number generators. In Python, we can use the NumPy random library, which is based on the Mersenne Twister algorithm, developed in 1997.

First, let's review the details of the module called 'random':

Now import the 'random' function via NumPy, which produces uniformly distributed values:

import numpy as np

As before, when you want to reproduce a particular set of random numbers, you can set the "seed" value of the generator to a particular value. Every time you set the seed to this value, it returns the generator to its initial state and will always give you the same sequence of numbers.

The default seed value in Python is None. The Python kernel interprets this to mean that it should try to initialise itself using a pseudo-random number from the computer's randomness source, or else it will use the current computer clock time to set the seed.

Let's see what setting and resetting the seed does:

# Using the random function:
# print 5 uniformly distributed numbers between 0 and 1
print( np.random.random(5) )
# now print another 5 - should be different
print( np.random.random(5) )
[ 0.26269082 0.29278685 0.81589992 0.38623881 0.08344994]
[ 0.02783127 0.79774273 0.39379147 0.05528732 0.03385086]

Set the seed to any value using the seed function, and print 5 random numbers:

# now set the seed to something:
np.random.seed(4242)
# print 5 random numbers from the generator with this seed
print("Using seed = 4242:")
print(np.random.random(5))
Using seed = 4242:
[ 0.32494949 0.94041458 0.91400794 0.28650938 0.78180262]

Run this cell of commands with the same initial seed a few times. You should see that it produces the same result.

The random module can also create arrays of different shapes. Here is a 2D example using a 5x5 array of ones and zeros:

np.random.randint(2, size=(5,5))
array([[1, 1, 0, 0, 1],
       [0, 1, 0, 0, 0],
       [0, 0, 1, 1, 1],
       [1, 1, 0, 1, 1],
       [0, 0, 0, 0, 0]])

Here we are using the .randint method to ask for integers as opposed to floats.
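For the Ising model later in this project we will want spins of ±1 rather than 1/0. As a minimal sketch, here are two equivalent ways to generate such a lattice (the variable name is our own):

# Draw 5x5 spins directly from the two values -1 and +1 ...
spins = np.random.choice([-1, 1], size=(5, 5))
# ... or map the 0/1 output of randint onto -1/+1
spins = 2*np.random.randint(2, size=(5, 5)) - 1
print(spins)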

Random Walks and the Markov process

A classic visualisation of random behaviour is the random walk. Consider a completely drunk person who walks along a street and, being drunk, has no sense of direction. This drunkard is equally likely to move forwards as backwards at each step.

Here is an example in 2D animated using the vPython module:

# Walk.py Random walk with graph
# Use VPython to animate the random walk
from vpython import graph, gcurve, color, rate
# Need the random number generator from numpy
import numpy as np

# Initialise variables
np.random.seed(None)   # Seed generator, None => system clock
steps = 200            # number of steps
x = y = 0.             # Start at origin

# Set up the vpython animated graph
gd = graph(width=400, height=400, title='A Random Walk in 2D',
           xtitle='x', ytitle='y',
           xmax=10, xmin=-10, ymax=10, ymin=-10)
pts = gcurve(color=color.red)
pts.plot(pos=(x, y))

for i in range(0, steps + 1):
    x += (np.random.random() - 0.5)*2.   # -1 <= dx <= 1
    y += (np.random.random() - 0.5)*2.   # -1 <= dy <= 1
    pts.plot(pos=(x, y))                 # Plot points
    rate(20)                             # Refresh rate of graph

Run this a number of times to see the effects of a random walk! Observe that, even when the starting point remains the same, each new random step accumulates into a different overall path and final destination.

A Markov process is a random walk with a selected probability for making a move. The new move is independent of the previous history of the system. The Markov chain is used repeatedly in Monte Carlo simulations in order to generate new random states.

In the context of a physical system containing many atoms, molecules etc., the number of different energy states is practically infinite. Hence, statistical approaches utilise algorithms to sample this large state space and calculate average measurements such as energy and magnetisation. With Monte Carlo methods, we can explore and sample this state space using a random walk. The role of the Markov chain is to sample those states that make the most significant contributions.

The reason for choosing a Markov process is that when it is run for a long enough time starting with a random state, we will eventually reach the most likely state of the system. In thermodynamics, this means that after a certain number of Markov steps we reach an equilibrium distribution. This mimics the way a real system reaches its most likely state at a given temperature of the surroundings.

To reach this distribution, the Markov process needs to obey two important conditions, those of ergodicity and detailed balance. These conditions then impose constraints on our algorithms for accepting or rejecting new random states.

The Metropolis algorithm discussed next abides by both these constraints. It is widely used in Monte Carlo simulations, and an understanding of it rests on the interpretation of random walks and Markov processes.

The Metropolis algorithm

In order to follow the predictions of a given statistical probability function, such as a Boltzmann distribution, the samples of state space need to be chosen accordingly. Instead of sampling a lot of states and then weighting them by their Boltzmann probability factors, it makes more sense to choose states based on their Boltzmann probability and to then weight them equally. This is known as the Metropolis algorithm, which has a characteristic cycle:

  1. A trial configuration is made by randomly choosing one state

  2. The energy difference, $\Delta E$, of adopting this trial state relative to the present state is calculated.

  3. If this reduces the total energy of the system, i.e. if $\Delta E \le 0$, then the trial state is energetically favourable and thus accepted.

  4. Otherwise, it will only be accepted if its Boltzmann probability exceeds a random number: $\exp(-\Delta E/k_B T) > \eta$, where $0 \le \eta \le 1$.

Each cycle accepts or rejects a potential state, and the test is repeated for many other states in a Markov process. The total number of cycles is typically the number of atoms, or bodies, in the system. Obviously, the system must be allowed to reach thermal equilibrium before sampling the Boltzmann distribution in this way.
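As a minimal sketch, the accept/reject test at the heart of this cycle takes only a few lines of Python (the function and argument names are our own; dE is the energy change of the trial state and T the temperature in energy units):

import numpy as np

def metropolis_accept(dE, T):
    """Return True if the trial state should be accepted."""
    if dE <= 0:                                  # energetically favourable: always accept
        return True
    return np.exp(-dE / T) > np.random.random()  # otherwise accept with Boltzmann probability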

Ferromagnetism using the Ising Model

A ferromagnetic material is one that produces a magnetic field of its own, even without the presence of an external magnetic field. A ferromagnet is any material that forms a permanent magnet by itself, the magnitude of which is not reduced in the presence of any other magnetic fields.

A paramagnet is a material in which, in the presence of an external magnetic field, induced interatomic magnetic fields are formed, and therefore a magnetic field through the material is produced. However, once the external field is removed, the induced magnetic fields between the atoms are lost, and therefore the material can only carry an induced, not a permanent, magnetic field.


Figure 2: A pictorial description of the ordering of spins in ferromagnetism, antiferromagnetism, ferrimagnetism, and paramagnetism

Ferromagnets contain finite-size domains in which the spins of all the atoms point in the same direction. When an external magnetic field is applied to these materials, the different domains align and the materials become “magnetised.” Yet as the temperature is raised, the total magnetism decreases, and at the Curie temperature the system goes through a phase transition beyond which all magnetisation vanishes.

Your task in this project is to model and explain the thermal behaviour of ferromagnets.

The Ising model is the simplest model of a ferromagnet. The basic phenomenology of the Ising model is simple: there is a certain temperature $T_c$ below which the system will spontaneously magnetise. This is what we will study with Monte Carlo.

Ising Model

As shown in Figure 2, the model consists of an array of particles, each with a spin value of either +1 or -1, corresponding to an up or down spin configuration respectively. Inside the lattice, spins interact with their 'neighbours', and each particle on the lattice has an associated interaction energy. The value of this interaction energy depends on whether neighbouring particles have parallel or anti-parallel spins. The interaction energy between two parallel spins is $-J$, and between anti-parallel spins $+J$, where $J$ is an energy coupling constant that depends on the material being simulated:

  • $J > 0$ corresponds to a ferromagnetic state, in which spins tend to align with each other in order to minimise the energy.

  • $J < 0$: spins prefer to be antiparallel, and for a simple lattice that leads to a chessboard-like alignment, a feature of the anti-ferromagnetic state.

  • $J = 0$: the spin alignment is arbitrary.


Figure 3: An example of a two-dimensional Ising model. Upward pointing arrows correspond to a particle with up spin (+1), and downward pointing arrows, a spin of -1.

By summing the interaction energies of every particle on the lattice, the total energy, $E$, of the configuration can be obtained, and is given by the equation:

$$E = -J \sum_{\langle i,j \rangle} S_i S_j$$

where $\langle i,j \rangle$ represents nearest neighbours and $S_i = +1$ for an up spin and $-1$ for a down spin on site $i$. For a given spin, its local energy is calculated by summing over the energies of each of its neighbouring spins, as given by:

$$E_i = -J \, S_i \, (S_{j1} + S_{j2} + S_{j3} + S_{j4})$$


Figure 4: (a) The 4 nearest neighbours of $S_i$ are shown on the lattice. (b) Flipping $S_i$ will change the energy of the lattice by $\Delta E$. The probability of this flip occurring requires a statistical model.

The change in energy of the system is dictated by the interaction of a dipole with its neighbours, so that flipping $S_i$ to $S'_i$ changes the energy by:

$$\Delta E = E'_i - E_i = 2J \sum_j S_i S_j$$
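As a minimal sketch, both quantities can be computed on a lattice stored as a NumPy array, assuming $J = 1$ (the function names are our own; the % operator provides the periodic boundary conditions described under "Planning your model" below). Since flipping $S_i$ simply reverses the sign of $E_i$, the energy change of a flip is $\Delta E = -2E_i$:

import numpy as np

def local_energy(lattice, i, j, J=1):
    """Interaction energy E_i between spin (i, j) and its 4 nearest neighbours."""
    N, M = lattice.shape
    neighbours = (lattice[(i+1) % N, j] + lattice[(i-1) % N, j] +
                  lattice[i, (j+1) % M] + lattice[i, (j-1) % M])
    return -J * lattice[i, j] * neighbours

def delta_E(lattice, i, j):
    """Energy change if spin (i, j) were flipped."""
    return -2 * local_energy(lattice, i, j)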

The two-dimensional square lattice Ising model is very difficult to solve analytically; the first such description was achieved by Lars Onsager in 1944, who derived the dependence:

$$\frac{k_B T_c}{J} = \frac{2}{\ln(1+\sqrt{2})} \approx 2.269$$
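This value is easy to check numerically:

import numpy as np
Tc = 2 / np.log(1 + np.sqrt(2))
print(Tc)   # ~2.269, in units of J/k_B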

His solution, although elegant, is rather complicated. We're going to use the Monte Carlo method to see the effects that his solution describes: we expect there to be some temperature at which a phase transition happens, where the system goes from being a ferromagnet to a paramagnet.

The program starts at a given temperature and calculates whether each considered spin flips or not, for a certain number of iterations. At each temperature step we first perform $l$ iterations to reach thermal equilibrium, and then perform another $l/2$ iterations to determine the physical quantities: energy per site, magnetisation per site, magnetic susceptibility, specific heat, correlation function and correlation length.

Planning your model

We will consider a square two-dimensional lattice with periodic boundary conditions. Here, the first spin in a row 'sees' the last spin in the row and vice versa. The same applies for spins at the top and bottom of a column.
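In Python, the modulo operator provides these wrap-around neighbours directly. A tiny illustration on a row of length 5:

N = 5
i = 4                 # the last spin in the row
print((i + 1) % N)    # 0: the right-hand neighbour wraps around to the first spin
print((i - 1) % N)    # 3: Python's % also handles the left edge at i = 0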

We will define individual functions for all the components of our model, such as:

  • creating a 2D lattice of spins

  • randomly choose a spin

  • flip the spin

  • calculate nearest neighbour values

  • calculate energy and magnetisation of lattice

  • the Metropolis algorithm

The Ising model will then simply start at some temperature, $T$, evolve to equilibrium, and then evolve further to a steady state.

Exercise 1: Setting up a 2D grid

We are to simulate a simplified 2D surface consisting of magnetic dipoles using the Ising approach. The Ising model represents a regular grid of points where each point has two possible states, spin up or spin down. States like to have the same spin as their immediate neighbors so when a spin-down state is surrounded by more spin-up states it will switch to spin-up and vice versa. Also, due to random fluctuations, points might switch spins, even if this switch is not favourable.

So to begin, we will define a function to create an initial $N \times M$ grid with magnetic spin values of $\pm 1$.

For this exercise create 3 functions:

  • normallattice: create an $N \times M$ lattice with uniform spin values

  • randomlattice: create an $N \times M$ lattice with random spin values

  • plotlattice: plot an image of the lattice with a colour code for the spin

During your later investigations, you can use these functions to test if an initial random or uniform state is significant.

Debug your code by plotting resulting lattice values.
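One possible minimal sketch of these three functions (the implementation details, such as the use of imshow, are our own choices rather than the only valid approach):

import numpy as np
import matplotlib.pyplot as plt

def normallattice(N, M, spin=1):
    """N x M lattice with every spin set to the same value."""
    return np.full((N, M), spin, dtype=int)

def randomlattice(N, M):
    """N x M lattice of spins chosen randomly from +1/-1."""
    return np.random.choice([-1, 1], size=(N, M))

def plotlattice(lattice):
    """Show the lattice as an image, colour-coded by spin."""
    plt.imshow(lattice, vmin=-1, vmax=1)
    plt.colorbar(label='spin')
    plt.show()

plotlattice(randomlattice(10, 10))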

Exercise 2: Randomly choose & flip a lattice point

Implement a function to randomly select one of the particles in the lattice and return its coordinates $(i,j)$. Next, create a function to flip the spin of the particle pointed to by the $(i,j)$ indices and return the new lattice state.

To test these functions, temporarily add a print-out of the coordinates and the spin of the chosen particle.
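A possible sketch of these two functions (the names and signatures are our own):

def choosespin(lattice):
    """Return the (i, j) coordinates of a randomly selected spin."""
    N, M = lattice.shape
    return np.random.randint(N), np.random.randint(M)

def flipspin(lattice, i, j):
    """Flip the spin at (i, j) and return the lattice."""
    lattice[i, j] *= -1
    return lattice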

Exercise 3: Nearest Neighbour algorithm

A key element of this model is calculating the combined spin state of the 4 nearest neighbours around a given lattice point $(i,j)$.

Write a function to return the combined spin state and which respects periodic boundary conditions.

Perform some tests with a simple 5x5 lattice. Once you have convinced yourself that this functions correctly, add the next component to your model.

Exercise 4: Calculating the energy of the lattice

The local energy is defined as the total interaction energy between the selected particle and its immediate neighbours.

(a) Write a function to calculate this. (b) Also write a function to calculate the total energy of the lattice.

Perform some tests with a simple 5x5 lattice, for example:

  • Compare the energy of the lattice for different configurations of spins.

  • What is the total energy of the system when all spins point up or down or randomly?

Once you have convinced yourself that this functions correctly, add the next component to your model.

Exercise 5: Calculate the magnetisation of the lattice

The total magnetisation of the lattice ($M$) is defined as:

$$M = \sum_i S_i$$

and the magnetisation per spin is:

$$m = \frac{1}{N_S}\sum_i S_i$$

where $N_S$ is the total number of spins on the lattice.

Implement functions to calculate these values.
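Both quantities reduce to single NumPy calls; a minimal sketch (the function names are our own):

def magnetisation(lattice):
    return np.sum(lattice)    # total magnetisation M

def magnetisation_per_spin(lattice):
    return np.mean(lattice)   # m = M / N_S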

Perform some tests with a simple 5x5 lattice, for example:

  • What will be the value of $m$ if all spins are aligned up? What if all the spins are aligned down? What if half of the spins are up and half are down?

  • At the very beginning of your program you should have $m = 1.0$, as all the spins point up. Try to perturb a few particles and recalculate the magnetisation.

Once you have convinced yourself that this functions correctly, add the next component to your model.

Exercise 6: Implement the Metropolis Algorithm

At this point in your code you should have all the necessary functions properly implemented, and the thermodynamic simulation of the system can take place.

  1. Set up the system in an initial configuration;

  2. Choose one of the particles at random using a Markov Monte Carlo approach

  3. Calculate the energy change $\Delta E$ of the system if the spin of the chosen particle is flipped

  4. If $\Delta E$ is negative, then flip the spin and go to step 7, otherwise ...

  5. Generate a random number $r$ such that $0 < r < 1$

  6. If this number is less than the Boltzmann probability for $\Delta E$, i.e. $r < \exp(-\Delta E/k_B T)$, then flip the spin.

  7. Choose another spin of the lattice at random and repeat steps 2 to 6 a chosen number of times ($N_{MCS}$)

Note, the Metropolis algorithm only contains the ratio $\Delta E/k_B T$, where $k_B$ is the Boltzmann constant. Therefore, by defining $T' = k_B T/J$, the values of $J$ and $k_B$ are not required and you can work with $T'$ as a dimensionless parameter independent of the material chosen. Hence the expression in step (6) reduces to $r < \exp(-\Delta E/T')$, with $\Delta E$ (in units of $J$) an even integer taking values between -8 and 8.

It is usual to reject the first $N_{MCS}/2$ configurations in each Monte Carlo run in order to first establish thermalisation, and to consider only one configuration every $N_S$ steps to avoid correlations.

For your final production run choose $N_{MCS} = 100000$ or a larger number, while you should use a smaller value of $N_{MCS}$ for debugging. When using lattices of different sizes, comparable quality of results can be obtained using the same value of $N_{MCS}/N_S$, where $N_S$ is the number of spins. This ratio is referred to as the 'number of Monte Carlo configurations per spin' and indicates that an equal number of random choices is taken for each spin in the system.
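Putting these steps together, here is a compact sketch of the main loop in the dimensionless units described above, reusing the choosespin and local_energy helpers sketched in earlier exercises (all names are our own):

def ising_run(lattice, T, NMCS):
    """Run NMCS Metropolis steps at dimensionless temperature T = k_B T / J."""
    m_samples = []
    NS = lattice.size
    for step in range(NMCS):
        i, j = choosespin(lattice)
        dE = -2 * local_energy(lattice, i, j)   # energy cost of flipping (i, j)
        if dE <= 0 or np.random.random() < np.exp(-dE / T):
            lattice[i, j] *= -1                 # accept the flip
        # reject the first half (thermalisation), then sample every NS steps
        if step >= NMCS // 2 and step % NS == 0:
            m_samples.append(np.mean(lattice))
    return m_samples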

Exercise 7: Perform measurements

This is the key section of your project, where you get to perform statistical measurements of the 2D system. These measurements are magnetisation, magnetic susceptibility, energy and specific heat. If you perform many simulations at different temperatures, you should be in a position to observe phase transitions and measure the transition or Curie temperature, $kT_c$.

For a given temperature, we will wish to calculate the average magnetisation.

The average magnetisation per spin of the lattice is given by:

$$\langle m \rangle = \frac{1}{N_C} \sum_i^{N_C} m_i$$

where $N_C$ is the number of configurations included in the statistical averaging, and $m_i$ is the value of the magnetisation for a given configuration.

Investigate the magnetisation over a range of temperatures where the ferromagnetic to paramagnetic phase transition occurs. It is best to start at a low temperature and work upwards. Analyse the magnetisation as a function of temperature and visualise the results. Also consider providing representative lattice images below, at and above the transition temperature, $kT_c$ (or even a nice animated gif of the full temperature scan!).

Identify and discuss phase transitions in the evolution of the system.

Consider benchmarking your numerical model for magnetisation. Can you find analytic solutions from research literature to compare with your numerical model predictions?

The magnetic susceptibility is another useful material parameter. This tells us how much the magnetisation changes on increasing the temperature. From your magnetisation results it should be possible to calculate the magnetic susceptibility as a function of temperature, using:

$$\chi = \frac{1}{T} \left[ \langle m^2 \rangle - \langle m \rangle^2 \right]$$

Interpret and discuss your results.

According to the fluctuation-dissipation theorem in statistical physics, the specific heat per spin of the lattice at temperature $T$ is given by

$$C_V = \frac{\langle E^2 \rangle - \langle E \rangle^2}{N_S T^2}$$

where $E$ is the energy of the lattice. The thermal averages $\langle E \rangle$ and $\langle E^2 \rangle$ can again be calculated by the Monte Carlo method using:

$$\langle E \rangle = \frac{1}{N_C} \sum E$$

Using this method, investigate the specific heat of the spin system in the vicinity of the phase transition.
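As a sketch of how these averages might be evaluated, assuming the loop above is extended to also record energy samples E_samples alongside m_samples, with NS spins at dimensionless temperature T (all names carried over from the earlier sketches):

m = np.array(m_samples)
E = np.array(E_samples)

m_avg = np.mean(np.abs(m))                            # <|m|>; |m| avoids cancellation near T_c
chi = (np.mean(m**2) - np.mean(m)**2) / T             # magnetic susceptibility
C_V = (np.mean(E**2) - np.mean(E)**2) / (NS * T**2)   # specific heat per spin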

Exercise 8: Calculate statistical errors and estimate finite size effects

Estimating the statistical errors is very important when performing Monte Carlo simulations, i.e. a single simulation may produce a fluke result! Also, the finite size of the 2D space can affect measurements such as $T_C$.

Repeat the measurements of Exercise 7, varying the size of the lattice to 32x32, 64x64, etc., to estimate the finite size effects of the 2D grid.

Run each lattice simulation a number of times in order to estimate the statistical errors of the measurements. Save your data in text files for future analysis.
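A sketch of one possible approach, reusing the ising_run and randomlattice helpers sketched earlier (the lattice size, temperature, number of repeats and file name are illustrative):

runs = []
for _ in range(10):
    samples = ising_run(randomlattice(32, 32), T=2.0, NMCS=100000)
    runs.append(np.mean(np.abs(samples)))

m_mean = np.mean(runs)
m_err = np.std(runs, ddof=1) / np.sqrt(len(runs))   # standard error of the mean
np.savetxt("ising_32x32_T2.0.txt", runs)            # save for later analysis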

Submission of project

The topic of your project is "An investigation of Ferromagnetism using a 2D Ising Model".

For this project, you are required to:

  • Build a computational model of Ferromagnetism using the Ising Model

  • Do so by completing the steps indicated by Exercises 1-8

  • Create a single Jupyter notebook called Ising2D.ipynb containing your code and to present your work

  • Annotate your Jupyter notebook with short sections of text for:

    • explaining your code & model

    • analysing and visualising your data

  • interpreting and discussing the results in the context of:

      • the physics

      • the limitations of the model

  • This assigned project folder will be collected on SageMathCloud (SMC) for assessment on Sunday 21 May at 23:59.

Remember, you are not constrained to develop your work on SMC. If desired, you can work on your local computer or via a computer cluster, and indeed work with Python scripts (.py). However, the submission of the project will be via SMC in notebook form, as described above.

References:

  1. Torquato, S., 2011. Toward an Ising model of cancer and beyond. Physical biology, 8(1), p.015017.

  2. Ising, E., 1925. Beitrag zur Theorie des Ferromagnetismus. Z. Phys. 31: 253-258

  3. Onsager, L., 1944. Crystal statistics. I. A two-dimensional model with an order-disorder transition. Phys. Rev., Series II 65: 117–149

  4. Metropolis, N. et al., 1953. Equation of State Calculations by Fast Computing Machines. Journal of Chemical Physics 21 (6): 1087.