Monte Carlo Simulation: Principle and Implementation

Principle of Monte Carlo Simulation

The method of Monte Carlo simulation for the study of thermodynamic properties of matter is based on statistical physics with the use of a computer programming language. It was introduced for the first time in 1953 by Metropolis et al. [233]. However, simulations became popular only from the early 1990s, when several new simulation methods of high precision were introduced and, at the same time, powerful computers became accessible to a large number of researchers [33]. Today, numerical simulations are considered an important investigation method, very efficient and complementary to theory and experiment in the study of the properties of matter. Numerical simulations allow us to test the validity of theoretical approximations, to compare simulated results quantitatively with experimental data, and to propose interpretations. In addition, real systems have many parameters, but often only a few of them govern the main properties. Theories and experiments cannot test all of them, for the simple reason that experimental realizations and theoretical calculations take time and cost to carry out. Simulations can help identify relevant mechanisms before realizing experimental setups or constructing theoretical models. In many cases, numerical simulations are the only way to study very complex systems that theory and experiment cannot investigate properly.

Physics of Magnetic Thin Films: Theory and Simulation, Hung T. Diep

Copyright © 2021 Jenny Stanford Publishing Pte. Ltd.

ISBN 978-981-4877-42-8 (Hardcover), 978-1-003-12110-7 (eBook)

It is obvious that numerical simulations also have particular difficulties. The first kind of difficulty is related to the capacity of computers: limited memory and speed. The second kind concerns the efficiency of the simulation method: to get good statistical averages, to treat particular physical effects correctly, to reduce errors, etc. Of course, there are remedies that improve or remove most of these difficulties, as seen below. A good simulation requires great care at every step, from the choice of the model to a deep analysis of the results. We recall that simulation is a method to study a problem: a good knowledge of the theoretical background and experimental data prior to the simulation is necessary.

In the following, we present the principle of the Monte Carlo simulation using the Metropolis algorithm. Although Monte Carlo simulations can be performed using the micro-canonical and grand-canonical statistical descriptions (see Appendix A), we employ in the following the canonical description for illustration because this description is most frequently used in simulations. The system we consider is kept at a constant temperature T. The internal variables such as energy and magnetization are free to fluctuate at T. Of course, to show a quantity versus temperature using the Metropolis algorithm we have to perform independent simulations at several discrete temperatures in the temperature region of interest. However, when we use advanced Monte Carlo techniques, we can calculate physical quantities as continuous functions of temperature, as seen in Section 6.5.

To illustrate the principle of the Monte Carlo simulation using the Metropolis algorithm, we consider the Ising model with ferromagnetic interaction J between nearest neighbors. This simple example, however, does not cause a loss of generality of the method.

In a simulation, we wish to calculate average values of physical quantities such as the average energy < E >, the heat capacity Cv, the average magnetization < M > and the susceptibility χ. The average value of a physical quantity A is defined by

\[
\langle A \rangle = \frac{1}{Z(T)} \sum_{s} A(s)\, e^{-E(s)/k_B T}, \qquad (6.1)
\]

where Z(T) = Σ_s e^{-E(s)/k_B T} is the partition function at the temperature T, and E(s) and A(s) are the system energy and the value of A in the microscopic state s. For a spin system, s is a spin configuration.
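For a very small lattice, this canonical average can still be evaluated exactly by enumerating all spin configurations, which makes the definition concrete before any sampling is introduced. A minimal Python sketch (the lattice size and temperature are illustrative choices; k_B = 1):

```python
import itertools
import math

def energy(config, L, J=1.0):
    """Ising energy of a flattened L x L configuration, periodic boundaries.

    Each bond is counted once by pairing every site with its right and
    down neighbors only."""
    E = 0.0
    for i in range(L):
        for j in range(L):
            s = config[i * L + j]
            E -= J * s * (config[i * L + (j + 1) % L]
                          + config[((i + 1) % L) * L + j])
    return E

def exact_average_energy(L, T):
    """<E> from the full sum over all 2^(L*L) microscopic states (k_B = 1)."""
    Z = 0.0
    E_sum = 0.0
    for config in itertools.product((-1, 1), repeat=L * L):
        E = energy(config, L)
        w = math.exp(-E / T)   # Boltzmann weight of the state
        Z += w
        E_sum += E * w
    return E_sum / Z

print(exact_average_energy(2, T=2.0))
```

Already at L = 6 the loop would have to visit 2^36 states, which is why such a direct enumeration is only a check, never a method.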

In principle, we have to sum over all spin configurations s. The total number of spin configurations of a system of N Ising spins is 2^N. This is a huge number even for a moderate N: for N = 100 we already have 2^100 ≈ 1.3 × 10^30 states. It is impossible to take all microscopic states into account. However, we can overcome this difficulty in the two following ways:

Simple Sampling

We generate a number C of random spin configurations by giving randomly a value +1 or -1 to each spin. We then calculate < A > using these C states:

\[
\langle A \rangle \simeq \frac{\sum_{s=1}^{C} A(s)\, e^{-E(s)/k_B T}}{\sum_{s=1}^{C} e^{-E(s)/k_B T}}. \qquad (6.2)
\]

It is obvious that the precision of the obtained average value depends on the number C of configurations: the larger C is, the more precise < A > is. This simple sampling suffers from a more serious problem: randomly generated spin configurations correspond to disordered states at high temperatures. So, for low T, we would have to generate random configurations with a larger concentration of up spins with respect to down spins (or vice versa). But then, how do we know which concentration corresponds to which temperature? The average value so calculated certainly contains uncontrolled errors. We can use simulations at constant concentration instead of constant temperature, but the description will then be micro-canonical.
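Simple sampling for the energy can be sketched as follows (a minimal Python sketch; the parameters L, T and C are illustrative, and k_B = 1). As discussed above, the estimate is only trustworthy at high temperatures, where random configurations are typical:

```python
import math
import random

def energy(spins, L, J=1.0):
    """Ising energy on an L x L square lattice with periodic boundaries."""
    E = 0.0
    for i in range(L):
        for j in range(L):
            E -= J * spins[i][j] * (spins[i][(j + 1) % L]
                                    + spins[(i + 1) % L][j])
    return E

def simple_sampling_energy(L, T, C, seed=0):
    """Estimate <E> from C uniformly random configurations,
    reweighted by their Boltzmann factors (k_B = 1)."""
    rng = random.Random(seed)
    num = 0.0
    den = 0.0
    for _ in range(C):
        spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
        E = energy(spins, L)
        w = math.exp(-E / T)
        num += E * w
        den += w
    return num / den

print(simple_sampling_energy(L=4, T=10.0, C=2000))
```

At low T the Boltzmann weights are dominated by a few rare low-energy configurations that a uniform sampler almost never produces, which is precisely the failure described above.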

Importance Sampling

Importance sampling is based on the following principle: we generate spin configurations s that are the most probable at T, namely we generate microscopic states using the canonical probability. Once these states are generated, the average value < A > is calculated by a simple addition:

\[
\langle A \rangle \simeq \frac{1}{C} \sum_{i=1}^{C} A(s_i). \qquad (6.3)
\]

The question is: how do we generate these microscopic states according to the canonical probability at T in a way convenient for simulation?

An answer to this question is to use the following algorithm. Instead of generating these states in an independent manner, we generate a series of states called a "Markov chain," in which the (i + 1)-th state, called s_{i+1}, is obtained from the preceding state s_i by an appropriate transition probability w(s_i → s_{i+1}) between these two states. The choice of w(s_i → s_{i+1}) should reproduce the following probability at equilibrium (see Appendix A):

\[
P(s) = \frac{e^{-E(s)/k_B T}}{Z(T)}. \qquad (6.4)
\]

This is possible if we impose on w(s_i → s_{i+1}) the following "principle of detailed balance":

\[
w(s_i \to s_k)\, P(s_i) = w(s_k \to s_i)\, P(s_k). \qquad (6.5)
\]

Using (6.4), we get the detailed balance for a system at equilibrium:

\[
\frac{w(s_i \to s_k)}{w(s_k \to s_i)} = e^{-\Delta E/k_B T}, \qquad (6.6)
\]

where ΔE = E(s_k) - E(s_i).

If the above relation for equilibrium is obeyed, then there is in principle no problem for the system to reach equilibrium. The choices frequently used to respect the detailed balance (6.6) are

\[
w(s_i \to s_k) = \frac{e^{-\Delta E/k_B T}}{1 + e^{-\Delta E/k_B T}}, \qquad (6.7)
\]

\[
w(s_i \to s_k) = \min\!\left(1,\; e^{-\Delta E/k_B T}\right). \qquad (6.8)
\]

The second choice will be used in the following to write a Monte Carlo program for the Ising spin model.


  • (1) The state s_{i+1} can differ from the state s_i by just the state of a single spin. This facilitates the simulation task: the difference of energy ΔE between the two states is equal to the difference of energy of the single spin under consideration.
  • (2) The power of the Monte Carlo method resides in the facility to make the system transit from the state s_i to the state s_{i+1}: we just reverse a single spin or a block of spins. The first choice is called the "single-spin flip algorithm" or "Metropolis algorithm," and the second choice is called the "cluster-flip algorithm," which is shown in Section 6.5.
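The single-spin-flip acceptance rule of the Metropolis algorithm can be isolated in a small function; a minimal Python sketch (k_B = 1; the function name is illustrative):

```python
import math
import random

def metropolis_accept(dE, T, rng=random):
    """Metropolis rule: accept with probability min(1, exp(-dE/T)), k_B = 1."""
    if dE <= 0.0:
        return True          # a move that lowers the energy is always accepted
    return rng.random() < math.exp(-dE / T)

# The empirical acceptance rate for dE = 4J at T = 2 approaches exp(-2).
trials = 100000
rate = sum(metropolis_accept(4.0, 2.0) for _ in range(trials)) / trials
print(rate, math.exp(-2.0))
```

Because only the ratio of transition probabilities is fixed by (6.6), this rule satisfies detailed balance while keeping the acceptance rate as large as possible.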

Implementation: Construction of a Computer Program

We now implement the principle of the Monte Carlo simulation presented in the previous section. To simplify the presentation, we use the Metropolis algorithm hereafter. Let us consider the Ising model on a square lattice with a ferromagnetic interaction J between nearest neighbors. The Hamiltonian is given by

\[
\mathcal{H} = -J \sum_{\langle k, l \rangle} S(\mathbf{R}_k)\, S(\mathbf{R}_l), \qquad (6.9)
\]

where the sum is made over pairs of nearest neighbors and S(R_k) = ±1 indicates the spin at the position R_k. We suppose periodic boundary conditions. Next, we proceed by the following steps:

(1) Initial spin configuration:

In the case of the Ising model, we create a square lattice of dimension L x L where we attribute a spin to each site. For the square lattice, each site is defined by two Cartesian indices (i, j) corresponding to the position of the site (see Fig. 6.1). The spin occupying this site is S(i, j) with an attributed initial value (+1 or -1).
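This first step may be sketched as follows (a Python sketch; the function name and the choice of starting modes are illustrative):

```python
import random

def init_spins(L, mode="random", seed=0):
    """Create an L x L Ising configuration S(i, j) = +1 or -1.

    mode="random": disordered start (typical of high temperatures);
    mode="up":     all spins +1 (a ground-state start)."""
    rng = random.Random(seed)
    if mode == "up":
        return [[1] * L for _ in range(L)]
    return [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

spins = init_spins(8)
print(sum(s for row in spins for s in row))  # total magnetization of the start
```

An ordered start equilibrates faster at low T, a random start at high T; either is acceptable once enough equilibration steps are made.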

(2) Choice of physical parameters:

Here the parameters of the model are J and T. We take J = 1 as the energy unit.


Figure 6.1 Coding of the square lattice. Each site is defined by two Cartesian indices.

(3) Calculation of the energy of the spin S(i, j), using the periodic boundary conditions if it is at an edge: The best and fastest way to identify the neighbors of S(i, j) is to call its left neighbor (im, j), its right neighbor (ip, j), the one on its top (i, jp) and the one below it (i, jm) (see Fig. 6.1). One writes

im = i - 1 + L*(1/i),   ip = i + 1 - L*(i/L),
jm = j - 1 + L*(1/j),   jp = j + 1 - L*(j/L),

where we see that im = i - 1 as long as i ≠ 1 and ip = i + 1 as long as i ≠ L, because the integer divisions (1/i) and (i/L) give zero (with Fortran). When i = 1, we have im = L, and when i = L, we have ip = 1. This automatically respects the periodic condition in the x direction: the left neighbor of the spin S(1, j) is S(L, j), and the right neighbor of S(L, j) is S(1, j). The same explanation holds for the y direction. The energy of the spin S(i, j) is thus written as

\[
E1 = -J\, S(i, j)\,\big[S(im, j) + S(ip, j) + S(i, jm) + S(i, jp)\big].
\]
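The same integer-division trick works in Python, where // denotes integer division; the local energy E1 then follows directly (a sketch keeping the 1-based indices of the text; names are illustrative):

```python
def neighbors(i, j, L):
    """Periodic neighbors of site (i, j) with 1 <= i, j <= L.

    The integer divisions 1 // i and i // L vanish except at the
    edges i = 1 and i = L, reproducing the Fortran trick."""
    im = i - 1 + L * (1 // i)
    ip = i + 1 - L * (i // L)
    jm = j - 1 + L * (1 // j)
    jp = j + 1 - L * (j // L)
    return im, ip, jm, jp

def local_energy(S, i, j, L, J=1.0):
    """E1 = -J S(i,j) [S(im,j) + S(ip,j) + S(i,jm) + S(i,jp)]."""
    im, ip, jm, jp = neighbors(i, j, L)
    return -J * S[i][j] * (S[im][j] + S[ip][j] + S[i][jm] + S[i][jp])

# An (L+1) x (L+1) array lets us keep the 1-based indices of the text.
L = 4
S = [[1] * (L + 1) for _ in range(L + 1)]   # all spins up
print(neighbors(1, 1, L))                   # periodic wrapping at the edge
print(local_energy(S, 1, 1, L))
```
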

(4) Updating the spin S(i, j) and calculating its new energy:

For an Ising spin, the new state of S(i, j) is obtained by changing its sign. Its new energy is thus E2 = -E1. If E2 < E1, the new orientation of S(i, j) is accepted, according to (6.8). If E2 > E1, the new orientation is accepted with the probability e^{-(E2-E1)/T} (we take kB = 1 for simplicity). This step is called a "spin update."

(5) Taking another spin and repeating steps 3 and 4:

We continue until all spins have been updated: We say we have made one Monte Carlo step per spin.
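Steps 3-5 together constitute one Monte Carlo step per spin; a minimal Python sketch of such a sweep (k_B = 1, 0-based indices for brevity; names are illustrative):

```python
import math
import random

def local_energy(S, i, j, L, J=1.0):
    """Energy of spin (i, j) with its four neighbors, periodic boundaries."""
    return -J * S[i][j] * (S[(i - 1) % L][j] + S[(i + 1) % L][j]
                           + S[i][(j - 1) % L] + S[i][(j + 1) % L])

def mc_sweep(S, L, T, rng=random):
    """One Monte Carlo step per spin: visit every site once, propose a flip,
    accept it with the Metropolis probability min(1, exp(-dE/T))."""
    for i in range(L):
        for j in range(L):
            dE = -2.0 * local_energy(S, i, j, L)   # E2 - E1 = -2 E1
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                S[i][j] = -S[i][j]

L = 8
S = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for _ in range(100):
    mc_sweep(S, L, T=1.0)
print(abs(sum(s for row in S for s in row)) / (L * L))  # |M| per spin
```

Here the spins are visited sequentially; visiting them in random order is an equally valid choice as long as every spin is updated on average once per step.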

(6) Equilibrating the system with N1 Monte Carlo steps:

We have to make many Monte Carlo steps to equilibrate the system at T, since the initial configuration does not in general correspond to an equilibrium state at the temperature of the simulation. The repetition of spin updates is thus necessary to bring the system to equilibrium at T.

(7) Averaging physical quantities with N2 Monte Carlo steps:

During the N2 Monte Carlo steps, we calculate the average values of the physical quantities of interest, such as the energy and the magnetization:

\[
\langle E \rangle = \frac{1}{N_2} \sum_{t=1}^{N_2} E(t), \qquad
\langle M \rangle = \frac{1}{N_2} \sum_{t=1}^{N_2} M(t),
\]

where

\[
E(t) = -\frac{J}{2N} \sum_{i,j} S(i, j)\,\big[S(im, j) + S(ip, j) + S(i, jm) + S(i, jp)\big], \qquad (6.16)
\]

\[
M(t) = \frac{1}{N} \sum_{i,j} S(i, j),
\]

with N = L x L, t being the Monte Carlo "time," and E(t), M(t) and S(i, j) the energy per spin, the magnetization and the spin at t. The factor 1/2 in (6.16) is to avoid the double counting of the nearest-neighbor bonds.
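Steps 6 and 7 can be combined into a complete, if minimal, driver (a Python sketch, k_B = 1; N1 equilibration steps, N2 measuring steps; all names are illustrative):

```python
import math
import random

def local_energy(S, i, j, L, J=1.0):
    """Energy of spin (i, j) with its four neighbors, periodic boundaries."""
    return -J * S[i][j] * (S[(i - 1) % L][j] + S[(i + 1) % L][j]
                           + S[i][(j - 1) % L] + S[i][(j + 1) % L])

def mc_sweep(S, L, T, rng):
    """One Monte Carlo step per spin with the Metropolis acceptance rule."""
    for i in range(L):
        for j in range(L):
            dE = -2.0 * local_energy(S, i, j, L)
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                S[i][j] = -S[i][j]

def simulate(L, T, N1, N2, seed=0):
    """Equilibrate for N1 sweeps, then average E and |M| per spin
    over N2 measuring sweeps (k_B = 1)."""
    rng = random.Random(seed)
    S = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(N1):                       # step 6: equilibration
        mc_sweep(S, L, T, rng)
    N = L * L
    E_acc = M_acc = 0.0
    for _ in range(N2):                       # step 7: averaging
        mc_sweep(S, L, T, rng)
        # factor 1/2: each bond appears in the local energies of both its spins
        E = 0.5 * sum(local_energy(S, i, j, L)
                      for i in range(L) for j in range(L))
        E_acc += E / N
        M_acc += abs(sum(s for row in S for s in row)) / N
    return E_acc / N2, M_acc / N2

for T in (1.0, 2.0, 3.0):
    print(T, simulate(L=8, T=T, N1=500, N2=500))
```

Averaging |M| rather than M avoids the cancellation between the two symmetric ordered states, which a finite lattice visits both of during a long run.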


Figure 6.2 Example of the time evolution (in units of Monte Carlo steps), during equilibrating, of the energy per spin E(t) (in units of J), at kBT/J = 1, for the ferromagnetic Ising model on the triangular lattice with a random initial spin configuration.


  • (i) During the N1 Monte Carlo steps for equilibrating, we can record the instantaneous energy and magnetization at each Monte Carlo step in order to observe their time evolution. We can then see the time necessary to reach equilibrium. An example is shown in Fig. 6.2.
  • (ii) At the end of step 7, we record all average values in a file, and then restart the simulation at another temperature. We have at the end average values for several temperatures so that we can plot average values of physical quantities versus T. From those curves, we can recognize the phase transition as well as other properties of the system. We show some examples in the next section.