# Metropolis Algorithm

Many problems with simulation, estimation, and GOF are due to the simulation performing badly: the sampler does not move freely between graphs. In this section, we present a brief but more detailed description of the Metropolis sampler. A sequence $x^{(0)}, x^{(1)}, \ldots, x^{(M-1)}, x^{(M)}$ of graphs is generated as follows: in each iteration $m$, with the current state $x^{(m-1)}$, a pair of nodes $i$ and $j$ is chosen at random with a proposal to update the corresponding tie-variable, such that the proposal graph $x^{*}$ is equal to $x^{(m-1)}$ except that $x^{*}_{ij}$ is set to $1 - x^{(m-1)}_{ij}$; we call this a "toggle." If the target distribution is $P_{\theta}(x)$, we accept the proposed move with probability

$$\min\left\{ 1,\ \frac{P_{\theta}(X = x^{*})}{P_{\theta}(X = x^{(m-1)})} \right\},$$

where the ratio in the acceptance probability is called the "Hastings ratio." If the move is accepted, we set $x^{(m)} = x^{*}$; otherwise, we set $x^{(m)} = x^{(m-1)}$.
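The toggle-based updating rule above can be sketched in code. The following is a minimal illustration, not the book's implementation: the edge and triangle counts stand in for an arbitrary statistics vector $z(x)$, and the helper names (`graph_stats`, `metropolis_step`) are invented for this example. Because $P_{\theta}$ is an exponential family, the probability ratio reduces to $\exp\{\theta^{\top}(z(x^{*}) - z(x^{(m-1)}))\}$ and the normalizing constant cancels.

```python
import math
import random

def graph_stats(x, n):
    """Edge and triangle counts of an undirected graph given as a 0/1
    adjacency matrix x. Hypothetical example statistics; any vector of
    graph statistics z(x) would do."""
    edges = sum(x[i][j] for i in range(n) for j in range(i + 1, n))
    triangles = sum(
        x[i][j] * x[j][k] * x[i][k]
        for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n)
    )
    return [edges, triangles]

def metropolis_step(x, n, theta, rng=random):
    """One toggle proposal: pick a dyad (i, j), flip x[i][j], and accept
    with probability min(1, P_theta(x*)/P_theta(x)). The ratio is
    exp(theta . (z(x*) - z(x))), so no normalizing constant is needed."""
    i, j = rng.sample(range(n), 2)
    z_old = graph_stats(x, n)
    x[i][j] = x[j][i] = 1 - x[i][j]          # proposal x*: toggle one tie
    z_new = graph_stats(x, n)
    log_ratio = sum(t * (zn - zo) for t, zn, zo in zip(theta, z_new, z_old))
    if rng.random() >= math.exp(min(0.0, log_ratio)):
        x[i][j] = x[j][i] = 1 - x[i][j]      # reject: undo the toggle
    return x
```

Recomputing the full statistics vector at every step is wasteful; the change statistics discussed below replace the `z_new - z_old` difference with a local computation.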

Note that the ratio of probabilities in the Hastings ratio may be expressed as the ratio of the conditional probability of $X_{ij} = 1 - x^{(m-1)}_{ij}$ to the conditional probability of $X_{ij} = x^{(m-1)}_{ij}$, given the rest of the graph $x^{(m-1)}_{-ij}$. Recall that this ratio is available in closed form and that its logarithm is the logit (or the negative of the logit):

$$\log \frac{P_{\theta}\left(X_{ij} = 1 - x^{(m-1)}_{ij} \mid X_{-ij} = x^{(m-1)}_{-ij}\right)}{P_{\theta}\left(X_{ij} = x^{(m-1)}_{ij} \mid X_{-ij} = x^{(m-1)}_{-ij}\right)} = \left(1 - 2x^{(m-1)}_{ij}\right)\theta^{\top}\delta_{ij}\left(x^{(m-1)}\right),$$

where $\delta_{ij}(x)$ is the vector of differences in the statistics that result from toggling the tie-variable for $(i, j)$.

Furthermore, this Hastings ratio does not involve the normalizing constant, and we only need to calculate the differences in the statistics that result from changing $x^{(m-1)}_{ij}$ to $1 - x^{(m-1)}_{ij}$. These differences are commonly called the "change statistics" (see Eq. 6.1).
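As a sketch of how change statistics avoid recomputing the full statistics vector, consider edge and triangle counts as example statistics (an illustration, not Eq. 6.1 itself): toggling the dyad $(i, j)$ changes the edge count by one and the triangle count by the number of common neighbours of $i$ and $j$, so the log Hastings ratio is $(1 - 2x_{ij})\,\theta^{\top}\delta_{ij}(x)$. Function names here are our own.

```python
def change_stats(x, n, i, j):
    """Change statistics delta_ij(x) for edge and triangle counts: the
    difference z(x with x_ij = 1) - z(x with x_ij = 0). One tie changes
    the edge count by 1 and the triangle count by the number of common
    neighbours of i and j."""
    common = sum(x[i][k] * x[j][k] for k in range(n) if k not in (i, j))
    return [1, common]

def log_hastings_ratio(x, n, i, j, theta):
    """log Hastings ratio for toggling x_ij:
    (1 - 2 * x_ij) * theta . delta_ij(x)."""
    sign = 1 - 2 * x[i][j]
    return sign * sum(t * d for t, d in zip(theta, change_stats(x, n, i, j)))
```

The sign factor $(1 - 2x_{ij})$ is $+1$ when the toggle adds a tie (the logit) and $-1$ when it deletes one (the negative of the logit).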

Alternatives to proposing a move through a simple toggle are given in Snijders (2002) and generally involve proposing bigger moves, or selecting null ties and present ties with unequal probabilities, as in, for example, the tie/no-tie sampler (Morris, Handcock, & Hunter, 2008). These alternative algorithms are generally more efficient because they reduce the required burn-in between graphs.

From Figure 12.3, we note that once the chain has settled in, most graphs are sampled from the right area. Thus, there is no need to restart the chain if we want several graphs from the same distribution. Instead, we let the chain burn in for a shorter period of time between sample points. The number of iterations discarded between sample points is called "thinning." It is important to keep in mind that these sampled graphs constitute a dependent sample, and for this reason an important diagnostic when simulating graphs is to make sure that the autocorrelation of the statistics is not too big (ideally zero, but a maximum of 0.4 in absolute value is tolerated; see Section 12.4.2 on the multiplication factor).
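The thinning and autocorrelation check described above can be sketched as follows. This is a minimal illustration with invented function names; in practice one would use standard MCMC diagnostic tools. The series would be a statistic (e.g. the edge count) recorded at each sample point of the chain.

```python
def lag1_autocorrelation(series):
    """Sample lag-1 autocorrelation of a sequence of graph statistics
    recorded along the chain; a dependent sample shows values far from
    zero, and |r| <= 0.4 is the tolerance used in the text."""
    n = len(series)
    mean = sum(series) / n
    var = sum((s - mean) ** 2 for s in series)
    if var == 0:
        return 0.0
    cov = sum((series[t] - mean) * (series[t + 1] - mean) for t in range(n - 1))
    return cov / var

def thin(series, step):
    """Keep every step-th sample point; the iterations discarded
    in between are the 'thinning'."""
    return series[::step]
```

Increasing `step` until `abs(lag1_autocorrelation(thin(series, step))) <= 0.4` gives a sample whose dependence is within the tolerance quoted above.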