For directed graphs, an intuitive form of dependence is to allow the tie from person i to person j to depend on the tie from person j to person i. The model is then no longer a model for the individual ties of the network but for pairs of tie-variables, and these pairs are called "dyads."
Markov Dependence Assumption
Frank and Strauss (1986) proposed a "Markov dependence assumption," the simplest assumption that goes beyond the dyad. Two tie-variables are assumed to be independent unless they share a node. If instead of considering the edges of a graph as connecting nodes, we think of the nodes of the graph as connecting the edges, the Markov dependence assumption suggests itself. Because node i connects the possible edges (i, j) and (i, h), we say that the tie-variables corresponding to (i, j) and (i, h) are conditionally dependent given the rest of the graph.
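The shared-node rule above can be sketched as a simple predicate. This is an illustrative helper, not part of any standard library: two tie-variables are conditionally dependent under the Markov assumption exactly when their node pairs intersect.

```python
def share_a_node(tie_a, tie_b):
    """Markov dependence rule: two tie-variables are conditionally
    dependent iff the corresponding node pairs share at least one node."""
    return bool(set(tie_a) & set(tie_b))

# (i, j) and (i, h) share node i, so they are conditionally dependent:
print(share_a_node((1, 2), (1, 3)))  # True
# Ties on four distinct nodes are conditionally independent:
print(share_a_node((1, 2), (3, 4)))  # False
```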
This assumption accounts for the fact that whether Mary talks to John may depend on whether Mary talks to Peter, because both ties pertain to Mary. In addition, the probability that John talks to Peter may be affected by whether both John and Peter talk to Mary (under the Markov assumption, a possible tie between John and Peter is conditionally dependent on the ties between Mary and John and between Mary and Peter; note the triangle!). The Markov dependence assumption leads to the class of Markov random graphs, in which the log-probability of a graph is proportional to a weighted sum of counts of structural features such as edges, stars, and triangles. How these features are derived is summarized in Chapter 7; the structural features themselves and their interpretation are dealt with in the next section.
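The weighted sum of feature counts can be illustrated with a short sketch. The function names and the choice of features (edges, 2-stars, triangles) are for illustration only; this computes the sufficient statistics of a simple undirected Markov random graph and the corresponding unnormalised log-probability, without the normalising constant that makes the model a proper distribution.

```python
from itertools import combinations

def markov_statistics(n, edges):
    """Counts of edges, 2-stars, and triangles in an undirected graph
    on nodes 0..n-1, given a list of edges as (i, j) pairs."""
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    edge_count = len(edges)
    # A 2-star at node i is an unordered pair of ties incident to i.
    two_stars = sum(len(adj[i]) * (len(adj[i]) - 1) // 2 for i in range(n))
    triangles = sum(1 for i, j, k in combinations(range(n), 3)
                    if j in adj[i] and k in adj[i] and k in adj[j])
    return edge_count, two_stars, triangles

def log_prob_unnormalised(stats, weights):
    """Log-probability up to an additive constant:
    the weighted sum of the structural feature counts."""
    return sum(w * s for w, s in zip(weights, stats))

# A triangle on nodes 0, 1, 2 plus a pendant tie to node 3:
stats = markov_statistics(4, [(0, 1), (0, 2), (1, 2), (2, 3)])
print(stats)  # (4, 5, 1): 4 edges, 5 two-stars, 1 triangle
```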