Markov Modeling in Decision Analysis

INTRODUCTION

A pharmacoeconomic problem is tackled using a formal process that begins with constructing a mathematical model. In this book a number of pharmacoeconomic constructs are presented, ranging from spreadsheets to sophisticated numerical approximations to continuous compartment models. For nearly 50 years, the decision tree has been the most common and simplest formalism, comprising choices, chances, and outcomes. As discussed in Chapter 2, the modeler crafts a tree that represents near-term events within a population or cohort as structure and attempts to balance realism and attendant complexity with simplicity. In problems that lead to long-term differences in outcome, the decision model must have a definite time horizon, up to which the events are characterized explicitly. At the horizon, the future health of a cohort must be summarized and averaged into a “subsequent prognosis.” For problems involving quantity and quality of life, where the future natural history is well characterized, techniques such as the declining exponential approximation of life expectancy [1, 2] or differential equations may be used to generate outcome measures. Life tables may be used directly, or the results from clinical trials may be adopted to generate relevant values. Costs in decision trees are generally aggregated, collapsing substantial intrinsic variation into single monetary estimates.

Most pharmacoeconomic problems are less amenable to these summarizing techniques. In particular, clinical scenarios that involve a risk that is ongoing over time, competing risks that occur at different rates, or costs that need to be assessed incrementally lead to either rapidly branching decision trees or unrealistic pruning of possible outcomes for the sake of simplicity. In these cases, a more sophisticated mathematical model is employed to characterize the natural history of the problem and its treatment. This chapter explores the pharmacoeconomic modeling of cohorts using a relatively simple probabilistic characterization of natural history that can substitute for the outcome node of a decision tree. Beck and Pauker introduced the Markov process as a solution for the natural history modeling problem in 1983, building on their and others’ work with stochastic models over the previous six years [3]. During the ensuing 36 years, over 2,000 articles have directly cited either this paper or a tutorial published a decade later [4], and over 6,000 records in PubMed can be retrieved using (Markov decision model) OR (Markov cost-effectiveness) as a search criterion. This chapter will define the Markov process model by its properties and illustrate its use in pharmacoeconomics by exploring a simplified example from the field of advanced prostate cancer [5].

THE MARKOV PROCESS AND TRANSITION PROBABILITIES

Stochastic Processes

A Markov process is a special type of stochastic model. A stochastic process is a mathematical system that evolves over time with some element of uncertainty. This contrasts with a deterministic system, in which the model and its parameters specify the outcomes completely. The simplest example of a stochastic process is coin flipping. If a fair coin is flipped a number of times and a record of the result kept (H=“heads,” T=“tails”), a sequence such as THHTTHHHTTHHHTHTHHTTHTTTTHTHH might arise. At each flip (or trial), either T or H would result with equal probability of one-half. Dice rolling is another example of this type of stochastic system, known as an independent trial experiment. Each flip or roll is independent of all that have come before, because dice and coins have no memory of prior results. Independent trials have been studied and described for nearly three centuries [6].
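Independent trials of this kind are easy to simulate; a minimal sketch (the function name and seed are illustrative, not from the text):

```python
import random

def flip_sequence(n, seed=0):
    """Simulate n independent flips of a fair coin (H = heads, T = tails)."""
    rng = random.Random(seed)
    return "".join(rng.choice("HT") for _ in range(n))

# Every flip is H with probability one-half regardless of what came before,
# so the long-run fraction of heads settles near 0.5.
seq = flip_sequence(10_000)
print(seq[:29])
print(seq.count("H") / len(seq))  # close to 0.5
```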

Markov Processes

The Markov process relaxes this assumption a bit. In a Markov model, the probability of a trial outcome varies depending on the current result (generally known as a “state”). Andrei Andreevich Markov, a Russian mathematician, originally characterized such processes in the first decade of the 20th century [7]. It is easy to see how this model works via a simple example. Consider a clerk who assigns case report forms to three reviewers: Larry, Maureen, and Nell. The clerk assigns charts to these readers using a peculiar method. If the last chart was given to Larry, the clerk assigns the current one to Larry with probability one-quarter, and to Maureen or Nell with equal probability of three-eighths. Maureen never gets two charts in a row; after Maureen, the clerk assigns the next chart to Larry with probability one-quarter and to Nell with probability three-quarters. After Nell gets a chart, the next chart goes to Larry with probability one-half, and to Nell and Maureen with probability one-quarter each. Thus, the last assignment (Larry, Maureen, or Nell) must be known to determine the probability of the current assignment.

Transition Probabilities

Table 4.1 shows this behavior as a matrix of transition probabilities. Each cell of Table 4.1 shows the probability of a chart being assigned to the reviewer named as column head if the last chart was assigned to the reviewer named as row head. An n × n matrix is a probability matrix if every element is nonnegative and each row sums to 1. Since the row headings and column headings refer to states of the process, Table 4.1 is a special form of probability matrix: a transition probability matrix.

This stochastic model differs from independent trials because of the Markov property: the probability distribution of future states of the process depends only on the current state, not on the prior natural history. That is, one does not need to know the full history of past scheduling, but only who was most recently assigned a chart. For example, if Larry got the last review, the next one will be assigned to Larry with probability one-quarter and to Maureen or Nell with probability three-eighths each.
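The Markov property is easy to see in a simulation: drawing the next assignment consults only the current state. A minimal sketch (the dictionary name and seed are illustrative, not from the text):

```python
import random

# Rows of Table 4.1: next-assignment probabilities given the current assignee.
TRANSITIONS = {
    "Larry":   {"Larry": 0.25, "Maureen": 0.375, "Nell": 0.375},
    "Maureen": {"Larry": 0.25, "Maureen": 0.0,   "Nell": 0.75},
    "Nell":    {"Larry": 0.5,  "Maureen": 0.25,  "Nell": 0.25},
}

def next_assignee(current, rng):
    """Draw the next reviewer using only the current one -- the Markov property."""
    probs = TRANSITIONS[current]
    return rng.choices(list(probs), weights=probs.values())[0]

rng = random.Random(42)
state, counts = "Larry", {"Larry": 0, "Maureen": 0, "Nell": 0}
for _ in range(100_000):
    state = next_assignee(state, rng)
    counts[state] += 1

# Long-run fractions approach the limiting values of about 0.353, 0.235, 0.412.
print({k: round(v / 100_000, 3) for k, v in counts.items()})
```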

Working with a Transition Probability Matrix

The Markov property leads to some interesting results. What is the likelihood that, if Maureen is assigned a patient chart, she will also get the chart after next? This can be calculated as follows:

TABLE 4.1

Chart Assignment Probability Table

                       Next
Current       Larry    Maureen    Nell
Larry         0.250    0.375      0.375
Maureen       0.250    0.000      0.750
Nell          0.500    0.250      0.250

After Maureen, the probability of Larry is one-quarter and of Nell three-quarters. After Larry, the probability of Maureen is three-eighths, and after Nell it is one-quarter. So, the probability of Maureen → anyone → Maureen is (one-quarter × three-eighths) + (three-quarters × one-quarter), or 0.281. A complete table of probabilities two assignments after a known one is shown in Table 4.2. This table is obtained using matrix multiplication, treating Table 4.1 as a 3 × 3 matrix and multiplying it by itself.[1] Note that the probability of Maureen going to Maureen in two steps is found in the corresponding cell of Table 4.2.
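The two-step calculation is ordinary matrix multiplication; a minimal sketch in pure Python (no libraries assumed):

```python
# Table 4.1 as a 3 x 3 matrix; rows and columns ordered Larry, Maureen, Nell.
P = [
    [0.250, 0.375, 0.375],
    [0.250, 0.000, 0.750],
    [0.500, 0.250, 0.250],
]

def mat_mul(a, b):
    """Multiply two square matrices with explicit index sums."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P2 = mat_mul(P, P)  # two-step transition probabilities (Table 4.2)
print(round(P2[1][1], 3))  # Maureen -> anyone -> Maureen: 0.281
```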

This process can be continued because Table 4.2 is also a probability matrix, in that the rows all sum to 1. In fact, after three more multiplications by Table 4.1 (five assignments after a known one), the matrix is represented by Table 4.3.

The probabilities in each row are converging, and by the seventh cycle after a known assignment, the matrix has reached the values shown in Table 4.4. This is also a probability matrix, with all of the rows identical, and it has a straightforward interpretation. Seven or more charts after a known assignment, the probability that the next chart review would go to Larry is 0.353, to Maureen 0.235, and to Nell 0.412. Equivalently, if someone observes the clerk at any random time, the likelihood of the next chart going to Larry is 0.353, and so on. This is the limiting Markov matrix, or the steady state of the process. Over time Larry would be issued 35.3% of the charts, Maureen fewer, and Nell somewhat more.

TABLE 4.2

Two-Step Markov Probabilities

                  Chart After Next
Current       Larry    Maureen    Nell
Larry         0.344    0.188      0.469
Maureen       0.438    0.281      0.281
Nell          0.313    0.250      0.438

TABLE 4.3

Assignment Model After Five Cycles

                 After Five Cycles
Current       Larry    Maureen    Nell
Larry         0.355    0.235      0.411
Maureen       0.352    0.237      0.411
Nell          0.352    0.235      0.413

TABLE 4.4

Steady-State or Limiting Markov Matrix

Current       Larry    Maureen    Nell
Larry         0.353    0.235      0.412
Maureen       0.353    0.235      0.412
Nell          0.353    0.235      0.412
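The convergence to Table 4.4 can be checked by repeatedly multiplying Table 4.1 by itself; a minimal pure-Python sketch (names are illustrative). Solving the balance equations exactly gives the limiting distribution (6/17, 4/17, 7/17), which rounds to the tabled values:

```python
# Table 4.1; rows and columns ordered Larry, Maureen, Nell.
P = [
    [0.250, 0.375, 0.375],
    [0.250, 0.000, 0.750],
    [0.500, 0.250, 0.250],
]

def mat_mul(a, b):
    """Multiply two square matrices with explicit index sums."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Raise P to a high power; every row converges to the limiting distribution.
Pn = P
for _ in range(50):
    Pn = mat_mul(Pn, P)

for row in Pn:
    print([round(x, 3) for x in row])  # each row: [0.353, 0.235, 0.412]

print(round(6 / 17, 3), round(4 / 17, 3), round(7 / 17, 3))  # 0.353 0.235 0.412
```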

[1] Matrix multiplication can be reviewed in any elementary textbook of probability or finite mathematics, or at http://en.wikipedia.org/wiki/Matrix_multiplication.