Use of Absorbing Markov Models in Clinical Decision Analysis

The Markov formalism can substitute for an outcome in a typical decision tree. The simplest outcome structure is life expectancy, which has a natural expression in a Markov cohort model: life expectancy is the summed experience of the cohort over time. If we assign credit for being in a state at the end of a cycle, the value of each state function in Figure 4.2 represents the probability of being alive in that state in that cycle. At the start of the process, all members of the cohort are in the Well state. At Cycle One (Table 4.5), 80% are still Well and 15% have Progressive disease, so the cohort has experienced 0.8 average cycles Well and 0.15 cycles in Progressive disease. At Cycle Two (Table 4.6), 64.3% are Well and 22.5% have Progressive disease. Thus, after two cycles, the cohort experience is 0.8 + 0.643, or 1.443, cycles Well and 0.15 + 0.225, or 0.375, cycles in Progressive disease. Summing the process over 45 cycles, by which point virtually all members are in the Dead state, yields 4.262 cycles Well and 2.630 cycles in Progressive disease. So, the life expectancy of this cohort, transitioning according to the probability matrix in Table 4.5, is 6.892 cycles, roughly 1.6 cycles Well for every cycle in Progressive disease. Refinements to this approach, involving correction for initial state membership, can be found in Sonnenberg and Beck [4].

FIGURE 4.2 Absorbing Markov chain natural history.
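This cohort bookkeeping is straightforward to express in code. The sketch below (Python with NumPy) advances the cohort through 45 cycles and credits state membership at the end of each cycle. The full transition matrix is not reproduced in this excerpt; the values below, including a small Progressive-to-Well recovery probability, are assumptions chosen to reproduce the figures quoted above.

import numpy as np

# Transition matrix over the states [Well, Progressive, Dead].
# Assumed values: they reproduce the cohort figures in the text
# (0.8/0.15 at Cycle One, 0.643/0.225 at Cycle Two, 4.262/2.630
# expected cycles after 45), since the matrix itself is not
# printed in this excerpt.
P = np.array([
    [0.80, 0.15, 0.05],   # from Well
    [0.02, 0.70, 0.28],   # from Progressive
    [0.00, 0.00, 1.00],   # Dead is absorbing
])

state = np.array([1.0, 0.0, 0.0])   # entire cohort starts Well
cycles = np.zeros(3)                # cumulative cycles credited per state

for _ in range(45):
    state = state @ P               # advance the cohort one cycle
    cycles += state                 # credit membership at end of cycle

print(f"Cycles Well:        {cycles[0]:.3f}")                     # ~4.262
print(f"Cycles Progressive: {cycles[1]:.3f}")                     # ~2.630
print(f"Life expectancy:    {cycles[0] + cycles[1]:.3f} cycles")  # ~6.892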

Whereas a traditional outcome node is assigned a value (or, as in Chapters 2, 7, 8, 10, and 12, a utility), the Markov model calculates that value by summing adjusted cohort membership. For this to work, each Markov state is assigned an incremental utility for being in that state for one model cycle. In the example above, the Well state might be given a value of 1 and the Progressive state a value of 0.7. That is, the utility of being in the Progressive state for a cycle is 70% of the value of being in the Well state. In most models, Dead is worth 0. Incremental costs can also be applied, for Markov cost-effectiveness or cost-utility analysis. For this tutorial example, assume the cost of being in the Well state is $5,000 per cycle, and in the Progressive state $8,000 per cycle. Summing the cohort over 45 cycles leads to the results in Table 4.7. In the second column, the overall cost in the Well state is calculated as 4.262 x $5,000, or $21,311. At $8,000 per cycle in the Progressive state, the total cost in that state is $21,043. Thus, in this tutorial example, the cohort can expect to survive 6.892 cycles, or 6.103 quality-adjusted cycles, for a total cost of over $42,000. These values would substitute for the outcomes at the terminal node of a decision tree model, and could be used for decision or cost-utility analysis.
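As a quick check of the arithmetic in Table 4.7 below, this minimal sketch applies the per-cycle utilities and costs from the text to the expected cycles per state; small differences from the table arise from rounding the cycle counts.

import numpy as np

# Expected cycles per state from the cohort run (Table 4.7).
cycles  = np.array([4.262, 2.630, 0.0])      # Well, Progressive, Dead
utility = np.array([1.0, 0.7, 0.0])          # incremental utility per cycle
cost    = np.array([5000.0, 8000.0, 0.0])    # incremental cost per cycle ($)

print(f"Quality-adjusted cycles: {cycles @ utility:.3f}")   # ~6.103
print(f"Total cost:              ${cycles @ cost:,.0f}")    # ~$42,350
# Table 4.7 shows $42,354, computed from the unrounded cycle counts.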

TABLE 4.7

Markov Cohort Costs and Expected Utilities

                    Well (Q = 1.0)   Progressive (Q = 0.7)    Total
Expected cycles          4.262              2.630             6.892
Quality-adjusted         4.262              1.841             6.103
Cost/cycle              $5,000             $8,000
Total costs            $21,311            $21,043           $42,354

An alternative way to use a Markov model is to simulate the behavior of a cohort of patients, one at a time. This approach is known as a Monte Carlo analysis. Each patient begins in the starting state (Well, in this example) and, at the end of each cycle, the patient is randomly allocated to a new state based on the transition probability matrix. Life expectancy and quality adjustments are handled as in the cohort solution. When the patient enters the Dead state, the simulation terminates and a new patient is queued. This process is repeated a large number of times, yielding a distribution of survival, quality-adjusted survival, and costs. Modern approaches to Monte Carlo analysis incorporate probability distributions on the transition probabilities, enabling statistical measures such as the mean and variance to be determined [8].
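A minimal sketch of such a first-order Monte Carlo (microsimulation) run, under the same assumed transition matrix as the cohort sketch above; individual patients are walked through the model one at a time and their outcomes aggregated.

import numpy as np

rng = np.random.default_rng(0)

# Same illustrative transition matrix as the cohort sketch above
# (an assumption; the matrix is not printed in this excerpt).
P = np.array([
    [0.80, 0.15, 0.05],   # from Well
    [0.02, 0.70, 0.28],   # from Progressive
    [0.00, 0.00, 1.00],   # Dead is absorbing
])
UTILITY = np.array([1.0, 0.7, 0.0])
WELL, DEAD = 0, 2

def simulate_patient(max_cycles=200):
    """Walk one patient from Well until death; return (cycles, QALYs)."""
    state, cycles, qalys = WELL, 0.0, 0.0
    for _ in range(max_cycles):
        state = rng.choice(3, p=P[state])   # random transition
        if state == DEAD:
            break
        cycles += 1.0                       # credit end-of-cycle membership
        qalys += UTILITY[state]
    return cycles, qalys

results = np.array([simulate_patient() for _ in range(10_000)])
print(f"Mean survival: {results[:, 0].mean():.3f} cycles")   # ~6.89
print(f"Mean QALYs:    {results[:, 1].mean():.3f}")          # ~6.10
print(f"SD survival:   {results[:, 0].std():.3f}")

With enough simulated patients the means converge on the cohort results, while the spread of the distribution is information the cohort solution by itself does not provide.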

Two enhancements to the Markov model render the formalism more realistic for clinical studies; both involve adding a time element. First, although the Markov property requires no memory of prior states, it is possible to superimpose a time function on a transition probability. The most obvious example is the risk of death, which rises over time regardless of other clinical conditions. This can be handled in a Markov model by making the transition probability to death a function of time: in the tutorial example, p(Well → Dead) = 0.05 + G(age), where G represents the Gompertz mortality function [9] or another well-characterized actuarial model.
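A minimal sketch of such a time-dependent transition probability follows. The Gompertz parameters A and B are illustrative placeholders, not calibrated values; a real model would fit them to life-table data and would also rescale the remaining Well-state transitions so each row of the matrix still sums to 1.

import math

# Illustrative (uncalibrated) Gompertz parameters: hazard A * exp(B * age).
A, B = 1e-4, 0.09

def p_well_to_dead(age):
    """p(Well -> Dead) = 0.05 + G(age), capped at 1.0."""
    gompertz = A * math.exp(B * age)   # Gompertz mortality term
    return min(0.05 + gompertz, 1.0)

for age in (40, 60, 80):
    print(age, round(p_well_to_dead(age), 4))   # probability rises with age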

Second, standard practice in decision modeling discounts future costs and benefits, reflecting risk aversion and the diminishing present value of assets and events that occur later. Discounting (see Chapter 11) may be incorporated in a Markov model as simply another function that modifies (i.e., reduces) the state-dependent incremental utilities and costs.
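A minimal sketch of per-cycle discounting; the 3% annual rate and the one-year cycle length are assumptions for illustration only.

# Discount a stream of per-cycle values at an assumed 3% annual rate,
# treating each cycle as one year.
RATE = 0.03

def discounted(values, rate=RATE):
    """Sum per-cycle values, discounting cycle t by 1 / (1 + rate)**t."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(values, start=1))

# e.g., $5,000 per cycle for 5 cycles: ~$22,899 rather than $25,000.
print(f"${discounted([5000.0] * 5):,.2f}")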

 