Multicriteria Decision Analysis for the Healthcare Decision Maker

INTRODUCTION

... a formalization of common sense for decision problems which are too complex for informal use of common sense.

(R. L. Keeney 1982, 806)

Keeney’s quote concisely captures the support that multicriteria decision analysis (MCDA) offers decision makers. Decisions can be complex, involving many alternative courses of action, many criteria against which to evaluate these alternatives, uncertainty in the performance against these criteria, and conflicting perspectives. Where this is the case, decision makers risk relying on simplifying heuristics that cannot be guaranteed to reach the “right” decision. MCDA can support decision makers facing such situations to make better, more transparent, and consistent decisions.

We begin by providing a brief introduction to MCDA in the context of healthcare decisions. This is followed by a high-level description of the steps involved in conducting an MCDA. Finally, we illustrate how to implement an MCDA using the example of an individual’s choice of contraception.

MCDA - A BRIEF OVERVIEW

9.2.1 What Is MCDA?

MCDA is a term used to denote a collection of approaches that allow one to formally evaluate alternatives against a set of multiple, often conflicting, objectives, from the perspective of individuals or groups of decision makers and where no dominating course of action is evident [1]. As such, MCDA is a sociotechnical process. It is designed around the social element of the decision, that is, engaging stakeholders within the process, and the technical aspect of how the problem is solved, that is, which methods and tools are used. Regardless of the methods used, the same broad sequence of steps is followed.

MCDA can aid decision making by providing a way to structure deliberation, facilitate knowledge transfer, promote better-quality discussions, and transparently communicate the reasons for decisions [2]. Table 9.1 lists the key benefits of MCDA.

9.2.2 Uses of MCDA

MCDA is a technique that has been widely applied in nonhealth contexts [3, 4]. More recently, there has been an increased interest in the applications of MCDA in health [2, 5]. MCDA can be useful in supporting a wide range of decisions, such as:

TABLE 9.1

The Key Benefits of MCDA

  • Completeness: Ensuring that all relevant criteria are considered.
  • Formal incorporation of value judgments: Quantification of stakeholders’ value judgments and combination with performance measurement.
  • Understanding: Fostering a shared understanding of a decision problem and identifying areas of important disagreement.
  • Transparency: Forming a transparent link between judgments and decisions.

  • Pharmaceutical companies making decisions regarding their pipelines and trial designs. For example, Allergan conducted an MCDA to support decisions about investment in 59 assets across five therapeutic areas [6].
  • Marketing authorization: The Innovative Medicines Initiative Pharmacoepidemiological Research on Outcomes of Therapeutics by a European Consortium (IMI PROTECT) applied MCDA to support benefit-risk assessment involved in marketing authorization decisions [7]. For other examples, see Ho et al. [8] and European Medicines Agency [9-11].
  • Reimbursement decisions: Examples include the use of MCDA to support the evaluation of new medical technologies for reimbursement purposes in Hungary [12], Italy [13], Germany [14], and Thailand [15].
  • Resource allocation decisions: MCDA was used to support the Isle of Wight Primary Care Trust’s allocation of resources across 21 interventions spread across five health priority areas [16].
  • Prescription and shared decision making: MCDA was used to support the choice of colorectal cancer screening options [17].

IMPLEMENTING AN MCDA - AN OVERVIEW OF THE STEPS

Various guidelines for implementing MCDA are available [1, 18-20]; these broadly agree on the sequence of steps as listed in Table 9.2. While the steps are presented in a linear manner, they are often undertaken in an iterative manner, refining the MCDA as learning is gained throughout the process.

9.3.1 Step 1: Problem Structuring

The first step in implementing an MCDA is to define the decision problem. This involves answering a number of questions, such as: What are the alternatives to be evaluated? Who are the decision makers? What are their objectives? And what type of decision do they have to make? The answer to the last question may be that they need to rank alternatives.

TABLE 9.2

MCDA Steps

Step 1: Problem structuring
  • Agree on a shared definition of the problem

Step 2: Criteria selection
  • Identify criteria important to decision makers

Step 3: Determining the performance of alternatives against criteria
  • Gather data to measure performance against each outcome

Step 4: Determining the scores and weights - estimating the values of the outcomes
  • Evaluate the scores of outcomes
  • Elicit weights (trade-offs) representing the relative importance of the outcomes

Step 5: Evaluation and comparison of alternatives
  • Evaluate alternatives
  • Interpret and communicate the results
  • Conduct a sensitivity analysis

Another important question is whether the decision makers want to apply their own preference in the evaluation or to elicit the preferences of another stakeholder group. For instance, regulators may want to understand patients’ preferences, or a reimbursement agency may want to know the preferences of the general population.

A definition of the decision problem can be provided by reviewing documents such as the mission statement of the organization the decision makers represent or the rationale for previous decisions they have made. It is also recommended that the decision makers themselves be consulted. Other expert input can also be useful, such as key opinion leaders and/or patient advocacy groups.

9.3.2 Step 2: Criteria Selection

Different decisions will involve different sets of objectives and, thus, evaluation criteria. For instance, a prescription decision may consider clinical outcomes as well as convenience criteria, such as the mode of administration and the location where treatment is received. A resource allocation decision may consider a broader set of criteria, such as equity and budget impact.

Two approaches can be used to generate the list of criteria to include in the MCDA: value-focused thinking (also called the top-down approach) and alternative-focused thinking (the bottom-up approach). The first approach identifies fundamental objectives and decomposes them into subobjectives. The second approach starts from the existing alternatives and their distinguishing characteristics to articulate objectives [21, 22]. While there are strong arguments for using value-focused thinking [22], research also shows that individuals may struggle to think about their fundamental objectives and may need prompts to support articulating what they hope to achieve [23]. Preliminary research on existing studies can provide a starting point to guide these discussions. Franco and Montibeller [21] provide a comprehensive list of tools for generating objectives.

The output from the review can then be validated and refined through discussions with the stakeholders. In contexts where there are numerous stakeholders with differing technical backgrounds, facilitated decision-conferencing workshops can be useful [6].

Once these steps have been conducted, a long list of concepts will have been gathered. In many cases this stage generates too many criteria to incorporate into the MCDA. There is no rule as to how many criteria to include, although as few criteria should be included as are requisite for the decision. Identifying this number requires making trade-offs between the increased validity of a more complete set of criteria and the fatigue resulting from a longer decision task [24]. MCDAs in healthcare use between 3 and 19 criteria, with an average of 8.2 [2].

To narrow down the criteria, it is helpful to keep in mind the desirable properties of criteria sets, described in Table 9.3. These will depend on the form of the MCDA aggregation function adopted (see step 5). The most commonly adopted aggregation function in healthcare is an additive model (also referred to as “weighted sum model” or “additive multiattribute value model”). In this model, a numerical score for each alternative on a given criterion is multiplied by the relative weight for the criterion and then summed to get a “total score” for each alternative. While simple to construct and communicate, additive models require adherence to certain criteria set properties, such as preference independence, that is, that the preference for a criterion can be stated without knowing how an intervention performs on another criterion.

TABLE 9.3

Desirable Properties of Criteria [1]

  • Unambiguous: A clear relationship exists between consequences and descriptions of consequences using the criteria.
  • Understandable: Consequences and value trade-offs made using the attribute can readily be understood and clearly communicated.
  • Direct: The criteria levels directly describe the consequences of interest.
  • Operational: In practice, information to describe consequences can be obtained and value trade-offs can reasonably be made.
  • Comprehensive: The criteria levels cover the range of possible consequences for the corresponding objective, and value judgments implicit in the criterion are reasonable.
  • Preferential independence: How much one cares about the performance of an intervention on a criterion should optimally not depend on its performance on other criteria.

A common example of preference dependence is a patient’s preference for the frequency of administration depending on the mode of administration: how much a patient values a given dosing frequency will depend on whether the treatment is administered orally or via injection. Where preference dependence exists, it can be dealt with by restructuring the criteria, such as combining the mode and frequency of administration into a single criterion.

To facilitate the application of criteria set properties, the concepts can be organized into groups with the purpose of extracting the essence of what matters to the decision makers. Problem structuring methods, including cognitive maps and strategic options development and analysis (SODA) maps, can be used to achieve this [25, 26] and the resulting criteria can further be organized into a value tree [21, 27]. This exercise is not trivial and will require iterations to arrive at a final list of criteria.

The definition of criteria should consider the types of measurements that are available [28]:

  • Direct or natural measurement - Whenever possible, this type of measurement is favored as it has a common understanding and directly measures the criterion in question.
  • Proxy measurement or indirect measurement - Where direct measures are not available, proxies may be required. For example, “commute time” may be used as a proxy for the distance to travel to receive a treatment.
  • Constructed scale - In the absence of direct or proxy measures, it may be necessary to construct a scale.
9.3.3 Step 3: Determining the Performance of Alternatives against Criteria

Data to measure the performance of alternatives against criteria can be collected from a range of sources, including trials, observational studies, systematic reviews, and expert opinion. Existing standards for assessing the quality of evidence, such as the Cochrane Risk of Bias tool [29], should be applied.

It is helpful to organize the data in a performance matrix or effects table [10] (as in Table 9.4).

TABLE 9.4

Effects Table

Outcome     Time Point     Unit (Scale)     Intervention 1 (Mean, LCI-UCI)     Intervention 2 (Mean, LCI-UCI)     Min*     Max
Outcome 1
Outcome 2
Outcome 3
Outcome 4

* Min and Max denote the range of outcomes between which the interventions perform. LCI = lower confidence interval; UCI = upper confidence interval.

9.3.4 Step 4: Determining the Scores and Weights - Estimating the Values of the Outcomes

Aggregating multiple criteria requires that they be translated onto a single scale. This is undertaken using scores and weights:

  • Scores: The relative value of changes within a criterion. For example, is a weight loss from 30 to 25 lbs valued the same as a weight loss from 25 to 20 lbs? These are also referred to as partial values, which can be displayed in a value function.
  • Weights: The relative value of changes on different criteria, or the trade-offs between criteria. For example, how much weight loss would be required to offset an increase in the risk of death?
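The idea of a score as a value function can be made concrete with a small sketch. The breakpoints below are hypothetical, not from the source; they encode the judgment that the first pounds of weight loss are valued more than the last, so equal changes on the measurement scale need not carry equal value.

```python
# A hypothetical piecewise-linear value function for a weight criterion:
# levels of 30, 25, and 20 lbs map to scores of 0, 60, and 100, so the
# swing from 30 to 25 lbs (worth 60 points) is valued more than the swing
# from 25 to 20 lbs (worth 40 points).

def piecewise_linear_value(level, breakpoints):
    """Interpolate a 0-100 score for `level` from (level, score) breakpoints."""
    pts = sorted(breakpoints)
    if level <= pts[0][0]:
        return pts[0][1]
    if level >= pts[-1][0]:
        return pts[-1][1]
    for (x0, v0), (x1, v1) in zip(pts, pts[1:]):
        if x0 <= level <= x1:
            return v0 + (v1 - v0) * (level - x0) / (x1 - x0)

breakpoints = [(20, 100), (25, 60), (30, 0)]  # hypothetical elicited values
print(piecewise_linear_value(27.5, breakpoints))  # halfway between 25 and 30 -> 30.0
```

Plotting such a function for each criterion makes any nonlinearity in the stakeholders' valuations visible at a glance.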

Eliciting scores and weights from stakeholders can be done using numerous methods, such as the following:

  • Stated preference methods, for example, discrete choice experiments [30-32]
  • Pairwise comparison, such as the analytic hierarchy process (AHP) [33] or measuring attractiveness by a categorical based evaluation technique (MACBETH) [34]
  • The swing weighting method and the bisection method [4]
  • Additional methods (see the review in Marsh et al. [2, 35])
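As one illustration, swing weighting can be sketched in a few lines. The ratings below are hypothetical: stakeholders rate the value of swinging each criterion from its worst to its best level, the most valued swing is anchored at 100, and normalizing the ratings yields weights that sum to 1.

```python
# Hypothetical swing ratings: the most valued worst-to-best swing gets 100,
# the others are rated relative to it; normalizing yields the weights.

swing_ratings = {
    "efficacy": 100,       # most valued swing, anchored at 100
    "side effects": 60,
    "convenience": 25,
}

total = sum(swing_ratings.values())
weights = {criterion: rating / total for criterion, rating in swing_ratings.items()}
print(weights)
```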

The selection of the appropriate elicitation methods will depend on the following:

  • The sample size achievable
  • The number and complexity of the attributes
  • The nature of preferences, that is, their strength and homogeneity

Further guidance on the selection and implementation of scoring and weighting methods is available [36-40].

9.3.5 Step 5: Evaluation and Comparison of Alternatives

9.3.6 Step 5.a: Aggregate the Data to Obtain the Overall Value of the Alternatives

The evaluation of alternatives using an additive model is relatively simple - a weighted average is used to aggregate the weights and the scores. That is, for each criterion, the weight on the criterion is multiplied by the score of the intervention on that criterion, and these weighted scores are then summed to give the overall value of the alternative. This can be done using a calculator, a spreadsheet, or existing software designed to support the elicitation of preferences and the building of a model. Undertaking a probabilistic sensitivity analysis of an additive model, though also valuable, requires more sophisticated methods.

Formally, the overall value is estimated using the following formula [1, 41]:

U(x) = Σ_i w_i · u_i(x_i)

where

  • U(x) is the overall value of an intervention x;
  • w_i is the weight attached to criterion i;
  • u_i is the score function for criterion i;
  • x_i is the performance of alternative x against criterion i.

This can be organized as in Table 9.5.

TABLE 9.5

Scores and Overall Values of Interventions

Interventions     Criterion 1     Criterion 2     Criterion 3     Overall Value
Intervention a
Intervention b
Intervention c
Intervention d
Weights
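The computation behind a table like Table 9.5 can be sketched with hypothetical numbers. All scores and weights below are invented for illustration; scores are on a 0-100 scale and the weights sum to 1.

```python
# Additive aggregation: each intervention's overall value is the weighted
# sum of its criterion scores, U(x) = sum_i w_i * u_i(x_i).

weights = {"criterion 1": 0.5, "criterion 2": 0.3, "criterion 3": 0.2}

scores = {  # hypothetical scores for two interventions
    "intervention a": {"criterion 1": 80, "criterion 2": 40, "criterion 3": 100},
    "intervention b": {"criterion 1": 60, "criterion 2": 90, "criterion 3": 50},
}

def overall_value(criterion_scores, weights):
    return sum(weights[c] * s for c, s in criterion_scores.items())

for name, criterion_scores in scores.items():
    print(name, round(overall_value(criterion_scores, weights), 1))
# intervention a: 0.5*80 + 0.3*40 + 0.2*100 = 72; intervention b: 67
```

The same arithmetic is easily reproduced in a spreadsheet, with one row per intervention and one column per criterion.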

9.3.7 Step 5.b: Testing Assumptions via Sensitivity Analysis

Sensitivity analysis can be used to both validate and test the robustness of an MCDA model.

Quality assurance: Sensitivity analysis can be used to test the behavior of the model by simply changing some of the inputs to see if the results are as expected. A typical test would consist of setting all the weights to 0 and checking that the overall value of alternatives is also 0. Another typical test would consist of setting the scores of a given alternative to 0 and then 100 to test if the overall value is 0 and 100, respectively.
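These quality-assurance checks translate directly into assertions against the additive model. The weights and scores below are hypothetical; the checks assume scores on a 0-100 scale and weights that sum to 1.

```python
# Sanity checks on an additive model: zero weights or all-zero scores must
# give an overall value of 0, and all-100 scores must give 100 when the
# weights sum to 1.

def overall_value(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

weights = [0.5, 0.3, 0.2]  # hypothetical weights summing to 1

assert overall_value([80, 40, 100], [0, 0, 0]) == 0                # all weights set to 0
assert overall_value([0, 0, 0], weights) == 0                      # all scores set to 0
assert abs(overall_value([100, 100, 100], weights) - 100) < 1e-9   # all scores set to 100
print("model behaves as expected")
```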

Understanding the robustness of preference ranking: Sensitivity analysis can be used to gain confidence in the ranking of preferred alternatives given that inputs, both value judgments and the performance estimates, are uncertain. Chapters 5 and 6 in Keeney and Raiffa [42] illustrate how to do this.
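A simple way to probe this robustness, sketched below with hypothetical inputs, is a Monte Carlo exercise: perturb the weights at random, renormalize them, and count how often the preferred alternative changes.

```python
# Monte Carlo robustness check (hypothetical weights and scores): random
# perturbations of the weights are renormalized to sum to 1, and rank
# reversals of the preferred alternative are counted.
import random

random.seed(0)  # reproducible sketch

base_weights = [0.5, 0.3, 0.2]                     # hypothetical weights
scores = {"a": [80, 40, 100], "b": [60, 90, 50]}   # hypothetical scores

def overall_value(s, w):
    return sum(wi * si for wi, si in zip(w, s))

def preferred(w):
    return max(scores, key=lambda name: overall_value(scores[name], w))

baseline = preferred(base_weights)
reversals = 0
n_simulations = 1000
for _ in range(n_simulations):
    perturbed = [max(1e-9, w + random.uniform(-0.15, 0.15)) for w in base_weights]
    total = sum(perturbed)
    perturbed = [w / total for w in perturbed]
    if preferred(perturbed) != baseline:
        reversals += 1

print(f"{reversals} rank reversals in {n_simulations} simulations")
```

A low reversal count suggests the recommendation is robust to uncertainty in the elicited weights; a high count signals that the weights deserve closer scrutiny.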

 