# Interactive Methods

Interactive methods (Steuer 1986; Vanderpooten and Vincke 1989; Vincke 1992; Lee and Olson 1999) alternate computation steps with interaction steps in which the analyst gradually specifies or revises preference information, in accordance with the decision maker's or other stakeholders' requests.

In the early stages of investigation, the set of decision options may itself be an outcome of this interaction.

The underlying principle of this MCA approach is inspired by Simon's theory of satisficing (Simon 1976), the goal being to find a satisfactory compromise solution. This is especially appropriate (Belton and Stewart 2002) when the participants in the decision process have good a priori ideas about the realistically achievable levels for the evaluation criteria.

Interactive methods can operate in a search-oriented or a learning-oriented framework. In the latter setting, the set of non-dominated solutions is explored freely, each solution found being compared with the most preferred one identified up to that stage. A solution discarded at one step may therefore be reconsidered at a later stage.

# Sensitivity and Robustness Analysis

Data uncertainty and imprecision are inherent characteristics of real-life applications and equally affect MCA models. A classical way to deal with them is to undertake sensitivity analysis, which seeks to determine which parameters contribute most to the variance in the MCA results, or how much the model parameters (for example, criteria weights) may vary while the conclusion of interest (for example, that a policy option achieves the best rank) still holds.
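As a minimal sketch of this idea, the following one-at-a-time sensitivity analysis perturbs a single criterion weight in a weighted-sum model until the top-ranked option changes. All options, scores, and weights here are hypothetical, chosen only for illustration.

```python
# Illustrative one-at-a-time sensitivity analysis for a weighted-sum MCA model.
# The options, criteria scores, and weights below are hypothetical.

def rank_options(scores, weights):
    """Return option names ordered by weighted-sum value, best first."""
    values = {opt: sum(w * s for w, s in zip(weights, crit_scores))
              for opt, crit_scores in scores.items()}
    return sorted(values, key=values.get, reverse=True)

def smallest_flip(scores, baseline_w1, step=0.01):
    """Smallest shift of the first weight (renormalising the second)
    at which the top-ranked option changes; None if it never changes."""
    baseline_best = rank_options(scores, [baseline_w1, 1 - baseline_w1])[0]
    delta = step
    while delta <= 1:
        for w1 in (baseline_w1 - delta, baseline_w1 + delta):
            if 0 <= w1 <= 1 and rank_options(scores, [w1, 1 - w1])[0] != baseline_best:
                return w1
        delta += step
    return None

# Hypothetical scores of three policy options on two criteria (0-1 scale).
scores = {"a": [0.9, 0.2], "b": [0.4, 0.8], "c": [0.5, 0.5]}

print(rank_options(scores, [0.6, 0.4]))   # ranking at the baseline weights
print(smallest_flip(scores, 0.6))          # nearest weight value where the top rank flips
```

With these numbers the baseline ranking has option *a* first, and lowering the first weight to about 0.54 is enough to move option *b* to the top, which is exactly the kind of threshold a sensitivity analysis reports.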

An alternative way to address uncertainty and imprecision in MCA is robustness analysis. The notion of robustness may have different interpretations (Dias 2006). Roy and Bouyssou (1993, p. 315) use the term *'robust'* for a result or conclusion that is not 'clearly invalidated' for any possible instance of the decision model parameters (for example, weights or thresholds). Relatedly, *robustness analysis* is the process of elaborating recommendations founded on robust conclusions. Dias and Climaco (1999) identify two types:

- An *absolute robust* conclusion, in other words a statement referring to one option only, which is valid for all admissible instances of the MCA model parameters, for example, 'option *a* has utility U(a) > 0.5';

- A *relative robust* conclusion, in other words a statement referring to one option in relation to others, which is valid for all admissible instances of the MCA parameters, for example, 'option *a* has a better rank than option *b*' or 'option *a* has the best rank'.
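Both types of conclusion can be checked mechanically by enumerating the admissible parameter instances. The sketch below assumes an additive (weighted-sum) utility model and hypothetical options and scores; it tests one absolute and one relative conclusion over a grid of weight vectors whose first weight ranges over [0.3, 0.5].

```python
# Sketch of checking robust conclusions over admissible weight instances.
# The options, scores, and admissible weight range are hypothetical.

def utility(crit_scores, weights):
    """Additive (weighted-sum) utility of one option."""
    return sum(w * s for w, s in zip(weights, crit_scores))

scores = {"a": [0.9, 0.6], "b": [0.4, 0.8]}

# Admissible instances: first weight in [0.3, 0.5], the two weights sum to 1.
instances = [[w1 / 100, 1 - w1 / 100] for w1 in range(30, 51)]

# Absolute robust conclusion: 'option a has utility U(a) > 0.5'.
absolute = all(utility(scores["a"], w) > 0.5 for w in instances)

# Relative robust conclusion: 'option a has a better rank than option b'.
relative = all(utility(scores["a"], w) > utility(scores["b"], w) for w in instances)

print("U(a) > 0.5 for all instances:", absolute)
print("a outranks b for all instances:", relative)
```

If either `all(...)` check fails, the corresponding statement is not a robust conclusion for this model, and a recommendation should not be founded on it.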

For instance, if the range of a criterion's weight is estimated as [0.3, 0.5], sensitivity analysis may reveal that the ranking of the policy options changes if this weight becomes larger than 0.4. Robustness analysis can instead indicate that a given option will always outperform another, regardless of the particular value of the weight within the given interval.