Uncertainty Analysis in Chemical Risk Assessment

INTRODUCTION

Uncertainty is an intrinsic property of risk assessment and concerns each step of the assessment process. In principle, the presence of uncertainties precludes neither risk assessment nor risk management decision-making, but it does affect the result of the assessment. Moreover, risk assessments are nowadays subject to thorough scrutiny by scientists, decision-makers, stakeholders and the public in general (Abt et al. 2010). This increased attention reflects the growing public concern for health safety issues. It is driven not only by increased media coverage but also by the fact that inappropriate measures - insufficiently preventive, late or disproportionate - can lead, and have led, to national or even global health crises, such as the asbestos disaster, the diethylstilbestrol story and the "mad cow disease" crisis, to name but a few (EEA 2001, 2013). Lessons learned from past crises have led to improved procedures for more accurate and robust risk assessments that meet the needs of decision-makers. These improvements aim to ensure that risk assessments make the best use of the best available scientific data and address uncertainty in a comprehensive and systematic way (Aven 2016). As such, uncertainty analysis is not an aim in itself but a means and an integral part of well-conducted risk assessments (NRC 2009).

Uncertainty analysis is carried out throughout all stages of the risk assessment process to identify and describe the different sources of uncertainty in the process and hence to better understand how they affect the overall risk estimate (Van der Sluijs et al. 2003b). By specifying all uncertainties, uncertainty analysis aims to increase the transparency of risk assessments and provide the most complete information for the decision-making process. Indeed, only partially identifying or assessing the uncertainty is equivalent to underestimating the real uncertainty and hence overestimating the reliability of the assessment results. Uncertainty analysis is thus necessary to assess the level of confidence in the results (ANSES 2016d). Moreover, through the identification of uncertainty sources, initial methodological choices may be reconsidered and revised to improve the accuracy or robustness of the final assessment results. In such circumstances, uncertainty analysis is used as a productive means to refine the risk assessment (Verdonck et al. 2007). Uncertainty analysis can also guide further refinement of the risk characterization, as the main contributors to the overall uncertainty indicate which additional data should be collected or which type of research should be carried out to refine the results most effectively (Verdonck et al. 2007, ANSES 2016d). Finally, to allow a correct appreciation of risk assessment results, it is essential that the risk estimate is reported together with the results of the uncertainty analysis, both being expressed in a transparent and understandable way (EFSA 2016, ANSES 2016c). In particular, expressing the results of the uncertainty analysis in a way that is not clear or useful for decision-making may lead to inadequate risk management measures (Abt et al. 2010, ANSES 2016d).
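
To make the refinement idea concrete, the short Python sketch below propagates parameter uncertainty through a deliberately simple, hypothetical dietary exposure model and ranks the inputs by their rank correlation with the estimated dose; the model form, parameter names and distributions are illustrative assumptions only, not values taken from this chapter.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(seed=0)
    n = 10_000

    # Hypothetical exposure model: dose = concentration * intake_rate / body_weight
    concentration = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)  # mg/kg food (assumed)
    intake_rate = rng.normal(loc=0.25, scale=0.05, size=n)              # kg food/day (assumed)
    body_weight = rng.normal(loc=70.0, scale=10.0, size=n)              # kg (assumed)

    dose = concentration * intake_rate / body_weight                    # mg/kg bw/day

    # Rank the inputs by the strength of their monotonic association with the output;
    # the largest absolute correlation points to the data most worth refining first.
    for name, values in [("concentration", concentration),
                         ("intake_rate", intake_rate),
                         ("body_weight", body_weight)]:
        rho, _ = spearmanr(values, dose)
        print(f"{name:14s} Spearman rho = {rho:+.2f}")

In a sketch of this kind, the input with the largest absolute correlation is the natural first candidate for additional data collection or further research.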

Thus, in a risk assessment context, the aim of any uncertainty analysis is to identify, describe, quantify or qualify, and finally communicate all the uncertainties associated with the results of the risk estimation (ANSES 2016c).

DEFINITION OF UNCERTAINTY

To estimate a health risk, risk assessors rely on and use the available scientific knowledge. This knowledge - in the form of qualitative or quantitative data or information - may relate to the health effects, the substance content of various products or matrices, the concentrations emitted, the doses of exposure, etc. (NRC 2009).

Whatever the amount of available data or information and their quality, scientific knowledge has, by definition, limits and shortcomings. The data or information may, for example, be inaccurate, contain measurement errors or not be fully representative of the population or the exposure of interest (Walker et al. 2003). In this context, uncertainty refers to limits or shortcomings in the scientific knowledge that is available to risk assessors within the time and with the resources - in terms of methods, tools and skills - allocated to the risk assessment (ANSES 2016c).

CLASSIFICATION OF UNCERTAINTY SOURCES

In the risk assessment process, uncertainty may arise for different reasons: for example, a lack of, or limits to, knowledge about the exposure pathways of interest, the target population under consideration, the exposure scenario(s) on which the assessment is based, the models used and/or the data used. To provide the best estimate of the risk, the full range of uncertainty has to be captured. This necessarily involves a comprehensive inventory of the different uncertainty sources (Spiegelhalter and Riesch 2011). The inventory process must be both transparent and reproducible.

For this, it is essential to use a standardized classification of uncertainty sources that corresponds to a generic checklist of the types of uncertainty sources. The use of such a checklist makes it easier to identify all the sources of uncertainty encountered during a risk assessment and to distinguish between them, since different sources may need to be treated in different ways (Petersen et al. 2013). For example, probability theory is the best-known and the most widely used formalism for quantifying uncertainty; however, while it is an appropriate means of expressing some kinds of uncertainty, it is not suitable for others (Aven et al. 2014).
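
As a rough illustration of this distinction, the sketch below treats a well-characterized quantity probabilistically while carrying a poorly known quantity as simple bounds, yielding probability bounds on the result rather than a single distribution; all values and the exposure expression are illustrative assumptions, not taken from this chapter.

    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 10_000

    # Well-characterized variability: measured concentrations support a probability distribution.
    concentration = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)  # µg/L (assumed)

    # Poorly characterized knowledge gap: only plausible bounds on daily water intake are known,
    # so it is carried as an interval instead of being forced into a distribution.
    intake_low, intake_high = 1.0, 3.0  # L/day (assumed bounds)
    body_weight = 70.0                  # kg, fixed for simplicity (assumed)

    dose_low = concentration * intake_low / body_weight    # µg/kg bw/day
    dose_high = concentration * intake_high / body_weight

    # The result is a pair of probability bounds rather than a single distribution.
    print("95th percentile dose, lower bound:", np.percentile(dose_low, 95))
    print("95th percentile dose, upper bound:", np.percentile(dose_high, 95))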

In the last 20 years, there have been several attempts to provide a generic classification for sources of uncertainty in risk assessment. The earliest developments mainly focused on uncertainty in the quantitative data (emissions, dose-response, concentration, etc.) that feed risk assessment models or are used to estimate model parameters (Cullen and Frey 1999). Examples of such uncertainty include the analytical limits encountered when determining substance concentrations in food, water, etc., or the limits of pathogen amplification methods for microbiological hazards, which are usually based on models calibrated on historical data with no guarantee that the relationship still holds today. To produce relevant results in such cases, a risk assessor has to extrapolate and make further untestable assumptions. These assumptions - for example, those relating to "worst-case" situations - are typically sources of uncertainty that must be identified and discussed (Verdonck et al. 2007). Beyond the uncertainty in quantitative data, other types of uncertainty may arise from the selection processes themselves (choice of scenario, model or data, etc.). More recently, the need to also cover sources of uncertainty related to the assessment context, such as the decision-making context, has been emphasized (U.S. EPA 2014a, Morgan et al. 2009, EFSA 2015).

Different classifications have been proposed (ANSES 2016c, IPCS 2008, van der Sluijs et al. 2003a, IOM 2013, Hayes 2011). Some of them have been developed for a specific field (e.g. climate change, chemical risks in health-environment, etc.), while others focus on a specific stage of the risk assessment process. All the proposed classifications are, however, consistent with the overall logic of the risk assessment process and include the three following broad classes of uncertainties: Context, Method and Communication.

  • The Context class covers sources of uncertainty that relate to the scope of the assessment. Scoping an assessment implies making choices that are themselves sources of uncertainty. Some of these choices - such as the question asked, the decision-making context or the resources allocated, including the time allowed for addressing the request - are made when framing the risk assessment and can be attributed to the petitioner. For example, the question asked by the petitioner may be equivocal or too broad to be fully dealt with in the allotted time or, on the contrary, too focused on one aspect to allow a proper understanding of the problem at hand. Other contextual sources of uncertainty can be attributed to the risk assessor; for example, when reformulating the petitioner's question restricts or, more generally, redefines the target population or the exposure to be assessed. Current methodological guidance documents recommend distinguishing between these two subclasses (framing and reformulation), as they require different types of action to reduce uncertainties (U.S. EPA 2014a, EFSA 2015).
  • The Method class encompasses all technical sources of uncertainty, which generally arise from three principal origins: the data, scenarios and models used for the assessment and, in particular, their representativeness with regard to the question asked; that is, whether or not the data, scenarios and models used allow the exposure and the risk of the target population to be estimated in a robust and precise manner for adequate decision-making. An uncertainty analysis that explicitly characterizes and separately deals with these different aspects of the overall methodological uncertainty can lead to a more open and rational decision process. Hence, this class is generally divided into three subclasses: data, scenarios and models.
  • For data, the data search and selection processes, when not correctly structured to identify and retain the most appropriate data for the question being addressed, are potential sources of uncertainty. The sources of uncertainty related to the methodological limitations of the data used for the evaluation (limitations in study design, in sampling or analytical methods, in questionnaires, etc.) then need to be identified. Finally, the way the data are used, and especially how the inherent variability of the phenomenon studied is taken into account, may also be a source of uncertainty.
  • For the scenarios, the possible sources of uncertainty can be revealed by the PECOTS statement, which specifies the target Population of the risk assessment (P), the Exposure to be assessed (E), the Comparator, that is, the control population or reference exposure (C), the Outcome of interest (O), the Timing of the exposure (T) and the Setting of interest (S) (see Box 16.1) specific to the risk assessment question asked. Defining a PECOTS statement is strongly recommended when planning a risk assessment (see Section 16.3.1), and a simple illustrative sketch of such a statement is given after this list. Naturally, only those parts of the PECOTS statement that are relevant to the question asked need to be specified. An inaccurate or erroneous statement definition would be a source of uncertainty.
  • For the models, the most often cited sources of uncertainty are those related to the mathematical equations used: do they take into account the main factors that determine the phenomenon studied (for example, exposure)? Are the possible interactions and correlations between these factors considered? Finally, as for the data subclass, the model search and selection processes, if not correctly structured, can also represent a potential source of uncertainty.
  • The Communication class focuses on uncertainties caused by the way in which the risk assessment is reported. Risk assessors have both a professional and an ethical responsibility not only to present the risk assessment results but also to clearly state the framework in which the assessment was made and the limitations of their work. Indeed, an erroneous or non-exhaustive description of the assessment process, the assumptions made, the data used, and the results and conclusions may be a source of uncertainty. This may lead to a misunderstanding and hence a misuse of the results and conclusions and, consequently, to inappropriate risk management decisions. When communicating a risk assessment or analyzing the uncertainties of published risk assessments (reports, articles, etc.), it is important to address the following questions. Is the approach or assessment process fully documented? Are the assumptions and data on which the assessment is based precisely explained? Are the results and the uncertainty around them well expressed? Do the conclusions of the assessment take into account the uncertainty of the results?
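
As a purely illustrative aid to the scenario subclass above, the following Python sketch records a PECOTS statement as a small data structure; the field values are hypothetical, and any element that is not relevant to the question asked can simply be left unset.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PECOTS:
        """Structured statement of the risk assessment question (elements as listed above)."""
        population: str                    # P: target population of the risk assessment
        exposure: str                      # E: exposure to be assessed
        comparator: Optional[str] = None   # C: control population or reference exposure
        outcome: Optional[str] = None      # O: outcome of interest
        timing: Optional[str] = None       # T: timing of the exposure
        setting: Optional[str] = None      # S: setting of interest

    # Hypothetical example: only the elements relevant to the question asked are filled in.
    statement = PECOTS(
        population="adults in the general population",
        exposure="chronic dietary exposure to substance X",
        outcome="liver toxicity",
        timing="lifetime average daily intake",
    )
    print(statement)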

The complete proposed generic classification of types of uncertainty sources is schematically presented in Figure 16.1.

 