Qualitative and Quantitative Exposure Assessment

Both qualitative and quantitative methods can be used for the exposure assessment of food spoilage. The main difference between qualitative and quantitative assessments lies in how the data are treated: descriptively in qualitative assessments and mathematically in quantitative assessments. Some points should be considered before deciding which type of assessment to conduct, the main one being data availability.

When the available data are insufficient for numerical analysis, a qualitative assessment must be conducted to characterize the risks associated with the exposure factors. Usually, the results of a risk assessment are classified as low, medium, or high, although categories such as very low and very high may be added depending on the situation. Although the data needed for a qualitative assessment are simpler to obtain than those for a quantitative assessment, the results may lead to divergent conclusions (e.g., where to set the threshold between a low and a very low risk). Moreover, with this approach it is more challenging to characterize medium risks than low and high ones.

With the quantitative approach, numerical data can be analyzed, so the results are more reliable than those of qualitative assessments. To compile all the numerical data, a mathematical model must be developed to compare and evaluate the factors that affect the spoilage risk of foods. The "inputs" (e.g., time, temperature, or pH) used in the mathematical model determine the "outputs" that constitute the response of the risk assessment. Most works on risk assessment do not take the spoilage process into account, which is a problem because the conditions corresponding to critical hazard levels may also favor spoilage (Koutsoumanis, 2009; Nauta, Litman, & Barker, 2003). Depending on the type of inputs, quantitative assessments can be separated into two types of analysis: deterministic and stochastic. In the deterministic method, single values such as the average are used to generate the outputs; in the stochastic method, all available data for each input are used to generate a distribution of the possible results.
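The contrast between the two methods can be sketched with a toy log-linear growth model. All numerical values here (initial load, growth rate, storage time, and their distributions) are illustrative assumptions, not data from the cited studies:

```python
import random

def log_growth(n0_log, mu, time_h):
    """Log10 count after growth at rate mu (log10 CFU/h) for time_h hours."""
    return n0_log + mu * time_h

# Deterministic: a single (average) value for each input gives one output.
deterministic = log_growth(n0_log=2.0, mu=0.05, time_h=48)

# Stochastic: each input is a distribution; repeated sampling gives a
# distribution of possible results instead of a single number.
random.seed(1)
stochastic = [
    log_growth(
        n0_log=random.gauss(2.0, 0.3),   # initial contamination varies
        mu=random.gauss(0.05, 0.01),     # growth rate varies
        time_h=random.uniform(24, 72),   # storage time varies
    )
    for _ in range(10_000)
]
print(deterministic)                     # one point estimate
print(min(stochastic), max(stochastic))  # a range of possible outcomes
```

The deterministic result is a single number; the stochastic result is a whole distribution from which percentiles of the spoilage outcome can be read.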

Because stochastic models are difficult to solve analytically, software (e.g., @RISK) is usually used to evaluate the model by Monte Carlo simulation. As an extension of the "what-if" test, Monte Carlo simulation has been widely used in quantitative spoilage risk assessment of foods such as meats and canned vegetables (Koutsoumanis, 2009; Rigaux, Andre, Albert, & Carlin, 2014). Once the model is set up for Monte Carlo simulation, each input variable is described by a probability distribution, and the software evaluates the model by selecting specific values from these distributions, each value being selected according to the probability that describes the variable. By iterating over the selected values and the equations that describe the model, the spoilage risk exposure is calculated (WHO/FAO, 2008).
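To make the iteration loop concrete, the sketch below reproduces in plain Python what such software does internally. The input distributions, the toy secondary model for the growth rate, and the spoilage threshold of 7 log10 CFU/g are all assumptions made for illustration:

```python
import random

random.seed(42)

SPOILAGE_THRESHOLD = 7.0   # log10 CFU/g deemed "spoiled" (assumed value)
ITERATIONS = 50_000

spoiled = 0
for _ in range(ITERATIONS):
    # Each iteration draws one value per input from its assigned distribution...
    n0 = random.gauss(2.0, 0.5)        # initial load, log10 CFU/g
    temp = random.uniform(4.0, 10.0)   # storage temperature, degrees C
    time_h = random.uniform(48, 240)   # storage time, h
    mu = 0.005 * temp                  # toy secondary model: rate rises with T
    # ...and evaluates the model equations with those values.
    n_final = n0 + mu * time_h
    if n_final >= SPOILAGE_THRESHOLD:
        spoiled += 1

risk = spoiled / ITERATIONS   # estimated probability of spoilage at consumption
print(f"Estimated spoilage risk: {risk:.3f}")
```

Repeating the loop many times turns the input distributions into an output distribution, from which the spoilage risk is read as the fraction of iterations exceeding the threshold.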

Model Development

Model development is based on the routes of interest by which the target of the assessment can be exposed to the risk; these routes can be expressed as text, diagrams, or mathematical models. Most commonly, a mathematical model is used to describe the route of spoilage risk exposure and all factors that may influence the final assessment. The model structure should be developed in a way that facilitates the analysis: each of the independent variables that affect the risk exposure needs its inputs and outputs paired correctly, and in Monte Carlo simulation software the user indicates which variables must be tracked.
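One way to keep inputs and outputs paired correctly along an exposure route is to model each stage as a function whose output feeds the next stage. The stages and growth rates below are hypothetical, chosen only to show the chaining:

```python
def storage_at_plant(n_log, hours, rate=0.01):
    """Growth during in-plant storage; returns the updated log10 count."""
    return n_log + rate * hours

def transport(n_log, hours, rate=0.02):
    """Growth during transport (assumed faster, warmer conditions)."""
    return n_log + rate * hours

def retail_display(n_log, hours, rate=0.03):
    """Growth in the retail display cabinet."""
    return n_log + rate * hours

# The output of each stage is the input of the next, so the variables stay
# paired correctly along the exposure route.
n = 2.0                          # initial contamination, log10 CFU/g
n = storage_at_plant(n, hours=24)
n = transport(n, hours=12)
n = retail_display(n, hours=72)
print(f"log10 count at consumption: {n:.2f}")
```

Structuring the model this way also makes it easy to tell the simulation software which intermediate variable to track at each point in the route.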

Model Types and Availability

Many types of model are available to serve as a platform for developing new models in the context of food spoilage. Growth rates, growth limits, and the probability of growth as a function of intrinsic and extrinsic characteristics of the food and its environment are some of the possibilities. In addition, there are online platforms, such as the PMP (Pathogen Modeling Program) and ComBase Predictor, which describe the behavior of pathogens and spoilage microorganisms. These tools are handy for industry and academia for solving problems quickly, and they offer user-friendly and intuitive interfaces.
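A classic example of a "rate as a function of extrinsic characteristics" model is the square-root (Ratkowsky-type) secondary model, in which the square root of the maximum growth rate is linear in temperature above a theoretical minimum. The parameter values below are illustrative, not fitted to any real organism:

```python
def sqrt_model_rate(temp_c, b=0.02, t_min=-2.0):
    """Square-root secondary model: sqrt(mu_max) = b * (T - Tmin).

    Returns mu_max (1/h). b and t_min are illustrative values only.
    """
    if temp_c <= t_min:
        return 0.0            # no growth at or below the theoretical minimum
    return (b * (temp_c - t_min)) ** 2

for temp in (0, 5, 10, 15):
    print(temp, round(sqrt_model_rate(temp), 4))
```

Models of this form are what the online predictors evaluate behind their interfaces: the user supplies intrinsic/extrinsic conditions and receives a predicted rate, limit, or growth probability.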

Application of Predictive Microbiology within Exposure Assessment

Predictive microbiological models play an important role in exposure assessment, although two critical points still need to be considered with respect to their accuracy. First, it is challenging for a mathematical model to account for all the variables involved in the growth or inactivation of a microorganism. Second, the models developed must be submitted to a validation process to determine to what extent they can represent the actual microbial behavior. Conventionally, it is desirable for a mathematical model to have as few parameters as possible so that its outputs have high reliability (Baranyi, Ross, McMeekin, & Roberts, 1996). As described by Ross (1999), each variable added to a mathematical model increases the prediction error by approximately 10-15%. This means that as the number of variables in the model increases, the confidence of the prediction tends to decrease.
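A standard way to quantify how well a validated model represents observed behavior is through Ross's bias factor (Bf) and accuracy factor (Af), computed from predicted and observed values (e.g., generation times). The data points below are hypothetical:

```python
import math

def bias_accuracy_factors(predicted, observed):
    """Bias factor Bf and accuracy factor Af (Ross, 1999).

    Bf = 10**mean(log10(pred/obs)): systematic over/under-prediction.
    Af = 10**mean(|log10(pred/obs)|): average spread around the observations.
    """
    logs = [math.log10(p / o) for p, o in zip(predicted, observed)]
    bf = 10 ** (sum(logs) / len(logs))
    af = 10 ** (sum(abs(x) for x in logs) / len(logs))
    return bf, af

# Hypothetical generation times (h): model predictions vs. observations.
pred = [2.0, 3.1, 4.2, 5.0]
obs = [2.2, 3.0, 4.0, 5.5]
bf, af = bias_accuracy_factors(pred, obs)
print(f"Bf = {bf:.3f}, Af = {af:.3f}")
```

A Bf below 1 indicates the model tends to under-predict (here, shorter generation times than observed), while Af = 1 would mean perfect agreement; Af grows as predictions scatter around the observations.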

Uncertainty and Variability

Although the words "uncertainty" and "variability" may be used in the same context, their meanings are completely different. Uncertainty, according to the Cambridge Dictionary, can be defined as "a situation in which something is not known, or something that is not known or certain." Transferring this word to the risk analysis world, we can define it as "the lack of perfect knowledge, information, sampling or measurement errors of a given variate value" (Pouillot, Albert, Cornu, & Denis, 2003; Pouillot & Delignette-Muller, 2010), and its effects can be minimized by adding further experimental assays. On the other hand, "variability," according to the same dictionary, means "lack of consistency or fixed pattern." In the microbiological context, it can be defined as the "true heterogeneity of a population irreducible by additional measurements" (Pouillot et al., 2003). Understanding the difference between these two concepts is important for better modeling in microbial risk assessment (Nauta, 2000).
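The distinction can be shown numerically: collecting more measurements shrinks the uncertainty about a population mean (standard error), but it does not shrink the variability (spread) of the population itself. The population of strain growth rates below is simulated, with assumed mean and standard deviation:

```python
import random
import statistics

random.seed(0)

def sample_growth_rates(n):
    """Simulated population with true heterogeneity between strains/cells."""
    return [random.gauss(0.05, 0.01) for _ in range(n)]

for n in (10, 1000):
    rates = sample_growth_rates(n)
    variability = statistics.stdev(rates)   # spread of the population itself
    uncertainty = variability / n ** 0.5    # standard error of the mean
    print(f"n={n:5d}  variability~{variability:.4f}  "
          f"uncertainty of mean~{uncertainty:.5f}")
```

As n grows, the estimated variability stabilizes near the true spread (0.01) while the uncertainty of the mean keeps shrinking roughly as 1/sqrt(n), which is exactly the "irreducible by additional measurements" property quoted above.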

In model-based optimization, uncertainty is an inherent problem whenever we deal with living organisms, and it can originate from external disturbances (Telen et al., 2015) or unmodeled process variables (Kaern, Elston, Blake, & Collins, 2005). In biological processes, this uncertainty can arise from inherent biological variability (Nimmegeers, Telen, Logist, & Van Impe, 2016), even between genetically identical cells (Kaern et al., 2005). Failing to take this uncertainty into account can produce wrong estimates and inaccurate models (Nagy & Braatz, 2004). This leads to a challenging optimization problem under uncertainty, which requires a robust solution (Nimmegeers et al., 2016).

Variability is due not only to microbiological characteristics but also to such "external disturbances" as the quality of the raw material, variability in storage times and temperatures, the microbial load, and the intrinsic properties of the food. To estimate an accurate shelf life for a given product, the variability of external factors cannot be excluded (Chotyakul, Lamela, & Torres, 2012). However, it is very difficult to achieve a significant reduction in uncertainty when multiple factors are considered in the microbial model used for this estimation. The ability, or inability, to evaluate uncertainty and variability separately as sources of variation in model parameters will determine whether a proposed model can be successfully employed (Nauta, 2000; Pouillot et al., 2003). Even so, accurate and realistic predictions are difficult to achieve because of the different sources of variability (Zwietering, 2015). The fitness and robustness of strains depend on the physiological state, cell history, genetic and phenotypic variability within a population (i.e., population heterogeneity), and strain variability (den Besten, Aryani, Metselaar, & Zwietering, 2017). Robust controls (i.e., variables that can be manipulated throughout the process) ensure that constraints are met and that a better overall estimate of the objective function is obtained.

In this scenario, sensitivity analysis can be used to identify which input variables most strongly influence the outcome of a specific quantitative microbial risk assessment (QMRA). A large number of methods are available for sensitivity analysis, although the challenge lies in accurately separating variability inputs from uncertainty inputs. Sensitivity analysis shows that, depending on the process characteristics, microbiological variability can become the most important factor affecting variability in the final contamination levels (den Besten et al., 2017). One proposed procedure focuses on the relationship between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions (Busschaert, Geeraerd, Uyttendaele, & van Impe, 2011).
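A simple and widely used sensitivity measure in Monte Carlo QMRA is the Spearman rank correlation between each sampled input and the model output. The sketch below implements it from scratch (rank transform followed by Pearson correlation) on the same toy exposure model used earlier; all distributions are illustrative:

```python
import random

random.seed(7)

def ranks(values):
    """Rank each value (1 = smallest); ties are not handled specially."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

# Monte Carlo sample of a toy exposure model: output = n0 + 0.005*temp*time.
n0s, temps, times, outputs = [], [], [], []
for _ in range(5_000):
    n0 = random.gauss(2.0, 0.2)
    temp = random.uniform(4, 10)
    time_h = random.uniform(48, 240)
    n0s.append(n0); temps.append(temp); times.append(time_h)
    outputs.append(n0 + 0.005 * temp * time_h)

for name, xs in [("initial load", n0s), ("temperature", temps), ("time", times)]:
    print(f"{name:13s} rho = {spearman(xs, outputs):+.2f}")
```

Inputs with rank correlations near +1 or -1 dominate the output (here storage time, given the wide range assumed for it), flagging where data collection or process control effort is best spent.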

The majority of papers and established models are designed for pathogenic microorganisms, and only limited information on microbiological variability is available for spoilage microorganisms (Aryani, den Besten, & Zwietering, 2016). Lactic acid bacteria (LAB) and molds are the organisms most commonly used in spoilage research. For example, variability was demonstrated in the growth of individual fungal spores at the lowest possible inoculum (Samapundo, Devlieghere, De Meulenaer, & Debevere, 2007) and with the same physiological state (Gougouli & Koutsoumanis, 2012).

Variability has mostly been described by normal, lognormal, uniform, or PERT distributions (Pujol, Albert, Magras, Brian, & Membre, 2015). However, risk variability distributions, together with an evaluation of the uncertainty associated with the spoilage risk, can be obtained through a two-dimensional Monte Carlo simulation that propagates uncertainty and variability through the model separately (Mokhtari & Frey, 2005; Pouillot & Delignette-Muller, 2010).
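The two-dimensional (nested-loop) structure can be sketched as follows: an outer loop samples uncertain parameters, and an inner loop samples unit-to-unit variability under each fixed parameter set. All distributions and the 7 log10 CFU/g threshold are assumptions for illustration:

```python
import random
import statistics

random.seed(3)

OUTER = 100    # uncertainty dimension: alternative plausible parameter values
INNER = 1000   # variability dimension: unit-to-unit differences

risk_estimates = []
for _ in range(OUTER):
    # Outer loop: draw an uncertain parameter (e.g., the true mean growth
    # rate, imperfectly known from limited experiments).
    mu = random.gauss(0.05, 0.005)
    spoiled = 0
    for _ in range(INNER):
        # Inner loop: variability between individual product units.
        n0 = random.gauss(2.0, 0.5)
        time_h = random.uniform(48, 240)
        if n0 + mu * time_h >= 7.0:   # assumed spoilage threshold
            spoiled += 1
    risk_estimates.append(spoiled / INNER)

# Each estimate integrates unit-to-unit variability; the spread ACROSS
# estimates reflects uncertainty about the risk itself.
qs = sorted(risk_estimates)
print(f"risk ~ {statistics.mean(risk_estimates):.3f} "
      f"(uncertainty interval {qs[2]:.3f} to {qs[97]:.3f})")
```

Reporting the risk as a central value plus an interval across the outer loop is what distinguishes the two-dimensional simulation from the one-dimensional version, which would blend both sources into a single distribution.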

The variation of the primary model parameters can be analyzed by propagating errors intentionally introduced into experimental data that are assumed to be normally distributed (Poschet et al., 2004). However, this variation has generally not been taken into account when determining the uncertainty of the secondary model parameters (Giannakourou & Stoforos, 2017).
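A minimal sketch of this error-propagation idea: normally distributed noise is repeatedly added to synthetic growth data, a primary model is refitted each time, and the spread of the fitted parameter quantifies its variation. For simplicity the primary model here is a log-linear (exponential-phase) fit rather than a full growth curve, and the noise level is assumed:

```python
import random
import statistics

random.seed(11)

# Synthetic "true" log-linear growth data (exponential phase only).
times = [0, 10, 20, 30, 40]               # h
true_counts = [2.0 + 0.05 * t for t in times]  # log10 CFU/g

def fit_slope(xs, ys):
    """Ordinary least-squares slope, i.e., the fitted growth rate."""
    mx = statistics.mean(xs)
    my = statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Propagate measurement error: perturb the data with normal noise and refit.
slopes = []
for _ in range(2_000):
    noisy = [y + random.gauss(0, 0.1) for y in true_counts]  # assumed sd
    slopes.append(fit_slope(times, noisy))

print(f"growth rate ~ {statistics.mean(slopes):.4f} "
      f"+/- {statistics.stdev(slopes):.4f} (1 s.d.)")
```

The resulting distribution of slopes is exactly the primary-parameter variation that, as noted above, would ideally be carried forward into the uncertainty of the secondary model parameters.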
