
GLOBAL SENSITIVITY ANALYSIS

Global sensitivity analysis is "the study of how the uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input". "Global" could be an unnecessary specification here, were it not for the fact that most analyses met in the literature are local or one-factor-at-a-time.

Global sensitivity analysis is of use for all models. Applications worked out by the Joint Research Centre group for Applied Statistics include atmospheric chemistry, transport emission modelling, fish population dynamics, composite indicators, hydrocarbon exploration models, macroeconomic modelling, and radioactive waste management.

Prescriptions have been issued for the global sensitivity analysis of models when these are used for policy analysis. In Europe, the European Commission recommends sensitivity analysis in the context of the extended impact assessment guidelines and handbook (2002). A similar recommendation appears in the United States EPA's White Paper on model use acceptability (1999).

The EC handbook for extended impact assessment, a working document by the European Commission, 2002, states: "A good sensitivity analysis should conduct analyses over the full range of plausible values of key parameters and their interactions, to assess how impacts change in response to changes in key parameters". The EPA paper (1999) is less prescriptive, but insists on the need for uncertainty and sensitivity analysis.

We list below the desirable properties of an ideal global sensitivity analysis method.

  1. Cope with the influence of scale and shape. The influence of the input should incorporate the effect of the range of input variation and the form of its probability density function (pdf). It matters whether the pdf of an input factor is uniform or normal, and what its distribution parameters are.
  2. Include multidimensional averaging. In a local approach to SA one computes partial derivatives, as discussed in the section on local methods below. This is the effect of the variation of a factor when all others are kept constant at their central (nominal) values. A global method should instead evaluate the effect of a factor while all others are varying as well.
  3. Be model independent. The method should work regardless of the additivity or linearity of the model. A global sensitivity measure must be able to appreciate the so-called interaction effect, especially important for non-linear, non-additive models. Interactions arise when the effect of changing two factors together is different from the sum of their individual effects (see the sketch after this list).
  4. Be able to treat grouped factors as if they were single factors. This property of synthesis is essential for the agility of the interpretation of the results. One would not want to be confronted with an SA made of dense tables of sensitivity measures.
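
As an illustration of the interaction effect in property 3, the following Python sketch (with a purely hypothetical model and coefficients, chosen only for illustration) shows a non-additive model where the joint effect of two factors differs from the sum of their individual effects:

    # Hypothetical non-additive model: y = x1 + x2 + 5*x1*x2.
    def model(x1, x2):
        return x1 + x2 + 5.0 * x1 * x2

    base = model(0.0, 0.0)
    effect_x1 = model(1.0, 0.0) - base    # individual effect of x1: 1.0
    effect_x2 = model(0.0, 1.0) - base    # individual effect of x2: 1.0
    effect_both = model(1.0, 1.0) - base  # joint effect: 7.0

    # The interaction is what the sum of individual effects misses.
    print(effect_both - (effect_x1 + effect_x2))  # 5.0

A one-factor-at-a-time analysis around the base point would miss this interaction entirely.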

(P1 = scale and shape; P2 = multi-dimensional averaging; P3 = model independence; P4 = grouping of factors)

Method                                                       P1    P2   P3   P4
Derivatives, local methods                                   N     N    N    Y
Regression methods (e.g. standardised regression coeff.)     Y     Y    N    N
Morris                                                       N/Y   Y    Y    Y
Variance based methods                                       Y     Y    Y    Y
Monte Carlo Filtering                                        Y     Y    Y    N

Table: Properties of sensitivity measures

A few words about the output Y of interest. In our experience, the target of interest should not be the model output per se, but the question that the model has been called to answer. To give an example, if a model predicts contaminant distribution over space and time, the output of interest would be the total area where a given threshold is exceeded at a given time, or the total health effects per unit of time.
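
As a minimal sketch of this idea (the grid, field, and threshold below are all hypothetical), a spatial contaminant field can be reduced to a single output of interest, the total area where a threshold is exceeded:

    import numpy as np

    # Hypothetical contaminant concentration field on a regular grid
    # (in practice this would come from a model run).
    rng = np.random.default_rng(0)
    concentration = rng.lognormal(mean=0.0, sigma=1.0, size=(100, 100))

    threshold = 3.0          # hypothetical regulatory threshold
    cell_area = 0.5 * 0.5    # hypothetical cell size (km^2)

    # Output of interest: total area where the threshold is exceeded.
    y = np.sum(concentration > threshold) * cell_area
    print(y)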

One should seek from the analyses conclusions of relevance to the question put to the model, as opposed to conclusions relevant to the model itself, e.g.:

  • Uncertainty in emission inventories [in transport] is driven by variability in driving habits more than by uncertainty in engine emission data.
  • In transport with chemical reaction problems, uncertainty in the chemistry dominates over uncertainty in the inventories.
  • Engineered barriers count less than geological barriers in radioactive waste migration.

This remark on the output of interest clearly applies to model use, not to model building, where the analyst might be interested in studying a variety of intermediate outputs.


MONTE CARLO (OR SAMPLE-BASED) ANALYSIS

Monte Carlo (MC) analysis is based on performing multiple evaluations with randomly selected model input, and then using the results of these evaluations to determine both the uncertainty in model predictions and the contribution of each input factor to this uncertainty. A MC analysis involves four steps:

  • selection of ranges and distributions for each input factor;
  • generation of a sample from the ranges and distributions specified in the first step;
  • evaluation of the model for each element of the sample;
  • uncertainty analysis and sensitivity analysis.
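
A minimal end-to-end sketch of these steps in Python, with plain random sampling and a hypothetical two-factor model standing in for the real one:

    import numpy as np

    rng = np.random.default_rng(42)

    # Steps 1-2: select distributions for each input factor and draw a sample.
    n = 10_000
    x1 = rng.uniform(0.5, 1.5, n)    # hypothetical uniform factor
    x2 = rng.normal(0.0, 0.3, n)     # hypothetical normal factor

    # Step 3: evaluate the (hypothetical, non-linear) model on each sample element.
    y = x1 * np.exp(x2)

    # Step 4: uncertainty analysis - summarise the output distribution.
    print("mean:", y.mean(), "variance:", y.var())
    print("95% interval:", np.percentile(y, [2.5, 97.5]))

Sensitivity analysis on the same sample can then be carried out with the regression-based measures described below.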

Various sampling procedures are used in MC studies. Among these are random sampling, stratified sampling (including Latin hypercube sampling), and quasi-random sampling.
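
For instance, a Latin hypercube sample can be drawn with scipy's quasi-Monte Carlo module (the factor bounds below are hypothetical):

    from scipy.stats import qmc

    # Latin hypercube sample of 64 points in 3 dimensions, drawn in the
    # unit cube and then rescaled to hypothetical factor ranges.
    sampler = qmc.LatinHypercube(d=3, seed=1)
    unit_sample = sampler.random(n=64)
    sample = qmc.scale(unit_sample, l_bounds=[0.0, -1.0, 10.0],
                       u_bounds=[1.0, 1.0, 20.0])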

Sensitivity measures based on the MC approach include regression-based measures (Standardised Regression Coefficients (SRC), Partial Correlation Coefficients (PCC), Standardised Rank Regression Coefficients (SRRC), Partial Rank Correlation Coefficients (PRCC)).
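
A minimal sketch of how the SRCs can be computed from a Monte Carlo sample (the linear test model is hypothetical): regress Y on the inputs by least squares, then rescale each coefficient by the ratio of the input to the output standard deviation.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5_000
    X = rng.normal(size=(n, 3))                # input sample, three factors
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

    # Least-squares regression of y on X (first column is the intercept).
    A = np.column_stack([np.ones(n), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]

    # Standardised regression coefficients: beta_i * std(x_i) / std(y).
    src = beta * X.std(axis=0) / y.std()
    print(src)   # roughly [0.97, 0.24, 0.0] for this hypothetical model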

Suggested references:

  • Helton JC, FJ Davis (2000) Sampling Based Methods. Chapter 6 in Mathematical and Statistical Methods for Sensitivity Analysis of Model Output. Edited by A. Saltelli, K. Chan, and M. Scott, John Wiley and Sons.
  • Helton JC (1993) Uncertainty and sensitivity analysis techniques for use in performance assessment for radioactive waste disposal. Reliability Engineering and System Safety, 42, 327-367.


RESPONSE SURFACE METHODOLOGY

This procedure is based on the development of a response surface approximation to the model under consideration. This approximation is then used as a surrogate for the original model in uncertainty and sensitivity analysis.

The analysis involves the following steps:

  • selection of ranges and distributions for each input factor;
  • development of an experimental design defining the combinations of factor values on which to evaluate the model;
  • evaluation of the model at the design points;
  • construction of a response surface approximation to the original model;
  • uncertainty analysis and sensitivity analysis.

Different types of experimental designs are available to select the points at which to evaluate the model. The choice of design points depends on several factors: the number of independent variables under consideration, the computational effort needed for each model evaluation, the presence of quadratic or higher-order effects, and the importance of variable interactions.

Sensitivity measures for the input factors are derived from the constructed response surface. This surface plays the same role in a response surface methodology as the Taylor series in a differential analysis.
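
A minimal sketch, assuming two input factors and a full quadratic response surface fitted by least squares (model, design, and ranges are all hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):                                  # hypothetical expensive model
        return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

    def quad_design(x):                            # full quadratic basis in 2 factors
        return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

    # Design points (a simple random design stands in here for a formal
    # experimental design).
    X = rng.uniform(-1.0, 1.0, size=(50, 2))
    coef = np.linalg.lstsq(quad_design(X), model(X), rcond=None)[0]

    # The fitted surface is now a cheap surrogate, e.g. for Monte Carlo
    # uncertainty analysis on a much larger sample.
    X_big = rng.uniform(-1.0, 1.0, size=(100_000, 2))
    y_hat = quad_design(X_big) @ coef
    print(y_hat.mean(), y_hat.var())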


SCREENING DESIGNS

Factor screening may be useful as a first step when dealing with a model containing a large number of input factors (hundreds). By input factor we mean any quantity that can be changed in the model prior to its execution. This can be a model parameter, an input variable, or a model scenario. Often, only a few of the input factors and groupings of factors have a significant effect on the model output.

Screening experiments are used to identify the subset of factors that controls most of the output variability with a relatively low computational effort. As a drawback, these economical methods tend to provide qualitative sensitivity measures, i.e. they rank the input factors in order of importance but do not quantify how much more important a given factor is than another.

Typical screening designs are one-at-a-time (OAT) experiments, in which the impact of changing the values of each of the chosen factors is evaluated in turn. Although simple, easy to implement, and computationally cheap, the OAT methods have a limitation in that they do not enable estimation of interactions among factors and usually provide a sensitivity measure that is local (around a given point of the input space).
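
A minimal OAT sketch around a nominal point (model, nominal values, and perturbation size are hypothetical):

    import numpy as np

    def model(x):                             # hypothetical model
        return x[0] ** 2 + x[1] + 0.1 * x[2]

    nominal = np.array([1.0, 1.0, 1.0])       # hypothetical nominal point
    delta = 0.1                               # hypothetical perturbation

    base = model(nominal)
    for i in range(len(nominal)):
        x = nominal.copy()
        x[i] += delta                         # move one factor at a time
        print(f"factor {i}: effect = {model(x) - base:+.3f}")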

An OAT design that is not dependent on the choice of the specific point in the input space is that proposed by Morris.

Alternative approaches to the problem of screening include the design of Cotter, the Iterated Fractional Factorial Designs (IFFDs) introduced by Andres and Hajas, the sequential bifurcation method proposed by Bettonvil, and the method proposed by Morris, which, while still being an OAT experiment, covers the whole input factor space.
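
The following is a simplified sketch of Morris-style elementary effects (not the full levels-based design of the original paper); the factors are assumed scaled to [0, 1] and the model is hypothetical. The mean of the absolute elementary effects ranks the factors, while their standard deviation flags non-linearity or interactions.

    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):                                  # hypothetical model
        return x[0] + 2.0 * x[1] ** 2 + 0.0 * x[2]

    k, r, delta = 3, 20, 0.25                      # factors, trajectories, step size

    ee = np.zeros((r, k))                          # elementary effects
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # random starting point in [0,1]^k
        for i in rng.permutation(k):               # perturb one factor at a time
            x_step = x.copy()
            x_step[i] += delta
            ee[t, i] = (model(x_step) - model(x)) / delta
            x = x_step                             # walk along the trajectory

    mu_star = np.abs(ee).mean(axis=0)              # mean |EE|: overall importance
    sigma = ee.std(axis=0)                         # std of EE: non-linearity/interactions
    print(mu_star, sigma)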

Suggested references:

  • Morris, M.D., 1991. Factorial sampling plans for preliminary computational experiments. Technometrics, 33(2), 161-174.
  • Campolongo, F., Kleijnen, J., and Andres, T., 2000. Screening methods in sensitivity analysis. Chapter 4 in Sensitivity Analysis, A. Saltelli, K. Chan, and M. Scott, Eds. John Wiley and Sons.
  • Banks, J. (Ed.), 1998. Handbook of Simulation. Wiley, New York.
  • Welch, W.J., Buck, R.J., Sacks, J., Wynn, H.P., Mitchell, T.J., and Morris, M.D., 1992. Screening, predicting, and computer experiments. Technometrics, 34(1), 15-47.
  • Campolongo, F., Tarantola, S., and Saltelli, A., 1999. Tackling quantitatively large dimensionality problems. Computer Physics Communications, 117, 75-85.


LOCAL - DIFFERENTIAL ANALYSIS

Local SA investigates the impact of the input factors on the model locally, i.e. at some fixed point in the space of the input factors. Local SA is usually carried out by computing partial derivatives of the output functions with respect to the input variables (differential analysis). In order to compute the derivative numerically, the input parameters are varied within a small interval around a nominal value. The interval is not related to our degree of knowledge of the variables and is usually the same for all of the variables.

One shortcoming of the linear sensitivity approach is that it is not possible to assess effectively the impact of possible differences in the scale of variation of the input variables, unless the model itself is linear. When significant uncertainty exists in the input parameters, the linear sensitivities alone are not likely to provide a reliable estimator of the output uncertainty in the model. When the model is non-linear and various input variables are affected by uncertainties of different orders of magnitude, a global sensitivity method should be used.

Differential analysis techniques are based on the use of a Taylor series to approximate the model under consideration. Once constructed, this series can be used as a surrogate for the original model in analytical uncertainty and sensitivity studies.

A differential analysis involves four steps:

  • base values and ranges are selected for each input factor;
  • a Taylor series approximation to the output is developed around the base values for the inputs;
  • variance propagation techniques are used to estimate the uncertainty in the output in terms of its expected value and its variance;
  • the Taylor series approximation is used to estimate the importance of individual input factors.

In the fourth step, there are different ways of measuring the importance of the input factors. For example, normalised partial derivatives in the first-order Taylor series approximation can measure the effect on the solution that results from perturbing an input factor by a fixed fraction of its base value.
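
A minimal sketch of steps two to four, using central finite differences for the derivatives and first-order variance propagation, Var(Y) ≈ Σ (∂Y/∂x_i)² σ_i² (model, base values, and input standard deviations are hypothetical):

    import numpy as np

    def model(x):                            # hypothetical model
        return x[0] * np.exp(x[1]) + x[2]

    base = np.array([1.0, 0.5, 2.0])         # hypothetical base values
    sigma = np.array([0.1, 0.05, 0.2])       # hypothetical input std. devs.
    h = 1e-6

    # Central finite-difference estimate of the partial derivatives.
    grad = np.zeros_like(base)
    for i in range(len(base)):
        up, down = base.copy(), base.copy()
        up[i] += h
        down[i] -= h
        grad[i] = (model(up) - model(down)) / (2.0 * h)

    # First-order variance propagation and normalised sensitivities.
    var_y = np.sum((grad * sigma) ** 2)
    normalised = grad * base / model(base)   # effect of a fixed fractional change
    print(var_y, normalised)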

One problem arising in a differential analysis is the determination of an appropriate order for the Taylor series approximation. Estimates for expected value and variance, in the third step, vary according to the order of approximation.

Differential analysis methods have been used extensively in chemistry in a variety of applications, such as the solution of inverse problems, where they have proven their worth. Nevertheless, the use of global methods, possibly quantitative, should be preferred to derivative-based SA for all problem settings where finite parameter variations are involved, unless the model is known to be linear or the range of variation is small.

Suggested references:

  • Turanyi, T., and Rabitz, H. Local methods and their applications. Chapter 5 in Mathematical and Statistical Methods for Sensitivity Analysis of Model Output. Edited by A. Saltelli, K. Chan, and M. Scott, John Wiley and Sons.

  • Turanyi, T., 1990. Reduction of large reaction mechanisms. New J. Chem., 14, 795-803.


FORM-SORM

FORM and SORM are useful methods when the analyst is not interested in the magnitude of Y (and hence its potential variation) but in the probability of Y exceeding some critical value Ycrit. The condition Y - Ycrit = 0 determines a hyper-surface in the space of the input factors X. The minimum distance between a design point for X and this hyper-surface is the quantity of interest.

Let B denote such a minimum distance for some assigned joint distribution of the input X. In this setting one can choose as sensitivity measure the derivative of B with respect to the input factors. Such a quantity should not be confused with the local derivative of Y with respect to the inputs, as the action of taking the minimum over the space of X introduces an element of probabilistic weighting. The First Order Reliability Method (FORM) offers such a probabilistic measure: it gives an estimate of how much a given input factor may drive the risk (probability of failure) of the system.
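
A minimal FORM sketch for the special case of a linear limit state with independent standard normal inputs (coefficients hypothetical; a general implementation would iterate to find the design point): here B is simply the distance from the origin to the limit-state surface, the failure probability is Phi(-B), and the components of the unit normal at the design point indicate how strongly each factor drives the risk.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical linear limit state g(u) = a . u + b in standard normal
    # space; the system fails when g(u) < 0.
    a = np.array([2.0, -1.0, 0.5])
    b = 4.0

    B = b / np.linalg.norm(a)        # reliability index: min distance to surface
    pf = norm.cdf(-B)                # FORM estimate of the failure probability
    alpha = a / np.linalg.norm(a)    # unit normal: per-factor importance
    print(B, pf, alpha)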

Suggested references:

  • Cawlfield, J. Reliability algorithms (FORM and SORM). Chapter 7 in Mathematical and Statistical Methods for Sensitivity Analysis of Model Output. Edited by A. Saltelli, K. Chan, and M. Scott, John Wiley and Sons.
