The Future of Sensitivity Analysis: An essential discipline for systems modeling and policy support
Sensitivity analysis (SA) is en route to becoming an integral part of mathematical modeling.
Sensitivity analysis (SA), in the most general sense, is the study of how the ‘outputs’ of a ‘system’ are related to, and are influenced by, its ‘inputs’.
These inputs, often termed ‘factors’ in SA, may include model parameters, forcing variables, boundary and initial conditions, choices of model structural configurations, assumptions and constraints.
SA also serves scientific discovery by helping to explore causalities within the system under study.
SA has roots in ‘design of experiments’ (DOE), a broad family of statistical methods for planning experiments so that the influence of multiple factors on a response can be assessed efficiently.
We believe that SA is en route to becoming a mature and independent, but interdisciplinary and enabling, field of science.
This paper offers opinions on the possible and desirable future evolution of SA science and practice.
The modern era of SA has focused on a notion that is commonly referred to as ‘Global Sensitivity Analysis (GSA)’.
Such measures are said to be ‘derivative-based’ as they either analytically compute derivatives or numerically quantify the change in output when factors of interest (continuous or discrete) are perturbed around a point.
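For illustration, below is a minimal sketch of a derivative-based measure, using one-at-a-time central finite differences around a nominal point; the model function and nominal values are hypothetical placeholders.

```python
import numpy as np

def model(x):
    # Placeholder model: replace with the system under study.
    return x[0] ** 2 + 3.0 * x[1] + np.sin(x[2])

def local_sensitivities(model, x0, rel_step=1e-6):
    """One-at-a-time central-difference derivatives at the nominal point x0."""
    x0 = np.asarray(x0, dtype=float)
    s = np.zeros_like(x0)
    for i in range(x0.size):
        h = rel_step * max(abs(x0[i]), 1.0)     # step scaled to the factor
        x_up, x_dn = x0.copy(), x0.copy()
        x_up[i] += h
        x_dn[i] -= h
        s[i] = (model(x_up) - model(x_dn)) / (2.0 * h)
    return s

print(local_sensitivities(model, [1.0, 2.0, 0.5]))
```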
The full variance-based SA framework was laid down by Ilya Sobol’ in 1993.
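A minimal variance-based sketch follows, assuming the open-source SALib package and its Ishigami test function (a standard SA benchmark); the sample size and bounds are illustrative choices.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from SALib.test_functions import Ishigami

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

X = saltelli.sample(problem, 1024)   # Sobol'/Saltelli sampling design
Y = Ishigami.evaluate(X)             # model evaluations
Si = sobol.analyze(problem, Y)       # first-order (S1) and total-order (ST) indices
print(Si["S1"], Si["ST"])
```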
One persistent issue in SA is that nearly all applications, regardless of the method used, rest on the assumption that inputs are uncorrelated. Yet ignoring correlation effects and the multivariate distributional properties of inputs can largely bias, or even falsify, SA results.
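One common (though not the only) way to respect input dependence is to sample from a joint distribution, for example via a Gaussian copula; in the sketch below the marginals and correlation matrix are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])                  # assumed input correlation
z = rng.multivariate_normal(np.zeros(2), corr, size=10_000)
u = stats.norm.cdf(z)                          # map to uniform [0, 1] margins
x1 = stats.uniform(loc=0.0, scale=1.0).ppf(u[:, 0])   # assumed marginal for x1
x2 = stats.lognorm(s=0.5).ppf(u[:, 1])                # assumed marginal for x2
# Any SA estimator applied to (x1, x2) now sees the joint dependence structure.
```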
Applications of SA are widespread across many fields, including earth system modeling (Wagener and Pianosi, 2019), engineering (Guo et al., 2016), biomechanics (Becker et al., 2011), water quality modeling (Koo et al., 2020a and 2020b), hydrology (Shin et al., 2013; Haghnegahdar and Razavi, 2017), water security (Puy et al., 2020c), nuclear safety (Saltelli and Tarantola, 2002; Iooss and Marrel, 2019) and epidemiology.
Yet SA is not a formally recognized discipline, and its application in some fields may appear under other titles.
Two questions should precede any application of SA: first, why do I need to run SA for a given problem, and what is the underlying question that SA is expected to answer? And second, how should I design the SA experiment to address that underlying question?
A related recommendation is to teach SA more broadly and consistently across disciplines.
A dominant application of SA is parameter screening: supporting model calibration by identifying and fixing non-influential parameters.
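A minimal screening sketch using the Morris elementary-effects method, again assuming SALib, is given below; factors with small mu* are candidates for fixing, and the test function and settings are illustrative.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze
from SALib.test_functions import Ishigami

problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

X = morris_sample.sample(problem, N=100, num_levels=4)  # Morris trajectories
Y = Ishigami.evaluate(X)
res = morris_analyze.analyze(problem, X, Y, num_levels=4)
print(res["mu_star"])   # screening measure: mean absolute elementary effect
```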
Management of uncertainty through its characterization and attribution should be at the heart of the scientific method and, a fortiori, of the use of science for policy.
Meanwhile, models are becoming more and more complex, yet they are treated more and more like black boxes, even by model developers themselves.
SA has significant potential to help in diagnosing the behavior of a mathematical model and in assessing how plausibly the model mimics the system under study for the given application.
To diagnostically test a model, one may compare SA results with expert knowledge on how the underlying system being modeled works.
Most models are poorly identifiable, largely because of over-parameterization relative to the data and information available.
SA and identifiability analysis (IA) are different but complementary: an insensitive parameter is non-identifiable, but the converse is not necessarily true; that is, a sensitive parameter may or may not be identifiable.
Model reduction, however, should be done with caution. A parameter that seems non-influential under a particular condition might become quite influential under a new condition, and fixing parameters that have small sensitivity indices may result in model variations that cannot be explained in the lower-dimensional space.
Development of research-specific software is at the core of modern modeling efforts.
Computational burden has been a major hindrance to the application of modern SA methods to real-world problems.
The application of SA with machine learning is further complicated by the fundamental differences between machine learning models and other types of models.
It also calls for mutual trust between model developers and end users.
The future, therefore, needs new generations of algorithms to keep pace with the ever-increasing complexity and dimensionality of the state-of-the-art models.
A complete assessment of the computational performance of any SA algorithm must be conducted across four aspects: efficiency, convergence, reliability and robustness.
An SA algorithm is robust to sampling variability if its performance remains nearly identical when applied to two different sample sets drawn from the same model.
Bootstrapping (Efron, 1987) is often used with SA algorithms to estimate robustness in the form of uncertainty distributions on sensitivity indices, without requiring additional model evaluations.
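A minimal sketch of such a bootstrap follows: it resamples an existing (x, y) sample with replacement and recomputes a sensitivity index, with no new model runs. The index used here (squared Spearman rank correlation) is purely an illustrative choice.

```python
import numpy as np
from scipy import stats

def index(x, y):
    # Illustrative monotonic sensitivity measure: squared rank correlation.
    return stats.spearmanr(x, y)[0] ** 2

rng = np.random.default_rng(0)
x = rng.uniform(size=500)
y = x ** 2 + 0.1 * rng.normal(size=500)   # stand-in for stored model output

boot = []
for _ in range(1000):
    idx = rng.integers(0, x.size, x.size)  # resample (x, y) pairs with replacement
    boot.append(index(x[idx], y[idx]))

# Point estimate plus a bootstrap 95% uncertainty interval on the index.
print(index(x, y), np.percentile(boot, [2.5, 97.5]))
```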
The function evaluation procedure is typically the most computationally intensive component of SA.
The future of SA may therefore move further towards ‘sampling-free’ algorithms that can work on any ‘given data’.
More recently, authors have proposed given-data sensitivity estimation procedures based on nearest neighbors (Broto, 2020), rank statistics (Gamboa et al., 2020) and robustness-based optimization (Sheikholeslami and Razavi, 2020).
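As a rough illustration of the given-data idea, the sketch below implements a first-order index estimator in the spirit of the rank-statistics approach of Gamboa et al. (2020): it pairs each output with the output of its nearest x-rank neighbor, and it needs only an existing (X, Y) sample rather than a dedicated sampling design. This is an illustrative implementation, not the authors' reference code.

```python
import numpy as np

def first_order_rank(x, y):
    """Rank-based estimate of the first-order sensitivity index of y to x."""
    order = np.argsort(x)              # sort the sample by the input of interest
    y_sorted = y[order]
    y_pair = np.roll(y_sorted, -1)     # pair each y with its x-rank neighbor
    num = np.mean(y_sorted * y_pair) - np.mean(y) ** 2
    return num / np.var(y)

rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, size=(20_000, 3))
# Ishigami test function (a = 7, b = 0.1), written out directly.
y = np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 \
    + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

print([first_order_rank(x[:, i], y) for i in range(3)])  # one index per input
```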
Higher dimensionality exacerbates the difficulty of assigning multivariate distributions to uncertain inputs, and such cases require excessively large sample sizes.
Because sensitivity indices are themselves estimated quantities, they should be reported with an uncertainty estimate; it is notable that only a minority of works apply this quantification systematically. In effect, this amounts to a sensitivity analysis of the sensitivity analysis.
Informal (and often local) SA has contributed, and will continue to contribute, to a variety of decision-making problems.
A key purpose of SA in this context is to understand whether the current state of knowledge on input uncertainty is sufficient to enable a decision to be taken.
Computational burden remains a major hindrance to the application of SA in the cases where it can be most useful, such as high-dimensional problems.