Because environmental, water resource, and other mathematical models are often extremely complex, evaluating the degree to which each model input contributes to uncertainty in the output is a crucial part of model development and calibration. To date, however, few studies have assessed the quality of these so-called sensitivity analyses, and those that have typically rely on bootstrapping, a technique that is not appropriate for small sample sizes.
To more efficiently evaluate the quality of sensitivity analyses without needing to perform bootstrapping or additional, time-consuming model runs, Mai and Tolson have developed a new technique called model variable augmentation (MVA). This approach augments the original model inputs with additional, known variables whose sensitivities are, in turn, used to gauge the reliability of the sensitivities determined for the original parameters.
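The core idea can be illustrated with a simple sketch: add an inert "dummy" input that the model ignores, estimate sensitivity indices for all inputs, and use how far the dummy's index strays from its known value of zero as a gauge of estimation noise. The toy model, variable names, and the basic pick-freeze Sobol' estimator below are illustrative assumptions, not the authors' actual MVA procedure.

```python
import numpy as np

# Sketch of the idea behind augmenting a model with a known variable:
# the dummy input (column 2) does not affect the model, so its true
# first-order Sobol' index is exactly zero. How far its estimate lands
# from zero hints at the sampling noise in the other estimates.
# (Illustrative toy setup only, not the paper's MVA method.)

rng = np.random.default_rng(0)

def model(x):
    # Toy model: depends on inputs 0 and 1 only; input 2 is the dummy.
    return x[:, 0] + 0.1 * x[:, 1]

n, d = 8192, 3
A = rng.uniform(-1.0, 1.0, (n, d))
B = rng.uniform(-1.0, 1.0, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol' indices via a simple pick-freeze estimator.
S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]  # share only input i with sample A
    S.append((np.mean(fA * model(ABi)) - np.mean(fA) * np.mean(fB)) / var)

print([round(s, 3) for s in S])
```

Here the dominant input receives an index near 1, while the dummy's small but nonzero estimate gives a rough noise floor below which rankings of the real inputs should not be trusted.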
To test the effectiveness of this new approach, the authors applied MVA to Sobol’ and PAWN, two standard sensitivity analysis methods, and checked the consistency of their results using two hydrologic models. The results demonstrate clear advantages of using MVA: The new method revealed and resolved a weakness in the PAWN sampling strategy and also produced more accurate Sobol’ rankings of model input sensitivities regardless of whether the team conducted large or small numbers of model runs.
Because MVA works with any number of model runs and can be applied to any method of sensitivity analysis that is based on random sampling, it is likely to be of broad interest to the research community. Modelers in many fields will be able to take advantage of this new approach instead of needing to rely upon their own physical understanding of the modeled system. (Water Resources Research, https://doi.org/10.1029/2018WR023382, 2019)
—Terri Cook, Freelance Writer