
This manuscript has been submitted for peer review in the journal MethodsX. It has not yet been formally accepted and the final version of the manuscript might differ slightly from the current version. If accepted, the final manuscript will be available via the peer-reviewed DOI link on this webpage.

Method Article

Matlab/R workflows to assess critical choices in Global Sensitivity Analysis using the SAFE toolbox

Valentina Noacco (a), Fanny Sarrazin (a), Francesca Pianosi (a), Thorsten Wagener (a,b)

(a) Department of Civil Engineering, University of Bristol, Bristol, BS8 1TR, UK.
(b) Cabot Institute, University of Bristol, Bristol, BS8 1UJ, UK.

Contact email: [email protected]

Keywords: earth system modelling, sensitivity analysis, input variability, reproducibility, uncertainty analysis, output metric, sample size, simulation performance, screening, input interactions

Graphical abstract

Abstract

Global Sensitivity Analysis (GSA) is a set of statistical techniques to investigate the effects of the uncertainty in the input factors of a mathematical model on the model's outputs. The value of GSA for the construction, evaluation, and improvement of earth system models is reviewed in a companion paper by Wagener and Pianosi [1]. The present paper focuses on the implementation of GSA and provides a set of workflow scripts to assess the critical choices that GSA users need to make before and while executing GSA. The workflows proposed here can be adopted by GSA users and easily adjusted to a range of GSA methods. We demonstrate how to interpret the outcomes resulting from these different choices and how to revise the choices to improve GSA quality, using a simple rainfall-runoff model as an example. We implement the workflows in the SAFE toolbox, a widely used open-source software for GSA available in MATLAB and R.

• The workflows aim to contribute to the dissemination of good practice in GSA applications.
• The workflows are well-documented and reusable, as a way to ensure robust and reproducible computational science.

Specifications table

Subject area: Earth and Planetary Sciences
More specific subject area: Uncertainty and Sensitivity Analysis in Earth System Modelling
Method name: Global Sensitivity Analysis
Name and reference of original method: Wagener and Pianosi (in review) [1]
Resource availability: SAFE toolbox available at: www.safetoolbox.info

Method details

Global Sensitivity Analysis (GSA) is a set of statistical techniques to assess the effects of the uncertainty and variability in the input factors of a mathematical model on the model's output(s) [2]. GSA has been shown to improve the construction and evaluation of earth system models and to maximise the information content that is extracted from model predictions [1]. The application of GSA involves running Monte Carlo simulations and post-processing the input-output samples. The latter typically consists of calculating a set of sensitivity indices for the different input factors of the model. GSA users need to make a number of choices to set up their GSA application. All these choices can significantly affect the GSA results, i.e. the estimated sensitivity indices, the consequent ordering of the most influential input factors ('ranking' in the GSA jargon), and the identification of non-influential input factors ('screening'). Therefore, thorough monitoring of the choices made is crucial to ensure the transparency and reproducibility of GSA results.
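To make these terms concrete, here is a minimal sketch of how ranking and screening follow from a set of estimated sensitivity indices (the values and the threshold below are illustrative only, not SAFE toolbox output):

% Minimal sketch: ranking and screening from estimated sensitivity indices.
S = [0.02 0.10 0.45 0.05 0.38];          % illustrative sensitivity indices
labels = {'Sm','beta','alfa','Rs','Rf'}; % input factor names
[~,order] = sort(S,'descend');           % ranking: most to least influential
disp(labels(order))
threshold = 0.08;                        % illustrative screening threshold
disp(labels(S < threshold))              % factors screened as non-influential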

In the last decade the topic of reproducibility of scientific results has gained growing interest, and concerns have been raised that a substantial number of scientific papers fall short of this standard [3,4]. In particular, several papers (e.g. [5–7]) have stressed the importance of transparency and reproducibility in computational science to gain trust in the robustness of scientific results. Nonetheless, especially for large-scale, computationally expensive and time-consuming studies, reproducibility can be difficult to achieve. One way to overcome this problem is to share the code that was developed to produce the scientific results, along with well-documented and reusable workflows. The latter should combine the code and the data to produce the published scientific results [6].

In this paper, we implement several GSA techniques by means of the SAFE (Sensitivity Analysis For Everybody) toolbox, which is available as Matlab code (SAFE version R1.1) and as an R package (SAFER version R1.0) [8]. A Python version of the SAFE toolbox is also currently under development. SAFE is an open-source software which has been adopted by over 1700 academic users worldwide (as of November 2018). The SAFE toolbox includes workflow scripts to guide users in applying GSA. The added value of the present paper is to provide workflows which make explicit the range of choices one should consider before and while running GSA, and to show the implications of different choices through practical examples. The rationale for the choices ('remarks') discussed in this paper is described in Section 2 of [1]. The workflows shown here aim to provide a basis for good practice and will help GSA users in their reproducibility quest, since they are generic and easy to customise. In turn, this will help produce more transparent and robust GSA results, as the choices of the GSA users are made explicit and the implications of these choices explored.

For illustration purposes, this paper uses the rainfall-runoff model HyMod, introduced by [9] and described in [10]. The model takes time series of precipitation and potential evapotranspiration as input and produces a time series of streamflow predictions as output. It includes five parameters: two parameters which control the soil moisture component and therefore the water balance ('beta' and 'Sm'), and three parameters which control the routing component and therefore the streamflow dynamics ('alfa', which partitions the flow between slow and fast reservoirs, and 'Rs' and 'Rf', which define the residence time of the slow and fast flow reservoirs, respectively). The application study site is the Leaf River catchment, a 1950 km² catchment located north of Collins, Mississippi, USA, and described in [11].

Following this introduction, we provide introductory code that sets up the case study. We then discuss critical choices in GSA applications through a series of remarks. We provide the code in Matlab/Octave below, and in R in the Supplementary material. The introductory code reports the basic steps that are necessary to generate the input-output samples used in the subsequent remarks. Each remark can be run independently, but the order of the remarks follows what the authors believe is the natural sequence of choices GSA users would make in their analysis. For each remark, we report the code used to perform the analysis and the main figure showing the results, with a brief explanation of the results and their implications.

Introductory code to run before any subsequent analysis


% First add the path to the directory where the SAFE toolbox is located.
my_dir = pwd; % use the 'pwd' command if you have already set up the Matlab
% current directory to the SAFE directory.
% Otherwise, you may define 'my_dir' manually by giving the path to the SAFE directory, e.g.:
% my_dir = '/Users/valentinanoacco/Documents/safe_R1.2';

% Set current directory to 'my_dir' and add path to sub-folders:
cd(my_dir)
addpath(genpath(my_dir))

% Load the data (here time series of rainfall, potential evaporation and streamflow):
load -ascii LeafCatch.txt
rain = LeafCatch(1:1095,1); % Choose the number of years of data to run the model (here 3 years are used)
evap = LeafCatch(1:1095,2);
flow = LeafCatch(1:1095,3);

% Define the input factors subject to GSA (here the 5 parameters of the HyMod model) and their distributions and ranges:
DistrFun = 'unif'; % Probability distribution function of the input factors (here uniform)
DistrPar = {[0 400]; [0 2]; [0 1]; [0 0.1]; [0.1 1]}; % Ranges of the input factors (here they come from the literature)
x_labels = {'Sm','beta','alfa','Rs','Rf'}; % Names of the input factors

% Define the output function:
myfun = 'hymod_MulOut';

% The hymod_MulOut function returns the following output metrics:
% f(1) RMSE (Root Mean Square Error between the time series of observed and simulated streamflow)
% f(2) BIAS (Bias between the time series of observed and simulated streamflow)
% f(3) MEAN (Mean of the time series of simulated streamflow)
% f(4) STANDARD DEVIATION (Standard deviation of the time series of simulated streamflow)
% f(5) VARIANCE (Variance of the time series of simulated streamflow)

% Sample the input factors' space:
SampStrategy = 'lhs'; % Sampling strategy (here Latin Hypercube)
N = 3000; % Sample size
M = length(DistrPar); % Number of input factors
X = AAT_sampling(SampStrategy,M,DistrFun,DistrPar,N);

% Run the model:
Y = model_evaluation(myfun,X,rain,evap,flow);

% Visualise the input-output samples through scatter plots (as an example for RMSE):
figure; scatter_plots(X,Y(:,1),[],'RMSE',x_labels); % This figure is shown in Figure 1

Figure 1. Scatter plots of the output samples (RMSE of streamflow predictions) against the five input factors (the 5 model parameters): maximum soil moisture (Sm [mm]), exponent in the soil moisture routine (beta [-]), partition coefficient between fast and slow flow routing (alfa [-]), coefficient of slow reservoir residence time (Rs [1/dd]) and coefficient of fast reservoir residence time (Rf [1/dd]). Roughly uniformly scattered points (e.g. Sm) indicate low sensitivity, while clear patterns when moving along the horizontal axis (e.g. alfa) denote higher sensitivity.

Remark 1: Multiple definitions of the model output are possible

The problem. GSA requires the model output to be a scalar. Nonetheless, most earth system models yield time- and/or space-distributed outputs, which then need to be summarised in a scalar output metric for the purpose of the GSA, such as a performance metric or a statistic of the simulated variables. A choice of which scalar output metric to use is then required. Therefore, it is important to analyse the impact that different choices have on the GSA results. This remark is discussed in section 2.1 of [1].

Implementation details. To illustrate this point, we use the Regional Sensitivity Analysis (RSA) method, also called Monte Carlo filtering [2], first proposed by [12] and [13]. We use the implementation of RSA based on grouping, introduced in [10], where the output values are ranked and divided into a prescribed number of groups (here ten), each with an equal number of output samples. For each group, the corresponding Cumulative Distribution Function (CDF) is derived for each input. Then, for each input, a sensitivity index that measures the difference between the CDFs of the various groups is derived. In the analysis below we compute the maximum vertical distance between the CDFs (i.e. the Kolmogorov-Smirnov statistic [14,15]). In this example, four definitions of the model output are considered: two performance metrics (the root mean squared error, denoted as 'RMSE', and the absolute mean error, denoted as 'bias') and two statistics (the mean and the standard deviation of the simulated streamflow).
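As a minimal illustration of the statistic used here, the following sketch computes the maximum vertical distance between the empirical CDFs of an input factor in two groups (illustrative code with synthetic samples, not the RSA_indices_groups implementation):

% Minimal sketch of the Kolmogorov-Smirnov (KS) statistic between the
% empirical CDFs of one input factor in two output groups.
x1 = rand(500,1);      % synthetic samples of the input in group 1
x2 = rand(500,1).^2;   % synthetic samples of the input in group 2
xx = sort([x1; x2]);   % common evaluation points
F1 = arrayfun(@(v) mean(x1 <= v), xx); % empirical CDF of group 1
F2 = arrayfun(@(v) mean(x2 <= v), xx); % empirical CDF of group 2
KS = max(abs(F1 - F2)); % maximum vertical distance between the CDFs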

The code

% Here the following output metrics from the hymod_MulOut.m function are used:
% f(1) RMSE
% f(2) BIAS
% f(3) MEAN
% f(4) STANDARD DEVIATION

Y = Y(:,1:4); % Define the output metrics considered among those available in hymod_MulOut
n_out = size(Y,2); % Number of output metrics considered

flag = 2; % Select the statistic to compute the sensitivity indices (here the maximum vertical distance or Kolmogorov-Smirnov statistic between CDFs)
ngroup = 10; % Number of groups into which the output values are divided

for ii=1:n_out
    [Srsa(ii,:),idx(:,ii),Yk(:,ii)] = RSA_indices_groups(X,Y(:,ii),ngroup,flag); % Calculate the sensitivity indices for each output metric
end

% Plot the sensitivity indices:
for ii = 1:n_out
    figure % This figure is shown in Figure 2
    boxplot1(Srsa(ii,:),x_labels,'Sensitivity index (RSA)')
end

The results

Figure 2. Sensitivity indices of four different output metrics: a) RMSE, b) bias, c) the mean of the simulated flow, d) the standard deviation (std) of the simulated flow.

From Figure 2 we can see that different definitions of the output metric result in a different ordering of importance of the input factors. For instance, RMSE and the standard deviation of the simulated flow are mainly influenced by alfa, Rs and Rf (Figure 2.a and 2.d), while bias and the mean of the simulated flow are mainly influenced by Sm and beta (Figure 2.b and 2.c). This is expected, because different model outputs capture different model behaviours: in this example, RMSE and the standard deviation of the simulated flow highlight the model dynamics (which are controlled by alfa, Rs and Rf), while bias and the mean of the simulated flow highlight the water balance (which is controlled by Sm and beta).

Interpretation of the results. Now we look for an explanation of the results obtained by plotting the input factors' CDFs. The additional code used is reported below and Figure 3 provides detailed explanations.

% Plot the input factors' CDFs:
for ii = 1:n_out
    figure; RSA_plot_groups(X,idx(:,ii),Yk(:,ii),[],x_labels); % Figure 3
end

Figure 3. Cumulative Distribution Functions (CDFs) of the input factors for the four definitions of the output metric: a) RMSE, b) bias, c) mean(flow), d) std(flow). The sensitivity index for each input in Figure 2 was derived as the maximum vertical distance (mvd) between the above CDFs of each input factor (see the arrow in the right plot of Figure 3.a). From this figure we can see that the higher sensitivity of RMSE (a) and std(flow) (d) to parameters alfa, Rs and Rf is due to the larger spread between the CDFs of these input factors. A similar observation holds for bias (b) and mean(flow) (c), but in this case the largest spread among CDFs is that of input factors Sm and beta.

Implications. The choice of the definition of the model output should be based on the intended use of the model. Multiple definitions of the model output are advised, as they allow further insight to be gained into the model behaviour, and allow one to check whether the model behaves as expected.

Remark 2: Sample size also affects GSA results, so robustness of GSA results should be checked

The problem. The analytical computation of the sensitivity indices is generally not possible, and sensitivity indices are typically approximated using an input-output sample obtained through Monte Carlo simulations. The choice of the sample size is therefore critical and should be evaluated so as to find a good balance between the need for robust (i.e. sample-independent) results and the need to limit the computational cost of the analysis. In this section we demonstrate how to check that the sample size is large enough to obtain robust sensitivity analysis results, following the guidelines proposed in [16]. This remark is also discussed in section 2.5 of [1].

Implementation details. To illustrate this remark, we use the Elementary Effect Test (EET) [2], also called the method of Morris [17]. The EET consists of calculating a number (r) of finite differences (also called Elementary Effects, EEs) for the different input factors at different points of the input factors' space. The sensitivity index is then computed as the mean of the EEs across the input factors' space. Contrary to RSA, the EET requires a tailored sampling strategy to calculate the sensitivity indices. Therefore, we cannot build on the samples drawn in the introductory code, and we need to derive a tailored input-output sample. In this application, the output metric is the performance metric RMSE.
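As a reminder of what a single Elementary Effect is, the sketch below computes one finite difference for one input factor of a hypothetical scalar model (f, x and Delta are illustrative, not SAFE variables):

% Minimal sketch of one Elementary Effect (EE) for input factor i:
% a finite difference of the output when only factor i is perturbed.
f = @(x) x(1)^2 + 2*x(2); % hypothetical scalar model
x = [0.3 0.5];            % a point in the (normalised) input space
Delta = 0.1;              % perturbation step
i = 1;                    % index of the perturbed input factor
x_pert = x; x_pert(i) = x_pert(i) + Delta;
EE_i = (f(x_pert) - f(x)) / Delta; % elementary effect of factor i at x
% The EET index is the mean of |EE_i| over r such points of the input space.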

The code

% Derive the tailored input factors' sample to perform the EET:
r = 20; % Number of EEs
design_type = 'radial'; % Design strategy to compute the EEs (here radial)
Xeet = OAT_sampling(r,M,DistrFun,DistrPar,SampStrategy,design_type); % EET requires One-At-a-Time sampling. The size of the input sample is (r*(M+1),M)

% Define the output function:
myfun = 'hymod_MulOut'; % We use the same output function as in the introductory code
iY = 1; % Define the index of the output metric(s) to consider, f(1) = RMSE

% Run the model:
Yeet = model_evaluation(myfun,Xeet,rain,evap,flow); % The size of the output sample is (r*(M+1),n_out), where n_out is the number of outputs produced by the output function (here 5)

% Calculate the sensitivity indices:
xmin = [0 0 0 0 0.1]; % Lower bounds of the input factors
xmax = [400 2 1 0.1 1]; % Upper bounds of the input factors
Seet = EET_indices(r,xmin,xmax,Xeet,Yeet(:,iY),design_type); % Calculate the mean of the absolute value of the EEs

% Plot the sensitivity indices (Figure 4.a):
figure; boxplot1(Seet,x_labels,'Sensitivity index (EET)')

Now we repeat the estimation of the EET sensitivity indices using bootstrapping to derive confidence bounds around the sensitivity indices.

% Calculate the sensitivity indices using bootstrapping:
Nboot = 1000; % Number of bootstrap resamples
[Seet_m,~,~,~,~,Seet_lb,~,Seet_ub,~] = ...
    EET_indices(r,xmin,xmax,Xeet,Yeet(:,iY),design_type,Nboot); % Calculate the mean of the absolute value of the EEs (Seet_m is the bootstrap mean, Seet_lb the bootstrap lower bound and Seet_ub the bootstrap upper bound)

% Plot the sensitivity indices with confidence intervals (Figure 4.b):
figure; boxplot1(Seet_m,x_labels,'Sensitivity index (EET)',Seet_lb,Seet_ub)
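The bootstrap machinery is generic and worth making explicit; the sketch below shows how percentile confidence bounds are obtained for a simple index (illustrative code on synthetic values, not how EET_indices works internally):

% Minimal sketch of bootstrap percentile confidence bounds for a generic
% sensitivity estimator (here the mean of absolute EEs of one factor).
EEs = abs(randn(20,1)); % synthetic elementary effects (r = 20)
Nboot = 1000; alfa = 0.05;
S_boot = nan(Nboot,1);
for b = 1:Nboot
    idx_b = randi(numel(EEs),numel(EEs),1); % resample with replacement
    S_boot(b) = mean(EEs(idx_b));           % recompute the index
end
S_sorted = sort(S_boot);
S_lb = S_sorted(max(1,round(Nboot*alfa/2))); % lower percentile bound
S_ub = S_sorted(round(Nboot*(1-alfa/2)));    % upper percentile bound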

And finally, we repeat the calculations with a larger number of input-output samples.

% Extend the tailored input factors' sample to perform the EET:
r_ext = 100; % Extended number of EEs
[Xeet_ext,Xeet_new] = ...
    OAT_sampling_extend(Xeet,r_ext,DistrFun,DistrPar,design_type); % Xeet_ext is the extended sample of size (r_ext*(M+1),M) and Xeet_new is the new sample of size ((r_ext-r)*(M+1),M)

% Run the model:
Yeet_new = model_evaluation(myfun,Xeet_new,rain,evap,flow); % New output sample of size ((r_ext-r)*(M+1),5)
Yeet_ext = [Yeet; Yeet_new]; % Extended output sample of size (r_ext*(M+1),5)

% Assess the robustness of the results to the choice of sample size by using bootstrapping:
[Seet_ext_m,~,~,~,~,Seet_ext_lb,~,Seet_ext_ub] = ...
    EET_indices(r_ext,xmin,xmax,Xeet_ext,Yeet_ext(:,iY),design_type,Nboot); % Calculate the mean of the absolute value of the EEs

% Plot the sensitivity indices with confidence intervals (Figure 4.c):
figure; boxplot1(Seet_ext_m,x_labels,'Sensitivity index (EET)',Seet_ext_lb,Seet_ext_ub)

The results

Figure 4. EET sensitivity indices (mean of EEs) computed using: a) r=20 EEs, b) r=20 EEs and bootstrapping, and c) r=100 EEs and bootstrapping. In Figure 4.b-c the boxes represent the 95% bootstrap confidence intervals and the black lines indicate the bootstrap mean.

Interpretation of the results. By comparing Figure 4.a and Figure 4.b, which were both obtained using the same sample size (i.e. r=20), we see the added value of deriving confidence bounds for the sensitivity indices. While Figure 4.a indicates that alfa and Rf are highly influential and Sm, beta and Rs are little influential, Figure 4.b shows that no robust conclusion can actually be drawn from the analysis, because the confidence intervals of the input factors are overlapping. When we increase the sample size (Figure 4.c), we observe a reduction in the width of the confidence intervals, as expected, which means that the results become more robust to the choice of the sample size. We now see a clear separation between highly influential (i.e. alfa and Rf) and little influential (i.e. Sm, beta and Rs) input factors. We also note that in this specific example the ordering of the input factors does not change when increasing the sample size; however, in general this may not be the case.

Implications. Assessing the robustness of the results via bootstrapping, and visualising it through confidence intervals around the sensitivity indices, is essential to check the robustness of the GSA results. The advantage of this method is that it does not require additional model executions. Wide and overlapping confidence intervals mean that either the sample size should be increased, if computationally affordable (as done in the above example), or the GSA user should otherwise opt for a less computationally demanding GSA method. The order of magnitude of the computational complexity of GSA methods is provided, for instance, in [18].

Remark 3 – Method choice matters as it can result in different sensitivity estimates (so using multiple methods is advisable)
The problem. Sensitivity indices can be computed using different GSA methods relying on different rationales and assumptions. This section shows that different GSA methods can result in different sensitivity estimates. This remark is discussed in section 2.3 of [1].

Implementation details. We apply two GSA methods, namely Variance-Based Sensitivity Analysis (VBSA), described in [19], and a moment-independent method, called PAWN, described in [20,21]. VBSA relies on the variance decomposition proposed by [22] and consists of assessing the contributions to the variance of the model output from variations in the input factors. In this example, we use as sensitivity index the first-order (main effect) index, which measures the variance contribution from variations in an individual input factor alone (i.e. excluding interactions with other factors). Similar to the EET, VBSA requires a tailored sampling strategy and therefore we need to derive a tailored input-output sample.
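For reference, the first-order index of input factor x_i can be written in standard notation (a textbook summary, not a formula reproduced from [19]):

S_i = \frac{V_{x_i}\left[ E_{x_{\sim i}}(y \mid x_i) \right]}{V(y)}

where x_{\sim i} denotes all input factors except x_i, so that S_i is the fraction of the output variance V(y) explained by varying x_i alone.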
In contrast to the VBSA method, the PAWN method estimates the effect of the input factors on the entire output distribution, instead of on its variance only. The method compares the output unconditional CDF, which is obtained by varying all input factors simultaneously, to the conditional CDFs that are obtained by varying all input factors but one, which is restricted to a prescribed conditioning interval [21]. For each input factor, we use n (here ten) conditional CDFs and calculate the average Kolmogorov-Smirnov (KS) statistic [14,15] (i.e. the average maximum vertical distance) between the n conditional CDFs and the unconditional one, using the code available at https://www.safetoolbox.info/pawn-method/. In this example, the output scalar metric is the variance of the simulated flow.
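The conditioning idea can be sketched in a few lines (illustrative code with a synthetic sample, not the pawn_indices_givendata implementation):

% Minimal sketch of PAWN conditioning for one input factor: compare the
% unconditional output CDF with a conditional CDF obtained by keeping
% only the samples where the factor falls in one conditioning interval.
Xs = rand(3000,1); Ys = Xs.^2 + 0.1*randn(3000,1); % synthetic sample
edges = linspace(0,1,11);              % n = 10 conditioning intervals
in_k = Xs >= edges(3) & Xs < edges(4); % samples in the 3rd interval
yy = sort(Ys);                         % common evaluation points
Fu = arrayfun(@(v) mean(Ys <= v), yy);       % unconditional CDF
Fc = arrayfun(@(v) mean(Ys(in_k) <= v), yy); % conditional CDF
KS_k = max(abs(Fu - Fc)); % KS statistic for this conditioning interval
% PAWN summarises the KS statistics over the n intervals (e.g. their mean).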
The code

We perform Variance-Based Sensitivity Analysis (VBSA).

% Derive the tailored input factors' sample to perform VBSA:
N = 15000; % Base sample size. We note that the sample size is larger than the one used in the introductory code, because, in our example, VBSA requires a larger sample size to produce robust results.
Xvbsa = AAT_sampling(SampStrategy,M,DistrFun,DistrPar,2*N); % Xvbsa has size (2*N,M)
[XA,XB,XC] = vbsa_resampling(Xvbsa); % Prepare the input samples needed to compute the estimates of the VBSA sensitivity indices

% Define the output function:
myfun = 'hymod_MulOut'; % We use the same output function as in the introductory code
iY = 5; % Define the index of the output metric(s) to consider, f(5) = Variance of simulated flow

% Run the model (warning: these lines may take some time to run):
YA = model_evaluation(myfun,XA,rain,evap,flow); % size (N,n_out), where n_out is the number of outputs produced by the output function (here 5)
YB = model_evaluation(myfun,XB,rain,evap,flow); % size (N,n_out)
YC = model_evaluation(myfun,XC,rain,evap,flow); % size (N*M,n_out)

% Calculate the VBSA sensitivity indices using bootstrapping:
Nboot = 1000; % Number of bootstrap resamples
[Svbsa_M_m,~,~,~,Svbsa_M_lb,~,Svbsa_M_ub] = vbsa_indices(YA(:,iY),YB(:,iY),YC(:,iY),Nboot); % Calculate the main effects

% Plot the VBSA sensitivity indices with confidence intervals:
figure; boxplot1(Svbsa_M_m,x_labels,'Sensitivity index (VBSA)',Svbsa_M_lb,Svbsa_M_ub) % Figure 5.a

We perform PAWN.

% Calculate the sensitivity indices using the input/output sample from the introductory code and using bootstrapping (warning: the following line may take some time to run):
n = 10; % Number of conditioning intervals
Nboot = 1000; % Number of bootstrap resamples
[~,KS_mean] = pawn_indices_givendata(X,Y(:,iY),n,Nboot); % Calculate the PAWN sensitivity indices: KS_mean (size (Nboot,M)) is the mean KS statistic across the n conditioning intervals for the different bootstrap resamples

% Calculate statistics across bootstrap resamples (mean, lower and upper bounds of the sensitivity indices):
alfa = 0.05; % Significance level for the confidence intervals estimated by bootstrapping
Spawn_m = mean(KS_mean); % Mean PAWN sensitivity index across bootstrap resamples
KS_sort = sort(KS_mean);
Spawn_lb = KS_sort(max(1,round(Nboot*alfa/2)),:); % Lower bound
Spawn_ub = KS_sort(round(Nboot*(1-alfa/2)),:); % Upper bound

% Plot the PAWN sensitivity indices with confidence intervals:
figure; boxplot1(Spawn_m,x_labels,'Sensitivity index (PAWN)',Spawn_lb,Spawn_ub) % Figure 5.b

The results

Figure 5. Sensitivity indices calculated using a) the VBSA method (main effects) and b) the PAWN method (mean value of the KS statistic across the conditioning intervals). The boxes represent the 95% bootstrap confidence intervals and the black lines indicate the bootstrap mean.

From Figure 5.a, we see that Sm, beta and Rs have similarly low values of the variance-based sensitivity index, with largely overlapping confidence intervals, while alfa and Rf have a much higher sensitivity index. By using PAWN (Figure 5.b), we still infer that alfa and Rf are the two most influential input factors; however, Rs appears to be the third most influential input factor, with a distinctively higher sensitivity index than Sm and beta. The relative importance of Rs is thus judged quite differently depending on the GSA method used. Therefore, in the interpretation of the results we will focus on the parameter Rs as an example.

Interpretation of the results. Now we look for an explanation of the results obtained by examining the conditional and unconditional output distributions. The additional code used is reported below and Figure 6 provides detailed explanations.

% For each input factor, compute the unconditional outputs and CDF (respectively YU and Fu), the conditional outputs and CDFs for the n conditioning intervals (respectively YY and Fc), the centres of the n conditioning intervals (xc) and the KS statistics between conditional and unconditional CDFs (KS):
[KS,~,YF,Fu,Fc,YY,xc,~,~,YU] = pawn_ks_givendata(X,Y(:,iY),n);

% Plot the unconditional (Fu) and conditional (Fc) CDFs for parameter Rs (Figure 6.a):
i = 4; % Index of parameter Rs
figure; pawn_plot_cdf(YF,Fu,Fc(i,:),[],'Variance of flow')

% Compute the mean of the conditional outputs for Rs (this will allow us to interpret the results provided by the VBSA method):
mC = nan(n,1); % Mean of conditional outputs
for k=1:n % Loop over conditioning intervals
    mC(k) = mean(YY{i,k});
end
disp(var(mC)) % Calculate and display the variance of the mean of the conditional outputs (equal to 3)
disp(var(YU)) % Calculate and display the variance of the unconditional outputs (equal to 96)

% Plot the mean of the conditional outputs to interpret the VBSA results of Figure 5.a:
col = gray(n); % Colours for the different conditioning intervals (grey scale)
figure % This figure is shown in Figure 6.b
for k=1:n
    hold on; plot(xc{i}(k),mC(k),'^k','MarkerFaceColor',col(k,:)) % Plot means as grey triangles
end

% Plot the KS statistics between conditional and unconditional CDFs (KS) to interpret the PAWN results of Figure 5.b:
figure % This figure is shown in Figure 6.c
for k=1:n
    hold on; plot(xc{i}(k),KS(k,i),'ok','MarkerFaceColor',col(k,:)) % Plot KS statistics as grey circles
end

Figure 6. Distribution and statistics of conditional and unconditional outputs (variance of simulated flow) for parameter Rs. a) Cumulative Distribution Functions (CDFs) of the unconditional output (red dashed line) and of the conditional outputs obtained by fixing the value of Rs within one of the ten conditioning intervals (grey lines). b) Mean of the conditional outputs for the ten conditioning intervals (grey triangles). c) KS statistic between unconditional and conditional CDFs for the ten conditioning intervals (grey circles).

From Figure 6.a, we observe differences between the unconditional CDF and the conditional CDFs, in particular over the lower range of values of the output (i.e. variance of flow below 10). As a result, the KS statistics shown in Figure 6.c take high values, which explains the large value of the PAWN sensitivity index (Figure 5.b). On the contrary, from Figure 6.b, we observe little variation in the mean of the conditional outputs across the conditioning intervals (values are between 6.9 and 12). We further estimate that the variance of the mean of the conditional outputs across the conditioning intervals is equal to 3, which is very small compared to the variance of the unconditional output, which is equal to 96. This means that the contribution of Rs to the total output variance is very small. This explains the low value of the VBSA sensitivity index (Figure 5.a).

Implications. The choice of the GSA method is critical, as different methods focus on different aspects of the model input-output response and may therefore lead to different sensitivity estimates. In the example examined in this remark, the two methods used (VBSA and PAWN) analyse different aspects of the output distribution. This leads to slightly different conclusions, where the same parameter is considered uninfluential when only looking at the output variance (VBSA), and relatively influential when considering the entire output CDF (PAWN). It is also known that different GSA methods have differing abilities to capture interactions among input factors, as explained for instance in [2]. Interactions are further analysed in remark 4. Consequently, we advise that the choice of GSA method should depend on the objective of the analysis and, when possible, that the GSA user should apply different methods to the same case study for a more exhaustive analysis of the model sensitivity. The task is facilitated by the increasing availability of sensitivity approximation techniques that can be applied to the same generic input-output sample (e.g. [23]).

Remark 4 – GSA methods can measure direct and joint effects of input factors across their variability space
The problem. In complex dynamic models such as earth system models, input factors often have a high level of interaction, which means that the effect of a given input factor on the output(s) can be amplified or reduced depending on the values taken by the other input factors. Some GSA methods, such as the Variance-Based Sensitivity Analysis introduced in remark 3, can capture input factors' interactions. This remark is discussed in section 2.2 of [1].
Implementation details. We calculate three sensitivity indices using the VBSA method: (1) the main effect index (used in remark 3), which measures the direct effect of each input factor, (2) the total effect index, which measures the direct effect plus the interaction effects with all other input factors, and (3) the interaction effect index, which is the difference between the total and main effect indices (summarised below). We use the same output metric and input-output samples generated in remark 3.
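In the standard notation used for the first-order index S_i (see remark 3), the total and interaction effect indices can be summarised as (a sketch, not reproduced from [19]):

S_{Ti} = 1 - \frac{V_{x_{\sim i}}\left[ E_{x_i}(y \mid x_{\sim i}) \right]}{V(y)}, \qquad S_{Ii} = S_{Ti} - S_i

where x_{\sim i} denotes all input factors except x_i: S_{Ti} accounts for the direct effect of x_i plus all its interactions, and S_{Ii} isolates the interaction part.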
The code

We compute the main and total effect sensitivity indices using VBSA.

% We use the output samples (YA, YB and YC) derived in remark 3


% Calculate the VBSA sensitivity indices using bootstrapping:
Nboot = 1000; % Number of bootstrap resamples
[Svbsa_M_m,Svbsa_T_m,~,~,Svbsa_M_lb,Svbsa_T_lb,...
    Svbsa_M_ub,Svbsa_T_ub,Svbsa_M_all,Svbsa_T_all] = ...
    vbsa_indices(YA(:,iY),YB(:,iY),YC(:,iY),Nboot); % Calculate the main effects (Svbsa_M_m, Svbsa_M_lb, Svbsa_M_ub) and total effects (Svbsa_T_m, Svbsa_T_lb, Svbsa_T_ub). Svbsa_M_all and Svbsa_T_all are the main and total effects respectively for the different bootstrap resamples (matrices of size (Nboot,M))

% Calculate the interaction effects with bootstrapping as the difference between total and main effects across bootstrap resamples:
Svbsa_I_all = Svbsa_T_all - Svbsa_M_all; % Total interactions, matrix of size (Nboot,M)

% Calculate statistics of the interaction effects across bootstrap resamples (mean, lower and upper bounds of the sensitivity indices):
alfa = 0.05; % Significance level for the confidence intervals estimated by bootstrapping
Svbsa_I_m = mean(Svbsa_I_all); % Mean interactions across bootstrap resamples
Svbsa_I_sort = sort(Svbsa_I_all);
Svbsa_I_lb = Svbsa_I_sort(max(1,round(Nboot*alfa/2)),:); % Lower bound
Svbsa_I_ub = Svbsa_I_sort(round(Nboot*(1-alfa/2)),:); % Upper bound

% Plot the VBSA sensitivity indices with confidence intervals:
% Figure 7.a (main effects):
figure; boxplot1(Svbsa_M_m,x_labels,'Sensitivity index (VBSA)',Svbsa_M_lb,Svbsa_M_ub)
% Figure 7.b (total effects):
figure; boxplot1(Svbsa_T_m,x_labels,'Sensitivity index (VBSA)',Svbsa_T_lb,Svbsa_T_ub)
% Figure 7.c (interaction effects):
figure; boxplot1(Svbsa_I_m,x_labels,'Sensitivity index (VBSA)',Svbsa_I_lb,Svbsa_I_ub)

The results

Figure 7. Sensitivity indices calculated using the VBSA method: a) main effect indices, b) total effect indices and c) total interaction effect indices. The boxes represent the 95% bootstrap confidence intervals and the central black lines indicate the bootstrap mean.

We observe that the total effect sensitivity indices of parameters alfa and Rf (Figure 7.b) are significantly higher than their main effect sensitivity indices (Figure 7.a). This means that these two parameters have an effect on the output not only through their individual variations but also through interactions, as we can see from Figure 7.c, which shows the total interaction effect index. Instead, the other three parameters (Sm, beta and Rs) have low main, total and interaction effect indices, and therefore these three parameters have a small effect, both direct and through interactions.
Interpretation of the results. Now we look for an explanation of the results by examining the two-dimensional scatter plots that allow pairwise interactions (second-order effects) between input factors to be identified. The additional code used is reported below and Figure 8 provides detailed explanations.

% Rescale the output values for visualisation purposes (Box-Cox transformation):
lambda = 0.5; % Parameter of the Box-Cox transformation
YA_boxcox = (YA.^lambda-1)/lambda; % Box-Cox transformation of YA

% Plot two-dimensional scatter plots to visualise pairwise interactions:
scatter_plots_interaction(XA,YA_boxcox(:,iY),[],x_labels) % This figure is shown in Figure 8

Figure 8. Two-dimensional scatter plots. The colour indicates the value of the output (variance of flow). The plot (alfa vs Rf) shows that high values of the variance (red colour) are obtained only when both alfa and Rf have values close to the upper bound of their range of variability. This means that these two input factors are interacting. On the contrary, the other scatter plots do not show marked patterns, and therefore the second-order effects for the other pairs of input factors appear to be small.
Implications. It is important to use a GSA method that can detect parameter interactions, such as VBSA, if this information is relevant. In addition, as discussed in [24], scatter plots are easy-to-implement tools that can complement the results of a rigorous and quantitative GSA method and that provide insight into pairwise interactions between input factors. Nonetheless, complex interactions might not be visible from a scatter plot, which is why using VBSA, coupled with other methods, might be useful to highlight interactions between input factors.

Remark 5 – The definition of the space of variability of the input factors has potentially a great impact on GSA

The problem. Sampling the input factors' space requires the definition of the distribution and range of the input factors. This section shows how the choice of the range of the input factors can impact the sensitivity indices. This remark is discussed in section 2.4 of [1].

Implementation details. We use RSA as in remark 1, but here we adopt the variant of RSA with threshold, where the input samples are divided into two groups ('behavioural' and 'non-behavioural'), depending on whether their corresponding output falls below or above a prescribed threshold [12,13]. The output metric selected is the RMSE.
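The behavioural/non-behavioural split can be sketched as follows (illustrative code with a synthetic sample, not the RSA_indices_thres implementation):

% Minimal sketch of RSA with threshold: split the input samples into
% behavioural (output below threshold) and non-behavioural groups, then
% compare the CDFs of each input factor between the two groups.
Xs = rand(3000,2); Ys = sum(Xs,2) + 0.1*randn(3000,1); % synthetic sample
thres = 1.0;        % illustrative output threshold
idxb = Ys <= thres; % behavioural parameterisations
i = 1;              % input factor under analysis
xx = sort(Xs(:,i)); % common evaluation points
Fb  = arrayfun(@(v) mean(Xs(idxb,i)  <= v), xx); % behavioural CDF
Fnb = arrayfun(@(v) mean(Xs(~idxb,i) <= v), xx); % non-behavioural CDF
mvd = max(abs(Fb - Fnb)); % sensitivity index: maximum vertical distance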

The code

% Set the output threshold:
threshold = 2.5; % Threshold for the model output (here RMSE)
flag = 1; % Select the statistic to compute the sensitivity indices (here the maximum vertical distance between the CDFs)

iY = 1; % Define the index of the output metric(s) to consider from those estimated in remark 1, f(1) = RMSE

% Perform RSA (find behavioural parameterisations):
[~,idxb] = RSA_indices_thres(X,Y(:,iY),threshold,flag);

% Calculate the sensitivity indices using bootstrapping:
Nboot = 1000; % Number of bootstrap resamples
[Srsa_m,~,Srsa_lb,Srsa_ub] = RSA_indices_thres(X,Y(:,iY),threshold,flag,Nboot); % Mean and lower and upper bounds of the sensitivity indices (i.e. maximum vertical distance)

% Plot the sensitivity indices with confidence intervals with the input factors' default ranges:
figure; boxplot1(Srsa_m,x_labels,'Sensitivity index (RSA)',Srsa_lb,Srsa_ub) % This figure is shown in Figure 9.a

Now we repeat the estimation of the RSA sensitivity indices using modified input factors' ranges.

% Define new input factors' ranges:
DistrPar_2 = {[0 800]; [0 4]; [0 0.8]; [0 0.4]; [0.4 1]};

% Sample the input factors' space:
X_2 = AAT_sampling(SampStrategy,M,DistrFun,DistrPar_2,N);

% Run the model:
Y_2 = model_evaluation(myfun,X_2,rain,evap,flow);

% Perform RSA (find behavioural parameterisations):
[~,idxb_2] = RSA_indices_thres(X_2,Y_2(:,iY),threshold,flag);

% Calculate the sensitivity indices using bootstrapping:
[Srsa_2_m,~,Srsa_2_lb,Srsa_2_ub] = RSA_indices_thres(X_2,Y_2(:,iY),threshold,flag,Nboot); % Mean and lower and upper bounds of the sensitivity indices (i.e. maximum vertical distance)

% Plot the sensitivity indices with confidence intervals with the input factors' modified ranges:
figure; boxplot1(Srsa_2_m,x_labels,'Sensitivity index (RSA)',Srsa_2_lb,Srsa_2_ub) % Figure 9.b

The results

Figure 9. Sensitivity indices calculated using a) the default ranges of the input factors taken from the literature and b) the modified ranges. The boxes represent the 95% bootstrap confidence intervals and the central black lines indicate the bootstrap mean.

The results reported in Figure 9 show that changing the ranges of the input factors can change the sensitivity indices, as expected. Specifically, decreasing the range of Rf decreases its sensitivity index considerably, decreasing the range of alfa slightly decreases its sensitivity index, and considerably increasing the range of Sm only slightly decreases its sensitivity index.

Interpretation of the results. Now we look for an explanation of the results by examining the ranges of the behavioural parameterisations using parallel coordinate plots. The additional code used is reported below and Figure 10 provides detailed explanations.

% Parallel coordinate plot with default ranges:
figure; parcoor(X,x_labels,[],idxb) % This figure is shown in Figure 10.a

% Parallel coordinate plot with modified ranges:
figure; parcoor(X_2,x_labels,[],idxb_2) % This figure is shown in Figure 10.b

Figure 10. Parallel coordinate plots representing the behavioural parameterisations (black lines) and the non-behavioural ones (grey lines) for the analysis with a) default input factors' ranges and b) modified ranges. In the case with modified ranges, we decreased the range of Rf. We see that the lower part of the default range of Rf provides non-behavioural parameterisations only (Figure 10.a), while both behavioural and non-behavioural parameterisations can be found throughout the reduced range of Rf (Figure 10.b). This explains the higher sensitivity index of Rf with the default ranges (Figure 9.a) compared to the modified ranges (Figure 9.b). Instead, decreasing the upper bound of the range of alfa has the effect of decreasing the sensitivity index of alfa (as shown in Figure 9). In fact, alfa has most of its behavioural samples in its lower range, and therefore the difference between the behavioural and non-behavioural CDFs is reduced (as shown in Figure 10).

Implications. Changing the range of variability of an input factor can greatly change its sensitivity index. Therefore, careful consideration should be given to choosing a range wide enough to include all feasible (physical) values according to expert knowledge or literature values, but not so wide that it includes infeasible values. In this remark we varied the ranges of the input factors only (we used a uniform distribution). The effect of different distributions on the input factors' sensitivity estimates could also be tested, especially if a priori information is available on their distributions. Another implication is that sensitivity analysis and performance analysis go hand in hand, as discussed in greater detail in the next remark.

Remark 6 – It may be necessary to remove poorly performing simulations to avoid that they dominate the estimation of sensitivity indices
The problem. Adopting input factor ranges that are too wide might result in the inclusion of poorly performing input factor values, which in turn might affect the sensitivity analysis results, as shown in remark 5. In this section, we analyse the effect of filtering out poorly performing input factor sets based on their corresponding output values. We show the importance of running sensitivity analysis with and without the inclusion of poorly performing (sometimes called non-behavioural) simulations, to assess their impact on the results. This remark forms part of the discussion in section 2.4 of [1].

30
31 Implementation details. We use RSA as in remark 1, i.e. with the implementation of RSA based on
32 grouping. The output values are again divided into ten groups and the non-behavioural simulations
33 are set to be those related to the highest (or lowest) performance metric (here set to be those with

19
1 RMSE > 3.8, i.e. the group with the least performing simulations according to the performance metric
2 chosen).
The code

We run RSA based on grouping as in remark 1, but here the confidence intervals of the sensitivity indices are also estimated with bootstrapping.

flag = 2; % We select the maximum vertical distance or Kolmogorov-Smirnov statistic between CDFs, as in remark 1

iY = 1; % Define the index of the output metric(s) to consider, f(1) = RMSE
ngroup = 10; % Number of groups into which the output values are divided

% Calculate the respective groups of the samples (idx) and the range of Y in each group (Yk):
[~,idx,Yk] = RSA_indices_groups(X,Y(:,iY),ngroup,flag);

% Calculate the sensitivity indices using bootstrapping:
Nboot = 1000; % Number of bootstrap resamples
[Srsa_m,~,~,~,Srsa_lb,Srsa_ub] = RSA_indices_groups(X,Y(:,iY),ngroup,flag,Nboot); % Calculate the mean and confidence intervals of the sensitivity indices (here the maximum vertical distance). The output metric considered is RMSE.

% Plot the sensitivity indices with confidence intervals:
figure; boxplot1(Srsa_m,x_labels,'Sensitivity index (mvd)',Srsa_lb,Srsa_ub) % This is shown in Figure 11.a

Now we remove the poorly performing simulations and rerun the analysis. First, we decide the value of the performance metric above (or below) which simulations are considered poorly performing.

% Identify the poorly performing simulations to be removed:
Y_pp = Yk(10,iY); % Define the value above which outputs are considered poorly performing (here the limit of the output group corresponding to RMSE = 3.8)
idx_pp = find(Yk(:,iY) >= Y_pp & Yk(:,iY) ~= Yk(end,iY)); % Index of the group with poorly performing output
Y_wp = Y(idx(:,iY) ~= idx_pp,iY); % Subsample of the output where poorly performing values have been excluded
X_wp = X(idx(:,iY) ~= idx_pp,:); % Subsample of the input factors where poorly performing values have been excluded

ngroup = 10; % Number of groups into which the output values are divided

% Calculate the respective groups of the samples (idx_wp) and the range of Y in each group (Yk_wp):
[~,idx_wp,Yk_wp] = RSA_indices_groups(X_wp,Y_wp,ngroup,flag);

% Calculate the sensitivity indices using bootstrapping:
[Srsa_wp_m,~,~,~,Srsa_wp_lb,Srsa_wp_ub] = RSA_indices_groups(X_wp,Y_wp,ngroup,flag,Nboot); % Calculate the mean and confidence intervals of the sensitivity indices (here the maximum vertical distance)

% Plot the sensitivity indices with confidence intervals calculated without the poorly performing simulations:
figure; boxplot1(Srsa_wp_m,x_labels,'Sensitivity index (RSA)',Srsa_wp_lb,Srsa_wp_ub) % This is shown in Figure 11.b

The results

Figure 11. Sensitivity indices calculated a) with all the simulations and b) with the poorly performing simulations removed. The boxes represent the 95% bootstrap confidence intervals and the central black lines indicate the bootstrap mean.

The results reported in Figure 11 show that removing the poorly performing simulations can change the sensitivity indices, as expected. In this specific case, it leads to a decrease in the sensitivity index of alfa.

Interpretation of the results. Now we look for an explanation of the results obtained by plotting the input factors' CDFs. The additional code used is reported below and Figure 12 provides detailed explanations.

% Plot the input factors' CDFs with all simulations:
figure; RSA_plot_groups(X,idx,Yk,[],x_labels,'RMSE'); % Figure 12.a

% Plot the input factors' CDFs with the poorly performing simulations removed:
figure; RSA_plot_groups(X_wp,idx_wp,Yk_wp,[],x_labels,'RMSE'); % Figure 12.b

Figure 12. Cumulative Distribution Functions (CDFs) of the input factors: a) when using all the simulations, b) when removing the poorly performing simulations. The sensitivity index for each input in Figure 11 was derived as the maximum vertical distance (mvd) between the above CDFs (as explained in Figure 3). We can see that the poorly performing simulations in Figure 12.a (i.e. with RMSE > 3.8, red CDF) have been removed in Figure 12.b. This has the effect of decreasing the mvd for alfa, as it eliminates the red CDF of Figure 12.a, which stands out from the other CDFs for this input factor. In fact, for alfa, the red CDF of Figure 12.a shows that most of the worst-performing samples of alfa are in its upper range. Instead, for example for Rf, the removal of the poorly performing simulations (red CDF in Figure 12.a) does not affect the mvd.

Implications. Including poorly performing simulations can impact the results of the sensitivity analysis and might change how influential the input factors actually appear to be. Therefore, an analysis of the subranges of values of the input factors which give poorly performing simulations should be carried out. The modeller should explicitly consider whether these subranges should be included in the sensitivity analysis or not.
Remark 7 – Influential and non-influential input factors can be identified using a 'dummy' input factor
The problem. GSA is commonly applied to identify the non-influential input factors that can be set to any value within their space of variability with negligible effect on the output [2]. To set a threshold value on the sensitivity indices to separate influential and non-influential input factors (i.e. a screening threshold), [25] proposed to artificially introduce an additional 'dummy' input factor. Such a 'dummy' input factor is defined to have no impact on the output because it does not appear in the model equations. In this remark, we demonstrate the use of the 'dummy' input factor to screen influential and non-influential input factors. This remark forms part of the discussion in section 2.5 of [1].

The implementation. We use the PAWN method described in remark 3, because this method can be used for screening purposes [20]. As in remark 3, we apply PAWN to a generic input/output sample using the code available at https://www.safetoolbox.info/pawn-method/. This code also implements the computation of the PAWN sensitivity index for the dummy input factor. We adopt as sensitivity index the maximum value of the KS statistic across the conditional CDFs. This sensitivity index is an appropriate metric to identify non-influential input factors (and therefore for screening purposes) as it allows one to detect whether an input factor has an effect on the output for at least one conditioning interval. The sensitivity index of the dummy input factor estimates the error in approximating the 'true' CDFs using the current input/output sample. The output metric selected to perform PAWN is the bias.
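The rationale of the dummy factor can be sketched as follows (illustrative code with a synthetic sample; in the workflow below this computation is performed by pawn_indices_givendata):

% Minimal sketch of the 'dummy' input factor: a factor that does not enter
% the model has no true effect, so its estimated KS statistic measures the
% approximation error of the sensitivity indices for the given sample.
Xs = rand(3000,1); Ys = Xs.^2 + 0.1*randn(3000,1); % synthetic sample
x_dummy = rand(3000,1); % dummy factor: never used by the model
edges = linspace(0,1,11); KS_dummy = nan(10,1);
yy = sort(Ys); Fu = arrayfun(@(v) mean(Ys <= v), yy); % unconditional CDF
for k = 1:10
    in_k = x_dummy >= edges(k) & x_dummy < edges(k+1);
    Fc = arrayfun(@(v) mean(Ys(in_k) <= v), yy); % conditional CDF
    KS_dummy(k) = max(abs(Fu - Fc));
end
disp(max(KS_dummy)) % estimate of the screening threshold: factors whose
                    % index is comparable to this value may be non-influential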

The code

We perform PAWN using a subsample of the input/output sample derived in the introductory code, to determine whether we could use a smaller sample to screen the input factors.

x_labels = {'Sm','beta','alfa','Rs','Rf','dummy'}; % Names of the input factors (we add an artificial 'dummy' input factor)

iY = 2; % Define the index of the output metric(s) to consider from those estimated in remark 1, f(2) = BIAS

% Calculate the sensitivity indices using bootstrapping for a subsample of the input/output sample derived in the introductory code (warning: it may take some time to run these lines):
Nsub = 1000; % Size of the subsample
X_sub = X(1:Nsub,:); % Subsample of the input factors
Y_sub = Y(1:Nsub,:); % Subsample of the output
n = 10; % Number of equally spaced conditioning intervals
Nboot = 1000; % Number of bootstrap resamples
[~,~,KS_max_sub,KS_dummy_sub] = pawn_indices_givendata(X_sub,Y_sub(:,iY),n,Nboot);
% Calculate the PAWN sensitivity indices:
% - KS_max_sub (size (Nboot,M)) is the maximum KS statistic across the n conditioning intervals for the Nboot bootstrap resamples and the M model parameters analysed (note that we use a different index compared to remark 3, where we used the mean KS across the conditioning intervals, because the objective here is to screen non-influential input factors);
% - KS_dummy_sub (size (Nboot,1)) is the KS statistic for the dummy parameter.

% Calculate statistics across bootstrap resamples (mean, lower and upper bounds of the sensitivity indices):
alfa = 0.05; % Significance level for the confidence intervals estimated by bootstrapping
Spawn_sub_m = mean([KS_max_sub,KS_dummy_sub]); % Mean PAWN sensitivity index across bootstrap resamples
KS_sort = sort([KS_max_sub,KS_dummy_sub]);
Spawn_sub_lb = KS_sort(max(1,round(Nboot*alfa/2)),:); % Lower bound
Spawn_sub_ub = KS_sort(round(Nboot*(1-alfa/2)),:); % Upper bound

% Plot the sensitivity indices with confidence intervals (Figure 13.a):
figure
boxplot1(Spawn_sub_m,x_labels,'Sensitivity index (PAWN)',Spawn_sub_lb,Spawn_sub_ub)

Now we repeat the calculation of the PAWN sensitivity indices using the entire input/output sample derived in the introductory code.

% Calculate the sensitivity indices (warning: it may take some time to run the following line):
[~,~,KS_max,KS_dummy] = pawn_indices_givendata(X,Y(:,iY),n,Nboot);

% Calculate statistics across bootstrap resamples (mean, lower and upper bounds of the sensitivity indices):
Spawn_m = mean([KS_max,KS_dummy]); % Mean PAWN sensitivity index across bootstrap resamples
KS_sort = sort([KS_max,KS_dummy]);
Spawn_lb = KS_sort(max(1,round(Nboot*alfa/2)),:); % Lower bound
Spawn_ub = KS_sort(round(Nboot*(1-alfa/2)),:); % Upper bound

% Plot the sensitivity indices with confidence intervals (Figure 13.b):
figure
boxplot1(Spawn_m,x_labels,'Sensitivity index (PAWN)',Spawn_lb,Spawn_ub)

The results

Figure 13. PAWN sensitivity indices for the five model input factors and the additional 'dummy' input factor, calculated using an input/output sample a) of size 1000 and b) of size 3000. The boxes represent the 95% bootstrap confidence intervals and the central black lines indicate the bootstrap mean.

Interpretation of the results. From Figure 13.a, we observe that the confidence intervals of the sensitivity indices of three model input factors (alfa, Rs and Rf) overlap with the confidence intervals of the dummy input factor. On the other hand, the sensitivity indices of Sm and beta are significantly higher than the sensitivity index of the dummy input factor. This means that, when using a sample of size 1000, we can conclude that Sm and beta are influential parameters, while alfa, Rs and Rf appear to be non-influential, because their sensitivity indices are of the same order of magnitude as the approximation error of the CDFs. However, we observe that the confidence intervals on the sensitivity indices are rather wide, including for the low-sensitivity input factors. When using a larger sample size (Figure 13.b), we observe a significant reduction in the width of the confidence intervals, which results in a clear separation between Rs and the dummy input factor. This means that we can robustly infer that Rs is an influential input factor using a sample of size 3000 (Figure 13.b), while the sample of size 1000 was too small to robustly identify the non-influential input factors (Figure 13.a).

Implications. The method discussed in this remark identifies the influential input factors as those factors that have a sensitivity index (and corresponding confidence intervals) higher than, and not overlapping with, the sensitivity index of the dummy input factor. Input factors that have a sensitivity index lower than (or overlapping with) that of the dummy input factor can be considered non-influential, because their sensitivity index has the same magnitude as the approximation error of the sensitivity indices. However, the choice of the sample size is critical to obtain robust screening results (see further discussion on the choice of the sample size in remark 2). Specifically, it can be expected that the number of non-influential input factors identified using the 'dummy' approach will decrease with increasing sample size, as the approximation error of the sensitivity indices reduces.

Author contributions

V.N., F.S., F.P. and T.W. designed the study; V.N. and F.S. contributed in equal measure to the preparation of the workflows and to the writing of the manuscript, with contributions from F.P. and T.W.; all authors have approved the final version of the manuscript.

Acknowledgements

This work was partially supported by the Natural Environment Research Council through the Consortium on Risk in the Environment: Diagnostics, Integration, Benchmarking, Learning and Elicitation (CREDIBLE) project (grant number NE/J017450/1). V.N. is supported by the Natural Environment Research Council through a Knowledge Exchange Fellowship (grant number NE/R003734/1), T.W. is also supported by a Royal Society Wolfson Research Merit Award, and F.P. is further supported by the Engineering and Physical Sciences Research Council through an Early Career "Living with Environmental Uncertainty" Fellowship (grant number EP/R007330/1).

References

[1] T. Wagener, F. Pianosi, What has Global Sensitivity Analysis ever done for us? A systematic review to support scientific advancement and to inform policy-making in earth system modelling, Earth-Science Rev., in review.

[2] A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, S. Tarantola, Global Sensitivity Analysis, The Primer, John Wiley & Sons, Ltd, Chichester, UK, 2008.

[3] Reality check on reproducibility, Nature 533 (2016) 437. doi:10.1038/533437a.

[4] J. Berg, Progress on reproducibility, Science 359 (2018) 9. doi:10.1126/science.aar8654.

[5] S. Ceola, B. Arheimer, E. Baratti, G. Blöschl, R. Capell, A. Castellarin, J. Freer, D. Han, M. Hrachowitz, Y. Hundecha, C. Hutton, G. Lindström, A. Montanari, R. Nijzink, J. Parajka, E. Toth, A. Viglione, T. Wagener, Virtual laboratories: New opportunities for collaborative water science, Hydrol. Earth Syst. Sci. 19 (2015) 2101–2117. doi:10.5194/hess-19-2101-2015.

[6] C. Hutton, T. Wagener, J. Freer, D. Han, C. Duffy, B. Arheimer, Most computational hydrology is not reproducible, so is it really science?, Water Resour. Res. 52 (2016) 7548–7555. doi:10.1002/2016WR019285.

[7] R.D. Peng, Reproducible research in computational science, Science 334 (2011) 1226–1227. doi:10.1126/science.1213847.

[8] F. Pianosi, F. Sarrazin, T. Wagener, A Matlab toolbox for Global Sensitivity Analysis, Environ. Model. Softw. 70 (2015) 80–85. doi:10.1016/j.envsoft.2015.04.009.

[9] D. Boyle, Multicriteria calibration of hydrological models, PhD thesis, University of Arizona, Tucson, 2001.

[10] T. Wagener, D.P. Boyle, M.J. Lees, H.S. Wheater, H.V. Gupta, S. Sorooshian, A framework for development and application of hydrological models, Hydrol. Earth Syst. Sci. 5 (2001) 13–26. doi:10.5194/hess-5-13-2001.

[11] S. Sorooshian, V.K. Gupta, J.L. Fulton, Evaluation of maximum likelihood parameter estimation techniques for conceptual rainfall-runoff models: Influence of calibration data variability and length on model credibility, Water Resour. Res. 19 (1983) 251–259. doi:10.1029/WR019i001p00251.

[12] P.C. Young, R.C. Spear, G.M. Hornberger, Modelling badly defined systems: some further thoughts, in: Proc. SIMSIG Simul. Conf., Australian National University, Canberra, 1978, pp. 24–32.

[13] R.C. Spear, G.M. Hornberger, Eutrophication in Peel Inlet – II. Identification of critical uncertainties via generalized sensitivity analysis, Water Res. 14 (1980) 43–49. doi:10.1016/0043-1354(80)90040-8.

[14] A. Kolmogorov, Sulla determinazione empirica di una legge di distribuzione, G. dell'Istituto Ital. degli Attuari 4 (1933) 83–91.

[15] N. Smirnov, On the estimation of the discrepancy between empirical curves of distribution for two independent samples, Bull. Mathématique l'Université Moscou 2 (1939) 3–14.

[16] F. Sarrazin, F. Pianosi, T. Wagener, Global Sensitivity Analysis of environmental models: Convergence and validation, Environ. Model. Softw. 79 (2016) 135–152. doi:10.1016/j.envsoft.2016.02.005.

[17] M.D. Morris, Factorial sampling plans for preliminary computational experiments, Technometrics 33 (1991) 161–174. doi:10.2307/1269043.

[18] F. Pianosi, K. Beven, J. Freer, J.W. Hall, J. Rougier, D.B. Stephenson, T. Wagener, Sensitivity analysis of environmental models: A systematic review with practical workflow, Environ. Model. Softw. 79 (2016) 214–232. doi:10.1016/j.envsoft.2016.02.008.

[19] A. Saltelli, Making best use of model evaluations to compute sensitivity indices, Comput. Phys. Commun. 145 (2002) 280–297. doi:10.1016/S0010-4655(02)00280-1.

[20] F. Pianosi, T. Wagener, A simple and efficient method for global sensitivity analysis based on cumulative distribution functions, Environ. Model. Softw. 67 (2015) 1–11. doi:10.1016/j.envsoft.2015.01.004.

[21] F. Pianosi, T. Wagener, Distribution-based sensitivity analysis from a generic input-output sample, Environ. Model. Softw. 108 (2018) 197–207. doi:10.1016/j.envsoft.2018.07.019.

[22] I.M. Sobol', Sensitivity estimates for nonlinear mathematical models, Mat. Model. 2 (1990) 112–118 (in Russian); English translation in Math. Model. Comput. Exp. 1 (1993) 407–414.

[23] E. Borgonovo, X. Lu, E. Plischke, O. Rakovec, M.C. Hill, Making the most out of a hydrological model data set: Sensitivity analyses to open the model black-box, Water Resour. Res. 53 (2017) 7933–7950. doi:10.1002/2017WR020767.

[24] F. Sarrazin, F. Pianosi, T. Wagener, An introduction to the SAFE Matlab Toolbox with practical examples and guidelines, in: G. Petropoulos, P. Srivastava (Eds.), Sensitivity Analysis in Earth Observation Modelling, Elsevier Inc., 2017, pp. 363–378. doi:10.1016/B978-0-12-803011-0.00018-5.

[25] F. Khorashadi Zadeh, J. Nossent, F. Sarrazin, F. Pianosi, A. van Griensven, T. Wagener, W. Bauwens, Comparison of variance-based and moment-independent global sensitivity analysis approaches by application to the SWAT model, Environ. Model. Softw. 91 (2017) 210–222. doi:10.1016/j.envsoft.2017.02.001.

31

32

27

You might also like