Lecture 26
November 17, 2023
Many parts of a systems-analysis workflow involve potentially large numbers of modeling assumptions, or factors.
Additional factors increase computational expense and analytic complexity.
Source: Saltelli et al (2019)
Sensitivity analysis is…
the study of how uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input
— Saltelli et al (2004), Sensitivity Analysis in Practice
Which factors have the greatest impact on output variability?
Which factors have negligible impact and can be fixed in subsequent analyses?
Which values of factors lead to model outputs in a certain output range?
Source: Reed et al (2022)
We’ve seen one example of a quantified sensitivity before: the shadow price of an LP constraint.
The shadow price expresses the objective’s sensitivity to a unit relaxation of the constraint.
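For example, a minimal JuMP sketch of querying a shadow price (the model and numbers here are illustrative, not from the lecture):

```julia
using JuMP, HiGHS

m = Model(HiGHS.Optimizer)
@variable(m, x >= 0)
@variable(m, y >= 0)
@constraint(m, capacity, x + 2y <= 10)  # illustrative resource constraint
@objective(m, Max, 3x + 2y)
optimize!(m)

# change in the optimal objective per unit relaxation of `capacity`
shadow_price(capacity)
```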
Sorting by Shadow Price ⇄ Factor Prioritization
Assumption: Factors act independently and linearly (no interactions).
Benefits: Easy to implement and interpret.
Limits: Ignores potential interactions.
A number of different sampling strategies exist: full factorial designs, Latin hypercube sampling, and more (see the sketch after this list).
Benefits: Can capture interactions between factors.
Challenges: Can be computationally expensive and does not reveal where key sensitivities occur.
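For instance, a minimal sketch of Latin hypercube sampling with QuasiMonteCarlo.jl (the bounds are illustrative):

```julia
using QuasiMonteCarlo

lb = [2.0, 0.3]   # illustrative lower bounds for two factors
ub = [3.0, 0.5]   # illustrative upper bounds
n = 1_000         # number of samples

# returns a 2 × n matrix; each column is one sampled factor combination
X = QuasiMonteCarlo.sample(n, lb, ub, LatinHypercubeSample())
```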
Local sensitivities: Pointwise perturbations from some baseline point (a finite-difference sketch follows this list).
Challenge: Which baseline point to use?
Global sensitivities: Sample throughout the factor space.
Challenge: How to measure global sensitivity to a particular output?
Advantage: Can estimate interactions between parameters.
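A minimal sketch of the local approach, using central finite differences around an assumed baseline `x0` (the test function is illustrative):

```julia
# local sensitivities ∂f/∂xᵢ at a baseline x0 via central differences
function local_sensitivity(f, x0; h=1e-6)
    S = zeros(length(x0))
    for i in eachindex(x0)
        xp = copy(x0); xp[i] += h
        xm = copy(x0); xm[i] -= h
        S[i] = (f(xp) - f(xm)) / (2h)
    end
    return S
end

local_sensitivity(x -> x[1]^2 + 3x[1] * x[2], [1.0, 2.0])  # ≈ [8.0, 3.0]
```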
There are a number of approaches. Some examples:
For a fixed release strategy, look at how different parameters influence reliability.
Take \(a_t = 0.03\) for every \(t\), and vary the following parameters within these ranges (a sketch of the resulting reliability function follows the table):
| Parameter | Range |
|---|---|
| \(q\) | \((2, 3)\) |
| \(b\) | \((0.3, 0.5)\) |
| \(y_{\text{mean}}\) | \((\log(0.01), \log(0.07))\) |
| \(y_{\text{std}}\) | \((0.01, 0.25)\) |
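A sketch of the reliability function over these parameters, assuming the shallow-lake dynamics \(X_{t+1} = X_t + a_t + y_t + \frac{X_t^q}{1 + X_t^q} - b X_t\) with lognormal inflows; the critical threshold, horizon, initial state, and sample count here are illustrative assumptions:

```julia
using Distributions

# fraction of simulated years the lake stays below an assumed critical
# phosphorus threshold, under the fixed release a_t = 0.03
function reliability(q, b, ymean, ystd; a=0.03, T=100, nsamp=1_000, Xcrit=1.5)
    below = 0
    for _ in 1:nsamp
        X = 0.0                               # assumed initial P level
        for t in 1:T
            y = rand(LogNormal(ymean, ystd))  # stochastic inflow
            X = X + a + y + X^q / (1 + X^q) - b * X
            below += (X <= Xcrit)
        end
    end
    return below / (T * nsamp)
end

reliability(2.5, 0.4, log(0.03), 0.1)
```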
The Method of Morris is an elementary effects method.
This is a global, one-at-a-time method which averages the effects of perturbations taken at \(r\) different sample points \(\bar{x}^j\):
\[S_i = \frac{1}{r} \sum_{j=1}^r \frac{f(\bar{x}^j_1, \ldots, \bar{x}^j_i + \Delta_i, \ldots, \bar{x}^j_n) - f(\bar{x}^j_1, \ldots, \bar{x}^j_i, \ldots, \bar{x}^j_n)}{\Delta_i}\]
where \(\Delta_i\) is the step size.
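A minimal hand-rolled sketch of this estimator, assuming independent uniform base points (a full Morris design uses structured trajectories; packages like GlobalSensitivity.jl implement those):

```julia
# average the elementary effect of each factor over r random base points;
# Δ is a fixed fraction of each factor's range
function elementary_effects(f, lb, ub, r; Δfrac=0.1)
    n = length(lb)
    Δ = Δfrac .* (ub .- lb)
    S = zeros(n)
    for _ in 1:r
        x = lb .+ rand(n) .* (ub .- lb .- Δ)  # keep x .+ Δ inside the bounds
        fx = f(x)
        for i in 1:n
            xp = copy(x)
            xp[i] += Δ[i]
            S[i] += (f(xp) - fx) / Δ[i]
        end
    end
    return S ./ r
end
```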
The Sobol method is a variance decomposition method: it decomposes the variance of the output into contributions from individual parameters and from interactions between parameters.
\[S_i^1 = \frac{Var_{x_i}\left[E_{x_{\sim i}}(y \mid x_i)\right]}{Var(y)}\]
\[S_{i,j}^2 = \frac{Var_{x_{i,j}}\left[E_{x_{\sim i,j}}(y \mid x_i, x_j)\right]}{Var(y)} - S_i^1 - S_j^1\]
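A sketch of estimating these indices with GlobalSensitivity.jl, using scrambled Sobol design matrices from QuasiMonteCarlo.jl (the model function and bounds are illustrative):

```julia
using GlobalSensitivity, QuasiMonteCarlo

f(x) = x[1]^2 + 3x[1] * x[2]      # illustrative model
lb, ub = [0.0, 0.0], [1.0, 1.0]

# two independent design matrices for the Sobol estimator
sampler = SobolSample(R = OwenScramble(base = 2, pad = 32))
A, B = QuasiMonteCarlo.generate_design_matrices(1_000, lb, ub, sampler, 2)

res = gsa(f, Sobol(order = [0, 1, 2]), A, B)
res.S1, res.S2   # first- and second-order indices
```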
Model for CO2 Emissions
CO2 Emissions Sensitivities
Source: Srikrishnan et al (2022)
Monday: Multiple Objectives and Tradeoffs
Today: Lab 4 Due