ME 597UQ Uncertainty Quantification
Abstract
The goal of this course is to introduce the fundamentals of uncertainty quantification to advanced undergraduate and graduate engineering and science students with research interests in predictive modeling. Upon completion of this course, you should be able to:
- Represent mathematically the uncertainty in the parameters of physical models.
- Propagate parametric uncertainty through physical models to quantify the induced uncertainty on quantities of interest.
- Calibrate the uncertain parameters of physical models using experimental data.
- Combine multiple sources of information to enhance the predictive capabilities of models.
- Pose and solve design optimization problems under uncertainty involving expensive computer simulations.
Topics to be covered
- Probability Theory
  - Random variables, expectations, conditional probabilities.
  - Common probability distributions/densities and their properties. Gaussian distribution. Multivariate Gaussian distribution.
  - Random number generation. Rejection sampling. Inverse transform sampling.
  - Bayes' rule, parameter estimation, model selection.
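Inverse transform sampling from the list above can be sketched in a few lines: if U is uniform on (0, 1) and F is a target CDF, then F⁻¹(U) is distributed according to F. The exponential distribution and the rate value below are assumptions chosen purely for illustration, since its inverse CDF has a closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_exponential(rate, size, rng):
    """Inverse transform sampling for the exponential distribution."""
    u = rng.uniform(size=size)       # uniform samples on (0, 1)
    return -np.log(1.0 - u) / rate   # map through the inverse CDF F^{-1}(u)

samples = sample_exponential(rate=2.0, size=100_000, rng=rng)
# The sample mean should be close to the true mean 1/rate = 0.5.
```

The same recipe works for any distribution whose inverse CDF is available; when it is not, rejection sampling is the usual fallback.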
- Representation of Uncertainties
  - Generic principles. Invariant probabilities. Maximum entropy principle.
  - Dimension reduction. Principal component analysis. Non-linear dimensionality reduction methods.
  - Functional uncertainty. Random fields. Gaussian processes. Covariance functions. Karhunen-Loève expansion of random fields.
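As an illustration of the Karhunen-Loève ideas above, here is a minimal sketch that synthesizes sample paths of a zero-mean Gaussian random field from the leading eigenpairs of a discretized covariance matrix. The squared-exponential kernel, the length scale, and the truncation level are assumptions of this example, not prescriptions from the course.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)           # discretization of the domain

# Squared-exponential covariance matrix (hypothetical kernel choice).
ell = 0.2
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)

# Truncated Karhunen-Loève expansion: keep the leading eigenpairs of K.
eigvals, eigvecs = np.linalg.eigh(K)
idx = np.argsort(eigvals)[::-1][:10]    # 10 largest modes
lam, phi = eigvals[idx], eigvecs[:, idx]

xi = rng.standard_normal(10)            # independent N(0, 1) coefficients
f = phi @ (np.sqrt(lam) * xi)           # one random-field sample path
```

Because the eigenvalues decay quickly for smooth kernels, a handful of modes already captures most of the field's variance, which is what makes the expansion useful for dimension reduction.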
- Uncertainty Propagation
  - Sampling methods: Monte Carlo, quasi-random sequences, Latin hypercube designs, multi-level Monte Carlo. Sobol sensitivity indices.
  - Intrusive techniques: generalized polynomial chaos.
  - Classic collocation methods: generalized polynomial chaos, sparse grid collocation.
  - Bayesian collocation methods: Bayesian linear regression, Gaussian process regression.
  - Selecting optimal simulations for efficiently propagating uncertainties when simulations are expensive.
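The simplest of the propagation methods above, plain Monte Carlo, can be sketched as follows: draw input samples from the parameter distribution, push each through the model, and estimate statistics of the quantity of interest from the output sample. The model below is a hypothetical stand-in for an expensive simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta):
    # Hypothetical physical model: y = theta_1^2 + sin(theta_2).
    return theta[:, 0] ** 2 + np.sin(theta[:, 1])

# Uncertain inputs: two independent standard normal parameters.
n = 200_000
theta = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
y = model(theta)

mean_est = y.mean()                  # E[y]; for this model the exact value is 1
se = y.std(ddof=1) / np.sqrt(n)      # Monte Carlo standard error, O(n^{-1/2})
```

The O(n⁻¹ᐟ²) error decay is exactly what motivates the variance-reduction and surrogate-based alternatives in the list: quasi-random sequences, multi-level Monte Carlo, and collocation.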
- Calibration of Physical Models
  - Classical approach via minimization of (regularized) loss functions. Statistical (Bayesian) interpretation of the model calibration problem.
  - Sampling methods: Markov chain Monte Carlo, Metropolis-Hastings, hybrid Monte Carlo, sequential Monte Carlo.
  - Surrogate-based methods: generalized polynomial chaos, Gaussian process regression.
  - Variational methods: relative entropy, Kullback-Leibler divergence, automatic differentiation variational inference.
  - Data assimilation: Kalman filter, ensemble Kalman filter, particle filter.
  - Selecting optimal experimental measurements/sensor locations for calibrating model parameters.
  - Selecting optimal simulations for efficiently calibrating model parameters when simulations are expensive.
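A random-walk Metropolis-Hastings sampler, the workhorse among the MCMC methods listed above, fits in a few lines. The calibration problem below (Gaussian model with known unit variance, synthetic data, a wide Gaussian prior, and the proposal scale) is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=50)   # synthetic observations

def log_post(mu):
    # Log posterior: Gaussian likelihood (unit variance) + N(0, 10^2) prior.
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * mu**2 / 100.0

chain = [0.0]
for _ in range(5000):
    prop = chain[-1] + rng.normal(scale=0.5)     # random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_post(prop) - log_post(chain[-1]):
        chain.append(prop)
    else:
        chain.append(chain[-1])

posterior_mean = np.mean(chain[1000:])           # discard burn-in
```

With 50 observations the weak prior has little influence, so the posterior mean lands near the sample mean of the data; the burn-in discard and proposal-scale tuning are the practical details the course's MCMC unit addresses.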
- Optimization Under Uncertainty
  - Robust optimization, risk quantification, multi-objective optimization, Pareto fronts, utility theory.
  - Gradient-based optimization: Newton-Raphson, conjugate gradients, BFGS.
  - Sampling methods: scenario-based optimization, sample average approximation.
  - Stochastic methods: randomized search, simulated annealing, the Robbins-Monro algorithm, particle swarm optimization.
  - Bayesian global optimization: probability of improvement, expected improvement, knowledge gradient.
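The Robbins-Monro algorithm from the stochastic methods above can be sketched on a toy root-finding problem: solve E[g(x, W)] = 0 using only noisy evaluations of g, with step sizes a_k = 1/k. The test function g(x, w) = x − 2 + w with w ~ N(0, 1) is a hypothetical choice, so the root is x* = 2.

```python
import numpy as np

rng = np.random.default_rng(0)

x = 0.0
for k in range(1, 20_001):
    noisy_g = x - 2.0 + rng.standard_normal()   # noisy evaluation of g at x
    x -= noisy_g / k                            # Robbins-Monro step, a_k = 1/k
```

The step sizes satisfy the classical conditions (they sum to infinity while their squares sum to a finite value), which is what guarantees convergence despite the noise; the same scheme underlies stochastic gradient methods used throughout optimization under uncertainty.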
Location
Grissom, Room 102, Purdue University, West Lafayette, IN