The Center for Predictive Engineering and Computational Sciences (PECOS) is a DOE-funded Center of Excellence within the Institute for Computational Engineering and Sciences (ICES) at The University of Texas at Austin. PECOS is one of five such centers sponsored under the Predictive Science Academic Alliance Program (PSAAP) of the National Nuclear Security Administration’s Advanced Simulation and Computing Program.
PECOS brings together an interdisciplinary, multi-university team with research partners at the DOE National Labs and NASA. The goal of the PECOS Center is to develop the next generation of advanced computational methods for predictive simulation of multiscale, multiphysics phenomena, and to apply these methods to the analysis of vehicles reentering the atmosphere. In pursuing this research, PECOS is advancing the science and modeling of atmospheric reentry, and the science of predictive simulation.
Simulation of vehicle reentry into the atmosphere is a challenging problem involving many complex physical phenomena such as aerothermochemistry, thermal radiation, turbulence, and the response of complex materials to extreme conditions. These phenomena are caused by the interaction of extremely high-temperature gas flows with the vehicle’s thermal protection system.
Reliable predictions of such complex physical systems require sophisticated mathematical models of the physical phenomena involved. However, an equally important requirement is the systematic, comprehensive treatment of the calibration and validation of these models, along with the quantification of the uncertainties inherent in such models. Development of such predictive tools, along with the required mathematical models, is the primary mission of the PECOS Center.
The PECOS Center research program is producing fundamental advances both in predictive science and in the science of reentry vehicles.
The PSAAP Program
The PECOS Center was created with funding from the Predictive Science Academic Alliance Program (PSAAP) at the US Department of Energy. PSAAP is the successor to an earlier academic alliance program under the Advanced Simulation and Computing (ASC) program, which is administered by the National Nuclear Security Administration (NNSA) at the Department of Energy.
The primary goal of the PSAAP program is to establish validated, large-scale, multidisciplinary, simulation-based “Predictive Science” as a major academic and applied research field. Predictive science is understood as the application of verified and validated computational simulations to predict the behavior of complex systems. This large-scale, multidisciplinary approach to using advanced simulation for prediction is broadly applicable to complex systems such as global climate, economics, and advanced energy infrastructure.
The DOE ASC program has funded centers at five universities to advance the PSAAP goals, including the PECOS Center at the University of Texas. The other four are:
- The Center for the Predictive Modeling and Simulation of High-Energy Density Dynamic Response of Materials at the California Institute of Technology
- The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan
- The Center for Prediction of Reliability, Integrity and Survivability of Microsystems (PRISM) at Purdue University
- PSAAP@Stanford at Stanford University
Predicting the behavior of the physical world is central to both science and engineering. Advances in computer simulation (a relatively new tool of science and engineering), along with theory and experiment, have led researchers to contemplate predictions of increasingly complicated physical phenomena. However, the complexity of recent simulations has made their reliability difficult to assess, and one faces the danger of drawing false conclusions from inaccurate predictions of models that are too complex to check by other means.
The Center for Predictive Engineering and Computational Sciences (PECOS) is dedicated to the development of methods and techniques for reliable computational predictions. This is accomplished through the verification of numerical computations, the validation of the underlying physical models, and quantification of the uncertainties in the predictions.
Verification of numerical computations – in which one asks whether numerical results accurately represent the solution of the mathematical model being solved – is relatively well understood. It requires careful attention to good software engineering practices, continual software testing, and control of numerical discretization errors (through error estimation and error-driven adaptivity). Although these processes are well understood, they require substantial effort. Because verification of numerical results is a prerequisite for reliable computational predictions, verification processes are integral to all activities in the PECOS center.
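The discretization-error control mentioned above is often checked by comparing results on successively refined grids and confirming that the observed order of accuracy matches the formal order of the scheme. A minimal sketch of such a grid-refinement check (the function names are illustrative, not from any PECOS code):

```python
import math

def central_diff(f, x, h):
    """Second-order central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def observed_order(f, x, h):
    """Estimate the observed order of accuracy from three grid levels
    (h, h/2, h/4) using the ratio of successive differences
    (Richardson-style extrapolation)."""
    d1 = central_diff(f, x, h)
    d2 = central_diff(f, x, h / 2)
    d3 = central_diff(f, x, h / 4)
    return math.log2(abs(d1 - d2) / abs(d2 - d3))

# Verification check: for a smooth function the observed order should
# approach the formal order of the scheme (2 for central differences).
p = observed_order(math.sin, 1.0, 0.1)
print(f"observed order of accuracy ≈ {p:.2f}")
```

If the observed order deviates from the formal order, that signals a bug in the implementation or insufficient grid resolution, which is exactly the kind of failure verification is meant to catch.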
Validation of physical models and the quantification of uncertainty in predictions are not well established. They are closely related to each other and to the calibration of uncertain model parameters. At the PECOS center, the development of techniques for calibration, validation, and uncertainty quantification is guided by the following principles:
- Reliable experimental data with rigorously quantified uncertainties are necessary for calibration, validation, and uncertainty quantification. This requires verification and validation of both the experimental measurements and the subsequent data-reduction models.
- Calibration and validation in the presence of uncertainty are statistical, and posed in the context of the Bayesian interpretation of probability and Bayesian inference. Thus, our prior knowledge (or lack thereof) of the model parameters, our uncertainty in the structure of the mathematical model, and the uncertainty inherent in experimental data are all expressed in terms of probability distributions.
- As argued by Karl Popper, mathematical models cannot be determined to be valid; they can only be invalidated by experimental observations. Thus the validation process is one of gaining increasing confidence in a model as repeated attempts to invalidate it by experimental observations fail.
- (In)validation is performed in the context of a specified prediction and the purpose for which that prediction is being made. In particular, the (in)validity of a model is assessed relative to the quantity to be predicted (the Quantity of Interest, or QoI) and the requirements of the decision to be made based on the prediction. The reason is that a model can be valid for some predictions but not for others.
- For complex multi-physics systems and models, the calibration and validation process is hierarchical. In such systems, validation is pursued for the simplest possible component models first, and then for increasingly complex combinations of coupled models. This is necessary because multi-physics coupling will commonly involve new “coupling” models that need calibration and validation, and because component models may themselves be invalid under the conditions present in the coupled system.
- Calibration and validation are iterative processes: in cases where a model is considered invalid, proper action (for example, requiring more or better calibration data, or using a new mathematical model) is pursued to correct the failure.
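The Bayesian view of calibration described in the second principle above can be illustrated with the simplest possible case: a single uncertain model parameter with a Gaussian prior, updated by noisy measurements with Gaussian noise (the conjugate case, where the posterior is available in closed form). All numbers below are hypothetical, for illustration only:

```python
# Minimal sketch of Bayesian calibration of one model parameter theta,
# assuming a Gaussian prior N(prior_mean, prior_var) and independent
# measurements y_i = theta + noise, with noise ~ N(0, noise_var).

def gaussian_update(prior_mean, prior_var, data, noise_var):
    """Return the posterior mean and variance of theta.

    In this conjugate case the posterior precision is the sum of the
    prior precision and the data precision, and the posterior mean is
    the precision-weighted combination of prior mean and data.
    """
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    return post_mean, post_var

# Hypothetical calibration: vague prior belief theta ~ N(0, 4),
# three noisy measurements of the parameter.
mean, var = gaussian_update(0.0, 4.0, data=[2.1, 1.9, 2.2], noise_var=0.25)
print(f"posterior: mean={mean:.3f}, variance={var:.4f}")
```

The posterior variance is smaller than the prior variance, reflecting the information gained from the data; in realistic multi-physics calibrations the same update is carried out numerically (e.g. by Markov chain Monte Carlo) rather than in closed form.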