Reduction, Approximation, Machine Learning, Surrogates, Emulators and Simulators

Reduction, Approximation, Machine Learning, Surrogates, Emulators and Simulators
Author: Gianluigi Rozza
Language: en
Publisher: Springer Nature
Release Date: 2024-06-24
Reduction, Approximation, Machine Learning, Surrogates, Emulators and Simulators, written by Gianluigi Rozza, was published by Springer Nature on 2024-06-24 in the Mathematics category. The book is available in PDF, TXT, EPUB, Kindle, and other formats.
This volume reviews recent algorithmic and mathematical advances, and develops new research directions, for mathematical model approximation via RAMSES (Reduced order models, Approximation theory, Machine learning, Surrogates, Emulators, Simulators) in the setting of parametrized partial differential equations, including problems with sparse and noisy data in high-dimensional parameter spaces. The book is a valuable resource for researchers as well as master's and Ph.D. students.
Greedy Dictionary Learning Algorithms For Sparse Surrogate Modelling
Author: Valentin Stolbunov
Language: en
Publisher:
Release Date: 2017
Greedy Dictionary Learning Algorithms for Sparse Surrogate Modelling, written by Valentin Stolbunov, was released in 2017. The work is available in PDF, TXT, EPUB, Kindle, and other formats.
In the field of engineering design, numerical simulations are commonly used to forecast system performance before physical prototypes are built and tested. However, the fidelity of predictive models has outpaced advances in computer hardware and numerical methods, making it impractical to apply numerical optimization algorithms directly to the design of complex engineering systems modelled at high fidelity. A promising approach to this computational challenge is the use of surrogate models, which serve as approximations of the high-fidelity computational models and can be evaluated very cheaply. This makes surrogates extremely valuable in design optimization and in a wider class of problems: inverse parameter estimation, machine learning, uncertainty quantification, and visualization. This thesis is concerned with the development of greedy dictionary learning algorithms for efficiently constructing sparse surrogate models from a set of scattered observational data. The central idea is to define a dictionary of basis functions, either a priori or a posteriori in light of the dataset, and to select a subset of the basis functions from the dictionary using a greedy search criterion. In this thesis, we first develop a novel algorithm for sparse learning from parameterized dictionaries in the context of greedy radial basis function learning (GRBF). Next, we develop a novel algorithm for general greedy dictionary learning (GGDL), presented in the context of multiple kernel learning with heterogeneous dictionaries. In addition, we present a novel strategy, based on cross-validation, for parallelizing greedy dictionary learning, and a randomized sampling strategy that significantly reduces the approximation costs associated with large dictionaries. We also employ our GGDL algorithm in the context of uncertainty quantification to construct sparse polynomial chaos expansions. Finally, we demonstrate how our algorithms may be adapted to approximate gradient-enhanced datasets. Numerical studies are presented for a variety of test functions, machine learning datasets, and engineering case studies over a wide range of dataset sizes and dimensionalities. Compared to state-of-the-art approximation techniques such as classical radial basis function approximations, Gaussian process models, and support vector machines, our algorithms build surrogates which are significantly more sparse, of comparable or improved accuracy, and often cheaper in computation and memory.
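The greedy selection loop at the heart of such methods can be illustrated compactly. The following Python sketch is a minimal illustration of the general idea, not the thesis's GRBF or GGDL implementation: it builds a dictionary of Gaussian radial basis functions centred at the training points and, at each step, greedily adds the basis function most correlated with the current residual, refitting the coefficients by least squares. The function name greedy_rbf_fit, the fixed kernel width, and the synthetic data are all assumptions made for the example.

import numpy as np

def greedy_rbf_fit(X, y, n_terms=10, width=1.0):
    """Minimal sketch of greedy dictionary learning with an RBF dictionary.

    One Gaussian RBF candidate is centred at each training point; at each
    step the column that best explains the residual is added greedily.
    """
    # Dictionary: Gaussian RBF centred at each training point (n x n matrix).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    selected, residual = [], y.copy()
    for _ in range(n_terms):
        # Greedy criterion: column most correlated with the current residual.
        scores = np.abs(Phi.T @ residual)
        scores[selected] = -np.inf          # never reselect a column
        selected.append(int(np.argmax(scores)))
        # Refit coefficients on all selected columns by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
        residual = y - Phi[:, selected] @ coef
    return selected, coef

# Usage: approximate a scattered 1-D dataset with 5 greedily chosen RBFs.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = np.sin(2 * X[:, 0]) + 0.05 * rng.standard_normal(60)
centres, coef = greedy_rbf_fit(X, y, n_terms=5)

The sparsity comes from stopping after n_terms basis functions rather than using the full n-column dictionary, which is what makes the resulting surrogate cheap to store and evaluate.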
A Scientific Machine Learning Approach To Learning Reduced Models For Nonlinear Partial Differential Equations
Author: Elizabeth Yi Qian
Language: en
Publisher:
Release Date: 2021
A Scientific Machine Learning Approach to Learning Reduced Models for Nonlinear Partial Differential Equations, written by Elizabeth Yi Qian, was released in 2021. The work is available in PDF, TXT, EPUB, Kindle, and other formats.
This thesis presents a new scientific machine learning method which learns from data a computationally inexpensive surrogate model for predicting the evolution of a system governed by a time-dependent nonlinear partial differential equation (PDE), an enabling technology for many computational algorithms used in engineering settings. The proposed approach generalizes to the PDE setting an Operator Inference method previously developed for systems of ordinary differential equations (ODEs) with polynomial nonlinearities. The method draws on ideas from traditional physics-based modeling to explicitly parametrize the learned model by low-dimensional polynomial operators which reflect the known form of the governing PDE. This physics-informed parametrization is then united with tools from supervised machine learning to infer from data the reduced operators. The Lift & Learn method extends Operator Inference to systems whose governing PDEs contain more general (non-polynomial) nonlinearities through the use of lifting variable transformations which expose polynomial structure in the PDE. The proposed approach achieves a number of desiderata for scientific machine learning formulations, including analyzability, interpretability, and making underlying modeling assumptions explicit and transparent. This thesis therefore provides analysis of the Operator Inference and Lift & Learn methods in both the spatially continuous PDE and spatially discrete ODE settings. Results are proven regarding the mean square errors of the learned models, the impact of spatial and temporal discretization, and the recovery of traditional reduced models via the learning method. Sensitivity analysis of the operator inference problem to model misspecifications and perturbations in the data is also provided. The Lift & Learn method is demonstrated on the compressible Euler equations, the FitzHugh-Nagumo reaction-diffusion equations, and a large-scale three-dimensional simulation of a rocket combustion experiment with over 18 million degrees of freedom. For the first two examples, the Lift & Learn models achieve 2–3 orders of magnitude dimension reduction and match the generalization performance of traditional reduced models based on Galerkin projection of the PDE operators, predicting the system evolution with errors between 0.01% and 1% relative to the original nonlinear simulation. For the combustion application, the Lift & Learn models accurately predict the amplitude and frequency of pressure oscillations as well as the large-scale structures in the flow field’s temperature and chemical variables, with 5–6 orders of magnitude dimension reduction and 6–7 orders of magnitude computational savings. The demonstrated ability of the Lift & Learn models to accurately approximate the system evolution with orders-of-magnitude dimension reduction and computational savings makes the learned models suitable for use in many-query computations used to support scientific discovery and engineering decision-making.
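The core Operator Inference step described above, fitting low-dimensional polynomial operators to projected state data by least squares, can be sketched in a few lines. This Python example is a minimal illustration under stated assumptions (a linear-quadratic system dq/dt = A q + H (q ⊗ q), a POD basis, no regularization, and synthetic random snapshots); it is not the thesis's code, which additionally treats the spatially continuous PDE setting, lifting transformations, and large-scale data.

import numpy as np

def operator_inference(Q, Qdot, r):
    """Minimal Operator Inference sketch for dq/dt = A q + H (q kron q).

    Q    : (n, k) snapshot matrix of states
    Qdot : (n, k) time derivatives of the snapshots
    r    : reduced dimension
    """
    # POD basis: leading r left singular vectors of the snapshot matrix.
    V = np.linalg.svd(Q, full_matrices=False)[0][:, :r]      # (n, r)
    Qh = V.T @ Q                                             # reduced states (r, k)
    Qh_dot = V.T @ Qdot                                      # reduced derivatives
    # Data matrix stacking linear and quadratic terms column by column.
    quad = np.einsum('ik,jk->ijk', Qh, Qh).reshape(r * r, -1)
    D = np.vstack([Qh, quad])                                # (r + r^2, k)
    # Least-squares fit of the reduced operators: Qh_dot ~ [A H] D.
    O = np.linalg.lstsq(D.T, Qh_dot.T, rcond=None)[0].T      # (r, r + r^2)
    A, H = O[:, :r], O[:, r:]
    return V, A, H

# Usage on synthetic placeholder snapshots (shapes only; not physical data).
rng = np.random.default_rng(1)
Q = rng.standard_normal((50, 200))
Qdot = rng.standard_normal((50, 200))
V, A, H = operator_inference(Q, Qdot, r=4)

The key design point reflected here is that the learned model is explicitly parametrized by the operators A and H, whose polynomial form mirrors the governing equations, rather than by an opaque black-box map; this is what makes the approach analyzable and interpretable in the sense the abstract describes.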