Welcome & Check-In
A distinguished member of the research community will set the tone with a brief overview of the conference themes and objectives and a warm welcome to all participants.
* All times are listed in the Canada/Eastern (EST) time zone.
This presentation examines the use of Physics-Informed Neural Networks (PINNs), Variational Physics-Informed Neural Networks (VPINNs), and Deep Ritz methods, combined with stochastic quadrature rules, to solve parametric partial differential equations (PDEs). It begins by introducing parametric PDEs and how these neural network techniques can be used to solve them. The presentation then delves into the challenges of solving these PDEs, including optimization, regularity, and integration. It points out that while PINNs using strong formulations may have trouble with singular solutions, they handle integration better than weak formulation methods like VPINNs or Deep Ritz. To address these challenges, the presenter proposes using unbiased high-order stochastic quadrature rules for better integration and Regularity Conforming Neural Networks to deal with complex solutions and singularities.
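As background, here is a minimal sketch of a strong-form PINN loss for a 1D Poisson problem in which the integral of the squared residual is estimated by an unbiased Monte Carlo quadrature rule (fresh random points each step); the architecture, problem data, and sampling scheme are illustrative assumptions, not the presenter's setup.

```python
# Minimal sketch (not from the talk): strong-form PINN for -u''(x) = f(x) on (0, 1),
# with the residual integral estimated by unbiased Monte Carlo quadrature.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def f(x):                                   # assumed right-hand side
    return (torch.pi ** 2) * torch.sin(torch.pi * x)

def pinn_loss(n_quad=128):
    x = torch.rand(n_quad, 1, requires_grad=True)       # Monte Carlo quadrature nodes
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - f(x)
    interior = (residual ** 2).mean()                    # unbiased estimate of the residual integral
    xb = torch.tensor([[0.0], [1.0]])
    boundary = (net(xb) ** 2).mean()                     # enforce u(0) = u(1) = 0
    return interior + boundary

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = pinn_loss()                                   # fresh random quadrature points each step
    loss.backward()
    opt.step()
```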
In recent years, various neural-network methodologies have been proposed for the numerical approximation of PDEs. Many of these aim to minimize a suitable cost functional over an open set of neural-network functions. In this talk I will present quasi-optimality results for several methodologies, which guarantee near-best approximations for (quasi-)minimizers. First I will discuss an abstract result for minimizing a differentiable, strongly convex functional over any open set. This result applies to Deep Ritz, PINNs, and other PDE-constrained neural-minimization problems. Then I will discuss Minimal-Residual (MinRes) methods involving a separately discretized dual norm of the residual, which hence require a suitable (Fortin) compatibility condition. Such a situation is encountered in VPINNs, WANs, and MinRes-FEM using neural-network dual norms.
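For orientation, the following is a schematic version of the kind of Céa-type bound such results take for a functional $J$ that is $\alpha$-strongly convex with an $L$-Lipschitz derivative; the precise statements and constants in the talk may differ.

```latex
% Schematic quasi-optimality bound (illustration, not the talk's precise statement).
% Let u be the exact minimizer of J over the full space (so J'(u) = 0), and let
% u_theta be any delta-quasi-minimizer over an open set M of network functions,
% i.e. J(u_theta) <= inf_{v in M} J(v) + delta. Strong convexity and the Lipschitz
% derivative give
\[
  \tfrac{\alpha}{2}\,\|v-u\|^{2} \;\le\; J(v)-J(u) \;\le\; \tfrac{L}{2}\,\|v-u\|^{2}
  \qquad \text{for all } v,
\]
% and hence the near-best approximation property
\[
  \|u-u_{\theta}\|^{2} \;\le\; \frac{2}{\alpha}\bigl(J(u_{\theta})-J(u)\bigr)
  \;\le\; \frac{L}{\alpha}\,\inf_{v\in M}\|u-v\|^{2} \;+\; \frac{2}{\alpha}\,\delta .
\]
```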
Active learning is an important concept in machine learning, in which the learning algorithm can choose where to query the underlying ground truth to improve the accuracy of the learned model. As machine learning techniques come to be more commonly used in scientific computing problems, where data is often expensive to obtain, the use of active learning is expected to be particularly important in the design of efficient algorithms. In this work, we introduce a general framework for active learning in regression problems. Our framework extends the standard setup by allowing for general types of data, rather than merely pointwise samples of the target function. This generalization covers many cases of practical interest, such as data acquired in transform domains (e.g., Fourier data), vector-valued data (e.g., gradient-augmented data), data acquired along continuous curves, and multimodal data (i.e., combinations of different types of measurements). Our framework considers random sampling according to a finite number of sampling measures and arbitrary nonlinear approximation spaces (model classes). We introduce the concept of generalized Christoffel functions and show how these can be used to optimize the sampling measures. We prove that this leads to near-optimal sample complexity in various important cases. This work focuses on applications in scientific computing, where, as noted, active learning is often desirable, since it is usually expensive to generate data. We demonstrate the efficacy of our framework for gradient-augmented learning, Magnetic Resonance Imaging (MRI) using generative models, and adaptive sampling for solving PDEs using Physics-Informed Neural Networks (PINNs).
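A minimal sketch of the basic (non-generalized) Christoffel-function sampling idea for weighted least-squares polynomial regression is given below; the basis, target function, and sample sizes are illustrative assumptions and do not reproduce the general framework described in the talk.

```python
# Minimal sketch (assumptions, not the talk's method): Christoffel-function-based sampling
# for least-squares polynomial regression on [-1, 1]. With an orthonormal basis {phi_j},
# k(x) = sum_j phi_j(x)^2 is the reciprocal Christoffel function; sampling proportional to k
# and reweighting the least-squares fit is the mechanism behind near-optimal sample complexity.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
N = 10                                             # dimension of the polynomial space

def basis(x):
    """Orthonormal Legendre basis w.r.t. the uniform probability measure on [-1, 1]."""
    V = legendre.legvander(x, N - 1)               # columns P_0 .. P_{N-1}
    return V * np.sqrt(2 * np.arange(N) + 1)       # since int P_n^2 dx/2 = 1/(2n+1)

def christoffel(x):
    Phi = basis(x)
    return (Phi ** 2).sum(axis=1)                  # k(x) = sum_j phi_j(x)^2

def sample(m):
    """Rejection sampling from the density k(x)/N w.r.t. the uniform measure."""
    out = []
    bound = christoffel(np.array([1.0]))[0]        # k is maximized at the endpoints for Legendre
    while len(out) < m:
        x = rng.uniform(-1, 1)
        if rng.uniform(0, bound) < christoffel(np.array([x]))[0]:
            out.append(x)
    return np.array(out)

f = lambda x: np.exp(x) * np.sin(4 * x)            # hypothetical target function
x = sample(40)
w = N / christoffel(x)                             # importance weights
A = basis(x) * np.sqrt(w)[:, None]
b = f(x) * np.sqrt(w)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)       # weighted least-squares coefficients
```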
Gaussian processes (GPs) are powerful machine learning models that have been successfully used in many applications that involve clustering, density estimation, system identification, regression, and classification. The majority of recent works on GPs are aimed at increasing their scalability to high dimensions and/or large datasets. In this talk, we will take a little detour from these works and focus on the fundamental limitations of kernels in the context of scientific machine learning. By addressing these limitations, we demonstrate that GPs can provide computationally efficient and competitive performance in critical applications such as topology optimization and operator learning.
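For readers unfamiliar with the baseline, a minimal sketch of exact GP regression with a squared-exponential kernel is included below; it only fixes notation for the kernel-based setting whose limitations the talk addresses, and the kernel, data, and noise level are arbitrary choices.

```python
# Minimal sketch (standard exact GP regression, not the talk's methods).
import numpy as np

def rbf(X1, X2, lengthscale=0.2, variance=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 20)                           # training inputs
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(20)
Xs = np.linspace(0, 1, 200)                         # test inputs

K = rbf(X, X) + 1e-4 * np.eye(len(X))               # kernel matrix plus noise/jitter
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
Ks = rbf(Xs, X)
mean = Ks @ alpha                                   # posterior mean
v = np.linalg.solve(L, Ks.T)
var = np.diag(rbf(Xs, Xs)) - (v ** 2).sum(axis=0)   # posterior variance
```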
I will present recent results that permit us to characterize the probability of extreme events as a random variable. I will also describe how we can evaluate the "statistical" sensitivity of this random variable to both input random parameters and model errors. Achieving this task hinges on two specific developments: 1) constructing mappings between the coefficients of a polynomial chaos expansion and the associated probability distribution, and 2) characterizing model error within the polynomial chaos formalism so as to assess its impact jointly with parametric uncertainty. I will show results from structural mechanics.
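A minimal sketch of the elementary direction of this mapping, pushing samples of the germ through a given Hermite chaos expansion to estimate an exceedance probability, is shown below; the coefficients and threshold are hypothetical, and the talk's constructions go well beyond this.

```python
# Minimal sketch (assumed setup, not the presenter's construction): given coefficients of a
# Hermite polynomial chaos expansion Q(xi) = sum_k c_k He_k(xi) with xi ~ N(0, 1), recover the
# distribution of the quantity of interest and an exceedance probability by sampling the germ.
import numpy as np
from numpy.polynomial import hermite_e

rng = np.random.default_rng(0)
c = np.array([1.0, 0.5, 0.2, 0.05])        # hypothetical PCE coefficients (probabilists' Hermite)
xi = rng.standard_normal(100_000)          # samples of the standard normal germ
q = hermite_e.hermeval(xi, c)              # push the germ through the expansion

threshold = 2.5                            # hypothetical extreme-event threshold
p_extreme = np.mean(q > threshold)         # Monte Carlo estimate of the exceedance probability
```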
Physics-informed neural networks and operator networks have shown promise for effectively solving equations modeling physical systems. However, these networks can be difficult or impossible to train accurately for some systems of equations. One way to improve training is to use a small amount of data; however, such data can be expensive to produce. We will introduce our novel multifidelity framework for stacking physics-informed neural networks and operator networks that facilitates training by progressively reducing the errors in our predictions when no data is available. In stacking networks, we successively build a chain of networks, where the output at one step can act as a low-fidelity input for training the next step, gradually increasing the expressivity of the learned model. We will finally discuss the extension to domain decomposition using the finite basis method, including applications to newly developed Kolmogorov-Arnold Networks.
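One possible reading of the stacking construction is sketched below, with each stage receiving the previous stage's prediction as an extra input; the architecture and chain length are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (an assumed reading of "stacking"): each stage takes the spatial input
# together with the previous stage's prediction, so the chain u_{k+1}(x) = N_k(x, u_k(x))
# can gradually increase the expressivity of the learned model.
import torch

class Stage(torch.nn.Module):
    def __init__(self, width=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1),
        )

    def forward(self, x, u_prev):
        return self.net(torch.cat([x, u_prev], dim=-1))

def stacked_prediction(stages, x):
    """Evaluate the chain; each stage sees its predecessor's output as a low-fidelity input."""
    u = torch.zeros_like(x)
    for stage in stages:
        u = stage(x, u)                 # earlier stages stay frozen while the newest one is trained
    return u

stages = [Stage() for _ in range(3)]
x = torch.linspace(0, 1, 100).reshape(-1, 1)
u = stacked_prediction(stages, x)
```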
This talk will use simple examples to demonstrate that ReLU neural networks (NNs) can accurately approximate discontinuous/non-smooth functions with degrees of freedom several orders of magnitude lower than those required by numerical methods on fixed meshes. It will then introduce Physics-Preserved Neural Network (P2NN) methods for interface problems, which rigorously enforce interface conditions at the discrete level. A major computational challenge associated with ReLU NNs is the inherently non-convex optimization problem they produce. This talk will conclude with a discussion of our latest advancements in overcoming this critical issue.
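The first point can be illustrated with a two-neuron ReLU network that resolves a unit step up to a ramp of width 1/k, as in the sketch below (an illustration only, not taken from the talk).

```python
# Minimal sketch: a ReLU network with two hidden units represents a sharp ramp that
# approximates the unit step at x = 0.5 to arbitrary L^2 accuracy as the slope k grows,
# using O(1) degrees of freedom instead of a fine mesh around the jump.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def step_approx(x, k=1e4, a=0.5):
    # Two-neuron ReLU network: 0 for x < a, 1 for x > a + 1/k, linear ramp in between.
    return relu(k * (x - a)) - relu(k * (x - a) - 1.0)

x = np.linspace(0, 1, 1_000_001)
target = (x > 0.5).astype(float)
err = np.sqrt(np.mean((step_approx(x) - target) ** 2))   # RMS error, driven by the O(1/k) ramp width
print(f"RMS error with 2 ReLU units: {err:.2e}")
```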
Accurate surrogate models are crucial for advancing scientific machine learning, particularly for parametric partial differential equations. Neural operators provide a powerful framework for learning solution operators between function spaces; however, controlling and quantifying their errors remains a key challenge for downstream tasks, such as optimization and inference. This talk presents a residual-based approach for error estimation and correction of neural operators, thereby improving prediction accuracy without requiring retraining of the original network. Applications to Bayesian inverse problems and optimization are discussed. We will conclude by highlighting emerging ideas and recent progress aimed at enhancing the reliability of neural operators and opening new directions at the intersection of learning, adaptivity, and decision-making.
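A generic residual/defect-correction analogue on a linear model problem is sketched below to convey the flavor of residual-based estimation and correction; it uses a finite-difference discretization and a synthetic "prediction" and is not the speakers' estimator.

```python
# Minimal sketch (generic defect correction, not the talk's method): for a linear problem
# A u = f, plug a surrogate prediction u_pred into the discretized PDE, use ||A u_pred - f||
# as a computable error indicator, and solve A e = r once to correct the prediction.
import numpy as np

n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
# Finite-difference discretization of -u'' = f on (0, 1) with zero Dirichlet boundary conditions.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h**2
f = np.pi**2 * np.sin(np.pi * x)
u_true = np.sin(np.pi * x)

u_pred = u_true + 0.05 * np.sin(3 * np.pi * x)   # stand-in for an imperfect neural-operator output

r = f - A @ u_pred                               # residual of the prediction
indicator = np.linalg.norm(r) * h**0.5           # error indicator, no true solution needed
u_corr = u_pred + np.linalg.solve(A, r)          # one defect-correction step

print(indicator, np.abs(u_corr - u_true).max())
```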
In this talk we will review and explore new avenues in generative flow models. We discuss their theoretical background as well as their limitations. We then introduce iterative flow matching, a technique that allows for more accurate flow matching. Finally, we discuss multiscale acceleration techniques and show how the two can be combined.
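For context, a minimal sketch of standard (non-iterative) conditional flow matching with linear interpolation paths is given below; the target distribution and network are toy assumptions.

```python
# Minimal sketch (standard conditional flow matching as background, not the iterative variant
# from the talk): learn a velocity field v_theta(x, t) along linear paths x_t = (1 - t) x0 + t x1
# between a Gaussian base and the data distribution.
import torch

dim = 2
v_net = torch.nn.Sequential(
    torch.nn.Linear(dim + 1, 64), torch.nn.SiLU(),
    torch.nn.Linear(64, 64), torch.nn.SiLU(),
    torch.nn.Linear(64, dim),
)
opt = torch.optim.Adam(v_net.parameters(), lr=1e-3)

def sample_data(n):                        # hypothetical target distribution (two Gaussian blobs)
    c = torch.randint(0, 2, (n, 1)).float() * 4 - 2
    return torch.randn(n, dim) * 0.3 + c

for step in range(1000):
    x1 = sample_data(256)                  # data samples
    x0 = torch.randn_like(x1)              # base (noise) samples
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1             # point on the linear interpolation path
    target = x1 - x0                       # velocity of that path
    v = v_net(torch.cat([xt, t], dim=-1))
    loss = ((v - target) ** 2).mean()      # flow-matching regression loss
    opt.zero_grad(); loss.backward(); opt.step()
```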
Many significant dynamical systems, including Hamiltonian (energy-conserving) and metriplectic (entropy-producing) systems, are governed by algebraic brackets with key structural properties. When creating scientific machine learning (SciML) surrogates for these dynamics, it is crucial to design the learning problem to ensure that the learned system adheres to these fundamental properties, which dictate system behavior. This talk presents recent advancements in achieving this goal within the context of model reduction, from both 'top-down' and 'bottom-up' perspectives. We demonstrate how preserving bracket-based structure in reduced-order surrogates enhances stability, simplifies training, and yields more realistic results at prediction time.
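As a canonical example of bracket preservation, the sketch below parameterizes only a scalar Hamiltonian and defines the vector field through the canonical Poisson bracket, so the learned dynamics conserve the learned energy by construction; this illustrates the principle and is not the talk's reduced-order formulation.

```python
# Minimal sketch (canonical Hamiltonian case, not the speakers' model-reduction setting):
# learn a scalar H_theta(q, p) and define dz/dt = J grad H_theta(z) with J skew-symmetric,
# so dH_theta/dt = grad H . J grad H = 0 for any network weights.
import torch

H_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
J = torch.tensor([[0.0, 1.0], [-1.0, 0.0]])      # canonical symplectic (bracket) matrix

def vector_field(z):
    z = z.requires_grad_(True)
    H = H_net(z).sum()
    grad_H = torch.autograd.grad(H, z, create_graph=True)[0]
    return grad_H @ J.T                           # row-wise J grad H(z)

# The learning problem then fits the structured vector field to observed trajectory slopes, e.g.:
z = torch.randn(128, 2)
dz_obs = torch.randn(128, 2)                      # stand-in for measured time derivatives
loss = ((vector_field(z) - dz_obs) ** 2).mean()
loss.backward()
```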