Welcome & Check-In
Our team will be ready to help you register, provide conference materials, and answer any questions you may have.
* All times are shown in the Canada/Eastern time zone (EST).
Although deemed universal function approximators, neural networks (NNs) in practice struggle to reduce prediction errors below a certain threshold, even with large network sizes and extended training iterations. Here, we develop multi-stage neural networks, in which each stage uses a new network optimized to fit the residual left by the previous stage. We demonstrate that the prediction error from multi-stage training, for both regression problems and physics-informed neural networks, can nearly reach double-precision floating-point machine precision. This mitigates the longstanding accuracy limitation of NNs and can be used to address spectral bias in multiscale problems.
We show that "small" neural operators (NOs) can uniformly approximate the solution operator for structured families of forward-backward stochastic differential equations (FBSDEs) with random terminal times, uniformly on suitable compact sets determined by Sobolev norms, to any given error threshold (ε > 0) using a depth of O(log(1/ε)), a width of O(1), and a sub-linear rank. This result stems from our second main contribution, which demonstrates that convolutional NOs with similar depth, width, and rank can approximate the solution operator for a wide range of elliptic partial differential equations (PDEs). A key insight is that the convolutional layers of our NO efficiently encode the Green's function of the elliptic PDEs associated with our FBSDEs. A byproduct of our analysis provides the first theoretical justification for the benefit of lifting channels in NOs: they exponentially slow the rank growth of the NO.
In several fields, autoregressive physics-agnostic deep learning architectures are replacing traditional physics-based solvers to perform near real-time simulation. In this talk, we present a framework for learning the non-equilibrium statistical dynamics that arise during coarse-graining of multiscale systems. For this broad class of multiscale/multiphysics problems, the loss of information during coarse-graining leads to an entropic introduction of dissipation, memory, and fluctuations, with detailed balance between dissipative and stochastic physics being crucial to obtain accurate non-equilibrium statistics. We consider two problems, image-based simulation of colloidal systems and coarse-grained polymer physics, where state-of-the-art learning architectures completely fail to capture emergent structure and non-equilibrium behavior. To remedy this, we introduce an abstract framework for extracting metriplectic bracket dynamics from observations of particle trajectories. When combined with a self-supervised learning strategy, this framework allows the unsupervised identification of entropic variables which yield thermodynamically consistent models. We introduce an open-source implementation in LAMMPS where models trained on a small representative volume may be generalized at inference time to arbitrarily large systems.
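The metriplectic structure the abstract refers to can be illustrated on a toy system; the state, generators, and dissipation rate below are illustrative choices, not the talk's model. The structural ingredients are a skew-symmetric Poisson operator L, a symmetric positive semi-definite friction operator M, and the degeneracy conditions L∇S = 0 and M∇E = 0, which together guarantee energy conservation and entropy production; for the choices below, the flow dz/dt = L∇E + M∇S reduces to a damped oscillator with an entropy variable absorbing the dissipated energy.

```python
import numpy as np

gamma = 0.5  # illustrative dissipation rate (assumed value)

def rhs(z):
    # Metriplectic flow dz/dt = L grad(E) + M grad(S) for state z = (q, p, s),
    # with energy E = q^2/2 + p^2/2 + s and entropy S = s.
    q, p, s = z
    gE = np.array([q, p, 1.0])
    gS = np.array([0.0, 0.0, 1.0])
    L = np.array([[0.0, 1.0, 0.0],
                  [-1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])      # skew-symmetric; L @ gS == 0
    v = np.array([0.0, 1.0, -p])         # chosen so that M @ gE == 0
    M = gamma * np.outer(v, v)           # symmetric positive semi-definite
    return L @ gE + M @ gS

def rk4_step(z, dt):
    k1 = rhs(z)
    k2 = rhs(z + dt / 2 * k1)
    k3 = rhs(z + dt / 2 * k2)
    k4 = rhs(z + dt * k3)
    return z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

E = lambda z: z[0]**2 / 2 + z[1]**2 / 2 + z[2]

z = np.array([1.0, 0.0, 0.0])
traj_E, traj_S = [E(z)], [z[2]]
for _ in range(2000):
    z = rk4_step(z, 0.01)
    traj_E.append(E(z))    # conserved (up to integrator error)
    traj_S.append(z[2])    # non-decreasing: ds/dt = gamma * p^2 >= 0
```

The degeneracy conditions are exactly what a learned metriplectic model must preserve to be thermodynamically consistent.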
Modern machine learning has shown remarkable promise in multiple applications. However, brute-force use of neural networks, even with huge numbers of trainable parameters, can fail to provide highly accurate predictions for problems in the physical sciences. We present a collection of ideas about how enforcing physics, exploiting multifidelity knowledge, and using the kernel representation of neural networks can lead to significant increases in efficiency and/or accuracy. Various examples are used to illustrate the ideas.
Traditional methods like the finite element method (FEM) are accurate but computationally intensive for real-time predictions. Neural operators offer efficient alternatives but require extensive training data and often lack physics fidelity. This presentation introduces a hybrid framework coupling FEM with a physics-informed DeepONet (PI-DeepONet), combining accuracy, scalability, and efficiency. A domain decomposition strategy uses coarse-mesh FEM globally and PI-DeepONet for detailed subdomains, with Schwarz coupling linking them. A novel time-advancing scheme reduces error accumulation. This mesh-free surrogate enables multiscale coupling. We will demonstrate the limitations of standalone neural operators and how the hybrid model overcomes them for complex dynamical system simulations.
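The alternating Schwarz coupling at the heart of such hybrid frameworks can be sketched on a 1D Poisson problem. Here plain finite-difference solves stand in for both the coarse FEM solve and the PI-DeepONet surrogate; the domain split, overlap, and sweep count are illustrative assumptions. Each sweep solves one overlapping subdomain using the interface trace taken from the other subdomain's current solution.

```python
import numpy as np

def fd_solve(a, b, ua, ub, f, n=61):
    # Dirichlet solve of u'' = f on [a, b] with a second-order finite-difference
    # stencil; stands in for either the global coarse solve or the learned
    # surrogate on a subdomain.
    x = np.linspace(a, b, n)
    h = x[1] - x[0]
    m = n - 2
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    rhs = f(x[1:-1])
    rhs[0] -= ua / h**2        # fold boundary values into the right-hand side
    rhs[-1] -= ub / h**2
    u = np.empty(n)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

f = lambda x: -np.pi**2 * np.sin(np.pi * x)   # manufactured so u = sin(pi x)
exact = lambda x: np.sin(np.pi * x)

# Overlapping subdomains [0, 0.6] and [0.4, 1]: alternate Schwarz sweeps,
# each solve taking its interface value from the other subdomain's solution.
g_left = 0.0   # initial guess for the trace at x = 0.6
for _ in range(20):
    xl, ul = fd_solve(0.0, 0.6, 0.0, g_left, f)
    g_right = np.interp(0.4, xl, ul)           # trace passed to the right solve
    xr, ur = fd_solve(0.4, 1.0, g_right, 0.0, f)
    g_left = np.interp(0.6, xr, ur)            # trace passed back to the left

err = max(np.abs(ul - exact(xl)).max(), np.abs(ur - exact(xr)).max())
print(err)
```

In the hybrid setting, one of the two `fd_solve` calls would be replaced by an evaluation of the trained PI-DeepONet on its subdomain, while the iteration structure stays the same.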
We introduce the concept of neural control of discrete weak formulations of partial differential equations (PDEs), in which finite element discretizations are modified through neural-network weight functions. The weight functions act as control variables that, through the minimization of a cost functional, produce discrete solutions incorporating user-defined desirable attributes. Well-posedness and convergence of the associated constrained-optimization problem are analyzed. In particular, we prove, under certain conditions, that the discrete weak forms are stable and that quasi-minimizing neural controls exist, which converge quasi-optimally. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
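Under assumed notation (the symbols below are illustrative, not the talk's), the constrained-optimization problem described in the abstract takes the schematic form

```latex
\min_{\theta}\; J(u_\theta)
\quad \text{subject to} \quad
b\!\left(u_\theta,\, v_\theta(\varphi)\right) = \ell\!\left(v_\theta(\varphi)\right)
\quad \forall\, \varphi \in V_h ,
```

where $u_\theta \in U_h$ is the discrete solution, $V_h$ is the finite element test space, $v_\theta$ is a neural-network-parameterized weight (test) function with parameters $\theta$, $b(\cdot,\cdot)$ and $\ell(\cdot)$ are the bilinear and linear forms of the weak formulation, and $J$ is a cost functional encoding the desired attributes of the solution.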