Welcome & Check-In
Discover the latest advances in responsible artificial intelligence research and its implications for our society.
* All times are listed in Canada/Eastern (EST).
Deep learning is having a profound impact on both industry and scientific research. While this paradigm continues to demonstrate impressive performance across a wide range of applications, its mathematical foundations remain insufficiently understood. Motivated by deep learning methods in scientific computing, I will illustrate the framework of practical existence theorems. These theorems aim to bridge the gap between theory and practice by combining constructive approximation results for deep neural networks with recovery guarantees from least squares and compressed sensing theory. They identify sufficient conditions on network architecture, training strategy, and training set size that guarantee a desired level of accuracy for a target function class. I will highlight recent advances in the field and demonstrate the application of practical existence theorems in high-dimensional function approximation, reduced-order modeling, and physics-informed machine learning.
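As a rough illustration of the recovery ingredient mentioned above, the sketch below fits a polynomial (standing in for a trained network) to random samples of a smooth target by least squares; the target function, basis, sample count, and degree are illustrative assumptions, not the speaker's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def ls_recover(f, m, degree):
    """Least-squares polynomial fit from m uniform random samples of f."""
    x = rng.uniform(-1.0, 1.0, size=m)
    return np.poly1d(np.polyfit(x, f(x), degree))

f = lambda x: np.exp(-x ** 2)              # smooth target function
p = ls_recover(f, m=200, degree=10)        # m well above the basis size

x_test = np.linspace(-1.0, 1.0, 1000)
max_err = np.max(np.abs(p(x_test) - f(x_test)))
# With enough samples relative to the number of basis functions, the fit
# is accurate on the whole interval, not just at the sample points.
```

Practical existence theorems quantify precisely this kind of trade-off: how large the architecture and the training set must be to guarantee a target accuracy.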
This talk delves into recent advancements in physics-informed neural networks (PINNs) for fluid mechanics. We focus on the PirateNet architecture, which incorporates sequence-to-sequence learning, random weight factorization, gradient norm-based loss balancing, causal training, and random Fourier features. Applications to moving interface problems using the level set method and free-surface flows governed by the shallow-water equations demonstrate significant performance improvements. Despite the promise of PINNs as alternatives to traditional solvers, challenges in computational cost and generalization persist. We discuss future directions to enhance the scalability and competitiveness of PINNs for real-world applications.
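The physics-informed loss at the core of PINNs can be sketched as follows; this is a minimal illustration of the idea, not the PirateNet architecture, and centered finite differences on a grid stand in for the automatic differentiation a real PINN would apply to the network output.

```python
import numpy as np

def heat_residual_loss(u, dx, dt, nu):
    """Mean squared residual of the 1D heat equation u_t = nu * u_xx
    for a space-time field u[t, x], via centered finite differences."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2.0 * dt)
    u_xx = (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx ** 2
    return np.mean((u_t - nu * u_xx) ** 2)

nu = 0.05
x = np.linspace(0.0, np.pi, 64)
t = np.linspace(0.0, 1.0, 64)
T, X = np.meshgrid(t, x, indexing="ij")

exact = np.exp(-nu * T) * np.sin(X)   # solves the heat equation exactly
wrong = np.sin(X) + 0.0 * T           # ignores diffusion entirely

dx, dt = x[1] - x[0], t[1] - t[0]
# The physics loss is near zero for the exact solution and orders of
# magnitude larger for a field that violates the PDE.
loss_exact = heat_residual_loss(exact, dx, dt, nu)
loss_wrong = heat_residual_loss(wrong, dx, dt, nu)
```

Minimizing such a residual over the parameters of a network is what makes the method "physics-informed"; the PirateNet components listed above address the optimization difficulties that arise when doing so.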
Most machine learning algorithms for time-series forecasting leverage mean-squared-error based loss functions. In systems exhibiting deterministic chaos, such a loss function is insufficient for learning long-term invariant behavior. In this talk, we will discuss modifications to the standard time-series learning paradigm to enable the prediction of chaotic dynamical systems both in terms of short-term deterministic metrics and long-term invariant measures. Such modifications include structural changes to the architecture of the function approximation (for instance through novel types of neural networks) as well as modifications to the optimization algorithms. Examples of the improved performance of the proposed techniques will range from canonical systems such as the Kuramoto-Sivashinsky equation to challenging engineering and geophysics problems.
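One such modification can be sketched as follows; this is an illustrative assumption, not the speaker's exact formulation. The mean-squared error is augmented with a penalty matching long-term statistics of the trajectories (here just the first two moments), a crude proxy for matching the invariant measure.

```python
import numpy as np

def invariant_loss(pred_traj, true_traj, weight=1.0):
    """Short-term MSE plus a penalty on long-term statistics.

    The moment-matching penalty (mean and standard deviation of the
    trajectories) is an illustrative stand-in for matching the
    invariant measure of a chaotic system."""
    mse = np.mean((pred_traj - true_traj) ** 2)
    moment_penalty = ((pred_traj.mean() - true_traj.mean()) ** 2
                      + (pred_traj.std() - true_traj.std()) ** 2)
    return mse + weight * moment_penalty

t = np.linspace(0.0, 10.0, 500)
true_traj = np.sin(t)
biased = np.sin(t) + 0.5          # same dynamics, wrong long-term mean

# The combined loss penalizes the statistical mismatch on top of the
# pointwise error that plain MSE already measures.
plain_mse = np.mean((biased - true_traj) ** 2)
combined = invariant_loss(biased, true_traj)
```

For a chaotic system, short-term pointwise error is unavoidable past the predictability horizon, which is why statistical terms of this kind become the meaningful training signal for long-term behavior.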
In recent years, machine learning approaches, such as physics-informed neural networks (PINNs) [1] or the deep Ritz method [2], have shown promising results for several classes of initial- and boundary-value problems. However, their ability to surpass classical discretization methods such as the finite element method, particularly in terms of accuracy, remains a significant challenge. One of the main obstacles of deep learning approaches lies in their inability to consistently reduce the relative error in the computed solution. We present a novel approach, multi-level neural networks, to reduce the solution error when using deep learning methods. The main idea consists in computing an initial approximation to the problem using a simple neural network and in estimating, in an iterative manner, a correction by solving the problem for the residual error with a new network of increasing complexity. This sequential reduction of the residual associated with the partial differential equation allows one to decrease the solution error, which, in some cases, can be reduced to machine precision. The underlying explanation is that the method is able to capture at each level smaller scales of the solution using a new network. Additionally, this approach is extended to the accurate solution of linear operators using Green operator networks [3].
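The multi-level idea can be sketched as follows: fit a simple model, then repeatedly fit the remaining residual with a model of higher complexity. Polynomial least-squares fits stand in for the neural networks here, and the target function and degrees are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def multilevel_fit(x, y, degrees):
    """Sequentially fit models of increasing degree to the residual."""
    approx = np.zeros_like(y)
    errors = []
    for deg in degrees:
        residual = y - approx                  # error left by prior levels
        coeffs = np.polyfit(x, residual, deg)
        approx = approx + np.polyval(coeffs, x)
        errors.append(np.max(np.abs(y - approx)))
    return approx, errors

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3.0 * x) + 0.05 * np.sin(9.0 * x)   # target with two scales
_, errors = multilevel_fit(x, y, degrees=[3, 7, 15])
# Each level captures finer scales of the target, so the maximum error
# decreases level by level.
```

The toy captures the mechanism described in the abstract: each new, more complex level only has to resolve the scales the previous levels missed, which is what drives the error down.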
Part I: I'll discuss numerical PDE solvers versus traditional inverse-problem-based solvers and deep learning approaches. Part II: I'll talk about our current assessment of risks from AI (misuse, reliability, and systemic risks) as well as approaches to AI safety.