* All times are in Canada/Eastern time (EDT).

    8:30 - 8:45 AM EDT
    Salle des promotions

    Opening remarks

    Frédéric Sanchez, Consul General of France in Québec, and a speaker to be confirmed

    8:45 - 9:45 AM EDT
    Salle des promotions

    Human-Centered Explainable AI (XAI): From Algorithms to User Experiences

    With Vera Liao, Principal Researcher at Microsoft Research Montréal

    ABOUT THE TALK / Artificial Intelligence technologies are increasingly used to make decisions and perform autonomous tasks in critical domains. The need to understand AI in order to improve, contest, develop appropriate trust in, and better interact with AI systems has spurred great academic and public interest in Explainable AI (XAI). The technical field of XAI has produced a vast collection of algorithms in recent years. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are increasingly important, especially as practitioners begin to leverage XAI algorithms to build XAI applications. In this talk, I will draw on my own research and broader HCI work to highlight the central role that human-centered approaches should play in shaping XAI technologies, including driving technical choices by understanding users' explainability needs, uncovering pitfalls of existing XAI methods, and providing conceptual frameworks for human-compatible XAI.

    ABOUT THE SPEAKER / Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at IBM T.J. Watson Research Center, and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at top-tier computer science conferences. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, on the editors team for the ACM CSCW conferences, and on the Editorial Board of ACM Transactions on Interactive Intelligent Systems (TiiS).

    9:45 - 10:15 AM EDT
    Salle des promotions

    Break and poster session

    10:15 - 10:45 AM EDT
    Salle des promotions

    Interpretability in an Industrial Context – A Sensitivity Analysis Perspective

    With Sébastien Da Veiga, head of the AI team for design and simulation at Safran

    ABOUT THE TALK / Manufacturing production and the design of industrial systems are two examples where the interpretability of learning methods makes it possible to grasp how the inputs and outputs of a system are connected, and therefore to improve the system's efficiency. Although there is no consensus on a precise definition of interpretability, it is possible to identify several requirements: "simplicity, stability, and accuracy", rarely all satisfied by existing interpretable methods. In this talk, we will discuss two complementary approaches for designing interpretable algorithms. First, we will focus on designing a robust rule learning model, which is simple and highly predictive thanks to its construction based on random forests. Second, for explaining black-box machine learning models directly, we will develop some connections between variable importance and sensitivity analysis. The objective here is to use sensitivity analysis as a guide for analyzing available importance measures, and conversely to use machine learning tools for proposing new powerful methods in sensitivity analysis.

    ABOUT THE SPEAKER / Sébastien Da Veiga is a senior expert in statistics and optimization at Safran, an international high-technology group and supplier of systems and equipment in the aerospace and defense markets. He obtained his PhD in statistics from Toulouse University in 2007, and his habilitation thesis on interpretable machine learning in 2021. He is currently the head of a research team working on the use of artificial intelligence for design and simulation. His research interests include computer experiments modeling, sensitivity analysis, optimization, kernel methods, and random forests.
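    The bridge between variable importance and sensitivity analysis can be made concrete with a variance-based (Sobol) index. The sketch below is an illustration under our own assumptions, not material from the talk: it estimates first-order Sobol indices of the Ishigami function, a standard sensitivity-analysis benchmark, with a simple binning estimator.

    ```python
    # Hedged sketch: crude first-order Sobol indices via binning.
    # The test function and the estimator are illustrative choices.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    X = rng.uniform(-np.pi, np.pi, size=(n, 3))
    # Ishigami function: a classic benchmark with known sensitivity indices
    Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])

    def first_order_index(x, y, bins=50):
        # S_i = Var(E[Y | X_i]) / Var(Y), with E[Y | X_i] estimated per bin
        edges = np.quantile(x, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        weights = np.bincount(idx, minlength=bins) / len(y)
        return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

    for i in range(3):  # expected roughly 0.31, 0.44, 0.00
        print(f"S_{i + 1} ~ {first_order_index(X[:, i], Y):.2f}")
    ```

    Applying the same estimator to a trained model's predictions, instead of a known function, turns it into a model-agnostic importance measure, which is the direction the abstract hints at.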

    10:45 - 11:15 AM EDT
    Salle des promotions

    Talk by David Vigouroux

    David Vigouroux is an AI research engineer at IRT Saint Exupéry.

    11:15 - 11:45 AM EDT
    Salle des promotions

    Transfer Learning in Industry

    With Mathilde Mougeot, researcher and Professor of Data Science at ENS Paris-Saclay

    ABOUT THE TALK / In industry, it is now well established that high-value applications can be built on machine learning models calibrated with the large volumes of historical data available. When the type of production changes or sensors on a production line are replaced, however, the collected data shift, rendering previously calibrated machine learning models unusable. In this talk, we will discuss the value of different domain adaptation techniques for countering this degradation in model performance. The talk will be illustrated with several real industrial cases in which determining the appropriate transfer learning method can prove to be a challenge in itself.

    ABOUT THE SPEAKER / Mathilde Mougeot is Professor of Data Science at the École Nationale Supérieure d'Informatique pour l'Industrie et l'Entreprise (ENSIIE) and adjunct Professor at ENS Paris-Saclay, where she holds the Industrial Research Chair "Industrial Data Analytics & Machine Learning". Her research activity is motivated by questions arising from concrete applications in collaborative projects with the socio-economic world. Her research focuses mainly on scientific issues related to predictive models in various contexts, such as high dimensionality, model aggregation, domain adaptation, and data frugality through model transfer or hybrid models. Since September 2019, she has been Deputy Director of the Fondation Mathématique Jacques Hadamard (FMJH), in charge of industrial relationships. She has been involved as Delegate Director in the Graduate School of Mathematics of Paris-Saclay University since January 2020. From 2016 to 2019, she was scientific officer for technology transfer in the Mathematics Division of the Centre National de la Recherche Scientifique (CNRS). From 1999 to 2005, she contributed to the creation and development of the start-up Miriad Technologies, which specialized in mathematical solutions for industry based on machine learning, statistics, and signal processing techniques. She presently teaches machine learning and statistics at the master's level at ENSIIE and Paris-Saclay University.
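    As a toy illustration of the problem the abstract describes (hypothetical data and a simple offset-correction scheme, not one of the talk's industrial cases), the sketch below reuses a model calibrated on abundant source-domain data and adapts it with a handful of post-change target samples, instead of recalibrating from scratch:

    ```python
    # Minimal transfer sketch: adapt a source-trained model to a shifted domain.
    # The data, the shift, and the correction scheme are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    coef = rng.normal(size=50)

    def make_domain(n, shift):
        X = rng.normal(size=(n, 50))
        y = X @ coef + shift + 0.1 * rng.normal(size=n)
        return X, y

    X_src, y_src = make_domain(5000, shift=0.0)   # plentiful historical data
    X_tgt, y_tgt = make_domain(20, shift=2.0)     # few samples after the change
    X_test, y_test = make_domain(1000, shift=2.0)

    # Transfer: keep the source model, learn only a simple offset correction
    src = Ridge(alpha=1.0).fit(X_src, y_src)
    offset = (y_tgt - src.predict(X_tgt)).mean()
    print("adapted MSE:", mean_squared_error(y_test, src.predict(X_test) + offset))

    # Baseline: recalibrate from scratch on the small target sample
    scratch = Ridge(alpha=1.0).fit(X_tgt, y_tgt)
    print("scratch MSE:", mean_squared_error(y_test, scratch.predict(X_test)))
    ```

    The scratch model fails because 20 samples cannot pin down 50 coefficients, while the transferred model only has to learn the simple part of the change; real domain adaptation methods generalize this idea beyond a constant offset.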

    11:45 AM - 12:30 PM EDT
    Salle des promotions

    A Few Perspectives on Reliability in Reinforcement Learning

    With Emmanuel Rachelson, Professor of Machine Learning and Optimization at ISAE-SUPAERO

    ABOUT THE TALK / Human (dis)trust in artificial intelligence has multiple causes and can be linked to various subjective factors. Although objectively quantifying these within a single criterion does not seem appropriate, one can try exploring what makes good reliability arguments when learning control strategies for dynamical systems. In this talk, I will try to cover different notions of reliability in the output of reinforcement learning algorithms. Should we trust an agent because it finds good strategies on average (and what happens when it does not)? This top-performing AI plays this video game really well, but can I trust it to play new levels? Through recent work on transfer between learning tasks, mitigation of observational overfitting, and robustness to a span of environments, I will explore some of the formal criteria and properties that might lead to better reliability when learning control strategies for dynamical systems.

    ABOUT THE SPEAKER / Emmanuel Rachelson is Professor of Machine Learning and Optimization at ISAE-SUPAERO. He earned a PhD in artificial intelligence (2009) and the Habilitation degree (2020) from the University of Toulouse. He has been responsible for the Intelligent Decision Systems minor track (MS, 2012) and founded the Data and Decision Sciences major track (MS, 2015) in the ISAE-SUPAERO curriculum. He also co-founded the Artificial Intelligence and Business Transformation executive master program (2021) and co-organized the international Reinforcement Learning Virtual School (2021). His research is in the field of reinforcement learning and related topics. He created the ISAE-SUPAERO Reinforcement Learning Initiative (SuReLI, 2016), which fosters interaction between PhD students, postdocs, and permanent researchers on reinforcement learning topics and their interplay with other disciplines. He investigates the reliability of reinforcement learning methods from different points of view, such as statistical generalization, robustness to uncertainty, transfer, simulation to reality, etc. He is also interested in practical applications of reinforcement learning such as fluid flow control, parameter control in optimization problems, unmanned vehicles, air traffic management, software testing, and therapeutic planning.
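    One of the abstract's questions, trusting average performance versus behaviour in the tail, can be shown in a few lines. The sketch below is a toy illustration under our own assumptions (a fixed linear controller on a 1-D point mass, nothing from the talk): it scores the same policy across a span of perturbed environments and reports worst-case statistics alongside the mean.

    ```python
    # Toy reliability check: evaluate one fixed policy over perturbed dynamics.
    # The environment, policy, and perturbation range are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    def rollout(gain, mass):
        # 1-D point mass we want to hold at the origin with force u = -gain . [x, v]
        x, v, cost = 1.0, 0.0, 0.0
        for _ in range(200):
            u = -gain[0] * x - gain[1] * v
            v += 0.05 * u / mass
            x += 0.05 * v
            cost += x * x
        return -cost  # higher return = better regulation

    gain = np.array([4.0, 2.0])  # policy tuned for mass = 1.0
    returns = [rollout(gain, mass) for mass in rng.uniform(0.5, 2.0, size=100)]

    # The average hides the tail: report low quantiles and the worst case too
    print("mean    :", np.mean(returns))
    print("5% CVaR :", np.mean(np.sort(returns)[:5]))
    print("worst   :", np.min(returns))
    ```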

    12:30 - 1:30 PM EDT
    Salle des promotions

    Lunch

    1:30 - 2:00 PM EDT
    Salle des promotions

    Discussion: A Look at the DEEL Research Initiative in Québec

    A discussion of the DEEL-Québec project between Giuliano Antoniol, professor in the Department of Computer Science and Software Engineering at Polytechnique Montréal, and Mario Marchand, professor in the Department of Computer Science and Software Engineering at Université Laval.

    2:00 - 2:30 PM EDT
    Salle des promotions

    Interpretable Multiclass Text Classification Using Column Generation

    With Krunal Patel, PhD student in computer science at Polytechnique Montréal

    ABOUT THE TALK / In this presentation, we start by discussing a binary classification model for interpretable Boolean decision rule generation by Dash et al. (2018) that is solved using column generation. We then describe how we extended it to a multiclass text classification framework for a prediction problem in the aviation industry, where we needed to classify a set of text messages (NOTAMs) into a specific set of categories (Qcodes). Specifically, we present the techniques we used to tackle issues related to one-vs-rest classification, such as multiple outputs and class imbalance. We also discuss using a CP-SAT solver as a heuristic to speed up the training process. Finally, we conclude the presentation with a comparison of our results against those of some standard machine learning algorithms, and a discussion of future ideas we want to implement for this task.

    ABOUT THE SPEAKER / Krunal Patel is a PhD student at Polytechnique Montréal and CERC. He is working under the supervision of Prof. Andrea Lodi and Prof. Guy Desaulniers. He graduated from BITS Pilani Goa Campus in 2015 with a B.E. (Hons.) in Computer Science and an M.Sc. (Hons.) in Mathematics. After graduation, he worked at Google from 2015 to 2020, primarily with the operations research team. His research interests include discrete optimization, column generation, and machine learning.
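    To make the one-vs-rest construction concrete, here is a hedged sketch in which a shallow decision tree stands in for the column-generation rule model of Dash et al.; the dataset, the balancing choice, and the tie-breaking rule are our own illustrative assumptions.

    ```python
    # One-vs-rest sketch: lift any binary, interpretable learner to multiclass.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    classes = np.unique(y)

    # One binary "class k vs rest" model per class; class_weight="balanced"
    # is one simple answer to the imbalance that one-vs-rest creates.
    models = [
        DecisionTreeClassifier(max_depth=2, class_weight="balanced", random_state=0)
        .fit(X, (y == k).astype(int))
        for k in classes
    ]

    # Resolve multiple (or zero) firings: pick the most confident binary model
    scores = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    y_pred = classes[scores.argmax(axis=1)]
    print("training accuracy:", (y_pred == y).mean())
    ```

    Resolving overlapping or empty rule firings with a per-class confidence score is one simple answer to the "multiple outputs" issue the abstract mentions.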

    2:30 - 3:00 PM EDT
    Salle des promotions

    Domain-Aware Deep Learning Testing for Aircraft System Models

    With Houssem Ben Braiek, PhD student in computer science at Polytechnique Montréal

    ABOUT THE TALK / With deep learning (DL) modeling, aircraft performance system simulators can be derived statistically without the extensive system knowledge required by physics-based modeling. Yet DL's rapid and economical simulations face serious trustworthiness challenges. Because of the high cost of aircraft flight tests, it is difficult to preserve held-out test datasets that cover the full range of the flight envelope for estimating the prediction errors of the learned DL model. Given this risk of test data underrepresentation, even low-error models encounter credibility concerns when simulating and analyzing system behavior under all foreseeable operating conditions. Therefore, domain-aware DL testing methods become crucial to validate the properties and requirements of a DL-based system model beyond conventional performance testing. Crafting such testing methods requires an understanding of the simulated system's design and a creative approach to incorporating domain expertise without compromising the data-driven nature of DL and its acquired advantages.

    ABOUT THE SPEAKER / Houssem Ben Braiek is a PhD student in software engineering at Polytechnique Montréal, and his ongoing thesis is on software debugging and testing for machine learning (ML) applications. He is a student member of DEEL, and he has been working as a research intern for Bombardier, a DEEL partner, since January 2020. He received an M.Sc. in Software Engineering from Polytechnique Montréal in 2019, with the Best Thesis Award. He also received a Bachelor's in Software Engineering from the National Institute of Applied Science and Technology in 2017, with the Highest GPA Award over 5 years. His research interests include dependable and trustworthy ML systems engineering, as well as software quality assurance methods and tools for ML applications. He has published scientific papers at several international conferences, including MSR, ASE, and QRS, as well as in top international journals such as TOSEM, ASE, and JSS.
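    As a sketch of what testing "beyond conventional performance testing" can look like (the property, the surrogate model, and the envelope bounds below are hypothetical stand-ins, not Bombardier's models), one can assert a known physical relation across the whole sampled operating envelope:

    ```python
    # Domain-aware test sketch: check a physical property over the envelope,
    # not just held-out error. Everything here is an illustrative stand-in.
    import numpy as np

    def surrogate_drag(mach, altitude_m):
        # Hypothetical stand-in for a learned aircraft-performance model
        return 0.02 + 0.05 * mach**2 - 1e-6 * altitude_m

    def test_drag_increases_with_mach(n=10_000, seed=0):
        rng = np.random.default_rng(seed)
        mach = rng.uniform(0.3, 0.85, n)
        alt = rng.uniform(0.0, 12_000.0, n)
        eps = 0.01
        # Metamorphic relation: at fixed altitude, drag must not drop as Mach rises
        violations = surrogate_drag(mach + eps, alt) < surrogate_drag(mach, alt)
        assert violations.mean() == 0.0, f"{violations.mean():.1%} of envelope violates monotonicity"

    test_drag_increases_with_mach()
    print("monotonicity property holds on the sampled envelope")
    ```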

    3:00 - 3:30 PM EDT
    Salle des promotions

    Can We Derive Insight from Post-Hoc Explanations of Uncertain Models?

    With Gabriel Laberge, PhD student in applied mathematics at Polytechnique Montréal

    ABOUT THE TALK / Post-hoc explainability tools are becoming increasingly available and easy to use. Indeed, online repositories are plentiful, and often they only require adding a few lines of code to your ML pipeline to "explain" your model. However, this wide availability comes with a caveat: it is possible for uninformed users to misuse these tools and draw incorrect conclusions from them. For instance, one could wrongly assume that certain features are important in the real-life mechanism that generated the data because post-hoc explanations of the available model suggest so. By jumping to such a conclusion, one completely ignores our uncertainty about the model. In this presentation, we argue that explaining a single model is never enough if one is interested in deriving real-world insight from XAI methods. Taking inspiration from ensemble methods for uncertainty quantification, we propose to explain several models in parallel and provide users only with information on which all models agree (i.e., reach a consensus).

    ABOUT THE SPEAKER / Gabriel Laberge is a PhD student at Polytechnique Montréal working under the supervision of Prof. Foutse Khomh and Prof. Mario Marchand. He received an M.Sc. in Applied Mathematics from Polytechnique Montréal in 2020, which sparked his curiosity in statistics and machine learning. His research interests currently include post-hoc explainability, decision-making, uncertainty quantification, and causality.
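    The consensus idea lends itself to a very small sketch. The one below is our own illustration, not the talk's method: it explains five equally plausible models and keeps only the features every model places in its top five (dataset, ensemble, and importance measure are arbitrary choices).

    ```python
    # Consensus sketch: report only the features all plausible models agree on.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)

    top_sets = []
    for seed in range(5):  # an ensemble of equally plausible models
        m = RandomForestClassifier(random_state=seed).fit(X, y)
        top_sets.append(set(np.argsort(m.feature_importances_)[-5:]))

    consensus = set.intersection(*top_sets)
    print("features all models rank as important:", sorted(consensus))
    ```

    Features outside the consensus are not necessarily unimportant; the point is that the available models disagree about them, so no real-world claim should rest on them.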
