09:00 - 09:30 EDT
Atrium

Registration of presenters

Registration for presenters, who will set up their presentation materials.

09:30 - 09:45 EDT
Atrium

Registration of participants

Registration for participants, who will receive their name tag and an attendance gift.

09:45 - 10:45 EDT
A-1502

Opening Keynote: Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?

By Yoshua Bengio

The leading AI companies are increasingly focused on building generalist AI agents: systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. Despite how useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. We discuss how these risks arise from current AI training methods. Indeed, various scenarios and experiments have demonstrated the possibility of AI agents engaging in deception or pursuing goals that were not specified by human operators and that conflict with human interests, such as self-preservation. Following the precautionary principle, we see a strong need for safer, yet still useful, alternatives to the current agency-driven trajectory.

Accordingly, we propose as a core building block for further advances the development of a non-agentic AI system that is trustworthy and safe by design, which we call Scientist AI. This system is designed to explain the world from observations, as opposed to taking actions in it to imitate or please humans. It comprises a world model that generates theories to explain data and a question-answering inference machine. Both components operate with an explicit notion of uncertainty to mitigate the risks of overconfident predictions. In light of these considerations, a Scientist AI could be used to assist human researchers in accelerating scientific progress, including in AI safety. In particular, our system can be employed as a guardrail against AI agents that might be created despite the risks involved. Ultimately, focusing on non-agentic AI may enable the benefits of AI innovation while avoiding the risks associated with the current trajectory. We hope these arguments will motivate researchers, developers, and policymakers to favor this safer path.

    Plenary

10:45 - 11:00 EDT
Atrium

Coffee break

Networking with drinks and snacks. Participants head for the Atrium.

    Networking

11:00 - 12:30 EDT
Atrium

Posters & demos (Session A)

First poster and demonstration session by DIRO graduate students. Presentations run in parallel, and participants can move freely between them. Don't forget to vote for the best presentation!

    Presentations

12:30 - 14:00 EDT
Atrium

Lunch

A hot buffet will be served on site. Networking period.

    Networking

14:00 - 15:30 EDT
Atrium

Posters & demos (Session B)

Second poster and demonstration session by DIRO graduate students. Presentations run in parallel, and participants can move freely between them. Don't forget to vote for the best presentation!

    Presentations

15:30 - 15:45 EDT
Atrium

Break

Networking with drinks and snacks. Participants head for room A-1502.

    Networking

15:45 - 16:30 EDT
A-1502

Closing ceremony

Guest speeches and awards ceremony.

    Plenary

16:30 - 19:00 EDT
Atrium

Social event

Informal networking with a cocktail reception.

    Networking