CAISI Research Program at CIFAR

2025 Year in Review: Building Safe AI for Canadians

Introduction

In November 2024, the Government of Canada launched the Canadian AI Safety Institute (CAISI), recognizing that public trust is the primary driver of successful innovation. While CAISI functions as a federal initiative under Innovation, Science and Economic Development Canada (ISED), the CAISI Research Program at CIFAR serves as its independent scientific engine. We are charged with mobilizing the nation’s AI safety experts across disciplines to address the complex technical and social challenges of advanced AI systems.

The Year in Review: Building Safe AI for Canadians summarizes our progress in 2025. By building Canadian research capacity and fostering a critical mass of skilled talent, CIFAR is positioning Canada as a global leader in developing safe and trustworthy AI systems.

Message From the Co-Directors

Nicolas Papernot

Canada CIFAR AI Chair, Vector Institute, University of Toronto

Catherine Régis

Canada CIFAR AI Chair, Mila, Université de Montréal

After a landmark first year, CAISI Research Program at CIFAR Co-Directors Catherine Régis and Nicolas Papernot share key successes in AI safety research and discuss how the program will continue to build capacity for safe AI aligned with Canadian values.

CAISI Research Program at CIFAR By the Numbers

In its first year, the CAISI Research Program at CIFAR delivered significant impact in AI safety research, training and knowledge mobilization by:

Building a National AI Safety Research Community

Funding a network of 55+ experts across disciplines, including:

Principal Investigators and 6 CAISI Research Council members
CIFAR AI Safety Postdoctoral Fellows
AI Safety Scientists & Engineers at Amii, Mila and the Vector Institute

Driving High-Impact Research

$2.4M invested to launch 12 new research projects, including:

Catalyst projects
Solution Networks

Delivering Results

Active national and international research partnerships
AI safety expert-policymaker roundtables
Knowledge products/research outputs expected (2026)

Priorities — Solving Today’s AI Safety Challenges

Safeguarding Society

Protecting our collective future from the large-scale risks of advanced AI.

Building Trust & Fairness

Actively embedding human values and equity into AI systems.

Securing Critical Systems

Developing rigorous tools to evaluate the safety, accuracy, and reliability of frontier AI.
