CAISI Research Program at CIFAR

About

Message From the Co-Directors

The year 2025 marked a global turning point for AI safety. With urgent concerns voiced by leading experts, including Canada CIFAR AI Chair Yoshua Bengio and CIFAR Distinguished Fellow Geoffrey Hinton, and the release of the International AI Safety Report, the world recognized the imperative to balance rapid innovation with rigorous risk mitigation.

Catherine Régis

Co-Director, CAISI Research Program at CIFAR

Canada CIFAR AI Chair, Mila, Université de Montréal

Nicolas Papernot

Co-Director, CAISI Research Program at CIFAR

Canada CIFAR AI Chair, Vector Institute, University of Toronto

Canada was uniquely prepared to meet this challenge. Having been appointed to develop and implement the world’s first national AI strategy in 2017, CIFAR has been instrumental in building the deep talent pool and research excellence that defines our nation’s success today. It is upon this proven foundation that the CAISI Research Program at CIFAR was established. Through this initiative, we are building Canadian research capacity and fostering a critical mass of skilled talent in AI safety, positioning Canada as a global leader in developing safe and trustworthy AI systems. We have leveraged this legacy to rapidly mobilize Canada's scientific community, moving from concept to concrete impact in just one year.

As Co-Directors from the distinct fields of law and computer science, we view AI safety as a sociotechnical challenge. Our program bridges the gap between technical and social considerations, ensuring AI systems are both robust and socially responsible. Enabling multidisciplinary collaborations among the research community, government, and other partners is key to addressing the true complexity of AI safety challenges.

In our first year, we funded 12 projects and supported 28 researchers across disciplines. Our program is advancing innovative research that develops new techniques and practices to ensure AI systems are safe and reliable, and that their deployment reflects our societal values. In short, we are ensuring that AI systems can be trusted. We are already delivering critical results:

Defending Democracy

Developing systems to combat foreign influence and disinformation.

Protecting Youth

Creating guardrails to detect and block content that encourages self-harm.

Securing Justice

Launching a Solution Network to safeguard Canadian courts from synthetic AI evidence.

Looking ahead, we will launch the “Lighthouse Portal” to provide small- and medium-sized enterprises with vetted playbooks for safe AI deployment, and we will release judicially vetted interfaces for verifying digital evidence in courts, among many other tools.

None of this would be possible without the dedication and commitment of Canada’s AI safety research community; the Government of Canada and ISED, whose ongoing investment supports AI safety research and talent; and our many supporters, partners and collaborators, including the National Research Council of Canada, the national AI institutes (Amii, Mila and the Vector Institute), the International Development Research Centre (IDRC), the UK AI Security Institute and many others.

Together, we are building safer AI for Canadians.

(L-R) Elissa Strome (CIFAR), Joel Martin (National Research Council Canada), Stephen Toope (CIFAR), Yoshua Bengio (Mila), François-Philippe Champagne (Government of Canada), Valérie Pisano (Mila), Tony Gaffney (Vector Institute) and Cam Linke (Amii) at the announcement of the Canadian AI Safety Institute in November 2024
