{"id":232,"date":"2025-12-13T22:16:27","date_gmt":"2025-12-13T22:16:27","guid":{"rendered":"https:\/\/caisi-2025.local\/?page_id=232"},"modified":"2026-01-13T00:18:11","modified_gmt":"2026-01-13T00:18:11","slug":"proteger-la-societe","status":"publish","type":"page","link":"https:\/\/report.buildingsafeai.ca\/fr\/proteger-la-societe\/","title":{"rendered":"Prot\u00e9ger la Soci\u00e9t\u00e9"},"content":{"rendered":"\n\n\t<p>CAISI Research Program at CIFAR<\/p>\n<h1><\/h1>\n\t<h2>Introduction<\/h2>\n<p>Protecting our collective future from the large-scale risks of advanced AI. This means confronting and mitigating systemic harms, like mass disinformation and economic disruption, and building the tools and policies needed to ensure AI remains a force for public&nbsp;good.<\/p>\n\t<h2>Intelligent Ideas with Geoffrey Rockwell<\/h2>\n<p>Canada CIFAR AI Chair at Amii, Geoffrey Rockwell uses his ethics expertise to bring a philosophical perspective to AI safety research, discussing the role of government in mitigating harm and applying existing safety knowledge and infrastructure to AI deployment.<\/p>\n\t\t<iframe loading=\"lazy\" width=\"560\" height=\"315\" src=\"https:\/\/www.youtube.com\/embed\/qHif8IRY1_Y?si=GJoFqWEp64DFj1Ce\" title=\"YouTube video player\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\t\n\t<p>Spotlight<\/p>\n<h2>Safeguarding Mental Health from AI Companions<\/h2>\n<p>As more Canadians turn to AI chatbots for companionship and self-validation, there is growing proof that misuse and overuse of AI companion chatbots cause mental health harm, ranging from dependency to full AI psychosis and suicide assistance. 
<a href=\"https:\/\/www.canadianaffairs.news\/2025\/11\/17\/ai-companions-fill-emotional-gaps-and-create-legal-ones\/\" target=\"_blank\" rel=\"noopener\">As many as 70%<\/a> of young people now regularly turn to AI companions, necessitating the need for independent safeguards, including technological guardrails, policies, and education.<\/p>\n<p>To mitigate the risks of harmful chatbot interactions, the CAISI Research Program at CIFAR provided funding to support Mila&rsquo;s AI Safety Studio to undertake this work. This initiative focuses on creating independent, trustworthy AI guardrails and developing exhaustive benchmarks that reflect Canadian cultural and societal diversity to objectively measure the&nbsp;harm.<\/p>\n<p>To date, the Studio has developed its first iteration of a mental health guardrail and benchmark for AI chatbots. They are now working to extend their reach across multiple large language model (LLM) vendors, languages and cultural specificities, using anonymized real-world data and input from mental health&nbsp;experts.<\/p>\n<h3>Cross-Disciplinary Collaboration and Future&nbsp;Focus<\/h3>\n<p>\u00ab\u00a0The most exciting aspect of this work is the unanimous, cross-disciplinary support of a web of partners. The socio-technical collaboration across disciplines &#8211; bridging AI expertise, mental health, policy, education specialists and impacted communities grassroots up, ensures that we&rsquo;ll create a robust, multidisciplinary protection against companion AI mental health harm,\u00a0\u00bb said Simona Grandrabur, Mila&rsquo;s AI Safety Studio&nbsp;Lead.<\/p>\n<p>Over the next year, the Studio plans to develop intelligent filters to block AI-generated content that assists or encourages self-harm or suicide, as well as reliability testing protocols to evaluate the safety and robustness of conversational and generative AI models. 
Additionally, the Studio will develop psychological and ethical risk assessment tools.<\/p>\n<p>The first official version of the AI Safety Studio benchmark dashboard and guardrails will be released to the public in&nbsp;2026.<\/p>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/simona-grandrabur.jpg\" alt=\"simona-grandrabur\" itemprop=\"image\" height=\"1000\" width=\"640\" title=\"simona-grandrabur\" onerror=\"this.style.display='none'\" loading=\"lazy\" \/>\n\t<p>\u00ab\u00a0Mila&rsquo;s mental health guardrail and benchmarks will establish an independent and trustworthy means to measure the extent of harmful interactions with AI companions to safeguard our most vulnerable populations, including our children, against suicide assistance.\u00a0\u00bb<\/p>\nSimona Grandrabur<br \/>\nAI Safety Lead, Mila\n\t\t\t<a href=\"https:\/\/mila.quebec\/en\/news\/mila-launches-ai-safety-studio-with-first-initiative-to-help-prevent-suicide\" target=\"_blank\" rel=\"noopener\">\n\t\t\t\t\t\t\tRead the full story\n\t\t\t\t\t<\/a>\n\t<p>Spotlight<\/p>\n<h2>Securing Canada Against Disinformation<\/h2>\n<p>Malicious foreign influence and AI-driven disinformation seek to erode trust in our institutions, media and civil society, posing a direct threat to Canadian democracy. In response, a 2025 CIFAR AI Safety Catalyst project is developing an advanced AI tool to protect Canadians against disinformation campaigns.<\/p>\n<p>Defending against the malicious use of AI is the focus of this research, which is led by Canada CIFAR AI Chair Matthew E. Taylor (Amii, University of Alberta), Brian McQuinn (University of Regina), and CIFAR AI Safety Postdoctoral Fellow James&nbsp;Benoit.<\/p>\n<h3>An AI Defense System<\/h3>\n<p>The team is developing CIPHER, an advanced human-in-the-loop AI system. 
The tool&rsquo;s core purpose is to equip civil society organizations to identify and combat sophisticated, coordinated disinformation campaigns. The initial focus of this work is to detect Russian operations across both textual and visual media, providing a vital shield for Canadian society.<\/p>\n<p>\u00ab\u00a0The CIPHER project treats safe and reliable information as a matter of national security. Identifying state-backed disinformation news campaigns can help us all remain rooted in Canadian facts and values. Our goal is to ensure outside influencers don&rsquo;t poison our debates and security decisions,\u00a0\u00bb the team told&nbsp;CIFAR.<\/p>\n<p>This CIFAR AI Safety Catalyst project will deliver tangible impacts by producing:<\/p>\n<ul>\n<li>A rigorously evaluated proof-of-concept of the CIPHER tool, tested in the real world by Canadian and global civil society partners.<\/li>\n<li>Actionable policy briefs to guide government and industry response.<\/li>\n<li>A new public dataset to accelerate further research and development in this critical area, ensuring the project&rsquo;s impact extends far beyond its initial&nbsp;scope.<\/li>\n<\/ul>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/matthew-taylor.jpg\" alt=\"matthew-taylor\" itemprop=\"image\" height=\"1000\" width=\"640\" title=\"matthew-taylor\" onerror=\"this.style.display='none'\" loading=\"lazy\" \/>\n\t<p>\u00ab\u00a0AI is becoming increasingly common. Rather than outsourcing important decisions to AI, our design makes sure humans are always in the loop. The CIPHER project aims to earn the trust of decision-makers and users to collaboratively defend democratic spaces from disinformation and misinformation.\u00a0\u00bb<\/p>\n<p>Matthew E. 
Taylor<br \/>Canada CIFAR AI Chair, Amii<\/p>\n\t<h2>Funded Projects<\/h2>\n\t<p>Solution Network: Safeguarding Courts from Synthetic AI Content<\/p>\n<ul>\n<li>Ebrahim Bagheri (University of Toronto) and Maura Grossman (University of Waterloo)<\/li>\n<\/ul>\n\t<p>AI Safety Catalyst Project: On the Safe Use of Diffusion-based Foundation Models<\/p>\n<ul>\n<li>Mijung Park (Canada CIFAR AI Chair, Amii, University of British Columbia)<\/li>\n<\/ul>\n\t<p>AI Safety Catalyst Project: CIPHER: Countering Influence Through Pattern Highlighting and Evolving Responses<\/p>\n<ul>\n<li>Matthew E. Taylor (Canada CIFAR AI Chair, Amii, University of Alberta)<\/li>\n<li>Brian McQuinn (University of Regina), and James Benoit (CIFAR AI Safety Postdoctoral Fellow, Amii)<\/li>\n<\/ul>\n\t\t\t<a href=\"#\" target=\"_blank\" rel=\"noopener\">\n\t\t\t\t\t\t\tSee details of all projects\n\t\t\t\t\t<\/a>\n\t\t\t\t<img decoding=\"async\" src=\"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/20250521_140004.jpg\" alt=\"20250521_140004\" itemprop=\"image\" height=\"1067\" width=\"1600\" title=\"20250521_140004\" onerror=\"this.style.display='none'\" loading=\"lazy\" \/>\n\t<h2>Building Safe AI: Why Canada Matters<\/h2>\n<p>(L-R) Kianna Adams, Golnoosh Farnadi, Mohamed Abdalla and Elissa Strome discuss how to build AI systems that are aligned with the values and safety of society and how Canada can lead in this endeavor. <\/p>\n\n","protected":false},"excerpt":{"rendered":"<p>CAISI Research Program at CIFAR Introduction Protecting our collective future from the large-scale risks of advanced AI. 
This means confronting [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_acf_changed":false,"site-sidebar-layout":"no-sidebar","site-content-layout":"","ast-site-content-layout":"full-width-container","site-content-style":"default","site-sidebar-style":"default","ast-global-header-display":"","ast-banner-title-visibility":"disabled","ast-main-header-display":"","ast-hfb-above-header-display":"","ast-hfb-below-header-display":"","ast-hfb-mobile-header-display":"","site-post-title":"","ast-breadcrumbs-content":"disabled","ast-featured-img":"","footer-sml-layout":"","ast-disable-related-posts":"","theme-transparent-header-meta":"","adv-header-id-meta":"","stick-header-meta":"","header-above-stick-meta":"","header-main-stick-meta":"","header-below-stick-meta":"","astra-migrate-meta-layouts":"default","ast-page-background-enabled":"default","ast-page-background-meta":{"desktop":{"background-color":"var(--ast-global-color-5)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"","background-image":"","background-repeat":"repeat","background-position":"center 
center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"ast-content-background-meta":{"desktop":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"tablet":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""},"mobile":{"background-color":"var(--ast-global-color-4)","background-image":"","background-repeat":"repeat","background-position":"center center","background-size":"auto","background-attachment":"scroll","background-type":"","background-media":"","overlay-type":"","overlay-color":"","overlay-opacity":"","overlay-gradient":""}},"footnotes":""},"class_list":["post-232","page","type-page","status-publish","hentry"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Prot\u00e9ger la Soci\u00e9t\u00e9 - CAISI Year in Review<\/title>\n<meta name=\"description\" content=\"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta 
property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Prot\u00e9ger la Soci\u00e9t\u00e9\" \/>\n<meta property=\"og:description\" content=\"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/\" \/>\n<meta property=\"og:site_name\" content=\"CAISI Year in Review\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-13T00:18:11+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2026\/01\/article-1920x1080-eng-1024x576.jpg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:title\" content=\"Prot\u00e9ger la Soci\u00e9t\u00e9\" \/>\n<meta name=\"twitter:description\" content=\"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.\" \/>\n<meta name=\"twitter:image\" content=\"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2026\/01\/article-1920x1080-eng-1024x576.jpg\" \/>\n<meta name=\"twitter:label1\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data1\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/\",\"url\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/\",\"name\":\"Prot\u00e9ger la Soci\u00e9t\u00e9 - CAISI Year in 
Review\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/simona-grandrabur.jpg\",\"datePublished\":\"2025-12-13T22:16:27+00:00\",\"dateModified\":\"2026-01-13T00:18:11+00:00\",\"description\":\"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/#primaryimage\",\"url\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/simona-grandrabur.jpg\",\"contentUrl\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/simona-grandrabur.jpg\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/safeguarding-society\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/fr\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Prot\u00e9ger la Soci\u00e9t\u00e9\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/#website\",\"url\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/\",\"name\":\"CAISI Year in 
Review\",\"description\":\"\",\"publisher\":{\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/#organization\",\"name\":\"CIFAR\",\"url\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/CIFAR-logo-red-RGB.svg\",\"contentUrl\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/wp-content\\\/uploads\\\/2025\\\/12\\\/CIFAR-logo-red-RGB.svg\",\"width\":450,\"height\":144,\"caption\":\"CIFAR\"},\"image\":{\"@id\":\"https:\\\/\\\/report.buildingsafeai.ca\\\/#\\\/schema\\\/logo\\\/image\\\/\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Prot\u00e9ger la Soci\u00e9t\u00e9 - CAISI Year in Review","description":"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/","og_locale":"fr_FR","og_type":"article","og_title":"Prot\u00e9ger la Soci\u00e9t\u00e9","og_description":"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.","og_url":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/","og_site_name":"CAISI Year in Review","article_modified_time":"2026-01-13T00:18:11+00:00","og_image":[{"url":"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2026\/01\/article-1920x1080-eng-1024x576.jpg","type":"","width":"","height":""}],"twitter_card":"summary_large_image","twitter_title":"Prot\u00e9ger la Soci\u00e9t\u00e9","twitter_description":"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.","twitter_image":"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2026\/01\/article-1920x1080-eng-1024x576.jpg","twitter_misc":{"Dur\u00e9e de lecture estim\u00e9e":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/","url":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/","name":"Prot\u00e9ger la Soci\u00e9t\u00e9 - CAISI Year in 
Review","isPartOf":{"@id":"https:\/\/report.buildingsafeai.ca\/#website"},"primaryImageOfPage":{"@id":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/#primaryimage"},"image":{"@id":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/#primaryimage"},"thumbnailUrl":"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/simona-grandrabur.jpg","datePublished":"2025-12-13T22:16:27+00:00","dateModified":"2026-01-13T00:18:11+00:00","description":"Voyez comment nous combattons les risques syst\u00e9miques en cr\u00e9ant des outils et politiques pour que l\u2019IA demeure une force au service du bien commun.","breadcrumb":{"@id":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/report.buildingsafeai.ca\/safeguarding-society\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/#primaryimage","url":"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/simona-grandrabur.jpg","contentUrl":"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/simona-grandrabur.jpg"},{"@type":"BreadcrumbList","@id":"https:\/\/report.buildingsafeai.ca\/safeguarding-society\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/report.buildingsafeai.ca\/fr\/"},{"@type":"ListItem","position":2,"name":"Prot\u00e9ger la Soci\u00e9t\u00e9"}]},{"@type":"WebSite","@id":"https:\/\/report.buildingsafeai.ca\/#website","url":"https:\/\/report.buildingsafeai.ca\/","name":"CAISI Year in 
Review","description":"","publisher":{"@id":"https:\/\/report.buildingsafeai.ca\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/report.buildingsafeai.ca\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/report.buildingsafeai.ca\/#organization","name":"CIFAR","url":"https:\/\/report.buildingsafeai.ca\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/report.buildingsafeai.ca\/#\/schema\/logo\/image\/","url":"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/CIFAR-logo-red-RGB.svg","contentUrl":"https:\/\/report.buildingsafeai.ca\/wp-content\/uploads\/2025\/12\/CIFAR-logo-red-RGB.svg","width":450,"height":144,"caption":"CIFAR"},"image":{"@id":"https:\/\/report.buildingsafeai.ca\/#\/schema\/logo\/image\/"}}]}},"_links":{"self":[{"href":"https:\/\/report.buildingsafeai.ca\/fr\/wp-json\/wp\/v2\/pages\/232","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/report.buildingsafeai.ca\/fr\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/report.buildingsafeai.ca\/fr\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/report.buildingsafeai.ca\/fr\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/report.buildingsafeai.ca\/fr\/wp-json\/wp\/v2\/comments?post=232"}],"version-history":[{"count":0,"href":"https:\/\/report.buildingsafeai.ca\/fr\/wp-json\/wp\/v2\/pages\/232\/revisions"}],"wp:attachment":[{"href":"https:\/\/report.buildingsafeai.ca\/fr\/wp-json\/wp\/v2\/media?parent=232"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}