A Roadmap for Responsible AI Leadership in Canada
There’s been enormous buzz around AI lately, and Canadian companies are very much part of the excitement. From cutting-edge healthcare solutions to game-changing cleantech firms, Canada has been a world leader in artificial intelligence for a long time, and we continue to be at the forefront. But here’s the thing: with great AI power comes the need for smart, responsible regulation.
By creating a level playing field with clear rules, the government’s Artificial Intelligence and Data Act (AIDA) can push firms to develop AI that’s safe, reliable, and built with Canadian needs in mind.
Trust is a crucial element in getting this right. As AI finds its way into healthcare, finance, cybersecurity and nearly every other corner of our economy, Canadians need to trust that these technologies are safe. That’s why CCI has been advocating for smart improvements to AIDA that are aimed at fostering trust.
After consulting with our members — Canadian-headquartered scale-up technology companies — and ecosystem partners, CCI has developed A Roadmap for Responsible AI Leadership in Canada. Our recommendations lay out a path that balances the need for safety and clear regulation with the flexibility, innovation and economic potential that Canada needs right now.
Our key recommendations are:
- Build an institutional home for public interest technology expertise by creating an independent, public-facing Parliamentary Technology and Science Officer to advise Parliament and Canadians about technological issues in the public interest as part of AIDA, as a complement to the planned Artificial Intelligence and Data Commissioner housed within the executive.
- Build trust for Canadians by including a preamble that enumerates AIDA’s protections for citizens and users with regard to AI systems, such as protection from biased outputs or harms.
- Ensure that AI regulations, where regulations are the right approach, are sensitive to a range of uses and potential impacts, and potentially incorporate a tiered structure with corresponding rules and responsibilities for specific applications of AI.
- Ensure that the rules and standards innovators must comply with are clear and easy to understand.
- Allow for regulatory sandboxes or pilots for novel use cases.
- Accelerate the schedule for regulatory development and implementation if AIDA passes (currently set at 12 months after Royal Assent), and aim for a ‘minimum viable product’ approach that allows for flexibility and iteration.
- Allow for and prioritize the development of standards for AI governance wherever such an approach would represent an improvement in speed over regulation while adequately protecting citizen and user rights.
- Prioritize the creation or adoption of governance measures around higher-impact AI use cases or technologies first and provide for an enforcement ‘on-ramp’ that is sensitive to industry learning curves.
- Ensure that Canadian companies have simple means to get regulatory recognition in other markets by continuing to shape international policy direction and standards setting through forums and frameworks like the Global Partnership on Artificial Intelligence, the OECD AI Principles, and the G7 Hiroshima Process.
- Ensure that our regulatory framework is clear and useful ‘out of the box’ to inspire other countries to use it.
Download the full policy brief to read our recommendations in full.
This report was created in consultation and collaboration with CEOs and commercialization experts from Canada’s AI ecosystem. We thank them for participating in roundtable discussions and interviews that have provided the necessary details to develop a credible roadmap for responsible AI in Canada.
You can read a few of our recent CCI Mooseworks blog posts about AI regulation here:
- Hot AI Summer Pt. 1: The US and the UK
- Hot AI Summer Pt. 2: The EU and China
- Hot AI Summer Pt. 3: Canada’s Proposal
If you’d like to discuss the Roadmap in more depth, contact Laurent Carbonneau, CCI’s Director of Policy and Research, and Nick Schiavo, CCI’s Director of Federal Affairs.