AI Roadmap 22
Responsible AI
Leadership in
Canada
September 2023
Contents
Introduction
AI Leadership Principles
Acknowledgements
The Challenge
for Canada
The broad AI sector is currently valued at around $200 billion and by 2030 will likely expand to around $2 trillion.¹
Canada is well-positioned in the industry in terms of highly-
qualified personnel and leading research, but Canadian
companies face significant barriers while scaling.²
A lack of scaling Canadian companies means that many of the benefits created from public investment in research and training, including intellectual property, are accruing to firms outside of Canada – for example, nearly 75% of intellectual property rights (IPRs) generated through the federal government’s AI Strategy are owned by foreign entities, including American tech giants such as Uber, Meta and Google.

The significant challenge in Canada will be constructing a policy and regulatory framework that encourages the rapid growth of domestic companies into global leaders. The Council of Canadian Innovators believes that scaling innovative companies need access to talent, capital and customers, as well as strong marketplace frameworks that enable success, including law and regulation.

Canada should create and implement a Responsible AI Leadership roadmap, based on four principles: cultivating citizen and consumer trust, regulatory clarity and agility, and an export focus to international rule- and standards-setting.
¹ https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market
² Scale AI, “AI At Scale: How Canada Can Build an AI-Powered Economy,” 2023, 3.
Global Regulatory
Landscape for AI
The Canadian federal government tabled the Artificial Intelligence
and Data Act (AIDA) as part of the broader Bill C-27, the Digital
Charter Implementation Act. If passed, AIDA will regulate the design,
development, and use of “high-impact” AI systems in the private
sector. Other countries have already taken steps to regulate AI. As Parliament and the government consider AIDA, they should weigh how other jurisdictions have approached regulating AI, especially to avoid pitfalls that add complexity and erode trust as international norms continue to evolve.
European Union

The European Union’s AI Act passed a significant legislative milestone in June 2023, setting it on a path to coming into force in 2026. The EU Single Market is significant, and regulations and standards that apply in the European market often become global leaders.

The AI Act centres around four tiers for AI systems based on their risk to human safety, livelihoods and rights: unacceptable, high, and low or minimal (which are not regulated at all beyond existing privacy and consumer protection rules).

The first category includes uses like ‘social scoring’ or constant facial recognition tracking in public, and such uses are simply banned.

The law focuses on ‘high-risk’ systems, including systems used in products covered by EU product safety laws, as well as a broad family of other use cases where automated decision-making is directly and significantly consequential to individuals, with the potential to unfairly discriminate or cause harm – including job applications, admission to educational institutions, and biometric identification.
High-risk systems require risk management measures to identify, evaluate and mitigate negative impacts, and maintain public technical documentation and decision logs to show compliance. The law mandates human oversight and adequate cybersecurity, and providers of high-risk AI services must notify national governments that they are making them available. The EU will also maintain a union-wide database of high-risk systems.

AI systems that present limited risk will not be as tightly regulated. Instead, they will require simple notification and transparency to users.

The Act creates a limited carveout to promote innovation through the creation of regulatory sandboxes. The Act also creates a European AI Board, made up of national authorities and the EU Data Protection Supervisor, to advise the Commission on AI issues and promote best practices. Despite the fairly far-reaching requirements in the legislative text, the EU law, in an important step towards making compliance simpler, allows for the development and recognition of standards to govern regulated activities rather than prescribing methods through regulation.

United Kingdom

In 2021, the UK government published its National AI Strategy, which sets out a plan for the responsible adoption of AI technologies. The high-level strategy focuses on ethics, transparency, and accountability. The UK also established an AI Council in 2019 to advise the government on AI policy and regulation, as well as the Centre for Data Ethics and Innovation, an independent advisory body that provides guidance on the responsible use of data-driven technologies.

In March 2023, the UK government published a white paper staking out what it called "A pro-innovation approach to AI regulation." The white paper sets out a "flexible" approach to regulating AI that is intended to both build public trust and make it easier for businesses to grow.

Rather than creating a new, dedicated regulatory body or single legal instrument for AI, the UK Government is encouraging regulators and departments to tailor strategies for individual sectors, with the goal of maintaining support for innovation and adaptability. The white paper outlines five principles that regulators must consider to facilitate the safe and innovative use of AI: safety, transparency, fairness, accountability, and redress. The government has indicated that the door to legislation remains open should it be needed, and also recognizes the complementary role played by standards.

United States

US AI law is mostly being made at the state level – to date, there is no comprehensive federal law in place. California and Illinois have passed laws focused on data privacy and the use of AI.

Federal action is starting to take shape, however. In October 2022, the White House’s Office of Science and Technology Policy released a blueprint for an AI Bill of Rights to define principles for the development and deployment of AI in the US. This document will guide future federal AI-related policy in the US and could help to address some of the key challenges associated with AI development and deployment.
The approach that Canada is taking, embodied in the Artificial Intelligence and Data Act, most closely resembles that of the European Union by creating, in legislation, categories of AI technologies to be regulated. Canada is also participating in several international AI governance fora, including the G7’s Hiroshima Process.
Without the market size and international leadership weight of (especially) the US or the
European Union, Canada should take care to ensure that its eventual governance model does
not stray too far from the emerging global norm – an outlier policy mix in Canada would
drastically harm the efforts of Canadian-headquartered companies to scale globally and to
contribute to Canadian economic and productivity growth and innovation.
In creating a legal framework, CCI believes that Canada can succeed by adopting a strategy
of responsible AI leadership that leverages high trust, clear rules, fast action and global
leadership to pave the way for commercial success at home and abroad for Canadian
companies.
Responsible AI
Leadership
Principles
Canada has an opportunity to define itself
as a leader in responsible AI development
and deployment and export a flexible
approach that allows for innovation while
protecting citizen and user rights.
Clarity and Certainty
Develop at Speed
Gear for Export
Acknowledgements

This report was created in consultation and collaboration with CEOs and commercialization experts from Canada's AI ecosystem. We thank them for participating in roundtable discussions and interviews that have provided the necessary details to develop a credible roadmap for responsible AI in Canada.

Tara Dattani, Director of Legal, Ada
Nicole Janssen, Co-Founder & Co-CEO, AltaML
Humera Malik, CEO, Canvass Analytics
Dr. Alexandra Greenhill, Founder & CEO, Careteam Technologies
Ronak Shah, Privacy and AI Product Counsel, Cohere
Laure Lalot, Director, Legal Compliance, Coveo
Nabeil Alazzam, CEO, Forma.ai
Amir Sayegh, AVP, Data Product Discovery, Geotab
Rebecca Wellum, Vice President, Compliance, Geotab
Julia Culpeper, Senior Program Manager, Innovation Asset Collective
Mike McLean, CEO, Innovation Asset Collective
Peggy Chooi, Strategic IP Specialist, Innovation Asset Collective
Geoff MacGillivray, CTO, Magnet Forensics
Ehsan Mirdamadi, Partner and CEO, NuBinary
Sina Sadeghian, Co-Founder & CTO, NuBinary
Adolfo Klassen, CEO, Paladin AI
Ian Paterson, CEO, Plurilock
Yvan Couture, President & CEO, Primal
Sam Loesche, Head of Policy and Public Affairs, Waabi
Mathieu Letendre, Legal Advisor, Workleap
About the Council of
Canadian Innovators