E-AIPOWW’s Canada Tracker covers AI policy and regulatory work across Canada; business activity around AI innovation and development; and civil society responses, including those of unions, NGOs, and IGOs, where workers are affected.
E-AIPOWW’s Canada Jurisdiction Report outlines the history and progress of much of this legislation and surrounding activities, and the impacts that artificial intelligence is having on workers.
Suggested citation: Hung, K.S., Li, L.X., Katrak, M., Attard-Frost, B. (2025). ‘Canada’s High-Stakes AI Gamble: Innovation Policy, Techno-Nationalism, and the Political Economy of Jobs’, Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) Symposium, Global Political Economy, Vol. 4 Issue 2. (September 2025).
In late April 2025, Canadians elected a new government led by Prime Minister Mark Carney on a platform focused on economic and national security amidst Trump’s annexation threats and trade war. This included building a strong economy through large-scale national infrastructure projects and expanding the adoption and commercialisation of AI. In his May 2025 mandate letter to his new Cabinet, Carney (2025) underscored the scale and urgency of the moment: “…the transformative nature of AI will create opportunities for millions of Canadians to find new rewarding careers – provided they have timely access to the education and training they need to develop the necessary skills.” In the same spirit, Evan Solomon, Canada’s inaugural Minister of AI and Digital Innovation, stressed AI’s economic benefits, calling for “cranking up our commercialisation”, and emphasised that getting AI regulation right is crucial to Canada’s “economic destiny” (Karadeglija, 2025).
This emerging stance supports the speculative and aspirational techno-nationalist narrative that Canada’s future prosperity and security depend on unleashing AI to drive domestic economic growth and create good, high-paying jobs. Techno-nationalism, in this context, refers to policies that view domestic AI development as essential to a jurisdiction’s national security and economic growth, often involving measures to encourage or protect domestic AI capabilities while potentially limiting reliance on foreign-produced AI (Aaronson, 2024).
Under the previous Trudeau government (Lepage-Richer and Fenwick, 2022), the Artificial Intelligence and Data Act (AIDA) was tabled for the purpose of regulating AI systems across Canada. The failure of the AIDA to pass into law before the 2025 election has left Canada with a significant regulatory gap. Through this piece, which is part of the AI Policy Observatory for the World of Work (AIPOWW) Symposium for Global Political Economy (Hung et al., 2024), we argue that this moment of uncertainty presents a critical opportunity to reconfigure Canada’s approach to AI governance. Rather than treating regulatory gaps as setbacks, policymakers can leverage them to craft a more coherent, inclusive, and forward-looking framework that delivers shared prosperity and good jobs. We do this by mapping and analysing Canadian AI policies and investments. For details on policies and investments, see our AI Policy Tracker - Canada in Supplemental Data. Our goal here is to further the debate on Canada’s AI future and evolving economic priorities, rather than to provide an exhaustive overview.
We assess Canada’s AI choices through three dimensions: 1) development, 2) regulation, and 3) governance, all of which are needed to deliver meaningful and shared outcomes for most workers and Canadians. We argue that Canada’s national AI governance must move beyond the emerging dominant techno-nationalist narratives of economic security and nation-building. Without adequate protections or labour’s meaningful involvement in policymaking and in shaping AI regulations, the promises of AI will remain unfulfilled. Moving forward, AI legislation must strike a balance between fostering innovation and protecting labour, while recognizing that AI-driven growth is not inherently inclusive. The framework should also prioritize the voices of people, workers, and communities, and treat AI-driven growth as a multifaceted issue rather than a guaranteed path to shared prosperity.
Canada was the first country to launch a national AI strategy in 2017 and has since positioned itself as a global leader in AI. Since 2016, the federal government has invested over $4.4 billion in its AI ecosystem (Garmendia et al., 2025). Increasingly, AI is framed as critical infrastructure and a strategic national resource, vital to Canada’s economic security and sovereignty, especially amid rising techno-nationalism and Trump’s 2025 annexation threats and trade war, which are fragmenting AI ecosystems across borders (Hung, 2024; Mayer and Nock, 2025).
Canada's AI strategy focuses on building core AI foundations, like talent, data, research, and infrastructure, as key to its global leadership and identity as “a nation of innovators” (O’Reilly Runte, 2024, p. 1). This supports a techno-nationalist agenda and aligns with modern industrial policies in Canada and other liberal economies such as the United Kingdom and the United States; Germany, similarly, is reformulating a strategic techno-industrial policy agenda to preserve sovereignty as a form of economic statecraft (Staab et al., 2025). In Canada, AI is being treated as a strategic national resource and critical infrastructure tied to nation-building. Yet while AI is promoted as an “economic engine” promising “good jobs,” there is little evidence that it delivers secure, well-paid employment broadly across the economy.
In this section, we outline recent developments in Canada’s AI ecosystem by detailing key investments. The cornerstone was the launch of the second phase of the Pan-Canadian AI Strategy (PCAIS) in mid-2022, which aims to reinforce Canada’s position as a global leader in AI research and innovation. It committed $443 million over ten years to accelerate AI research, commercialization, and adoption nationwide. This includes funding for standards-setting, assessment programs, and $125 million earmarked for Canada’s Global Innovation Clusters to boost AI commercialization efforts across sectors. At the core of PCAIS is the objective of building a resilient and competitive Canadian AI talent pipeline. The strategy was designed to build out regional ecosystems, centring on three National AI Institutes: Amii (Edmonton), Mila (Montréal), and the Vector Institute (Toronto), which serve as hubs for research, training, and the translation of AI research into commercial applications. In its second phase, the PCAIS allocated $60 million to commercialization initiatives at these three institutes and an additional $160 million to CIFAR to continue fostering academic excellence through research, innovation, and talent development programs.
In 2024, the government committed an additional $2.4 billion to solidify Canada’s AI advantage and competitiveness. A key component of this new investment is the AI Compute Access Fund, which aims to meet the growing demand for high-performance computing among researchers and industry. Moreover, Canada is developing a national AI Sovereign Compute Strategy, delegated to the Digital Research Alliance of Canada (DRAC), to ensure Canadian control over, and partnerships around, critical AI infrastructure.
Spending on this scale will require transparent oversight and scrutiny of whether it actually yields good jobs and economic returns. Recognizing the labour market implications of AI, the federal government allocated $50 million, just 2 percent of the $2.4 billion envelope, for AI workforce retraining through the Sectoral Workforce Solutions Program, running parallel with subnational workforce programs. The Prime Minister of Canada (2024) claimed some immediate-term gains in growth and talent retention: most notably, the retention of over 100 top AI researchers (roughly 10% of global top-tier AI talent), the training of 1,500 graduate students, and $8.6 billion in venture capital attracted, supporting 1,500 AI firms and 140,000 professionals as of 2023. In other words, $4.4 billion in total public investments and strategy development since 2016 have translated into 140,000 AI professional jobs, representing Canada’s tangible AI jobs dividend. However, less is known about the quality of these jobs and their future prospects.
Despite these initiatives, Canada’s AI ecosystem continues to face structural challenges. A persistent issue is the talent drain to the United States, particularly to Silicon Valley, where salaries and resources far exceed those available in Canada. One of PCAIS’s implicit roles has been to serve as an anti-brain-drain mechanism, albeit with limited success given international competition. Notwithstanding its substantial talent investments and retention programs, Canada’s AI sector plays second fiddle to the US, lacking the financial firepower and job opportunities to retain top-tier AI talent.
In 2018, the Québec government announced the creation of AI industrial clusters, investing $100 million over five years. The aim was to develop Québec’s AI ecosystem and integrate AI into public administration to enhance the quality, efficiency, and accessibility of public services. Importantly, in 2024, the Conseil de l'innovation du Québec (CIQ) issued 12 priority recommendations to guide the responsible development and deployment of AI in line with Québec values, and encouraged ongoing public dialogue. Québec has also led the way in utilising its public pension funds and investments, Mila - Québec AI Institute, and OBVIA’s (Observatoire international sur les impacts sociétaux de l’IA et du numérique) research initiative to build out its regional AI capabilities and ecosystem. Together, these programs underscore Québec’s values-based approach to AI.
In Ontario, the focus has been on building institutional and regulatory trust in AI. The Trustworthy AI Framework offers provincial guidelines for the ethical, transparent, and accountable use of AI within government programs. This is bolstered by a $77 million investment in the Ontario Centre of Innovation’s Critical Technology Initiatives and the Vector Institute to enhance AI development in key sectors. These frameworks aim to bridge AI development with public trust and responsible governance.
Because Canada is a federation with constitutional divisions of legislative powers and competencies, AI policy is decentralized and fragmented across levels of government and jurisdictions (Attard-Frost, Brandusescu, & Lyons, 2024). This section maps the evolving Canadian AI regulatory landscape. Canada’s Artificial Intelligence and Data Act (AIDA) was tabled in Parliament in June 2022 as part of a broader legislative package known as Bill C-27. The purpose of AIDA was to regulate international and interprovincial trade and commerce in AI systems by establishing requirements for the development and use of “high-impact AI systems.” The AIDA was widely criticized for its narrow definitions of key terms such as “high-impact system”, its lack of broad consultation with the public and civil society organizations, and its insufficient protections for workers’ rights (The Dais & Centre for Media, Technology and Democracy, 2023). When Parliament was prorogued in January 2025, the Bill died on the Order Paper.
As one of the first AI ethics statements globally, the Montréal Declaration for the Responsible Development of AI, launched in 2017, provides guidance for developing AI in accordance with key principles such as diversity, risk mitigation, accountability, sustainability, social benefit, and democratic values. Canada’s Digital Charter, launched in 2019, outlines ten foundational principles, including universal access, safety, transparency, and robust enforcement. Although non-binding, these principles guide Canada’s commitments to responsible and safer AI. Additionally, a voluntary Code of Conduct for generative AI was introduced in 2023 to provide Canadian organizations with guidance on mitigating risks associated with the development, use, and management of generative AI systems. However, as voluntary guidance, the Code of Conduct lacks legal force or recourse mechanisms for those harmed by generative AI systems. At the time of writing, the Code of Conduct has attracted nearly 50 signatories, predominantly from the private sector.
Several federal advisory groups, including the Safe and Secure AI Advisory Group, provide guidance on AI safety and risk mitigation to the federal government, as well as to the Canadian Artificial Intelligence Safety Institute (CAISI). The federal government’s AI suppliers list helps government entities engage with vetted AI suppliers, ensuring responsible and secure procurement. Moreover, the Directive on Automated Decision-Making, in force since 2019 under the Treasury Board of Canada Secretariat, mandates Algorithmic Impact Assessments (AIAs) for many federal institutions, promoting transparency and accountability.
As part of the Canadian federal system, provincial regulations significantly shape Canada’s decentralised AI governance landscape. While most subnational jurisdictions lack standalone AI legislation, their foundational privacy protections indirectly influence the deployment of AI technologies in workplaces and commercial settings, shaping expectations around data fairness, privacy, transparency, and organizational accountability. Nevertheless, these provisions also raise concerns over increased surveillance, consent, and workplace and worker autonomy.
Ontario’s Bill 194, the Strengthening Cyber Security and Building Trust in the Public Sector Act (2024), which took effect in January 2025, broadly defines AI systems and mandates comprehensive public sector obligations, including disclosure, accountability frameworks, risk mitigation, and human oversight. It introduces privacy impact assessments, breach reporting, and whistleblower provisions, enhancing transparency and accountability. Further, the Working for Workers Four Act (2024), effective January 1, 2026, requires employers that publish “a publicly advertised job posting” to disclose any use of AI to “screen,” “select,” or “assess” job applicants. These requirements apply to Ontario employers with 25 or more employees on the day the “publicly advertised job posting” is posted. Once in force, the Act aims to improve transparency and fairness in automated recruitment practices.
Critically, however, the Working for Workers Four Act (2024) provides an overly broad definition of AI, limited coverage of “publicly advertised job posting”, and insufficient guidance regarding hiring practices. Under the Act, “AI” refers to “a machine-based system that, for explicit or implicit objectives, infers from the input it receives to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.” In practice, employers may struggle to balance the duty to disclose with the need to protect proprietary information. As for the definition of a publicly advertised job posting, it excludes postings restricted to existing employees of the employer, as well as general recruitment campaigns or “help wanted” signs that do not advertise a specific position (Broad, MacIsaac & Seupersad, 2024).
Additionally, an employer’s internal procedures and rules regarding screening, selection, and assessment are typically confidential and proprietary. Simply requiring employers to disclose the use of AI throughout the hiring process may therefore be impractical and may fail to achieve the intended results. For instance, an employer could simply state that it uses AI without explaining the details or its impact on job applicants, thereby rendering the right to a meaningful explanation inadequately protected.
Privacy Commissioners in Canada play a crucial role in advocating for robust privacy protections in response to AI-driven challenges. Traditionally, privacy regulations have been limited and largely non-punitive in nature. Most provinces and territories have equivalent privacy laws that apply to the collection, use, and disclosure of personal information, but outside Ontario and Québec, no AI-specific laws have been enacted, highlighting regulatory gaps and opportunities for harmonization. Privacy Commissioners consistently advocate for greater interoperability among privacy laws nationwide, aligning with broader goals of harmonized AI governance and enhanced protection of individual privacy rights.
Québec’s Law 25 (2021), for instance, aligns closely with the EU’s GDPR, enforcing robust data protection standards that include explicit consent and rectification rights, as well as transparency and explainability requirements for automated decision-making. The law explicitly requires employers to inform employees when their personal data is collected using technological means, including automated systems used to evaluate work performance. Additionally, it establishes protective measures for employees who report privacy violations to the Commissioner, thereby reinforcing workers’ autonomy and protection against potential abuses stemming from algorithmic surveillance and automated decision-making.
In British Columbia, the Personal Information Protection Act (PIPA, 2003) governs the collection, use, and disclosure of personal information by private organizations. Although not AI-specific, the Act applies to AI systems that process personal data. Section 13 permits the collection of employee personal information without consent under limited conditions, while Section 19 sets out when employees’ personal information may be disclosed and when the employee must be notified. These provisions bear on workplace surveillance and data autonomy, particularly as AI tools increasingly mediate employee monitoring and performance evaluation.
Similarly, Alberta’s PIPA (2003) regulates private sector data practices and includes specific protections for employees. Section 15(1) stipulates that the collection of personal employee information must be reasonable and accompanied by reasonable notification, while Section 58 protects employees who disclose information to the Commissioner. Alberta has also recently reviewed its PIPA to reflect contemporary privacy demands, such as explicit data access rights and enhanced protection for children’s data. Collaborative principles developed by federal, provincial, and territorial commissioners underline collective efforts to ensure AI technologies respect privacy and address vulnerabilities.
Finally, we examine the governance of AI-driven growth. Discourse surrounding technology policy in Canada has consistently been characterized by either a sense of urgency or market-based logics. During the deliberation of Bill C-27, the then Canadian Minister for Industry François-Philippe Champagne argued for expediting the legislative process to keep pace with the EU AI Act (Wylie and McDonald, 2023). As a result, the erstwhile Bill C-27, specifically the AIDA, was drafted without due consultation with stakeholders, including the Assembly of First Nations (2023). The Information Technology & Innovation Foundation (2024), in its submission to the Standing Committee on Industry and Technology of the Canadian House of Commons (INDU), argued that the expedited discussion of Bill C-27, driven by “fear-based rhetoric about dangerous AI”, could be extremely harmful in the long run, echoing the need for full public consultation.
Arguably, the rhetoric surrounding AI has now shifted to an economic, market-driven logic. Policy discourse surrounding AI in Canada is predominantly framed in terms of strategic national positioning aimed at leveraging AI for economic growth and job creation. As Evan Solomon, Canada’s AI Minister, contends, although regulatory frameworks are essential, they must be designed in a way that does not constrain innovation or result in productivity loss (McLauchlan, 2025). Furthermore, he notes that achieving economic growth through AI should be a priority, requiring the scaling of the AI industry and increasing AI adoption rates. This is to be achieved through support for Canadian AI companies, with the government using industrial policy to “champion our champions,” such as the Toronto-based AI firm Cohere. This narrative casts AI as a critical national resource and critical infrastructure, often masked by the rhetoric of broad-based prosperity and good jobs (Bareis and Katzenbach, 2021), while these technologies have been uncritically embraced in the Canadian media (Dandurand et al., 2023). Brandusescu and Sieber (2025) emphasize that such visions of securing future prosperity often overlook the significant risks associated with AI, including job displacement and other societal harms. These vanguard visions of job creation and economic growth promoted by the government tend to create path dependencies, leading to skewed self-regulation rather than balanced regulation.
Beyond the automation of jobs, McKelvey observes that regulation must consider the impact of AI on the quality of work itself (House of Commons, 2023). In pursuit of greater efficiency, managerial prerogatives have expanded considerably owing to the rise of algorithmic management, the use of AI software to automate organizational functions traditionally carried out by humans, thereby exacerbating worker precarity (Dupuis, 2024).
Potocka-Sionek and Aloisi (2024) contend that these management practices, initially confined to the platform economy, have now produced a “spillover effect”, spreading across the traditional labour market. As Kellogg et al. (2020) and Wood (2021) observe, the manifestations of algorithmic management are diverse, ranging from the automation of simple tasks to the full automation of all managerial functions, affecting core domains of work management such as direction, evaluation, and discipline. Much ink has been spilled on the impact of AI on workers, including concerns regarding privacy, particularly the surveillance of workers and the use of their personal data to algorithmically manage the workforce. Other issues pertain to intellectual property, where AI systems are trained on workers’ creations, and to the perpetuation of bias and discrimination, with especially pronounced implications for workers from diverse identities and backgrounds (Baiocco et al., 2022). These pervasive forms of control also result in work intensification and stress. In this regard, Capasso et al. (2024) argue that the use of algorithms extends beyond concerns related to rights to equality and privacy, impacting workers' rights to express themselves in ways that are integral to their identity and job quality.
The erstwhile AIDA largely overlooked how the adoption of AI inherently alters work processes, creating new forms of precarious labour and deepening existing inequalities. As several unions and civil society organizations pointed out, the Act failed to define “high-impact systems” or to establish “unacceptable risk” categories of AI systems (Canadian Union of Public Employees, 2024). A core tension within Canada’s regulatory approach lies in the assumption that the adoption of AI will lead to the creation of decent work, which remains highly contestable. Brandusescu and Sieber (2025) contend that achieving shared prosperity and fostering a sustainable AI-driven economy necessitates the inclusion of workers’ rights, which, in turn, depends on the participation of workers in both the design and deployment of more human-centred AI systems (Krzywdzinski et al., 2022; Pasquier et al., 2024). This was echoed by the Canadian Union of Public Employees (2024), which argued that legislation should include a clause mandating consultation with workers and unions before any AI system is operationalized in the workplace, coupled with whistleblower protection clauses and independent mechanisms to safeguard workers who report unethical AI usage.
While unions have begun responding to imminent technological transformations in the workplace by formulating strategies and negotiating the inclusion of “AI clauses” within collective agreements (Unifor, 2018), there remains a need to collaboratively develop comprehensive regulations that explicitly address the potential harms such technologies may pose to workers. As De Stefano and Doellgast (2023) argue, the knowledge and capacity that unions and organizations develop to effectively address the use of AI is critical, as they possess “real influence on decisions, through legal bargaining rights backed up by encompassing collective agreements, employment protections and data protection rules”. In this context, a report adopted in May 2024 by the Standing Committee on Human Resources, Skills Development, Social Development, and the Status of Persons with Disabilities (HUMA), concerning the implications of technologies for the Canadian labour force, set forth eight key recommendations (Morrissey, 2024).
Foremost among these was the call to develop, in collaboration with provinces, territories, and labour representatives, a comprehensive framework to facilitate the integration of AI technologies in workplaces. The report further emphasized the importance of investing in skills training initiatives to enhance the capacity and adaptability of the Canadian workforce amid the growing use of AI technologies. It also recommended undertaking a systematic review of AI’s impact on worker privacy and enacting appropriate regulatory safeguards to protect Canadian workers from potential AI-related risks and displacement. Other recommendations included ensuring that the federal advisory council on AI encompasses a wide variety of perspectives, including those of labour, academics, and civil society, and establishing a methodology to monitor the labour market impacts of AI technologies over time, including unemployment risks associated with AI adoption. As Attard-Frost (2023) observes, ever-expanding AI usage underscores the need for “co-regulation, coordination and collaboration on AI governance”, extending beyond “government and industry and into a great variety of civil society organizations”.
While organizations such as the Council of Canadian Innovators (CCI), which comprises over 120 executives of high-growth Canadian companies, have emphasized the need for Canada to regulate AI quickly, they have also urged caution and advocated for the adoption of “Responsible AI Leadership Principles”, which include increasing public trust in AI products and establishing unambiguous regulations. As the Canadian Labour Congress pointed out, AIDA’s regulatory approach was largely “light-touch,” designed not to constrain innovation and economic security, and it was deeply inadequate. Calls for new and improved legislation were therefore inevitable.
Crucially, national AI regulatory strategies should not be situated and hardened solely within grand techno-nationalist narratives of economic security and nation-building. Instead, a systematic analysis of the impact of AI on job quality, accompanied by regulatory measures that address this impact, is critical (Brandusescu & Sieber, 2025). To that end, the next iteration of AI legislation must strike a balance between innovation and regulation. While a regulatory framework should create space to ensure Canada’s global leadership in AI technology, it must also consider the impact of AI technologies, particularly on workers, their job quality, and their workplaces.
Further legislative efforts must also address the criticisms of previous consultations by incorporating more inclusive and transparent stakeholder engagement processes. In particular, the Carney government must ensure representation from civil society, academia, Indigenous communities, and SMEs, rather than predominantly large corporations. This will not only strengthen the legitimacy of the process but also ensure that diverse perspectives are considered and rebalanced towards principles of equity and shared prosperity. Furthermore, treating AI as a public good is imperative to promote equitable distribution and use while mitigating the associated risks and trade-offs (Hung, 2023). The government should also consider establishing an advisory committee dedicated to AI adoption, introducing robust transparency requirements for government-deployed AI systems, and leveraging government procurement as a strategy to support Canadian AI companies, foster domestic innovation, and validate domestically built technologies.
Lastly, Canada should strengthen international cooperation, particularly with Europe, to promote regulatory harmonization. This will help ensure global competitiveness while upholding Canada’s commitment to responsible, safe, and secure AI practices, along with high labour standards and greener and more sustainable job creation.
Ultimately, as Canada intensifies its commitment to AI as a nation-building strategy, it faces a clear challenge: striking a balance between economic security and responsible job creation. Governments cannot merely facilitate the building of AI as a national strategic resource and critical infrastructure; they must ensure that AI’s material outcomes, such as good jobs, are integral to the innovation model and nation-building endeavours. A more robust approach to AI governance must centre equity and the voices of workers and communities, and treat “AI-driven growth” as a contested space, not an automatic path to shared prosperity.
Canada Team Lead
PhD candidate, HEC Montréal