Essex Artificial Intelligence Policy Observatory for the World of Work

European Union


E-AIPOWW’s European Union Tracker shows AI policy and regulation work across the EU; business activities around innovation and development of AI; and civil society responses including unions, NGOs and IGOs, where workers are impacted.

E-AIPOWW’s European Union Jurisdiction Report outlines the history and progress of much of this legislation and surrounding activities, and the impacts that artificial intelligence is having on workers.

European Union tracker

The Tracker collates government proposals, papers, and other policy documents relating to legislation on artificial intelligence in the world of work.

View the European Union tracker

AI Regulation in the EU and the World of Work

Suggested citation: Moore, P.V., Petrucci, C., Muldoon, J. (2025). ‘AI Regulation in the EU and the World of Work’, Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) Symposium, Global Political Economy, Vol. 4 Issue 2. (September 2025). 

While the US President has called for a moratorium on artificial intelligence (AI) regulation, other countries are taking varied approaches: China is developing sophisticated systems to challenge Western Big Tech; Brazil is prioritising digital sovereignty and innovation; India is taking moderate steps while attracting inward investment; and Canada is advancing socially responsible AI governance. Amidst these efforts, the EU stands out as the first to design a comprehensive, binding AI regulation. To illuminate the likely implications for the world(s) of work, we use a bespoke methodology designed for the Artificial Intelligence Policy Observatory for the World of Work (AIPOWW). We look at ‘regulation’ in a legislative sense; ‘development’ as related to corporate focus and involvement; and social and political ‘governance’, to assess the weighting between business on the one hand and workers on the other. While the AI Act includes important safeguards against the most harmful applications of AI at work and seeks to mitigate risk, its provisions overwhelmingly prioritise business interests over those of workers. There is little evidence that AI brings long-term benefits to workers; on the contrary, its integration is more likely to have harmful effects.

AI is not inevitable, but the regulation and governance trajectories surrounding its development demonstrate that no opportunities for outright rejection or abolition are built into legislation. The emphasis governments are placing on AI development reflects a desire for some kind of salvation from a serious economic recession, in the context of geopolitical warfare and widespread moves to the far right. Though the EU still bases regulation on a repeatedly stated ‘fundamental rights’ framework, its advancement of AI is disproportionately shaped by the interests of businesses and employers. Regulation pushes for innovation, avoids discussion of real solutions, and certainly does not look for alternatives. AI and its seeming rise require not only regulatory responses but also political responses of resistance. This essay, which is part of the Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) Symposium for Global Political Economy, begins by examining the regulatory power of the AI Act, and identifies its weighting toward corporate, rather than social, interests. Deeper conversations are needed about resistance, worker-led governance, and what is at stake for the world of work overall as AI is developed.

Regulation: The Ordinary Legislative Procedure (OLP)

The AI Act differs from other jurisdictions’ AI regulation because it is not only about the way AI is invented, manufactured and produced, but contains a human behavioural component, which recognises its difference from other types of machines. Current and projected use and application, with intention as well as impact, are addressed through its emphasis on risk, which is ‘the probability of an occurrence of harm and the severity of that harm’ (Art. 3, 2). Classifying systems used in the workplace as ‘high-risk’ coheres with the EU’s commitment to zero-tolerance prevention of accidents in occupational safety and health (EU-OSHA 2025; EC 2021d). High-risk products can be released to the single market after being tested in simulated environments, i.e. sandboxes. The Act forbids emotion recognition in workplaces. As a frontrunner, it offers a potential model for procedural deliberation, and guidance for organisations in regulatory compliance.

European regulations are made according to the Ordinary Legislative Procedure (OLP) framework, under the Treaty of Lisbon. The ways Rapporteurs are selected during the amendment phases; the balance of interests lobbied across Parliamentary committees; the themes and definitions selected for the approved text; disruptions to the OLP by LLMs; and now, harmonisation and standards setting, all matter. The earliest phase occurred between 2018 and 2020, with a series of Guidelines, a Definition, and Assessment documents informed by consultations with 1,200 stakeholders from industry, academia, and civil society. In February 2020, the Commission's White Paper On Artificial Intelligence: A European Approach to Excellence and Trust synthesised these consultations. ‘To address the opportunities and challenges of AI, the EU must act as one and define its own way, based on European values, to promote the development and deployment of AI’ (EC 2020), its introduction reads. To ‘promote the development and deployment’ is a phrase that speaks for itself.

One year later, in April 2021, the European Commission (EC) submitted a Proposal for a Regulation of The European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (EC 2021a). The decision to regulate AI horizontally meant that the agreed policies would eventually require implementation in all EU Member States. After the Proposal was published, the European Parliament (EP) and the Council of the European Union entered negotiations with the EC that were planned to last two years. Struggles between corporate and social voices meant that the process took significantly longer.

The Parliament and Council Committees suggested various amendments to the Commission’s text and published a series of public reports about the debates in the leadup to the legislation. The publication of the Communication on Fostering a European Approach to Artificial Intelligence on 21/04/2021 facilitated a shift from the soft law approach to hard law, signalling intent and commitment to adopt a new regulatory framework on AI (EC 2021b). This Communication included both a Proposal for a Regulation laying down harmonised rules on artificial intelligence and the Coordinated Plan on Artificial Intelligence (EC 2021b). The OLP aspects relating to ‘competences’ and ‘amendments’ reflect the deliberations over company and worker interests, as follows.

Competences and amendments

There are over 20 standing committees in the EP, each focusing on a specific policy area, including the Committee on Legal Affairs (JURI), the Committee on Civil Liberties, Justice and Home Affairs (LIBE), the Committee on Culture and Education (CULT), and the Committee on Industry, Research and Energy (ITRE). Committees lay the groundwork for legislation via research and debate procedures. The selection of Rapporteurs is politically significant for whose interests and which aspects of law will be emphasised in the drafting of legislation. In May 2021, EP Committees went beyond their standard role in amendment suggestions and negotiation procedures for the AI Act and sought to influence the ‘competences’. Committees nominate their own Rapporteurs, who have the responsibility of suggesting amendments to legislative texts; and they nominate the overarching Rapporteur, who fields and synthesises all amendments and prepares drafts. In other words, ‘competences’ refers to the lines of responsibility and decision-making power of the committees and the individuals leading them. Rapporteurs have more responsibility and political influence overall than regular Committee members.

In June 2021, the Committee on the Internal Market and Consumer Protection (IMCO), via the standard EP internal mechanism and based on election results, appointed Brando Benifei of the Group of the Progressive Alliance of Socialists and Democrats in the European Parliament (S&D, Italy) as the AI Act Rapporteur. Benifei was not seen as a compromise candidate, and it was hoped his selection would lead to an inclusive division of competences across EP committees. Discussions on competences occurred from September to November 2021. Some Committees started to appoint their own ‘opinion rapporteurs’ rather than waiting, as is traditional, for the ‘competences’ to be settled. For example, CULT appointed an opinion rapporteur in July; ENVI in September; TRAN in November 2021; and JURI and ITRE in January 2022. Finally, the AI Act file was assigned to IMCO, as well as LIBE, to manage, which means that specific dimensions to do with markets and consumers (IMCO), and concepts surrounding justice (LIBE), were prioritised. Dragoș Tudorache, LIBE’s appointee, was seen as a political counterbalance to Benifei. The Co-Rapporteurs are responsible for fielding Committee amendments, deciding how to integrate them into the text, and producing the edited text for Committee votes.

From April 2021 to the end of 2023, the EP worked to adopt its negotiating position through a series of amendment phases and, in the final stages, the Trilogues. The Co-Rapporteurs fielded thousands of Committee amendments. Most interestingly for workers, the suggested amendments reflect the complexity of attempts to define AI; crystallise the need to delineate ‘high-risk’; show the paradoxes of generative AI; and provide more detail on how innovation is to be supported by the Act. In February 2023, an intensive ‘marathon’ discussion was held to iron out amendments. The text maintained a Fundamental Rights Impact Assessment, which would be required for high-risk areas (see next section). The text emphasises a ban on social scoring, extending this to private companies. New text was added requiring authorities to establish sandboxes to actively supervise developers of high-risk systems, as well as to facilitate and ensure compliance. Further added text stated that: ‘intended purpose and the reasonably foreseeable misuse… should be given particular consideration in assessing risk have been limited to vulnerable groups and children’ (Bertuzzi 2023b).

Risks

In February 2023, the EP Co-Rapporteurs Brando Benifei and Dragoș Tudorache began the work of identifying the list of AI uses likely to pose risks and the practices that should be prohibited. The GDPR already disallows solely automated decision-making with legal or similarly significant effects (Art. 22), which covers many decisions at work. The draft of 26/05/2023 considered algorithms assisting decisions related to ‘the initiation, establishment, implementation or termination of an employment relation, notably for allocating personalised tasks or monitoring compliance with workplace rules’ as high-risk (Bertuzzi 2023c). The official Recital (57) summarises these:

(57) AI systems used in employment, workers management and access to self-employment, in particular for the recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may have an appreciable impact on future career prospects, livelihoods of those persons and workers’ rights.

AI systems in employment, worker management, and access to self-employment, such as those used for recruitment, decisions on work terms, promotions, terminations, task allocation, and employee monitoring, are as such considered high-risk. This text makes it clear that the use of AI systems can significantly impact career prospects, livelihoods, and workers' rights. Article 26(7) requires the following:

Article 26(7). Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system.

This is important for worker protections, as new monitoring and tracking technologies are increasingly trialled at work. From Microsoft Copilot to Hubstaff, Veriato to WebWork trackers, the trend is continuing. In the next section, we consider the extent to which AI development forces this story forward. While the phases of amendments and the risk classifications are oriented around foreseeable misuse, safety, and testing, there is an explicit reliance on AI’s inevitability which limits the scope of response from those most affected by the emergence of AI.
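
To make the two provisions above concrete, here is a minimal illustrative sketch in Python (our own construction with hypothetical names, not anything prescribed by the Act): employment-related AI systems fall into the high-risk category, and under Article 26(7) deployers who are employers must inform workers’ representatives and affected workers before putting such a system into service.

    from dataclasses import dataclass

    # Employment-related uses that Recital (57) treats as high-risk.
    EMPLOYMENT_USES = {
        "recruitment", "promotion", "termination",
        "task_allocation", "worker_monitoring",
    }

    @dataclass
    class AISystem:
        name: str
        use: str  # e.g. "recruitment", "worker_monitoring"

    def is_high_risk(system: AISystem) -> bool:
        """Employment and worker-management uses are classified high-risk."""
        return system.use in EMPLOYMENT_USES

    def deploy_at_workplace(system: AISystem, workers_informed: bool) -> None:
        """Art. 26(7): workers' representatives and affected workers must be
        informed before a high-risk system is put into service at work."""
        if is_high_risk(system) and not workers_informed:
            raise RuntimeError(
                f"{system.name}: inform workers and their representatives "
                "before putting this high-risk system into service."
            )
        print(f"{system.name}: deployment may proceed.")

    deploy_at_workplace(AISystem("shift-scheduler", "task_allocation"),
                        workers_informed=True)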

Development: Defining and inferring for corporate ends

Specific moments throughout the OLP reflect the support for development and innovation of AI. This section looks at the ways AI has been defined. The Committee on Legal Affairs of the EP released a draft opinion on the EU AI Act on 2 March 2022 (EP 2021), prioritising support for AI development. Committee suggestions included ‘clear[er] rules supporting the development of AI systems’; the establishment of a ‘High-Level Expert Group on Artificial Intelligence’ to oversee the development of ethical guidelines; narrowing the scope of what constitutes an AI system; and expanding the regulatory reach of the AI Act beyond simply systems ‘placed on the market, nor put into service, nor used in the Union.’

On 3 February 2022, the EP’s Committee on Industry, Research and Energy (ITRE) published a draft opinion on the AI Act. The short justification contained within the opinion identifies the committee’s overarching concerns with the bill at the time, namely striking the right balance between ‘freedom and supervision’, promoting small and medium-sized enterprises’ (SMEs’) competitiveness, and issuing clear guidelines. The Rapporteur for this Committee, Eva Maydell, proposed the following adjustments:

  1. Enhancing measures to support innovation, such as the ones foreseen for regulatory sandboxes, with a particular focus on start-ups and SMEs.
  2. Providing a concise and internationally recognised definition of Artificial Intelligence System and setting high but realistic standards for accuracy, robustness, cybersecurity and data.
  3. Encouraging the uptake of AI systems by industry by placing an emphasis on social trust and value chain responsibility.
  4. Future-proofing the Act through better linkages to the green transition and possible changes in the industry, technology and power of AI.

These early interventions demonstrate a burgeoning focus on the uptake and development of AI with corporate weighting.

Definitions of AI systems

The definition of AI sets the scope of the law, determining which technologies fall within the Framework, where unpredictable adaptiveness and functionality create unique problems for liability. AI differs from other types of technology because it is based on machine learning, allowing it to learn from data and change performance over time without being explicitly programmed. ‘Foreseeable misuse’, in fact, may be impossible to predict, because technology develops incredibly quickly, and human decision-making is often obscured. Open source and free software advocates may see the requirement to define a set path for a system’s use and application as contradicting both the development of AI and the liberties and freedoms that technology should permit. Overall, liability is at stake.

A useful definition is available from IBM (2023), which notes that:

…machine learning is a subset of AI. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms.

AI differs from other technologies in its scope to learn, which makes it difficult to tie one definition to a single use, purpose, and outcome, and makes regulation exceedingly difficult. The first official definition suggestion came in April 2021, when the Commission proposed:

An ‘artificial intelligence system' means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

Are ‘predictions’ and ‘recommendations’ accurately defined as outputs, or are they part of a process without defined ends? There were questions about how much technical specificity should appear in the definition. Corporate-leaning voices advocated for listing very specific functions. The implications of listing specific functions, as against a more general definition, were tested quite quickly with the introduction of generative AI in late 2022. Very soon after ChatGPT was released onto the market, the Council of the European Union suggested that:

An ‘artificial intelligence system' (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.

The word ‘infers’ distinguishes this definition from others because of the autonomous competence it implies. LLMs differ from previous forms of narrow AI because they are not designed to accomplish just one task. As such, intended purpose could not be depicted as easily as with other types of machines and technologies. AI means that ex ante accountability should be written into regulation, and its definition faces a tension between the agency of users, who are expected to have, know and take responsibility for their ‘intention’ for their use, and the alleged agency or autonomy of machines via ‘machine learning’, which ideally places responsibility with the deployer. However, AI is, definitively, not always predictable, and may ‘exhibit adaptiveness after deployment’ (AI Act Art. 3 Definitions).

In June 2023, the European Parliament suggested the adoption of the OECD’s definition:

An 'artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.

Ultimately, this definition stuck:

‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

In November 2022, a separate definition was designated for ‘general purpose AI’ (GPAI): ‘[an] AI system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks’. These steps mark two regulatory firsts. In one stream, AI is treated as the overarching system and its functions as subsets, whereby a system ‘infers’ and the functions result from machinic inference. The other stream regulates GPAI, encapsulated finally in a specific Code of Practice (EC 2025). The launch of generative AI systems by Big Tech thus both delayed the OLP and resulted in the definitions being rewritten.
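
To illustrate how these layered definitions set the scope of the law, here is a minimal sketch in Python (a hypothetical encoding of our own, not an official schema) of the Article 3 elements, machine basis, varying autonomy, optional post-deployment adaptiveness, and inference from inputs to outputs, together with the separate GPAI test of broad training data, generality of output, and adaptability to many tasks.

    from dataclasses import dataclass

    @dataclass
    class System:
        machine_based: bool
        operates_with_autonomy: bool      # 'varying levels of autonomy'
        infers_outputs_from_inputs: bool  # the 'key characteristic' of inference
        adaptive_after_deployment: bool   # 'may exhibit': optional, not required
        # GPAI-specific elements from the 11/2022 definition:
        trained_on_broad_data: bool = False
        generality_of_output: bool = False
        adaptable_to_many_tasks: bool = False

    def is_ai_system(s: System) -> bool:
        """Art. 3(1): machine-based, some autonomy, infers outputs from input.
        Adaptiveness 'may' be exhibited, so it is not part of the test."""
        return (s.machine_based and s.operates_with_autonomy
                and s.infers_outputs_from_inputs)

    def is_gpai(s: System) -> bool:
        """GPAI: an AI system trained on broad data at scale, designed for
        generality of output, and adaptable to a wide range of tasks."""
        return (is_ai_system(s) and s.trained_on_broad_data
                and s.generality_of_output and s.adaptable_to_many_tasks)

    llm = System(True, True, True, True, True, True, True)
    print(is_ai_system(llm), is_gpai(llm))  # True True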

Large language models (LLMs)

The launch of a new generation of chatbots to the public, starting with OpenAI’s ChatGPT, disrupted the OLP. GPT-3 had been available to the public in beta form since June 2020, but the technology really took form with ChatGPT’s launch in November 2022. Because large language models (LLMs) serve a wide range of purposes rather than a single defined task, it becomes difficult to specify an ‘intended use’. Tools like ChatGPT can be used for everything from letter writing to fashion advice, which undermines the foundational premise of the AI Act: that AI systems can be meaningfully regulated based on predefined use cases and risk categories. Regulating generative AI was like ‘fitting a square peg into a round hole’ (Boine and Rolnick, 2023).

The EP Co-Rapporteurs worked quickly to produce a document addressing the ‘sensitive topic’ of GPAI, circulated on 14 March 2023 (Bertuzzi 2023a). Systems that are designed for a specific set of tasks and applications would not be considered GPAI, as their intended function is always known. LLMs, on the other hand, were ultimately defined as GPAI systems, and separate regulations were written into the Act specifically for GPAI providers (Hunton, 2024). This disruption to the OLP demonstrates the unpredictability of technology developments on the one hand, but the very predictable power of Big Tech on the other. It is notable that neither workers’ strikes, nor geopolitical tensions, nor economic recessions stalled the legislative process. OpenAI’s market penetration, however, did. Given the skewed weighting towards corporate power, worker protections and union activities in response to the AI Act are more important than ever. We conclude that the AI Act’s tacit reliance on the absolute inevitability of AI, and its promotion of AI’s development over all else, is concerning. While there are admissions of risks, there are no opportunities for refusal.

The AI Act and the World of Work

As indicated, AI systems used in employment environments are all to be placed in the high-risk category, meaning they must go through a period of testing in a sandbox environment. There is also emphasis on the responsibility of deployers of high-risk AI systems to tell workers where AI is used (AI Act, Recital 92).

The main provisions in the AI Act which are relevant for the world of work are contained in the Recitals and in the Statutory Rules [1]. Recital 9 reads that the AI Act should not affect rules whose goal is to improve digitalised working conditions, such as the Directive of the European Parliament and of the Council on improving working conditions in platform work. In addition, the AI Act should not affect EU law on social policy, or national labour law in compliance with EU law, regarding employment, working conditions, health and safety at work and the relationship between employers and workers.

Article 2(11) can be seen as the statutory rule implementing Recital 9, which reads as follows:

This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers, or from encouraging or allowing the application of collective agreements which are more favourable to workers.

Recital 9 and Article 2(11) mean that the AI Act can be invoked neither to justify a restrictive interpretation of the current EU legislation on workers’ rights, nor to prevent Member States from introducing or strengthening workers’ rights and/or higher standards of worker protection at national level. The AI Act also indicates that employers using a high-risk AI system at the workplace shall inform workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system (Article 26(7)). The Guidelines provide that the notion of ‘workplace’ should be interpreted broadly (para 254), to include physical and virtual space and any other place (indoor or outdoor). The Guidelines specify that ‘workers’ covers an array of contract types, including employees, contractors, trainees, and volunteers. Importantly, job candidates are also covered.

Corrigendum

The EP published its Corrigendum on 19 April 2024, a standard technical intervention designed to correct errors in how agreements across the trilogue bodies are represented in the final drafts. These changes, however, can be seen as ‘material amendments’ to the text, rather than just corrections (Bobek 2009). The Corrigendum, we argue, shifts the tone of the AI Act toward a business-focussed imperative by inserting a series of phrases and terminology favourable to AI ‘development’, although its refinement of the definitions of biometric data provides an avenue for worker defence and even resistance.

Italicised text throughout the Corrigendum reflects the EP’s changes, and in bold, we note where the new text shifts the balance toward support for innovation, emphasises the involvement/opinion of the European Central Bank, encourages public/private partnerships, and supports SMEs. These changes denote imperatives, leaving no room for dispute. Indeed, 16 modifications were made to the introductory recitals (1)–(172), emphasising a market focus, product development, an SME and microenterprise focus, and innovation for AI systems, as follows:

(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.

(2) This Regulation should be applied in accordance with the values of the Union enshrined as in the Charter, facilitating the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI

The third recital, sentence 3, states:

(3) A consistent and high level of protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and services within the internal market should be prevented… 

AI inference and emotion recognition forbidden

The ability to infer was once a human prerogative, but the next modifications in the Corrigendum relate to terminology concerning how ‘biometric data’ can be used for machinic inference. The new text emphasised that:

(12) … A key characteristic of AI systems is their capability to infer.

The Corrigendum (14) provided a new and more granular breakdown of how biometric data can ‘allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons’, where the differences between these categories (authentication, identification, categorisation, emotion recognition) relate both to ‘intended use’, which concerns human users, and to AI’s autonomous functions. The distinction between human physiological activity and emotional expression is also expressed in the new text of the Corrigendum. The possible uses of biometric data are listed in Art. 5, within the list of prohibited AI practices. Art. 5(1)(f) states that the:

…placing on the market a service with the purpose of inferring emotions of a natural person in the workplace and in educational environments, except where it is in place for medical or safety reasons’ is forbidden. 

An AI system identifying someone coughing in a video should not lead to an inference related to emotion by AI (because a cough is not an emotion). A cough used for identification purposes is permissible. However, an AI should not infer sadness, at least not of someone who is at work or in an educational environment.

Emotion inference via biometrics is banned in workplaces and schools unless for strictly medical or safety reasons. Beyond those environments, it is classified as high-risk, meaning it faces strict compliance rules of risk assessments, documentation, and transparency. Emotions and inference, however, are subjective acts. Biometric data and its usage are likely to become the most complicated dimensions of AI regulation. Ascribing subjective agency to machines, when agency and autonomy for human workers are already weak, is worrying. The clear concern is that agency ascription and data dispossession can, in the worst cases, lead to subjectivity modelling (Zehner 2023).
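
The decision logic just described can be summarised in a minimal sketch in Python (our own illustrative encoding with hypothetical names, not official guidance): emotion inference from biometrics is prohibited in workplaces and educational settings unless it serves medical or safety purposes; it is high-risk elsewhere; and identification that infers no emotion, as in the cough example, is treated separately.

    def biometric_use_status(purpose: str, setting: str,
                             medical_or_safety: bool = False) -> str:
        """Classify a biometric AI use, following Art. 5(1)(f) and the
        Corrigendum's categories (identification vs. emotion recognition)."""
        if purpose == "emotion_inference":
            if setting in {"workplace", "education"} and not medical_or_safety:
                return "prohibited"  # the Art. 5(1)(f) ban
            return "high-risk"       # strict compliance obligations apply
        if purpose == "identification":
            return "permissible"     # e.g. a cough used only to identify
        return "outside this sketch"

    print(biometric_use_status("emotion_inference", "workplace"))  # prohibited
    print(biometric_use_status("identification", "workplace"))     # permissible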

Trade Union Involvement

Unions were not significantly involved in the AI Act process, but worker representative groups across the EU and around the world generally pushed for the AI Act to include stronger protections for workers, greater transparency, and more involvement of workers in the governance of AI systems. Their calls for legislation tended to focus on ways to ensure that AI is used in ways that benefit workers, rather than exploiting, posing risk to, or replacing them.

The European Trade Union Confederation (ETUC), which represents 45 million workers across Europe, was active in expressing the need for the AI Act to include robust worker protections. This included calls for transparency and accountability in AI decision-making processes, especially in hiring, monitoring, and evaluating workers. The ETUC advocated for workers to be informed and consulted about AI systems before they are introduced, and for human oversight in AI decision-making to be prioritised, particularly where decisions have significant impacts on workers' lives and careers. The ETUC emphasised the need for worker representatives to be involved in the development, implementation, and monitoring of AI systems; and repeatedly urged that risk assessments be undertaken before new technologies are introduced.

UNI Global Union represents workers in the service sector. It responded to the AI Act initially with a focus on protecting workers from risks and explicitly advocated for algorithmic fairness, insisting that AI systems should not perpetuate or exacerbate existing biases in recruitment, performance evaluation, and disciplinary measures. UNI showed concern about the invasive use of AI for employee surveillance and called for strict regulations to protect workers’ privacy. Further to this, UNI Global raised alarms about AI’s potential to displace jobs and advocated for the AI Act to include provisions for job retraining and reskilling where AI replaces jobs or automates tasks.

The European Federation of Public Service Unions (EPSU), representing workers in the public sector, expressed concern about the use of AI in public services and its impact on employment and service quality. EPSU was wary about the increasing automation of public sector jobs and called for the AI Act to ensure AI does not lead to a decline in the quality of public services. The EPSU advocated for the ethical use of AI in public services, ensuring that AI systems are used to support, not replace, human workers. With regard to training and development, EPSU called for the AI Act to include provisions for ongoing training and professional development for public sector workers to adapt to AI technologies.

The European Transport Workers' Federation (ETF), representing transport workers, was critical of the AI Act for not adequately addressing the specific challenges faced by workers in the transport sector, and was particularly concerned about the impact of AI on job security, including the potential for AI to replace jobs in logistics. The ETF emphasised the need for AI systems in transport to be rigorously tested for safety, ensuring that they do not put workers or the public at risk, and advocated for mandatory consultation before AI systems are implemented in the sector.

The European Federation of Food, Agriculture and Tourism Trade Unions (EFFAT) represents workers in food, agriculture, and tourism, and also responded to the AI Act, identifying the use of AI for worker surveillance in industries like agriculture and hospitality, where monitoring could lead to exploitative practices. EFFAT expressed concerns about AI replacing low-wage jobs across the sectors they represent and stressed the importance of involving workers and their unions in discussions about AI deployment. 

Finally, ver.di, in Germany, was actively involved in the discourse on AI regulation, emphasising workers' rights. ver.di called for care in algorithmic decision-making processes in workplaces and for the right of workers to understand and contest decisions made by AI. The union advocated for collective bargaining agreements to include provisions on the use of AI in the workplace. It stressed the need for robust data protection measures to prevent misuse of workers' data by AI systems. Overall, unions wanted better communication between workers and management, more transparency, testing for safety, and ample training for workers. Why, however, has there been little to no discussion of opting out of AI, resisting AI altogether, and the possibilities of abolition?

In conclusion

The AI Act is probably the strongest piece of legislation globally so far with relevance for workers’ rights. However, corporate protections ultimately outweigh those of workers. Innovation is prioritised; opt-out clauses are non-existent. It was the technological development of LLMs, and technological determinism, not worker protests nor any other social or economic issues, that triggered disruptions to the legislative process. In March 2025, an open letter was published by 46 large company CIOs calling for a two-year pause to the AI Act roll-out (EU AI Champions Initiative 2025). The collection, processing and storage of large data sets, as well as the brutal work of data workers required for AI to exist, are happening now. Pausing the Act will not protect workers, but will rather exacerbate the risks they face. Given unresolved issues, even as the Platform Work Directive comes into force, and now the AI Act and soon the Machinery Regulation, the ‘frenzied race to regulate AI’ is open to criticism and could even allow regulatory arbitrage (Aloisi and De Stefano, 2023).

Unlike EU Directives, which require Member States to implement them by introducing national legislation, a Regulation is legally binding once it enters into force. It does not require Member States to formally pass legislation for the Regulation to be enforceable at national level. A Regulation may require Member States to put in place measures so that the Regulation is effective, but legally speaking, a Regulation applies to public bodies and private parties from the moment it enters into force. The Act is entering into application iteratively following its publication in the Official Journal, as follows:

  • February 2025: Chapter I (General Provisions) and Chapter II (Prohibited AI Practices) apply;
  • August 2025: Chapter III Section 4 (Notifying Authorities), Chapter V (General-Purpose AI Models), Chapter VII (Governance), Chapter XII (Penalties) and Article 78 (Confidentiality) apply, except for Article 101 (fines for general-purpose AI providers);
  • August 2026: the whole AI Act applies, except for Article 6(1) and corresponding obligations (one of the categories of high-risk AI systems);
  • August 2027: Article 6(1) and corresponding obligations apply. (Jarovsky 2024)

Autonomy and agency are already difficult for workers to achieve. The attribution of autonomy and inferential capacity to AI systems, once considered uniquely human traits, is deeply concerning. While the AI Act offers some potential to mitigate the worst uses of AI in the workplace, it makes no room for the possibility of abolition. While the AI Act requires communication with workers when high-risk AI is used, is ‘transparency’ sufficient, and can workers ever truly consent to being managed and monitored (Sturdy et al. 1992; Bohns and Schlund 2020; Tourish and Willmott 2023; Moore 2024a)? Being told that your privacy is reduced does not lessen the feeling of surveillance, nor does it prevent potential violations of rights. Accountability is obscured, and liability is difficult to pinpoint. Rather than relying on performed or passive consent, workers should be given the right to refuse (Moore 2024b). Alternatives should be sought (Guglielmo 2025).

The rise of corporate power in regulation demonstrates a weakening of the capitalist hegemony within which AI is apparently inevitably developing, and a corresponding rise of authoritarianism (Bloom 2023). Coercion dominates the way AI is being advanced, over any potential for consent or opting out at work where technological integration is concerned. Even within the allegedly most liberal social democratic decision-making process around regulation, the scope for resistance or alternatives to dominant models is shrinking rapidly. The scale and impact of AI development, and the corresponding weakness in regulatory and governance competences, demand a political response, not just a regulatory one.

Footnotes

[1] Recitals are not binding but explain the rationale of the statutory rules. Statutory Rules in the form of Articles, however, are legally binding.

Bibliography