E-AIPOWW’s China Tracker shows AI policy and regulation work across China; business activities around innovation and development of AI; and civil society responses including unions, NGOs and IGOs, where workers are impacted.

E-AIPOWW’s China Jurisdiction Report outlines the history and progress of much of this legislation and surrounding activities, and the impacts that artificial intelligence is having on workers.

China tracker

The Tracker collates government proposals, papers, and other policy documents relating to legislation around artificial intelligence in the world of work.


AI Regulation, Development, and Governance: the case of China

Suggested citation: Donoghue, R. (2025). ‘AI Regulation, Development, and Governance: the case of China’, Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) Symposium, Global Political Economy, Vol. 4 Issue 2. (September 2025).

Introduction

China has positioned itself as a global leader in AI, driven by a combination of vast data resources, strategic government planning, and a technological ecosystem of rapid innovation. At the core of China’s success in the tech sector is its embrace of government-led industrial planning, a unique approach in the era of a hyper-competitive international political economy. The enactment of numerous strategic government initiatives has been critical to China’s standing as a world leader in AI research and development. Whilst government leadership has played a decisive role in shaping the national AI landscape, China’s tech sector has benefited from extensive collaboration between public and private institutions. China is home to some of the world’s largest technology giants, such as Alibaba, Tencent, Huawei, Baidu, and ByteDance – rivaled only by U.S. competitors in Silicon Valley. Over the last decade, global investors have shown an exploding interest in China’s tech sector (including AI startups), although recent geopolitical tensions and a weakening venture capital market have dampened such enthusiasm in the most recent fiscal years. Not only is China making incredible gains internally, but its influence is also growing rapidly around the world through international agreements and collaborations like the Belt and Road Initiative. The export of its AI technologies and systems will continue to solidify the country as a dominant force in shaping the future of AI.

Whilst China certainly views AI development as critical for its future success, the government has shown a willingness to take seriously the social threats associated with its expansion. Government leaders have overseen a regulatory process defined by long-term planning and the release of both sweeping and targeted legislation, regulations, and industrial strategies. To date, most of the laws and policies regarding data and AI governance have taken an omnibus approach, meaning that the workplace isn’t targeted as a specific site of regulatory concern (Donoghue et al., 2024). One could say, then, that the Chinese government is working to construct regulatory infrastructure that can aggressively intervene to minimize the social risks caused by AI technologies (AITs) as they emerge. However, interpreting the exact consequences of these efforts is complicated, partly because they are not always clear and partly because there are so many different policies, provisions, and laws that have been enacted. Moreover, there is still considerable ambiguity surrounding the substantive implications of these initiatives and laws for workers. Their application may vary across provincial and local jurisdictions, and the legal system will play a crucial role in clarifying how they are to be implemented in the workplace.

The regulatory approach of the government may be expansive, but public dissatisfaction with AI technologies suggests there is much more to be done. China has seen several large-scale protest movements against the rollout of various AITs, especially facial recognition and algorithmic management systems. Often these public displays of frustration unfold through social media campaigns on popular messaging apps like WeChat, QQ, and Momo. China is also experiencing a noted uptick in labour unrest over working conditions in the platform economy, with private hire drivers and couriers staging protests across the country. A key point of tension is how algorithms are manipulating working conditions to the disadvantage of workers and the lack of transparency over how these systems operate. Data subjects of all kinds are engaging the courts, media, and trade unions to challenge and contest the harms of AI systems that regulators are failing to address. There is growing evidence that these alternative modes of governance are having an impact on how businesses are using AI tools.

This article further explores these complex and interwoven dynamics, offering a brief tour of the major milestones within China related to the development, regulation, and governance of AI. It concludes by examining the potential implications of these developments for the world of work, with particular attention paid to labour platforms and algorithmic management systems. In doing so, it makes evident how evolving governance frameworks meaningfully shape the lived experiences of workers embedded in increasingly AI-dominated ecosystems. This publication is part of the Artificial Intelligence Policy Observatory for the World of Work (AIPOWW) Symposium, comprised of similar case studies from countries around the world.

Regulation

China’s response to the onset of AITs stands out in several ways, but arguably mostly for its regulatory and legislative speed. In the supposed race to set global regulatory standards, China has made an ambitious bid with a flurry of policy enactments over the last decade. China commenced a new era of regulatory activity around AITs beginning with the New Generation AI Development Plan (AIDP) in 2017 (Webster et al., 2017) [1]. Its purpose, as set out by the State Council, was to develop a strategy that would see China become the world leader in AI by 2030. The Plan identifies AI as key to addressing current and emerging challenges in areas like economic growth, national security, and technological innovation. As a solution, it proposes cultivating internal research and development capacities by building out greater AI infrastructure.

Two years later in 2019, the New Generation AI Governance Expert Committee (established by the Ministry of Science and Technology, MOST) published Governance Principles for a New Generation of Artificial Intelligence (Laskai and Webster, 2019). This document introduces eight core principles for the development and regulation of ‘responsible AI’: harmony and friendliness, fairness and justice, inclusivity and sharing, respect for privacy, secure/safe and controllable, shared responsibility, open collaboration, and agile governance (Library of Congress, 2019). This served as a foundation for the Ethical Norms for New Generation Artificial Intelligence (ENGAI) published by the same committee in 2021 (Center for Security and Emerging Technology, 2019). ENGAI implements the 2019 governance principles in a ‘detailed manner’ by providing ‘ethical guidance to natural persons, legal persons, and other related institutions engaged in AI-related activities’ (Center for Security and Emerging Technology, 2019). ENGAI comprises five sections totalling 25 Articles, each of which provides ethical guidance on a variety of issues from research and development to AI management.

A signature policy to have come out of the Cyberspace Administration of China (CAC) is the Internet Information Service Algorithmic Recommendation Management Provisions (IISARMP), jointly issued with the State Internet Information Office, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration for Market Regulation. A draft proposal was first released on 27 August 2021, followed by a public comment period until 26 September of that year. A finalized version was published on 4 January 2022, and the law went fully into effect on 1 March 2022. The IISARMP consists of several dozen rules or provisions that regulate the development and use of internet algorithmic recommendation services, with wide-reaching implications. The law codifies technical and policy requirements, ethical requirements, and prohibited behaviour for algorithmic providers and operators.

As we have noted elsewhere, a central theme of the IISARMP reads as an attempt to improve individuals’ capacity to interact with algorithms or platforms on their own terms (Donoghue et al., 2024). The law emphasises the ‘protection of user rights’, referencing key features like user notification, norms disclosure, procedures for obtaining consent, provisions for opting out of monitoring and surveillance, and control over personal data. We argue, then, that “These protections constitute a considerable set of tools for individuals to circumscribe and resist the power of algorithms and contest their outputs” and “This may have major implications for workers as labour is increasingly pushed into digitalized space” (Donoghue et al., 2024). Another core theme of the IISARMP is a commitment to aligning algorithmic recommendation services to the promotion of the ‘common good’. This objective underscores the Chinese government’s intention to implement a social model of governance that directly oversees the absorption of algorithmic and AI technologies, and is reflected in the IISARMP’s commitments to increased social supervision, heightened transparency, and the instalment of clearly defined enforcement mechanisms and bodies.

The IISARMP works in tandem with a bevy of other policies related to governance of digital platforms, algorithms, and artificial intelligence. The most notable examples are the Personal Information Protection Law (PIPL), the Data Security Law, and the Cybersecurity Law. A final version of PIPL was released on 20 August 2021 and became effective law several weeks later on 1 November. PIPL is “a special legislation on personal information protection… [that] contains the basic principles, requirements and related systems for the protection of personal information” (Xiao and Shen, no date). It is sometimes described as a Chinese version of the EU’s GDPR “given the deep parallels and similarities between them”; however, there are meaningful differences (Donoghue et al., 2024). This legislation is expected to have a significant impact because it “solves the [current] problem of inadequate and scattered personal information protection legislation” (Xiao and Shen, no date).

A central objective of PIPL is to clarify the rights of users over their personal information when interacting with internet-based services. The law introduces new protections against profiling and extends new rights for users to customize how their data is used by data processors. Examples include the ability to turn off targeting based on individual characteristics, request exclusion from automated decision-making, and the ability to provide user feedback. The bill also advances key changes to the control, ownership, and use of personal data. Through PIPL, individuals gain the rights to, “inquire about what personal data is being collected and stored by the data processor… to request a copy of their personal data, correct any inaccurate personal information, and delete their personal information when withdrawing consent or terminating the use of the product or service” (Zhang, 2021).

Other relevant rights from PIPL come in the form of explicit duties placed on data processors. The requirement for processors to obtain users’ consent (to data collection) will encourage processors to introduce an ‘opt-in’ interface as opposed to an opt-out one. Relatedly, Article 15 stipulates that processors must “provide a convenient way to let the user withdraw their consent” (Xiao and Shen, no date). These strong consent-based rights are further enhanced by the fact that, under PIPL, processors cannot refuse service if a user denies or withdraws their consent, unless processing that data is necessary to provide the service. Finally, the law also provides safeguards against unwanted sharing of data with third-party entities by requiring processors to obtain additional and separate consent to do so. Additional consent is also required for the processing of ‘sensitive personal information’, which is a considerable step given that “The scope of ‘sensitive personal information’ in the PIPL is much broader than in the GDPR”, with financial information, transaction records, and location tracking regarded as sensitive personal information.

One of the most recent regulatory focuses of the Chinese government is ‘deep fake’ technologies and services. On 10 January 2023 China released its ‘Deep Synthesis Provisions’ to “strengthen its supervision over deep synthesis technologies and services” (Kachra, 2024). These provisions impose obligations on both generators and users of deep synthesis services, with the aim of promoting transparency, data and personal information protection, content management and labelling, and technical security. The ‘comprehensive scope’ of the deep synthesis provisions makes China a leader in the regulation of this kind of technology. As Kachra (2024) explains, “While the UK is also intending to ban the creation and dissemination of deepfake videos without consent, China’s law goes beyond this. The regulation creates rules for every stage of the process involved in the use of deepfakes, from creation to labelling to dissemination, leaving room for the potential suppression of organically captured content as well.”

On 17 January 2024, the MIIT released the ‘Guidelines for the Construction of a Comprehensive Standardization System for the National Artificial Intelligence Industry’. The guidelines provide a strategic framework for creating unified standards across China’s AI sector to encourage and support collaboration among government, industry (particularly key sectors like healthcare and autonomous vehicles), and academia. One way this is achieved is through putting forward unifying technical standards to improve interoperability between systems and promote more seamless communication. The government is making additional adjacent efforts to improve the reliability and clarity of the AI ecosystem. Take, for instance, the Measures for Labeling Artificial Intelligence-Generated Content, issued by the Cyberspace Administration of China. These Measures are designed to ensure that AI-generated content is clearly identified, to enhance transparency for users and combat misinformation.[2] Effective from 1 September 2025, they will require both explicit labels (visible text, audio cues, or graphics) and implicit labels (identifiers of the content’s origin embedded in metadata) to indicate AI involvement in the production of an output. The rules will apply to AI content creators, online platforms, app distributors, and users, all of whom will be obligated to ensure proper labeling and will be prohibited from removing or altering these labels.

Another important and targeted regulation recently put forward by the CAC is the Measures for the Security Management of the Application of Facial Recognition Technology. It was jointly issued by the Cyberspace Administration of China and the Ministry of Public Security on 21 March 2025 and is set to take effect on 1 June of the same year (Center for Security and Emerging Technology, 2025). These regulations lay out a comprehensive framework governing the use of facial recognition technology within China, with the primary objectives of protecting personal information rights and ensuring public safety. The Measures put forward a number of new requirements for users of this technology: (1) the technology may only be used for legitimate purposes and when necessary, (2) users must provide justification for using facial recognition, (3) it cannot be the sole verification method when other non-facial recognition methods can achieve the same result, and (4) users are required to conduct Personal Information Protection Impact Assessments (PIPIA), among other important obligations.

Development

The Chinese government has approached AI development with both great caution and enthusiasm. Whilst the risks are well-recognized by State leaders, China is taking a multi-faceted approach to support the development and growth of its AI industry. Key efforts include implementing a strong and clear regulatory framework, promoting AI infrastructure development and security, cultivating internal talent through increased R&D funding, and international collaboration.

State-led policy initiatives are playing a sizable role in the growth of China’s global standing in the AI race. One of the earliest testaments to China’s ambitions in this space was the ‘Made in China 2025’ initiative launched in 2015. It commenced a significant shift in China’s determination to be a major player in the technology space on a global level, rivaling Europe and especially the United States (Kania, 2019). Key objectives of the plan included upgrading China’s internal manufacturing capabilities, a reduction in dependence on foreign technology, and fostering innovation across several industrial sectors (especially high-tech ones). The government no longer refers to the ‘Made in China 2025’ policy, but the effects of it are still observable and discussed.

The New Generation AI Development Plan released in 2017, discussed in the previous section, was a landmark policy that has served as a roadmap for China’s AI development strategy. The publication of this Plan demonstrates China’s unique long-termist approach to AI development that places great emphasis on the State as an active participant. The Plan stresses the importance of cross-sector collaboration and integration to reach key goals by 2030. State-led initiative is also observable in the ‘Five-year plans’ released by the Communist Party of China (CPC). These ‘short and general guidelines’ function as a “central means of organizing policy in China, especially in the areas of environmental protection, education, and industrial policy” (Wikipedia, 2024). The most recent 14th plan (2021-2025) emphasized AI as a central industrial sector for rapid technological innovation and economic growth.

Another important AI-related project unfolding in China is the creation of AI pilot zones. The Ministry of Science and Technology reports that China intends to build “around 20 AI pilot zones by 2023” (State Council of the People’s Republic of China, 2021). These zones will serve as investment hotbeds to “innovate a number of practical and effective policy tools, explore models integrating AI and economic development, gain experience that can be repeated and promoted, and build highlands for leading AI” (Ibid, 2021). These zones are marketed as opportunities for foreign investors to support the AI sector at the regional level. According to Wan Gang, president of the China Association for Science and Technology, the zones “will attract global innovation resources to deeply integrate with China's real economy, and constantly create a new trend of industrial development while becoming a new engine of economic and social development” (Global Times, 2023). As of the most recent reporting in May 2023, China has already established 18 zones. Each one is intended to address local technological needs and target different aspects of AI growth and development.

New AI Innovation Zones in China (from Asia Briefing Ltd)

Beijing – Beijing
  • Accelerating technological R&D in areas such as core algorithms, software, and hardware.
  • Developing key areas such as smart infrastructure, smart networked vehicles, and smart cities, and continuing to promote a deep integration of AI in the real economy.

Tianjin – Tianjin (Binhai New Area)
  • Coordinating with the development strategy of Beijing-Tianjin-Hebei, addressing practical needs of industrial intelligent transformation, upgrading government services, and improving people's livelihood.
  • Promoting key areas such as smart manufacturing, smart ports, and smart communities.

Zhejiang – Hangzhou
  • Deepening the application of AI technology in urban management, intelligent manufacturing, and smart finance.

Guangdong – Guangzhou
  • Focusing on the development of intelligent industries, such as intelligent key devices, intelligent software, and intelligent equipment, as part of the development requirements of the Guangdong-Hong Kong-Macao Greater Bay Area.
  • Exploring scenarios of AI's application in key technological directions, such as computer vision, and other fields, such as industry and commerce.

Sichuan – Chengdu
  • Leveraging its strategic location along the "Belt and Road", grasping the opportunities for the construction of the Chengdu-Chongqing double-city economic circle, and using AI to empower small and medium-sized enterprises.

China’s innovative development techniques also include the creation of regulatory sandboxes. The government has launched initiatives to develop these sandboxes so that AI technologies can be tested in real-world conditions whilst mitigating any potential risks. The Shanghai Regulations on Promoting the Development of the AI Industry were passed in part to “further pave the way for sound and sustainable development of the AI technology through a grading management and ‘sandbox’ supervision” (Wu, 2022). The purpose of this regulation in the eyes of government officials is ‘to provide sufficient space for companies to explore and test out their technologies.’ Similar projects have been launched in Beijing, where the China Securities Regulatory Commission has “launched the fintech innovation pilot projects in Beijing to promote the stable development of fintech in the securities and futures industry” (Foreign Affairs Office, People’s Government of Beijing Municipality, 2022). The Director of the Institute of Finance and Banking in the Chinese Academy of Social Sciences, Yin Zhentao, has noted that the “fintech innovation pilot program is an important form of ‘regulatory sandbox’, and it supports and encourages financial innovation, especially fintech innovations on the basis of keeping regulatory compliance and risks under control” (Ibid, 2022).

A variety of indicators suggest that China’s industrial strategies are paying off. The rise of the industry, with heavy state-backing, has been meteoric. The country has witnessed an explosion of economic activity in the AI sector, which reached a valuation of 578.4 billion yuan ($79.67 billion) in 2023, with the number of AI enterprises exceeding 4,000 (Xingzhou, 2024). A white paper released by Deloitte emphatically notes that “China has become one of the world's fastest growing countries in AI development” (Deloitte China, 2018). As of the second quarter of 2024, the “(AI) sector has nudged aside finance to top the country’s salary rankings” (Chen, 2024). These trends are expected to continue into the foreseeable future. A report released by CCID Consulting expects that “the market scale of China's AI industry will reach 1.73 trillion yuan by 2035, accounting for 30.6 percent of the world's total” (Antara News Agency, 2024).

Whilst recent years have seen great AI prosperity in China, significant obstacles loom ahead. The growing geopolitical rift between Beijing and Washington threatens to cause a slowdown in the industry, and in fact it is already having such an effect. In addition to state resources, foreign private capital has been a key ingredient fueling the rapid growth of the sector in China – especially venture capital from the United States. This source of investment, however, has been impacted by U.S.-China decoupling over geopolitical tensions: “American venture capital, which has historically been the major drive of growth in China’s internet sector, has plummeted [in 2023]” (Liao, 2024). The drying up of U.S. capital has several causes, many of which stem from the U.S. increasingly regarding China as a geopolitical foe instead of a mere competitor in the global economy.

Governance

The courts are already playing a key role in the governance of AITs, some with implications for the world of work. For example, in an important recent case heard by the Hangzhou Internet Court, Case No. Zhejiang 0192 - Civil No. 1587, the court ruled in February of 2025 that a generative AI platform was contributorily liable for copyright infringement. Tsuburaya Productions, the rights holder of the Ultraman series, claimed that a Gen-AI platform “allow[ed] users to train and share models based on copyrighted material” (Digital Policy Alert, 2025). Digital Policy Alert (2025) explains that “The court found that the platform had knowledge of the infringing activities, profited from them, and failed to take reasonable measures to prevent infringement.” The ruling also came with an order for the AI platform to terminate its infringing activities and pay a considerable fine.

Another important angle of China’s governance approach to AITs has emerged from the State’s reactions to numerous protest movements or bouts of civil unrest demonstrating widespread dissatisfaction with algorithmic management systems. Drivers for ride-hail apps and couriers for delivery apps have repeatedly staged demonstrations and protests over poor working conditions and the harms of algorithmic management. On the 16th-18th of November, for example, a nationwide strike was conducted by drivers for Huolala “in large part over a new multi-factor ordering system on the platform” (China Labour Bulletin, 2023a). This new ordering system, drivers have argued, is reducing the price of trips and driving down incomes – and it is all the harder to know why because the algorithm is opaque. The China Labour Bulletin (2023a) notes that in the Huolala situation, “[platform] Workers are up against an unclear and inhumane algorithm, a powerful company monopolising and setting standards in the industry, and a regulatory process that is at most a slap on the wrist.”

Much of the resistance to AITs in China comes directly from mass movements that ultimately require a response by regulators. A well-documented example of this is how gig workers have forced a national reckoning on the use of algorithms by major tech companies in the platform economy. In September 2020, a report called “Delivery Rider, Trapped in the System” went viral on social media, triggering the most heated discussion yet on possible algorithmic manipulation in China (Lai, 2020). Workers who are managed and controlled by algorithmic systems are exposed to high risk during the course of their work, necessitating active regulation of the AITs used to operate platform work.

Following that public outburst, in July of 2021 a guideline called ‘Guiding Opinions on Protecting the Labour Rights and Interests of Workers under New Forms of Employment’ (hereafter ‘Document No. 56’) was published by the Ministry of Human Resources and Social Security (MOHRSS) together with seven other departments. This was the first time the labour department responded to and addressed the issue of how algorithms apply to the world of work, despite the fact that the issue had existed for some time (Donoghue et al., 2024). At the core of Document No. 56 is the initiative to promote greater algorithmic transparency. The regulation says “that those algorithms directly related to workers' rights and interests shall be fully asked for the opinions and suggestions of trade unions or workers' representatives” (Ibid, 2024). A more detailed document called ‘Guidance on the implementation of the responsibility of online catering platforms to effectively safeguard the rights and interests of take-away food delivery personnel’, specific to the online catering sector, was published by the State Administration for Market Regulation (SAMR) with other government agencies. In accordance with the requirement for transparency in Document No. 56, the Guidance stipulates that “algorithm rules affecting the vital interests of food delivery workers must be made public in advance so that food delivery workers, unions, and other parties can provide their advice” (Ibid, 2024).

The role of trade unions is a much more complicated affair. This is in part because the All-China Federation of Trade Unions (ACFTU) is “China’s sole legally authorized central trade union, which comprises layers of unions at the regional and industrial levels” (DelGrande, 2022). The international trade union movement has been very critical of this fact, arguing that China ultimately lacks any significant or real trade union movement as the ACFTU is loyal to the CPC. The limits of the ACFTU are deeply evident when it comes to platform workers. In their major review of the platform economy, last updated April of 2023, the China Labour Bulletin laments that “China's official union has failed to organise workers and initiate collective consultation in the industry, and platform workers continue to lack adequate representation” (China Labour Bulletin, 2023b). It is for this reason they recommend that “the official trade union take a greater role in protecting platform workers' rights and interests by creating platform industry sub-divisions within its official structure”. This would allow union members, or workers, to directly participate in union activities and “establish a level of accountability.”

There is some evidence, however, that the government is creating space for a more powerful and active labour movement. This shift can be linked to widespread pressure by labour activists, but also to the policy paradigm shift commenced with Xi’s ‘Common Prosperity’ agenda (Huld, 2022). The recent regulatory initiatives undertaken by the Chinese government described above include efforts to improve workplace conditions, protections, and voice to contest AIT abuse and overreach. In July 2021, China’s Ministry of Human Resources and Social Security published a document titled Guiding Opinions on Protecting Labor and Social Security Rights and Interests of Workers Engaged in New Forms of Employment that “specifically target[s] the rights of platform workers” (DelGrande, 2022). A key innovation in these Guidelines is the mandate that “employers enter into contracts with workers even when their relationship does not meet all of the requirements for a legal ‘labor relationship’”, and also the emphasis the Guidelines place on the role of labour unions (Ibid, 2022). This is especially evident in Article 10, which makes reference not only to trade unions but to ‘workers’ representatives’. This underscores a new flexibility and openness to the claims of workers directly, likely an outgrowth of the vocal criticism by workers of platform companies and the ACFTU.

Another key development is the recent revision of China's Trade Union Law. This law sets out the trade union and collective bargaining rights prescribed to workers, including the right to organise, oversight of termination and layoffs, health and safety requirements, autonomy in union activities, and so on. The Trade Union Law was amended on 24 December 2021, with the revisions taking effect on 1 January 2022. One of the key changes to the law was the addition of the following clause to Article 3: ‘Trade unions adapt to the development and changes of enterprise organisation forms, workforce structure, labour relations, employment forms, etc., and safeguard workers' rights to join and organise trade unions according to law.’ This constitutes a potential major step forward for platform workers as “It is generally believed that this new clause clarifies gig workers' right to join and organise trade unions” (Li and Ma, 2022). However, there is ambiguity about what this change means, and it may take time for the courts to establish its significance. A further example of how labour activism is changing the trade union landscape is Article 10 of Document No. 56. This guideline requires that digital labour platforms explicitly “consult workers on how algorithms should be designed and used when those algorithms directly affect labor rights… [and] may lead to the co-production or co-determination of algorithmic rules that better track the interests of workers” (Donoghue et al., 2024).

Concluding Remarks: Consequences for the World of Work

China offers a distinctive case study in state-driven regulation, development, and governance of artificial intelligence technologies. It stands in stark contrast to the often laissez-faire or corporate-led models of AI governance found in the United States, and to the slower, more detail-oriented approach of the EU. Instead, the Chinese government is moving swiftly to confront a wide array of AI-related harms. Yet, while this regulatory assertiveness signals a clear intention by the State (and the Party) to control and shape the societal impacts of AI, its implications for the world of work remain ambiguous and, in certain key respects, underdeveloped.

I conclude by noting three broad findings from this case study. The first is that workers’ interests appear to be largely subordinated to broader public concerns, with their needs diluted within larger considerations of national development and social stability. The result is a regulatory landscape that is actively expanding, but one that still fails to prioritise labour protections as a core ambition. A close reading of the enacted guidelines, measures, recommendations, laws, and so on, reveals little direct mention of workers and their rights. This does not mean that these initiatives have no relevance for workers. We saw, for example, that both the Personal Information Protection Law (PIPL) and the Internet Information Service Algorithmic Recommendation Management Provisions (IISARMP) have implications for the world of work, though these are often indirect and underdeveloped compared to consumer protections. The PIPL, which regulates the collection, storage, and use of personal data, applies to all data handlers, including employers, and thus imposes obligations on how companies manage the employee data they collect. Such obligations include requirements around transparency, consent, and data minimisation. These requirements could, in principle, enhance worker privacy and limit intrusive surveillance practices.

Meanwhile, the IISARMP, which governs algorithmic recommendation systems, could affect workers—especially gig and platform workers—whose job assignments, evaluations, and pay are often mediated by algorithms. The provisions prohibit harmful algorithmic practices such as manipulating prices and require transparency in certain algorithmic decisions.[3] Yet, the focus remains largely on protecting users and consumers, not employees. Overall, while regulations such as PIPL and IISARMP constitute regulatory tools that could be leveraged to protect workers from AIT-related harms, they do not foreground workers’ well-being or address power imbalances between employers and workers in algorithmic management systems.

A second core theme to emerge from this case study is the lack of clarity surrounding how Chinese regulatory activity will alter the world of work. We can see that new laws and guidelines are rapidly unfolding, which does indicate a sense of urgency and adaptability on the part of the central government. Indeed, the initiatives surveyed above underscore the top-down model of policymaking in China, wherein the central government and its leading agencies (such as the Cyberspace Administration of China and the Ministry of Industry and Information Technology) articulate the broad policy priorities. However, it is regional and local governments that are largely responsible for translating these directives into concrete administrative procedures and practices. Some scholars have termed this kind of governing arrangement “fragmented authoritarianism”, signifying a system in which the central government has ultimate authority but relies on a vast and semi-autonomous bureaucratic apparatus to interpret and implement its directives.

As a result, the meaning and impact of AI-related regulations remain far from settled. The same law may be interpreted differently in two different regions, and its enforcement may depend not just on the decision-making of institutional actors in those regions, but also on local economic and political contextual factors. Whilst this decentralised structure allows for a degree of adaptability — enabling regional governing bodies to implement practices reflective of local labour markets — it can also introduce regulatory variation and unpredictability. For workers, this may be experienced as inconsistency in how their rights and responsibilities are understood, experienced, and enforced across jurisdictions. Regulatory inconsistency can, in turn, induce regulatory arbitrage, whereby firms seek out more lenient jurisdictions – thereby putting downward pressure on working conditions. Moreover, because China's judiciary plays a relatively limited but growing role in legal interpretation, there may be delays before a stable and authoritative understanding of these regulations emerges through court rulings or administrative case law.

The third core theme revealed by this case study is the State’s preference to act as the central mediator of conflict and solver of problems in the world of work. This observation comes through in several ways. The enacted regulations themselves often point to China’s broader political tendency to favour the State, rather than independent civil society or labour organisations, as the principal mechanism for safeguarding workers’ interests. The government has legislated for itself a highly interventionist role in the regulation of algorithms and AITs, positioning itself as the core enforcer of ethical and social norms and legal obligations. Additionally, as noted in the previous section, labour unrest has been met with significant and rapid responses by the State. We saw in the governance section how the State reacted with the publication of Document 56, the Guiding Opinions on Protecting Labor and Social Security Rights and Interests of Workers Engaged in New Forms of Employment, and changes to the Trade Union Law. This attentiveness could be seen as an effort by the State to retain its legitimacy as the protector of workers’ interests. That said, it was also noted above that there are some promising moves by the Xi Administration towards the promotion of ‘collective voice’. It is difficult to interpret what these actions mean, as they could empower trade union movements or simply enhance collective voice through less unionist options, such as requiring companies to engage in ‘worker consultation’.

These trends suggest that while China is pioneering a model of AI governance that is aggressively interventionist, it is not one that necessarily prioritises workers. However, we draw this conclusion with some reservations, given that more time is needed to meaningfully evaluate the impact of the regulations passed in recent years. Lastly, we cannot discount the intricacy and complexity of China’s system of governance – and how that can complicate our understanding of the country’s progress on the regulation, development, and governance of AITs. As countries around the world look to China’s example in building their own AI ecosystems, they should keep in mind the unique political and social institutions that constitute the Chinese system.

Footnotes

[1] Full translation: https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/

[2] The full text of the Measures can be found at: https://www.chinalawtranslate.com/en/ai-labeling/

[3] Specifically, Article 21 of the IISARMP states that algorithmic recommendation service providers must not use algorithms to engage in "unreasonable differential treatment in trading conditions such as trading prices" based on consumers' tendencies, trading habits, or other characteristics.

References