E-AIPOWW’s USA Tracker shows AI policy and regulation work across the USA; business activities around innovation and development of AI; and civil society responses including unions, NGOs and IGOs, where workers are impacted.
E-AIPOWW’s USA Jurisdiction Report outlines the history and progress of much of this legislation and surrounding activities, and the impacts that artificial intelligence is having on workers.
At the time of writing, the United States is undergoing a period of radical rightwards transformation. On taking office, the second Trump administration immediately set about reorganising state capacity: dismissing hundreds of thousands of federal public servants, while announcing dramatic cuts in government spending. Trump’s chaotic pivot in international trade policy will likely reshape not just American domestic production, but the global political economy. Meanwhile, authoritarian policy agendas – including mass, unlawful deportations – are pursued even more zealously than during Trump’s first term. Commentators speak of democratic backsliding, assaults on the rule of law, and constitutional crisis.
At the centre of all this, we find a group of extraordinarily powerful and wealthy individuals: the technology oligarchs. This is a group of people who embody, as Julie Cohen (2025, 50) puts it, “a toxic blend of extreme personal wealth, de facto personal control of infrastructures for economic, social, and political interchange, near-total lack of personal accountability, and utter faith in the rightness and righteousness of a hyper-rationalist vision for social reinvention.” Most notably, Elon Musk, who reportedly spent around $300 million in helping to elect Donald Trump, was – for a time – given unparalleled influence over public policy. In November 2024, the president-elect announced that Musk would be appointed to lead efforts “to dismantle Government Bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure Federal Agencies.”
This article proceeds from the assumption that contemporary US technology and innovation policy can only be understood in light of both these shifting state-capital (and state-oligarchy) relationships, and of the technofuturist visions of AI that accompany them. Technofuturism – sometimes known as techno-optimism – can be understood as a reactionary sociotechnical imaginary that promises endless economic expansion, seeks to build racialised hierarchies of ‘progress’, and aims to push digital technologies beyond democratic oversight. In the posthumously published Cyberlibertarianism: The Right-Wing Politics of Digital Technology, David Golumbia (2024) traced the genealogy of this ideological current and its strongly fascist undercurrents. Typified by venture capitalist Marc Andreessen’s 2023 Techno-Optimist Manifesto, this position is now inextricably linked to federal policymaking.
Before his election as Vice President, J.D. Vance spent five years in Silicon Valley, building connections with the likes of Andreessen, and the Palantir founder, Peter Thiel. The latter contributed significant financial backing to Vance’s burgeoning political career. Since then, Vance has emerged as a would-be moderator of two factions: the so-called populist right, and the tech right. In March 2025, speaking at a tech summit hosted by Andreessen’s venture capital firm, Vance attempted to position himself as broker of these two sides. Making the case for deregulation, tax cuts, and the maintenance of a strict global division of labour, he claimed that both the interests of workers and the interests of capitalists could be protected. The vision for AI was resoundingly optimistic: “Yes, there are concerns. Yes, there are risks. But we have to be leaning into the AI future with optimism and hope, because I think real technological innovation is going to make our country stronger.”
When it comes to AI technologies, US policy direction has already shifted rapidly towards deregulation. Between 2021 and 2025, President Biden had spearheaded several initiatives that aimed to govern the development, adoption, and use of AI technologies. These included a landmark Executive Order 14110 (see Regulation below). Most of these measures have now been overturned in favour of a highly permissive regulatory environment and accelerated investment in AI. If Biden aspired to promote “safe, secure, and trustworthy” development of AI, Trump has almost entirely disavowed that goal. In July 2025, the Trump administration unveiled its AI Action Plan. Measures to promote trustworthy or responsible AI were noticeably absent.
The fiercely anti-regulation Trumpist position is clearly not one shared across the entire breadth of the United States. A closer examination of policymaking at state level reveals a far more dynamic picture. According to the National Conference of State Legislatures, 31 states passed legislation or adopted resolutions related to the production and use of AI technologies in 2024. These range from comprehensive AI legislation in Colorado, to softer measures like the creation of an Artificial Intelligence Commission in Delaware. These laws are not without challenge, however. Though eventually removed, a proposed federal moratorium on AI regulation was included in the recent “One Big, Beautiful Bill” or H.R. 1. If enacted, this provision would have blocked enforcement of all existing state legislation on AI, and prevented the adoption of new legislation for a decade. Though the issue is paused for now, similar political rifts between Washington and state legislatures are likely to re-emerge further down the line.
Drawing on analysis of legal and policy materials, case law, technical standards, and secondary literature, this article examines the AI governance landscape in the United States in 2025. It follows the approach developed by the Artificial Intelligence Policy Observatory for the World of Work (AIPOWW), and forms part of a symposium on AI and the world of work. It begins by presenting the political and economic context in which technology and innovation policy is developed within the United States. This is followed by a discussion of the current regulatory environment(s), including proposed and enacted legislation at the federal and state levels that aims to regulate the development, adoption, and use of AI technologies. The article then turns to analysis of emerging governing arrangements and practices in industrial relations.
As a federal democratic republic, governing authority in the United States is shared between the federal and state levels. In each case, government is tripartite, comprising the executive (the president/governor and their cabinet), a bicameral legislature, and the judiciary. At the federal level, policymaking is distributed across executive departments, each led by a Secretary appointed by the President. Currently, there are fifteen executive departments: State, Treasury, Defense, Justice, Interior, Agriculture, Commerce, Labor, Health and Human Services, Housing and Urban Development, Transportation, Energy, Education, Veterans Affairs, and Homeland Security. Each department also has specialist agencies. For example, the Wage and Hour Division (WHD) – the federal office responsible for promoting and achieving compliance with federal labour standards – is part of the Department of Labor. Some federal agencies (e.g., the CIA) sit outside of executive departments and are meant to be relatively insulated from the office of the President.
Across government, policymakers have directed attention towards AI technologies. Although no major federal legislation has been passed by Congress, several Executive Orders have been issued by the White House first under President Biden, and now under President Trump (see Regulation). For many, this lack of AI-specific federal legislation presents a clear regulatory gap. With the balance of power currently on the side of the Republican Party, this gap seems only likely to widen.
Policymaking on AI in the United States is undoubtedly determined in part by the country’s position as a global economic hegemon and military superpower. Many of the world’s largest technology companies are based in the United States and, according to Stanford’s 2025 AI Index Report, the country continues to lead in terms of production of top AI models. While China produced 15 notable AI models in 2024, the US released 40. Based on popular performance benchmarks like Measuring Massive Multitask Language Understanding (MMLU), however, the quality gap appears to be closing. China also exceeds US output in AI publications and patents. The US meanwhile exerts considerable power over the manufacture and trade of semiconductors. Recent analysis by Cai and Maquieira-Alonzo has found that cooperation with US controls on chips often comes down more to security alignment than to market interests or economic dependence: “in cases where governments and firms have a strong security alignment with the US, their cooperation with US economic statecraft is stronger, even when economic interests might push them to resist” (2025, 2).
Just days after re-entering the White House, President Trump signed an Executive Order entitled “Removing Barriers to American Leadership in Artificial Intelligence” that asserts his ambition of solidifying American dominance in AI development. Though not explicitly named in the Executive Order, the US’s primary competitor is, of course, China. While AI development and regulation are undoubtedly shaped by these global competitive dynamics, their precise nature is disputed. Kak and Myers West (2023) point to the persistence of the “AI arms race” as a discursive frame that actors from the tech industry and defence establishment have employed as part of efforts to forestall regulatory intervention. An understanding of the United States as engaged in a race against China for AI supremacy has “not only kindled an appetite, across party lines, for increased support of escalated AI development and deployment, but also served to push back against calls for slower, more intentional development and stronger regulatory protections.”
Despite the appeal of certain anti-regulation arguments, including those based on national security concerns, regulators have nonetheless felt empowered to intervene. The 2021-25 chair of the Federal Trade Commission (FTC), Lina Khan, instigated a radical shake-up of antitrust enforcement. Khan led efforts to address anti-competitive practices of Big Tech companies such as Amazon, Google, and Nvidia – and with some success (FTC 2025). However, her departure in early 2025 leaves much room for doubt. Tech companies have made extensive use of licensing and hiring agreements, which come with less regulatory scrutiny compared to mergers and acquisitions. Department of Justice officials and Trump's FTC Chair, Andrew Ferguson, have, however, signalled their willingness to take enforcement action against anticompetitive conduct (Reuters 2025).
Meanwhile, military use cases of AI are set to expand. Firms including Anthropic have recently announced their intention to form military and security partnerships. Soon after Trump took office, Google (Alphabet) dropped its policy of barring its technologists from pursuing “technologies that cause or are likely to cause overall harm.” In response, unionised Google workers, led by Parul Koul, president of the Alphabet Union Workers-CWA, expressed their dismay: “It's deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war” (Alphabet Workers Union, 2025). While a clear public statement of dissatisfaction, it may not represent a bold assertion of worker power. Whether the Silicon Valley workforce would have the capacity or willingness to employ more radical forms of resistance is uncertain.
In the world of work more broadly, we see some signs of mobilisation through collective action, bargaining and litigation, that could open possibilities for workers to demand greater protections related to the use of AI. This must be understood within the context of US labour law, however. As Kate Andrias summarizes: “American law establishes a system of voluntaristic, decentralized unionism against a background presumption of employment-at-will and broad managerial rights; collective bargaining is a private negotiation between individual employers and employees that exists only where a majority of employees, despite extraordinary employer resistance, chooses to unionize” (2019, 58).
To understand the current political and policy climate in the United States, we can also look at recent judicial activity. In the landmark ruling in Loper Bright Enterprises v Raimondo (603 U.S. 369 (2024)), the Supreme Court overruled the Chevron doctrine, which had required courts to defer to reasonable agency interpretations of ambiguous statutes. While the overall effect of this decision has been to significantly defang regulatory agencies, its precise impact on the authority of the National Labor Relations Board (NLRB) is still being debated by lawyers (Jacob and Lofaso, 2025).
President Trump re-entered the White House at an important point for AI development and diffusion, giving him and the Republican Party significant power to chart the path ahead. Amongst Trump’s first acts was the repeal of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110), signed by President Biden in 2023. As stated in the introduction to this commentary, the direction of regulatory policy has shifted dramatically in recent months. What seemed like a turn towards a more muscular approach to AI regulation under President Biden has given way to a bonfire of regulation under Trump – along with an unprecedented about-turn on trade policy (see Tzouvala, 2025). Speaking at the Paris AI Summit in February 2025, US Vice President J.D. Vance warned of “excessive regulation” of AI. He expressed concerns that “some foreign governments are intending to tighten the screws on US tech companies with international footprints.” These and similar statements have set the tone for the current administration’s approach to AI regulation. This section will set out the most significant developments in AI regulatory policy in recent years. It will outline both key changes at the federal level, and some of the most significant policy developments in state governments.
Compared to other jurisdictions and polities, the United States has taken a laissez-faire approach to AI regulation. Shaped by what the legal scholar Anu Bradford has called “the country’s uncompromised faith in markets and scepticism toward government regulation” (2024, 387), this approach has ensured that no comprehensive federal legislation regulating AI has ever been enacted. Though AI-specific federal legislation exists, such as the National Artificial Intelligence Initiative Act (2020) and the AI in Government Act (2020), these are not comprehensive regulatory instruments comparable to, for example, the European Union’s AI Act. Instead, these instruments are concerned with research and development, and driving AI programmes across government respectively. They largely do not regulate the development, adoption, or use of AI technologies in the private sector.
More comprehensive, sector-neutral legislation has been, at various points, proposed in Congress. The Algorithmic Accountability Act was introduced in 2022 by Democratic lawmakers. Its main aim was to "bring new transparency and oversight of software, algorithms and other automated systems." The bill would have required companies to conduct impact assessments for automated decision systems (ADS) that make critical decisions, and the FTC to host a public repository for such systems. The legislation did not pass before the 117th Congress adjourned, and has not been reintroduced since.
Where federal policy has taken a more decisive stance is in the issuance of Executive Orders. These are official documents, signed by the President, which direct how the federal government operates. Though they are binding on the executive branch of government, they do not go through any legislative process and do not create obligations for private actors. Under the first Trump administration, two notable AI-related Executive Orders were issued. The first, EO 13859, entitled Maintaining American Leadership in Artificial Intelligence, aimed to establish federal principles and strategies to strengthen AI capabilities, and to promote scientific discovery, economic competitiveness, and national security. It was followed in December 2020 by EO 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, which required agencies to create an inventory of AI use cases.
Under the Biden administration, EO 14110 was issued. Entitled Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, this EO is widely considered to be the most assertive federal intervention on AI regulation to date. It was rescinded by President Trump in January 2025. The EO was structured around eight guiding principles that included supporting workers. Amongst other provisions, it directed the FTC to use its rulemaking authority “to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI” (Section 5.3(a)). Section 6 introduced a series of requirements related to the world of work. These included an order for the Secretary of Labor to develop “principles and best practices for employers that could be used to mitigate AI's potential harms to employees' well-being and maximize its potential benefits” (Section 6(b)).
Pursuant to EO 14110, the Department of Labor released its Principles on AI and worker wellbeing in May 2024, and Best Practices in October 2024. Although the Department of Labor has since signalled that these documents no longer reflect current policy, following the rescission of EO 14110, their main provisions are worth detailing.
These guidelines also made a positive assessment of the role of trade unions in setting AI policy, noting that in recent years, “unions and employers have come together to collectively bargain new agreements setting sensible, worker-protective guardrails around the use of AI and automated systems in the workplace.”
Prior to publication of Biden’s EO 14110, the administration produced two highly significant federal policy instruments governing the development and use of AI. First, the Blueprint for an AI Bill of Rights was issued by the White House Office of Science and Technology Policy (OSTP) in 2022. This instrument was non-binding and can be understood as a precursor to EO 14110. Second, the National Institute of Standards and Technology (NIST), which is part of the Department of Commerce, published its AI Risk Management Framework in January 2023. This is a comprehensive technical framework, developed as a requirement of the National Artificial Intelligence Initiative Act and aligned with international standards such as those published by the Organisation for Economic Co-operation and Development (OECD).
With the publication of its July 2025 AI Action Plan, the Trump administration continues to dismantle the existing federal AI governance regime. The Action Plan comprises three main pillars: (1) Accelerate AI Innovation, (2) Build American AI Infrastructure, and (3) Lead in International AI Diplomacy and Security. Its publication follows over 10,000 public comments, and fulfils the core requirement of EO 14179 on “Removing Barriers to American Leadership in AI.” The Action Plan seeks to revise existing measures such as the NIST AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Although it does not place direct restrictions on state level regulation of AI, it does discourage regulation by recommending that the federal government withhold funding from “states with burdensome AI regulations.”
On 23 July, President Trump signed three further executive orders to implement the AI Action Plan. These implement its recommendations relating to infrastructure, so-called “woke AI” and government procurement, and AI exports. For the latter, EO 14320 establishes an “American AI Exports Programme.” According to a White House press release, the programme will allow the United States to “strengthen ties with allies, promote [its] standards and governance models, and maintain technological dominance.”
The AI Action Plan has at least two sections that directly address the world of work. The “Empower American Workers in the Age of AI” initiative sets out proposed actions for the Department of Labor to undertake to prioritise “AI skill development” and to fund the retraining of workers impacted by AI-related job displacement. The “Train a Skilled Workforce for AI Infrastructure” initiative meanwhile recommends that the Department of Labor work with other federal agencies to identify high-priority occupations for AI infrastructure. These initiatives do not include any provisions that would extend protections for workers against harms associated with AI, such as discrimination or surveillance.
Experts and worker representatives have expressed widespread concern at these recent policy changes. As industrial relations scholar David Madland (Guardian, 4 March 2025) put it, recent rollbacks under Trump have “made it even more so that workers are at the whims of their employers on how AI is deployed.”
At the state level, a large volume of legislation relevant to the governance of AI has already been proposed or enacted. There is now a growing patchwork of laws and policy initiatives that varies from state to state. As any detailed analysis of these instruments would far exceed the length of this commentary, coverage will be limited to key developments.
In May 2024, Colorado became the first state to enact comprehensive AI legislation. The Concerning Consumer Protections in Interactions with Artificial Intelligence Systems Act (Colorado AI Act) requires developers of high-risk systems to use reasonable care to protect consumers and workers from algorithmic discrimination. The Act was originally scheduled to take effect in February 2026, but at the time of writing (August 2025) its implementation has been delayed following intense lobbying (Oduro, 2025). While efforts to enact similarly comprehensive legislation have recently faltered in Connecticut and Virginia, other states including New Mexico and Massachusetts are considering legislation that would provide similar protection to that included in the Colorado AI Act. In Massachusetts, a Bill entitled “An Act to Prevent Unlawful Algorithmic Discrimination” aims to prevent discriminatory outcomes caused by the use of automated systems. This would include decisions related to employment, such as hiring, termination, and pay.
In many cases, legislative proposals originally intended as expansive, sector-spanning law have been stripped of key provisions. In June 2025, the Governor of Texas, Greg Abbott, signed the Texas Responsible AI Governance Act (TRAIGA) into law. The version introduced in December 2024 bore strong similarities to the Colorado Act, with its focus on “high-risk” AI systems, and its imposition of substantial requirements and liability for public and private sector actors. However, the amended version that eventually passed into law either removes the most onerous provisions entirely or limits their reach to government actors. TRAIGA imposes no obligation on private sector employers to disclose the use of AI in employment-related decision making, and is broadly silent on the duties of employers.
An earlier example of AI-specific legislation is the New York City algorithmic bias audit law (LL 144). This was the first attempt to create an algorithm auditing regime for commercial machine-learning systems, and required employers using automated employment decision-making tools to subject those tools to annual independent bias audits. According to analysis by Groves et al. (2024), LL 144 has failed to create an effective auditing regime, in part due to lobbying that led to the narrowing of relevant definitions for automated decision systems.
In California, the heart of the AI industry, Governor Gavin Newsom vetoed the landmark AI safety bill SB 1047 in 2024. This followed strong opposition from companies including Google, Meta and OpenAI. Nevertheless, representatives led by Rebecca Bauer-Kahan continue to push for legislation that would require audits and comprehensive risk assessments for AI-based hiring tools.
While public policy plays an important role in setting many of the parameters for AI adoption and use in the world of work, it is not the only site for contestation. Civil society, including organised labour, plays an important role in shaping our working lives. Perhaps the best-known example of collective struggle to address AI-related harms is the 2023 Writers Guild of America strike, which led to the first enforceable generative AI agreement in the workplace. Grohmann, Rocha and Guilherme (2025) describe the strike action as “paradigmatic for debates on worker-led AI governance”, demonstrating how, under specific conditions, worker power and struggle can be mobilised for the governance of generative AI. In another recent success for workers, in early 2025, the International Longshoremen's Association (ILA) secured an agreement over the use of automation in ports and maritime cargo centres.
Some unions have taken a proactive approach to AI by publishing guides and tools for members and negotiators on how to include provisions on AI in collective bargaining agreements. The Communication Workers of America (CWA), for example, has a Committee on AI which has produced Principles and Recommendations. Included in these documents is a statement that the effects of AI systems are not inevitable, and that members will bargain for contract language that protects against AI harms. They identify the harms as “invasive surveillance, unfair automated decision-making across all aspects of our members’ employment (including hiring, discipline, and pay-setting), unsafe and stressful work intensification and speed-up, the reduction of compensation or benefit levels and the movement of work functions out of the bargaining unit.”
More recently, Lisa Kresge of UC Berkeley Labor Center’s Technology and Work Program has developed a searchable inventory of collective bargaining documents, from more than 175 agreements. It highlights strategies to shape how technologies are introduced and used in workplaces (UC Berkeley Labor Center 2025). One of the most important types of collective bargaining provisions related to workplace technology is the recognition of fundamental rights when an employer intends to introduce technologies. These provisions can offer “preemptive safeguards to ensure that workers and unions are meaningfully involved in shaping how technologies affect jobs, working conditions, and organizational structures.” The inventory also covers provisions related to employer commitments (including job security), the governance of workplace technology solutions (including surveillance), worker participation in technology governance (such as advisory groups), and other provisions around data rights and protection. While the first of its kind, and a valuable resource, the inventory is intended to be a research tool, and does not provide model language for contracts nor does it indicate best practice.
Many labour law scholars view technological advancements, including AI, as forces that will erode workers’ ability to bargain. By increasing the firm’s ability to replace employees, technologies like AI serve to increase the inequality of bargaining power between workers and employers (Estlund 2023). The radical rightwards shift in US policymaking under the second Trump administration, and the sluggish pace of legislation at the state level, only add to this very bleak assessment.
Yet, workers continue to collectively identify and resist threats to their rights and livelihoods. AI technologies make it easier for employers to discriminate, drive down wages, and surveil workers. If used inappropriately, they threaten health and safety and undermine workers’ autonomy and welfare. These are just some of the challenges workers across the United States are now confronted with. As Lorena Gonzalez, President of the California Labor Federation, recently put it on the Power At Work podcast (Burnes Center for Social Change, 2025): “It is up to us to determine what kind of society we want to live in and what kind of regulations we live around that protect us.” Increasingly, US workers understand technological transformation as the defining labour issue of our time.