The Biden-Harris Administration has coincided with the rise of artificial intelligence (AI)—a technology that threatens to reshape labor in ways not seen since the industrial revolution. Broadly speaking, “artificial intelligence” refers to a set of technologies that simulate human intelligence and are capable of performing complex decision-making and problem-solving tasks. Generative AI and large language models can now automate many non-routine tasks—especially those of high-skilled workers. Predictions about the degree to which AI will restructure labor run the gamut, from negligible interference (in which the technology in fact ends up requiring more, not less, human oversight) to a full-system overhaul (in which intelligent machines categorically replace human labor). Here we evaluate how the Biden-Harris Administration has responded to the “AI threat” in labor policy by providing a brief account of the few instances of union action in the US and the response of the Biden administration, particularly the signing of Executive Order 14110 in fall 2023. We then highlight several key dimensions of the American political economy and place the American context in a comparative perspective, arguing that the lack of collaborative state-market relations in the US has led to an AI policy that fails to adequately protect workers.
What Is the Biden-Harris Labor Agenda in AI?
Although trade unions had previously warned policymakers of the AI threat, it took Hollywood glitz to make the issue front-page news. Last year, a five-month summer strike by the Writers Guild of America (WGA)—the second longest in its history—protested the use of artificial intelligence in the screenwriting process. Protesting alongside them (and through November) was the American actors’ union, SAG-AFTRA, raising concerns about the digital recreation of actors on set. With AI, studios could replicate performers on screen without having to hire or pay the actors themselves. This historic “double strike” ended with the negotiation of contracts that sought to implement guardrails to protect writers and actors from AI. While the WGA agreement established that AI cannot be used to supplant writers or reduce their compensation, it received significant backlash for altering its May 2023 proposal to allow studios to train AI models on preexisting material. The SAG-AFTRA agreement faced similar criticism for failing to impose any serious restrictions on “Independently Created Digital Replicas,” synthetic performers generated entirely by AI and owned by the studios. Further, the many loopholes in this agreement—e.g., the lack of protections against retaliation should actors not consent to the use of their likeness to generate “Employment-Based Digital Replicas,” and the unclear language regarding the storage protocols for such likenesses—have created an atmosphere of uncertainty about the role AI will play in the studio process.
Of course, Hollywood was not the only industry affected. At a press conference, Fran Drescher, president of SAG-AFTRA, told the crowd that soon, “all [fields of labor were] going to be in jeopardy of being replaced by machines.” Around the same time, health professionals became more vocal about the AI threat in their field and advocated for a more active role in regulating its use in medicine. Similar concerns have been raised by those working in telecommunications and other STEM fields. Like Drescher, labor leaders regularly note that the problem is even more widespread, estimating that “70 percent of workers are afraid that technology is going to affect them negatively and take their job away.” On the other hand, Silicon Valley-based non-profits like Alliance for the Future and more fringe groups like the effective accelerationist movement (e/acc) have called for an open-source, innovation-first approach to AI that minimizes protections and maximizes development.
By fall 2023, the White House took action. Signing Executive Order 14110, Biden initiated several programs to address the social impact of AI on a range of issues, including labor. Section 6, “Worker Support,” took a two-pronged approach. It commissioned a report reviewing the AI threat to labor and identifying areas where federal support for workers facing it could be strengthened. It also asked the Department of Labor to develop principle-driven guidelines for employers deploying AI. The E.O. echoes the October 2022 Blueprint for an AI Bill of Rights released by the White House’s Office of Science and Technology Policy (OSTP), which delineated five core principles that should govern the responsible use of AI. This nonbinding document adopted a rights-based approach that, among other issues, highlighted the right to data privacy and the right to protection from algorithmic discrimination as central concerns in mitigating community harm from AI. Both the Blueprint and the E.O. offered to survey and guide the problem rather than resolve it. This is evident in the uneven adoption of specific precepts of the E.O. across private firms. For example, the White House mandated the designation of chief AI officers (CAIOs) across federal agencies and encouraged private firms to do the same. But private companies have not uniformly complied, and when they do appoint a CAIO, the functions of the role vary widely across industries. Other firms, meanwhile, emphasize the need to embed AI expertise in every segment of the business ecosystem, from sales and marketing to computing, rather than designate a single CAIO.
Labor leadership welcomed the E.O., to a degree. Although the AFL-CIO formally welcomed the actions of Section 6, President Liz Shuler also made it clear that reports and guidelines were not enough. Only stronger collective bargaining rights would substantially improve labor’s chances of containing the AI threat: negotiating investments to retrain workers displaced by AI, securing the right to vet (and adapt) technologies before they enter the market or workplace, and ensuring fairness in AI-enabled hiring processes. Further, a sectoral approach is needed to identify and mitigate industry-specific issues. For example, in June 2023, in response to an OSTP inquiry into automated surveillance of workers, the Consumer Financial Protection Bureau (CFPB) embarked on a similar inquiry into the specificities of this practice in the data broker industry. Policy recommendations by experts echo this line of thinking. Broadening the range of issues over which workers have a right to bargain and providing workers adequate access to information about AI technology and its uses would center the role of workers in the implementation of AI in the workplace. Many of these requests, though, remain unanswered.
Explanatory Features of the American Political Economy
The present standstill on the AI threat in labor policy results from several key dimensions of the American political economy.
AI threatens knowledge workers: The first dimension is common to all industrialized countries. While earlier technologies such as automation and robotics have long posed replacement threats to routine workers and lower-income earners, newer AI can now replace the skills required for knowledge-intensive, software-dependent, and usually higher-income professions. Consider how digital platforms like Uber now use algorithms to perform managerial tasks, rendering middle-class administrators redundant in their business model. Now that AI can perform document research and drafting, professions such as law and journalism are under threat. Workers in these higher-educated or higher-income professions, though, may have more sway in public policy than their lower-income, automation-threatened counterparts. The recent Hollywood writers’ strike offers a case in point. Here, higher-educated scriptwriters successfully mediatized their grievances against AI, winning significant concessions from employers.
The two-party system: In the context of the U.S. two-party system, these high-skill workers play a crucial electoral role. Often urban liberals, these upper-middle-class Americans are active voters who are reluctant to vote for the socially conservative Republican party. Democrats like Biden, therefore, are keen to cultivate their political support, reinforcing the party’s dependence on urban knowledge workers. Consider who pressured Biden to incorporate the AI Bill of Rights Blueprint into his Executive Order: a group of congressional Democrats organized by politicians representing Seattle and Boston, two urban knowledge and tech hubs (Rep. Pramila Jayapal and Sen. Ed Markey, respectively). But Biden was unable to deliver the binding reforms they requested. Indeed, once in power, these politicians have difficulty reversing the status-quo (and heavily business-oriented) bias of American political institutions to enact comprehensive protections for AI-threatened workers, even those who are high-skilled.
Weak trade unions: What happened in Hollywood is the exception, not the rule. Low unionization rates, combined with the weak influence of trade unions on American labor policy, make it difficult for the Biden-Harris Administration to offer much more than an Executive Order. This small-scale policy action requires neither the consent of Congress nor that of workers and the companies that employ them; it merely signals the Administration’s attention to a prized constituency. This is why the AFL-CIO views the Order as a tangential measure. Comprehensively addressing the AI threat would require a structural transformation of the American labor system.
Fragmented government: The federal bureaucracy, a weak central state, lacks both the capacity and the coordinating mechanisms found in the governments of many other affluent democracies to steer policy in favor of labor. These deficits can make it difficult, for example, to recruit the relevant technical staff into civil service careers or to synchronize policy across a sprawling array of agencies and offices. Both the federal government and labor must also contend with the powerful and varied challenges wrought by the 50 states. The rights of states over their own political economies can both fragment the influence of workers and pose significant coordination problems. The porous common law system, furthermore, often gives business an edge: firms can draw on their plentiful financial resources to reshape employment law in their favor.
State-Market Relations, Labor Power, and the Future of AI
Cross-national comparative studies have identified three ways that states can shape the AI industry: “promotion,” “development,” or “control.” Recent events put the United States squarely in the first camp. In this decentralized approach, state oversight takes a backseat to private-sector development, creating optimal conditions for corporate growth while sidelining labor protections. The pattern is evident: the nonbinding nature of the E.O., the lack of legal backing for federal agencies like the National Labor Relations Board to implement AI governance, and the absence of specific regulation of AI-enabled workplace surveillance and hiring practices all highlight the American tendency to promote corporate interests. U.S. political and tech leaders prioritize innovation over protection, often citing national competitiveness with China as a reason to resist attempts to regulate AI development. Ironically, China has made its AI industry globally competitive through “development,” a more collaborative approach that can still produce powerful results for business.
Importantly, collaborative state-market relations are not unique to authoritarian countries. The affluent democracies that give workers more influence over labor policy, such as Germany and Norway, have addressed the AI threat head-on—and still managed to benefit from rapidly developing AI technologies. In both the German and Norwegian cases, workers’ representatives played a central role in crafting regulations that protected workers’ interests. In Germany, this resulted in a formal work agreement that specified limits on the oversight of employees and delineated the mechanisms by which employees could contest algorithmic decision-making. Norwegian workers’ representatives preferred an ongoing informal dialogue with management to ensure employee privacy and data protection. Such examples demonstrate how and why the AI threat can be brought to heel, but they underscore that such outcomes are attainable only if the political and economic context permits.
Are similar arrangements achievable in the United States? From an institutional standpoint, it is very difficult. The features of the American political economy raised above—the two-party system, the structure of industrial relations, fragmented government—bias the country towards the “promotion” of an unchecked AI industry. But workers can and do seek opportunities to leverage specific policy venues at the city, state, or firm level. It is also important to convince firms that regulating AI in the workplace is good for business. Such a frame shift can help to propel and justify the structural transformation required to support inclusive growth.