Cooperation between the US Congress and the executive branch on artificial intelligence will be threatened if the Supreme Court upends the current, productive “division of labor”.

A bipartisan working group of US senators, convened by Senate Majority Leader Chuck Schumer in late 2023 to address artificial intelligence (AI), has released a relatively modest roadmap. Despite the many warnings and promises surrounding the new technology, including those aired at the nine forums of CEOs, advocates, and academics organized by the working group, the roadmap offers only “guidelines”, in Schumer’s words, for Senate committees to take up through the upper chamber’s normal process.

The recommendations include increasing appropriations to $32 billion a year for nondefense AI research and development, with additional funding for national security. The roadmap also urges committees to consider policies addressing AI harms such as online child sexual abuse material and the nonconsensual distribution of intimate images, as well as safety, barriers facing startups and small businesses, the need for worker training, and violations of consumer protection and civil rights laws.

Even if Senate committees can develop and consider bills before the end of the year, the House of Representatives is unlikely to be ready. There, the majority and minority leaders announced a bipartisan AI task force only two months ago.

Congress can afford to act deliberately, however, because even as legislators get up to speed, the government is not standing still. The executive branch is busy implementing a White House executive order (EO) instructing agencies on applying current law to the new technology. The EO, issued six months ago, tells offices with grantmaking and purchasing funds to promote AI risk mitigation in bioengineering and accuracy in AI products. Agencies are directed to determine methods for protecting critical infrastructure from cyberattacks and consumers from deceptive uses of AI. Parts of the government that enforce civil rights laws are instructed to upgrade their procedures for protecting citizens against AI-related violations of those laws.

At the 180-day mark, the Biden administration reported impressive early results. These include a 288% increase in applications for federal AI jobs, new guidance on the use of generative AI tools in hiring, the creation of an AI-focused safety and security board, and new generative AI guidance for federal purchasers.

In addition, the administration has imposed export controls on key semiconductor inputs to advanced AI systems and worked closely with Congress on the CHIPS and Science Act to fund US manufacturing and research. And it is working with allies on international governance of AI.

This division of labor between Congress and agencies is a functioning democracy in action. The Supreme Court blessed such an approach in 1984 when it decided Chevron v. Natural Resources Defense Council and upheld the Environmental Protection Agency’s reinterpretation of a Clean Air Act provision to grant companies more leniency in meeting the law’s requirements. Justice John Paul Stevens wrote in the decision that an agency had the leeway to change its interpretation of a statute.

The court, however, may pull the rug out from under the government’s ability to modernize regulations. In mid-January, a majority of justices signaled their intention to overturn this “Chevron deference doctrine”. This comes on top of the court’s new “major questions doctrine”, which precludes agencies from acting on questions of major economic or political significance unless a statute gives them clear authority to do so.

Overturning the 1984 ruling would herald a dramatic shift that would hamstring the deployment of new technologies such as AI. It would also force Congress to write entirely new statutes before an issue is ripe and, as Justice Elena Kagan noted, kick the interpretations of agency experts “who actually know about AI” to courts that know even less. The technology of concern is just emerging, and legislators themselves, as Kagan stressed, “know Congress can hardly see a week in the future”. After all, their job is to be generalists, not technology or technology policy experts.

Congress should not create rigid laws before the relevant challenges and opportunities of new technologies come into better focus. Legislators can, instead, conduct hearings, as long as they can rely on agencies to adapt existing laws to these technologies and to developing circumstances. The new Senate roadmap notes that the working group “believes that existing laws, including [those] related to consumer protection and civil rights, need to consistently and effectively apply to AI systems and their developers, deployers, and users”. The roadmap also encourages relevant committees to consider ensuring that regulators can access the information needed to enforce existing law.

In fact, the roadmap anticipates an iterative process through which Congress learns from the administration’s early steps. It “encourages the executive branch and the Senate Appropriations Committee to continue assessing how to handle ongoing needs for federal investments in AI” and asks the executive branch to share updates on administration activities related to AI “to better inform the legislative process”.

If, however, experts from the Department of Energy to the Department of Homeland Security are precluded from reviewing the application of current law to AI, Congress will be compelled to enact new statutes that attempt to forestall problems before they occur. The result will be the worst of all worlds for a critical new industry: innovation-curbing overreach that nonetheless fails to protect the public, and tremendous uncertainty as rules are challenged and work their way through the courts.

Investors, companies, users, and workers would be left without signals about, for example, how the Federal Trade Commission (FTC), the Department of Housing and Urban Development, and the Consumer Financial Protection Bureau would apply the anti-discrimination provisions of the Fair Housing Act and the Fair Credit Reporting Act to AI.

Washington’s ability to respond to a new technology, demonstrated by the administration’s nose-to-the-grindstone EO implementation and by Congress’s working group and task force approach, should be encouraged, especially at a time when trust in government is low, owing in part to its failure to address the challenges of the social media era.

The world will not sit still. The Chinese government is promulgating rules to ensure its control over the new technologies. Europe is enacting a far-reaching new AI law, first drafted in early 2021 before generative AI was rolled out. In the contest to set global rules, the United States will be sidelined if the Supreme Court bars the executive branch, with its expertise, from leveraging existing laws. The country will end up idly watching as new statutes, and the challenges to them, wend their way through often-slow legislative and judicial processes.