Commentary
RealClearMarkets

Artificial Intelligence Meets Artificial Regulation

Harold Furchtgott-Roth
Senior Fellow and Director, Center for the Economics of the Internet

Kirk Arner
Legal Fellow, Center for the Economics of the Internet
United States President Joe Biden speaks about artificial intelligence, in the Roosevelt Room of the White House in Washington, DC, on July 21, 2023. (Andrew Caballero-Reynolds/AFP via Getty Images)

There’s an old military adage: “never volunteer for anything.”  At first blush, it might seem downright unpatriotic.  But perhaps this saying was coined by an officer who spent too much time in Washington.

Last month, the Biden administration marshalled executives from Google, Amazon, Microsoft, Meta, OpenAI, Anthropic, and Inflection to the White House to sign a “voluntary” agreement that, among other things, would subject their AI systems to audits before public release and require data sharing with the government and academics.

If the agreement were truly “voluntary,” there would be no need for anyone to travel to D.C., much less the White House.  Willing businesses could simply sign the pledge virtually, promising best efforts towards an ostensibly noble goal—assuming that doing so wouldn’t invite antitrust questions.

But, of course, the agreement was anything but voluntary.  It was planned and coordinated by the White House.  America has hundreds of thousands of businesses that enter into contracts and agreements on a daily basis.  Few, if any, of them visit Washington—much less the White House—purely of their own volition.

So then, why did major tech and AI executives find themselves in D.C. last month? 

Simply put—at the moment, there are no federal laws or regulations governing the use of AI.  And the Administration evidently views that absence as a problem to be fixed.

To attempt to remedy this, the administration negotiated the now-signed “voluntary” agreement with America’s leading AI firms.  It is fair to assume that the White House made concessions to individual companies that signed the agreement, and in turn may have threatened action against any company that refused to sign.  Indeed, just as unlucky military members “discover” reasons to volunteer for dangerous missions, there were undoubtedly reasons these companies signed the White House’s “voluntary” agreement.

It gets worse.  The signatories’ not-so-voluntary volunteerism will continue far beyond the agreement’s signing, if the federal government has its way.  Astonishingly, the Washington Post reports that deviations from the agreement by its signatories could trigger enforcement actions by the Federal Trade Commission, according to a Commission official “who spoke on the condition of anonymity to discuss the agency’s thinking on enforcement.”

Nevertheless, the same Washington Post report notes that “the pledge does not include specific deadlines or reporting requirements—and the mandates are outlined in broad language—which could complicate regulators’ efforts to hold the companies to their promises.”  Yet, during a speech about the pledge, President Biden claimed that “these commitments are real, and they are concrete.”  So which is it?  How would enforcement work?  Are only the seven businesses that signed the so-called “voluntary” agreement liable for breaking it?  Are firms that did not sign it liable too?

None of this is how our government is supposed to work.  Usually, government officials are required to explain publicly how they will enforce regulations, either in court or through agency administrative processes.  Alternatively, they would garner public support for legislation and get a bill signed into law.

But there are no regulations for AI; nor are there clear and accepted guidelines to distinguish fair and unfair trade practices for AI.  An anonymous government official might lob threats as to how an agency might punish deviations from a “voluntary” agreement, but any FTC efforts to do so would have to go to court in public, where a skeptical judge might wonder how a “voluntary” agreement to which the FTC is not a party could be enforced by it.

Indeed, asking a federal agency to enforce a voluntary agreement to which it is not a party makes little sense, particularly when the agency has no statutory authority to regulate the content of the agreement.   To the extent AI requires regulation, the better approach would be for Congress to pass a law carefully instructing an agency such as the FTC on how to regulate AI.  That hasn’t happened.

Inviting top business leaders to the White House to sign a “voluntary” agreement that a federal agency secretly seeks to enforce—despite no statutory authority to regulate the substance of the agreement—is no way to run a government. 

Ultimately, the White House’s AI moves should be considered “artificial regulation.”  While the threats are real, the regulation is anything but.