Senators Propose New AI Regulatory Agency


Sen. Lindsey Graham: “Do you agree, Mr. Altman, that these tools you’re creating should be licensed?”

Sam Altman (CEO of OpenAI): “Yes. We’ve been calling for this.”

Sen. Lindsey Graham: “And do you agree with me that the simplest way and the most effective way [to implement AI licensing requirements] is have an agency that is nimbler and smarter than Congress, which should be easy to create, overlooking what you do?”

Sam Altman: “Yes. We’d be enthusiastic about that.”

AI regulation is a matter of what and when, not if. That creates both risks and opportunities for founders of, executives at, and investors in AI companies. In this article, I summarize the key business-relevant developments from today’s three-hour Senate hearing on AI regulation.

The most repeated concerns from senators were:

  • Misinformation (especially related to elections and healthcare).
  • Intellectual property rights (especially the use of copyright-protected works to train generative AI models).
  • Safety (a catch-all term that was used numerous times to refer to everything from child safety to non-proliferation of AI that could design novel biological agents to not encouraging self-harm).

The (not-mutually-exclusive) list of possible solutions that had the most momentum in the committee consisted of:

  • “Nutrition Labels” for AI models. These would be required consumer disclosures about what data a model was trained on and how it scores on various bias benchmark tests.
  • FDA-like Clinical Trials. AI models would be subject to clinical trial-like testing before they could be put into production. Third-party scientists would participate in auditing the AI models to ensure safety standards were met.
  • A federal right-of-action. This would be a statute that goes beyond simply clarifying that section 230 doesn’t apply to generative AI companies. It would also preempt states and ensure that anyone who was harmed by a generative AI company could sue that company in a federal court.
  • Copyrights for AI. Copyrighted works could not be used to train generative AI models without the copyright holder’s permission.
  • A new AI regulatory agency. This agency would require licensure of any AI models above a certain scale of computational power, number of users, and/or abilities. Multiple senators compared this to how nuclear reactors or pharmaceutical drugs require licensure.

The AI regulatory agency was the most discussed solution and was supported by OpenAI CEO Sam Altman as well as a core bipartisan group of five senators:

  • Senator Michael Bennett (D-CO)
  • Senator Peter Welch (D-VT)
  • Senator Lindsey Graham (R-SC)
  • Senator Cory Booker (D-NJ)
  • Senator John Kennedy (R-LA)

Some other senators including Sen. Jon Ossoff (D-GA) seemed to be seriously and genuinely considering the agency solution.

However, it’s likely that if a new agency is created to regulate AI, the scope of the new agency wouldn’t actually be restricted to just AI. There are multiple groups of senators with their own pet bills that they have been trying to pass in recent years to regulate tech companies, and any new agency bill would probably incorporate at least some of those.

For example, last year Sen. Bennett and Sen. Welch introduced the Digital Platform Commission Act to try to create a new agency that would regulate tech companies. During today’s hearing, Sen. Welch said that they would be reintroducing the bill this year (with AI rebranding).

Similarly, Senators Klobuchar, Coons, and Cassidy have their own pet bill: the “Platform Accountability and Transparency Act,” which would require social media companies to disclose their algorithms to certain researchers. Given that many of the senators and witnesses at today’s hearing advocated for various types of data or model transparency for AI, Sen. Klobuchar’s bill could easily be merged with Sen. Bennett’s bill to create legislation with sweeping consequences for the entire tech industry, from social media companies to AI companies.

“We cannot afford to be as late to responsibly regulate generative AI as we have been to social media, because the consequences, both positive and negative, will exceed those of social media by orders of magnitude.”

Senator Christopher Coons (D-DE), May 16, 2023 Senate Hearing on AI

Ricky Nave

In college, Ricky studied physics & math, won a prestigious research competition hosted by Oak Ridge National Laboratory, started several small businesses including an energy chewing gum business and a computer repair business, and graduated with a thesis in algebraic topology. After graduating, Ricky attended grad school at Duke University in the mathematics PhD program where he worked on quantum algorithms & non-Euclidean geometry models for flexible proteins. He also worked in cybersecurity at Los Alamos during this time before eventually dropping out of grad school to join a startup working on formal semantic modeling for legal documents. Finally, he left that startup to start his own in the finance & crypto space. Now, he helps entrepreneurs pay less capital gains tax.