U.S. senators and technology experts met for the second of Senate Majority Leader Chuck Schumer’s AI Insight Forums Oct. 24. Among the 21 invitees were venture capitalists, academics, civil rights campaigners, and industry figures.
The discussion at the second Insight Forum, which was closed to the public, focused on how AI could enable innovation, and the innovation required to ensure that AI progress is safe, according to a press release from Schumer’s office.
In the previous forum, attended by the CEOs of most of the large tech companies, Schumer asked who agreed that some sort of legislation would be required. All attendees assented.
This time, he asked for a show of hands to see who agreed that significant federal funding would be required to support AI innovation. Again, all hands were raised, according to Suresh Venkatasubramanian, a professor of data science and computer science at Brown University, who attended the forum.
“I was pleasantly surprised to see that many of the folks who would, on paper, identify as people from the business side of the world were advocating forcefully for the need for regulation,” Venkatasubramanian says.
After the forum, Senator Mike Rounds, a Republican from South Dakota, said that, to fuel AI development, $8 billion would be required next year, $16 billion the following year, and $32 billion the year after—estimates which originated in the 2021 National Security Commission on Artificial Intelligence’s final report.
Schumer, a Democrat from New York; Todd Young, a Republican from Indiana; and Rounds also identified other issues with bipartisan support. These included the need to outcompete China, and the need for workforce initiatives, such as immigration reform and training programs.
Schumer’s Insight Forums remain the most visible sign of AI action in Congress. But lawmakers from both houses have started to introduce bills and propose frameworks, as they make the case for their preferred federal approach to this transformative technology.
A growing number of proposals
The proposed legislation and legislative frameworks fall into a number of categories. Broad regulatory proposals, which would apply regardless of the context in which the AI system is used, are perhaps the most highly contested.
One such proposal, aimed at curbing online harms to U.S. citizens, would include mandated disclosure of the data sources used to train an AI system and watermarking AI-generated outputs so that they can be identified.
Another, more focused on risks to public safety, would require companies seeking to develop sophisticated general-purpose AI models, like OpenAI’s GPT-4, to acquire a license and submit to audits from an independent oversight body, and would hold AI companies legally responsible for harms caused by their models.
In contrast, a third “light touch” bill would require companies to self-certify that their systems are safe.
A number of legislative proposals seek to regulate specific uses of, and potential harms from, AI. These include the REAL Political Advertisements Act, which would require a disclaimer on political ads that use images or video generated by artificial intelligence, and the Artificial Intelligence and Biosecurity Risk Assessment Act, which would require the Department of Health and Human Services to assess and respond to public health risks caused by AI progress.
Some proposals aim to boost innovation rather than regulate harms. The CREATE AI Act would establish the National Artificial Intelligence Research Resource to provide academic researchers with the computational capacity, the data, and the tools required to keep pace with industrial AI research.
Finally, some proposals seek to ensure the U.S. has access to skilled workers. The Keep STEM Talent Act would aim to increase the share of foreign STEM graduates from U.S. universities who remain in the U.S., and the “AI Bill”—based on the GI Bill—would retrain U.S. workers.
Not all the action is happening at the federal level. A report from the Software Alliance, a trade group, found that, as of Sept. 21, state legislators had introduced 191 AI-related bills, a 440% increase on the previous year. In particular, California state legislators could play an important role, given the large number of leading AI companies based there.
Not all government action is legislative, either. The Biden Administration has extracted voluntary commitments to follow AI safety best practices from leading AI companies, and an AI executive order, which will require AI models to undergo safety assessment before being used by federal workers, is expected to land in the next week. Federal agencies have already begun to act—in July, the Federal Trade Commission opened an investigation into OpenAI over potential consumer protection violations.
What comes next?
Schumer has said he wants to develop a comprehensive AI legislative package. The many bills and frameworks that lawmakers are starting to introduce could be integrated into that vision, says Klon Kitchen, managing director and global technology policy practice lead at Beacon Global Strategies.
Introducing bills and putting them through the committee process allows lawmakers to refine their proposals and understand which would command sufficient support. Then, the Senate leadership will be able to select from bills that cover similar issues—such as the public safety-focused regulatory proposal and the “light touch” bill—to put together their package, he says.
The process is similar to the passage of the CHIPS and Science Act, which provides funding for domestic semiconductor R&D and other scientific initiatives and was signed by President Biden in August 2022, says Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists, a think tank. The CHIPS and Science Act also began with an announcement from Senators Schumer and Young, and progressed through Senate committees.
But that process took years, and passing ambitious legislation will become a lot more difficult in 2024 as the presidential election begins to dominate, says Kitchen. “I suspect that because AI’s implications are so vast and still so poorly understood, that what we’ll ultimately end up doing is tracking more toward incremental, narrow fixes and points of engagement.”
That could change “if there is a significant piece of AI-enabled misinformation or disinformation,” Kitchen says. “If that happens, then lawmakers are going to be highly motivated to do something and start holding people accountable, much in the same way that they did with fake news back in the 2016 election.”