AI companies upped their federal lobbying spend in 2024 amid regulatory uncertainty


Companies spent significantly more lobbying on AI issues at the U.S. federal level last year than in 2023 amid regulatory uncertainty.

According to data compiled by OpenSecrets, 648 companies spent on AI lobbying in 2024 versus 458 in 2023, a roughly 41% year-over-year increase.

Companies like Microsoft supported legislation such as the CREATE AI Act, which would support the benchmarking of AI systems developed in the U.S. Others, including OpenAI, put their weight behind the Advancement and Reliability Act, which would set up a dedicated government center for AI research.

Most AI labs — that is, companies dedicated almost exclusively to commercializing various kinds of AI tech — spent more backing legislative agenda items in 2024 than in 2023, the data shows.

OpenAI upped its lobbying expenditures to $1.76 million last year from $260,000 in 2023. Anthropic, OpenAI’s close rival, more than doubled its spend from $280,000 in 2023 to $720,000 last year, and enterprise-focused startup Cohere boosted its spending to $230,000 in 2024 from just $70,000 in 2023.

Both OpenAI and Anthropic made hires over the last year to coordinate their policymaker outreach. Anthropic brought on its first in-house lobbyist, Department of Justice alum Rachel Appleton, and OpenAI hired political veteran Chris Lehane as its new VP of policy.

All told, OpenAI, Anthropic, and Cohere set aside $2.71 million combined for their 2024 federal lobbying initiatives. That’s a tiny figure compared to what the larger tech industry put toward lobbying in the same timeframe ($61.5 million), but more than four times the total that the three AI labs spent in 2023 ($610,000).

TechCrunch reached out to OpenAI, Anthropic, and Cohere for comment but did not hear back as of press time.

Last year was a tumultuous one in domestic AI policymaking. In the first half alone, Congressional lawmakers considered more than 90 AI-related pieces of legislation, according to the Brennan Center. At the state level, over 700 laws were proposed.

Congress made little headway, prompting state lawmakers to forge ahead. Tennessee became the first state to protect voice artists from unauthorized AI cloning. Colorado adopted a tiered, risk-based approach to AI policy. And California Governor Gavin Newsom signed dozens of AI-related safety bills, a few of which require AI companies to disclose details about their training.

No state succeeded in enacting AI regulation as comprehensive as international frameworks like the EU’s AI Act, however.

After a protracted battle with special interests, Governor Newsom vetoed bill SB 1047, which would have imposed wide-ranging safety and transparency requirements on AI developers. Texas’ TRAIGA bill, which is even broader in scope, may suffer the same fate once it makes its way through the statehouse.

It’s unclear whether the federal government can make more progress on AI legislation this year versus last, or even whether there’s a strong appetite for codification. President Donald Trump has signaled his intention to largely deregulate the industry, clearing what he perceives to be roadblocks to U.S. dominance in AI.

On his first day in office, Trump revoked an executive order by former President Joe Biden that sought to reduce the risks AI might pose to consumers, workers, and national security. On Thursday, Trump signed an EO instructing federal agencies to suspend certain Biden-era AI policies and programs, potentially including export rules on AI models.

In November, Anthropic called for “targeted” federal AI regulation within the next 18 months, warning that the window for “proactive risk prevention is closing fast.” For its part, OpenAI in a recent policy doc called on the U.S. government to take more substantive action on AI and infrastructure to support the technology’s development.

