May 14, 2025
Congress Finally Took On AI Policy. It's Just Getting Started
Congress' recent passage of the TAKE IT DOWN Act marks a pivotal breakthrough. The bill, which helps stop the spread of non-consensual intimate images and deepfakes online, is the first major internet content legislation since 2018, and arguably the first law ever to address harms from generative AI.
Finally, we have proof that Washington can act on AI and digital harms. Now, we need to keep the momentum going.
The U.S. Capitol building is seen at sunset.
For years, Congress stalled on technology policy. This wasn't for lack of warning signs. In 2021, Facebook whistleblower Frances Haugen went public with internal research showing that Instagram was toxic for many teens. Two years later, another whistleblower, Arturo Béjar, came forward with allegations that Meta ignored growing evidence of harm to young Facebook users.
Meta wasn't alone. A wealth of research over the last five years has found evidence of platforms including TikTok, Snapchat, and YouTube recommending harmful content to teens. Polling from my organization, Americans for Responsible Innovation, shows that 79 percent of Americans are now concerned about AI's impact on younger generations.
These warning signs, and the public awareness they raised, made policy to mitigate online harms increasingly viable. Still, Congress stalled for years.
One of the prime suspects behind this legislative paralysis was Big Tech's lobbying clout. The industry spent an estimated $250 million to stop regulatory bills in the 117th Congress. Hyper-partisan divides didn't make legislative movement on tech policy any easier.
In the era of AI, past failures to act on social media cast a long shadow. Would Washington wait until AI harms were rampant and entrenched before responding? Many in tech policy braced for another round of inaction. Thankfully, two important things changed.
First, the tech industry's stance toward regulation shifted. For years, major platforms treated any new regulation as a mortal threat, deploying lobbyists to kill even modest proposals. Now, we're seeing a more strategic approach. In the case of the TAKE IT DOWN Act, Big Tech did something almost unheard of: it didn't fight the bill. In fact, several Silicon Valley giants, including Meta, Snapchat, Google, and X, actively backed it. Even hardline industry groups backed off.
The change of heart may partly be due to a shifting regulatory environment. In the absence of federal laws, states started advancing their own digital rules, creating a patchwork that was even harder for industry to swallow than federal regulation.
The second change is within Congress itself. Burned by years of inaction on social media, lawmakers in both parties want to get ahead of the curve on AI. Over the past year, instead of waiting for the next whistleblower crisis, Congress did something novel: it educated itself and built bipartisan consensus early.
The Senate convened a series of AI insight forums that brought in experts from all sides. Bipartisan working groups in the House and Senate built out roadmaps on AI policy priorities. This process treated AI policy as a shared challenge requiring knowledge and nuance. It's a heartening contrast to the spectacle of social media hearings from the 2010s.
The TAKE IT DOWN Act itself is a step forward that offers a template for future political success. It zeroes in on a specific, clearly harmful phenomenon (non-consensual intimate images), and provides a remedy: a federal mandate that such images be swiftly taken down at victims' request. As some lawmakers in Congress have noted, the TAKE IT DOWN Act's passage shows Congress is getting serious about addressing the harms posed by new technologies.
And when it comes to bipartisan opportunities to pass tech legislation through Congress, there are plenty of bills to choose from. There's the NO FAKES Act, which would outlaw unauthorized AI deepfakes of real people's likenesses; the CREATE AI Act, which would expand access to AI resources for students and researchers; and the TEST AI Act, which would set up sandbox environments to evaluate new AI models.
As happened with the TAKE IT DOWN Act, tech industry leaders are starting to come to the table rather than trying to block progress.
The key going forward will be to keep this spirit alive. Now is the time for Congress to schedule hearings and markups to move additional bipartisan bills through the pipeline, building a suite of smart guardrails for AI and online platforms. These measures can protect consumers and society from the worst harms while encouraging innovation.
A year ago, many would have laughed at the idea of Congress leading on issues like novel harms from generative AI. But lessons have been learned. The combination of public pressure, shifting industry attitudes, and lawmakers doing their homework has created an opening. Now it's up to us to widen it.
Brad Carson is president of Americans for Responsible Innovation (ARI). Carson is a former congressman representing Oklahoma's 2nd District and served as acting undersecretary of Defense.
The views expressed in this article are the writer's own.