A Step Toward Safer AI: Governor Hochul Signs the RAISE Act

In a significant move for the future of technology, Governor Kathy Hochul has signed the RAISE Act into law, positioning New York as a national leader in artificial intelligence accountability. By enacting this major safety legislation, New York joins California in a growing state-led effort to establish crucial guardrails around AI development while federal regulations continue to lag.

The path to this moment, however, featured a familiar tension between innovation and accountability. After state lawmakers passed the bill in June, intense lobbying from the tech industry prompted Governor Hochul to propose scaling it back. The result was a classic political compromise: the Governor agreed to sign the original, stronger version of the bill now, while lawmakers pledged to address her requested modifications in the next legislative session.

What Does the RAISE Act Actually Do?

At its core, the law is about transparency and accountability for the most powerful AI systems. It introduces concrete requirements designed to protect the public:

  • Safety Transparency: Large-scale AI developers will be required to disclose their safety testing protocols and data security measures publicly.
  • Rapid Incident Reporting: Companies must report serious AI safety incidents to a new state office within 72 hours—a crucial step in addressing risks promptly.
  • Enforcement with Teeth: A dedicated new office within the Department of Financial Services will monitor the AI landscape. Companies that fail to submit safety reports or make false statements face substantial penalties: fines of up to $1 million for an initial violation, and $3 million for subsequent ones.

A Unified Front Among Tech States

Governor Hochul directly acknowledged California’s parallel action, framing the two laws as a coordinated benchmark. “This law builds on California’s recently adopted framework,” she stated, “creating a unified benchmark among the country’s leading tech states as the federal government lags, failing to implement common-sense regulations that protect the public.”

This sentiment of state-level resolve was echoed forcefully by one of the bill’s sponsors, State Senator Andrew Gounardes. In a social media post celebrating the signing, he highlighted the struggle to get the bill across the finish line: “Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country.”

What This Means Moving Forward

The RAISE Act represents more than just a new set of rules for New York. It signals a shifting landscape where the demand for responsible and safe AI is being translated into actionable law. By prioritizing public safety and corporate transparency, New York is helping to chart a course for how powerful technologies can be harnessed responsibly, setting a precedent other states—and perhaps eventually the federal government—may follow.

A Divided Tech Industry: Support, Opposition, and Political Battles

The reaction from the technology sector has been a study in contrasts, highlighting the complex debate surrounding AI governance. On one side, leading AI developers like OpenAI and Anthropic have expressed public support for the RAISE Act. Sarah Heck, Anthropic’s head of external affairs, framed the state actions as a catalyst for federal action, telling The New York Times: “The fact that two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them.”

However, this supportive rhetoric exists alongside more aggressive political opposition. A super PAC backed by venture capital firm Andreessen Horowitz and OpenAI President Greg Brockman has now set its sights on challenging Assemblyman Alex Bores, a key co-sponsor of the bill. This move reveals a tactical divide within the industry: supporting regulation in principle while working to unseat the lawmakers who successfully pass it.

Assemblyman Bores responded to the targeting with characteristic dry wit, telling journalists, “I appreciate how straightforward they’re being about it.”

The Growing State vs. Federal Divide

The clash in New York is a microcosm of a larger national struggle over who gets to set the rules for AI. This state-level momentum now faces a direct challenge from the federal executive branch. President Donald Trump recently signed an executive order—spearheaded by his AI czar, David Sacks—that directs federal agencies to actively challenge state AI laws the administration deems to stifle innovation.

This order represents the Trump Administration’s most forceful attempt yet to curtail state regulatory authority over the tech sector, setting the stage for inevitable legal battles. The alignment of figures like Sacks and venture capital interests in opposing state regulation underscores the high-stakes political and economic battle shaping up around AI’s future.

(For a more detailed look at the implications of Trump’s executive order and the roles of David Sacks and Andreessen Horowitz, listen to the latest episode of the Equity podcast.)

The Road Ahead

The enactment of the RAISE Act is far from the end of the story. It is a pivotal event at the convergence of technology, politics, and policy. New York and California have planted a flag for transparency and safety, earning measured praise from some industry leaders while inciting political retaliation from others. Now, these state laws will collide with a federal strategy aimed at preempting them.

What unfolds next will test the resilience of state-led regulation and define the playing field for American AI innovation for years to come. One thing is clear: the debate has moved from theoretical discussions to tangible laws and political consequences, with New York firmly in the lead.

FAQ: Understanding New York’s RAISE Act & The AI Regulation Debate

Q1: What is the RAISE Act in simple terms?
A: It’s a new New York state law that requires major AI companies to be more transparent about how they test for safety and to quickly report any serious safety incidents. It also creates a state office to monitor AI and imposes hefty fines for companies that don’t comply.

Q2: Who does this law affect?
A: Primarily, it affects large-scale AI developers (the exact threshold will be defined by the new state office). It’s designed to regulate the companies building the most powerful and potentially risky AI systems.

Q3: Why is New York doing this instead of the federal government?
A: There is currently no comprehensive federal law regulating AI safety. States like New York and California are stepping in to fill what they see as a critical gap in public protection, hoping their actions will push Congress to create a national standard.

Q4: What kind of penalty could a company face?
A: Companies that fail to submit required safety reports or make false statements can be fined up to $1 million for a first violation and up to $3 million for subsequent violations.

Q5: I heard AI companies like OpenAI supported this. Is that true?
A: Publicly, yes. Companies like OpenAI and Anthropic expressed support for the bill’s goals of transparency and safety. However, the political action is more complicated. A super PAC backed by key tech figures is now targeting one of the bill’s sponsors for re-election, showing a split between supportive statements and political opposition.

Q6: How is this different from California’s AI law?
A: The laws are similar in spirit—both focus on safety, transparency, and incident reporting for large AI models. Governor Hochul directly stated that New York’s law builds on California’s framework to create a “unified benchmark” between the two major tech states.

Q7: What is the conflict with the federal government about?
A: President Trump recently signed an executive order directing federal agencies to challenge state AI laws that his administration believes hinder innovation. This sets up a potential legal battle over whether states have the right to regulate AI within their borders, creating a significant conflict between state and federal authority.

Q8: What happens next with this law?
A: The law will now be implemented, which involves setting up the new monitoring office and defining specific rules. Furthermore, the political and legal battles will continue, with the tech industry’s political challenges and the impending federal-state court clashes determining the law’s long-term stability and influence.
