[ 2025-12-22 22:33:58 ] | AUTHOR: Tanmay@Fourslash | CATEGORY: POLICY
TITLE: New York Governor Signs Landmark AI Safety Bill into Law
// New York has enacted the Responsible AI Safety and Education Act, mandating safety protocols for large AI developers to mitigate risks such as bioweapon development and cyberattacks. The law establishes enforcement mechanisms and surpasses similar measures in California.
- The RAISE Act requires large AI developers to create, implement and annually review safety plans addressing risks such as bioweapon development and model theft.
- New York establishes a dedicated AI safety office within the Department of Financial Services, funded by developer fees, to enforce the law and issue annual reports.
- The bill passed despite opposition from tech lobbyists and imposes fines of up to $3 million for repeat violations, with incident reporting required within 72 hours.
New York Enacts Nation-Leading AI Safety Legislation
New York's governor signed the Responsible AI Safety and Education Act (RAISE Act) into law, establishing stringent requirements for large artificial intelligence developers to address severe risks posed by advanced AI systems. Sponsored by State Senator Andrew Gounardes and Assemblymember Alex Bores, the legislation aims to balance innovation with public safety amid growing concerns over AI's potential for misuse.
The RAISE Act targets 'frontier' AI models—those at the cutting edge of capability—from developers whose systems meet certain scale thresholds. It requires these companies to develop comprehensive safety and security protocols to prevent harms such as assisting in bioweapon creation or enabling automated criminal activities. The law applies to developers operating in New York, affecting major players in the AI industry.
Key Provisions of the RAISE Act
Under the new law, covered AI developers must prepare, implement, publish and adhere to detailed safety plans. These plans must outline:
- Assessments of model safety risks.
- Application and review of risk-mitigation techniques.
- Use of third-party evaluations for catastrophic risks.
- Cybersecurity measures to prevent model theft.
- Procedures for identifying and responding to safety incidents.
- Frameworks to ensure ongoing compliance with best practices.
Developers are required to review and update these plans annually, publishing justifications for any changes within 30 days. Critical safety incidents must be reported to authorities within 72 hours, a shorter timeline than the 15 days mandated in similar California legislation.
The act creates a dedicated Office of AI Safety within the New York State Department of Financial Services. This office, funded through fees assessed on AI developers, will enforce the law, promulgate regulations, collect fees and issue an annual public report on AI safety trends and compliance. Access to safety incident reports will be expanded for oversight purposes.
Penalties for non-compliance are steep: fines reach up to $3 million for repeat violations, an increase over prior standards. The law also broadens the scope of reportable incidents, requiring developers to notify regulators whenever they reasonably suspect a safety breach has occurred.
Context and Expert Warnings
The legislation responds to escalating warnings from global experts about AI's existential risks. The International AI Safety Report, compiled by advisors from 30 countries, highlights potential threats including large-scale labor disruptions, AI-facilitated hacking, biological attacks and loss of human control over general-purpose AI systems. A separate report notes that the window for establishing effective governance may soon close as AI capabilities advance rapidly.
New York's RAISE Act builds on but exceeds California's recent AI safety measures, which lack a dedicated enforcement body and have longer reporting deadlines. Proponents argue the law sets a national benchmark, fostering responsible innovation in sectors like healthcare and climate modeling while safeguarding against misuse.
Legislative Path and Opposition
The bill, designated S6953B in the Senate and A6453B in the Assembly, was introduced earlier in 2025 and navigated committees before reaching the governor's desk. It faced significant resistance from technology lobbyists and venture capitalists, who reportedly spent millions in efforts to block or dilute its provisions. Despite this, bipartisan support in the state legislature propelled it forward.
State Senator Andrew Gounardes, chairman of the Senate Committee on Budget and Revenue, representing the 26th District, emphasized the law's role in prioritizing public safety over corporate profits. Assemblymember Alex Bores echoed this, stating the act ensures AI serves people rather than unchecked corporate interests.
The signing occurs against a backdrop of federal debates on AI regulation. Recent statements from federal officials have raised concerns about state-level interventions, though the RAISE Act confines itself to the state's jurisdiction over in-state operations.
Broader Implications
As AI integrates deeper into daily life—from medical diagnostics to environmental forecasting—New York's law could influence national and international standards. Experts predict it may encourage other states to adopt similar frameworks, potentially leading to a patchwork of regulations that pressures federal action.
The Department of Financial Services is expected to begin rulemaking soon, with the first safety plans due from developers within months. Annual reports from the new office will provide transparency into compliance and emerging risks, aiding policymakers in refining approaches.
This development underscores New York's position as a leader in technology governance, following precedents in data privacy and cybersecurity. While the tech industry warns that the requirements could stifle innovation, supporters contend they are essential to harness AI's benefits without courting catastrophe.
Tanmay is the founder of Fourslash, an AI-first research studio pioneering intelligent solutions for complex problems. A former tech journalist turned content marketing expert, he specializes in crypto, AI, blockchain, and emerging technologies.