New York's RAISE Act expands state control over AI safety regulations
- The RAISE Act passed the New York legislature in June 2025, imposing safety standards on AI developers operating in the state.
- The legislation grants considerable enforcement authority to the attorney general to define and respond to AI risks.
- Critics argue that the act could stifle innovation and lacks necessary public oversight and participation in decision-making.
In June 2025, New York's legislature passed the Responsible AI Safety and Education (RAISE) Act, which aims to regulate artificial intelligence (AI) developers within the state. The legislation follows similar regulatory efforts in California and Colorado and seeks to impose transparency and safety requirements on companies developing advanced AI systems, particularly those deemed high-risk. The RAISE Act is distinctive in its focus on the largest AI developers, requiring them to maintain detailed safety plans and respond swiftly to major incidents. The bill was sponsored by Assembly member Alex Bores and Senator Andrew Gounardes and received bipartisan support before proceeding to Governor Kathy Hochul for approval.

One of the act's most controversial aspects is the authority it grants to the state's executive agencies, particularly the attorney general's office, to determine what constitutes an "unreasonable risk" associated with AI systems. Critics argue that this level of discretion undermines legislative oversight and public input on evolving safety standards. The enforcement framework established by the RAISE Act positions state officials as the primary guardians of AI safety, minimizing the role of the legislature and civil society in policymaking.

The act requires companies that invest significantly in AI to publish safety plans, retain documentation of risk-mitigation strategies for up to five years, and address safety incidents within 72 hours. Critics have pointed to potential drawbacks, including vague definitions and a broad scope that could stifle innovation. They caution that the mechanism for determining compliance may lack the transparency necessary for democratic input, since key decisions rest with state agencies.
This design raises questions about accountability and the balance of power in AI governance, particularly given the hefty penalties of $5 million to $15 million for noncompliance.

As New York joins a growing list of states looking to regulate AI, the implications of the RAISE Act will likely resonate across sectors, and industries reliant on AI technologies must navigate an increasingly complex regulatory landscape. Because New York's approach emphasizes enforcement over public input and participation, stakeholders may face challenges in adapting to these changes while striving to innovate responsibly. How AI governance in New York ultimately takes shape will depend on how these laws and regulations evolve in response to technological advances and public safety concerns.