Assessing AI Risks: When Is AI Too Powerful to Control?
- Regulators are trying to determine the computing power threshold that indicates when AI systems may pose security risks.
- California has set a high threshold of 10^26 flops, while the EU proposes a lower threshold of 10^25 flops.
- Critics argue that relying solely on compute thresholds may not effectively mitigate risks, highlighting the need for adaptable regulations.
Regulators are grappling with how to determine when an AI system becomes powerful enough to pose security risks, particularly risks tied to weapons of mass destruction and cyberattacks. Current rules, including an executive order from President Biden, use a threshold of computing power, measured as the total number of floating-point operations (flops) used to train a model, as a proxy for AI capability. California has set a high bar at 10^26 flops, while the European Union's AI Act proposes a lower threshold of 10^25 flops.

Critics argue that relying solely on compute thresholds is shortsighted and may not effectively mitigate the risks posed by advanced AI systems. Some experts believe the flops metric is already becoming outdated because it fails to account for rapid advances in AI technology, underscoring how difficult it is for regulators to keep pace with the field. The lack of publicly available models that meet California's stringent threshold also raises concerns about whether companies are adequately sharing their safety precautions with the government.

Proponents respond that the established thresholds are necessary precisely because they exclude less capable models from safety-testing requirements, concentrating oversight on the most powerful systems. As AI systems grow more capable, the debate underscores the urgency of a regulatory approach that can adapt to fast-paced advances in AI technology while ensuring public safety.
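For a rough sense of what these thresholds mean in practice, the sketch below estimates a model's total training compute using the common back-of-the-envelope heuristic of roughly 6 flops per parameter per training token, then checks the estimate against the 10^26 and 10^25 marks. The heuristic and the example model sizes are illustrative assumptions, not definitions from the regulations or figures disclosed by any company.

```python
# Illustrative sketch: estimating total training compute and comparing it
# against the regulatory thresholds discussed above. The "6 * N * D" rule of
# thumb (~6 flops per parameter per training token) is a common heuristic,
# not a legal definition; the example runs below are hypothetical.

CALIFORNIA_THRESHOLD = 1e26  # total training flops (California's bar)
EU_AI_ACT_THRESHOLD = 1e25   # total training flops (EU AI Act's proposed bar)


def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of total training compute: ~6 flops per parameter per token."""
    return 6.0 * num_parameters * num_training_tokens


def classify(total_flops: float) -> str:
    """Report which of the two thresholds an estimated training run would cross."""
    if total_flops >= CALIFORNIA_THRESHOLD:
        return "exceeds both the California (1e26) and EU (1e25) thresholds"
    if total_flops >= EU_AI_ACT_THRESHOLD:
        return "exceeds the EU (1e25) threshold but not California's (1e26)"
    return "falls below both thresholds"


if __name__ == "__main__":
    # Hypothetical (parameters, training tokens) pairs, for illustration only.
    examples = {
        "7B params on 2T tokens": (7e9, 2e12),
        "400B params on 15T tokens": (400e9, 15e12),
        "2T params on 15T tokens": (2e12, 15e12),
    }
    for name, (params, tokens) in examples.items():
        flops = estimate_training_flops(params, tokens)
        print(f"{name}: ~{flops:.1e} flops -> {classify(flops)}")
```

Under these assumptions, a smaller run lands well below both thresholds, while only the largest hypothetical run crosses California's 10^26 line, which is consistent with critics' point that the rules would touch very few of today's publicly known models.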