Aug 29, 2024, 12:00 AM

U.S. AI Safety Institute partners with OpenAI and Anthropic for model testing

Highlights
  • OpenAI and Anthropic have agreed to allow the U.S. AI Safety Institute to test their new models before public release.
  • This agreement follows the Biden-Harris administration's executive order on AI, which emphasizes safety assessments and ethical considerations.
  • The collaboration aims to enhance safety practices and address concerns about the rapid advancements in AI technology.
Story

The U.S. AI Safety Institute has established a testing and evaluation agreement with OpenAI and Anthropic, two leading AI companies, to assess their new models before public release. The initiative follows heightened concern over the safety and ethical implications of artificial intelligence, particularly after the Biden-Harris administration's October 2023 executive order, which mandated safety assessments and research on AI's societal impacts. OpenAI CEO Sam Altman expressed support for the collaboration, emphasizing the importance of safety best practices.

The agreement gives the institute, part of the National Institute of Standards and Technology, access to major models from both companies before and after their public launch. The partnership aims to advance collaborative research on evaluating AI capabilities and safety risks, as well as on developing methods to mitigate those risks. Jason Kwon of OpenAI and Jack Clark of Anthropic both highlighted the collaboration's significance for responsible AI development.

The agreement arrives amid ongoing debate about effective oversight of the rapidly evolving AI landscape. Current and former OpenAI employees have raised concerns about a lack of accountability and transparency at AI companies, which they argue could undermine public trust and safety. Regulatory scrutiny is also increasing: the FTC and Department of Justice have reportedly launched antitrust investigations into major AI players, and California lawmakers have passed a bill that would mandate safety testing for AI models, reflecting a growing recognition of the need for regulatory frameworks in the sector.
