OpenAI to Share New AI Model with U.S. Safety Institute
- OpenAI is partnering with the U.S. AI Safety Institute to provide early access to its new AI model.
- The collaboration aims to allow the U.S. institute to evaluate OpenAI's upcoming flagship model.
- The partnership underscores mounting pressure on OpenAI to demonstrate its commitment to AI safety.
OpenAI CEO Sam Altman announced that the company is partnering with the U.S. AI Safety Institute to provide early access to its upcoming generative AI model for safety testing. This initiative aims to address growing concerns about AI risks, particularly following OpenAI's controversial decision to disband a safety-focused team earlier this year. The announcement, made via a post on X, follows a similar agreement with the U.K.'s AI safety body, signaling OpenAI's attempt to counter perceptions that it has deprioritized AI safety in favor of advancing its technology.

The disbandment of the safety team, which was tasked with developing controls to prevent potential risks from superintelligent AI, led to the resignation of key figures, including co-founder Ilya Sutskever. Critics have raised alarms over OpenAI's commitment to safety, especially after the company eliminated non-disparagement clauses that previously discouraged whistleblowing. In response, OpenAI pledged to allocate 20% of its computing resources to safety research, a promise that had not been fulfilled for the disbanded team.

Despite these commitments, skepticism remains regarding OpenAI's safety measures, particularly after the company filled its newly formed safety commission with insiders. Recent inquiries from five U.S. senators have further highlighted concerns about OpenAI's policies. In a letter, OpenAI's chief strategy officer emphasized the company's dedication to rigorous safety protocols.

The timing of OpenAI's agreement with the U.S. AI Safety Institute raises questions about potential regulatory influence, especially as the company has ramped up its lobbying efforts, spending significantly more in 2024 than in the previous year. Altman's involvement in the Department of Homeland Security's AI Safety and Security Board adds another layer to the scrutiny surrounding OpenAI's approach to AI governance.