ChatGPT-maker OpenAI partners with AI Safety Institute
OpenAI has partnered with the U.S. AI Safety Institute, a federal government body, to give it early access to the company’s next foundation model for safety testing. In a post on X, CEO Sam Altman said the goal was to “work together to push forward the science of AI evaluations.”
In May this year, the ChatGPT-maker disbanded its Superalignment team, which had been created to ensure the company’s AI products align with human intentions and do not go “rogue.” The move led to the resignation of the team’s leads: Jan Leike, who went on to join rival Anthropic’s safety research team, and co-founder Ilya Sutskever, who then started his own safety-focused AI startup, Safe Superintelligence Inc.
After his departure, Leike said in a post that his team had not been given the compute it was promised, as the company’s focus had shifted entirely to launching new products.
In the same post announcing the partnership with the U.S. AI Safety Institute, Altman emphasised that the company would stick to its commitment to allocate 20% of its compute to safety efforts.
In June, retired U.S. Army general Paul M. Nakasone joined the AI firm’s board of directors. In the blog post announcing the appointment, OpenAI said, “Nakasone’s appointment reflects OpenAI’s commitment to safety and security, and underscores the growing significance of cybersecurity as the impact of AI technology continues to grow.”
Altman also confirmed that the company would remove the indefinite non-disparagement clause, which had drawn criticism for its severity, from employee contracts. “We want current and former employees to be able to raise concerns and feel comfortable doing so,” he noted.