Ilya Sutskever, co-founder of OpenAI and a trailblazer in artificial intelligence, has launched a new venture, Safe Superintelligence (SSI). Coming shortly after his exit from OpenAI, the move marks a significant shift in the trajectory of AI safety development.
Sutskever announced the new firm on X, stating plainly that it will pursue safe superintelligence head-on, with a single focus, a single goal, and a single product. The statement captures the core of SSI’s mission: an undivided commitment to advancing AI safety.
Sutskever was OpenAI’s chief scientist, so his resignation is noteworthy not only for the role he held but also for its timing. He co-led OpenAI’s Superalignment team alongside Jan Leike, who departed in May to join the rival AI company Anthropic. The Superalignment team, which was responsible for steering and controlling advanced AI systems, was dissolved shortly after their departures, marking a significant turn in OpenAI’s internal approach to AI governance.
Sutskever publicly apologized for his role in Sam Altman’s abrupt firing and subsequent reinstatement. On November 20, he posted on X that he regretted his participation in the board’s actions, that he had never intended to harm OpenAI, and that he loved what they had built together. The apology conveyed both genuine remorse and his continued dedication to the people and ideas that have shaped OpenAI.
Sutskever’s move from OpenAI to SSI marks a turning point in both his career and the broader course of AI development. With its laser focus on safe superintelligence, the new firm could significantly influence the direction of AI research and deployment. By prioritizing safety and avoiding distractions, SSI aims to address one of the AI community’s most pressing concerns: the responsible and ethical evolution of AI technology.