Safe Superintelligence Inc.'s primary goal, as stated on its website, is to build safe superintelligence. The company approaches safety and capabilities in tandem, aiming to solve both through revolutionary engineering and scientific breakthroughs, and plans to advance capabilities as quickly as possible while ensuring its safety measures stay ahead. The venture continues Ilya Sutskever's work from his time at OpenAI, where he co-led the Superalignment team tasked with designing ways to control powerful new AI systems.
Ilya Sutskever co-founded Safe Superintelligence Inc. (SSI) after leaving OpenAI. His partners in the venture are Daniel Levy, a former OpenAI colleague, and Daniel Gross, who co-founded Cue and previously led AI efforts at Apple.
The founders plan to address the tension between advancing capabilities and ensuring safety through what they call "revolutionary engineering and scientific breakthroughs," with safety always remaining ahead. SSI will pursue safe superintelligence in "a straight shot, with one focus, one goal, and one product."