OpenAI co-founder Ilya Sutskever launches new startup Safe Superintelligence Inc.

Jun 20, 2024 - 16:00

Ilya Sutskever, the co-founder and former chief scientist of OpenAI, has announced the launch of his new venture, Safe Superintelligence Inc. (SSI). 

Alongside co-founders Daniel Gross and Daniel Levy, Sutskever aims to address what they believe to be the most critical problem in the field of AI: developing a safe and powerful superintelligent AI system.

The company’s statement, posted by Sutskever on X, declares, “Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time. We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.”

The founders describe SSI as not just their mission but also their name and entire product roadmap. 

“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI,” the statement reads.

An antithesis to OpenAI?

While Sutskever and OpenAI CEO Sam Altman have publicly expressed mutual respect, recent events suggest underlying tensions. 

Sutskever was instrumental in the board’s attempt to oust Altman, a move he later said he regretted. He formally resigned in May after months of keeping a low profile on X, during which onlookers wondered about his whereabouts. 

This incident, together with the departure of other key researchers citing safety concerns, raises questions about OpenAI’s priorities and direction. 

OpenAI’s “superalignment team,” tasked with keeping AI aligned with human values and interests, was effectively dismantled after Sutskever and fellow researcher Jan Leike left the company this year. 

Sutskever’s decision to leave seems to stem from his desire to pursue a project that aligns more closely with his vision for the future of AI development, a vision OpenAI appears to be failing as it drifts from its founding principles.

Safety-first AI

The risks surrounding AI are hotly contested. 

While humanity has a primal urge to fear artificial systems more intelligent than ourselves (a totally fair sentiment), not all AI researchers believe superintelligence is achievable in the near future. 

However, a key point is that neglecting the risks now could prove devastating in the future.

SSI intends to tackle safety in tandem with developing AI: “We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead,” the founders explain.

This approach allows SSI to “scale in peace,” free from the distractions of management overhead, product cycles, and short-term commercial pressures. 

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the statement stresses.

Assembling a dream team

To achieve their goals, SSI is assembling a “lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.” 

“We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent,” the statement notes. 

“If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.”

With SSI, yet another player joins the field.

It will be interesting to see who joins SSI, and particularly whether there’s a strong movement of talent away from OpenAI.
