OpenAI’s superalignment meltdown: can the company salvage public trust?

Ilya Sutskever and Jan Leike, the co-leads of OpenAI’s crucial “superalignment” team, abruptly resigned from OpenAI this week, casting a long shadow over the company’s commitment to safe and responsible AI development under CEO Sam Altman.

Leike, in particular, did not mince words. “Over the past years, safety culture and processes have taken a backseat to shiny products,” he declared in a parting shot, confirming the fears of those watching OpenAI’s breakneck pursuit of advanced AI with growing unease. “Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI,” he announced on X on May 17, 2024.

Sutskever and Leike are just the latest safety-conscious employees to head for the exits. 

Since November 2023, when Altman narrowly survived a boardroom coup attempt, at least five other key members of the superalignment team have either quit or been forced out:

  • Daniel Kokotajlo, who joined OpenAI in 2022 hoping to steer the company toward responsible AGI development, quit in April 2024 after losing faith in leadership’s ability to “responsibly handle AGI.”
  • Leopold Aschenbrenner and Pavel Izmailov, superalignment team members, were allegedly fired last month for “leaking” information, though OpenAI has provided no evidence of wrongdoing. Insiders speculate they were targeted for being Sutskever’s allies.
  • Cullen O’Keefe, another safety researcher, departed in April.
  • William Saunders resigned in February but is apparently bound by a non-disparagement agreement that prevents him from discussing his reasons.

Amid these developments, OpenAI has allegedly threatened to revoke employees’ equity if they criticize the company or Altman himself, according to Vox.

That’s made it tough to understand what’s really happening inside OpenAI, but the evidence suggests that its safety and alignment initiatives are failing, if they were ever sincere in the first place.

OpenAI’s controversial plot thickens

OpenAI, co-founded in 2015 by Elon Musk and Sam Altman, was originally committed to open-source research and responsible AI development.

However, as OpenAI’s vision has ballooned in recent years, the company has retreated behind closed doors. In 2019, it transitioned from a non-profit research lab to a “capped-profit” entity, fueling concerns about a shift toward commercialization over transparency.

Since then, OpenAI has guarded its research and models with iron-clad non-disclosure agreements and the threat of legal action against any employees who dare to speak out. 

Other key controversies in the startup’s short history include:

  • Last year, reports emerged of closed-door meetings between Altman and world leaders like UK Prime Minister Rishi Sunak, in which the OpenAI CEO allegedly offered to share the company’s tech with British intelligence services, raising fears of an AI arms race.
  • Altman’s erratic tweets have raised eyebrows, from musings about AI-powered global governance to admissions of existential-level risk that cast him as the pilot of a ship he cannot steer, when that isn’t the case.
  • Behind the scenes, sources describe a pressure-cooker environment where safety concerns are routinely brushed aside in the pursuit of headline-grabbing breakthroughs and lucrative partnerships.
  • In the most serious blow to Altman’s leadership yet, Sutskever himself was part of a failed boardroom coup in November 2023 that sought to oust the CEO. Altman managed to cling to power, but the episode showed just how tightly he is bound to OpenAI, and how difficult the two would be to pry apart.

Glancing over this timeline, it’s hard to separate OpenAI’s controversies from its leadership.

There’s no doubt that the company itself is made up of hundreds of talented individuals who genuinely want to channel their efforts toward the net good of society.

OpenAI is becoming the antihero of generative AI

While armchair diagnosis and character assassination of Altman are irresponsible, his reported history of manipulation, his lack of empathy for those urging caution, and his pursuit of grand visions at the expense of collaborators and public trust all raise questions.

Conversations surrounding Altman and his company have become increasingly vicious across X, Reddit, and the Y Combinator forum.

For instance, there’s barely a shred of positivity in the replies to Altman’s recent response to Leike’s departure. That’s coming from people within the AI community, who perhaps have stronger cause to empathize with Altman’s position than most.

It’s become increasingly difficult to find Altman supporters within the community. While tech bosses are often polarizing, they usually win strong followings, as Musk amply demonstrates.

Others, like Microsoft CEO Satya Nadella, win respect for their corporate nous and controlled, mature leadership style, to which Altman stands as the antithesis.

It’s also worth noting that other AI startups, like Anthropic, manage to keep a fairly low profile despite their models equalling, and at times exceeding, OpenAI’s. OpenAI, by contrast, has cultivated an intense, grandiose narrative that keeps it in the spotlight.

In the end, we should call it as it is. The pattern of secrecy, the dismissal of concerns, and the relentless pursuit of headline-grabbing breakthroughs have all contributed to a sense that OpenAI is no longer a good-faith actor in AI.

The moral licensing of the tech industry

Moral licensing has long plagued the tech industry, where the supposed nobility of the mission is used to justify all manner of ethical compromises. 

From Facebook’s “move fast and break things” mantra to Google’s “don’t be evil” slogan, tech giants have repeatedly invoked the language of progress and social good while engaging in questionable practices.

OpenAI’s mission to research and develop artificial general intelligence (AGI) “for the benefit of all humanity” invites perhaps the ultimate form of moral licensing.

Like Icarus, who ignored warnings and flew too close to the sun, Altman seems to propel the company beyond the limits of safety with his laissez-faire attitude and relentless pursuit of AI advancements.

The danger is that, if OpenAI were to develop super-powerful AGI, society might find itself tethered to the company’s feet when it falls.

So, what can we do about it all? Well, talk is cheap. Robust governance, continuous progressive dialogue, and sustained pressure are key.

Some criticized the EU AI Act as intrusive and destructive to European competitiveness, but maybe it’s right on the money. Perhaps it’s better to create tight, even intrusive, AI regulations now and roll them back as we better understand the technology’s trajectory.

As for OpenAI itself, as public pressure and media critique grow, Altman’s position could become less tenable.

If he were to leave or be ousted, we’d have to hope that something positive fills the vacuum he’d leave behind. 
