A former OpenAI researcher opened up about how he “ruffled some feathers” by writing and sharing documents related to security at the company, and was ultimately fired.
Leopold Aschenbrenner, who graduated from Columbia University at 19, according to his LinkedIn, worked on OpenAI’s superalignment team before he was reportedly “fired for leaking” in April. He spoke out about the experience in a recent interview with podcaster Dwarkesh Patel released Tuesday.
Aschenbrenner said he wrote a memo after a “major security incident” that he did not specify in the interview, and shared it with a few OpenAI board members. In the memo, he wrote that the company’s security was “egregiously insufficient” in protecting against the theft of “key algorithmic secrets from foreign actors,” Aschenbrenner said. The AI researcher had previously shared the memo with others at OpenAI, “who mostly said it was helpful,” he added.
HR later gave him a warning about the memo, Aschenbrenner said, telling him that it was “racist” and “unconstructive” to worry about Chinese Communist Party espionage. An OpenAI lawyer later asked him about his views on AI and AGI and whether Aschenbrenner and the superalignment team were “loyal to the company,” as the AI researcher put it.
Aschenbrenner claimed the company then went through his OpenAI digital artifacts.
He was fired shortly after, he said, with the company alleging that he had leaked confidential information and wasn’t forthcoming in its investigation, and citing his prior warning from HR for sharing the memo with the board members.
Aschenbrenner said the leak in question referred to a “brainstorming document on preparedness, on safety, and security measures” needed for artificial general intelligence, or AGI, that he shared with three external researchers for feedback. He said he had reviewed the document for any sensitive information before sharing it, and that it was “totally normal” at the company to share this kind of information for feedback.
Aschenbrenner said OpenAI deemed a line about “planning for AGI by 2027-2028 and not setting timelines for preparedness” as confidential. He said he wrote the document a few months after the superalignment team was announced, and that the announcement referenced a four-year planning horizon.
In its announcement of the superalignment team posted in July 2023, OpenAI said its goal was to “solve the core technical challenges of superintelligence alignment in four years.”
“I didn’t think that planning horizon was sensitive,” Aschenbrenner said in the interview. “You know, it’s the kind of thing Sam says publicly all the time,” he said, referring to CEO Sam Altman.
An OpenAI spokesperson told Business Insider that the concerns Aschenbrenner raised internally and to its Board of Directors “did not lead to his separation.”
“While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work,” the OpenAI spokesperson said.
Aschenbrenner is one of several former employees who have recently spoken out about safety concerns at OpenAI. Most recently, a group of nine current and former OpenAI employees signed a letter calling for more transparency at AI companies and protection for those who express concern about the technology.