Mrinank Sharma, the head of Anthropic’s safeguards research team, has announced his departure in a cryptic and deeply philosophical resignation letter shared on X (formerly Twitter) on 9 February.
In the letter, Sharma warned that the world is currently facing a “threshold” of interconnected crises.
His departure comes at a pivotal moment for the Amazon- and Google-backed firm, as it transitions from its roots as a “safety-first” laboratory into a commercial powerhouse seeking a reported $350 billion valuation.
In his letter, which heavily referenced the works of poets such as Rainer Maria Rilke and William Stafford, Sharma suggested that humanity’s technical capacity is outstripping its moral foresight.
“We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,” Sharma wrote. He further noted that the world is in peril from a “whole series of interconnected crises unfolding in this very moment,” extending beyond just the risks posed by AI.
The resignation has sparked intense debate regarding the internal culture at Anthropic. Originally founded by former OpenAI executives who left due to concerns over commercialisation, Anthropic is now facing similar scrutiny.
Sharma admitted to the difficulty of allowing values to truly govern actions within a fast-moving organisation. “I’ve repeatedly seen how hard it is to truly let our values govern our actions,” he stated. “I’ve seen this within myself, within the organisation, where we constantly face pressures to set aside what matters most.”
Significantly, Sharma revealed that one of his final projects focused on how AI assistants might “distort our humanity” or make users “less human”, a concern that carries added weight as the company pivots towards “agentic” AI designed to handle complex office tasks.
The timing of the exit is notable, occurring just days after the launch of Claude Opus 4.6, an upgraded model designed for high-end coding and workplace productivity.
Industry observers suggest the push to “ship fast” to satisfy investors and compete with OpenAI’s latest models may have compromised the rigorous safety protocols Sharma’s team was tasked with maintaining.
Sharma is not the only high-profile departure; last week, leading AI scientist Behnam Neyshabur and R&D specialist Harsh Mehta also left the firm.
Anthropic has yet to officially comment on the resignation or the specific concerns raised in the letter.