The resignation of Caitlin Kalinowski, OpenAI’s head of robotics and consumer hardware, announced last Saturday, exposes cracks within the company over its growing involvement in the defence sector. For an organisation aggressively pursuing new revenue streams under Sam Altman, the public opposition of such a high-profile executive is a wake-up call about corporate governance and team stability.
The immediate trigger for Kalinowski’s departure was OpenAI’s contract with the US Department of Defense. According to her account, the company decided to deploy its models on the Pentagon’s classified cloud networks without due deliberation or clear controls. Kalinowski, who previously spent years leading the development of AR glasses at Meta Platforms, argues that rushing into such strategic contracts is a management error. In her view, the line between supporting national security and enabling unchecked surveillance or autonomous combat systems has been blurred in the process.
For OpenAI, this is both a reputational and an operational blow. Kalinowski joined the company only in 2024, tasked with building momentum for the startup’s hardware ambitions. Her departure suggests growing internal resistance to the pace at which the mission of ‘AI for the good of humanity’ is being redefined in favour of geopolitical pragmatism.
While OpenAI responded almost immediately with a statement about ‘red lines’ excluding participation in domestic surveillance or weapons development, the blunt account Kalinowski has made public may make it harder for the company to attract further engineering and ethical talent. From a business perspective, the episode exposes the challenge facing AI giants: how to scale government partnerships without losing the trust of key leaders.
