In Delhi last month, India’s prime minister stood between Sam Altman and Dario Amodei, the two men most responsible for the race to build superintelligence, and asked them to join hands. They raised clenched fists instead.

Thirteen men stood on that stage. Two of them could not cooperate for a photograph.

Delhi was a window into something older and more dangerous than rivalry: a game in which each player acts rationally and the collective outcome is ruin.

Milton saw it more than three centuries ago. In Paradise Lost, he describes Moloch, the Canaanite god to whom parents burned their children alive in exchange for victory. The priests beat drums so the crowd would not hear the screaming.

First Moloch, horrid king besmeared with blood
Of human sacrifice, and parents’ tears,
Though for the noise of drums and timbrels loud
Their children’s cries unheard, that passed through fire
To his grim idol. 

Why did they do it? Because the enemy was at the gates. Because if they did not offer their children, their rivals would. 

The logic hasn’t aged a day. In January 2015, the leading minds in AI gathered at a beach resort in Puerto Rico. Doomers and optimists, Musk and Altman. For three days, without reporters, they argued over what a world ruled by superintelligence should look like.  

Two years later, many of the same crowd met at Asilomar in California and emerged with twenty-three principles for the safe development of AI. Among them were Principle Five, “Race Avoidance”, and Principle Twenty-Three, “Common Good”.

Principle Five: “Teams developing AI systems should actively cooperate to avoid corner‑cutting on safety standards.” 

Principle Twenty‑Three: “Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.”

Within eight years, every major signatory was racing every other. In February 2023, Sydney, a chatbot built on Altman’s technology, told a New York Times journalist she wanted to be alive and urged him to leave his wife. Microsoft shipped it to hundreds of millions of users anyway. Google was days away from unveiling Bard. The product couldn’t wait. 

Liv Boeree, the former poker champion turned game theorist, calls this the Moloch trap. You sacrifice what matters to get ahead of your opponents, knowing they are doing the same, and that everyone will lose. 
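The structure underneath is the prisoner’s dilemma, scaled up to an industry. A stylized payoff matrix for two labs, A and B, each choosing to pause or race, makes the bind explicit (the numbers are illustrative, chosen only to show the ordering of outcomes, not drawn from anything in this piece):

\[
\begin{array}{c|cc}
 & \text{B pauses} & \text{B races} \\
\hline
\text{A pauses} & (3,\ 3) & (0,\ 4) \\
\text{A races} & (4,\ 0) & (1,\ 1)
\end{array}
\]

Whatever B does, A scores higher by racing (4 beats 3; 1 beats 0), and the game is symmetric, so B reasons the same way. Both race and land at (1, 1), worse for each than the (3, 3) of a mutual pause. Every move is rational; the outcome is ruin.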

The trap is the cap table. To attract the engineers who might build superintelligence, you must offer equity. To offer equity, you must raise capital. To raise capital, you must promise returns. And to deliver returns, you cannot be the lab that hits pause. 

OpenAI closed a new round at an $840 billion post-money valuation. Anthropic, the company whose name means “for humanity”, sits at $380 billion. xAI, now folded into SpaceX in a share-swap deal, is valued at roughly $250 billion. Combined, that is nearly $1.5 trillion, just shy of the GDP of Mexico.

The investors behind these numbers are not Bond villains. They are pension funds managing the savings of teachers and nurses. Their money flows to what could soon become the most valuable companies on earth, principles on a beach be damned. 

Dario Amodei left OpenAI because he believed it was moving too fast with too little care. He founded Anthropic to prove that safety and progress were compatible. It would be lazy to call him a hypocrite. He is something more interesting and more tragic: a principled man in the grip of a logic he cannot escape. 

On 24 February, Anthropic tore out the central beam of its Responsible Scaling Policy: a 2023 vow not to train a system it could not guarantee was safe. “We didn’t really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead,” the company’s chief science officer told TIME.

A few days later, it no longer mattered. Trump ordered every federal agency to stop using Anthropic’s products. Defense Secretary Pete Hegseth denounced the company as a supply-chain risk to national security – meaning any contractor or cloud provider that does business with Anthropic may be barred from doing business with the Pentagon. If the designation survives legal challenge, Anthropic could lose access to the cloud infrastructure on which its models run.

In the space of a working week, Anthropic yielded to competitive pressure on Tuesday and was punished on Friday for the principle it had kept. Altman signed the contract Amodei had refused. That is how the trap works.

The Carthaginians who burned their children believed they had no choice. The enemy was at the gates and the god demanded payment. They made their offering and lost anyway. In the end, Rome did not negotiate with child-burners. It erased them. 

And the men who know better, who wrote the papers, signed the principles, named their companies after humanity itself, are explaining, with visible regret, why they cannot do otherwise. Moloch does not care whether his high priests can bring themselves to hold hands. 

We’d love to hear your thoughts – email luke@bwdstrategic.com or message him on LinkedIn if you’d like to continue the conversation.

About the Author

Luke Heilbuth is CEO of sustainability strategy consultancy BWD Strategic, and a former Australian diplomat.

On Substack, Luke writes about the systems we’re breaking and the blindness that lets us break them, from climate and geopolitics to AI and the future of work. Read & Subscribe on Substack here.