There are headlines that make you do a double-take, and then there are headlines that seem genetically engineered to trigger existential unease and dark amusement in equal measure. Recent reporting by the Guardian lands squarely in the latter camp: leading AI companies are being urged, firmly, by insiders and watchdogs alike, to actually calculate the chances that their creations might slip the leash and reduce humanity to supporting characters in our own story.
It’s equal parts history seminar, math puzzle, and cautionary sci-fi parable.
Calculating Catastrophe: The Compton Constant
This doesn’t stem from someone at OpenAI having a particularly bad nightmare about their chatbot going rogue. As detailed in the Guardian’s coverage, MIT physicist and AI safety advocate Max Tegmark has called for AI firms to mirror the sober risk assessments that accompanied the first atomic bomb test. In the lead-up to the 1945 Trinity detonation, US physicist Arthur Compton tried to estimate—down to the decimal—whether splitting the atom might accidentally ignite the entire atmosphere. Compton’s comfortingly small number? Slightly less than one in three million, a figure that let the physicists press the big red button with only mild nausea.
In a recent paper co-authored with his MIT students, Tegmark advocates for a similar calculation, the so-called “Compton constant,” to be made for artificial superintelligence (ASI). This number, as described in the Guardian, represents the probability that a future, all-powerful AI could escape human oversight and set its own priorities, none of which, presumably, would include making us more productive at work. Tegmark’s own calculations, cited by the Guardian, suggest a 90% probability that a sufficiently advanced AI could pose an existential threat. When the odds shift from ‘one in three million’ to ‘almost certain,’ the stakes get harder to laugh off.
If someone told you your next airplane ticket came with a 90% chance of a flaming nosedive, would you still board? Certainly a useful threshold for reflection, if nothing else.
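Purely for a sense of scale, here is a minimal back-of-envelope sketch in Python comparing the two figures the Guardian cites: Compton’s roughly one-in-three-million estimate for Trinity and Tegmark’s 90% figure for ASI. The variable names and the comparison itself are illustrative assumptions, not anything drawn from Tegmark’s paper.

```python
# Back-of-envelope comparison of the two probabilities cited in the Guardian piece.
# Figures only; the variable names and framing are illustrative, not from the paper.

compton_trinity_risk = 1 / 3_000_000  # "slightly less than one in three million"
tegmark_asi_risk = 0.90               # Tegmark's cited 90% probability of an ASI threat

# How many times larger the second figure is than the first
ratio = tegmark_asi_risk / compton_trinity_risk

print(f"Trinity estimate: {compton_trinity_risk:.2e} (about 1 in {1 / compton_trinity_risk:,.0f})")
print(f"ASI estimate:     {tegmark_asi_risk:.0%}")
print(f"The ASI figure is roughly {ratio:,.0f} times the Trinity figure.")
```

Run as written, the sketch reports a ratio of roughly 2.7 million, which is the whole rhetorical point of the comparison.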
From Open Letters to Industry Soul-Searching
Back in the post-ChatGPT glow of 2023, the Future of Life Institute, co-founded by Tegmark, released an open letter calling for a pause in the development of increasingly powerful AI systems. As relayed in the Guardian report, the letter gathered over 33,000 signatures, including those of industry luminaries such as Elon Musk and Apple co-founder Steve Wozniak. The letter warned of an “out-of-control race” to build “ever more powerful digital minds” that, as the Future of Life Institute put it, no one could “understand, predict, or reliably control.”
Far from being dismissed as mere hand-wringing, these concerns seem to have seeded broader efforts at regulation and self-examination. The Guardian notes that Tegmark, along with prominent figures such as the computer scientist Yoshua Bengio and researchers from OpenAI and Google DeepMind, recently helped produce the Singapore Consensus on Global AI Safety Research Priorities. This report recommends that the sector prioritize three big research questions: developing robust methods to measure current and future AI’s impact, defining and instilling intended AI behaviors, and, perhaps most pressingly, ensuring that humans can actually maintain control of these systems going forward.
Adding a bit of international intrigue, the Guardian reports that at the recent governmental AI summit in Paris, US vice-president JD Vance dismissed safety fears, declaring the future “not going to be won by hand-wringing.” Despite this, Tegmark says the post-Paris mood has improved, telling the Guardian that “the gloom from Paris has gone and international collaboration has come roaring back.” Whether that feeling lasts longer than a tech hype cycle remains to be seen.
Quantifying the End of the World
There’s something oddly mesmerizing about the image of AI companies, whose usual pitch involves words like “innovation” and “efficiency,” being handed a mandated math problem in existential risk. Not “trust us,” or even “rest assured,” but, as the Guardian describes, a percentage chance that the next leap forward doesn’t tip us headlong into a world where humans are several steps down the pecking order.
Is this any stranger than the nuclear scientists gathering in the desert to watch, and perhaps hasten, the world’s end? From the perspective of someone used to combing the records of past panics and future visions, it all feels like business as usual. Humanity gets new tools and, reliably, pauses to ask: “So, when does this bite us back?”
At present, AI’s most common output is prose, code, and, on a bad day, slightly better spam. Yet if the Singapore Consensus panel and the MIT paper’s dire figures are even remotely on target, then crunching the numbers on world-ending mishaps looks less like sci-fi and more like prudent safety protocol.
Will the next AI breakthrough doom us, save us, or simply make online ads better at guessing our breakfast order? If nothing else, one can take comfort in knowing that, this time, there will at least be a formal risk calculation for posterity to archive. So, how’s your personal Compton constant looking? And would knowing that number make you sleep better, or only worse?