Wild, Odd, Amazing & Bizarre…but 100% REAL…News From Around The Internet.

So, Skynet: AI Firms Told to Figure Out the Whole ‘World Domination’ Thing

Summary for the Curious but Committed to Minimal Effort

  • AI safety advocates led by MIT’s Max Tegmark propose an AI ‘Compton constant’—mirroring the atomic-era risk calculation—to quantify the chance that superintelligent systems escape human control, with Tegmark estimating a 90% existential-risk probability.
  • The 2023 Future of Life Institute open letter (33,000+ signatories including Elon Musk and Steve Wozniak) spurred the Singapore Consensus on Global AI Safety Research Priorities, focusing on risk measurement, safe behavior definitions, and guaranteed human oversight.
  • Despite US vice-president JD Vance dismissing safety concerns at the Paris AI summit, Tegmark reports that international collaboration on formal AI risk assessment has rebounded strongly.

There are headlines that make you do a double-take, and then there are headlines that seem genetically engineered to trigger existential unease and dark amusement in equal measure. Recent reporting by the Guardian lands squarely in the latter camp: leading AI companies are being asked—firmly, by both insiders and watchdogs—to actually calculate the chances that their creations might slip the leash and turn humanity into supporting characters in our own reality.

It’s equal parts history seminar, math puzzle, and cautionary sci-fi parable.

Calculating Catastrophe: The Compton Constant

This doesn’t stem from someone at OpenAI having a particularly bad nightmare about their chatbot going rogue. As detailed in the Guardian’s coverage, MIT physicist and AI safety advocate Max Tegmark has called for AI firms to mirror the sober risk assessments that accompanied the first atomic bomb test. In the lead-up to the 1945 Trinity detonation, US physicist Arthur Compton tried to estimate—down to the decimal—whether splitting the atom might accidentally ignite the entire atmosphere. Compton’s comfortingly small number? Slightly less than one in three million, a figure that let the physicists press the big red button with only mild nausea.

In a recent paper co-authored with his MIT students, Tegmark and his team advocate for a similar calculation—the so-called “Compton constant”—to be made for artificial superintelligence (ASI). This number, as described in the Guardian, represents the probability that a future, all-powerful AI could escape human oversight and set its own priorities, none of which, presumably, would include making us more productive at work. Tegmark’s own calculations, cited by the Guardian, suggest a 90% probability that a sufficiently advanced AI could pose an existential threat. When the odds shift from ‘one in a million’ to ‘almost certain,’ the stakes get harder to laugh off.

If someone told you your next airplane ticket came with a 90% chance of a flaming nosedive, would you still board? It’s a useful threshold for reflection, if nothing else.

From Open Letters to Industry Soul-Searching

Back in the post-ChatGPT glow of 2023, the Future of Life Institute—co-founded by Tegmark—released an open letter advocating for a pause in the rapid development of increasingly powerful AI systems. As relayed in the Guardian report, the letter gathered over 33,000 signatures, including industry luminaries such as Elon Musk and Apple co-founder Steve Wozniak. The letter warned of an “out-of-control race” to build “ever more powerful digital minds” that, as the Future of Life Institute put it, no one could “understand, predict, or reliably control.”

Far from being dismissed as mere hand-wringing, these concerns seem to have seeded broader efforts at regulation and self-examination. The Guardian notes that Tegmark, along with prominent figures such as the computer scientist Yoshua Bengio and researchers from OpenAI and Google DeepMind, recently helped produce the Singapore Consensus on Global AI Safety Research Priorities. This report recommends that the sector prioritize three big research questions: developing robust methods to measure current and future AI’s impact, defining and instilling intended AI behaviors, and, perhaps most pressingly, ensuring that humans can actually maintain control of these systems going forward.

Adding a bit of international intrigue, the Guardian highlights how at the latest governmental AI summit in Paris, US vice-president JD Vance was quoted as dismissing safety fears, declaring the future “not going to be won by hand-wringing.” Despite this, Tegmark claims the post-Paris mood has improved, telling the Guardian that “the gloom from Paris has gone and international collaboration has come roaring back.” Whether that feeling lasts longer than a tech hype cycle remains to be seen.

Quantifying the End of the World

There’s something oddly mesmerizing about the image of AI companies, whose usual pitch involves words like “innovation” and “efficiency,” being handed a pointed math problem in existential risk. Not “trust us,” or even “rest assured,” but, as the Guardian describes, a percentage chance that the next leap forward doesn’t tip us headlong into a world where humans are several steps down the pecking order.

Is this any stranger than the nuclear scientists gathering in the desert to watch, and perhaps hasten, the world’s end? From the perspective of someone used to combing the records of past panics and future visions, it all feels like business as usual. Humanity gets new tools and, reliably, pauses to ask: “So, when does this bite us back?”

At present, AI’s most common output is prose, code, and, on a bad day, slightly better spam. Yet if the Singapore Consensus panel—and the MIT paper’s dire figures—are even remotely on target, the time to crunch the numbers on world-ending mishaps may be less sci-fi than prudent safety protocol.

Will the next AI breakthrough doom us, save us, or simply make online ads better at guessing our breakfast order? If nothing else, one can take comfort in knowing that, this time, there will at least be a formal risk calculation for posterity to archive. So, how’s your personal Compton constant looking? And would knowing that number make you sleep better, or only worse?
