
So, Skynet: AI Firms Told to Figure Out the Whole ‘World Domination’ Thing

Summary for the Curious but Committed to Minimal Effort

  • AI safety advocates led by MIT's Max Tegmark propose an AI 'Compton constant', mirroring the atomic-era risk calculation, to quantify the chance that superintelligent systems escape human control; Tegmark's own estimate puts the probability of existential risk at 90%.
  • The 2023 Future of Life Institute open letter (33,000+ signatories, including Elon Musk and Steve Wozniak) spurred the Singapore Consensus on Global AI Safety Research Priorities, focusing on risk measurement, safe behavior definitions, and guaranteed human oversight.
  • Despite US vice-president JD Vance dismissing safety concerns at the Paris AI summit, Tegmark reports that international collaboration on formal AI risk assessment has rebounded strongly.

There are headlines that make you do a double-take, and then there are headlines that seem genetically engineered to trigger existential unease and dark amusement in equal measure. Recent reporting by the Guardian lands squarely in the latter camp: leading AI companies are being asked—firmly, by both insiders and watchdogs—to actually calculate the chances that their creations might slip the leash and turn humanity into supporting characters in our own reality.

It’s equal parts history seminar, math puzzle, and cautionary sci-fi parable.

Calculating Catastrophe: The Compton Constant

This doesn’t stem from someone at OpenAI having a particularly bad nightmare about their chatbot going rogue. As detailed in the Guardian’s coverage, MIT physicist and AI safety advocate Max Tegmark has called for AI firms to mirror the sober risk assessments that accompanied the first atomic bomb test. In the lead-up to the 1945 Trinity detonation, US physicist Arthur Compton tried to estimate—down to the decimal—whether splitting the atom might accidentally ignite the entire atmosphere. Compton’s comfortingly small number? Slightly less than one in three million, a figure that let the physicists press the big red button with only mild nausea.

In a recent paper co-authored with his MIT students, Tegmark and his team advocate for a similar calculation—the so-called “Compton constant”—to be made for artificial superintelligence (ASI). This number, as described in the Guardian, represents the probability that a future, all-powerful AI could escape human oversight and set its own priorities, none of which, presumably, would include making us more productive at work. Tegmark’s own calculations, cited by the Guardian, suggest a 90% probability that a sufficiently advanced AI could pose an existential threat. When the odds shift from ‘one in a million’ to ‘almost certain,’ the stakes get harder to laugh off.

If someone told you your next airplane ticket came with a 90% chance of a flaming nosedive, would you still board? A useful thought experiment, if nothing else.
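To make the scale of that shift concrete, here is a quick back-of-the-envelope comparison in Python. The two figures are the ones cited in the article (Compton's roughly 1-in-3-million atmospheric-ignition estimate and Tegmark's reported 90% figure); everything else is just arithmetic.

```python
# Illustrative only: comparing the two risk estimates cited in the article.
# ~1 in 3,000,000 is Compton's 1945 figure; 0.9 is Tegmark's reported
# estimate for superintelligent AI escaping human control.
compton_risk = 1 / 3_000_000
tegmark_risk = 0.9

ratio = tegmark_risk / compton_risk
print(f"Tegmark's figure is ~{ratio:,.0f}x Compton's")  # ~2,700,000x
```

In other words, the gap between the Trinity-test calculation and the MIT estimate isn't a matter of degree; it spans more than six orders of magnitude.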

From Open Letters to Industry Soul-Searching

Back in the post-ChatGPT glow of 2023, the Future of Life Institute, co-founded by Tegmark, released an open letter calling for a pause in the development of increasingly powerful AI systems. As relayed in the Guardian report, the letter gathered over 33,000 signatures, including industry luminaries such as Elon Musk and Apple co-founder Steve Wozniak. It warned of an "out-of-control race" to build "ever more powerful digital minds" that, as the Future of Life Institute put it, no one could "understand, predict, or reliably control."

Far from being dismissed as mere hand-wringing, these concerns seem to have seeded broader efforts at regulation and self-examination. The Guardian notes that Tegmark, along with prominent figures such as the computer scientist Yoshua Bengio and researchers from OpenAI and Google DeepMind, recently helped produce the Singapore Consensus on Global AI Safety Research Priorities. This report recommends that the sector prioritize three big research questions: developing robust methods to measure current and future AI’s impact, defining and instilling intended AI behaviors, and, perhaps most pressingly, ensuring that humans can actually maintain control of these systems going forward.

Adding a bit of international intrigue, the Guardian highlights how at the latest governmental AI summit in Paris, US vice-president JD Vance was quoted as dismissing safety fears, declaring the future “not going to be won by hand-wringing.” Despite this, Tegmark claims the post-Paris mood has improved, telling the Guardian that “the gloom from Paris has gone and international collaboration has come roaring back.” Whether that feeling lasts longer than a tech hype cycle remains to be seen.

Quantifying the End of the World

There’s something oddly mesmerizing about the image of AI companies, whose usual pitch involves words like “innovation” and “efficiency,” being handed a mandated math problem in existential risk. Not “trust us,” or even “rest assured,” but, as the Guardian describes, a percentage chance that the next leap forward doesn’t tip us headlong into a world where humans are several steps down the pecking order.

Is this any stranger than the nuclear scientists gathering in the desert to watch, and perhaps hasten, the world’s end? From the perspective of someone used to combing the records of past panics and future visions, it all feels like business as usual. Humanity gets new tools and, reliably, pauses to ask: “So, when does this bite us back?”

At present, AI’s most common output is prose, code, and, on a bad day, slightly better spam. Yet if the Singapore Consensus panel—and the MIT paper’s dire figures—are even remotely on target, the time to crunch the numbers on world-ending mishaps may be less sci-fi than prudent safety protocol.

Will the next AI breakthrough doom us, save us, or simply make online ads better at guessing our breakfast order? If nothing else, one can take comfort in knowing that, this time, there will at least be a formal risk calculation for posterity to archive. So, how’s your personal Compton constant looking? And would knowing that number make you sleep better, or only worse?
