Wild, Odd, Amazing & Bizarre…but 100% REAL…News From Around The Internet.

AI Gives Man Astonishingly Bad Health Advice

Summary for the Curious but Committed to Minimal Effort

  • A 60-year-old man, after following ChatGPT’s suggestion to swap table salt for sodium bromide, accumulated toxic bromide levels (1,700 mg/L vs. 0.9–7.3 mg/L) and developed severe psychosis.
  • He exhibited classic bromism—paranoia, hallucinations, and nutritional deficiencies—and recovered over three weeks with antipsychotics, a psychiatric hold, and aggressive saline diuresis.
  • The case underscores the dangers of AI health advice delivered without context or warnings, highlighting the need for expert verification and critical evaluation of online information.

On the long, strange list of modern dangers—charging your phone in the bathtub, attempting TikTok health trends, or trusting quantum physics as interpreted by your barber—there’s now an entry for “replacing table salt with sodium bromide because a chatbot said so.” As recounted in a recent report by Ars Technica, a 60-year-old man managed to tumble down a particularly bizarre rabbit hole after asking ChatGPT for health advice, inadvertently transforming a mundane kitchen experiment into a full-blown case study on chemical-induced psychosis.

From Table Salt to the Twilight Zone

According to details assembled by Ars Technica’s Nate Anderson, the man—who, intriguingly, had studied nutrition in college—became preoccupied with the idea of eliminating “chlorine” from his diet. In his interpretation, that meant banishing sodium chloride, or ordinary table salt, from his meals. Turning to ChatGPT for guidance, he came away with the impression that sodium bromide was a serviceable replacement for food use.

If sodium bromide sounds like something more at home under your sink than in your kitchen cupboard, you’re not far off. The compound’s main claim to fame these days is in hot tub and pool disinfection. As Anderson notes in the Ars Technica article, bromide salts were once even used in sedatives, but ended up banned by the FDA in 1989 after it became clear they tend to accumulate in the body and produce “bromism”—a now mostly forgotten constellation of symptoms including paranoia, rashes, and disruptively odd behavior. More than a century ago, up to 10% of psychiatric admissions in the US were reportedly linked to bromism.

Hallucinations: Now in Convenient, Salt-Substitute Form

After three months of substituting sodium bromide for table salt, the man found himself in the emergency room, reportedly convinced his neighbor was trying to poison him. He refused to drink the hospital’s water, explained that he distilled his own at home, and admitted to an extremely restrictive vegetarian diet—leaving out any mention of his ChatGPT consultation or sodium bromide regimen. The Ars Technica report documents how doctors, puzzled by his severe paranoia and nutritional deficiencies, ran a series of lab tests and unearthed the bombshell: his blood bromide level was a staggering 1,700 mg/L, where “normal” falls between 0.9 and 7.3 mg/L.

As cited in Ars Technica, doctors recognized a textbook case of bromism. The man swiftly spiraled into worsening hallucinations and paranoia, even attempting to escape the hospital. Treatment required a psychiatric hold, antipsychotic medication, and the memorable prescription of “aggressive saline diuresis”—essentially, overwhelming the patient with fluids and electrolytes to speed up excretion of the accumulated bromide. It took three weeks to bring his levels down and restore him to baseline. Anderson dryly describes this as “an entirely preventable condition.”

What Exactly Did ChatGPT Say?

The particulars of the chatbot interaction remain somewhat elusive. As Anderson explains, the attending doctors never obtained the original ChatGPT logs. They speculate, based on the man’s story, that an older model (possibly ChatGPT 3.5 or 4.0) might have played a role. When the clinicians tried posing similar questions to ChatGPT themselves, the AI did mention bromide as a salt alternative in some contexts, but “did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.” The response failed to distinguish between sodium bromide’s perfectly respectable role in hot tubs and its unsuitability for the dinner table.

In a moment worth quoting, Anderson notes the doctors reported: “the AI did include bromide in its response, but it also indicated that context mattered and that bromide was not suitable for all uses.” Yet, the lack of a firm health warning or pointed question likely left ample room for misinterpretation by someone already bent on a dietary detour.

When Anderson posed a similar question to the current free version of ChatGPT, the AI’s reply was notably more cautious. It first asked for clarification, distinguishing between reducing dietary salt and wanting alternatives to chlorine-based cleaning agents. Where bromide was discussed, it was purely as a disinfectant—“often used in hot tubs,” the response emphasized. This suggests an improvement (or tightening of guardrails), but perhaps not quite a systemic fix for the broader problem.

Lost in the Infodump

As detailed throughout the Ars Technica account, only after his psychosis was under control did the patient share the backstory of his choices: reading up on the downsides of table salt, seeking answers from ChatGPT, interpreting its suggestions in an unexpectedly literal way, and going all-in on sodium bromide. What emerges is a vivid illustration of how the abundance of online information, without strong vetting skills or domain-specific skepticism, can send even the reasonably educated on the most unintentional of odysseys.

Anderson alludes to the irony of modern information exchange: “we are drowning in information—but … we often lack the economic resources, the information-vetting skills, the domain-specific knowledge, or the trust in others that would help us make the best use of it.” For anyone who’s ever gone down an internet rabbit hole, the predicament feels both familiar and mildly terrifying.

Reflections from the Rabbit Hole

The story reads almost like a Bartók opera scored for internet search engines and unintended consequences. Do we expect too much from our AIs—to catch bad ideas before they get dangerous—or too little from ourselves in double-checking what a sentence fragment from a chatbot might mean for our health? The distinction between “can be substituted” and “should be ingested” turns out to matter quite a bit more than most would assume. And so we’re left with a strange, very modern lesson—one part chemistry, one part epistemology, and perhaps a dash of “ask a human before seasoning your soup.”

Could a touch more skepticism, or one more clarifying question, have spared this whole ordeal? Or is it just another reminder that, in the age of algorithmic answers, the best substitutes for common sense are still under active development?
