On the long, strange list of modern dangers—charging your phone in the bathtub, attempting TikTok health trends, or trusting quantum physics as interpreted by your barber—there’s now an entry for “replacing table salt with sodium bromide because a chatbot said so.” As recounted in a recent report by Ars Technica, a 60-year-old man tumbled down a particularly bizarre rabbit hole after asking ChatGPT for health advice, inadvertently turning a mundane kitchen experiment into a full-blown case study in chemically induced psychosis.
From Table Salt to the Twilight Zone
According to details assembled by Ars Technica’s Nate Anderson, the man—who, intriguingly, had studied nutrition in college—became preoccupied with the idea of eliminating “chlorine” from his diet. In his interpretation, that meant banishing sodium chloride, or ordinary table salt, from his meals. Turning to ChatGPT for guidance, he came away with the impression that sodium bromide was a serviceable substitute for table salt in his food.
If sodium bromide sounds like something more at home under your sink than in your kitchen cupboard, you’re not far off. The compound’s main claim to fame these days is in hot tub and pool disinfection. As Anderson notes in the Ars Technica article, bromide salts were once even used in sedatives, but ended up banned by the FDA in 1989 after it became clear they tend to accumulate in the body and produce “bromism”—a now mostly forgotten constellation of symptoms including paranoia, rashes, and disruptively odd behavior. More than a century ago, up to 10% of psychiatric admissions in the US were reportedly linked to bromism.
Hallucinations: Now in Convenient, Salt-Substitute Form
After three months of substituting sodium bromide for table salt, the man found himself in the emergency room, reportedly convinced his neighbor was trying to poison him. He refused to drink the hospital’s water, explained that he distilled his own at home, and admitted to an extremely restrictive vegetarian diet, never mentioning his ChatGPT consultation or his sodium bromide regimen. The Ars Technica report documents how doctors, puzzled by his severe paranoia and nutritional deficiencies, ran a series of lab tests and unearthed the bombshell: his blood bromide level was a staggering 1,700 mg/L, where “normal” falls between 0.9 and 7.3 mg/L.
As recounted in Ars Technica, the doctors recognized a textbook case of bromism. The man swiftly spiraled into worsening hallucinations and paranoia, even attempting to escape the hospital. Treatment required a psychiatric hold, antipsychotic medication, and the memorable prescription of “aggressive saline diuresis”: flooding the patient with intravenous fluids and electrolytes so the kidneys flush out the accumulated bromide faster. It took three weeks to bring his levels down and restore him to baseline. Anderson dryly describes this as “an entirely preventable condition.”
What Exactly Did ChatGPT Say?
The particulars of the chatbot interaction remain somewhat elusive. As Anderson explains, the attending doctors never obtained the original ChatGPT logs. Based on the man’s account, they speculate that an older model (possibly ChatGPT 3.5 or 4.0) played a role. When the clinicians posed similar questions to ChatGPT themselves, the AI did mention bromide as a salt alternative in some contexts, but “did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.” In other words, the reply stopped short of warning that sodium bromide, however respectable its role in hot tubs, has no business on the dinner table.
In a moment worth quoting, Anderson notes that the doctors reported: “the AI did include bromide in its response, but it also indicated that context mattered and that bromide was not suitable for all uses.” Yet the absence of a firm health warning or a pointed follow-up question likely left ample room for misinterpretation by someone already bent on a dietary detour.
When Anderson posed a similar question to the current free version of ChatGPT, the AI’s reply was notably more cautious. It first asked for clarification, distinguishing between reducing dietary salt and wanting alternatives to chlorine-based cleaning agents. Where bromide was discussed, it was purely as a disinfectant—“often used in hot tubs,” the response emphasized. This suggests an improvement (or tightening of guardrails), but perhaps not quite a systemic fix for the broader problem.
Lost in the Infodump
As detailed throughout the Ars Technica account, only after his psychosis was under control did the patient share the backstory of his choices: reading up on the downsides of table salt, seeking answers from ChatGPT, interpreting its suggestions in an unexpectedly literal way, and going all-in on sodium bromide. What emerges is a vivid illustration of how the abundance of online information, without strong vetting skills or domain-specific skepticism, can send even the reasonably educated on the most unintentional of odysseys.
Anderson alludes to the irony of modern information exchange: “we are drowning in information—but … we often lack the economic resources, the information-vetting skills, the domain-specific knowledge, or the trust in others that would help us make the best use of it.” For anyone who has ever gone down an internet rabbit hole, the predicament feels both familiar and mildly terrifying.
Reflections from the Rabbit Hole
The story reads almost like a Bartók opera scored for internet search engines and unintended consequences. Do we expect too much from our AIs—to catch bad ideas before they get dangerous—or too little from ourselves in double-checking what a sentence fragment from a chatbot might mean for our health? The distinction between “can be substituted” and “should be ingested” turns out to matter quite a bit more than most would assume. And so we’re left with a strange, very modern lesson—one part chemistry, one part epistemology, and perhaps a dash of “ask a human before seasoning your soup.”
Could a touch more skepticism, or one more clarifying question, have spared this whole ordeal? Or is it just another reminder that, in the age of algorithmic answers, the best substitutes for common sense are still under active development?