Wild, Odd, Amazing & Bizarre…but 100% REAL…News From Around The Internet.

Reportedly Healthy Report Relies on Unreported Reports

Summary for the Curious but Committed to Minimal Effort

  • Kennedy’s 73-page “Make America Healthy Again” report cites over 500 studies; NOTUS and the Associated Press found at least seven that are completely fabricated, along with numerous misattributions.
  • Recurring errors—nonexistent studies, broken links, exaggerated findings—have fueled speculation of AI-generated hallucinations; the White House insists they’re mere “formatting issues.”
  • The MAHA mishaps underscore the dangers of sloppy citation practices and potential overreliance on automation in shaping high-stakes health policy.

Unraveling the “Make America Healthy Again” (MAHA) Commission report this week, one can’t help but get a whiff of déjà vu from the world of internet chain emails: bold headlines, science-y claims, and a fact-checker’s nightmare underneath. According to Fox’s summary article, which draws on reporting from the Associated Press and investigations by NOTUS, Health and Human Services Secretary Robert F. Kennedy Jr.’s latest health policy opus is laced with references to studies that—much like my hopes for finding a universal remote for life—do not exist.

A Fictional Footnote Frenzy

Let’s set the stage: Kennedy’s 73-page report aims to reposition America’s health trajectory, tackling everything from childhood anxiety to the alleged scourge of over-prescribed medication. The White House rolled it out, complete with policy suggestions and a request for $500 million in funding, as Fox details, drawing from the Associated Press. There’s only one hiccup: of the over 500 cited studies, NOTUS found at least seven that simply aren’t real, a finding also summarized by Fox and AP.

To clarify, these aren’t difficult-to-access sources or unpublished preprints—they appear not to exist at all. NOTUS, as cited in the Associated Press and Fox reports, spoke with epidemiologist Katherine Keyes, who was named as co-author on a cited paper about adolescent anxiety allegedly published in JAMA Pediatrics. Keyes confirmed, “The paper cited is not a real paper that I or my colleagues were involved with,” and noted to NOTUS that while she’s done work on the topic, nothing matches the cited title, venue, or authorship.

Other citations, according to NOTUS and interviews relayed by the Associated Press, misrepresented or exaggerated the findings of actual studies. Pediatric pulmonologist Harold J. Farber, whose work was referenced for claims about the overmedication of American children, explained that it was “a tremendous leap of faith to generalize from a study in one Medicaid managed care program in Texas using 2011 to 2015 data to national care patterns in 2025.” Farber also said he did not write the paper as cited, underscoring both the misattribution and the misrepresentation.

Was That the Clickety-Clack of AI?

Patterns in the errors—nonexistent studies, broken links, and casual misattribution—have prompted outlets like Futurism to suggest the involvement of generative AI. The article notes that such mishaps, hallucinated citations and overgeneralized conclusions among them, are a well-known signature of large language models like ChatGPT. While there is no definitive proof that Kennedy’s Department of Health and Human Services used AI, the whole affair is, as the article describes it, “extremely sketchy.”

When pressed in a briefing, White House Press Secretary Karoline Leavitt sidestepped direct questions about AI’s involvement, instead referring reporters to the Department of Health and Human Services and attributing the errors to “formatting issues.” This exchange, captured by inkl, saw Leavitt defend the report as a “transformative” achievement, maintaining that the errors “do not negate the substance of the report.”

Formatting or Fabrication?

The White House has repeatedly chalked the errors up to formatting mishaps rather than deeper problems, but NOTUS’s investigation revealed an array of issues beyond the nonexistent studies: broken links, citation errors, and several instances where the findings of real papers were mischaracterized. Fox and inkl both note that while the White House promises corrections, there has been no clear accounting of how the erroneous citations entered the MAHA report in the first place.

Science communicator Joe Hanson, quoted in Futurism’s coverage, put it bluntly: “AI is useful for many things. Making or guiding government policy is not one of them!” He added, “Seems like the kind of thing someone might do if they were interested in publishing propaganda to support a particular agenda rather than letting science guide their health policy.”

A Hallmark of the Times, or Just Sloppy Homework?

The MAHA report isn’t just another wonkish white paper—its recommendations are intended to shape future health policy, with significant funding implications. The situation, as outlined collectively by Fox, the Associated Press, NOTUS, Futurism, and inkl, raises serious questions not only about the accuracy of this specific document, but about the reliability of public policy in an era increasingly touched by automation and, evidently, citation-generating software.

So when a major federal health strategy is propped up by studies as tangible as a mirage, what does that mean for future reports—and more importantly, for the policies those reports inform? Is this another case of bureaucratic corner-cutting, or a sign that citing ghosts has gone mainstream in policymaking?

In an age of instant information and equally instant misinformation, perhaps a little more old-fashioned library science—and a lot less “formatting issue”—wouldn’t go amiss.

Related Articles:

Ever wondered what happens when bargain-bin robotics meets carnival sideshow sensibilities? Enter Clippy, Temu’s robot attack dog—a plastic creation that belly-flops, “urinates” on command, and weaves tales of toothless cats (in a cartoonish whine). Endearingly useless, oddly captivating, and proof we’ll always have an appetite for the ridiculous—curious yet? Clippy’s chaotic charm awaits.
Louisiana’s latest legislative escapade—an earnest attempt to ban “chemtrails”—proves that, sometimes, statehouses are where conspiracy theories get their moment in the sun. As lawmakers debate cloudy skies and “nanochemicals,” you have to wonder: Is this public policy or performance art? When policy meets paranoia, even the EPA’s dry rebuttals sound stranger than fiction.
You know American politics has taken a surreal turn when the White House hosts a celebratory toast—not with champagne, but with raw, glyphosate-free milk and honey—between RFK Jr. and a carnivore diet influencer. Behind the quirks and headlines, is this a genuine health movement, political performance, or simply the latest chapter in governmental oddity? Click through for a look at this dairy-fueled spectacle.
When grief meets the uncanny promise of science, we get stories like Clare McCann’s attempt to cryogenically preserve her son: heart-wrenching, improbable, and impossible to ignore. Her journey—equal parts public protest and futuristic hope—makes you wonder: when loss feels insurmountable, just how far would any of us go for a sliver of possibility?
World leaders don’t usually get caught up in action movie theatrics mid-flight, but rumors of Putin’s helicopter dodging swarms of Ukrainian drones in Kursk have social media abuzz and propaganda meters spinning. With scant hard evidence and a storyline that feels more Hollywood than Kremlin, we’re left to wonder: is this drama for the cameras, or just another act in the information war?
Meta’s latest move has machines judging the risks of other machines—turning privacy and safety checks for billions over to AI and sidelining human review. Efficiency is the name of the game, but as automation speeds ahead, one wonders: who’s watching while the oversight ouroboros quietly munches its own tail? Click through to explore the ironies and implications.