Reportedly Healthy Report Relies on Unreported Reports

Summary for the Curious but Committed to Minimal Effort

  • Kennedy’s 73-page “Make America Healthy Again” report cites over 500 studies—NOTUS and the Associated Press found at least seven completely fabricated and numerous misattributions.
  • Recurring errors—nonexistent studies, broken links, exaggerated findings—have fueled speculation of AI-generated hallucinations; the White House insists they’re mere “formatting issues.”
  • The MAHA mishaps underscore the dangers of sloppy citation practices and potential overreliance on automation in shaping high-stakes health policy.

Unraveling the “Make America Healthy Again” (MAHA) Commission report this week, one can’t help but get a whiff of déjà vu from the world of internet chain emails: bold headlines, science-y claims, and a fact-checker’s nightmare underneath. According to Fox’s summary article, which draws on reporting from the Associated Press and investigations by NOTUS, Health and Human Services Secretary Robert F. Kennedy Jr.’s latest health policy opus is laced with references to studies that—much like my hopes for finding a universal remote for life—do not exist.

A Fictional Footnote Frenzy

Let’s set the stage: Kennedy’s 73-page report aims to reposition America’s health trajectory, tackling everything from childhood anxiety to the alleged scourge of over-prescribed medication. The White House rolled it out, complete with policy suggestions and a request for $500 million in funding, as Fox details, drawing from the Associated Press. There’s only one hiccup: of the over 500 cited studies, NOTUS found at least seven that simply aren’t real, a finding also summarized by Fox and AP.

To clarify, these aren’t difficult-to-access sources or unpublished preprints—they appear not to exist at all. NOTUS, as cited in the Associated Press and Fox reports, spoke with epidemiologist Katherine Keyes, who was named as co-author on a cited paper about adolescent anxiety allegedly published in JAMA Pediatrics. Keyes confirmed, “The paper cited is not a real paper that I or my colleagues were involved with,” and noted to NOTUS that while she’s done work on the topic, nothing matches the cited title, venue, or authorship.

Other citations, according to NOTUS and interviews relayed by the Associated Press, misrepresented or exaggerated the findings of actual studies. Pediatric pulmonologist Harold J. Farber, whose work was referenced for claims about the overmedication of American children, explained it was “a tremendous leap of faith to generalize from a study in one Medicaid managed care program in Texas using 2011 to 2015 data to national care patterns in 2025.” Farber also said he did not write the paper as it was cited, underscoring both misattribution and misrepresentation.

Was That the Clickety-Clack of AI?

Patterns in the errors—nonexistent studies, broken links, and casual misattribution—have prompted outlets like Futurism to suggest the involvement of generative AI. The article highlights that these mishaps, such as hallucinated citations and overgeneralized conclusions, are a well-known signature of large language models like ChatGPT. While no definitive proof exists that Kennedy’s Department of Health and Human Services used AI, the evidence is, as the article describes, “extremely sketchy.”

When pressed in a briefing, White House Press Secretary Karoline Leavitt sidestepped direct questions about AI’s involvement, instead referring reporters to the Department of Health and Human Services and attributing the errors to “formatting issues.” This exchange, captured by inkl, saw Leavitt defend the report as a “transformative” achievement, maintaining that the errors “do not negate the substance of the report.”

Formatting or Fabrication?

Despite the defense from the White House, which repeatedly described the errors as stemming from formatting mishaps rather than deeper problems, NOTUS’s investigation revealed an array of issues beyond missing studies: broken links, citation errors, and several instances where actual findings were mischaracterized. Fox and inkl both note that while the White House promises corrections, there’s been no clear accounting of how the erroneous citations entered the MAHA report in the first place.

Science communicator Joe Hanson, quoted in Futurism’s coverage, put it bluntly: “AI is useful for many things. Making or guiding government policy is not one of them!” He added, “Seems like the kind of thing someone might do if they were interested in publishing propaganda to support a particular agenda rather than letting science guide their health policy.”

A Hallmark of the Times, or Just Sloppy Homework?

The MAHA report isn’t just another wonkish white paper—its recommendations are intended to shape future health policy, with significant funding implications. The situation, as outlined collectively by Fox, the Associated Press, NOTUS, Futurism, and inkl, raises serious questions not only about the accuracy of this specific document, but about the reliability of public policy in an era increasingly touched by automation and, evidently, citation-generating software.

So when a major federal health strategy is propped up by studies as tangible as a mirage, what does that mean for future reports—and more importantly, for the policies those reports inform? Is this another case of bureaucratic corner-cutting, or a sign that citing ghosts has gone mainstream in policymaking?

In an age of instant information and equally instant misinformation, perhaps a little more old-fashioned library science—and a lot less “formatting issue”—wouldn’t go amiss.
