Reading through the “Make America Healthy Again” (MAHA) Commission report this week, one can’t help but get a whiff of déjà vu from the world of internet chain emails: bold headlines, science-y claims, and a fact-checker’s nightmare underneath. According to Fox’s summary article, which draws on reporting from the Associated Press and an investigation by NOTUS, Health and Human Services Secretary Robert F. Kennedy Jr.’s latest health policy opus is laced with references to studies that, much like my hopes of finding a universal remote for life, do not exist.
A Fictional Footnote Frenzy
Let’s set the stage: Kennedy’s 73-page report aims to reposition America’s health trajectory, tackling everything from childhood anxiety to the alleged scourge of over-prescribed medication. The White House rolled it out complete with policy suggestions and a request for $500 million in funding, as Fox details, citing the Associated Press. There’s just one hiccup: of the more than 500 studies cited, NOTUS found at least seven that simply aren’t real, a finding also summarized by Fox and the AP.
To clarify, these aren’t difficult-to-access sources or unpublished preprints—they appear not to exist at all. NOTUS, as cited in the Associated Press and Fox reports, spoke with epidemiologist Katherine Keyes, who was named as co-author on a cited paper about adolescent anxiety allegedly published in JAMA Pediatrics. Keyes confirmed, “The paper cited is not a real paper that I or my colleagues were involved with,” and noted to NOTUS that while she’s done work on the topic, nothing matches the cited title, venue, or authorship.
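For the curious, checking whether a cited paper actually exists is the kind of thing a fact-checker can script in minutes. Below is a minimal sketch in Python, assuming the `requests` library and the public CrossRef works API; the exact-title-match heuristic is my own illustration of the general approach, not NOTUS’s actual methodology, and absence from CrossRef isn’t proof of fabrication on its own.

```python
import requests

def find_citation(title: str, rows: int = 3) -> bool:
    """Search CrossRef for a cited title and print the closest matches.

    Returns True only on an exact (case-insensitive) title match -- a rough
    heuristic: real papers can be missing from CrossRef, and a hallucinated
    title can closely resemble a genuine one.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    for item in items:
        found = (item.get("title") or ["(untitled)"])[0]
        journal = (item.get("container-title") or ["?"])[0]
        print(f"- {found} ({journal}, DOI: {item.get('DOI', 'n/a')})")
    return any(
        title.strip().lower() == (item.get("title") or [""])[0].strip().lower()
        for item in items
    )

# Hypothetical usage -- a title of the sort the report might cite:
# find_citation("Trends in anxiety among adolescents in the United States")
```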
Other citations, according to NOTUS and interviews relayed by the Associated Press, misrepresented or exaggerated the findings of real studies. Pediatric pulmonologist Harold J. Farber, whose work was referenced for claims about the overmedication of American children, explained that it was “a tremendous leap of faith to generalize from a study in one Medicaid managed care program in Texas using 2011 to 2015 data to national care patterns in 2025.” Farber also denies authoring the article as it was cited, underscoring both misattribution and misrepresentation.
Was That the Clickety-Clack of AI?
Patterns in the errors (nonexistent studies, broken links, and casual misattribution) have prompted outlets like Futurism to suggest that generative AI was involved. Futurism’s article points out that hallucinated citations and overgeneralized conclusions are a well-known signature of large language models like ChatGPT. While there is no definitive proof that Kennedy’s Department of Health and Human Services used AI, the whole affair looks, in the article’s words, “extremely sketchy.”
When pressed in a briefing, White House Press Secretary Karoline Leavitt sidestepped direct questions about AI’s involvement, instead referring reporters to the Department of Health and Human Services and attributing the errors to “formatting issues.” This exchange, captured by inkl, saw Leavitt defend the report as a “transformative” achievement, maintaining that the errors “do not negate the substance of the report.”
Formatting or Fabrication?
Despite the defense from the White House, which repeatedly described the errors as stemming from formatting mishaps rather than deeper problems, NOTUS’s investigation revealed an array of issues beyond missing studies: broken links, citation errors, and several instances where actual findings were mischaracterized. Fox and inkl both note that while the White House promises corrections, there’s been no clear accounting of how the erroneous citations entered the MAHA report in the first place.
Science communicator Joe Hanson, quoted in Futurism’s coverage, put it bluntly: “AI is useful for many things. Making or guiding government policy is not one of them!” He added, “Seems like the kind of thing someone might do if they were interested in publishing propaganda to support a particular agenda rather than letting science guide their health policy.”
A Hallmark of the Times, or Just Sloppy Homework?
The MAHA report isn’t just another wonkish white paper; its recommendations are intended to shape future health policy, with significant funding implications. The situation, as outlined collectively by Fox, the Associated Press, NOTUS, Futurism, and inkl, raises serious questions not only about the accuracy of this particular document, but about the reliability of the evidence underpinning public policy in an era increasingly touched by automation and, evidently, citation-generating software.
So when a major federal health strategy is propped up by studies as tangible as a mirage, what does that mean for future reports—and more importantly, for the policies those reports inform? Is this another case of bureaucratic corner-cutting, or a sign that citing ghosts has gone mainstream in policymaking?
In an age of instant information and equally instant misinformation, perhaps a little more old-fashioned library science—and a lot less “formatting issue”—wouldn’t go amiss.