There are few things as reliable, in our current media landscape, as an attempt to legislate objectivity into existence. This week, the White House has announced just such an effort—this time with a digital twist. An executive order from the Trump administration now mandates that any AI models employed by federal agencies must be both “truthful” and “ideologically neutral,” a set of instructions that, as The Register reports, may say more about wishful thinking than about actual machine learning.
Redefining Objectivity, One Algorithm at a Time
The new executive order—spectacularly titled “Preventing Woke AI in the Federal Government”—demands that Large Language Models (LLMs), those text-generating darlings of the tech sector, avoid “the suppression or distortion of factual information about race or sex,” and sidestep the “incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.”
The origins of this outcry seem to lie in headline-grabbing moments like the time Google’s Gemini model (formerly known as Bard) generated historically improbable images, such as a World War II-era German soldier with an unexpected flair for diversity. The intention, according to the order, is to keep AI honest and unbiased—or at least, to keep it from reenvisioning the identities of long-dead Vikings and popes. The Register points out that major AI companies—Anthropic, Google, OpenAI, and Meta—haven’t claimed their models actually meet these new governmental standards. The “model cards” outlining ethical tuning for these LLMs acknowledge that efforts to sanitize outputs inevitably introduce new assumptions, leading to a scenario where attempts to eliminate one flavor of bias only allow another to slip through the digital cracks.
Truth and Neutrality: The AI Conundrum
While the executive order’s language—LLMs “shall be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI”—may sound concrete, practical application proves far murkier. As The Register documents, when the AI companies were asked whether their models comply, none offered a response.
Within the same piece, University of Chicago computer science professor Ben Zhao observes that AI’s relationship with factuality remains problematic. These models are prone to “hallucinations,” confidently offering up fictional details. The problem, Zhao argues, isn’t that AIs harbor a secret ideological agenda, but rather that their architecture struggles to engage with the concept of “truth” in any reliable way.
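Zhao’s point is easier to see with a deliberately crude sketch. The snippet below is a toy next-word generator, not a real language model: the tiny “training corpus,” the function names, and the prompt are all invented for illustration, and production LLMs are vastly more sophisticated than bigram counting. What the toy does share with its larger cousins is the relevant property: it ranks continuations by how often they appeared in its data, and it has no column for “true,” only for “likely.”

```python
from collections import defaultdict

# A tiny invented "training corpus"; the most frequent claim in it is false.
corpus = (
    "the capital of australia is canberra . "
    "the capital of australia is sydney . "
    "the capital of australia is sydney . "
    "the capital of france is paris . "
).split()

# Count bigrams: for each word, how often each possible next word follows it.
follows = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(prompt: str, max_steps: int = 6) -> str:
    """Continue a prompt by always picking the most frequent next word."""
    words = prompt.split()
    for _ in range(max_steps):
        options = follows.get(words[-1])
        if not options:
            break
        best = max(options, key=options.get)  # greedy: likeliest continuation
        words.append(best)
        if best == ".":
            break
    return " ".join(words)

# The generator completes the prompt with the statistically dominant answer,
# which in this corpus happens to be the wrong one: "sydney", not "canberra".
print(generate("the capital of australia is"))
```

The toy answers confidently and incorrectly, for the simple reason that the wrong answer was the more common one in its data, which is, in miniature, the shape of the problem Zhao describes.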
Adding to this skepticism, former NASA architect Joshua McKenty states that LLMs are essentially mirrors of humanity’s unruly information landscape. According to his remarks, LLMs replicate the same subtle and overt biases found in their training data. McKenty likens the model’s concept of truth to human behavior: if something matches what is already believed, it gets accepted, and anything divergent is rejected. The result, he explains, is that when neutrality is forced, the outcome is a kind of “median position” that satisfies nobody. Previous attempts to scrub LLMs of ideological slant, McKenty observes, have resulted in comically skewed outcomes—such as an infamous experiment yielding an AI that named itself “MechaHitler.” This, he adds, highlights a deeper issue: there’s no machine-based fix for the way humans themselves construct and disagree about truth or ideology.
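McKenty’s “median position” complaint lends itself to a back-of-the-envelope sketch. The numbers below are entirely made up: imagine survey responses on some contested question scored from -1 to +1, drawn from two polarized camps. Averaging them, which is roughly what forcing a model toward “neutrality” amounts to in this caricature, produces an answer that sits between the camps and matches no actual respondent.

```python
# Hypothetical, invented opinion scores on a contested question (-1 to +1).
camp_a = [-0.9, -0.8, -0.85, -0.7, -0.95]   # one polarized camp
camp_b = [0.8, 0.9, 0.75, 0.85, 0.95]       # the opposing camp

everyone = camp_a + camp_b
neutral_point = sum(everyone) / len(everyone)

# How far is the "neutral" answer from each respondent's actual view?
distances = [abs(score - neutral_point) for score in everyone]

print(f"forced 'neutral' position: {neutral_point:+.2f}")
print(f"closest any respondent comes to it: {min(distances):.2f}")
# The average lands near 0.0, a position none of the ten respondents holds
# and one that every respondent sits roughly 0.7 or more away from.
```

A real alignment pipeline is not literally averaging opinion scores, but the arithmetic captures the complaint: a compromise position equidistant from everyone is also endorsed by no one.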
Regulation, Whiplash Style
Irony isn’t in short supply here. The Register details how the executive order promises deregulation in AI development while imposing, on civilian agencies, a new layer of oversight over AI “truthfulness” and ideology. Oddly, national security systems are exempt from the truth and neutrality requirements. So, while some agency procurements will face thorough scrutiny over their robots’ epistemological calibration, others might proceed with considerably fewer strings attached. Model providers that fall out of line could even find themselves billed for the cost of decommissioning their systems.
In the end, McKenty’s skepticism hangs over the entire proposal: “It’s actually a problem in how humans have constructed ‘truth’ and ideology, and it’s not one that AI is going to fix.” Attempts to legislate a universally agreeable digital objectivity start to look like a kind of recursive bureaucracy—objectivity marching in circles, looking for a consensus no one can quite define.
Reflection: Are the Machines Woke, or Are the Humans Just Tired?
If people can’t agree on a definition of neutral ground, it’s worth questioning whether machines can be expected to deliver a standard that so resists consensus. Watching the debate play out over algorithmic ideology, it becomes hard to avoid the sense of wandering through a hall of mirrors. The spectacle, in its seriousness and its absurdity, offers a kind of modern fable: can any technology be more objective than its creators—especially when those creators are still arguing about basic facts? Or are we simply mistaking noise for neutrality, interpreting whatever digital output validates our own point of view?