The arguments deployed in favor of harvesting user data are rarely short on creativity, but Meta's recent submission to Australia's Productivity Commission review manages to chart some especially novel territory. As detailed by the Guardian, Meta contends that its AI can't really "get" what makes Australians tick—lingo, quirks, maybe even the delicate politics of tomato sauce—unless it's allowed to chew through public Facebook and Instagram posts. Evidently, the subtlety of a "servo run" or the gravity of snag etiquette is not something an AI can absorb from reading laws alone.
Poring Over Posts: AI’s Crash Course in Australian Culture
Meta's take, outlined in its submission to the commission, is that generative AI models like Llama "require large and diverse datasets," and that synthetic, machine-generated data just isn't enough. The company puts it this way: official databases and legislative texts are all well and good, but they don't capture how real people chat about culture, art, and what's trending. The digital detritus of daily life—memes about magpies, rants about the price of groceries, or nostalgia for Aeroplane Jelly—is, according to Meta, "vital learning" for artificial intelligence.
It's an argument that tries to present comment threads as anthropological fieldwork. Meta maintains that, for its tools to be useful and authentically tuned, they must be raised on a diet of day-to-day digital banter, not just high-fiber constitutional law. Whether Llama really needs to trawl through late-night footy banter to function is, apparently, a question for policymakers.
Playing the ‘International Norms’ Card
As highlighted in the Guardian’s report, Meta was required to allow European users to opt out of AI training after legal interventions there—a concession the company says isn’t mirrored in Australia because of differences in the legal landscape. In its submission, Meta warns that pushing ahead with stronger privacy laws could leave Australia “out of step with international norms,” risk conflicting with digital policy goals (like age-appropriate content and safety), and potentially make the country less attractive to AI investment.
This theme—that stricter data protection may cast Australia as a digital outlier—finds a chorus among other large businesses. Retailer Bunnings, fresh off being pinged over trials of in-store facial recognition, weighed in with its own concerns, pointing out the need to balance privacy with staff safety and legal obligations to maintain a secure shopping environment. Woolworths, for its part, voiced support for privacy reform but flagged what it called “unnecessary challenges” for personalizing customer experiences if the current proposals go ahead. Meanwhile, Google warned about regulatory “uncertainty” and reignited its campaign for updates to copyright law, arguing that current frameworks get in the way of training AI on Australian content.
So in the collective view of these companies, the push-pull between privacy and “progress” hinges on how frictionless, or lucrative, policy changes might make their use of data.
Culture in the Wild: From Comment Threads to Code
Meta’s main gambit here is a kind of peculiar appeal to authenticity. “Human beings’ discussions of culture, art, and emerging trends are not borne out in such legislative texts,” the submission notes, suggesting that the real meat of cultural understanding is marbled throughout informal posts and shared selfies. There’s a certain wry humor in picturing a Nobel-prize-level AI in need of Facebook drama to spot the regional difference between a “chook” and a “bin chicken.”
Yet, as the Guardian points out, this marks a departure from the company's stance elsewhere: Europeans received an opt-out option for AI training, but Australians have not, with Meta attributing the difference to "a very specific legal frame" overseas.
The result is an odd sense that what truly distinguishes a nation’s digital soul lives in its GIFs, typos, and collective inside jokes—material that is simultaneously mundane and, in Meta’s telling, essential for artificial smarts.
Whose Banter, Whose Benefit?
This round of submissions, described in the Guardian, circles back to a familiar, almost ritualized corporate refrain: yes, user privacy, but let’s be sure we don’t hamstring innovation… or our quarterly reports. Each statement wraps business interests in the language of public good: safer workplaces, more personalized shopping, or, in Meta’s case, smarter AI. Whether a company genuinely believes an algorithm will misinterpret a “servo” without sifting 10,000 late-night road trip stories, or just prefers the convenience of a massive training dataset, is left to speculation.
In the end, what gets lost in this ongoing data tug-of-war is the fact that culture emerges from people—often in the throwaway comments, the odd inside joke, the things we post when we’re not thinking about being studied. Should we assume every note about a sausage sizzle will feed a learning machine? At what point does the distinction between sharing and mining tip over from useful to uncanny?
For all the talk of AI progress, isn’t there something a bit incongruous about insisting that posts about dropped pies or lost thongs are now the foundation for the next leap in machine intelligence? Or have the digital traces of ordinary life become, in their way, more valuable than anyone expected? The answer may lie somewhere between a late-night meme and a lawyer’s submission—two places, incidentally, not known for subtlety.