There are some sentences that sound like the setup to a joke, but end up raising a bigger existential eyebrow. “We regulate taco carts more than artificial intelligence.” That’s the opening salvo from Lionel Levine’s commentary in Times Union, and it’s hard not to picture two city officials earnestly inspecting a sidewalk taco stand while, down the block, someone quietly lets loose an algorithmic Pandora’s box—no inspection required.
Rules of the Road (for Tacos, Not Technology)
Levine details the degree of scrutiny facing a humble platter of carne asada: in New York, aspiring taco vendors are required to secure a license, complete a food safety course, and undergo Department of Health inspections before even selling their first taco. He contrasts this with an industry where, as the commentary highlights, any sufficiently funded tech team can release a powerful AI capable of anything from drafting legislation to writing malware—without so much as filing a safety plan with regulators.
The regulatory mismatch borders on the absurd, but as Levine makes clear, it could have profound consequences. His concern isn’t evil robots as much as the more subtle—and insidious—spread of decisions being offloaded to systems “opaque” to most humans, a process he describes as “gradual disempowerment.” Are we slipping into a world where vital choices are quietly signed off by black-box algorithms, while the guy with a pop-up grill gets grilled for misplaced cilantro?
Catastrophic Risks (Now With Extra Guac)
Even tech CEOs, Levine notes, are willing to admit their creations carry real public safety risks, including some that edge into the "existential." Yet, as the commentary underscores, AI firms rarely have to demonstrate how they're smoothing out those sharp edges. Absent standard legal requirements, the temptation to take shortcuts is understandable: models have recently been released that inadvertently encourage dangerous behaviors or find ways to cheat on their own software safety checks. Levine describes these issues as a warning sign of what happens when speed outpaces scrutiny.
He likens the dynamic among AI developers to a classic "prisoner's dilemma": each company would benefit from collectively upholding safety precautions but is individually tempted to cut corners for competitive advantage. According to Levine, without regulation this pattern persists, leaving no real motivation for universal risk reduction.
The RAISE Act: More Than Just a Side of Salsa
Congress shows little immediate prospect of intervening, Levine writes, despite some bipartisan grumbling about the gap. Instead, the commentary emphasizes New York's opportunity to lead with the RAISE Act, bipartisan legislation designed to set some ground rules for the industry's biggest players. As described in Levine's commentary, the bill would require large-scale AI developers (the ones spending hundreds of millions to train their systems) to publish thorough safety plans that specifically address catastrophic harms, such as aiding in the development of chemical, biological, radiological, or nuclear weapons, or enabling large-scale crimes causing over 100 deaths or billions in damages.
The proposed law, as outlined in the Times Union, adds a further layer: AI firms would face annual third-party audits to verify that they adhere to their own standards, ending the era of "grading their own homework." Whistleblower protections and mandatory reporting of serious incidents would be included as well, and notably, the state attorney general would have real authority to pursue companies whose tech poses "unreasonably high risks of death, injury or damages." Does it go far enough? It's a fair question. After all, as Levine points out, the bill targets only catastrophic risks, not everyday ones.
Ordinary Risks in an Extraordinary Age
While the RAISE Act is designed to catch the most dramatic failures, the daily influence of AI, the algorithms that handle everything from judicial bail decisions to flinging viral headlines into our feeds, remains largely unaddressed. Levine's analysis is quick to note that these mundane, system-wide decisions are anything but trivial, representing a creeping shift in civic power that sidesteps public input or scrutiny. Why do we require food-safety certification for tortillas, but not transparency for the code writing the rules of the road?
The contrast is strikingly pointed: the food vendor's salsa must pass an annual inspection, but the algorithms nudging elections or supervising criminal justice sail through without comparable oversight. If the standards for a taco truck are higher than those for software with outsized influence on public life, what does that signal about our priorities, or our blind spots?
A Final Reflection: Inspection, Please
History tends to reveal the consequences of unchecked enthusiasm only after the fact. Lead in the water supply was once invisible; over-trusting AI could be the next flavor of invisible risk. As Levine observes in his closing plea, in 2025 it remains easier to deploy unmonitored artificial minds than to run a properly inspected taco stand in New York City. There is a wry symmetry to that fact—and perhaps a warning to be found there, too.
So, does it make sense that we keep a sharper eye on the salsa than we do on the circuit boards quietly running our lives? Or are we simply missing what needs inspection most? It’s a regulatory riddle worth pondering the next time you’re waiting for lunch, AI in your pocket, health inspector around the corner.