The Sleeper Awakens: AGI, Acceleration, and the Vanishing Middle
When I wrote about AGI as the sleeper agent of change, I described something barely detectable but deeply transformative—tech that would slowly weave itself into the fabric of how we live and work. But if recent conversations among government insiders, technologists, and venture capitalists are any indication, that agent is no longer asleep. It’s wide awake!
Episode 139 of The Artificial Intelligence Show—hosted by Paul Roetzer and Mike Kaput—served as a kind of red alert for those paying attention. Their discussion centered on a recent interview from The Ezra Klein Show, in which Klein spoke with Ben Buchanan, the former Special Advisor for AI to the Biden White House. The message was clear: AGI is no longer a speculative future—it’s a real possibility within the next two to three years.
And yet, despite the clarity of that warning, neither the government nor the private sector seems truly prepared for what’s coming.
The Government Knows—But Isn’t Ready
Buchanan’s interview reveals something critical: AGI is not being developed by governments or public research institutions in the way past transformative technologies were. Nuclear weapons, the internet, GPS—all were developed under the direct oversight and funding of the U.S. Department of Defense. But generative AI, large language models, and reasoning agents? These are being built by private labs—OpenAI, Google DeepMind, Anthropic—using venture capital, not government grants.
This shift has major implications. The U.S. government is largely reactive, not proactive. As Buchanan said, this is “the first revolutionary technology in modern history that was not primarily government funded.” As a result, the usual safeguards—strategic foresight, national interest alignment, regulation—are playing catch-up.
Even more concerning: Buchanan admitted that during his tenure, the government couldn’t even use top-tier models like Claude to run scenarios or test economic simulations, due to internal policy restrictions. Ezra Klein rightly pushed back: “That’s damning in and of itself, isn’t it?” It is!
The Real Target: Cognitive Work
One of the most unsettling themes in the podcast is how fast AGI-style agents are moving beyond routine tasks to target sophisticated, white-collar, knowledge-based jobs.
OpenAI has reportedly pitched new tiers of AI agents to investors. According to The Information, it plans to sell:
• Low-end agents at $2,000/month for general productivity
• Mid-tier agents for software engineering at $10,000/month
• High-end “PhD-level” agents at $20,000/month to perform complex research tasks
The business case is blunt: $20,000/month for an AI that never sleeps, takes no benefits, and performs the work of 10 employees earning a combined $500,000 annually. As Paul Roetzer puts it: “This is not theoretical. VC money is funding these replacements now.”
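The arithmetic behind that business case is easy to check. A minimal sketch, using only the figures quoted above (the tiers reported by The Information and the $500,000 combined payroll); these are reported and illustrative numbers, not confirmed pricing:

```python
# Back-of-the-envelope comparison of annualized agent subscription cost
# vs. the article's example payroll. All figures are as reported/quoted
# in the article, not confirmed OpenAI pricing.

MONTHS_PER_YEAR = 12

def annual_cost(monthly_rate: int) -> int:
    """Annualize a monthly subscription rate."""
    return monthly_rate * MONTHS_PER_YEAR

# Reported agent tiers ($/month)
tiers = {
    "general productivity": 2_000,
    "software engineering": 10_000,
    "PhD-level research": 20_000,
}

# The article's comparison point: 10 employees earning $500,000 combined
payroll = 500_000

for name, monthly in tiers.items():
    yearly = annual_cost(monthly)
    print(f"{name}: ${yearly:,}/yr vs ${payroll:,} payroll "
          f"(difference: ${payroll - yearly:,})")
```

Even the top tier annualizes to $240,000—less than half the quoted payroll—which is exactly why the pitch lands with investors.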
Klein adds his own example: using OpenAI’s DeepResearch tool, he requested a comprehensive report on U.S. political party polarization. The result? “Better than what my producers usually deliver in days—this took minutes.”
The implications are profound. We’re not just optimizing workflows. We’re replacing entire workflows—and the workers within them.
No One’s at the Wheel
If the risks are this clear, why is there no coordinated response?
The answer, according to Buchanan, is that meaningful action would require Congressional support, which “just wasn’t in the cards.” So while the White House ran internal discussions and met with economists, the outcome was largely “an intellectual exercise” rather than policy.
Klein was visibly frustrated: “You were the top advisor on AI. You were at the nerve center of this. And you’re telling me there was no real plan for the labor disruption this would cause?”
The answer was silence.
This absence of foresight is made worse by the fact that AI is already embedded in national security concerns. The focus of federal AI strategy, it turns out, isn’t economic stability—it’s cyber warfare and geopolitical dominance, especially in the context of China. Buchanan made this clear: the U.S. is far more concerned with AI-powered cyberattacks than mass unemployment.
In other words: the government sees the storm on the horizon, but it’s only building levees for some of the flood.
We Are the Researchers Now
Paul Roetzer said something in this episode that I’ve been coming back to again and again: “If you think someone else is figuring this out—they’re not.”
That line hit like a bell. It reframes the whole conversation. There is no roadmap. No top-down solution. No invisible hand steering us safely through this transition.
It’s on us—educators, designers, storytellers, researchers, parents—to do the thinking, the questioning, and the imagining. We don’t have to build AGI to understand its implications. We just have to stay awake as it moves through our worlds.
In the classroom, in the studio, at home with my kids—I keep seeing traces of the coming shift. Sometimes it’s thrilling. Sometimes it’s disorienting. But always, it’s a reminder that we’re not in a waiting period. We’re at a threshold.
The sleeper has awakened. The idea we once kept at arm’s length is now shaping policies, redirecting capital, and rewriting job descriptions. Whether or not AGI arrives in 2026, as some predict, may be beside the point. What matters more is that we act like it could—and that we prepare accordingly.
Not with panic. But with curiosity, collaboration, and a clear-eyed sense of what’s at stake.
We are not just observers anymore. We are participants. And whether we like it or not, we are also test subjects in the largest experiment modern labor has ever seen.