Cameron Parker

May 30, 2025

DOGE and the Tech Right have pushed out my AGI timeline.

I was DOGE-curious and DOGE-skeptical. That means on net I was more optimistic about DOGE than about the Trump administration broadly, which I just assumed was going to be terrible -- full stop. 

I've had to seriously revise my priors in light of the past several months. DOGE was a disaster, and somehow seems to have distinguished itself as MORE incompetent than the Trump administration overall.

This has forced me to reassess some prior convictions.

  • Increased probability that Elon is a drug addict and/or mentally compromised
  • Increased assessment of the base level competency of the civil service
  • Decreased perception of the base level of corruption and waste/fraud/abuse in the bureaucracy (I didn't think there was much to begin with, but still)
  • Modest increase in the weight I give to bad incentives and regulation as the driving force of "why we can't just do things" in American life

As with everything else these days, though, there's an AI angle.

I don't think Elon is "representative" of very much. But I do think that he and the DOGE effort overall, which had a lot of broad enthusiasm from the Tech Right and to a lesser extent the broader tech ecosystem, are indicative of a set of beliefs about human institutions. These beliefs stress the malleability (or break-ability) of institutions, great man theory, and optimism about humanity's embrace of technological acceleration.

I do not believe that all of these ideas are wrong per se.

But I think this set of beliefs greatly overestimates the ability of a person or small group of even exceptionally charismatic and talented people to change the world (and Elon is only super talented, not charismatic). There is no actual John Galt. And this set of beliefs also definitely overestimates society's enthusiasm for change and disruption.

I asked Gemini to produce a brief report on AGI timelines. Prominent tech leaders who are willing to stake a claim on this seem to be aligning on 2030 as an outside date, by which time AI will have deeply embedded itself in the economy.

The complete failure of DOGE has me updating my thinking. I realize this is an N=1 sample, but it feels instructive. I'm happy to call it speculative. Technologists as a class have a serious blind spot to the human and organizational bottlenecks that AI is going to run headlong into. I now anticipate that status quo forces will be mostly successful in resisting AI integration: in part because technology pioneers will greatly overestimate their ability to convince people to use AI, and will engage in counterproductive tactics as a result of their hubris; and in part because status quo bias will prevail even in organizations that are ostensibly committed to innovation.

Axios published this prognostication two days ago (5/28/25). In light of what we know about a recent whole-of-government push to automate and streamline the federal bureaucracy, does this prediction seem realistic?

[image: Axios prediction]


Of course, you can say that DOGE was always an ideological purge and not a genuine push for efficiency. But testimony from at least some of the volunteers on the ground suggests a good-faith effort.

Tyler Cowen is on the record that AI will give us no more than 0.5% of incremental annual growth, and he lays out the bottlenecks. This is not a novel idea; I am just saying I am being nudged in that direction.

So what will happen in practice? Over the next five years:
  • Microsoft, the most benevolent and ubiquitous tech company in workers' lives, is going to give everyone AI capabilities. It will mostly suck because of bad data architecture and systems integration
  • Nevertheless, every knowledge worker is going to get the kind of executive assistant that used to be reserved for senior leaders
  • Every knowledge worker is going to have a competent research assistant they can query and ask to do basic analysis and drafting of presentations and documents
  • A few tech-heavy industries and the AI labs are going to leap forward and automate almost all of their research and work with agents, but it will be sandboxed and will struggle to penetrate the broader economy

To be clear, even this scenario is a huge win for everyone and a step change in productivity. And these views are still tentative and subject to further updating.

I will enjoy coming back to this post in light of future events.