AI-pilling for Fun and Profit
This blog post contains my thoughts on what we should do about future artificially intelligent workers (AI workers), artificial general intelligence (AGI), and superintelligent artificial intelligence (super AI). It is also a review of two self-proclaimed "AI-pilled" books: The Coming Wave by Mustafa Suleyman and Situational Awareness by Leopold Aschenbrenner. The topics discussed are varied and frequently highly speculative. Parts of this post may not be clear without at least some prior knowledge of the ideas being discussed, but I try to be as helpful with context as possible. I added a succinct summary at the very end if you just want to read the main arguments that the authors put forward and my main rebuttals. That summary starts with the heading "TL;DR".
Part 1: What Is Intelligence And What Is Technology?
The Coming Wave by Mustafa Suleyman starts off on the wrong foot and never really recovers. On page seven:
"Contemplating the profound power of human intelligence led me to ask a simple question, one that has consumed my life ever since: What if we could distill the essence of what makes us humans so productive and capable into software, into an algorithm?"
Why am I so critical of this? I have two main reasons.
Reason #1: He is referring to human intelligence as profound, productive, and capable without even attempting to describe what he's talking about when he refers to "human intelligence", let alone how productivity and capability might be measured (in nearly 300 pages he never does!). This is a classic hype technique. By leaving the most important concepts undefined the person doing the hyping can let their audience fill in the blanks with whatever assumptions and expectations they bring with them, while also giving the hype person maximum flexibility to switch between different popular conceptions without ever stating so directly. Even more concerning, however, is the fact that human intelligence is such a complex and nuanced topic in itself. By not defining it Suleyman gets to use oversimplified versions of it at will.
Take, for example, how learning relates to human intelligence. Leslie Valiant stakes out a whole new scientific discipline in the study of learning from a computational standpoint. In his book Probably Approximately Correct he asks whether the algorithms humans use to learn new things are part of a larger space of learning algorithms, one bounded by what is computationally possible in our universe. This has fascinating consequences, such as the possibility that human learning is only a small subset of the possible ways to learn. The same would apply to machines and machine learning. Or consider the fact that humans benefit from around 4 billion years of evolution encoded in our DNA. How might that create differences between human learning and programmed machine learning? While I don't expect Suleyman to address these exact questions, my point is that he refers to human intelligence and learning without ever defining them and then applies these concepts directly to machines as if the details don't matter. That should give us pause and engender skepticism toward his wild claims. After all, extraordinary claims ought to require extraordinary evidence.
Reason #2: Suleyman's vagueness is on purpose. Throughout The Coming Wave he refers to human intelligence alternately as profound and as pitiful. By alternating between profound (he refers to the creation of artificial intelligence (AI) at one point as having "cosmic significance") and pitiful (late in the book he claims technology in general has made modern nation states like the USA "increasingly fragile", "weak", and "impulsive"), he can make the fallacious argument on which the whole book rests: our human intelligence is so profound that we can develop advanced technology, but so pitiful that we can't control it. His posited lack of control then allows him to say that our only recourse is to push ahead with ever more advanced technology. In particular, he believes that superintelligent (beyond human level intelligence) AI, which from here on out I will refer to simply as super AI, can save us from ourselves. The minor issues of human agency and of who gets to benefit from the super AI and all of the power it implies are conveniently forgotten. By the way, we can most certainly identify one person who is currently benefiting from the development of artificial intelligence: the serial AI tech startup entrepreneur Mustafa Suleyman. In 2010 he founded a startup you may have heard of: DeepMind.
Even his descriptions of technology are vague and misleading. For example, throughout the book he uses the mistaken idea that technology is "evolving" as if technology were a natural thing that itself is adapting to its environment and reproducing. Technology is not a single, monolithic thing, nor is it a natural force on its own. He writes on page 25:
"Technology has a clear, inevitable trajectory: mass diffusion in great roiling waves. This is true from the earliest flint and bone tools to the latest AI models. As science produces new discoveries, people apply these insights to make cheaper food, better goods, and more efficient transport. Over time demand for the best new products and services grows, driving competition to produce cheaper versions bursting with yet more features. [...] Costs continue to fall. Capabilities rise. Experiment, repeat, use. Grow, improve, adapt. This is the inescapable evolutionary nature of technology."
Here he glibly conflates early humans making necessary tools from found objects with modern humans creating technologies through the application of scientific methods, supply-and-demand markets, and modern economies. Then he reduces all of it to the "evolutionary nature of technology." You may recall from high school biology that biological evolution is not directed, nor does it achieve greater efficiency or "better" forms. Why does he choose to manipulate us this way into accepting technology as naturally inevitable, always better than what came before it, and something that humans must accept?
Part 2: Is Technology Truly A Force of Nature?
On page 43 he talks about the elephant in the room (although in this case the room is filled with other elephants that he never talks about): nuclear weapons. He discusses them in the context of what he calls the "nuclear exception", as if they were the only example of regulated and limited technology. They are most definitely not. Not only are numerous non-nuclear weapons successfully contained (including, for example, 3D printed guns in the US), but other technologies like software, cryptographic techniques, computer hardware, medical treatments, drugs, biological agents, poisons, dangerous chemicals, GMO seeds, etc. are all successfully limited through the use of treaties and international law, international organizations, law enforcement, border patrol and customs, simple license agreements, and regular criminal and civil courts. The key ingredients there are government, elected officials, and the application of public trust and the rule of law. If AI is as dangerous as nuclear weapons, as some AI proponents themselves claim (including Suleyman!), then why wouldn't we treat it in a similar way to nuclear weapons? He answers this by presenting a fallacy: that if something cannot be 100% contained then we should give up trying to contain it. Yet the numerous historical examples of successfully regulated and restricted things beg to differ. In fact, I'll argue below that it is likely far easier to contain super AI.
In all of this he is painting technology as an unstoppable force of nature. If that's true, then why would he need to write an entire book aimed at convincing us not to contain it? Why does he choose to fallaciously manipulate us into less regulation? One reason is that on page 47 he makes it clear that he feels the need to absolve himself and others of their inaction in containing challenging technologies. If technology is inevitable then there's nothing anyone could have done anyway. It seems that some people like Suleyman who directly financially benefit from the development of technology (in his case AI) will jump through any hoop to rationalize why they deserve the profit from selling their technology, all the while externalizing significant social and economic costs.
Part 3: How Special Is Artificial Intelligence?
By page 58, the hubris is extraordinary:
"Technology is core to the historical pattern in which our species is gaining increasing mastery of atoms, bits, and genes, the universal building blocks of the world as we know it. This will amount to a moment of cosmic significance."
Think about that phrasing, "This will amount to a moment of cosmic significance." The word "this" refers to the development of AI. I'm a software engineer and early adopter of technology, I love technology, but this strikes me as quite a strange way to look at things. A great deal of human activity throughout history has had far-reaching significance for humanity and even for the planet Earth. Some of that was related to the development of technology, but a lot of it was related to ideas, philosophy, ideology, art, community and organizational behavior, and a whole lot more. Still, it strikes me as quite bizarre to think that one single thing humanity accomplishes has cosmic significance. Not that humanity can't have cosmic significance, but rather I wonder where the evidence is that AI in particular has cosmic significance, other than him stating so. To me, this is indicative of Suleyman being a grandiose hype man for AI. I would have been fine with pronouncements of AI being the best thing since sliced bread. Maybe let's leave the cosmos out of it.
As another aside, I think what bugs me so much about his grandiosity is that he's talking about AI as if building AI is the same thing as understanding the universe (as he put it in the quote above, mastering the building blocks of the world as we know it). Building AI isn't even the same thing as understanding intelligence or consciousness. I hope as much as the next person that making progress on AI can help humanity better understand these things. But the fact that Suleyman doesn't talk about these things in his book should give us pause. I'll come back to this later.
Exemplifying the fact that he is merely trying to manipulate us into believing technology is an unstoppable wave, he even tells us to not bother thinking the implications through. On page 75:
"We don't need to get sidetracked into arcane debates about whether consciousness requires some indefinable spark forever lacking in machines, or whether it'll just emerge from neural networks as we know them today. For the time being, it doesn't matter whether the system is self-aware, or has understanding, or has humanlike intelligence. All that matters is what the system can do. Focus on that, and the real challenge comes into view: systems can do more, much more, with every passing day."
Thanks, but actually I do care if my AI is conscious, because then it would be incumbent on me to treat it differently and to have a larger discussion in my community and society about what we should do ethically, legally, and so on.
Summary of Parts 1 through 3
For all of Suleyman's anecdotes, waffling, and contradictions, he ultimately relies on oversimplified binaries to justify his arguments. We can be more nuanced and realistic. We can use consistent and detailed evidence rather than cherry-picked anecdotes. When we do that we see that technology is neither inevitable nor impossible to regulate and that, in some cases, we should be proactive in regulating it. Nuclear weapons required proactive secrecy and extremely heavy regulation before they were ever built. Nuclear weapons are, in fact, a great example of the nuanced and complex reality we face with technology and, in the extreme cases, of how humans can come together and balance different competing priorities. Sometimes we do better than at other times, and perhaps we should have worked harder to regulate, keep secret, and de-escalate nuclear weapons technology much earlier than we did. Climate change, the result of the relatively unregulated proliferation of non-weapons technologies, is another example, and quite different from nuclear weapons: we failed to recognize the harms for nearly a century. Yet even there, leaded gasoline and other types of pollution proved far easier to regulate and contain than carbon dioxide emissions have. Nuclear weapons and climate change are proof that humanity can't bury its head in the sand and just hope that future generations figure it out later. We have a responsibility to ourselves and also to future generations to get the use and maintenance of all of these technologies correct so that they benefit us more than they hurt us.
Suleyman presents a fun narrative about technology that even involves the origins of humanity. However, we need to be very clear that he does not present evidence for this in any rigorous fashion. His narrative is overly simple and vague. Worse than that, it is based on cherry-picked, often context-less, and ultimately contradictory anecdotes. We must remain highly skeptical.
Part 4: What Does Evidence for The Future Advancement of AI Look Like?
In the previous section I mentioned using consistent and detailed evidence. Leopold Aschenbrenner, a Silicon Valley technologist and soothsayer, in proper open source fashion collected a series of lectures and articles he wrote about the AI revolution and published them for free on his website situational-awareness.ai, where you can also download a free PDF version. Aschenbrenner does an exemplary job detailing some specific evidence for why he believes we'll develop true artificial general intelligence (AGI: AI that can reason and learn about any topic much as humans do - note how we basically can only compare it to human intelligence, because we're not working with a concrete definition of intelligence) and AI workers (AI that can replace at least some human workers) by about 2027. His arguments concerning the near-term development of AI are at least somewhat credible. Not likely to pan out, in my view, but they carry some weight. Interestingly, when he starts speculating beyond 2027 some of his arguments start looking more like Suleyman's. I'll show this below and continue my critiques of Suleyman's The Coming Wave alongside Aschenbrenner's Situational Awareness.
Unlike Suleyman, Aschenbrenner starts off very strong. The first 70 pages or so are focused on specific evidence of how and why AI will advance. For example:
- A broad time series analysis of order-of-magnitude advances in computational resources, complete with graphs and comparisons to Moore's Law, showing a correlation between LLMs' test-taking capabilities and the amount of computational resources needed to train them (see the sketch after this list).
- Increases in algorithm efficiency allowing larger and more advanced models to be trained with smaller increases in computational resources.
- New pattern matching and reasoning capabilities that we can plausibly expect from the current state of the art, as well as specific new learning techniques (which he refers to collectively as "unhobbling"). To oversimplify it a bit, it's like combining different existing learning techniques and removing specific limitations of LLMs which were originally put in place to do things like increase computational efficiency. For example, certain types of reinforcement learning which have been eschewed due to resource limitations.
- A positive feedback loop of increasing capabilities where each new technique helps the system and humans learn how to get to the next new technique.
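To make the flavor of this argument concrete, here is a minimal sketch of the order-of-magnitude (OOM) bookkeeping behind it, where gains in physical compute and algorithmic efficiency multiply together into "effective compute". The specific growth rates below are illustrative assumptions on my part, not figures quoted from Situational Awareness:

```python
# A minimal sketch of order-of-magnitude (OOM) "effective compute" bookkeeping.
# The growth rates are illustrative assumptions, not Aschenbrenner's exact numbers.

COMPUTE_OOM_PER_YEAR = 0.5     # assumed yearly growth in physical training compute
EFFICIENCY_OOM_PER_YEAR = 0.5  # assumed yearly growth in algorithmic efficiency

def effective_compute_ooms(years: float) -> float:
    """Orders of magnitude of 'effective compute' gained after `years`, treating
    physical compute and algorithmic efficiency as multiplicative (so their OOMs add)."""
    return years * (COMPUTE_OOM_PER_YEAR + EFFICIENCY_OOM_PER_YEAR)

if __name__ == "__main__":
    for years in (1, 2, 4):
        ooms = effective_compute_ooms(years)
        print(f"{years} year(s): ~{ooms:.1f} OOMs, i.e. a ~{10 ** ooms:,.0f}x effective gain")
```

Even under modest assumed rates, a few years of compounding yields several orders of magnitude more effective compute, and that compounding is the heart of his extrapolation.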
According to him, these things lead directly to the creation of super AI. In particular, if and when we develop artificially intelligent AI engineers this will lead to an "intelligence explosion", as he puts it. (I will shorten the phrase "artificially intelligent AI engineers" to "AI AI engineers" for the rest of this blog post. It means an AI that can more or less independently work on open problems in machine learning engineering and is a term I just made up in order to differentiate human AI engineers from AI AI engineers).
That's it. That's the evidence that we can develop truly intelligent AI systems. In these two books about the coming AI apocalypse we have about 70 pages of fairly concrete, if still conjectural, evidence. As mentioned, that evidence is only found in one of the books. That's 70 out of nearly 400 pages or about 18%. We'll examine the details of the evidence and see if they hold up, but my point right now is to get you thinking about why these two AI experts (and I think they are AI experts by any reasonable definition of the term) would spend so much time talking about things that aren't directly related to the development of more advanced AI systems. Instead, they treat it as a foregone conclusion that we will develop these systems. After that, they speculate wildly about what future systems might be able to accomplish. It's really breathtaking, in a bad way.
For example, developing super AI is nowhere near as likely as Aschenbrenner seems to think it is. Just consider the truly gargantuan amount of computational resources that Aschenbrenner says will be necessary to accomplish it. It would be an undertaking very significantly more costly in resources than the Manhattan Project. While the creation of AI AI engineers and putting together resources on an almost unimaginable scale are theoretically possible (almost anything is), I do think Aschenbrenner is far too optimistic that it will all work, and far too confident that our nation-state adversaries (like China) have similar capabilities. I'll address these things more later in this post.
(Note: I read the PDF of Situational Awareness on my computer and also on Kindle and the page numbers were slightly different so I apologize for any inaccurate page numbers here).
On about page 73 of Situational Awareness Aschenbrenner says:
"The intelligence explosion and the immediate post-superintelligence period will be one of the most volatile, tense, dangerous, and wildest periods ever in human history."
Even though I think he's overly optimistic about our ability to achieve superintelligence, I do think he is right that if AI development progresses the way he thinks it will then we're in for a wild and dangerous time. But let's take a step back for a moment. Remember when Suleyman proclaimed that the development of AI would have "cosmic significance" for humanity? How could we then possibly entertain Suleyman's idea of merely riding the wave of technology no matter where it leads, if we also think it portends an extremely tumultuous and dangerous period for humanity the likes of which we've never seen?
Part 5: Is China's AI Capability A Danger to The US?
Aschenbrenner takes his own ideas very seriously, and this leads him to claim the US needs to set aside climate change concerns and business and energy regulations in order to achieve AI supremacy (i.e., developing super AI) before anyone else does. However, one thing he doesn't grapple with is that if AI supremacy is so costly and difficult, then how could any country other than the US hope to achieve it? Why should the US abandon all regulation? Why should the US create a Manhattan Project style push for AI supremacy when there is no legitimate concern that some other country could achieve it before the US does anyway? Aschenbrenner's evidence here is lacking.
Additionally, I want to point out just how glib and vague he is on the topic of regulation. Once you remove the urgency of an all-out race to achieve AI supremacy, it is as though he is simply saying that this technology is far more important to humanity than any mere regulatory concerns. In this way Aschenbrenner is like any other self-centered tech founder, and we should be highly skeptical that AI startups are any different from, say, Uber. The more disruptive the product, the more these startups seem to dismiss law, regulation, and the social and political ramifications of their products. That is, they look for ways to externalize more of the costs of their products.
For example, Aschenbrenner around page 112 begins undermining his own fears about China:
"On the current course, the leading Chinese AGI labs won’t be in Beijing or Shanghai—they’ll be in San Francisco and London. In a few years, it will be clear that the AGI secrets are the United States’ most important national defense secrets—".
And on the next page:
"All the trillions we will invest, the mobilization of American industrial might, the efforts of our brightest minds—none of that matters if China or others can simply steal the model weights (all a finished AI model is, all AGI will be, is a large file on a computer) or key algorithmic secrets (the key technical breakthroughs necessary to build AGI)."
Let the implications sink in for a moment. The US and UK are very close allies. The US in particular is the only country that has the talent, the capital, and the other resources that could all together plausibly be mustered to create an AGI or maybe even a super AI model (we'll look at the resources necessary below). Aschenbrenner doesn't provide evidence that any other country could actually do this. He simply states that China is an existential threat in this race while stating in the same breath that the leading Chinese AI labs will be in the US and the UK. In my opinion, this comes down to whether or not the US can maintain its current advantage for the foreseeable future as well as keep the models secret. These things seem much easier to do than the further claim that the US needs to pour all of its resources into creating super AI as quickly as possible. Given the actual reality of the situation, why not take the time to do it well?
The Lawfare Daily podcast interviewed Sam Bresnick about his recently published paper: China’s Military AI Roadblocks: PRC Perspectives on Technological Challenges to Intelligentized Warfare (Lawfare Daily: What China Thinks of Military AI with Sam Bresnick). Bresnick researched hundreds of Chinese-language journal articles for his analysis and one key takeaway is that China has nowhere near the AI capabilities that the US has. He says specifically that the defense companies in the US that are pushing the narrative that China is going to leapfrog us are just plain wrong. In that light, we should seriously question Aschenbrenner's "situational awareness". Perhaps he is too aware of the narratives being pushed by the companies that will benefit from the belief that China is overtaking us on AI. No wonder Aschenbrenner left OpenAI to start his own AI investment firm!
Of course, one published paper and some self-owns from Aschenbrenner are not much evidence that I'm correct. Later in this post I'll get into more granular details concerning resource allocation and what the deployment of advanced AI might actually look like. These points will further support my argument that China is not the clear and present danger Aschenbrenner would have us believe. At least not when it comes to AI.
Part 6: Can We Detect An Ulterior Motive?
Here, the comparison between Suleyman and Aschenbrenner is enlightening. Where Suleyman speaks only in the most hyped and vague ways about technology, largely saying very little at all and definitely nothing new or interesting, Aschenbrenner gets right into some back-of-the-envelope calculations and detailed technicalities of AI. These calculations and details are quite important and go a long way in making his early points about near-term AI development, out to about 2027-2030, both credible and understandable, while also enlightening those of us who are not, as he puts it, situationally aware. Beyond that, however, Aschenbrenner turns into a Suleyman: AI supremacy should be humanity's and the US's chief goal, and we should get rid of most regulatory, legal, economic, social, or political concerns that could stand in its way. However, no details are given as to exactly what regulatory problems he is concerned about. We only get vague references to business and energy regulation and how it will be hard to, you know, expand the US's electricity output by at least 10% (according to him) for the sole purpose of powering AI. Two points about this change in Aschenbrenner's arguments:
Point #1: This is the usual tech "visionary" marketing pitch: I have a new revolutionary and proprietary technology, just let me deploy it without regard to cost and in particular let me externalize as many costs as possible and also let me create a sense of urgency so that I can develop a profitable network effect, which also happens to make it easier to brush these costs under the rug. In my opinion, we should be especially skeptical of narratives from those who have vested interests in AI. Suleyman is a founder of multiple AI companies, including the infamous DeepMind. Aschenbrenner on the other hand worked on the Superalignment team at equally infamous OpenAI and recently founded an investment firm focused on AGI. He also owns stock in companies like Nvidia specifically to benefit from the AI boom. Around page 88 he writes:
"What all of this means for NVDA/TSM/etc I leave as an exercise for the reader. Hint: Those with situational awareness bought much lower than you, but it's still not even close to fully priced in."
Situational awareness leads to the moon too, I guess (🚀🚀🌝).
Point #2: This is also the classic issue with predicting the future, and technology in particular. We don't actually know how easy it will be to utilize AI beyond its current incarnations like ChatGPT. Speculation is helpful for preparing society for some possibilities, and there's good reason to think the future of humanity looks very different from the humanity of today, but we really don't know what the future holds nor when it will hold it. Consider other technologies that Aschenbrenner and Suleyman bring up. Will robots become more useful, or will some other technology make them unnecessary or too expensive in comparison? Will fusion energy bring an era of cheap and plentiful clean energy, or will it remain impractical or expensive? Will artificially intelligent workers be autonomous enough and predictable enough to replace human workers (this is actually a huge open question that neither Aschenbrenner nor Suleyman tries to meaningfully answer)? Could useful artificially intelligent workers be cheaper than human workers, or will they actually be rather expensive for businesses to utilize? While it's fun to speculate on the what-ifs, the fact is that we won't know exactly how and when new technologies will take off.
Part 7: How Much Do LLMs Really Tell Us About Future AI Systems?
Some recent technological examples I like to think about: fusion energy has languished for far longer than many thought it would; blockchain remains a technology looking for a killer application (bitcoin was invented over 15 years ago, in 2008!); the utility of smart phones was largely unforeseen; steady progress in making more efficient and useful green energy technologies (solar, wind, batteries, etc.) has been a net good but also too slow to meet climate change CO2 emissions reduction milestones; and the list goes on. We can apply this to AI itself: who knew that large language models (LLMs) would be so useful to humans and get us to systems that we call AI and that can pass the Turing Test? And who knew that these same AIs, the current ChatGPTs of the world, would still not be widely regarded as either particularly intelligent or even a little bit conscious? Or how about the fact that LLMs require vastly more input data and training than we can currently account for, which remains an open question in the statistical physics of computation (see Lenka Zdeborova's Simons Foundation lecture Statistical Physics of Machine Learning, for example)?
Aschenbrenner makes the case that human-replacing, expert artificially intelligent workers could exist by 2027-2030. The details of how they'll work and how much they'll cost, however, are unknown and highly speculative. Aschenbrenner largely elides this. In his defense, his overall argument is that we need to plan carefully for the possibility in case it happens (because, by his argument, it means super AI will be soon to follow), not that he knows for sure it will happen. But he also says that he's extremely confident it will happen. Anyway, the hosts of the excellent science and critical thinking podcast The Skeptic's Guide to the Universe wrote a whole book about thinking about the future and how we get technology predictions so wrong. History and literature are littered with examples of people getting it very wrong. Not so much with people getting it right!
Despite what the Aschenbrenners and Suleymans of the world say, the development of artificially intelligent workers that can replace human workers (Aschenbrenner specifies this clearly when he refers to AI workers as "drop-in replacements" for remote workers) is no guarantee. We haven't even gotten to discussing a super AI yet. Consider, for example, the fact that humans use far more common sense and experiential intelligence than we typically realize. An artificially intelligent worker might have legitimately expert knowledge of facts and superior pattern recognition capabilities, but have considerable problems doing anything beyond answering questions and doing tasks that are often already automated without AI. Leslie Valiant addresses the issue of common sense directly as it relates to machine learning in his book Probably Approximately Correct:
"In order to understand a novel many facts need to be known that are not stated in the novel; many are so obvious that they are nowhere stated in print. This is not merely a feature of complicated adult novels; it has been remarked that children's stories require almost as much common sense knowledge as do novels written for grownups. Unfortunately for Turing's dream, babies arrive miraculously well informed and well prepared to be informed even better."
Valiant was referring to the fact that humans benefit from a kind of learning that is encoded in our DNA from billions of years of evolution on Earth, and also from other kinds of embodied learning. Currently, to my knowledge, these kinds of learning remain poorly defined, so we can't assume machines will benefit from the same kinds of learning that humans do, nor can we assume machines will "see the world" the same way we do. As mentioned previously in this post, this represents a significant blind spot in discussions of AI. Aschenbrenner and Suleyman are no different in leaving out these topics. Intelligence, learning, and consciousness are almost never defined, and it's assumed that AI thought and consciousness are the same as human thought and consciousness. Yet we know that even human thought is not well defined, and individual human experiences of their own thoughts seem to vary considerably (I wrote a blog post about this recently). For this reason, it's nearly impossible to give a measure of likelihood to any speculation about what future forms of AI will achieve.
To be clear, I'm not suggesting that we cannot one day create, or shouldn't try to create, AGI or super AI. I simply mean that there could be significant, non-trivial, or even damning challenges that come up along the way and slow or stop progress on more advanced AI altogether. Will AI be like fusion energy? Fusion energy is clearly theoretically possible; we can even achieve ignition and energy output in specific controlled experiments. But so far it has been extraordinarily difficult to scale up and turn into a commercially viable source of energy. Or will it turn out even worse than that, with AGI and super AI not achievable at all?
Part 8: Is a 10x Manhattan Project Realistic?
Aschenbrenner even acknowledges that he is "AI-pilled" when he talks about the absurd amount of resources that would be necessary just to supply microprocessors for the development of AGI and the creation of a super AI. I appreciate his specificity and clarity on the economic challenges, as opposed to Suleyman's vague techno optimism that brushes the challenges under the rug, but let's consider the projected expenditures. Around page 86 Aschenbrenner writes:
"A new TSMC Gigafab costs around $20B in capex and produces 100k wafer-starts a month. For hundreds of millions of AI GPUs a year by the end of the decade, TSMC would need dozens of these [...] It could add up to over $1T of capex."
Around page 82 of Situational Awareness he compares the yearly funding needed to achieve AGI to the cost of the Manhattan Project, and the Manhattan Project was, in fact, considerably cheaper, even as a percentage of GDP. According to Aschenbrenner the Manhattan Project at its peak was about 0.4% of GDP (about $100 billion, inflation adjusted to today's dollars), compared to the $1 trillion a year he projects for the development of AGI (3% to 4% or more of GDP). Yes, roughly ten times that of the Manhattan Project! In my opinion, in order to justify such a truly enormous outlay of resources, over such a short period of time no less, there would need to be a clear and present existential threat to the US. I think this is a much more significant problem with the argument than Aschenbrenner gives credit for. It's one thing to say the US should be first in super AI. It's quite another to say we need to spend at least $1 trillion every year on achieving it. I said earlier that extraordinary claims require extraordinary evidence (a phrase Carl Sagan popularized). In this case we could modify it to: extraordinary outlays of national resources require extraordinary justification and extraordinary national leadership.
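The arithmetic behind that comparison is simple enough to check ourselves. In the quick calculation below, the roughly $27 trillion US GDP figure is my own rough assumption (circa 2023), not a number taken from Situational Awareness:

```python
# Quick sanity check of the "ten times the Manhattan Project" comparison.
# The ~$27 trillion US GDP figure is my own rough assumption (circa 2023).

US_GDP = 27e12                # assumed US GDP, in dollars
AGI_SPEND_PER_YEAR = 1e12     # Aschenbrenner's projected ~$1 trillion/year outlay
MANHATTAN_PEAK_SHARE = 0.004  # Manhattan Project at its peak, ~0.4% of GDP

agi_share = AGI_SPEND_PER_YEAR / US_GDP
print(f"Projected AGI outlay: ~{agi_share:.1%} of GDP")                               # ~3.7%
print(f"Versus the Manhattan Project peak: ~{agi_share / MANHATTAN_PEAK_SHARE:.0f}x")  # ~9x
```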
Aschenbrenner brings up the need to keep AGI model weights and algorithms secret. I agree on this. They do need to be kept secret and in a serious, national security type of manner. However, this too very much undermines his argument for speed and reducing regulation. The Manhattan Project was government run and heavily regulated for security, budget, resources, etc. Aschenbrenner and Suleyman both contend we should do the opposite: keep most regulation out of it and spend as much money as possible. In fact, they are typical tech entrepreneurs who attempt to externalize the costs of security, regulation, etc while maximizing their own financial benefit. What we really should be taking from their arguments is that artificially intelligent workers who can replace humans, AGI, and super AI should be heavily regulated and largely in the hands of the government where security and resource allocation can be highly controlled and in the national interest. Not because government is the best at this, but because private industry can't muster these kinds of resources while also defending against nation state cyber attacks and doing what's in the best interest of the country as a whole.
Part 9: Should We Slow Down The Pace of Development Instead?
Aschenbrenner and Suleyman both rely on the speed fallacy: that the US needs to be first in the AGI and super AI race or else China will get there first. But as I mentioned above, there is no realistic fear that China will get there first (without stealing it), and neither of them lays out any details to support their fear that China would be able to. Aschenbrenner, to his credit, brings this up, although it's buried at the end of his book around page 132, and in my opinion he is overly optimistic with regard to China's capabilities: yes, China theoretically could do the necessary things, like build out the capacity to generate more electricity, in the simplistic sense that an authoritarian government can quickly direct resources to such a thing. On the other hand, doing so at an enormous cost as a percentage of their GDP, while the West is actively restricting AI engineering and microprocessor resources and dealing with an ongoing trade war, seems like a significant and effective act of containment to me. So I disagree that we have to throw caution to the wind and go as fast as possible. If the US had specific evidence that China was building out its AI infrastructure at worrisome speeds, then I would think differently, but as we see this isn't borne out by logic or evidence. Check out also a recent Lawfare Daily podcast episode with China policy experts discussing AI that is significantly less alarmist than Aschenbrenner and supports my position.
The US should be slowing down in our race to AGI and super AI so that we can develop safer and more regulated systems with greater government ownership, and, yes, securing these systems through secrecy and national security clearances, etc. I'll outline this further below.
Aschenbrenner's arguments about superalignment are equally half-baked, although at least he tries to articulate the issues rather than glossing over the challenges completely like Suleyman does. Superalignment is the name for the collection of techniques and methods for ensuring a super AI remains controlled by humans. Aschenbrenner talks about superalignment as though superintelligence and agency are the same thing as consciousness and desire. As I mentioned earlier, these are enormous assumptions that he never addresses directly. Having a truly air-gapped AI system with national defense level security protocols in place for human access should be sufficient for containing the AI. Aschenbrenner doesn't provide details or evidence contradicting this. We can implement stringent access protocols that limit the AI's ability to manipulate users (e.g., review and censorship of all user input and AI output, etc.). Experiments can be run on methods of superalignment, as can measures like the ability to pause computation. These seem like reasonable backstops on bad or wild AI; assuming they are implemented in a zealous national security style (i.e., owned by the federal government and treated as a state secret), we can at least avoid the worst scenarios such as super AI exfiltrating itself, devising and enacting alien or unknowable plans and theories, manipulating users, etc. Presumably, if humans can learn quantum physics then we can, given time and resources, learn and understand the output and behavior of a super AI, no matter how alien. We just need the infrastructure and planning to keep us in the loop and in physical control of the super AI. Aschenbrenner does not address why he thinks this could be more difficult than I present here, or even impossible.
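To make the access protocol idea a little more concrete, here is a minimal, purely illustrative sketch of the kind of human-in-the-loop review gate I have in mind. Every name in it (gated_query, human_review, the stand-in model) is hypothetical; it describes no real system or API:

```python
# A minimal, purely illustrative sketch of a human-in-the-loop review gate for an
# air-gapped system. All names here are hypothetical stand-ins, not a real API.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ReviewDecision:
    approved: bool
    reason: str = ""

def human_review(text: str) -> ReviewDecision:
    # In a real deployment this would be one or more cleared humans reviewing the
    # text before it crosses the air gap; it is stubbed out here so the sketch runs.
    return ReviewDecision(approved=True)

def gated_query(model: Callable[[str], str], prompt: str) -> Optional[str]:
    """Every prompt and every response passes a human review step; either side of
    the exchange can be halted before it reaches the other."""
    if not human_review(prompt).approved:
        return None            # prompt rejected; it never reaches the model
    response = model(prompt)
    if not human_review(response).approved:
        return None            # response withheld from the user
    return response

if __name__ == "__main__":
    echo_model = lambda p: f"(model output for: {p})"  # dummy stand-in model
    print(gated_query(echo_model, "Summarize the containment protocol."))
```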
Aschenbrenner and Suleyman both treat super AI as an unstoppable and unknowable force of nature. This is all the more reason to have it owned by the government, heavily regulated and controlled, and have it be directed at specific aims. Why would we allow a private company to wield a super AI? And for what purpose anyway? Clearly super AI ought to be used for the benefit of humanity and the national interest. Not for the purposes of creating profit for a small group of unelected private business people and investors. This is why it's so important to read between the lines from people like Suleyman and Aschenbrenner and understand why they want us to believe so much in super AI and their urgency that we achieve it as quickly as possible. Ultimately, this is about power and it is stupefying to imagine a private company wielding this much power, assuming the predictions of these AI-pilled folks come to fruition.
We talked about Aschenbrenner's calculations of how much all of this will cost. Let's look again at externalized costs: private AI entrepreneurs want to externalize over $1 trillion a year in infrastructure and development costs (if we go with Aschenbrenner's projections), plus the costs of removing business and energy regulations (e.g., increased CO2 emissions, increased fraud, etc.) and whatever else might be involved in that. So taxpayers should foot the bill (and deal with the fallout) so that private business people can profit? And what if super AI never materializes or takes far longer than expected to develop? This should be a complete non-starter for any taxpayer. Suleyman in particular has an enormous blind spot here. On the other hand, Aschenbrenner does eventually make the argument for full government ownership, but not until the end of his book. It feels like Aschenbrenner needed to scare himself with his own speculative fears of super AI before he could bring himself to state this.
Part 10: Will Super AI Be An Uncontrollable Monster?
In Aschenbrenner's defense, he does also bring up air-gapping and guarding against user manipulation around page 123. But then on the next page he makes the truly stupefying statement:
"True superintelligence is likely able to get around most-any security scheme[.]"
If that were true and super AI is truly uncontrollable, then why did he recently found an investment firm focused on investing in AGI? After all, he claims AGI is a necessary precursor to a machine intelligence explosion that will inevitably result in super AI. I imagine that investors are looking for a commercially viable product, not something that, according to Aschenbrenner, will directly lead to a scary and uncontrollable "superintelligent" monster. Sounds more like liability to me. Furthermore, in that scenario wouldn't humanity be better off following the sage advice of Joshua, the fictional AI in the 1983 movie War Games? Maybe creating super AI is as Joshua says, "a strange game. The only winning move is not to play." I guess Joshua is pretty level headed, as far as AIs go.
Aschenbrenner's flights of fancy are more detailed and therefore more insightful than Suleyman's. Suleyman simply marches through one historical example of technological advancement after another, as if that were justification enough to prove that AI will revolutionize everything. This conflation of vaguely related far-future possibilities with actual current progress is one of the most common techno-optimist fallacies they use to manipulate the AI-pilled and investors. Because Aschenbrenner does present evidence, we can see with one example how he too makes amazing leaps of logic that aren't supported. On around page 134 he writes:
"A dictator who wields the power of superintelligence would command concentrated power unlike any we've ever seen [...] Millions of AI-controlled robotic law enforcement agents could police their populace; mass surveillance would be hypercharged; dictator-loyal AIs could individually assess every citizen for dissent, with advanced near-perfect lie detection rooting out any disloyalty."
That sure is scary! But having a super AI come up with these things is one thing; actually implementing them and gathering the resources necessary to carry out their manufacture is quite another. If super AI and cheap, easy-to-make robots were sure things, then I agree, every country with a draconian government, like China, North Korea, and others, would be deeply investing in super AI and robots as a means of state control of their populace. But right now they aren't. Not only is this pure speculation, but it would likely require far more resources to pull off than even creating the super AI to begin with. Earlier I posited that China is the only adversary of the US that could theoretically muster the resources to build a super AI, but realistically can't. I don't buy that they could also muster the additional enormous resources needed to build a robot army.
How about in the US? Could a US president use super AI to become a dictator of the US? In places like the US we can fall back to separation of powers, federalism, and classic types of layered regulation. A US president would certainly have the authority to use a nationalized super AI, just as they have authority for launching nuclear weapons. For that reason, US voters will need to be aware of the super AI's capabilities to some degree so that they can ensure their elected officials in Congress regulate it. If Congress is properly balancing the power of the Executive branch and effectively providing oversight, then the US president won't be able to utilize super AI for their own self-interested ends. It's not a perfect system, but nor is it weak, inconsequential, or ineffective. In fact, we rely on it everyday. I do think it's vitally important that US voters bring their concerns to their representatives early and often.
Part 11: Can A Dictator Build A Robot Army Using Super AI?
Let's take a short step back to look more closely at the details of Aschenbrenner's basic example of an existing dictator utilizing super AI. A dictator outside of the US would probably not create a super AI (recall the price tag is in the trillion-dollars-a-year range, with enormous data center, microprocessor, and energy costs that far outstrip anything currently in existence on the planet). The dictator would steal the model instead. This supports my thesis: the US government should work on AGI and super AI as a state-owned secret and not allow private companies to work on it. Once the dictator stole the super AI and spent time building out systems of control and infrastructure to run and interact with it (let's assume that's relatively trivial for the purpose of this highly speculative thought experiment), they would use it to create detailed plans for a robot army. For simplicity, let's make yet another assumption: the plans are easy to understand. Ok, well now the dictator needs to turn the detailed plans generated by the super AI into a reality by doing a lot of work. Maybe the dictator is lucky and controls a land area with plentiful resources well suited to manufacturing robots. Ok, but none of what's needed aside from the raw materials exists yet. The dictator might force their subjects to build the initial facilities, which presumably would become automated and self-replicating in some fashion so that work on the robot army could eventually continue without additional human help. But these initial steps are no different from already existing issues with the arms trade. Dictators already use forced labor, participate in the illegal arms trade, and spend more money on their own ends than on their subjects' well-being.
These questions always rest on relative risk and likelihoods of success. How big of a robot army could China build? How about North Korea? What advanced manufacturing techniques and technology will be necessary and how long will it take to get everything built out and in working order? Presumably they will need advanced microprocessors, various kinds of specialized robot mechanisms, etc which would require an entire vertical chain of manufacturing resources that will need to be tested and proven before scaling up. How long will that take? How would they ensure control of the robots? What if the US hacked the robots and took over some or all of the army? I think you can see just how speculative all of this is. It is very hard to justify proactive actions based on such flimsy speculation. It may be that a super AI could help the dictator find new and cheaper or more creative ways to keep them in power and maybe expand their power, but realistically resources need to come from somewhere and the international politics of trade and war will not, as far as I can tell, be dramatically changed. After all, we already live in a world of various kinds of containment and mutually assured destruction. We need to take threats seriously, but we also need clarity of the space of both what's possible and when in time the threats will become possible. Then the question becomes what do we need to do now? I think we're largely doing what we need to do now: pulling the levers of trade and diplomacy to slow our adversaries' pace of AI development.
While the general fear of adversaries obtaining advanced AI is valid, we shouldn't let it get the better of us and start assuming dictators will immediately have unimaginable power the moment they turn on their AI. We want to keep super AI out of dictators' hands and the only way to do that is through stringent and careful government ownership. Not by racing headlong into super AI with competing private companies. It's not until about page 141, close to the end of his book, that Aschenbrenner states what we realized earlier in this post:
"I find it an insane proposition that the US government will let a random [San Francisco] startup develop superintelligence. Imagine if we had developed atomic bombs by letting Uber just improvise."
Part 12: Are AI Accountants A More Clear And Present Danger than Robot Armies?
It's not only that AGI and AI AI engineers (artificially intelligent AI engineers) could theoretically help create a super AI. What happens even before we get to super AI? Will lawyers be replaced by artificially intelligent lawyers? How about AI accountants? Engineers? Teachers? Doctors? Scientists? If we also take seriously Aschenbrenner's and Suleyman's predictions about an explosion of robot technology, then the list needs to include every kind of manual labor and customer service job as well. Their discussions of proliferating artificially intelligent workers entirely leave out how, if the predictions come true, they would utterly upend numerous professions and create one of the most enormous socio-economic disruptions in modern history. Yet Aschenbrenner wants us to focus on national security and winning the arms race instead. Suleyman just wants us to do nothing more than embrace and adapt to new technology, come what may!
Admittedly, Suleyman finally brings up this AI worker issue on page 178 of his book, but other than saying it's a problem he offers nothing. In fact, he uses it as evidence that modern society and nation states are fragile. This fragility is a central theme of his book. The basic argument is that technology empowers more people and organizations, the balance of power then shifts, and, according to him, the modern nation state might not survive these changes. I don't think he provides good evidence for this, and without an awful lot of speculation it is only tangentially related to AI anyway. The thing to take away is that, for Suleyman, the technology he personally financially benefits from also just happens to be the technology he believes will change human communities and organizations such that nation states will no longer exist, causing a re-balancing of power so that people like him have more power in the vacuum left by nation states. Quite literally, he is saying that he wants us to embrace "the coming wave" of new technologies so that people like him have more power and nation states have less. It's a wonderful fantasy. For him. I urge you not to give in to it.
These narratives remind me of billionaire Ray Dalio. I'm reading Rob Copeland's The Fund: Ray Dalio, Bridgewater Associates, and the Unraveling of a Wall Street Legend, and Copeland makes a stellar case by showing the painfully obvious evidence that Dalio has declared the US to be going into a recession (or even depression!) nearly every year since about 1982. He has a habit of scaring people. It turns out, doing so can be good for his business, because then people trust him with their money: Dalio scares you about losing money and then shows you how only he can save you from losing money. As luck would have it, in 1987 a reporter, and in 2007 a future Treasury Secretary, heard his pronouncements without the context of all of the times Dalio had incorrectly called a recession, and declared Dalio a prophet who foretold those market crashes. A broken clock appears to be right twice a day. Dalio capitalized enormously on his luck both times and grew his hedge fund immensely. People believed in Dalio's ability to see the future, and many still do. Dalio has benefited both financially and in terms of power. His hedge fund wielded immense power by controlling hundreds of billions of dollars, including for pension funds, and also because Dalio had direct access to powerful bureaucrats in the US Treasury and politicians who listened to his advice. Advice from a broken clock.
I'm not the only one suggesting these kinds of narratives are fantasy, mere rationalization of luck and circumstance. Around 2012 Dalio penned his magnum opus, How the Economic Machine Works. Copeland details British historian and Harvard professor Niall Ferguson's review of the work:
"One of Ferguson's animating philosophies was that Western civilization was more fragile than it appeared [... he] read the more than one-hundred-page document [How the Economic Machine Works] sent over from Bridgewater [Dalio's hedge fund] laying out the economic machine. He noticed almost immediately what he considered to be fundamental flaws. The paper ignored that one nation's culture might lead to better or worse economic outcomes. It also discounted what Ferguson called 'the caprices of decision makers,' including the role of human agency and ingenuity that could, for instance, lead one country to declare war on another, or to choose peace. If this work had been done by one of his graduate students, Ferguson would have flunked the person. He couldn't believe that he was reading, as he put it, the 'holy texts' of Bridgewater. [...]
While it was possible to cherry-pick historical examples of nations that had collapsed under their debts, plenty of countries had grown fast enough to render the debts moot. Also: wars, coups, cultural changes, competing legal systems, effective and ineffective political leaders, and all sorts of other factors, including human consciousness, couldn't ever be quantitatively measured, let alone cleanly placed into a formula. 'There is no cycle of history. It's a fantasy,' Ferguson [told Dalio]. Dalio jumped to his feet, shaking. [...] 'Where's your fucking model, Niall?' he bellowed at his guest."
Of course, Ferguson's criticisms sound similar to my criticisms of AI narratives. Why should we let the interested parties in AI scare us in the same way Dalio does? Might all of these people just be financially interested, power hungry, broken clocks who incessantly declare doom for their own ends?
Part 13: Is Technology Dematerializing... Things?
Suleyman conflates economics with technology (an old and fallacious trope) when he writes on page 189:
"In the last [technology] wave, things dematerialized; goods became services. You don't buy software or music on CDs anymore; it's streamed. [...] Everywhere you look, technology accelerates this dematerialization, reducing complexity for the end consumer by providing continuous consumption services rather than traditional buy-once products."
As an example, he brings up "Uber, DoorDash, and Airbnb". I hate to break it to him, but DoorDash has struggled to even generate a profit. These companies also didn't replace existing things in the way that streaming music technology replaced physical CD technology, although even that is a mischaracterization: digital music stored on hard drives and RAM, served from data centers over the internet to smartphones, is what has largely replaced physical CDs, CD players, and the logistical chain supporting them. In case it's not obvious, you can't actually "dematerialize" taxis (Uber), beds (Airbnb), or food (DoorDash)! What Suleyman is referring to is an economy that is service-based rather than manufacturing-based. He wants so badly for technology to be the invisible hand not just of the economy, but of all humanity. I'm not convinced, and I don't think you should be either.
Summary of Where Suleyman and Aschenbrenner Have Led Us So Far
Throughout his book Suleyman repeatedly uses the old creationist rhetorical technique called the Gish gallop: he brings up new historical tidbits and quotes (notably with limited or no context) that seem related to each other, so that it sounds like he has great wisdom and many independent lines of evidence to support his points. Aschenbrenner does the same thing when he starts trotting out the numerous things he thinks will be possible once we can wield super AIs. When we've drilled down on some of these in this blog post we've found they have little support, but it's hard to address every single clichéd and cherry-picked anecdote they bring up. Suleyman also simply declares the modern nation state to be "fragile", "weak", "nervous", and "impulsive" with no evidence, and then proceeds to explain how this is caused by the very waves of technology that he says we must accept.
The last time I checked, modern human civilization is great not only because of embracing technological change, but also because we are capable of adapting our behavior. We can learn and change. We can learn and change so that we treat all people more equally over time, more effectively help those in need, reduce disease and famine, and lessen violence and serious crime. None of that stems directly from technology. Technology can be used as a tool to aid those things just as easily as to make those things worse. We don't have to follow Suleyman's blind worship of technology and the idea that we have no control over it. We often do have control, in fact. It is very telling that Suleyman can't bring himself to acknowledge this in any meaningful way in his book and that he instead focuses only on how great and unstoppable technology is. Ultimately, it's a nonsensical polemic that I believe is made to paper over the egos and responsibilities of privileged people.
Part 14: Can A Super AI Do Super Hacking?
Where Aschenbrenner beats his magical robot war drum, Suleyman trots out Lawnmower Man-style AI hackers. To be fair, Aschenbrenner is also pretty vague about his super hacking super AI claims, but Suleyman takes it to a Hollywood level of absurdity on page 162:
"Now imagine if, instead of accidentally leaving open a loophole, the hackers behind WannaCry had designed the program to systematically learn about its own vulnerabilities and repeatedly patch them. Imagine if, as it attacked, the program evolved to exploit further weaknesses. Imagine that it then started moving through every hospital, every office, every home, constantly mutating, learning. It could hit life-support systems, military infrastructure, transport signaling, the energy grid, financial databases. As it spread, imagine the program learning to detect and stop further attempts to shut it down. A weapon like this is on the horizon if not already in development."
That super AI hacker sure sounds scary! But if you read carefully it falls apart pretty fast. For example, he conflates ransomware (a program or collection of programs) with the hackers who use various technical and social techniques to get the ransomware onto the victim's machine and then execute it. Does Suleyman, someone who wants us to take him seriously as an inventor of AI, really expect us to believe that malware will be conscious or have some kind of agency? The quote above says "imagine the program", emphasis mine. But the fact is, even an AI model that merely has a human preschooler level of "intelligence" (let alone one that can learn and reason at that level!) is quite large, running a gigabyte or more in size, and would need to be run with additional software and inputs it could understand, not merely dropped on a random computer in its basic form as a model. No knock on human preschoolers, but I don't think of them as particularly good at hacking. My own daughter, a ten-year-old, recently asked an AI chatbot "how to hack my dad's ipad", and it declined to answer on the basis that it's not allowed to tell its users how to hack. Now that's an aligned AI!
Due to the malware's size and running time it should actually be quite easy to spot and stop from a technical cybersecurity standpoint. Of course, it might get through in some cases, but it certainly wouldn't be unstoppable or invisible. It could take an hour or more to download somewhere and probably minutes, hours, or even days to "think" through its next steps. This super AI hacking program would not be running on an enormous data center cluster anymore; it would be on the victim's machine, remember? So instead of having the blazing speed and power of thousands of GPUs, it might be running on your smartphone or a corporate-issued, ten-year-old, four-core Intel CPU. I suspect it would take a little longer to make its computations in that case. I don't know. Just running sentiment analysis machine learning models on text from novels can take a lot of RAM and time (tens of minutes to hours) on my own laptop. I can't say for sure that a future AI model might not be very small and very flexible and thus not require much RAM and CPU, but we're now in the territory of pure speculation. Check out open source AI models on HuggingFace to see for yourself how large these things currently are. They tend to be made up of millions or billions of parameters (weights). Usually, the more flexible the model, the more parameters it has. An "evolving", persistent malware threat would likely need to be very large indeed.
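If you want a rough sense of that scale for yourself, here's a minimal sketch (assuming the Hugging Face transformers library with PyTorch and the small, freely downloadable "gpt2" checkpoint; the numbers are purely illustrative, and today's frontier models are orders of magnitude larger):

```python
# A minimal sketch of checking how big a model is. Assumes the Hugging Face
# `transformers` library (with PyTorch) is installed; "gpt2" is one of the
# smallest freely available model checkpoints, used here only for illustration.
from transformers import AutoModel

model = AutoModel.from_pretrained("gpt2")

num_params = sum(p.numel() for p in model.parameters())
size_gb = num_params * 4 / 1e9  # assuming 32-bit floats: 4 bytes per parameter

print(f"parameters: {num_params:,}")            # roughly 124 million for gpt2
print(f"approximate size in memory: {size_gb:.2f} GB")
# Models with billions of parameters scale this up by 10x to 1000x.
```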
Granted, if the malware could make internet requests to a command center then it could have more formidable compute resources behind it. However, from a cybersecurity and forensics perspective we would expect that to also make it a lot easier to identify (oh, look at all those odd network requests!) and also to narrow down where its command data center is located. Plus, the round trip time for numerous requests would also slow it down: the internet often seems nearly instantaneous, but as any web or mobile app developer knows, variations in networks, making large numbers of requests to remote servers, and sending large amounts of data can all lead to considerable latency. My point is not that super hacking isn't possible in some general and theoretical sense, but rather that the specific fears brought up in these books are not founded on evidence or sound technical details. They rely instead on the public's lack of understanding of what hacking actually is, as well as the false notion that a small software program will itself have super AI capabilities. Suleyman calls it a "worm", although that term has a more specific meaning in cybersecurity than the way he uses it.
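To make the latency point concrete, here's a trivial sketch (assuming the requests library; the URL is just a placeholder) of timing a single round trip. Any real command-and-control chatter would involve thousands of such calls, each paying this cost.

```python
# A minimal sketch: measure the round-trip time of one HTTP request.
# Assumes the `requests` library; the URL is a placeholder, not a real
# command center. The point is only that every remote call costs real
# wall-clock time, which multiplies quickly across many calls.
import time
import requests

url = "https://example.com"  # placeholder endpoint

start = time.perf_counter()
response = requests.get(url, timeout=10)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"status {response.status_code}, round trip: {elapsed_ms:.0f} ms")
# Even ~100 ms per call means thousands of calls add up to minutes of waiting.
```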
While we can imagine super AI achieving amazing feats, I don't think it will magically overcome the known laws of physics and computation to shrink itself down and prolifically develop algorithms that are orders of magnitude more efficient than anything we know of. Especially not while using an old CPU, 16 GB of RAM, and a Comcast internet connection. Regardless, we're speculating wildly here. Currently, AI models are trained and don't "learn" much, if at all, after the training. That's kind of an important detail, which we'll return to shortly. And anyway, why would a super AI with super abilities even need to hack us in the ways Suleyman and Aschenbrenner suggest? It seems unnecessary at that point, when it could probably achieve far more significant things.
Part 15: What Is The Current State of The Art for Commercial AI?
To hammer the point home: Aschenbrenner and Suleyman are speculating not based on the current abilities of machine learning, but on totally new, never before seen abilities. For example, Large Language Models, which are the state of the art in AI at the moment, don't actually do much learning after their models are built. This is because the main machine learning systems feed vast quantities of digitized data, such as websites, to their learning algorithms, which spend enormous amounts of energy on innumerable computations and spit out a model at the end. It's a compressed, computerized form of learning and the best we know of for machine learning right now. From there, the latest commercial systems do some additional training called "reinforcement learning from human feedback" (RLHF) which, as the name implies, involves humans (much slower!). This step helps the system develop something like behavioral guardrails or norms so that its output is more in line with what is culturally acceptable to humans: learning that lying is bad, that making up facts is unacceptable, and so on. This process is far from perfect, but it makes the systems much more useful to people.
Notice also that we're once again butting up against the issue of what we even mean by intelligence and learning. I don't have a detailed answer for you. It is, after all, an open question. Regarding AI, some people started the ArcPrize competition to suggest that there are certain reasoning skills that an AGI should have: https://arcprize.org. They use what they call the Abstraction and Reasoning Corpus (ARC-AGI), which might be described as a collection of computational puzzles that are easy for humans but hard for AI. To win the top prize an AI must achieve an 85% success rate on the puzzles. The current leader at the time of this writing is MindsAI at 39%. This is important work in the area of developing benchmarks and standards; without things like this it is nearly impossible to compare different AI models. That said, I don't know how the reasoning puzzles are chosen for ARC. Are these puzzles little more than an AI IQ test, which might not be a true measure of intelligence? How can we know, if we don't even have human intelligence concretely defined and reliably testable?
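For readers who haven't seen the puzzles, here's a toy, made-up example of their general flavor. The real ARC-AGI tasks are small integer grids stored as JSON with training and test pairs; this particular puzzle and the code are my own illustration, not an actual ARC task. The solver has to infer the hidden rule (here: mirror each row) from a couple of examples and apply it to a new grid.

```python
# A toy puzzle in the spirit of ARC-style tasks (my own illustration, not a
# real ARC-AGI task). The hidden rule: reflect each grid left-to-right.
toy_task = {
    "train": [
        {"input": [[1, 0], [2, 0]], "output": [[0, 1], [0, 2]]},
        {"input": [[3, 0, 0], [0, 4, 0]], "output": [[0, 0, 3], [0, 4, 0]]},
    ],
    "test": [
        {"input": [[5, 0, 0]]},  # a human quickly guesses [[0, 0, 5]]
    ],
}

def solve(grid):
    """Apply the rule a human infers from the training pairs: mirror each row."""
    return [list(reversed(row)) for row in grid]

# Check the inferred rule against the training pairs, then apply it to the test.
for pair in toy_task["train"]:
    assert solve(pair["input"]) == pair["output"]

print(solve(toy_task["test"][0]["input"]))  # [[0, 0, 5]]
```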
The inventor of the ArcPrize, François Chollet, did a detailed deep dive into his perspective on intelligence, as well as how LLMs work, with physicist Sean Carroll on the latter's podcast Mindscape. I highly recommend this episode, check it out! If you listen to it you'll hear Chollet explain very clearly that his own definition of intelligence completely undermines the idea that increasing compute resources will be enough to make an LLM like ChatGPT more intelligent. That is, Aschenbrenner's order-of-magnitude argument is invalid. Granted, Chollet is arguing against a bit of a strawman version of it: Aschenbrenner doesn't say that more compute alone is enough, but rather that more compute combined with new machine learning techniques will be. Aschenbrenner also points out that as companies like OpenAI become more closed-source to protect their proprietary discoveries, the public increasingly lacks situational awareness of AI's development. Who should we believe in this case, Chollet or Aschenbrenner? They are both financially benefiting from the development of AI. I don't know, but based on the information available I think experts like Chollet give us legitimate reasons to be skeptical of the doomers, alarmists, and others who insist we have to build super AI as fast as possible.
I'll leave you with a couple of recent papers showing that at least by some measures the reasoning capability of current state of the art AI models remains low or different than human reasoning:
- LingOly: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages: https://arxiv.org/html/2406.06196v2
- Evaluating Large Vision-and-Language Models on Children's Mathematical Olympiads: https://arxiv.org/abs/2406.15736
Part 16: Will AI Put Bioweapons in The Hands of Evil People?
One more example of the possible scary power of super AI that both Aschenbrenner and Suleyman bring up is bioweapons. As with robots and hacking, there is a significant difference between merely identifying a potential bioweapon and actually synthesizing it, testing it in the real world, and releasing it in a way that achieves an objective. Given the way Aschenbrenner and Suleyman refer to all of these doomsday scenarios (robot armies, super hackers, and super deadly bioweapons), you would be forgiven for imagining that the mere existence of super AI would bring these things into reality almost immediately. In reality, each of these things requires physical resources, real world know-how, and probably significant amounts of controlled testing. Even if a super AI could do all of that, it would take real time and real physical resources to carry out.
Sure, in theory a super AI could simulate lots of things, and if it has enough computational resources it might even be able to simulate the real world in speed runs; in theory this would only be limited by its model of the world and its computational resources. In practice, though, I think it's very likely the super AI would have to actually carry out testing in the real world with physical resources. Otherwise, all of its inventions, good or bad, could easily suffer from significant flaws that built up from even very small errors in its models of reality. These flaws are the kinds of things we humans are already good at fixing: a so-so essay written by ChatGPT right now can pretty easily be turned into a good essay by having a human review it and make small, but very meaningful, changes. Maybe a super AI could make those kinds of changes to a ChatGPT essay.
However, we're not talking about essays. We're talking about advanced, highly complex physical inventions that require significant resources, in some cases entire vertically integrated supply chains, to produce. Even if the super AI had humans working for it, they would need to carry out all manner of physical and economic resource planning and construction, oversee robots that in all likelihood will similarly not be 100% human-capable in the real world, and so on.
Let me provide more clarity with a real example. A good friend of mine started an open source and public meetup at a community biolab. There are open, community biolabs all over the world, including in places like San Francisco and New York (you can see the one we were at here: Counter Culture Labs). My friend had an idea to create a full web-to-bacteria stack. That sounds like tech word salad, but from a practical standpoint he wanted to give people a way to turn pixel art into bacteria-produced art and automate as much of that process as possible. To explain further, because it probably still sounds bizarre: a user goes to bioartbot.org (check out the site!), creates pixel art using the web interface, and that pixel art is sent to BioArtBot's server where it is converted into something a pipetting robot (a.k.a. a liquid handling robot, an Opentrons brand one in this case) can understand and turned into a job for the robot.
Quick side note, the last time I was working with my friend on this project was years ago so the steps I mention here reflect where we were at that time (2019?). Back to the steps: the robot now has a job in its queue. I think it was at this point that we needed to intervene manually, because the Opentrons robot we had (which was graciously donated to the lab by a local biotech startup that went out of business) needed to be given a bacterial culture in a petri dish. In fact, we needed to handle these petri dishes by hand, using petri dishes with bacteria food (agar) that we purchased along with color-expressing bacteria that we also purchased. We had to use specialized, but mostly manual equipment in the lab (all donated) to handle things in a way to reduce contamination. We lost bacterial cultures sometimes because we apparently weren't consistent enough with our clean handling techniques. We'd store the bacteria in the lab, prepare the Opentrons pipettes with the liquid bacterial mediums for each color, and then position a petri dish in the robot's main housing. Then we could tell the robot to start its next job, which it would carry out automatically, placing drops of color expressing bacteria medium in the mapped grid the robot computed. Humans had to come back once the robot was done placing drops, remove the petri dish (which now contained drops of color-expressing bacteria mapped to the original pixel art), and manually place it in an incubator which would enable the bacteria to grow. As the bacteria grew it would naturally expel colored waste, thus creating the art right there in the petri dish. Then a human would take a picture of the art and email it to the original pixel art creator, post it to the website, job done.
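To give a flavor of just the software step in the middle, here's a minimal, hypothetical sketch of turning a pixel grid into a list of dispense instructions for a liquid handling robot. The names, spacings, and volumes here are my own illustrative assumptions, not the actual BioArtBot code, and this is the easy part; everything around it (sterile handling, incubation, storage) stayed manual.

```python
# A hypothetical sketch: convert a small pixel-art grid into dispense steps
# for a liquid handling robot. The color codes, plate geometry, and volumes
# are illustrative assumptions, not the real BioArtBot pipeline.
PIXEL_SPACING_MM = 4.5   # assumed distance between droplets on the agar plate
DROP_VOLUME_UL = 2.0     # assumed volume of bacterial medium per "pixel"

# 0 = empty; other integers map to a reservoir well holding that color's bacteria
pixel_art = [
    [0, 1, 1, 0],
    [1, 2, 2, 1],
    [0, 1, 1, 0],
]

color_to_source_well = {1: "A1", 2: "A2"}  # where each colored culture lives

def pixel_art_to_job(grid):
    """Turn a grid of color codes into (source well, x, y, volume) dispense steps."""
    steps = []
    for row_idx, row in enumerate(grid):
        for col_idx, color in enumerate(row):
            if color == 0:
                continue  # nothing to dispense for empty pixels
            steps.append({
                "source_well": color_to_source_well[color],
                "x_mm": col_idx * PIXEL_SPACING_MM,
                "y_mm": row_idx * PIXEL_SPACING_MM,
                "volume_ul": DROP_VOLUME_UL,
            })
    return steps

job = pixel_art_to_job(pixel_art)
print(f"{len(job)} dispense steps")  # 8 colored pixels -> 8 steps
print(job[0])
```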
It's a fun project. What I wanted to show here is that it was not trivial to set the system up, and even with all of the resources of a community biolab and the luck of a practically brand new Opentrons being donated, we still had a lot of manual, human-involved steps that we couldn't automate. Importantly, small things could easily ruin the process: coding bugs, robot mechanical and electrical issues, bacterial contamination, chemical contamination, forgetting to incubate or messing up the incubation temperature or timing, differences between batches of color-expressing bacteria, and proper storage of the bacteria and the agar petri dishes. The list goes on. And notice that we're overlooking the building and power infrastructure, the people and know-how, and things like the weather. There's a lot that we humans don't spend much time thinking about thanks to cultural knowledge and existing infrastructure, but which a super AI might have to figure out and get right! Imagine this process for automated bioengineering, or really for industrial production of anything. I can only imagine that even a super AI would take real time and resources merely getting started, let alone getting the whole process right.
Part 17: Super AI Bioweapon Counter Arguments
Of course, if a super AI figures out a frighteningly deadly, but surprisingly simple to manufacture bioweapon that even a resource-poor country like North Korea could easily create, then my argument here falls apart, right? I don't quite think it does, because currently there is no reason to believe that humans couldn't already figure such a thing out on our own, without a super AI. The AI-pilled have two chief arguments in this regard, which neither Aschenbrenner nor Suleyman really bring up, but which are:
Counter argument #1: super AI will be able to work through far more possibilities than humans have been able to. My counter to this is that existing machine learning algorithms already do this, including AlphaFold from Suleyman's own DeepMind. Oops.
Counter argument #2: super AI will be able to come up with new, far deadlier ideas that humans wouldn't be able to due to super AI being fundamentally different than human intelligence. This is a nicely vague, but smart sounding argument. How cool would it be for humans to be able to create what is essentially an alien intelligence? But as I'm showing in this blog post it remains pure speculation built on top of speculation. Two speculations don't add up to a truth.
The fact is, in the US we monitor and regulate the development of bioweapons already. Aschenbrenner and Suleyman prefer vague fear of wildly speculative capabilities, rather than a sober assessment of government monitoring and regulation in the light of realistic, near future AI capabilities.
Part 18: Will Super AI Advance Quantum Computer Technology?
Now, I understand that part of Aschenbrenner's argument is the assumption that an AGI, or especially a super AI, will be able to do things on a scale that humans can scarcely imagine and won't be bound by human intelligence and biological bodies. In this context both Aschenbrenner and Suleyman mention quantum computers, the implication being that super AI will have super computational power by advancing quantum computation technology itself (note: current state of the art quantum computers aren't too useful, so they are explicitly expecting AI to advance quantum computer hardware and software). Like AI, quantum computers are very much over-hyped in the public imagination. Physicist and quantum computing researcher Chris Ferrie wrote about this in a free book that I highly recommend reading: What You Shouldn't Know About Quantum Computers (it has a foreword by famous theoretical computer scientist Scott Aaronson!). I've also interviewed Chris Ferrie previously about his related earlier book Quantum Bullsh*t. At the end of the day, all of this is speculation, and we're talking about things that are seriously at the limits of engineering, physics, and computation as we know them, plus making wild leaps about what an AI will be able to effect in the real world. I would never accept speculating about something that directly contradicted what we know from science (e.g., I am comfortable claiming that super AI will never develop a free energy machine). The AI-pilled fantasy takes what we know about science to the extremes where reality blurs into speculation and mere science fiction, getting awfully close to the unbelievability of things like free energy conspiracies.
The fact is, we don't really know how useful quantum computers will be. It seems likely they will help us simulate some quantum physics models that are intractable on existing digital computers (due to algorithmic/computational limitations; there are genuine limits to what is computable). We will be able to break previously state-of-the-art kinds of encryption (they relied on certain mathematical operations being computationally intractable for digital computers; the latest encryption standards take this into account and are thought to be quantum-ready). But beyond that, everything is speculation. A quantum machine learning algorithm that itself produces useful and previously unknown quantum algorithms would be amazing! But we don't even know whether quantum computers will be particularly good for those kinds of computations, we don't even have a digital, non-quantum version of that kind of algorithm, and quantum computer hardware is still very much in its infancy... So. Much. Speculation.
Part 19: The US Government's Response to Nuclear Weapons Technology Versus AI Technology
This all leads me back to the example of nuclear weapons. Specifically, with nuclear weapons the US government at first lurched slowly into motion only after there was a consensus among physicists that a controlled fission reaction was possible. To my knowledge (I could be wrong!), there is no consensus among theoretical computer scientists concerning the capabilities of a purported super AI, much less a definition of what a super AI is or how exactly to achieve it in the first place (DM me, Scott Aaronson, if you know). One may be tempted to compare Aschenbrenner's speculation about super AI robotic, hacking, and bioengineering capabilities to when physicists couldn't rule out that a nuclear fission chain reaction might ignite Earth's atmosphere. Except by that point scientists had already demonstrated controlled nuclear fission based on rigorously defined models, backed by experimental evidence, and there were far fewer open questions about the mathematics of fission and the scope of how it worked. I don't believe we're anywhere near that with AGI, let alone super AI. We don't have models of learning and intelligence that we can use to determine how an AGI would behave or whether a super AI would be conscious, etc.
Additionally, nuclear weapons are fairly specific in their uses and how to use them. They can't make choices themselves. Saying the military should have well regulated access to nuclear weapons thus makes some sense if you have a level of trust in that government and military. Fair enough if you don't! But importantly, we're not being asked to trust the nuclear weapons themselves. With super AI, the questions of consciousness and agency make super AI very different from nuclear weapons. It is not even clear that we can just "use" a super AI. With nuclear weapons we can share military and commercial fission techniques with a range of allies and partners in international exchanges for the promise that they won't develop their own nuclear weapons programs. I'm not sure we'll be able to share AI workers, AGI, or super AI in that way. If the AI has agency and chooses what it works on and who it works with, then all bets are off. In that light it's possible to see that a secret US government project to develop these AI technologies might remain mostly secret and non-shareable indefinitely, or a system might be implemented whereby restricted access to the US's AGI is allowed only to allies and perhaps certain academic institutions and companies. That's a very different world from the anarchic, cyberpunk extremes envisioned by these authors.
Part 20: Will Super AI Super Spy on Us?
Another topic that surfaces in both Aschenbrenner and Suleyman is surveillance. Surveillance, especially online, is a topic near and dear to my heart. I've implemented marketing data aggregation for tech startups in Silicon Valley, capturing everything from a user's every movement through a website to data gleaned from their social media profiles, all to give us more granular and useful data on our users. That experience is the main reason I choose to use a de-Googled version of Android OS (a version of Android that has Google's APIs removed: for example /e/OS or GrapheneOS) on my smartphone. Well, that and my experience working at a cybersecurity startup and reading books like The Age of Surveillance Capitalism by Shoshana Zuboff.
As with robots, hacking, and bio-engineering, Suleyman is far too inaccurate, glib, and vague about what surveillance might be like under AI. Or even what it's like today: according to Suleyman, London is the same as China in terms of the amount of surveillance. The reality is that surveillance is more complicated than that, and pulling together numerous disparate sources of information and synthesizing them is not trivial. In places like the US there are complicated patchworks of laws and regulations at the federal, state, and local levels which govern how and by whom data can be collected, stored, transmitted, and shared or sold. Even the government buying sensitive data for the purposes of a law enforcement investigation is not as simple or straightforward as you might think in the US: When the Government Buys Sensitive Personal Data.
People in the US care about privacy, but also balance privacy with convenience. Our northern neighbors in Canada are similar. Consider Google's Sidewalk Labs project in downtown Toronto. For years Sidewalk Labs tried to implement a "smart city" in a development of a few blocks of Toronto's downtown. However, the behavioral, geo-spatial, and other invasive data collection that would enable the automation and statistical analysis driving the "smart" part of the "smart city" turned out to be above and beyond what residents were comfortable with, especially from a private company as opposed to trusted government bodies. In the end the project was canceled. You can read an interesting perspective here as well as an important research paper about it here.
In the MIT Technology Review article one person is quoted as saying that people in the US would be more open to Sidewalk Labs smart cities than Canadians. However, the fact that as far as I know the US has a grand total of zero developments that are comparable to the failed Toronto project speaks for itself. Suleyman again is trying to scare us about technology that already exists and demonstrably hasn't become what he fears, because people do in fact have a voice and a measure of control, especially in democracies with elected officials and local communities.
Additionally, Suleyman's boogeyman of an ultra-surveillance super AI powered China leaves out any mention of what a country that has a more open and constructive attitude might achieve. Couldn't an open and well-regulated society that utilizes advanced technology end up out-competing and containing a China that is limited by its own need to control its citizenry? It's not that AI is without dangers, but rather that Suleyman is manipulating his narrative to cherry pick specific negative cases in order to scare us. At heart, all he's saying is that we can't let China get a super AI first, otherwise the world will be overrun by China and become a global dystopian surveillance state. But we've already looked at the arguments underpinning fears of China leapfrogging us.
Part 21: Can We Regulate AI?
What does the regulation of AI look like in practice, anyway? Well, the vast majority of existing regulation is focused on existing AI, not the speculative stuff we're talking about here. There's good reason for that. Government regulation of technology and products tends to be reactive rather than proactive, so that inventors and entrepreneurs can innovate. Taking a broad look at the government response to AI technologies in the US, we can see this focus largely on existing commercial uses. As evidence I present a couple of recent podcasts that discuss recent legislation: Lawfare Daily: Chinny Sharma and Yonathan Arbel on the Promises and Perils of Open-Source AI and
Lawfare Daily: David Rubenstein, Dean Ball, and Alan Rozenshtein on AI Federalism.
Let's dig into one particular example. What is the US Department of Homeland Security's (DHS) regulatory perspective and purview regarding threats posed by AI? Here's the DHS Assistant Secretary explaining on a podcast posted on July 15, 2024: Lawfare Daily: DHS Assistant Secretary Mary Ellen Callahan on AI Threats. The DHS has started an initiative to hire 50 AI experts in 2024. Callahan explains that they are eschewing alarmism while staying clear-eyed about AI threats. She talks at length about their new report concerning weapons of mass destruction specifically; DHS refers to them as CBRN threats: chemical, biological, radiological, and nuclear. She outlines a number of specific levers their agency can pull, many of which we've discussed here in this post. Importantly, they are also building the infrastructure to provide effective regulatory oversight as well as ensure an information advantage over threat actors. Callahan mentions the ability to provide off-ramps and guardrails so that threat actors can't use AI to achieve terrible ends, but instead are stymied and caught before they can harm anyone. This is precisely my earlier point about fears that AI will open a Pandora's box: there simply isn't much evidence that AI cannot be contained using traditional, incremental policy development.
On the DHS's website under DHS Publishes Guidelines and Report to Secure Critical Infrastructure and Weapons of Mass Destruction from AI-Related Threats they have a link:
To read the DHS report on Chemical, Biological, Radiological, and Nuclear (CBRN) threats, please visit: FACT SHEET: DHS Advances Efforts to Reduce the Risks at the Intersection of Artificial Intelligence and Chemical, Biological, Radiological, and Nuclear (CBRN) Threats.
This is all falling under President Biden's Executive Order "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence".
At this point we've become far too grounded in reality for the likes of Aschenbrenner and Suleyman. Let's speculate about super AI once again. The scary thing that Aschenbrenner hints at and Suleyman doesn't even mention is that a super AI, if it is ever achieved, and if it ever could do things effectively in the real world and had its own aims... could slowly and over a long period of time pretend to be doing good, helping humans and extending its reach into every part of American (even global) society and economy, perhaps over many generations. And then one day decide to take over a la The Terminator's Skynet. If that kind of scenario ever becomes realistic, then government ownership of super AI and treating it as an air-gapped national security issue should go a long way toward ensuring containment. Ensuring that a super AI isn't connected to every network system and database in the US shouldn't be an impossible feat, and it certainly is something the federal government could achieve if it owned the project. If the super AI's (or even AGI's) harms outweigh the benefits, or super AI turns out to be ungovernable, then we need to stop it fast. Government can more or less be trusted to do that, at least in ways that we'd never trust a private company or private group.
OpenAI was originally supposed to be developing AI purely for the benefit of humanity, through the goodness of Sam Altman's own heart. Yet its charter, just over four tweets long, allows it to make millions of dollars through partnerships with other private tech companies while board members play musical chairs and its corporate structure changes as needed to optimize making money. It's not optimizing public safety and the national interest ahead of profit. I'll go with the US government, thanks.
Part 22: Will Advancements in AI Tear Society Apart from The Inside?
On page 199 Suleyman finally starts addressing some of his society-level fears with more specificity. The specifics sound like they were cribbed from '80s and '90s cyberpunk science fiction. Anyone who has read William Gibson's Neuromancer or Neal Stephenson's Snow Crash will immediately understand what he is getting at. These anarchist-lite and libertarian-lite trans-humanist cyberpunk ideas have been in the popular imagination for at least that long, with significant antecedents predating them considerably. This is not to say that Suleyman is entirely wrong, but rather that it is well-tread territory and in a lot of ways is the world we already live in.
For example, he speculates on things like AI-aided schools that remove all wokeness from their curriculum, DeFi services that circumvent traditional financial systems, and Extinction Rebellion groups that... do something with AI and synthetic biology (a la the sci-fi movie 12 Monkeys, I guess? He doesn't specify). Notably, you may have heard of these things already. You know, because they already exist. Undoubtedly, AI can and currently is aiding people in doing these things. However, many people who live in places with stable and more or less trustworthy government services continue to ultimately rely on regulation and the rule of law. Few people who have been burned by fraud in crypto or education want to entirely abandon the stability that government, a strong traditional banking system, and academia provide in those areas, both in enforcing high standards and in protecting people from fraud. And it's not just government; it's also local communities, which people rely on in other ways.
A few months ago I wrote a review of Michael Muthukrishna's book A Theory of Everyone. In it, Muthukrishna suggests that the future city will be a decentralized autonomous city run like a startup. Somehow, childcare will magically fit the working parents' work schedule, even while the parents will be required to vote using blockchain smart contracts to determine on an on-going basis how the city is managed (on top of doing their jobs) and then the city will be managed autonomously based on its blockchain code, which is law. According to him, immigrants will need to perform well in a job interview in order to be allowed to live there. Only those with verifiable skills and expertise that the city needs will be allowed in. The monomaniacal focus on productivity and work is never explicitly mentioned, just assumed.
As astonishingly narrow-minded and privileged as that is, my main point in bringing this up is that people are currently trying to run companies and other organizations using tools from DeFi like decentralized autonomous organizations (DAOs), blockchain, and smart contracts. But at the end of the day they are far more complicated, and less reliable and useful, than the crypto-pilled would have us believe. If you look back through Matt Levine's Money Stuff newsletter on Bloomberg, you'll see numerous stories of DeFi and cryptocurrency projects that end up merely reinventing basic "traditional" finance, committing serious fraud, relying on speculation with little material support, and running into legal trouble and in some cases even running up against their own ungovernable nature. After all, who is legally responsible, who is an owner, and how do debts get repaid in a DAO where lines of code (and any of its bugs) determine how they operate? Are the people involved in DAOs considered shareholders and their tokens considered securities? More importantly for our discussion, how will AI solve any of these issues so that DeFi ends up replacing "traditional" finance?
DeFi is simply not something that a lot of people are embracing. Importantly, there is a well trusted and extensive traditional financial system that already exists, already solved many problems, and is much more cautious about introducing new systemic problems. It's nowhere near perfect, but it does have very mature regulatory and legal mechanisms to protect the parties involved. DeFi is nowhere close to this and has a terribly public habit of reinventing traditional finance in cumbersome, inefficient, and very costly ways. If financial firms could save money or increase profits by using blockchain technology you better believe they would. Suleyman, much like Muthukrishna, ignores all of the problems and argues that DeFi is a prime example of the new technology wave that is taking over. For Suleyman, this is also another example of how nation states are weak and that we should be fearful. To him, nation states can't even stop people from circumventing their traditional financial systems; proof that nation states are no longer powerful. However, these new technologies only support Suleyman's arguments (and Muthukrishna's different arguments) if they are actually replacing traditional structures and actually changing the balance of power. If we look carefully we find the situation is far less dramatic. The technologies are new and making a splash, but people are generally skeptical and pushing for the kinds of regulation and protections they normally expect. Equally importantly, governments are slowly but surely exerting their power.
If you're interested in reading about how old fashioned US law enforcement like the FBI or the IRS's criminal investigations unit use existing laws to effectively go after criminals utilizing cryptocurrency check out Andy Greenberg's excellent Tracers in the Dark. It's very important to realize, and significantly undermines Suleyman's argument, that even without the writing of new laws existing government agencies are able to limit and regulate the use of this new technology. They do in fact wield considerable power. The interesting use cases that are in the gray areas of law, such as DAOs, will likely either become regulated (in the case that significant numbers of people find them useful and actually participate in them) or else they will remain a bizarre curiosity that most people stay away from due to the potential for fraud and/or their complexities and inefficiencies.
Summary of Parts 18 to 22
It shouldn't surprise me, but Suleyman comes right out and glibly states why he is in favor of having unelected, private sector people running things: when he was 21 years old he saw how slow and bloated government was while interning for a government agency. That single experience caused him to decide that government couldn't achieve anything important. He says this on page 149 of The Coming Wave, right after describing how people are selfish and technology is unstoppable. If we take him at his word, then one reason he builds these fallacious arguments over 300 pages and attempts to undermine the modern nation state could be to justify his feelings from a single experience that he perceived to be bad.
It is very important to realize that people like Suleyman dismiss and ignore the role of government, law, and regulation and then, after they get you on board with their viewpoint, expect you to be afraid of how weak the government or society supposedly is. It's a simplistic, circular, and manipulative argument meant to hide their lack of evidence and cogent logic. Suleyman even starts a discussion about other organizational forms taking power from nation states by explaining what Hezbollah is (this is around page 198). Imagine if he didn't start out the discussion with the example of Hezbollah. Would his readers, mostly educated Westerners, be as attentive to his words or as fearful of what he says is coming? The fear he engenders more and more over the course of his book is meant to keep you looking to him for leadership. The 300 pages speak for themselves: he doesn't have actual answers, only the appearance of authority. The Suleymans of the world, who helped develop the very technology that enables our seemingly chaotic, cyberpunk-esque present, are desperate to stay in charge and remain relevant. Rather than encouraging people to realize the truth, that technology is a tool for humans for both good and bad, they want to shroud it in mystery and power so that we fear it. They must promote the contradictory narrative, because otherwise we'd realize that modern technology empowers us to have more control over our own lives and not to be controlled by people like Suleyman.
I already use AI to help me code more quickly and to produce social media content for my Skeptic of NYC Instagram and TikTok accounts. Truthfully, I would publish far less content for Skeptic of NYC if I didn't use AI, because I just wouldn't have the time. As AI tools become more commonplace, effective, and helpful I imagine all manner of new efficiencies and productivity will indeed be unlocked. On the other hand, the trends of increasing misinformation, disinformation, people learning about the world from isolated information bubbles, and living in isolated bubbles will continue. People right now are able to 3D print weapons and find victims to brainwash with conspiracy theories anonymously online. On the other hand, peaceful communities all around the world have been living in their own sort of bubbles for far longer than social media has existed.
These are choices people make. How we regulate, interact, and live with technology, including the most advanced AI, is what we make of it. That's why for years I've promoted government regulation of social media, surveillance capitalism, and the sharing/selling of user and behavioral data. Government is slow, tedious, and seems unconcerned with efficiency and productivity. But it is actually our ability to elect and interact with representatives, and to get laws and regulations implemented without private industry's singular goal of profit, that enables other goals to be achieved: the national interest, national defense, and those of state and local communities. Don't believe the Suleymans and Aschenbrenners who can only see a role for government in the most extreme national security case! Don't accept their narratives, which result in transferring power from elected governments to the private companies they financially benefit from!
TL;DR:
I anticipate that some might say that I make the same contradictions that an Aschenbrenner or Suleyman makes. For example, I point out that super AI is likely not to be as immediately dangerous as they claim it will be, but I also say that even an AGI precursor to super AI ought to be owned and regulated by the US government. In the name of clarity I will end this post with a breakdown of the arguments on both sides:
- They say that advanced level AI and maybe AGI is around the corner (circa 2027).
- This will be created by new techniques in machine learning and new, truly enormous data centers.
- In turn, autonomous artificially intelligent workers will be able to replace humans at real jobs. In particular, AI engineers could be replaced with millions of copies of artificially intelligent AI engineers (AI AI engineers).
- This will directly lead to an "intelligence explosion" and superintelligent AI (super AI) at which point all bets are off and humanity is changed forever.
- Everyone will want access to super AI, and it will allow everyone (or at least lots of powerful and some dangerous people, although they are never too clear on this point) to create super hackers, robot armies, and super advanced bioweapons, while the super AIs themselves will do who knows what, because they will be so advanced and smart that we won't even understand their language and behavior.
- The only thing humans can do is to try to achieve super AI as quickly as possible in the US so that the US government can attempt to control it and use it to trounce anyone else who would dare try to create it. We can't trust anyone else to control it, even though we ourselves might not be able to control it either. But also the government has to partner with private startups, because the government is too slow and incompetent.
- Advanced AIs may or may not be conscious, but don't worry about that.
My arguments are as follows:
- I concede that Aschenbrenner's arguments for significant advancement of AI, maybe even achieving AGI by 2027-2030, are plausible. Not likely, but within the realm of reasonable conjecture.
- I disagree that autonomous artificially intelligent workers will immediately be able to easily replace humans at real jobs. Still, it's not out of the question and if it does happen in a cost-effective manner, then industry will be very quick to replace humans. We need to be prepared to deal with this should it come to pass, but I concede that society and government both tend to be reactive. However, that's not the end of the world.
- The corollary to this, that the development of AI AI engineers will happen and cause an intelligence explosion, I think is even less likely to happen as quickly as they want us to believe. Coordinating millions of AI AI engineers in enormous data centers is not a trivial task and, in my opinion, will likely take longer to build, debug, and optimize than Aschenbrenner gives it credit for. And that's assuming that these AI AI engineers are independent, creative, and truly develop a super AI instead of, for example, merely a more advanced and more optimized AI AI engineer.
- The questions of consciousness and agency, and whether potential AI workers, let alone AGI and super AI, are deserving of dignity and respect like a human (or perhaps more so than a human in the case of super AI; see Eric Schwitzgebel's excellent book The Weirdness of the World for a genuine philosophical exploration of this and other interesting topics), are never even brought up by Aschenbrenner and Suleyman. However, these are topics of real impact that we actually need to address, because they directly affect legal and regulatory policy and other ways that humans will respond. They also matter when considering the extent to which AI workers, AGI, and super AI can be used in dangerous ways, or, if any of them are conscious or have agency, how the AI themselves might choose to act in dangerous ways. Of course, we need to think about the good they can do as well and balance our responses. To the Aschenbrenners and Suleymans of the world who speak of unimaginably dangerous and alien risks of super AI, I have to say that you aren't helping this discussion when you elide these serious assumptions and gaps in our current knowledge (about learning, consciousness, etc.).
- Yes, I completely agree with Aschenbrenner that the US government should work on AI workers, AGI, and super AI. I think that once the science of AI is mature enough and its future path to advanced AI is clear, the US government should then and only then treat it like the Manhattan Project in terms of importance and national security, but without the wartime urgency. I don't think any other countries are close to developing these things and there are plenty of ways to contain other countries' abilities to do so. We should do both: work on AGI and contain other countries.
- The AI Manhattan Project should be focused on developing and understanding all aspects of the AI they are creating: going slowly, developing kill switches, analyzing learning and agency to try to bridge our knowledge gaps on consciousness itself, etc. These are important things that need to be done with care and without a focus on profit: this should be about the national interest and advancing human understanding. If there is any urgency it is with the backing of the scientific consensus and US government defense and intelligence apparatus declaring a clear and present danger (as was the case with nuclear weapons). Not merely some investors and private employees with serious financial and personal influence at stake.
This is a complicated topic, particularly because so much of it is speculative and touches on genuinely important human concerns. However, I feel we need to grapple with these things conceptually, and soon. We can and need to control our own destiny.
As we've seen, equally as important as understanding these technologies is being careful about the stories and narratives we're buying into. I hope you find mine to be reasonable without oversimplification or important gaps. As Stefanos Geroulanos puts it in his recent book The Invention of Prehistory (which I very highly recommend):
The story of human origins tells us who we are, how we came to dominate this planet and each other, how we invented religion and then discarded it in favor of the gods of progress and technology. It supposedly reveals a million little things about human life, like why we desire and whom, how our emotions work, or how we love and care for others. These grandiose claims prompt far more questions than they answer. [...]
Today, we might reject violence and racism, yet we still fail to recognize the blinders we wear when we look at "humanity." And we might as well admit it: without the grand story of our origins, we simply lack a good definition for humanity. That was not always the case, and I firmly believe that we should have a definition of humanity that does not rely at all on an origin story. But instead we spend our time pining for an origin story because it allows us to admire our grandeur.
The AI-pilled talking heads, usually people who directly financially benefit from these new technologies, want us to look at humanity solely through the lens of the tools we create. Let's instead carefully analyze our present situation and eschew grandiose claims as we write our own history together.