Jimmy Cerone

September 16, 2025

Link of the Day: The Visions of Neil Mehta

While this article is nominally about a venture capitalist (and an interesting one at that), what I find most fascinating here are two ideas about AI. First, I think AI will do more to scale our ability to learn than to do things for us. The best performers in the markets will use AI to surface data from previously untapped sources and dominate. And second, LLMs are airlines.

What Mehta was willing to discuss is why Greenoaks has largely stayed out of the model war, in which each player insists, almost daily, that AGI is around the corner. “They may evolve to become great businesses, like ChatGPT is, but in their first incarnation they are all kind of bad business models,” he said. “Huge capital investments up front to create this asset, the asset is worth some amount of money, which then depreciates over the course of 12 months, so you have to reinvest again 12 months later. It’s like the airline business in the 1980s; you invest in the best fleet, but then 12 months later the other airline has the newer models, and you don’t pay back the cost of your initial capital investment because the unit economics don’t work. That’s the AI model companies. They have no competitive advantage. If you create a brand like ChatGPT, or if you achieve so much scale that you capture all the capital and no one else can compete, maybe you can escape that. But it’s not obvious that everyone does.”

Scott Galloway also made this point, but I think Mehta does a better job laying out the case here. LLMs require giant capital expenditures that cannot be amortized over time. Is anyone still using GPT-3? The shelf life of those billions of dollars of investment is near zero.
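
To make the amortization point concrete, here is a back-of-the-envelope sketch. The capex, gross profit, and useful-life figures below are my own hypothetical assumptions, not numbers from the article; the point is the shape of the math, not the specific values.

```python
# Back-of-the-envelope payback math for a frontier model, using made-up numbers.
# None of these figures come from the article; they only illustrate the dynamic Mehta describes.

train_capex = 3_000_000_000        # hypothetical cost to train the model (USD)
useful_life_months = 12            # Mehta's point: the asset is stale in ~12 months
monthly_gross_profit = 150_000_000 # hypothetical gross profit the model earns per month

# Straight-line depreciation over the model's useful life
monthly_depreciation = train_capex / useful_life_months

# Profit after accounting for the depreciating asset, and total recovered before obsolescence
monthly_economic_profit = monthly_gross_profit - monthly_depreciation
recovered = monthly_gross_profit * useful_life_months

print(f"Monthly depreciation:     ${monthly_depreciation:,.0f}")
print(f"Monthly economic profit:  ${monthly_economic_profit:,.0f}")
print(f"Recovered over 12 months: ${recovered:,.0f} of ${train_capex:,.0f} invested")
# With these assumed numbers, the model recovers $1.8B of a $3B investment before it is
# obsolete, and then the cycle starts again with an even bigger training run.
```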

In short, the model companies are merely scaling the horsepower of transformer models rather than innovating on the underlying architecture. Insofar as that is a fair characterization, it would explain Mehta's skepticism about returns, competitive advantage, or anything else of the classical business interest that powers the Greenoaks machine.

In a previous post, I linked to an article that laid out the significant advances in AI. We have a loooong way to go before we squeeze all the juice out of transformer models as they exist today, but I'm less clear on how much better the models themselves can get. Most people overestimate how much better a technology can get and underestimate the new ways in which it can be applied.

A long while ago, Sam Altman said he wasn't scared of any other LLM competitor. He was scared of someone in a basement working on a totally novel approach. DeepSeek was a peek at this, but the real thing that keeps Sam up at night is described below:

For that, one would need to build a model that could learn a lot from very limited amounts of data, and generalize from first principles. Such is the threshold for superintelligence, which could make the kinds of connections and discoveries that would elude AGI.

If something can truly reason, the whole paradigm of sucking up tons of training data goes out the window, and the economics of the business change. Allegedly, that's what Ilya Sutskever is working on at SSI, a company Mehta is reportedly invested in. If I were looking for the next thing in AI, that's where I'd be looking.