Rory

February 10, 2026

My take on AI's present and future

The thing that LLMs are best at is natural language processing, and at executing tasks that proceed straightforwardly from natural language processing. The thing LLMs struggle with is that, by and large, there's a limit to what natural language processing is good for. For instance, an LLM-powered search engine is better at interpreting what, exactly, a user is searching for, and that's valuable! But that doesn't make it inherently better at finding and aggregating results. If you're looking for something that's easy to find but hard to articulate, LLMs are great! And if you're looking for something that's a little trickier to hunt down, LLMs start to struggle.

Where they work best, in other words, is on tasks that are computationally fairly straightforward but that aren't easy to articulate in user-friendly ways. A little ambiguity there goes a long way! (It's why, I think, LLMs are probably here to stay in software development, though I'm unsure of just how big a role they'll wind up playing. Code itself is relatively logical and black-and-white; translating your intent into code, or figuring out how a particular language wants you to do the thing you want to do, can be opaque and frustrating. It's the perfect nexus for what an LLM is geared to do.)

Another example: even if you're not a music producer, you might be familiar with the sight of endless knobs and sliders on soundboards, amps, and controllers. When you're editing a sound wave, there are hundreds of different ways to make tweaks and adjustments, which you need because sound waves are highly organic entities, and algorithmic adjustments to them can go a long way if and only if you know exactly how to tweak the one thing that needs tweaking. In other words, it's a field that's difficult specifically because it sits in a blind spot: the kind of work computers struggle to do easily... and it's possible that an LLM could start to translate vague, nebulous user language into specific, effective results. (This already exists in a lot of software, to be clear—it's a thing that software already does! So LLMs wouldn't be revolutionizing anything here. They'd just be taking things a significant step forward.)

These are also, notably, examples where it's okay if an LLM returns the most average possible result. You want the most generic, straightforward execution of whatever it thinks you're trying to do! The value it adds comes from interpreting you, not from performing any kind of great, intelligent feat of its own. That's where its potential lies: it can universally interpret and translate the thing you're asking for into terms on which it can deliver a completely average result. So the question becomes: where is a so-so execution of an ambiguous request legitimately powerful? "Help me identify a coding function" might be a good example of that. "Show me how I might tweak a sound wave" might be another. It offers a new computing interface, one that's far more tolerant of user input than other interfaces are. (And it's not a "new" interface per se, because it's the same basic idea that something like Siri or Alexa already aimed to do. It's just a deeper, richer version of that.)

The thing that makes "AI" so neat is that it can deliver "oh neat!"-level results for a broad variety of tasks. But I'm not sure it'll ever deliver better than "oh neat!"—which is to say, I'm fairly confident that's a hard limit. Yes, it'll get better and better at being "oh neat!" for more and more complex tasks. But I think we've probably already seen most of what makes it neat, and while we'll develop cool new ways of using that to our advantage, the underlying technology is never going to become anything more than what it fundamentally is. The brutal truth is that that's not going to be enough for the companies that've invested in it to ever become profitable. The whole industry is going to crash, hard, and we're going to be left with a neat evolution of the computer that fails to make the world fundamentally different from what it already was.

(If I had to bet, it'll be less revolutionary than the smartphone was, and while I could see it being a bigger deal in the long run than social networks were, I'm not positive that that's the case, and even if it is, I don't think it'll reach that potential in the short term.)
