Ricardo Tavares

February 1, 2026

ChatGPT wrote your code, so what else is missing?

The world has a lot more code now, since large language models (LLMs) can easily generate it. As Anthropic CEO Dario Amodei has predicted, AI may soon be writing 90% of all the code in the world. But does that mean we now have ten times as many apps? In part, there has been an increase in published software, but mostly we're seeing a sustained rise in personal software. That's code you get to do something just for you, without worrying about anyone else. And even to get that far, people have to learn something about software development that goes beyond looking at the code. So what are those other areas where an explosion of random code reveals bottlenecks? Why can't those be fully automated?



The obvious ones are quality assurance, handling unstructured data, and infrastructure that can scale. Others are documentation, security audits, localization, and observability. If this is software for the whole wide world to use, these all become increasingly critical. You can ask your favourite LLM about them and get some meaningful assistance. There are also specific services that will take your money in exchange for fixing each of these problems, as long as you can integrate them into a working pipeline.

Still, you're probably not fully automating how you test your app, as you can't trust an LLM to know every implied behaviour in how you imagine the thing working. Not to mention what you can only discover when you try it out. A lot of little decisions emerge from working directly with the code. If you remove the human element from that layer, you're forced to pay more attention to testing what comes out of it. Shipping an app to production is a cycle of continuous improvement.
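As a small hypothetical illustration (the function and data here are invented, not from any real project), an implied behaviour often only becomes explicit once a human writes it down as a test:

```python
# Hypothetical example: a display helper whose "implied" behaviour
# (names sort case-insensitively, and ties keep their original order)
# was never specified anywhere until a regression test captured it.

def sorted_display_names(names):
    # Python's sorted() is stable, so equal keys keep their input
    # order; lowercasing the key makes the comparison case-insensitive
    # without altering the names themselves.
    return sorted(names, key=str.lower)

# The test encodes the implied behaviour explicitly: "BOB" must not
# jump ahead of "bob" just because of its capitalisation.
assert sorted_display_names(["bob", "Alice", "BOB"]) == ["Alice", "bob", "BOB"]
```

An LLM regenerating this function could easily produce something that passes casual inspection yet breaks the tie-ordering, which is exactly the kind of unstated expectation only its human user would notice.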

That brings us to maintainability. Say you have something working today, but what happens when you need something else tomorrow? The good thing about something as malleable as software is that, in theory, you can change your mind an infinite number of times. As circumstances change, the code may need to do something else or simply expand its scope. Mistakes are easily made when deciding what data needs to be kept, and correcting them can mean the app now has to see the world in a different way.

This is actually a huge area of software development. We don't implement something before we need it, but we also want to keep some options open. Can't we just scrap everything and write the whole app again from scratch? Of course we can, even if that represents a significant cost. But usually we want our code to be a space that can absorb a reasonable amount of change and expansion. That's why it's said that code is written for humans, not just for machines.

Human developers account for how software evolves by making practical predictions. But LLMs essentially can only predict the next word in a sentence. Acclaimed computer scientist Yann LeCun argues that they capture only a fraction of the real world: the single abstract representation expressed in human language. Developers communicating with clients listen not only to what is being said, but also for doubts, inconsistencies, and blind spots in how the client understands their own domain.

Both the code and its data need to be modelled in a way that still makes sense as our world keeps moving. That's the hidden value of software development: the right predictions, the ones that make each subsequent change easier to implement. The next step in AI research is about unlocking how software moves across time and recursively gets better. Predictive AI is where we want this generative AI to go.

About Ricardo Tavares

Creates things with computers to understand what problems they can solve. Passionate about an open web that everyone can contribute to. Works in domains where content is king and assumptions are validated quickly. Screaming at phone lines since before the internet.

🐘 Mastodon  |  🦋 Bluesky  |  🛠️ GitHub


View From the Web