There's a growing temptation in the industry to treat AI as an excuse to abandon the practices that made software development good in the first place. Good abstractions, conceptual compression of complexity, clean architecture. The reasoning goes: "We have AI now, so we don't need to make code readable or maintainable. The AI will figure it out."
If you take that argument to its logical conclusion, why stop there? Let's drop interpreted languages entirely. Let's write everything in assembler. After all, if humans don't need to understand the code anymore, why bother with the abstractions we built for them? The answer is obvious when you put it that way. But somehow, when it's framed as "AI-first," people nod along.
Software development is a craft. It's an art. It's something beyond a job and a paycheck. And if we want it to survive as something humans actually want to do, we need to keep optimizing it for humans. Not despite AI, but alongside it.
Here's the thing people miss: optimizing for humans is also optimizing for AI.
Large language models were trained on billions and billions of data points created by humans. They learned to process information the way we do. When code is well-structured, well-named, and well-abstracted, AI understands it better too. When it's a mess, AI struggles the same way a new hire would.
There's a practical side to this as well. Languages designed for human happiness, like Ruby, tend to work remarkably well with AI. Not because of raw token efficiency, but because of what makes them pleasant for humans in the first place: convention over configuration, expressive syntax, and the ability to infer a lot from very little context. When a framework encodes strong conventions, AI can lean on those same conventions to understand and generate better code. The things that make Ruby enjoyable to read are the same things that make it easier for AI to get right.
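To make the expressiveness point concrete, here's a minimal, plain-Ruby sketch (the order data and names are invented for illustration, standard library only): descriptive names and common idioms carry most of the meaning, so a reader, human or model, needs very little surrounding context to follow it.

```ruby
# A tiny, hypothetical example: no framework, just the standard library.
Order = Struct.new(:customer, :total, :shipped, keyword_init: true)

orders = [
  Order.new(customer: "Ada",   total: 120.0, shipped: true),
  Order.new(customer: "Grace", total:  80.0, shipped: false),
  Order.new(customer: "Ada",   total:  40.0, shipped: true)
]

# Reads almost like the sentence that describes it:
# the total revenue from shipped orders.
shipped_revenue = orders.select(&:shipped).sum(&:total)

puts shipped_revenue # => 160.0
```

One line of intent, no ceremony. A framework like Rails layers strong conventions on top of that same expressiveness, which is exactly what a model can lean on.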
I keep thinking about a scene in Aliens. The android Bishop puts a crew member's hand flat on the table beneath his own, pulls out a knife, and starts stabbing between their fingers at impossible speed. He succeeds. The human is fine. But then Bishop nicks himself, because he didn't account for one unforeseen variable: the ship was traveling through deep space and was subject to random disturbances. A slight tremor at the wrong moment, and precision alone wasn't enough.
That's AI in software development today. Incredibly fast. Impressively precise. But here's what actually scares me: AI doesn't feel the tremor. It doesn't know when it's wrong. It will give you the incorrect answer with the exact same confidence as the correct one. And that's worse than just being brittle! Because at least when a system fails visibly, you can fix it. But how do you begin to fix a system that fails and doesn't know it failed? Good luck. That's where we come in. Humans notice when something feels off, even before we can explain why. We read context that isn't written down and make judgment calls with incomplete information. No amount of speed makes up for not being able to do that.
There's one more thing worth considering. AI is accelerating the rate at which codebases grow. It's now trivially easy to generate 500 lines where 50 would do. And without the people who know how to compress those 500 lines back into 50, you don't just have a maintenance problem, you have an exponential maintenance problem. The developers who understand abstraction, simplicity, and design aren't becoming less relevant. They're becoming more essential than ever. Because someone has to look at what the AI produced and say "no, this should be 50 lines," and actually know how to get there.
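As a toy illustration of that compression (the validation rule and field names here are made up, and real codebases are messier, but the shape of the move is the same):

```ruby
# The kind of expansion AI happily produces: one near-identical check per field.
def errors_verbose(user)
  errors = []
  errors << "name can't be blank"  if user[:name].to_s.strip.empty?
  errors << "email can't be blank" if user[:email].to_s.strip.empty?
  errors << "phone can't be blank" if user[:phone].to_s.strip.empty?
  errors
end

# The compressed version: the repetition collapses into data plus one rule,
# so adding a field means adding a symbol, not another branch.
# (filter_map requires Ruby 2.7+.)
REQUIRED_FIELDS = %i[name email phone]

def errors_compressed(user)
  REQUIRED_FIELDS.filter_map do |field|
    "#{field} can't be blank" if user[field].to_s.strip.empty?
  end
end

user = { name: "Ada", email: "", phone: nil }
p errors_verbose(user)    # => ["email can't be blank", "phone can't be blank"]
p errors_compressed(user) # => ["email can't be blank", "phone can't be blank"]
```

It's the same move as 500 lines down to 50, just small enough to see in one glance. Someone still has to know the move exists.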
The answer isn't to resist AI or pretend it's not transforming our work. It most definitely is. It's to double down on what makes software development work for the humans writing it and the humans using it. Better abstractions, not fewer. Clearer code, not more of it. Languages and tools that make developers want to sit down and build something.
AI is a remarkable tool, but it's still a tool. And tools don't get better by making the people who use them care less about the work.