David Heinemeier Hansson

November 25, 2025

Local LLMs are how nerds now justify a big computer they don't need

It's pretty incredible that we're able to run all these awesome AI models on our own hardware now. From downscaled versions of DeepSeek to gpt-oss-20b, there are many options for many types of computers. But let's get real here: they're all vastly behind the frontier models available for rent, and thus for most developers a curiosity at best.
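To be fair, getting one of these running locally is trivially easy these days. Here's a minimal sketch, assuming you've installed Ollama, that it's serving on its default port, and that you've pulled a gpt-oss-20b build under the tag "gpt-oss:20b" (all assumptions about your setup, not an endorsement of any particular one):

    # A minimal sketch: ask a locally running Ollama server for a completion.
    # Assumes Ollama is installed and serving on its default port, and that
    # the tag "gpt-oss:20b" exists in your local registry.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        json={
            "model": "gpt-oss:20b",   # assumed tag for gpt-oss-20b
            "prompt": "Explain Rack middleware in one paragraph.",
            "stream": False,          # return one JSON blob instead of a stream
        },
        timeout=300,
    )
    print(resp.json()["response"])    # the model's completion text

And that's the seduction: twenty billion parameters answering on your own machine. Genuinely cool. Just not frontier.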

This doesn't take anything away from the technical accomplishment. It doesn't take anything away from the fact that small models are improving, and that maybe one day they'll indeed be good enough for developers to rely on them in their daily work.

But that day is not today.

Thus, I find the reasoning spurious when developers evaluate their next computer on how well it can run local models. Because they all suck! Whether one sucks a little less than another doesn't really matter. And as soon as you discover this, you'll be back to using the rented models for the vast majority of your work.

This is actually great news! It means you really don't need a computer with 128GB of VRAM on your desk. Which should come as a relief now that RAM prices are skyrocketing, precisely because of AI's insatiable demand for memory. Most developers these days can get by with very little, especially if they're running Linux.

So as an experiment, I've parked my lovely $2,000 Framework Desktop for a while. It's an incredible machine, but in the day-to-day, I've actually found I barely notice the difference compared to a $500 mini PC from Beelink (or Minisforum).

I bet you need way less than you think, too.

About David Heinemeier Hansson

Made Basecamp and HEY for the underdogs as co-owner and CTO of 37signals. Created Ruby on Rails. Wrote REWORK, It Doesn't Have to Be Crazy at Work, and REMOTE. Won at Le Mans as a racing driver. Invested in Danish startups.