I couldn’t agree more with Nate Jones’s take in “LLMs are bad at code and product.” It’s hard to write critically about LLMs without sounding overly contrarian or like you’re fishing for clicks.
When I hear claims that AI will replace a job or accelerate a task, I often counter: “So, what has YOUR experience using LLMs been like?” The response is usually somewhere between “I haven’t really used it” and “It looks credible, but I don’t trust it.”
It’s easy to believe in the magic when you aren’t using it daily, and equally easy to be disappointed when the output looks good but the code won’t compile.
A while back I started a side project for the purpose of testing AI tools across various aspects of building software. Recently I began sharing those notes, not to expose the gaps in AI, but to counter the claim that productivity gains from AI are immediate and automatic.
They are not, and we need to think critically about where these tools are effective and deliberately incorporate them into our workflows where it makes sense.
I love his perspective from the product side: “Chad can't give you the business judgement to say that a particular thing is the right thing to build now.”
I often feel the same way about architecture decisions, and regularly have debates about whether it’s better to build a PWA or a native app, or to use microservices or a monolith. My answer is always “it depends,” and LLMs aren’t at a point where they can understand that context without a human perspective.