Brayden Haws

December 3, 2025

AI Opportunity

I have spent the last 3 years ingesting endless content on AI and building as many things with it as possible. I have learned a ton (and had way too many missteps to count). But despite all the learning and progress, I keep coming back to three things when I think about AI. These are broad generalizations, and many are bucking these trends (and I could be wrong about all of this). But maybe you will find something useful in my scattered thoughts.

Where Are the Products?

Nat Friedman asked this in a Stratechery interview in March of 2023. Expanding on the thought, he said, “That said, I think even if the researchers stopped right here and they didn’t produce any more capabilities, it would take us something like five or ten years to digest just what GPT-4 can do and all the other state-of-the-art models can do, into products.” Models have only gotten more capable in the 2 1/2 years since this interview. But I still feel the same way. There are AI products everywhere. And every non-AI product now has annoying pop-ups telling you to try its new AI features. Even so, it still feels like there should be more AI in the wild by now. Ilya Sutskever shared a similar take on the Dwarkesh podcast, “We got to the point where we are in a world where there are more companies than ideas by quite a bit. Actually, on that, there is this Silicon Valley saying that says that ideas are cheap, execution is everything. People say that a lot, and there is truth to that. But then I saw someone say on Twitter something like, ‘If ideas are so cheap, how come no one’s having any ideas?’ And I think it’s true too.” We are doing a lot with AI, but there is a lot more we could be doing.

From my personal observations, there are two things driving this gap. The first is that many of us feel paralyzed by all the progress. We see the latest models from Anthropic and OpenAI released, and we are so impressed. But then we think that if this model is so powerful, the next one will be even better, and we wait for it. We keep waiting for the next big breakthrough and forget to look at everything that is possible now. The other issue I see is that so many legacy companies are trying to bolt AI onto their products, whether it fits or not. This leads to a lot of user distrust in AI, and adoption stalls. It also drives down morale for teams when the stuff they built doesn’t work or doesn’t get used.

The big thing I got from Nat’s interview was that we shouldn’t be waiting. And we shouldn’t be trying to fit AI into places where it doesn’t make sense. Instead, he was pushing for building more AI-native products: looking at industries that need automation and coordination, and thinking about products as AI-first. Meaning, if I had to start all over, how would I build the product on an AI foundation, rather than trying to apply a coat of AI paint to it? There is so much to explore; we all need to keep pushing here.

Quality and Taste Will Win

In the same interview series mentioned above, Daniel Gross shared, “We’re entering this sort of odd area of AI where things are getting pretty big. ChatGPT, we were saying, might have a billion users at some point in the next 12 months, and the sad thing to me, and actually the really alarming thing to me, is not the capability of the models or whether it’s connected to the Internet or not. To me, it’s the fact that the models, no one has really spent time making them sort of wonderful and fun in a Pixar way.” Daniel went on to explain his belief that quality and taste will be major deciding factors in which AI teams and products win. This is why we are seeing an explosion in the popularity of AI evals. If you have been online lately, you’ve seen countless articles and podcasts on evals as the next big PM skill. Recently, Kevin Weil from OpenAI said, “Writing evals is going to become a core skill for product managers. It is such a critical part of making a good product with AI.” So not only do we need to figure out how to use all the potential of the models, but we also need to figure out how to put real taste into the products and ensure the outputs are consistent and high-quality. I won’t say too much more about evals here, but I am all in on them, and wrote some more here.
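To make the idea concrete, here is a minimal sketch of what an eval is at its core: a fixed set of inputs, a check for each expected behavior, and a pass rate you can track as you change models or prompts. This is an illustrative toy, not any particular eval framework; the summarize() function is a hypothetical stand-in for a real model call.

```python
# A minimal product eval: fixed cases, behavioral checks, a pass rate.
# summarize() is a stand-in for a real model call (e.g. an API request).

def summarize(text: str) -> str:
    # Toy "model": return the first sentence of the input.
    return text.split(".")[0] + "."

EVAL_CASES = [
    {"input": "AI is moving fast. Many products lag behind.",
     "check": lambda out: len(out) < 40},        # output should be concise
    {"input": "Evals measure quality. They catch regressions.",
     "check": lambda out: out.endswith(".")},    # output should be well-formed
]

def run_evals() -> float:
    """Run every case and return the fraction that passed."""
    results = [case["check"](summarize(case["input"])) for case in EVAL_CASES]
    return sum(results) / len(results)

if __name__ == "__main__":
    print(f"pass rate: {run_evals():.0%}")
```

The value is less in any single check than in re-running the same fixed suite every time something changes, so quality regressions show up as a dropped pass rate instead of a user complaint.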

Bringing Personal Software to the Enterprise

My most recent pet peeve is opening any Google product at work (Docs, Sheets, Gmail) and getting 10 pop-ups telling me to use Gemini. I usually ignore them, but once in a while I try one, and I am always disappointed. For someone who loves AI, it is strange to say, but I hate it showing up in products that I use every day. What I love is using AI to build my own “personal software”: tools I need to help me do my job better, that are not that useful to anyone but me. I have built so many of these tools, and they have made me a believer in AI and AI-native tools. While I don’t think we’re gonna see big gains from AI being in our email, I do believe we’ll see a huge lift from people building their own tools.

The big question I am wrestling with here is: can we, and how do we, bring personal software to the enterprise? I’ve seen ambitious individuals turn Claude Code into their entire operating system or automate much of their life through n8n. But these are skilled, technical people. This isn’t possible for the average knowledge worker, nor should we expect it of them. So, how can we bring all the flexibility and power of these tools, but make them so easy that anyone can build their own AI software? I’m not sure it’s possible today, but I am interested to see where it goes in the future.

To me, these three threads speak to all the potential that is still out there. Even if we are in a bubble and an AI winter is coming, the technology isn’t going away, and eventually we will figure out all that it can do and how we can best use it.
