Gary Marcus argues persuasively that markets are massively overpricing LLMs, and that this could lead to some very bad decision-making in the near term - https://garymarcus.substack.com/p/what-if-generative-ai-turned-out. He warns that things could get hairy if the US gets into an AI “war” with China:
> But what has me worried right now is not just the possibility that the whole generative AI economy–still based more on promise than actual commercial use–could see a massive, gut-wrenching correction, but that we are building our entire global and national policy on the premise that generative AI will be world-changing in ways that may in hindsight turn out to have been unrealistic.
On the other hand, I did a user churn calculation in seconds using OpenAI’s GPT Code Interpreter; it shaved at least 30 minutes of work off a task (I’m writing this blog post in the time I saved). This is a small example of the Jevons Paradox - https://hex.tech/blog/jevons-paradox-demand-for-insight/ - efficiencies can, in some cases, lead to more demand as the cost of a good falls.
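To give a flavour of what Code Interpreter produced for me, here is a minimal sketch of that kind of churn calculation. The user lists and function name are illustrative stand-ins, not my actual data:

```python
# A toy version of the user-churn calculation Code Interpreter can
# generate in seconds. The user lists below are made-up examples.

def churn_rate(start_users, end_users):
    """Fraction of users active at the start of the period who are
    no longer active at the end."""
    churned = set(start_users) - set(end_users)
    return len(churned) / len(start_users)

start = ["ana", "ben", "cho", "dev", "eli"]  # active at start of month
end = ["ana", "cho", "eli", "fay"]           # active at end of month

print(f"churn rate: {churn_rate(start, end):.0%}")  # ben and dev churned -> 40%
```

The point isn’t that this code is hard to write; it’s that describing the task in plain English and getting working analysis back collapses the cost of the whole loop.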
I think how this plays out will hinge on a few questions:
- How well can we get high-quality data into the hands of our staff?
- How much decision-making autonomy can we push down to our staff?
- How can we encourage staff to be more exploratory in their approach to the tools that they use to get their job done?
- How much will LLMs improve to continue to lower the barrier of effort for interacting with data?
I’m confident that LLMs will improve in all of these regards. But are we, as organisations, thinking as hard about how we will create an environment that encourages and enables our employees to take advantage of them?
Classifications from OpenAI:
- market pricing of llms
- concerns about the impact of llms in ai economy
- examples of efficiency and demand with llms
- trends in data access and autonomy with llms