This tweet thread is worth looking at:
https://twitter.com/random_walker/status/1681748271163912194
What this tells us, and what is worth paying attention to, is that when you build on top of LLMs that are outside your control, the model, or its fine-tuning, can move underneath you. If your strategies for making the LLM work depend on prompt engineering, they could stop working. Suddenly.
What would be great is if LLM providers gave a clear indication of when key components of the model or the fine-tuning change, but it's not in our gift to make that happen.
So we need to:
- continue thinking about how to run local versions of models.
- build observability (a sketch follows below), and be ready to change our strategies.
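If we can't see when a hosted model changes, we can at least try to detect it. Here's a minimal sketch of that observability idea: snapshot the responses to a fixed set of canary prompts and diff later runs against a stored baseline. The `call_llm()` wrapper and the canary prompts are hypothetical placeholders; swap in your provider's client and prompts that exercise the behaviours you depend on.

```python
# Minimal model-drift detection sketch. call_llm() is a hypothetical
# placeholder; replace it with your provider's client.
import hashlib
from datetime import datetime, timezone

# Illustrative canary prompts; use ones that probe behaviour you rely on.
CANARY_PROMPTS = [
    "Is 10777 a prime number? Answer only yes or no.",
    "List the first five presidents of the United States, one per line.",
]


def call_llm(prompt: str) -> str:
    # Stand-in for a real provider call (ideally at temperature 0, so
    # exact-string comparison is meaningful). This stub just echoes the
    # prompt so the script runs end to end.
    return f"stubbed response for: {prompt}"


def snapshot() -> dict:
    # Record responses to the fixed prompt set, keyed by prompt hash,
    # so later runs can be diffed against a stored baseline.
    return {
        "taken_at": datetime.now(timezone.utc).isoformat(),
        "responses": {
            hashlib.sha256(p.encode()).hexdigest(): call_llm(p)
            for p in CANARY_PROMPTS
        },
    }


def drifted(baseline: dict, current: dict) -> list[str]:
    # Return the prompt hashes whose responses changed since the baseline.
    # A non-empty list is a signal the model may have moved underneath you.
    return [
        key
        for key, text in current["responses"].items()
        if baseline["responses"].get(key) != text
    ]


if __name__ == "__main__":
    baseline = snapshot()  # in practice: persist this and rerun on a schedule
    print(drifted(baseline, snapshot()))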