Things I’ve been reading this week:
- https://www.semianalysis.com/p/google-we-have-no-moat-and-neither - the cost of fine-tuning GPT-capable models is dropping from $100M to $600!!
- https://magazine.sebastianraschka.com/p/finetuning-large-language-models - a good overview of approaches to fine-tuning
- https://www.lorcandempsey.net/p/2fd4d55c-021b-488c-be5d-a016771b45fa/ - what might this all mean for libraries? Lorcan has an in-depth overview.
- https://doaj.org is 20 years old this week, well done DOAJ!
- Google has released Bard, with search; this tweet shows some of the things it can do: https://twitter.com/itsPaulAi/status/1656649454726856707
- https://www.forbes.com/sites/alexzhavoronkov/2023/02/23/the-unexpected-winners-of-the-chatgpt-generative-ai-revolution/?sh=1aa11e9e12b0 - Forbes thinks the BMJ should be one of the winners of the LLM revolution!
- Hat tip to Jon Treadway, who pointed me to the well-known economics blog https://marginalrevolution.com - I’ve been finding it fascinating!
- AllenAI is building a science-specific LLM - https://blog.allenai.org/announcing-ai2-olmo-an-open-language-model-made-by-scientists-for-scientists-ab761e4e9b76?gi=fbcdffe74e3b
- Snoop Dogg has a good take on AI - 24 minutes, 24 seconds into this: https://milkeninstitute.org/panel/14644/conversation-snoop-dogg-and-larry-jackson