Michal Piekarczyk

May 25, 2024

The bag of words

There's this result that gets cited: you change memories as you recall them. The unreliable witness pulls off the telomeres of thought on each retrieval. But what if it's not that we modify memories, so much as that the representation of a memory is a high dimensional concept, and there's post processing that wraps it in context to produce the linear representation which is your thought?

I learned last year the likely reason some people read fast: they read in chunks. Speed-reading advice says to drop the linear narrator in your mind, raise your gaze from the page, less tunnel, more vision, and average the text as you read. English is more contextual than it used to be, according to Kevin Stroud, explaining on his history of English podcast: Old English had way more inflection, so you could pretty much jumble the words in a sentence without their meaning the words lose, as Yoda would say.

This is how large language models process information too. Next-token predictors spit out tokens one at a time, sure, but when an LLM "reads" it takes in everything at once, averaging, focusing with multiple attention heads.
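
To make that a little more concrete, here is a minimal sketch, in plain NumPy, of what "taking in everything at once" looks like: each token's new representation is a weighted average over all the tokens in the passage, and several heads compute their own weightings in parallel. The names, dimensions, and random projections below are purely illustrative assumptions, not any particular model's code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(tokens, n_heads=2, rng=np.random.default_rng(0)):
    """tokens: (seq_len, d_model) array of token embeddings."""
    seq_len, d_model = tokens.shape
    d_head = d_model // n_heads
    outputs = []
    for _ in range(n_heads):
        # Random projections stand in for learned weight matrices.
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
                      for _ in range(3))
        Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
        # Every token attends to every other token in one shot.
        weights = softmax(Q @ K.T / np.sqrt(d_head))  # (seq_len, seq_len)
        outputs.append(weights @ V)                   # weighted average of values
    return np.concatenate(outputs, axis=-1)           # (seq_len, d_model)

# Toy "sentence": 5 tokens with 8-dimensional embeddings.
sentence = np.random.default_rng(1).standard_normal((5, 8))
print(multi_head_attention(sentence).shape)  # (5, 8)
```

The point of the sketch is just the shape of the computation: the attention weights are a full seq_len by seq_len matrix, so nothing is read left to right, and the output for each token is literally an average over the whole context.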