My friend recently told me about something an author said about George Saunders's A Swim in a Pond in the Rain: "It's a book about writing, but like all great books about writing it's actually a book about thinking." Some people characterize writing as actual thought. Others make a less stark claim and say that when you write you are formalizing your thinking. Do I think that writing is actual thought? Do I think that writing is the formalization of my thought? And if so, is it the only way I formalize my thought?
TL;DR: no. I don't think writing is actual thought and I don't think that writing is particularly formalizing of thought, although of course it is deeply related to thought, logic, and argumentation and their relationship to language. Can writing be used as a tool to help formalize thoughts? Yes. And so can spoken language, all manner of tools, sensory input and output, and things internal and external to the mind and body.
While some academics and philosophers, particularly continental philosophers of the later 20th century, discuss the primacy of spoken language over written language or vice versa, these distinctions sometimes seem to miss the point: language is not itself thought. That seemingly minor distinction has significant ramifications which I will explore below.
INNER EYE, INNER MONOLOGUE
A recent study examined people who claim to lack an inner monologue, to lack the ability to visualize things in their minds (a condition known as aphantasia), to lack the ability to recreate sounds in their minds, or some combination thereof; it was conducted by researchers who themselves claim to fit into those categories. While it's unclear what exactly these conditions mean, they suggest that what we think of as thought is either not universal or else has a more elusive foundation even than language, pictures, sounds, etc. One person, I can't recall if it was a study participant or one of the researchers, said that when they write they don't see or hear the words in their mind; the words simply come out, and they can read and edit them afterward. The Skeptics' Guide to the Universe podcast episode #979 discusses the aphantasia study: https://www.theskepticsguide.org/podcasts/episode-979.
Revisiting the claims:
- Writing is actual thought: this example is inconclusive, but would seem to lend some credence to doubting that writing is actual thought.
- Writing is the formalization of my thought: this example is again inconclusive. If one doesn't know one's own thoughts, then how can one say?
BIAS
Biases of all types, such as motivated reasoning, stereotyping, oversimplification, and confirmation bias, are often not obvious to the person who holds them. Indeed it seems reasonable to think that all people are biased. It's not clear which biases might be subconscious, structural, etc, but in my experience bias is far more often implicit and overlooked than made explicit. So while writing may ideally enable editing and reflection, including the uncovering and correction of bias, it is unclear to me that writing always captures the bias present in thought or that we can always identify it despite our best efforts, at least at an individual level. Certainly, the act of writing by itself is insufficient, because editing the writing and reflecting on the writing could be considered additional activities beyond the initial task of writing.
However, using writing as a tool to clarify and expound upon a thought, editing it and perhaps uncovering and removing bias, is not necessarily going to change the original thought. The person having the thought could continue to have that same thought despite knowing that if they spend time analyzing it they will uncover a bias. In this sense, writing as a tool for learning and changing one's thoughts might be considered as distinct from writing as a tool for communication, storing knowledge, analysis, etc.
Revisiting the claims:
- Writing is actual thought: this example implies that writing can be incomplete when compared to thought or even the opposite, that writing can be more complete than thought.
- Writing is the formalization of my thought: this example suggests at least that there could be some limits to the formalization of thought through writing.
INTERPRETATIONS, ASSUMPTIONS, EXPECTATIONS
Bias leads into my next concern, which is that all text is open to interpretation, assumptions, expectations, and other unstated things. For example, one aspect of humanity is that we learn in a cultural and social way, so that we don't have to relearn everything our ancestors did and don't have to always be explicit about the context of what we mean. But that means that disagreements over meaning are commonplace and can vary greatly. This is an important point: our thoughts themselves could be inchoate to begin with. Human thought might leverage a biological version of something like the models used in machine learning, so that our brains are not attempting to store and compute things from scratch, tediously thinking through every step in a computation, network of memories, feelings, and facts we know, but rather relying on something like pattern matching, heuristics, and models (I explore this further in later sections below).
Despite that incompleteness, when we translate our thoughts into writing we still leave a lot of them out of the written result. For example, it seems reasonable to believe that we have some notion in our minds of the cultural and social context that informs our present thoughts, but which we'd never be able to include were we to write our thoughts out in excruciating detail, either because it is physically impractical, largely superfluous to our aims, or perhaps too difficult to fully express.
The philosopher Eric Schwitzgebel recently wrote a blog post about something similar. Referring to interpretations and the imprecision of words and thoughts, including for the very author of a particular piece of writing, Schwitzgebel writes, "Even assuming that such flat misuse or malapropism is rare in philosophy, in a smaller way, words like 'democracy', 'belief', 'freedom' are not entirely at each philosopher's behest. These words are neither exact in meaning nor complete putty in our hands. What we say is not precisely fixed by our intentions." You can read the full post here: https://schwitzsplinters.blogspot.com/2024/04/flexible-pluralism-about-others.html?m=1.
Revisiting the claims:
- Writing is actual thought: prima facie, in writing we don't include a lot of context which in fact informed the writing and the interpretation of it in thoughts.
- Writing is the formalization of my thought: in this case there could be the reverse happening. Thought is formalized through learning and then is simplified for communication when translated into writing.
COMPOSING MUSIC AND OTHER FORMS OF WRITING
We can take this even further. What exactly can be communicated in writing? Presumably an author is one who communicates in a written language, like I’m doing in English right now. But what about music? Does composing music count as writing? And if so, what relationship does it have with thought? Is a text written in English describing a piece of music closer, further, or just differently situated from the thought of it than the formal musical composition is? These questions lead us to consider if perhaps there are many different ways to represent thought: musical scores, written texts, mathematical formulas, but also diagrams, the large variety of art forms, etc.
Coming back to music specifically, I think it is at least possible that composing music in notes written on a page could be both more formal and also less close to the original thoughts when compared to, say, a live performance of the music. Yet a digital recording of a live performance offers some of the same or at least similar conveniences as the written composition: portability, ability to edit, persistence through time, etc., while simultaneously offering a representation of the music which could be far closer to the composer's original thoughts. The recording does not require the listener to do anything more than listen to the sounds and perhaps reflect on them. The written composition, or for that matter a written description in English, requires far more interpretation and creativity on the part of the reader in order to recreate the sound in their minds (if they have that ability at all). Conductors and musicians become famous for their interpretations of well-known pieces of music. Musicians, composers, and others often disagree about how a piece of music they've heard should be written.
Revisiting the claims:
- Writing is actual thought: if written music were the same thing as musical thought, then we would expect far less room for interpretation of written music and descriptions of music.
- Writing is the formalization of my thought: in this case we have an example where formalizing thoughts doesn't necessarily get us closer to more original or clear thought. Writing can be an outline or starting-off point for thoughts and actions.
MY PERSONAL EXPERIENCE WITH MUSIC
When I first started listening to music with real interest and a desire to remember and recreate pieces in my mind, I found that I couldn’t recreate them. It took me concerted effort over a long period of time (my middle school years) to teach myself how to remember sounds, recreate them, and simulate them in my mind. To this day, when I seriously inspect sounds that I recreate in my mind they seem to be a simplified version of the real experience. Despite that, I can read music, talk and write about music, create music, etc.
Revisiting the claims:
- Writing is actual thought: here we have the opposite of writing being deficient in some way when compared to thought. If my thought is deficient in some way, then it could be possible to write more precisely about it. For example, I might not be able to precisely recreate a C# sound in my mind, but I might be able to perform a C# on an instrument, recognize it as a C#, and write the note on a score.
- Writing is the formalization of my thought: this recapitulates one of the arguments above: that perhaps it is my thought which is being formalized as I learn how to listen to and recreate music in my mind, which later enables me to engage more fully in expressing my thoughts on music in writing.
WORKS OF FICTION
David Foster Wallace wrote the lengthy and detailed (some say overly detailed) work of fiction called Infinite Jest, which is somewhat famous for having copious endnotes. Yet originally David Foster Wallace wanted Infinite Jest to involve the use of hyperlinks and eReader devices, not the final paper format the book was published in. When his publisher pushed back, he tried to compromise by including footnotes in the book instead of hyperlinks. Eventually bowing to his publisher's pressure, he agreed instead to the use of endnotes. It seems that one reason he originally wanted to use hypertext is that he wanted his writing to represent non-linear aspects of thought, including facts and tangents that inform or result from a narrative, more directly in the text, with more contextual connections. I don't know for sure, but things like that at least suggest it might be possible that language and storytelling have structural limitations in their current forms that are different from at least some thoughts. Tom Clancy supposedly said, "The difference between fiction and reality? Fiction has to make sense."
Revisiting the claims:
- Writing is actual thought: thought must be translated in some way into writing and can, at least in some cases, lose something in that translation.
- Writing is the formalization of my thought: if translating my thoughts into writing can cause something to be lost, then formalizing my thought in this way could be counterproductive because I don't know what was lost, I have to spend additional work adding back the lost thing, or the conventions of writing or even writing itself may be too limited to convey the lost thing.
SIMULATION
Some thoughts seem to be something like a simulation, within my mind, of some aspect of reality. This reminds me of video games with 3D visuals, because programming a 3D video game involves a physics engine: a simulation of the world or part of the world. That physics engine, whether aiming for realism or not, may not follow comprehensive scientific physical and mathematical formulas, but rather can rely on simplified math that gives the expected results (shadows, reflections, acceleration, etc) in a more concise and efficient way.
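To make the "simplified math" point concrete, here is a minimal sketch of how a game physics engine might move an object: rather than solving the equations of motion exactly, it nudges position and velocity forward one small time step per frame. The function names and constants are illustrative, not taken from any real engine.

```python
# Toy "physics engine": semi-implicit Euler integration, one step per frame.
GRAVITY = -9.8  # m/s^2, the only physics this toy engine knows

def step(position, velocity, dt):
    """Advance one frame: good enough to look right, far from exact."""
    velocity = velocity + GRAVITY * dt   # update velocity first
    position = position + velocity * dt  # then move using new velocity
    return position, velocity

# Simulate a ball tossed upward at 5 m/s for one second of game time.
pos, vel = 0.0, 5.0
for _ in range(60):              # 60 frames at dt = 1/60 s
    pos, vel = step(pos, vel, 1 / 60)
```

The result is close to what the exact formulas would give, and the per-frame arithmetic is trivial, which is exactly the trade-off the paragraph above describes: a cheap approximation that produces the expected behavior.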
A mathematical formula can be written down and understood, and this raises the same questions as above regarding composing music: e.g., are authors of written language describing a physical reality closer to thought or further away from thought than authors of, say, a mathematical physics model? But that is just the tip of the iceberg. For example, when I imagine myself doing something in the real world, which physics engine is the simulation in my mind based on? Is it based on elaborate computations similar to scientific physics, or is it some simplification like in a video game? Could I effectively explicate the simulation using only English writing or would it make more sense to use mathematics or something else entirely?
Revisiting the claims:
- Writing is actual thought: this is more than just another example of something that might be easier to think rather than try to explicate in detail in writing. In fact, we are getting closer to considering what thought is and that it is different than language in a fundamental way.
- Writing is the formalization of my thought: while this is another example of formalization in thought before ever writing it, I think it elucidates further aspects of this phenomenon. In particular, mental simulations that are based on (implicitly or explicitly) physics and mathematical models suggest that formalization in this manner could be a structural feature of the brain, of how we learn (with or without writing), and possibly passed down in our DNA (I explore this further in later sections below). Writing can undoubtedly contribute greatly to learning in a number of ways, but there appears to be significant formalization of thought without writing per se. After all, babies seem to learn a great deal about their environment before they can speak or read.
QUANTUM PHYSICS
Interestingly, there is a whole branch of physics which many experts believe doesn't have a suitable explication outside of physics nomenclature and mathematics: quantum physics. There are many books devoted to attempting to interpret what the mathematics and physical experiments mean, but which do not converge on a single specific interpretation (i.e., there is no scientific consensus on a single preferred interpretation). Indeed, many such books leave the interpretation open, suggest using all of them depending on the situation, or even suggest not bothering trying to interpret it at all (i.e., "shut up and calculate"; see my book review of Physicist Chris Ferrie's Quantum Bullsh*t here: https://world.hey.com/cipher/a-review-of-the-book-quantum-bullsh-t-6972ab3c). This would strongly suggest it is possible to think of something (e.g., mathematics, physical processes, etc) which cannot be easily translated to more descriptive written language or narrative such as is typical with English. Still, it remains unclear if this is a failure of language, culture, evolution, the structure of the brain, a temporary failure of imagination or a more permanent lack, etc.
Revisiting the claims:
- Writing is actual thought: so far in this blog post this is probably the clearest example of something that humans can think about (understand, reason about, learn, etc), but have a hard time writing about (interpretation, meaning).
- Writing is the formalization of my thought: this is perhaps the clearest example of something that at least some experts in the field actively advise against formalizing (interpretations of) in writing. Shut up and calculate!
COMPUTATION
Speaking of calculations, that brings us to computation itself. If we look at music as an analogy, we have composed written music which is transcoded into movements that produce sound, usually by a human making those movements with a musical instrument. Music might originate in a thought, just like a story or some English words can be a thought, but in this analogy there is a distinct difference between the written music and how it gets interpreted and performed. Written music as well as recordings of music offer many of the same conveniences of written language: permanence, replication, ability to edit, etc. But at this point in my argument it seems reasonable that thoughts of music may differ considerably from their written composition and that writing the composition by itself is not sufficient to get closer to communicating the thought or formalizing music beyond the convenience it offers as a tool.
Quantum physics seems like another clear example of this divergence between thoughts and writing about the thoughts. At what point does writing turn into mathematical formulas, computations run on a computer, or even physical experiments? In some sense, communication of data and error correction is what we are really talking about. Writing is merely one of many things that help us model aspects of reality and live life.
Computer programs are an interesting analogy. A computer program, say one written in the Python programming language, is translated into bytecode, then machine instructions, binary, and ultimately electrical and magnetic signals which are manipulated within the circuit structure of the CPU and other computer hardware components. This would seem to be an apt analogy to thought. A human reads a sentence in English and that gets transmitted from the eyes and translated into electrical and chemical signals in the brain which are manipulated within the neurons, other cells, and nerves.
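You can actually peek one layer down this translation stack using Python's standard-library `dis` module, which shows the bytecode instructions the interpreter will execute for a function. The `greet` function here is just a throwaway example.

```python
# Inspect the bytecode layer between Python source and the machine.
import dis

def greet(name):
    return "Hello, " + name

# List the bytecode instruction names the interpreter will actually run.
ops = [instr.opname for instr in dis.Bytecode(greet)]
print(ops)
```

The exact instruction names vary between Python versions, but the point survives: the English-like source and what actually executes are separated by layers we rarely look at, much as the paragraph above suggests for thought.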
In both cases (computer and brain), we tend not to be aware of what's happening at a low level, and indeed there is a significant gulf between the language (e.g., Python, English) and the resulting computation. Of course, there are exceptions to this, but I think we can agree that it is generally true. A Large Language Model (LLM) and a human brain can both in some sense produce writing, but the exact computational process, and therefore by extension the exact nature of the thoughts that underlie as well as inform and make up the final output, are not known exactly. Importantly, in the case of the LLM we can know the deep learning algorithms (though they may have hidden layers we can't see into) and we can choose the exact data inputs on which to train it. That is a level of control and knowledge that we don't currently have for the human brain. While I tend to agree with principles of scientific materialism and think that eventually we will have fundamental knowledge of how the brain and consciousness work, the jury is still out (for example, see my blog post about Schwitzgebel's book The Weirdness of the World: https://world.hey.com/cipher/how-to-think-about-consciousness-in-the-age-of-artificial-intelligence-a-review-of-schwitzgebel-s-d435ff4b).
So what exactly are machine learning algorithms doing anyway? The generalized, much simplified explanation of what they are doing computationally, at least for what is usually called "deep learning", is repeated matrix multiplication on vectors representing data (for example, the contents of books), combined with a loss function that defines a mathematical surface over the algorithm's parameters; training searches for a minimum of that surface. The parameters found at that minimum are more or less what is known as a model. It is essentially a mathematical formula through which you can run new data to see if it fits: a form of categorization that at least superficially mimics how humans categorize things, but with a complicated computational foundation that we engineered using computers expressly for this purpose. The learning part of it is the process of feeding the data through the layers of the algorithm in order to hone the model.
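The "finding a minimum of a surface" idea can be sketched in a few lines. Real deep learning does this over millions of parameters with matrix multiplications; the toy below uses a single parameter and a bowl-shaped surface purely for illustration.

```python
# Toy gradient descent: walk downhill on a one-parameter loss surface.

def loss(w):
    # A bowl-shaped surface with its minimum at w = 3.
    return (w - 3) ** 2

def gradient(w):
    # Derivative of the loss: tells us which way is downhill.
    return 2 * (w - 3)

w = 0.0                      # start somewhere arbitrary
learning_rate = 0.1
for _ in range(100):         # each step moves w toward the minimum
    w -= learning_rate * gradient(w)

# After "training", w sits very near the bottom of the surface;
# that final parameter value is, in miniature, the "model".
print(w)
```

Feeding data through the layers of a deep network does the same thing at vastly greater scale: the data shapes the surface, and the descent hones the model.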
The obvious question then is how does this relate to human learning and how we categorize things? Humans remain famous on Earth, amongst ourselves anyway, for being able to quickly learn about our environment from a young age and categorize the things we encounter. A human child might easily point out a real dog in the world after only having seen a cartoon representation of one. Yet, our best machine learning algorithms require enormous numbers of examples and significant fine-tuning. If human brains are finding the minimum of a high dimensional surface in order to develop a model of how to categorize things, we are doing it seemingly with far less input and maybe less computation than our computer systems. If that's not what we're doing in our brains, then presumably figuring out what exactly we are doing will help us create much more efficient and effective machine learning algorithms. I touch on this some more in the next section.
Notice also that with LLMs we have a new take on writing as a tool for formalizing thought. We can write to an LLM, read its reply, and use that information in our own writing and thinking. The end product, our final written work, is no longer representing only our own thought, but now also includes the LLM's replies and our thoughts and reactions to those. There is also a recursiveness here: we can feed our LLM-informed writing back into the LLM. But the real point here is that models like those found in LLMs could be a more fundamental aspect of thought and learning in general. In this light, language might turn out to be less fundamental.
Revisiting the claims:
- Writing is actual thought: in my estimation, thoughts likely have identifiable aspects that are more fundamental than language and difficult to explicate in languages such as English.
- Writing is the formalization of my thought: it seems likely that thoughts are already formalized or structured in some ways before they ever get turned into writing. This does not say much one way or the other, but rather provides additional layers to what it might mean to formalize thought.
ECORITHMS
I find it really interesting that we tend to forget or leave out the fact that computability seems to be a fundamental aspect of physical reality which has its own set of laws. The book I’m reading right now, Probably Approximately Correct by Leslie Valiant (published in 2013), talks about something called ecorithms which are algorithms for learning that are based on universal rules of computation. The implication is that life on Earth is able to evolve and adapt by following ecorithms, in some cases storing them, adjusting them based on new environmental pressures, and passing on at least some aspects of them in DNA. In building his argument for ecorithms Valiant first looks at the universal laws of computation and then uses that to attempt to define the limits of what is learnable. He comes away with two key assumptions, plus an observation:
- Learnable Regularity Assumption: "[Humans] are quite good, but possibly not perfect, at categorizing."
- Invariance Assumption: "The context in which the generalization is to be applied [of what thing fits into a particular category] cannot be fundamentally different from that in which it was made."
- Observation: Any operations we (e.g., our brains) perform on measurements used to categorize something should be polynomially bounded in the sense of computational complexity.
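Valiant's "probably approximately correct" framing can be illustrated with a toy learner. Suppose an unknown threshold on [0, 1] separates negative from positive examples; a learner that just remembers the smallest positive example it has seen will, given enough random samples, be approximately right with high probability. Everything here (the threshold value, sample count, seed) is an illustrative choice of mine, not from Valiant's book.

```python
# PAC-style toy: learn an unknown threshold from random labeled examples.
import random

random.seed(0)
TRUE_THRESHOLD = 0.42        # unknown to the learner

def label(x):
    return x >= TRUE_THRESHOLD

# Draw random examples and take the smallest positive one seen.
samples = [random.random() for _ in range(1000)]
positives = [x for x in samples if label(x)]
learned = min(positives)     # a simple learner consistent with the data

# The learned boundary overshoots the true one by only a tiny margin.
error = learned - TRUE_THRESHOLD
```

The learner is never exactly right, and an unlucky sample could leave it badly wrong, but with 1,000 examples it is almost certainly close: probably approximately correct.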
I haven't finished reading the book, so I don't yet want to comment on Valiant's ideas and arguments directly. However, I found one passage to be quite provocative:
I believe the primary stumbling block that prevents humans from being able to learn more complex concepts at a time than they can, is the computational difficulty of extracting regularities from moderate amounts of data, rather than the need for inordinate amounts of data. For example, the difficulty of discovering the elliptical nature of the orbits of the planets was not that the amount of data needed took hundreds of generations to compile, but that elliptical orbits as seen from Earth did not constitute a regularity that humans found easy to extract.
From this perspective, the underlying process of learning and information processing remains opaque even if writing helps us communicate and store knowledge. Maybe a better way to conceptualize it is that learning, and by extension thinking, has embodied aspects: the structure of the brain and the things that make it up, DNA, and the environment. These may contribute in significant ways to how and what we think, yet we rarely mention them, and probably overlook them entirely, when we communicate.
That leaves writing in the position of an imperfect tool for communication, and, at least to me, it's unclear how writing can facilitate learning beyond the obvious way of communicating ideas in a stored form. The act of writing might force some grappling with expression, editing, and clarification, but it's not clear that speaking, or thinking in language, can't do precisely the same thing, or that language itself is always the best tool. Can we leverage writing to directly change or add to our thought processes and ecorithms so that we can learn more and learn differently? I don't know.
Revisiting the claims:
- Writing is actual thought: Valiant's ideas in Probably Approximately Correct reinforce and help clarify the argument that thoughts likely have identifiable aspects that are more fundamental than language and difficult to explicate in languages such as English.
- Writing is the formalization of my thought: here again we don't have a new insight that could change things one way or the other. Rather, we have an elaboration on how our thoughts may be formalized by evolutionary processes independent of writing, and a gap in our knowledge of how writing can affect how we learn.
CONCLUSION
A lot of writing is informative and beautiful, but those are completely separate things from writing's relationship to thought. While there is clearly some relationship between writing and thought, I think there are a number of aspects to explore which are not clear-cut or precise. In particular, there are currently no precise answers to the questions of how thoughts get turned into writing, which thoughts get turned into writing, or even what constitutes a thought in the first place. If language is fundamental to thinking, then perhaps writing could be closer to thought than I am suggesting here. However, when I look at specific examples, such as those I outlined above, I am left with the impression that there is a lot more to our thoughts than our facility for language lets on. Writing undoubtedly can be used as a tool to formalize thoughts, but here too I am left wondering in what other ways my thoughts are being given form and shape beyond language, let alone writing specifically.
One fascinating thing Valiant explores in his book is the limits of what is learnable, both in a theoretical, general sense and specifically for humans as we are today. Based on what we've explored in this blog post, it seems reasonable to conclude that brains on Earth probably share some computation-based learning methods thanks to the process of evolution. It may also be true that humans have a special facility for language, as well as more neurons, and more highly interconnected neurons, than other species on Earth. Is our ability to think and learn little more than the result of [over-parameterized](https://medium.com/matrixntensors/beyond-the-bloat-over-parameterization-in-neural-models-42e47f6d2ddd) neuron-based computations (a machine learning term for using more parameters in a neural network than seems necessary), constrained by evolutionary pressure and inherited traits? Or are our brains more flexible in their learning methods, or more complicated in architecture, and what role does language play? What is the space of the learnable in our universe, and can the computational capabilities of our brains encompass that entire space or only a subset of it? I guess I should finish reading his book.
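To make "over-parameterized" a little more concrete, here is a tiny hand-rolled sketch of my own (unrelated to the linked article's examples): fifty parameters standing in for a single slope. Gradient descent still recovers the one-parameter function they jointly represent, even though any one weight is redundant.

```python
import random

rng = random.Random(0)
n_params = 50
# Toy over-parameterized model: predict y = (w1 + ... + w50) * x,
# i.e. fifty numbers doing the job of one slope.
w = [rng.uniform(-0.1, 0.1) for _ in range(n_params)]
data = [(0.1 * i, 2.0 * (0.1 * i)) for i in range(1, 11)]  # true slope is 2

lr = 0.01
for _ in range(2000):
    for x, y in data:
        pred = sum(w) * x
        grad = 2 * (pred - y) * x          # gradient of squared error w.r.t. the sum
        for i in range(n_params):
            w[i] -= lr * grad / n_params   # spread the update across all parameters

print(round(sum(w), 3))  # effective slope: converges to the true slope, 2.0
```

The individual weights end up at no particular values, but their sum, the only thing the function actually depends on, lands on the right answer. Real over-parameterized networks are vastly more subtle than this, but the basic puzzle is the same: many redundant parameters, one learned behavior.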