The Laws of Thought by Tom Griffiths
Most people acquire, almost by cultural osmosis, a rudimentary understanding of how the physical world operates.
Apples fall; planets orbit; if I jump off a building, I’ll accelerate at 9.8 meters per second squared until air resistance slows me; Newton has us covered.
Science, in its grandiosity, has corralled the physical world with imperial efficiency. It has mapped the genome, weighed subatomic particles, and photographed black holes. The physical world, once inscrutable, now appears understandable.
The internal world, by contrast, remains largely terra incognita: our mind's mechanisms of reasoning, inference, categorization, and logical computation are known to few and formalized by even fewer.
The Laws of Nature are familiar; the Laws of Thought remain largely inscrutable.
For a species capable of using mathematics to predict the oscillations of quantum fields, we remain perplexingly ignorant of our own cognition. We possess no canonical framework for curiosity, no prescriptive law for envy, no generally sanctioned account of how a child extrapolates grammar from the detritus of everyday speech.
Cognitive science, inaugurated in the mid-20th century with the ambition of rendering thought empirically tractable, has yielded formalism and devised experiments. What it has conspicuously failed to yield is consensus. Physicists may quarrel over interpretations, yet they cohere around the architecture of the universe…kind of. Cognitive scientists, by contrast, are still disputing the blueprint of the mind.
Is the mind a system of explicit rules manipulating symbols with bureaucratic precision? Is it a distributed network of units adjusting weights in response to data? Is it a probabilistic inference engine, perpetually updating beliefs under uncertainty?
Could mathematics, which charts the physical world with precision, also chart the landscape of our minds?
In The Laws of Thought: The Quest for a Mathematical Theory of the Mind, Tom Griffiths takes up this very quest. The book traces the long and uneven effort to apply mathematics—the tool that so successfully tamed the physical world—to cognition itself. If the Scientific Revolution delivered the Laws of Nature, Griffiths asks whether we are finally approaching a comparable account of the principles governing thought.
The story begins with the Scientific Revolution, when observation, experiment and mathematical formalism coalesced into a method powerful enough to explain phenomena from falling apples to celestial mechanics. Some thinkers, notably Hobbes, Descartes and Leibniz, wondered whether reasoning might be treated similarly—as a form of calculation. The ambition was clear: to render thought measurable, perhaps even computable. For centuries, that ambition outran available tools.
A breakthrough arrived in the 19th century with George Boole’s algebra of logic, which showed that reasoning could be rendered in symbolic form. From this flowed formal logic and, eventually, the conceptual foundations of computing. Yet only in the mid-20th century did scholars begin rigorously testing mathematical theories of thought against empirical data. The Cognitive Revolution transformed the mind from a philosophical curiosity into a scientific object.
Early models cast the mind as a rule-following system: cognition as symbol manipulation. Formal grammars, as developed in linguistics, showed how finite rules could generate infinite sentences. The approach was elegant…and incomplete. Human reasoning is often graded rather than binary; categories blur; learning proceeds with a speed and flexibility that rigid rule systems struggle to explain.
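The generative power of finite rules is easy to see in miniature. The sketch below is a toy context-free grammar, not one from the book; the rules and vocabulary are invented for illustration. Because a noun phrase can contain a prepositional phrase that contains another noun phrase, a handful of rules yields unboundedly many sentences.

```python
import random

# A toy context-free grammar (hypothetical rules, for illustration only).
# Finitely many productions, yet infinitely many sentences, because
# NP can recurse through PP.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "PP"]],
    "VP": [["V", "NP"]],
    "PP": [["near", "NP"]],
    "N":  [["dog"], ["cat"], ["telescope"]],
    "V":  [["saw"], ["chased"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively choosing productions at random."""
    if symbol not in GRAMMAR:              # terminal word: emit as-is
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for sym in production:
        words.extend(generate(sym))
    return words

print(" ".join(generate()))  # e.g. a sentence like "the dog chased the cat near the telescope"
```

Each run produces a grammatical sentence; the recursion in NP and PP is exactly what lets finite rules cover an infinite language.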
Two further frameworks emerged. Artificial neural networks replaced explicit rules with continuous representations and statistical learning. Given sufficient data, such systems can approximate remarkably complex functions. Modern artificial intelligence rests on this architecture. Yet its appetite for data is voracious.
Bayesian models offer a different perspective. Drawing on probability theory, they describe learning as rational belief-updating, shaped by prior assumptions. Humans, on this view, are efficient learners not because they passively absorb information, but because they approach the world with structured expectations. The central question shifts from “What rules govern thought?” to “What priors does the mind bring to experience?”
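A minimal sketch of that belief-updating, with numbers invented for illustration rather than drawn from the book: a learner deciding whether a coin is fair or biased starts from a structured prior (most coins are fair) and revises it by Bayes' rule as flips come in.

```python
# Hypotheses and a structured prior: the learner expects most coins to be fair.
prior = {"fair": 0.95, "biased": 0.05}

# Likelihood of heads under each hypothesis (illustrative values).
P_HEADS = {"fair": 0.5, "biased": 0.9}

def update(beliefs, heads):
    """One step of Bayes' rule: posterior is proportional to likelihood x prior."""
    like = P_HEADS if heads else {h: 1 - P_HEADS[h] for h in P_HEADS}
    unnorm = {h: like[h] * beliefs[h] for h in beliefs}
    z = sum(unnorm.values())                 # normalizing constant
    return {h: v / z for h, v in unnorm.items()}

beliefs = prior
for flip in [True] * 8:                      # observe eight heads in a row
    beliefs = update(beliefs, flip)

print(beliefs)  # the strong prior resists at first, but evidence wins out
```

The prior does real work here: a learner with no expectations would need far more data to reach the same conclusion, which is the sense in which structured priors make humans efficient learners.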
Griffiths presents these three traditions—rules and symbols, neural networks and Bayesian inference—not as mutually exclusive dogmas but as complementary approaches to a stubborn problem. Cognitive science has not achieved the theoretical consolidation of physics. Disagreement persists. Yet he suggests that recent work hints at convergence: after centuries of fragmentation, a more coherent account of the Laws of Thought may be emerging.
The book’s contemporary relevance is hard to miss. Artificial intelligence systems now perform tasks once deemed uniquely human, from strategic gameplay to fluent conversation. Their successes are impressive, yet they typically require prodigious quantities of data. Humans do not. Understanding why may illuminate both the promise of automation and its boundaries.
In the age of AI, this book is more than an intellectual history. It is a sober assessment of how far we have come in mathematizing the mind—and how far we have yet to go.
