
The Laws of Thought
Tom Griffiths, William Collins (UK); Macmillan (US)
For almost 70 years, cognitive scientists have been fighting a civil war. On one side is computationalism, which claims that intelligence can best be explained by rules, symbols and logic that can be expressed in equations. On the other is connectionism, where intelligence comes from vast, connected networks based on the brain’s neurons, and no one component is intelligent, but somehow the system as a whole is.
That struggle has shaped everything from cognitive science to the artificial intelligence that is now transforming the global economy. This month, two new books enter the fray from different sides. For me, the standout is The Laws of Thought: The Quest for a Mathematical Theory of Mind, in which Princeton professor Tom Griffiths traces the long effort to formalize thinking in mathematical laws, and explains why modern AI is the way it is, and what the future may hold.
Griffiths frames the story around three competing and increasingly entangled mathematical ways of formalizing thought: rules and symbols, neural networks and probability. The first treats thinking as problem solving: break a task into goals and subgoals, then navigate it with formal steps. It drove early AI, but also showed why human common sense is so hard to bottle, as the number of rules a system needed soon spiralled into the tens of millions.
Neural networks dispense with explicit rules and instead learn from examples, building intelligence from many simple units whose interactions produce complex behaviour. This is (sort of) how humans work, but probability and statistics add a third ingredient: uncertainty. The brain does not have access to perfect information, and what makes us human is how we weigh evidence and update our beliefs.
For Griffiths, none of the three frames is enough. Realistic accounts of intelligence, whether human or machine, will mix all three. He makes his case historically, looking at how people have tried to map the processes of the mind using mathematics, using archives and interviews with scientists. As a result, his book is detailed and engaging, if a little heavy.
A different approach is taken by the neuroscientist Gaurav Suri and Jay McClelland in The Emergent Mind: How Intelligence Arises in Humans and Machines, in which they argue that the mind is an emergent property of interacting networks of neurons, biological or artificial, that can generate thoughts, feelings and decisions. It draws on McClelland’s history as a pioneer of connectionism.
The two books offer interesting and contradictory readings of the generative AI revolution. For Griffiths, a large language model (LLM) confirms his hybrid vision: it is impressive, but it hallucinates and stumbles, and a symbolic layer will be needed to fix it. For Suri and McClelland, the same LLM is a vindication: it is remarkable how much reasoning emerged from a network alone.
The problem with The Emergent Mind is not so much the thesis as the delivery, with a tone that alternates between folksy asides and clumsy phrasing. Explaining maths and science is always difficult, and neither book quite delivers, but The Laws of Thought comes closer, because telling the history of AI means focusing on what each framework can and cannot explain.
The Emergent Mind has a more provocative manifesto, where the authors see no fundamental barrier to more autonomous, goal-driven AI coming from purely neural architectures. As a result, it can feel less grounded in reality.
However, Griffiths’s book gives you a solid sense of the “languages” we have to describe thought and why the future may well lie in their messy overlaps.
Could that future even signal peace between the two camps?
Two other great books on machine intelligence

Algorithms to live by
by Brian Christian and Tom Griffiths
This is a lively, non-technical tour of how ideas from computing can illuminate everyday decisions, showing how an algorithmic approach can improve human decision-making. Co-written by Griffiths a decade ago, before the ChatGPT revolution, it remains relevant.

Rebooting AI: Building artificial intelligence we can trust
by Gary Marcus and Ernest Davis
Current neural networks may be impressive, but they are unreliable, this book argues, making the case for hybrid systems that recover the strengths of the rules-and-symbols approach – one of the three mathematical frameworks in Griffiths’s new book.
Chris Stokel-Walker is a technology writer based in Newcastle upon Tyne, UK






