Building Mind Out of Matter
What is a mind? How does a system of cells come to perceive, remember, and imagine? These questions once seemed unapproachable, philosophical riddles beyond the reach of science. But today, a new picture is emerging. Not a single discovery, but a growing understanding: the mind is not a substance or essence bestowed upon the brain. It is not something that can be separated, transferred, or stored like a file. Rather, the mind is an emergent process. It is what matter does when organized and active in the right way: a dynamic pattern, inseparable from the physical substrate that gives rise to it.
In this chapter, we’ll trace how that organization arises: how neurons, circuits, and memory systems give rise to a layered architecture that can perceive the world, construct a self, and participate in a shared reality, through patterns built atop patterns, encoded in the architecture of life itself.
Communication as Constraint: How the Neuron Was Born
Life becomes more adaptive when its parts can communicate. In a single-celled organism, responsiveness is limited to the signals that pass through its own membrane, inputs from the environment, or at most, simple cues from nearby cells. With the evolution of multicellularity, differentiation appears. One cell may specialize in locomotion, another in digestion, another in repair. This specialization brings power, but also dependence. Coordination becomes essential. Without communication, the differentiated parts fall out of sync and the whole structure collapses.
But communication is not free. It comes with costs, and most importantly, with constraints. Chemical signals diffuse through fluids, but they are slow and imprecise. Direct cell-to-cell contact is faster and more specific, but it comes with a hard limit: geometry. A typical cell, packed within a tissue, may touch only half a dozen neighbors. Its membrane is finite. Its range of influence is narrow.
If information is power, then cells are weak when isolated. It is in their ability to influence others, and to be influenced, that they gain leverage over the world.
This tension, between the need for more communication and the limits of cellular locality, set the stage for one of biology’s most astonishing inventions: the neuron.
Michael Levin’s work on bioelectric signaling reminds us that all cells in tissues talk to each other (Levin 2021). Neurons are not special because they alone communicate, but because their geometry liberates them from the local constraints of tissue, allowing them to broadcast signals across the body.
The neuron is a cell that sacrifices nearly everything for communication. It devotes its architecture to projection, its energy to signaling, its membrane to the precise transmission of information. With its branching dendrites and long axon, a single neuron can coordinate with thousands of other cells, some near, some astonishingly far. Where ordinary cells whisper to their neighbors, neurons speak across the body.
This innovation unlocked a new level of complexity. Neurons could transmit signals not only to adjacent cells, but across tissues and even across the entire body. More importantly, they could begin to encode the order of events: who fired first, who fired next. With the neuron, evolution gained a new kind of building block, one that could not just create structure, but encode sequence as well.
The Brain as a Contrast Engine
Even the neuron began as a humble cell, constrained by the same rules of chemistry and geometry as all the others. What made it different, what freed it, was a subtle shift in the architecture of its membrane.
Some cells evolved excitable membranes, thin layers of lipids studded with protein channels that open in response to voltage. These allow ions like sodium and potassium to rush in and out, creating a pulse of electrical activity. A signal. A difference that propagates.
Once this capacity evolved, it was elaborated into the action potential: a regenerative wave that travels down the length of the axon, creating a clean, directional, time-stamped message.
This time-stamping matters. In neural systems, it is not only which neurons fire that carries meaning, but when they fire. Timing determines whether signals converge or miss one another at a downstream neuron, whether rhythmic inputs synchronize into resonance, and whether sequences are perceived as patterns or noise. Like a ripple moving along a rope when you flick one end, the action potential regenerates itself as it travels, keeping the same size and speed no matter how far it goes. This consistency means that each spike carries not only spatial information but also temporal precision. In this way, the nervous system doesn’t just transmit signals; it orchestrates them in time.
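To make the role of timing concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook abstraction rather than a biophysical model. The threshold, leak time constant, and input weight are illustrative values I have chosen only to show the effect: two input spikes arriving close together summate and trigger an output spike, while the same two spikes spread apart in time do not.

```python
# A minimal leaky integrate-and-fire sketch (illustrative parameters, not physiology).
# Two inputs arriving close together summate past threshold; the same inputs
# spread apart decay away before they can combine.

def lif_first_spike(input_times_ms, weight=0.6, threshold=1.0,
                    tau_ms=5.0, duration_ms=50.0, dt_ms=0.1):
    """Return the time (ms) of the first output spike, or None if none occurs."""
    v = 0.0                                   # membrane potential, resting at 0
    for step in range(int(duration_ms / dt_ms)):
        t = step * dt_ms
        v -= (v / tau_ms) * dt_ms             # leak: drift back toward rest
        if any(abs(t - s) < dt_ms / 2 for s in input_times_ms):
            v += weight                       # an incoming spike bumps the potential
        if v >= threshold:
            return round(t, 1)                # threshold crossed: the neuron fires
    return None

print(lif_first_spike([10.0, 11.0]))   # 1 ms apart  -> inputs summate -> fires at 11.0
print(lif_first_spike([10.0, 30.0]))   # 20 ms apart -> leak wins      -> None
```

The same two inputs, differently timed, produce different outcomes. This is the sense in which the nervous system computes with time as well as with connectivity.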
To be meaningful, however, the signal must reach another cell. This led to the synapse, a specialized junction for transmission. Some synapses used electrical currents directly through gap junctions, but most relied on chemical signaling, releasing neurotransmitters across a narrow cleft to influence the next neuron.
At this point, neurons gained a second axis of richness: not only when to signal, but how. Some synapses were excitatory, increasing the chance that the next cell would fire. Others were inhibitory, suppressing it. What began as communication became computation.
The brain’s architecture came to rely on the balance of these two forces, excitation and inhibition, to create contrast. One of the most powerful patterns in this architecture is the on-center off-surround circuit: a network configuration in which the most strongly activated neurons are further enhanced while the surrounding alternatives are actively suppressed.
This circuit is especially prominent in the sensory systems, where it sharpens edges in visual space, highlights changes in sound, and filters touch. Variations of it also appear in systems that select actions, allocate attention, or resolve competing inputs. It is a general strategy the brain uses again and again: amplify the signal, dampen the noise (Grossberg 2021).
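As a toy illustration of the idea, the sketch below applies a single feedforward pass of center-surround weighting to a one-dimensional signal. It is not Grossberg’s shunting network, and the weights are arbitrary; it only shows how exciting each unit by its own input and inhibiting it by its neighbors’ inputs flattens uniform regions and makes an edge stand out.

```python
# A toy on-center off-surround pass over a 1D signal. Weights are arbitrary,
# and this is a single feedforward step, not Grossberg's shunting dynamics.

def center_surround(signal, center_w=1.0, surround_w=0.5):
    """Excite each unit by its own input, inhibit it by its neighbors' inputs."""
    response = []
    for i, x in enumerate(signal):
        # At the borders, treat the missing neighbor as a copy of the unit itself.
        left = signal[i - 1] if i > 0 else x
        right = signal[i + 1] if i < len(signal) - 1 else x
        r = center_w * x - surround_w * (left + right)
        response.append(max(r, 0.0))          # firing rates cannot go negative
    return response

step_edge = [1, 1, 1, 1, 5, 5, 5, 5]          # a step in brightness
print(center_surround(step_edge))             # uniform stretches flatten; the edge pops out
```

Run on a step of brightness, the uniform stretches come out near zero and a single strong response marks the bright side of the edge: the signal amplified, the sameness suppressed.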
The brain is a contrast engine, interpreting the world by filtering difference.
But evolution did not invent ion-channel waves from scratch. Long before neurons, other living systems had already discovered the same trick. To see the deeper continuity, we can widen the lens beyond the brain itself.
Proto-Bodies and Attractors: Levin’s Broader Lens
The brain is not the first place evolution discovered the power of electrical signaling. Long before neurons, bacteria were already using ion channels to coordinate with each other. Recent work by Arthur Prindle and colleagues at UC San Diego has shown that bacterial biofilms generate potassium waves, enabling thousands of single cells to synchronize their metabolism across the community. What emerges is a kind of “proto-body”: a larger agent with behaviors that no individual bacterium could achieve alone (Prindle et al. 2015; Prindle et al. 2017).
This is constructive emergence in its purest form: new entities with new powers arising from simpler parts. A biofilm is more than the sum of its bacteria, just as a brain is more than the sum of its neurons. The brain is not a direct descendant of bacterial biofilms, but it is a rediscovery of the same ancient trick: ion-channel waves as a medium for building larger, coordinated intelligences.
Of all the body’s organs, the brain has always seemed to me to be the most miraculous in its development. The neocortex, vast and uniform at birth, resembles a general-purpose learning substrate, waiting to be trained by experience. Other regions, such as the brainstem, the cerebellum, and the hypothalamus, come into the world already highly structured, with circuitry for balance and coordination, basic drives, breathing, heartbeat, and homeostasis largely in place. How can one organ be both rigidly predesigned and radically open?
Levin’s systems-level perspective offers a way to understand this. In his view, development is not a blueprint for every detail but a search through a space of possible forms, guided by higher-level attractors. The neocortex’s attractor is that of a general-purpose adaptive sheet; the brainstem’s attractor is that of immediate survival control. Looked at this way, the apparent miracle of brain development becomes less mysterious. The fine details remain overwhelming, but the higher-level view shows how systems can self-organize toward stable patterns without needing every step pre-specified.
This brings us to one of the most powerful ideas in neuroscience: that perception itself stabilizes through resonance.
Resonance: When Perception Becomes Real
One of the most powerful ideas in cognitive neuroscience, most fully developed by Stephen Grossberg, is that perception becomes stable through a process called resonance (Grossberg 2021). The brain does not passively receive signals from the senses. Instead, it compares incoming data against stored expectations, hypotheses, and memory. When these two streams, bottom-up input and top-down prediction, come into alignment, a loop forms. Activity stabilizes. Competing interpretations fade. The brain locks into a coherent perceptual pattern.
This bottom-up and top-down language reflects the brain’s layered architecture. Lower levels of the hierarchy are closer to raw sensation: light hitting the retina, vibrations reaching the ear. Higher levels represent increasingly abstract constructs, such as object recognition, spatial awareness, or emotional salience.
As signals flow upward, they are gradually transformed into more abstract representations. As expectations flow downward, they constrain and refine interpretation. What emerges from this two-way interplay is recognition. This is resonance, an agreement between levels of abstraction, a dynamic match that brings perception into focus.
Neuropsychologist Nicholas Humphrey proposed a conceptually similar idea in A History of the Mind, describing consciousness as arising from reverberant activity in sensory feedback loops. Rather than fading away as a transient signal, neural activity can cycle back on itself, creating a self-sustaining resonance that endows experience with duration and presence. Humphrey framed this in an evolutionary context, arguing that such loops transformed sensation into something biologically significant by giving it temporal depth (Humphrey 1992). Consciousness is not continuous; it occurs in moments. This temporal aspect of resonance may be what we think of as a moment of consciousness.
Grossberg, meanwhile, developed the idea of resonance into Adaptive Resonance Theory, providing a formal mathematical and neurobiological model of how resonant feedback loops enable stable perception, learning, and attention (Grossberg 2021).
Imagine looking at a line on a page. Edge detectors near the retina respond to local contrast. Those signals feed up to neurons tuned to detect lines. But the traffic doesn’t only go up. Higher-level neurons send predictions back down: “If this is a horizontal line, I should be seeing this specific pattern below.”
When the bottom-up signal and the top-down expectation match, the loop locks in. Competing hypotheses are suppressed by inhibitory surround. The resonance stabilizes.
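Grossberg’s Adaptive Resonance Theory formalizes this matching cycle. The sketch below is a deliberately stripped-down, ART-1-flavored toy for binary feature patterns: the choice rule, the vigilance value, and the learning step are schematic stand-ins for the full model, but they preserve the core loop, compare input to expectation, resonate if the match clears a vigilance threshold, and otherwise search for or create another category.

```python
# A stripped-down, ART-1-flavored toy for binary feature patterns.
# The vigilance value and the learning step are schematic stand-ins for
# Grossberg's full equations; only the match-resonate-or-search loop is kept.

def match_score(pattern, prototype):
    """Fraction of the input's active features that the prototype also expects."""
    overlap = sum(p & q for p, q in zip(pattern, prototype))
    active = sum(pattern)
    return overlap / active if active else 1.0

def present(pattern, categories, vigilance=0.75):
    """Return the index of the category that resonates, creating one if none does."""
    # Consider stored categories from best match to worst (a stand-in for the
    # competitive choice that an on-center off-surround field would make).
    ranked = sorted(range(len(categories)),
                    key=lambda j: match_score(pattern, categories[j]),
                    reverse=True)
    for j in ranked:
        if match_score(pattern, categories[j]) >= vigilance:
            # Resonance: the top-down expectation is refined toward the shared features.
            categories[j] = [p & q for p, q in zip(pattern, categories[j])]
            return j
    categories.append(list(pattern))          # nothing matches well enough: new category
    return len(categories) - 1

categories = []
print(present([1, 1, 0, 0, 1], categories))   # 0 -- first input founds category 0
print(present([1, 1, 0, 0, 0], categories))   # 0 -- close enough: resonates with 0
print(present([0, 0, 1, 1, 0], categories))   # 1 -- too different: a new category forms
```

Even in this toy, the vigilance parameter does real work: raise it and the system becomes pickier, splitting experience into finer categories; lower it and inputs get lumped together. What gets learned is decided by the quality of the match, not by the raw input alone.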
Perception doesn’t work like a video camera. It’s not passive recording, but rather, a mutual confirmation between levels of abstraction.
These resonances may stop at a surface, a motion, a feature, or they may propagate upward to the recognition of an object that matters to us. When that happens, we become aware. As Grossberg puts it, “All conscious states are resonant states, but not conversely.” In other words, not every resonance yields awareness, but whenever awareness occurs, it is grounded in resonance.
In Grossberg’s framework, only certain types of resonances, specifically those involving sensory or emotional features, enter awareness. Others, like those involved in motor planning or spatial mapping, remain unconscious but are no less essential to adaptive behavior.
Conscious perception, then, is not raw activity. It is the emergence of a self-sustaining pattern, a resonance that has survived competition, aligned across hierarchical levels, and been selected for its relevance.
Resonance Beyond Perception: Stabilizing Behavior
Philosophers sometimes worry that consciousness might be epiphenomenal (along for the ride with no causal effect of its own). The idea is that while the brain’s neurons and molecules carry out all the real work, consciousness is just a shadow cast by those processes. Sean Carroll pushes back on this view. As he has argued, the difference between doing something consciously and doing it unconsciously is not trivial. At the emergent level where human beings act and decide, consciousness clearly has causal powers. Whether or not I consciously notice something changes what I do, just as “tables” change our world even though the concept of a table does not exist at the level of atoms and molecules (Carroll 2025).
Grossberg’s framework helps explain why. Resonance does not merely stabilize perception into a coherent image of the world, it can also stabilize behavior. Unconscious processes can run learned routines with impressive efficiency, but they are brittle: they default to the past even when the present context calls for something different. Conscious resonances, by contrast, hold goal-relevant representations active long enough, and integrated widely enough, to override those habits (Grossberg 2021).
Consider driving. If your mind drifts, you may find yourself turning toward home on autopilot rather than continuing toward a new destination. In that case, unconscious routines have taken over. My own hypothesis is that when you consciously maintain the intention to reach the unfamiliar place, you are sustaining a resonance that extends higher up the hierarchy, into abstract, goal-oriented planning. Skill execution, by contrast, may rely on resonances that remain in lower levels of the hierarchy, never reaching the threshold that makes them conscious. Perhaps what practice really does is to reconfigure the hierarchy: structures that once required higher-level resonance are gradually consolidated into lower levels, where they can be executed smoothly without conscripting higher levels into their stabilizing resonances. This would explain why conscious effort is essential in early training but later fades away, replaced by fluid unconscious execution.
Why not make all resonances conscious, then? Because unconscious resonances are adaptive in their own right. They allow the brain to operate efficiently, conserving energy by restricting integration to narrower networks. They also allow behaviors to run more quickly and precisely than conscious deliberation would permit. Think about reaching for a cup of coffee: if you had to consciously coordinate each muscle movement, the action would be clumsy and halting. Instead, after practice, the behavior becomes a skill, executed smoothly by unconscious routines.
Psychologist Daniel Kahneman has famously distinguished between two modes of thought. Fast thinking, or System 1, is automatic, unconscious, and efficient: ideal for practiced skills and familiar contexts. Slow thinking, or System 2, is deliberate, conscious, and effortful: necessary when flexibility, reasoning, or novel situations demand attention (Kahneman 2011). Neither mode is sufficient alone. If all our actions were unconscious, we would be locked into habit, unable to adapt. If all were conscious, we would move through life slowly and awkwardly.
Consciousness, in this view, is resonance that climbs high enough in the hierarchy to recruit abstract, temporally extended, goal-relevant representations. It stabilizes not only perception but also memory, planning, and action, ensuring that behavior remains aligned with context rather than swallowed by habit. Conscious states are resonant states that matter at the human scale, where causality is meaningful.
Consciousness, then, is not an inert byproduct. It is the emergent stability that allows perception, memory, and behavior to align. By stabilizing high-level goals against the gravitational pull of habit, resonance makes consciousness both real and causally powerful in the lives we live.
Current (as of 2025) large language models seem to mimic what we think of as consciousness. Some theorists even argue they can pass a version of the Turing test. Their fluency, responsiveness, and breadth of knowledge can give the uncanny impression of consciousness. But a crucial distinction remains.
Large language models operate in discrete turns: a prompt produces a response, then the system falls silent until the next input. Nothing happens in between. Human brains, by contrast, never turn off. As animals embedded in a dynamic world, we must anticipate, plan, and act continuously. To do this, the brain sustains resonant activity: loops of prediction and confirmation that stabilize perception and action in real time. These resonances give experience its temporal flow: the sense of “now.” Without them, consciousness would collapse into isolated sparks of input and output, as it does in artificial models.
This distinction helps explain both the brilliance and the limitation of current transformer models. Language models show that hierarchical architectures trained on massive input can achieve extraordinary abstraction and creativity. But they also remind us what makes living brains different: not just hierarchy, but the resonance that keeps hierarchy alive.
Memory as Description: Reconstructing the Past with the Architecture of the Present
But what happens when the moment passes?
The brain does not store experience like a video. Instead, evidence suggests it retains structured representations that prioritize meaning over raw detail, highlighting what mattered. These are not perfect recordings but reconstructions shaped by relevance, context, and emotional weight. Often, they resemble narratives: who did what, where, and why. While sensory impressions may be part of what’s stored, the underlying structure seems to reside in higher layers of the hierarchy, layers that can later regenerate elements of the original experience by reactivating perceptual pathways.
This is not merely theory. Cognitive neuroscience increasingly supports the idea that episodic memory is reconstructive. Witness testimony is unreliable. Memories change over time. And neuroimaging shows that the same networks involved in perception and imagination are involved in recall.
We do not store the moment. We store the meaning of the moment, and reconstruct the rest. What’s retrieved is not a static file, but a dynamic pattern, often reactivated through cues that recreate the original resonance.
Language plays a key role here. Our ability to create structured, abstract descriptions may be what makes detailed episodic memory possible. Language is a compression tool, a way to encode and transmit complex structures. And perhaps, over evolutionary time, it was internalized, not just to plan or label, but to remember. We will explore language in more detail in the next chapter.
Memory in a Social Brain
Why would we evolve a memory system that stores experience as narrative?
One reason is social coordination. Humans are not solitary. We live in groups. To function in a social world, we must make sense not only of the world, but of ourselves. We need to tell stories that explain what we’ve done, why we did it, and what it means.
Episodic memory may have evolved not just to help us survive, but to help us belong.
Philosopher Philip Pettit argues that language did not evolve as internal thought made external, but as external communication made internal. First we talked to others, then we learned to talk to ourselves. Episodic memory may follow the same path. First we shared stories, then we began to store them. These shared and internalized stories didn’t just help us survive, they helped us become someone within our social environment. They became the raw material of identity (Pettit 2008).
The Self: Center of Narrative Gravity
And so we arrive at the self, not as a soul, or a locus, or a spark, but as a model.
Daniel Dennett calls the self the center of narrative gravity: a useful abstraction that organizes how we behave, just as the center of mass organizes how objects move (Dennett 1991). The self is not a thing we find inside the brain, it is a pattern we construct across time, across memories, across relationships.
It emerges from the interaction of resonance, memory, and narrative. It is the name we give to the protagonist of our internal story.
And it is social. We build our self-models in response to others. We shape our actions to make them legible. We ask: what kind of person am I? What will they think of me? We imagine how others model us, and model ourselves in turn.
We do not merely remember the past. We rehearse who we are.
The Network of Meaning: Compression at the Apex
At the lower and middle tiers of the brain’s hierarchy, information is organized by levels of abstraction: edges become surfaces, surfaces become objects, objects become concepts. But beyond the level of concepts, something new happens. The mind stops stacking ever-higher layers and begins folding relationships back upon themselves.
Through language, we recursively combine existing concepts to generate new ones, each phrase or metaphor a temporary scaffold for a fresh abstraction. “Quantum entanglement,” “social contract,” “neural resonance,” all are higher-order constructions born from strings of earlier ideas. If the hierarchy were extended linearly, it could, in principle, continue forever. But the brain economizes. Instead of building a new layer for every possible combination, it compresses these recursive relationships into a dense, cross-linked web: the associative network of meaning.
This network doesn’t sit above the hierarchy so much as contain it in compressed form. Every node represents a concept, but the connections encode how those concepts interact: causally, metaphorically, analogically. Because these relationships are recursive, the network can represent potentially infinite conceptual structures using finite material. A sentence is a path through this network, a traversal that momentarily lights up one thread among countless possibilities. Even the act of coining a new word simply marks the consolidation of one such path into a reusable shortcut.
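As a purely illustrative toy, the sketch below treats concepts as nodes and relationships as labeled edges; a “sentence” is a walk through the graph, and coining a word amounts to adding a shortcut edge that consolidates the walk into a single reusable link. The particular concepts and relations are invented for the example.

```python
# A purely illustrative toy: concepts as nodes, relationships as labeled edges.
# A "sentence" is a walk through the graph; coining a word adds a shortcut edge
# that consolidates the walk into one reusable link. Contents are invented.

from collections import defaultdict

edges = defaultdict(dict)

def relate(a, relation, b):
    edges[a][b] = relation

relate("neuron", "communicates through", "synapse")
relate("synapse", "shapes", "contrast")
relate("contrast", "sharpens", "perception")

def sentence(path):
    """Read off the relations along a path of concepts."""
    return "; ".join(f"{a} {edges[a][b]} {b}" for a, b in zip(path, path[1:]))

print(sentence(["neuron", "synapse", "contrast", "perception"]))

# Coining a term consolidates that whole path into one reusable shortcut.
relate("neuron", "ultimately sharpens", "perception")
print(sentence(["neuron", "perception"]))
```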
In that sense, meaning is both hierarchical and lateral. Hierarchical because it depends on layers of abstraction built from perception; lateral because it encodes relationships among those abstractions in a compressed, self-referential web. The network of meaning, then, is not the top of the hierarchy in the usual sense, it is the compact reflection of the hierarchy itself, capable of infinite recombination within finite space.
A Mind Built in Layers
We began with the simplest form of communication, between cells, and arrived at the emergence of identity. At each step, the brain added a new layer of structure: neurons began transmitting signals. Circuits emerged to create contrast. Resonance stabilized perception. Descriptions allowed experiences to be encoded into memory. Narrative gave those memories meaning. Self-models integrated behavior across time. And culture embedded each mind within a larger whole.
Each layer builds on the ones below, adding new functions, new constraints, new possibilities. Matter becomes structured. Structure gives rise to recognizable patterns. And through those patterns, meaning begins to emerge. Mind is not a mysterious essence hovering above or hidden within the brain. It is a layered architecture, each new layer solving the limitations of the one before.
In the next chapter, we’ll explore the role of language: the scaffolding that links thought to thought, person to person, and mind to culture. For just as the self emerges from perception and memory, culture emerges from selves in communication.