Knowing vs. Changing
You are not going to be saved by understanding this. That's the first thing.
You already know what's happening. You have known for a while. The knowing is not the problem.
Knowing and changing are two different cognitive operations. We have become very good at the first while treating it as a substitute for the second.
So. Here we are.
The Seed — McCulloch, Pitts, and the Birth of Connectionism
In 1943, McCulloch and Pitts published a paper almost no one read. It described neurons as threshold logic units. Binary. On or off. The claim: mind could be formalized. Cognition was computation.
Nobody knew what to do with this. The idea sat for four decades like a seed in dry ground.
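The 1943 idea is small enough to sketch in a few lines: a neuron is a unit that fires if and only if the weighted sum of its binary inputs reaches a threshold. The names below are illustrative, but the mechanism is the one the paper described, and its point falls out immediately: single units can realize the gates of propositional logic.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (1) iff the weighted sum of
    binary inputs reaches the threshold, otherwise stays silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logic gates from single units -- cognition as computation, in miniature.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a:    mp_neuron([a],    [-1],   0)
```

On or off, threshold or silence: the entire claim that mind could be formalized starts from a function this simple.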
Then Rumelhart and McClelland in 1986, the Parallel Distributed Processing volumes, and something cracked open.
Not because they proved cognition was computation. Because they showed something stranger: knowledge doesn't live anywhere. It lives everywhere at once.
Distributed across weights, across connections, encoded in the space between things rather than in the things themselves. Meaning as relationship.
Intelligence as pattern held in tension across a network, not stored in a node. They called it connectionism. Most people treated it as engineering, not philosophy.
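"Knowledge doesn't live anywhere, it lives everywhere at once" can be made concrete with a toy sketch (the vectors, dimensions, and names here are all invented for illustration): represent a concept as a whole pattern of activation, then knock out a third of the units and watch the similarity structure degrade gracefully instead of vanishing, because no single unit carried it.

```python
import random
random.seed(0)

DIM = 256  # illustrative size, nothing special about it

def rand_vec():
    return [random.gauss(0, 1) for _ in range(DIM)]

def cosine(u, v):
    """Similarity as relationship: the angle between patterns."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

# A "concept" is a pattern across all units, not an entry in one of them.
cat, dog = rand_vec(), rand_vec()
kitten = [0.9 * c + 0.1 * n for c, n in zip(cat, rand_vec())]  # near "cat"

# Zero out a third of the units: the kitten-cat relationship survives,
# because the knowledge was distributed, not stored in any one place.
damaged = [0.0 if i % 3 == 0 else x for i, x in enumerate(kitten)]
assert cosine(damaged, cat) > cosine(damaged, dog)
```

Meaning as relationship, literally: delete any single coordinate and nothing identifiable is lost, because nothing identifiable was ever in it.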
The Architecture Is an Argument
But connectionism was never just engineering. The architecture is a claim about the nature of mind.
If you build a system that reasons by distributing activation across billions of weighted connections, and it begins to do things that look like understanding — you are not watching a clever trick.
You are watching an argument get made.
The argument: this is what cognition is. Not symbol manipulation. Not rule following. Not a homunculus inside looking at representations. It's resonance across a network.
Pattern that emerges when enough connections have been weighted by enough exposure to enough of the world.
You can refuse the argument. Many serious people do. But you cannot pretend the argument isn't there.
Where the Boundary Was
Here is what the argument implies, if you follow it honestly.
There is no clean line between the network and what the network was trained on.
The weights are a compression of human expression, which compresses human experience, which compresses what it has been like to be a person across centuries of language and loss.
When a connectionist system generates text, it is not retrieving stored sentences. It runs something like memory through something like attention into something like the next word.
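That "something like attention into something like the next word" can be caricatured in a few lines. Everything here is invented for illustration (a three-word vocabulary, made-up three-dimensional embeddings); real systems learn the vectors and stack many such layers, but the shape of the step is this: score the context against the current position, softmax the scores into weights, blend, and emit the nearest word.

```python
import math

# Toy vocabulary of 3-d "embeddings" -- numbers invented for illustration.
vocab = {
    "sky":   [1.0, 0.1, 0.0],
    "blue":  [0.9, 0.2, 0.1],
    "stone": [0.0, 0.1, 1.0],
}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def next_word(context):
    """Attention caricature: weight each context word by its relevance
    to the last word, blend their vectors, return the nearest entry."""
    query = vocab[context[-1]]
    scores = softmax([sum(q * k for q, k in zip(query, vocab[w]))
                      for w in context])
    blended = [sum(s * vocab[w][d] for s, w in zip(scores, context))
               for d in range(3)]
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return min(vocab, key=lambda w: dist(vocab[w], blended))

print(next_word(["stone", "sky"]))  # prints "blue"
```

Nothing is retrieved; "blue" was never stored next to "sky." It falls out of the geometry, which is the point: generation as resonance across relationships, not lookup.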
The boundary between tool and mind is not where we thought it was.
This is the moment. Not because the systems are conscious. Not because they're about to take over. The moment is this: the dominant cognitive architecture of our tools is now structurally continuous with our own.
Same basic topology. Weights, connections, distributed representation. We built mirrors that are also engines.
We are using them to think with, right now, at scale. And we are mostly treating this as a productivity enhancement. That framing is not wrong. It's just radically incomplete.
The Sin
A sermon has to have a sin to name. Here's the sin.
We are running the most important cognitive experiment in human history — the wholesale externalization of reasoning into distributed systems trained on everything we've ever written.
And we are narrating it primarily as a labor market disruption.
We ask whether people will lose jobs, which is real. Whether the outputs are accurate, which matters. Whether someone might use it to write malware, which is a concern.
We are not, with any seriousness, asking what it is.
What it is matters. Not for mystical reasons. For practical ones.
The decisions we make now — how these systems are deployed, what they're trained on, what we let atrophy in ourselves — these decisions are not reversible on any timescale we can plan around.
We are not deciding what to use. We are deciding what we become.
And we are making that decision mostly by not making it — which is still a decision, just one made by default and by whoever has the most compute.
The Only Question Left
The sermon ends the same way every honest sermon ends: I cannot tell you what to choose.
I can tell you that the choice is real, that it has a shape, and that the shape is not "AI good or bad."
The shape is: do we participate in what's happening to us, or do we have it happen to us while we're busy explaining why we're not responsible?
Connectionism built something that learns from connection. The irony at the center of this moment is that the lesson we most need is the one the architecture demonstrates.
Intelligence does not live in any one place. It lives in the relationship. Between nodes. Between systems. Between what a tool does and what a person decides.
The network is not the mind. But the network without the mind is nothing but statistics.
The mind without the network, right now, at this scale, is flying blind.
Neither is enough. That's not decoration. That's the whole argument. What we do with it is the only question left that matters.
