This is part of the Graph Minds Notebook series
It is a strange thing: we acknowledge the generally social nature of our species, but resist acknowledging the specifically social nature of our intelligence.
Traditional accounts of human sociality focus on aspects like shared care of offspring, collectivized security from predators, game-theoretic cooperation around shared resources, “social contracts” to foster peaceable coexistence, and so on. In these accounts, collective thinking behaviors are treated as epiphenomena at best, and pathologies at worst.
We like the idea of dissolving individuality when it comes to playing together, fighting together, laboring together, feeling together, praying together, dreaming together, or dancing together — all behaviors that involve some intelligence in varied narrow senses, but are not primarily about thinking. But we are extraordinarily wary of the idea of thinking together, even though we obviously do a great deal of it (you’re doing it with me right now).
In my random readings on these themes over the years, I have hardly ever encountered a thought along the lines of “It is adaptive for humans to band together into collectives because it allows more powerful collective intelligences to emerge.”
One of the few examples is the variability selection hypothesis (Potts, 1998) that explains human intelligence as a response to environmental variability. From the abstract of the paper that introduced the idea:
According to the VS hypothesis, wide fluctuations over time created a growing disparity in adaptive conditions. Inconsistency in selection eventually caused habitat-specific adaptations to be replaced by structures and behaviors responsive to complex environmental change. Key hominid adaptations, in fact, emerged during times of heightened variability. Early bipedality, encephalized brains, and complex human sociality appear to signify a sequence of VS adaptations—i.e., a ratcheting up of versatility and responsiveness to novel environments experienced over the past 6 million years.
But this sort of argument is surprisingly rare, and even here, the association between increased general[1] intelligence and complex sociality seems almost incidental. In fact, the argument that complex brains evolved to navigate the problems of sociality (i.e., politics), sometimes called the social brain hypothesis, is more popular than the argument that they evolved to take advantage of collective thinking.
The popular line of thinking seems to be: I need a big brain because you might try to cheat me with your big brain while we’re huddling together to escape the lion.
It doesn’t seem to be: Brains need to be bigger to work together better, the way server chips meant for the cloud tend to be bigger than consumer device chips.
I am not interested in arguing about these evolutionary questions here, but it does strike me as odd and revealing that this line of argument is so poorly represented in our thinking about intelligence.
It almost feels like there is a sort of conspiracy to construct intelligence as a primarily individual trait, and hide the extent and depth of its social character from ourselves.
Idealized conditions of maximally collectivized security, material abundance, aesthetic experience, sentimental experience, and emotional experience are imagined as utopias. But idealized conditions of maximally collectivized intelligence, true hive-minds, are generally imagined as dystopias, as in the Borg[2] in Star Trek, or the Cybermen in Doctor Who.
Consider, for instance, the depiction of Zion in The Matrix, where the ideal collective is depicted as a rave — an orgiastic matrix of intelligence-suspending sentiment and sexuality that is presented as an antithesis to the titular matrix of humans-as-batteries-in-vats running a complex shared simulation together (aside: it might be fun to write a story that inverts those valences — Zion as the oppressive dehumanizing matrix, and the collective simulation-for-batteries as the escape to true freedom).
In fact, I can’t think of a single mainstream positive depiction of advanced collective thinking.
Humans dissolving themselves and vibing together through dance and music is good.
Humans thinking together is oppressive assimilation into Borg cubes.
Even the potential threat of competing cloud-scale silicon-based “superintelligences” provokes us mostly into imagining ways to augment individual, rather than collective, intelligence.
We do not typically imagine utopian stories about wiring a group of brains together into Borg cubes, using high-bandwidth brain-to-brain connections, to defeat evil AGIs that might be trying to turn us all into paperclip-machine fuel. That’s another fun story idea right there — Borg cube vs. AGI paperclip factory. It’s the King Kong vs. Godzilla matchup of the intelligence discourse.
It says something that the prospect of maximal collectivization of our own intelligence is far more threatening to us than enslavement by competing alien intelligences. The latter is merely oppression, but the former is assimilation.
This fear does not seem to stop us from appreciating the power of collectivized intelligence from a safe distance, however.
For instance, we are quick to notice effectively collectivized intelligence in other species.
We are awestruck by the coordinated hunting of wolves or orcas, yet spin our own collective hunting behaviors into tales of individual heroism.
We find the eusocial intelligence of insects like ants and bees particularly striking, since the swarm appears so much more intelligent than the automaton individuals. Yet we turn our own much more capable coordination mechanisms, such as markets, bureaucracies, and corporations, into cartoon antagonists for courageous rebels operating individually or in small bands of heirloom brains.
The personal computer was introduced to the global collective consciousness through the 1984 Apple ad that depicted its virtues in terms of an individual intelligence standing out against a backdrop of drone-like “IBM-mainframe assimilated” collective intelligence. And yet today, we mostly use “personal” computers to tap into vastly more powerful cloud computers running services like search.
For some reason, we are reluctant to entertain the possibility that a full expression of human intelligence is perhaps necessarily and intrinsically social. And so Fear of the Borg holds us back from exploring the most promising directions for the future evolution of intelligence itself.
The primary modern manifestation is what I’ve previously labeled waldenponding — a fearful retreat from the technologically mediated modes of rich connection that would enable such maximal collectivization. The fetish object of the waldenponder is the individual brain doing “deep” work, with minimal collectivization and maximal egoism. The job of the collective is merely to recognize, correctly value, and appropriately reward the work of the individual, not participate in the doing. Shut up and buy my genius.
Doing “deep learning” in vast, pooled, individuality-dissolving intelligence collectives is apparently for machines and insects, not us. Even though we’re just coming off a couple of centuries doing exactly that in vast and hyper-specialized industrial economies. What might it mean to lean into that, instead of resisting and retreating?
In this new series, I want to unpack this Fear of the Borg, investigate how well-founded it is, and try to construct an alternative mental model of a maximally integrated social intelligence that addresses any well-founded aspects of the fear, but is not unduly constrained by mere unexamined egocentrism.
In other words, the question I want to tackle is: how can we learn to stop worrying, and love the hive mind?
I will call this notional maximally (but safely) collectivized intelligence a Graph Mind.
Maybe the safe limit really is one individual waldenponding by themselves and there is no safe collective.
Or maybe there is nothing to fear from assimilation into Borg cubes.
I’m open to all conclusions here.
I first came up with the term in 2019, as an alternative to the clumsy “Global Social Computer in the Cloud,” or GSCITC, which I coined in my old Against Waldenponding post.
In that post, I was mainly focused on examining patterns of retreat from collectivized intelligence (waldenponding), but did not offer much by way of constructive proposals for building graph minds that work better than, say, Twitter.
I got started on such a proposal with my May 11, 2020 post, Superhistory, not Superintelligence.
I plan to fold the core ideas of both those posts, which are sort of like prequels, into this Graph Mind series.
A dozen themes that are on my mind, in no particular order, include:
Bureaucracies and markets as intelligences
Network effects as intelligence evolution processes
Wisdom/madness/conviviality of crowds (see the toy sketch after this list)
Human collective intelligence vs. distributed computing
Collective memory and narrative as computation
Brain-to-brain connection technologies
Swarm theory, but for bigger-than-ant brains
Vibes, moods, sentiments, and other collective pre-intelligences
Egoism vs. surrender tensions in collective intelligence
Collective intelligence and the experience of deep time
Deep learning/ML as a mirror of collective intelligence
Silicon futures vs. Neuron futures vs. converged futures
I may or may not hit all these themes, and there might be important ones I stumble upon and chase after as I develop this. This isn’t a table of contents or a plan. It’s a starter set of index cards.
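To make at least one of these index cards concrete before diving in, here is a toy sketch of the wisdom/madness of crowds theme, via the classic guess-the-number-of-jellybeans game, in Python. Everything here (the model, the `herding` parameter, the specific numbers) is invented for illustration and is not from either of the prequel posts. When agents guess independently, their errors cancel and the crowd average lands near the truth; when they herd on the running public average, the errors of early guessers get locked in as a cascade.

```python
import random

def crowd_estimate(truth, n_agents, herding=0.0, noise=20.0, seed=0):
    """Toy model of a crowd estimating a hidden quantity.

    Each agent blends a noisy private signal (truth + Gaussian noise)
    with the running public average of earlier guesses, weighted by
    `herding` in [0, 1]. herding=0 is the independent, wisdom-of-crowds
    regime; herding near 1 is an information cascade.
    """
    rng = random.Random(seed)
    guesses = []
    for _ in range(n_agents):
        private = truth + rng.gauss(0, noise)
        # The first agent has no public signal to herd on.
        public = sum(guesses) / len(guesses) if guesses else private
        guesses.append((1 - herding) * private + herding * public)
    return sum(guesses) / len(guesses)

if __name__ == "__main__":
    truth = 100.0
    for h in (0.0, 0.5, 0.95):
        estimate = crowd_estimate(truth, n_agents=1000, herding=h)
        print(f"herding={h:.2f}  estimate={estimate:7.2f}  "
              f"error={abs(estimate - truth):5.2f}")
```

The point of the toy is just that the same aggregation machinery can produce wisdom or madness depending on a single coupling parameter, which is roughly the design space a good Borg would have to navigate: how much should individual brains defer to the running state of the collective?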
The overall idea is to start with a fairly focused and perhaps somewhat alarming question — how to make a good Borg — and perhaps arrive at an interesting alternative conceptualization of intelligence itself.
One that might perhaps achieve escape velocity from tedious “race against the machine” and “AGI” discourses, which I find to be profoundly boring, ill-posed, and not even wrong.
One of the reasons I suspect there’s something important there is just how threatening Borg-like conceptions of intelligence are to modern humans. It’s communism! It’s turning ourselves into sheeple! It’s destroying freedom! It’s scarier than Skynet!
Anything that sparks such a strong derangement syndrome has got to have something interesting going on inside, right?
What is so bad about being jacked into a large collective intelligence?
After all, Seven of Nine seemed pretty happy about it, and unhappy to be wrenched out of it. Was she just in a cubical cult, or was she on to something before the damn Federation brainwashed her into the cult of weakly social-democratic individualism?
But maybe not. Maybe there’s no there there. Maybe we’ll find out that our intuitive suspicions are well-founded, and there is no such thing as a good Borg.
Or maybe we’ll come up with a good recipe for one.
Or maybe, in the process of poking at this idea, we’ll find out that it doesn’t hang together coherently at all.
Or maybe some other weird notion will hijack this series entirely.
We’ll see how far we get.
We are Graph Mind. We will explore.
[1] I mean “general intelligence” in a loose and weak relative sense here, since I don’t believe it exists in any strong sense the way IQ nerds and AGI worshippers believe.
[2] If I recall correctly, the idea of the Borg drew inspiration from Asimov’s idea of a planetary intelligence called Gaia (which presumably drew from James Lovelock’s Gaia notion). But it is revealing that flipping the valence from positive to negative led to a much more resonant bit of science fiction.