This piece is an adapted version of a much longer Team Human monologue. Listen to the whole thing here.
I was recently making a familiar argument, at least for me, about how we need to abandon oversimplified 20th-century-style movement politics for something much more complex. It was a talk called “from the movement to the moment,” in which I explained how simple stories, where we keep our eyes on the prize and march toward a singular goal, end up justifying a whole lot of bad stuff along the way. I brought up the early World Trade Organization protests in Seattle and Occupy Wall Street, which confounded the press and politicians because they were made up of so many disparate groups and causes. The media didn’t know how to cover such protests, because they couldn’t be explained in a simple headline or two-minute TV report.
The digital age seemed to promise a more nuanced, collaborative, and real-time approach to movements, and to everything. Where television was about globalism and unifying dreams, the internet would embrace the true complexity underlying distributed solidarity. We’d stop telling ourselves stories, and start enacting social justice from the bottom up.
And while I was explaining all this, another set of stories and visions started percolating in the back of my mind. I remembered how digital technology and the early Internet looked to us in the late 80s. Some folks - mainly business types - were talking about the “digital revolution,” but for me and my posse, it was more of a “renaissance.” We wouldn’t simply replace one elite with another. We were not, as a revolution implies, drawing a circle. Rather, we were - as “re-naissance” suggests - retrieving and rebirthing old ideas in a new context. Just as the original Renaissance increased our appreciation for dimensionality through perspective painting and the circumnavigation of the spherical globe, our renaissance would do so through fractals and holograms, and by orbiting the planet with satellites.
The defining image of this early digital sensibility was the fractal. Until the fractal, we used Euclidean geometry to understand our world. This meant measuring the volume of clouds as if they were spheres, or the coastlines of islands as if they were polygons. In order to contend with something as chaotic as the ocean, you just superimpose a grid of latitude and longitude lines over it. Does the ocean have anything to do with that grid? Of course not, but it’s a way to describe a simple location without having to deal with all those complex wave patterns. You just ignore them. That tendency to oversimplify, and to relate things from the real world to idealized and abstracted shapes, informed everything we did, from monocultural crops to city planning to capitalism.
But with computers, we gained the ability to calculate in new ways. Benoit Mandelbrot, a mathematician at IBM (and my high school friend’s father, in fact), was trying to find new ways to predict the seemingly random interference on phone lines when he came up with the idea of using non-linear equations. Instead of writing a simple equation, like 2x + 1 = y, and coming up with an answer, he got the idea of using the feedback loops of computers to feed each answer back into the equation. (Feedback is what you get when you point a microphone at its own speaker. It listens to its own noise, and feeds it back into the system again and again until you hear that screech.) So instead of just getting an answer of y, he’d put that answer back in the equation as the new x.
(Most simply, that means if you started with x = 1, you’d get an answer of 2×1 + 1 = 3. Then you take 3 as the new x and get 2×3 + 1 = 7, then plug in that 7, and so on.)
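That feedback loop is easy to sketch in a few lines of code - a toy illustration of the iteration described above, not Mandelbrot's actual method:

```python
# Instead of solving y = 2x + 1 once, each answer is fed back in as the next x.
def iterate(f, x, steps):
    """Apply f repeatedly, feeding each output back in as the next input."""
    results = []
    for _ in range(steps):
        x = f(x)
        results.append(x)
    return results

# Starting from x = 1: 2*1 + 1 = 3, then 2*3 + 1 = 7, then 15, 31, ...
print(iterate(lambda x: 2 * x + 1, 1, 5))
```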
While the graph of that simple equation makes a straight line, Mandelbrot used more complicated equations and got much more complex graphs. These are what we called fractals - those paisley, psychedelic patterns that looked like fern plants, coral reefs, galaxies, and other natural phenomena.
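The text doesn't spell out Mandelbrot's equations, but the most famous example of this kind of non-linear feedback is the Mandelbrot set itself, where each answer z is fed back into z² + c. A minimal escape-time sketch (my own illustrative code):

```python
# For each point c in the complex plane, feed z back into z*z + c and
# record how long it takes to "escape" past |z| = 2. Points that never
# escape belong to the Mandelbrot set.
def escape_time(c, max_iter=50):
    z = 0j
    for n in range(max_iter):
        z = z * z + c          # the answer becomes the next input
        if abs(z) > 2:
            return n           # escaped: point is outside the set
    return max_iter            # never escaped: point is (likely) inside

# Crude ASCII rendering of the set's familiar silhouette
for im in range(12, -13, -2):
    row = ""
    for re in range(-40, 21):
        c = complex(re / 20, im / 10)
        row += "#" if escape_time(c) == 50 else " "
    print(row)
```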
We were all thrilled. It seemed as if the oversimplified Platonic ideals we had been using to impose colonialism, capitalism, and all sorts of other -isms on the natural world would be overturned, in favor of these new chaos-embracing modeling systems. Instead of turning to the cartographer to tell us that a particular point in the ocean was this many degrees latitude or longitude, we’d embrace the surfer’s mentality and contend directly with the complexity of the waves themselves. Armed with these non-linear feedback loops, computers would be able to model anything.
What we didn’t take into account, however, at least not immediately, was that these models were not reality. They were really just an awful lot like reality. Yes, fractals and other digitally rendered systems looked like ferns, coral reefs, galaxies, weather systems, and so on, but they were just models of these things. They were more complex than simple maps, but their apparent complexity camouflaged their artifice. They’re really, really cool. You can zoom in on any section and it will continue to render gorgeous, self-similar patterns. And these models can yield terrific answers, new ways of framing and seeing and understanding systems, of farming, or running societies…but they’re still abstracted and disconnected from reality. They can model a metropolis with a complexity closer to Sim City than a board game like Monopoly. But they’re still just models.
And we keep forgetting this. Every time computers move up a notch, or do something seemingly more complex, we begin to think “this time it really is going to do it.” The web seemed as complex as reality until the dotcom boom reminded us it was just a series of business plans. Then web 2.0 and social media were supposed to do it. Then ultra-fast trading, derivatives, and algorithms. Then it was the blockchain that would finally be able to record and instrumentalize every single aspect of reality.
Today, it’s artificial intelligence. These large language models are compositional techniques for rhetoric, yet many people think we’re creating life itself. We are not. We are really just creating another layer of abstraction: a way of mining all the rhetoric we’ve put out there and then synthesizing it into forms that simulate language without using any knowledge or thought. Real thinking is to an AI what waves are to a latitude line.
In an AI, there is genuinely no one home. It’s all model. No reality. It’s looking at everything we’ve ever modeled - or at least all the models that we’ve digitized (and that human beings have tagged for it) and then developing language around those models. It’s all a form of auto-tuning and auto-completion — of taking what has already happened and putting it back together in the most statistically probable way. As Alfred Korzybski would remind us, the map is not the territory; but neither is the model.
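To make the auto-completion point concrete, here is a deliberately crude toy - a bigram table, nothing like a real LLM's architecture, but the same spirit of taking what has already happened and reassembling it in the most statistically probable way:

```python
# Count which word follows which in prior text, then "complete" a prompt
# by always picking the most frequent successor. No knowledge, no thought:
# just statistics over what has already been said.
from collections import Counter, defaultdict

corpus = ("the map is not the territory and "
          "the model is not the territory either").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1     # tally what has already happened

def complete(word, length):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # most probable next word
        out.append(word)
    return " ".join(out)

print(complete("the", 4))
```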
Still, interestingly enough, AI’s facility with models may actually be of service to us humans as we strive to distinguish our models from our reality, our simulations from life. What better than an AI to distinguish between AI-generated disinformation and real news? Unlike humans, AIs can actually access all the models out there at once. The same features that allow AIs to source material from everywhere allow them to recognize whether something else has been sourced and composed from the same slush pile of prior creations.
The truly killer app for AIs in our current civilization may be to serve as digital narcs. They can inform on each other, revealing when something supposedly real is just one of their fellow AIs’ creations. But more importantly, they can help remind us when something is just a map, a model, a social construction rather than a given circumstance of nature. That’s the first big obstacle I’ve been harping on about for the past year or two - the thing standing in the way of our reaching coherence or functioning as a society. We are walking around mistaking too many things and institutions for given circumstances or conditions of nature when they are really just social constructions. From the money in our pockets to the fact that we need a car to get to work, or that we need to be employed at all, or that we need to pay rent to some landlord in order to be allowed to sleep in an apartment.
When we’re born into such a world, of course we accept such conventions at face value. It’s how things are. But the trick to moving beyond them is to alienate ourselves from them, and recognize their inventedness so we can “program” them differently. As the ultimate modeling machines, AIs may just be able to help us do that. Their job is not to create even more compelling models (as they are currently being deployed to do) but to recognize that the money in your pocket is not value, but a particular form of currency designed to keep a particular class of people in power. AIs can trace the origins of the social constructions that lead us to accept extractive and self-defeating approaches to work and life, while also recognizing the synthetic behaviors of all the other bots out there trying to fool us into adopting one destructive model or another.
Maybe, just maybe, AIs can become the next and last generation of feedback loop, exposing the false promise of the totalizing systems of the digital age, whose higher levels of complexity still don’t mean a thing to those of us living here on the ground attempting to flourish together through a myriad spectrum of value exchanges we don’t even know how to perceive or measure much less model. None of that stuff is in the models because none of it has even been consciously sensed, recognized, labeled, or recorded.
Our renaissance must not be an affirmation of the last one’s abstracted ideals at greater resolution or rendered with greater complexity. (Second verse, same as the first…) It must instead be a reclamation of the more experiential, ineffable, and irresolvable qualities of real life. We can’t fight over these created models and histories anymore. They cannot be resolved. They are not real. They are models. Games. Rhetoric. Approximations. They are figures, and never ground.
"That tendency to oversimplify..." - "Wisdom is the capacity to extract the important information from the trivial, to see the forest in the tree": that is the motivator behind our misshapen tendency to oversimplify and over-abstract. But the core drive of this process is our key survival and enlightenment mechanism. It is through discernment (and its byproducts - simplification, abstraction, and models) that we achieve renaissances - Columbus smashing the egg. The problem, I would argue, lies in where we apply that drive.
"The Dark Ages weren’t dark because people lost techniques or science. They were dark because people lost people. It’s a lot of work to be human, and it’s work which must be kept up, or it begins to fade." - Cordwainer Smith
When we apply discerning models to our tools, we win; when we apply inhuman models to humans, we fail. In other words, the core problem here is that we need to remember that our tools and models were meant to aid us - our wisdom, our goals, our work - and not to replace us or feed into abstract systems that will ostensibly work instead of us.
What is more, our models need to simplify the resources we need in order to empower _us_, removing trivial encumbrances to our wisdom and artifice. On top of that, our models need to be discerning, so that instead of oversimplifying, they use filters made by _us_; they should expand our discernment, like a telepathic signal, onto datascapes whose size we cannot handle alone.
A hammer can do miracles in the experienced artisan's hand. Same for a map or a language model. I argue the gist of the problem is that our modern hammers have no handles for human hands.
Imagine a neural network that has full access to every book that has ever been printed, every text ever published. At the press of a button the simple app it powers allows the user to search through all texts ever written. Let's say you type in "behavior modification" and you can immediately see the flow of that term through history and gain functional insight that only a PhD could give you.
Compare that to a neural network that transforms a book called "Behavior Modification" into an easy-to-digest, 5 minute movie, all at the press of a button.
You and I know the difference in value between the two. AI never will. A lack of discernment.
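The first of the commenter's two hypothetical tools - tracing a term's flow through history - is at bottom a search and counting problem. A sketch, with entirely invented stand-in data rather than any real corpus:

```python
# Given a (hypothetical) corpus of dated texts, count how often a phrase
# appears per decade: a tool that extends the user's own search, rather
# than summarizing the texts on their behalf.
from collections import Counter

corpus = [  # (year, text) pairs; illustrative stand-ins, not real data
    (1913, "watson proposes behaviorism"),
    (1953, "skinner writes on behavior modification and reinforcement"),
    (1975, "clinics adopt behavior modification programs"),
    (1977, "critiques of behavior modification appear"),
]

def term_flow(term, corpus):
    """Map each decade to the number of occurrences of `term`."""
    flow = Counter()
    for year, text in corpus:
        n = text.count(term)
        if n:
            flow[year // 10 * 10] += n
    return dict(flow)

print(term_flow("behavior modification", corpus))
```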
Then of course there are the tougher questions: why is most scientific data locked away? Why paywalls? But that is a question of smashing not eggs but human egos.
"How digital technology and the early Internet looked to us in the late 80s" - you could then see something in the future that never blossomed. That thing you saw is not gone, though, just submerged in a sea of irrelevancy. Every time we take two steps forward we are flooded by the jungle of novelty - the present shock of complexity - but the only way out of that deathly paralysis into the future, without future shock, is to take a step back - simplify, filter with models, see the goals again with clarity - and take another two steps with our own feet, not with the feet of models. Evolution has always driven us thus, and when we refuse, she punishes us. Think of how the aeolipile was crushed by the fear of replacing slave labour. Similarly, what you saw in the 90s in Cyberia will soon be clear again once we clean away the clutter.
The map is not the territory? What if the ultimate conclusion of that thought is that we are the only map that is part of the territory (Mandelbrot?). For man is still the measure of all things.
A model can build a thousand ships better than Columbus till the day he dies, but it had better build one good one. For it is only Columbus who can take it to the New Atlantis.