
"That tendency to oversimplify..." - "Wisdom is the capacity to extract the important information from the trivial, to see the forest in the tree"; that is the motivator behind our misshapen tendency to oversimplify and over-abstract. But the core drive of this process is our key survival and enlightenment mechanism. It is through discernment (and its byproducts: simplification, abstraction and models) that we achieve renaissances - Columbus smashing the egg. The problem, I would argue, lies in where we apply that drive.

"The Dark Ages weren’t dark because people lost techniques or science. They were dark because people lost people. It’s a lot of work to be human, and it’s work which must be kept up, or it begins to fade." - Cordwainer Smith

When we apply discerning models to our tools, we win; when we apply inhuman models to humans, we fail. In other words, the core problem here is that we need to remember that our tools and models were meant to aid us, our wisdom, our goals, our work, and not to replace us or to feed into abstract systems that will ostensibly work instead of us.

What is more, our models need to simplify the resources we rely on so as to empower _us_, removing trivial encumbrances to our wisdom and artifice. On top of that, our models need to be discerning, so that instead of oversimplifying they use filters made by _us_; they should extend our discernment, like a telepathic signal, onto datascapes whose size we cannot handle alone.

A hammer can do miracles in the experienced artisan's hand. Same for a map or a language model. I argue the gist of the problem is that our modern hammers have no handles for human hands.

Imagine a neural network with full access to every book that has ever been printed, every text ever published. At the press of a button, the simple app it powers lets the user search through all texts ever written. Say you type in "behavior modification": you can immediately see the flow of that term through history and gain functional insight that previously only a PhD could give you.

Compare that to a neural network that transforms a book called "Behavior Modification" into an easy-to-digest, 5-minute movie, all at the press of a button.

You and I know the difference in value between the two. AI never will. A lack of discernment.

Then of course there are the tougher questions: why is most scientific data locked away? Why the paywalls? But that is a question of smashing not eggs but human egos.

"how digital technology and the early Internet looked to us in the late 80s" - you could see, back then, something in the future that never blossomed. That thing you saw is not gone, though; it is merely submerged in a sea of irrelevancy. Every time we take two steps forward we are flooded by the jungle of novelty, the present shock of complexity, but the only way out of that deathly paralysis and into a future without future shock is to take a step back: simplify, filter with models, see the goals again with clarity, and then take another two steps with our own feet, not with the feet of models. Evolution has always driven us thus, and when we refuse, she punishes us. Think of how the aeolipile was crushed by the fear of replacing slave labour. Similarly, what you saw in the 90s in Cyberia will be clear again once we clear away the clutter.

The map is not the territory? What if the ultimate conclusion of that thought is that we are the only map that is part of the territory (Mandelbrot?). For man is still the measure of all things.

A model can build a thousand ships, better than Columbus could till the day he dies, but it had better build one good one. For only Columbus can take it to the New Atlantis.
