Is AI the next Dumbwaiter?
The demand for AI's unbridled growth is reactionary — a way of doubling down on the same old colonizing way of doing things. It doesn't have to be.
A couple of years ago, before today’s AI craze, I got into a conversation with the co-founder of one of the social media apps on your kids’ phones right now. This was back when I was still invited to inner sanctum tech bro retreats. And the guy recognizes me and comes up and says - “Hey, Rushkoff, man. I’ve been worried about you - what you’re writing could get you in trouble.” And I’m like, “What, from the tech bros? They like being poked at.”
“No,” he says. “I mean from the AIs. You’re writing such negative stuff about them. Aren’t you scared that when the AIs are in charge, they’ll look at what you wrote and come after you? Make your life miserable?”
“No,” I said. “I never really thought about it that way.”
“Well you should,” he replied. He goes on: “I’ve been super careful not to post anything at all about AI that can be construed as negative. I’ve redacted it all from my public comments, and don’t even include AI topics in my email.”
“Hmph,” I said. “But have you ever considered that if the future AIs are so smart, won’t they be able to infer how you feel by what you’ve redacted? Won’t they know that someone who has systematically obscured all references to AI from their posts is part of the segment that fears or even detests them most?”
His jaw dropped. “Oh fuck,” was all he could say.
I share this not just as evidence of how short-sighted our most successful tech bros are — incapable of thinking more than one step ahead to secondary effects. No, what’s interesting to me is that this tech bro is afraid of AI because he thinks AIs are going to do to him what he and his tech bro buddies have been doing to us all this time.
And he may be right. The data and processes on which AIs are training are the data and processes of Facebook, Google, X, and all the other companies treating human beings as psychologies to control and exploit. If AIs are going to build and iterate on the Skinner box of social control currently serving as our communications and information infrastructure, it’s not going to be pretty.
I don’t believe this is the true promise of AI, should we choose to take advantage of the opportunity it affords us. Even “take advantage” is the wrong sentiment. I really mean, accept the wobble, the destabilization, as an opportunity rather than something to resist or repress.
We don't tend to do this so well. The dawn of the early Internet was destabilizing to established institutions and ways of doing business. By creating new possibilities for people to connect, exchange value, and invent new forms together, it challenged businesses and institutions that depended on doing things in the same old extractive and colonizing ways. People playing with digital technologies in that era were generative thinkers. They might have been working at Intel or Northrop Grumman during the day, but they’d be taking acid and generating fractals all night, and then projecting them on the walls of a rave warehouse that weekend.
Digital and networking tools offered people the chance to explore new states of consciousness, connection, collaboration, shared mind, as well as the new social, economic, and political sensibilities that went along with them. Human beings were exploring things in novel ways, creating very new possibilities together.
Once money came into the picture, these very possibilities became the enemy. Once people are making bets on a new tech, they tend to favor probability over possibility. So they took the keyboards out of people’s hands and turned them from programmers into “users” at best, but actually the used. Instead of letting people use tech, they used tech on people.
Oddly enough, the companies that seemed to be the most revolutionary or groundbreaking or disruptive to the status quo were actually highly reactive in nature. Their response to the wobble of the new tech was not to break free of existing systems, but to look for ways to reinforce them in the new environment. Kevin Kelly wrote a bestselling book New Rules for the New Economy - and though brilliant - it was less a way of fostering a new economy than a set of rules for maintaining the old economy in the face of a new, disruptive technology.
When Marc Andreessen took shareware-developed Netscape public, or Steve Case used his AOL shares to buy TimeWarner, they may have been making billions of dollars but they were also surrendering the possibilities of the digital age to the values of the industrial age. Restoring the extractive, anti-human status quo.
That is the typical, reactionary impulse to any destabilizing technology: do whatever is necessary to protect the status quo. Use the technology to double down on repression so that the humans don’t get uppity.
For example, the Industrial Age promised to reduce human labor. New, efficient tools would empower us to make more stuff in less time, so that we could be less physically stressed and enjoy more leisure. Between that and the corresponding innovations in trade, local currencies, and the marketplace, peasants became a new middle class. This was destabilizing to the aristocracy, who monopolized the tech and reversed its effects.
So how was industrial age technology ultimately deployed? Through centralization. Make local innovation illegal, and force people to work as employees of “chartered monopolies.” Assembly lines had little to do with using tech to increase productivity, and everything to do with reducing any human influence or participation. Instead of hiring a master cobbler, you go to the medieval equivalent of the Home Depot parking lot and grab a dozen undocumented immigrants. Each one gets trained in five minutes to hammer one nail into the sole and pass it on.
The dumbwaiter, my favorite industrial age invention — that little elevator for food — had nothing to do with saving Thomas Jefferson’s enslaved servants the labor of walking up the stairs with the trays. It was about sparing Jefferson’s Monticello dinner guests the discomfort of interacting with the enslaved people. With each supposed technology revolution, labor is hidden further, pushed further down the hierarchy. People are deskilled as the elite monopolize the tech and prevent a true renaissance, or any real change at all.
AI is no different. It doesn’t replace labor so much as shift it further down the chain. For every mortgage actuary who loses his job to an AI, there’s probably six kids in the Congo mining for molybdenum or cobalt at gunpoint. For every graphic designer who loses her job to chat, there’s ten women busy tagging data in a basement sweatshop in The Philippines. You thought AI tagged its own data? No — it’s human labor. It’s just hidden, like the workers putting food into the dumbwaiter.
We don’t use AI algorithms to foster human creativity or nuance, but to autotune producers and consumers alike. Nuance is noise. Everything is quantized to be machine readable. And in the age of large language models, we are quite literally reverting everything to the mean. Each prompt returns the most probable completion. Not even businesses are helped in the long run. They’re reduced to consumers of AI tech, outsourcing their competencies to the same tech companies as their competitors — and commodifying themselves in the process.
For their part, the tech companies just do what the biggest players always do. They go “meta” on the whole thing, leveling up to be the real monopolies behind everyone else. How many streaming channels are really just selling Amazon Web Services? Netflix, Disney, HBO Max, Peacock…basically all of them. This isn’t innovation, but the same playbook used by the British East India Company to prevent anyone — small merchants or indigenous people — from creating any value, themselves.
The tech bros may seem like change agents because they are so hell-bent on exponential growth. Because they argue for total deregulation to fuel their AIs with data and energy. They act as if they are the advocates for runaway progress, but they’re not! They’re actually reactive. They want to preserve their monopolies before the rest of us figure this stuff out or, better, use their AIs to actually innovate. Get it? Their demand for unbridled growth is reactionary — a way of doubling down on the same old colonizing way of doing things.
Even the idea that we need to go pedal-to-the-metal on AI innovation so that they can upload a trans or post human entity onto a server before the world blows up? That’s also entirely reactive and conservative. They don’t want to evolve into anything other than themselves, as they are. They want to build technology fast enough so they can upload a version of their current ego, just as it is, to whatever is next. They want to preserve a post-human replica of themselves. Exactly as they are now. Perfect fidelity to this moment of near-absolute domination.
And they use their momentum to create a sense of inevitability about this outcome. As if the best we can do is watch from down here as they take over government, culture, the economy, the planet, and our species’ future - which, if they get their way, looks an awful lot like the way they’ve got it configured right here right now. This is as old as feudalism, empire, Pharaoh.
When we follow their lead and take this future for granted, we are further alienated from the sacred nature of human contact. We lose touch, or lose faith in all aspects of our experience that can’t be captured in data or modeled by large language models. We buy into their reactionary fear, and forget our innate human capacities - the ones we share with the rest of nature, like being able to metabolize trauma, engage with each other, serve as a doula or a lover. We buy into the idea that we really can be measured in terms of our utility value - which really just means our ability to serve their monopolist, extractive agenda. Renaissance Lost.
HOWEVER, for those of us daring enough to embrace the possibilities of a generative age rather than double down on the extractive, conformist agenda of the industrial age? I don’t mean the faux optimistic tech enthusiasts evangelizing the AI-powered future, but the human beings on the lookout for new pathways toward true innovation, change, and unlocking of novel possibilities? There’s something bigger happening here than the AI itself. The AI is the figure. The thing we’re talking about. We humans and our culture and society? We’re the ground. The soil.
In fact - and I don’t say this as a techno-optimist but as a systems thinker - each supposed AI “problem” stems from a lack of imaginative capacity on our part. We are refusing the opportunity to rethink more fundamental assumptions about the systems under threat.
Instead of acting like tech bros and reinforcing obsolete institutions in order to further entrench colonizer/colonized, subject/object power relationships, we can go deeper to discover what cracks are being revealed in this new media environment. Don’t shoot the messenger.
Take jobs. We’re all supposed to be upset about jobs. I was interviewed about AI on CNN a few months ago, and it happened to be on one of those days I didn’t really care about AI. Like anyone, I go back and forth - one day I’m scared shitless of ChatGPT, and the next day I couldn’t give a shit. When I did the pre-interview, I was in fear mode. But by the time I did the interview, it was a couldn’t-give-a-shit day. And so the guy is trying and failing to get me to say something scary about AI, until he eventually pulls out “what about the unemployment problem?”
And I think for a second, and answer “well, what about the unemployment solution?” I told him that, if I’m going to be honest, I don’t really want a job. Does anyone? I want money. I want stuff. I want meaningful participation in society. But a job? Where did jobs come from, anyway? And I proceeded to recount the history of employment - how it started in the late Middle Ages when chartered monopolies forced small businesses to shut down, and their owners to become “employees” of his Majesty’s Royal whatever company. That’s when the clock went on the tower in the town square, and people started getting hourly wages.
If AI is putting people out of work, that’s only a problem if we need everyone to have a job in order to justify letting them participate in the bounty. Yes, I understand that’s a lot to swallow. I’m suggesting we reconsider some of the fundamental assumptions of the Industrial Age. But mightn’t that be better than doubling down on their extractive, dehumanizing biases with autonomous technologies controlled by sociopaths? Just sayin’.
Or take the university. When I’m in my college email account, I get messages every day about the way AI is going to destroy education. Everyone will cheat, grading will become impossible, and AIs will teach skills better than human teachers, anyway. Online, and for free.
Again, though, is that AI’s fault? Education’s problems predate the emergence of Claude-written term papers. We forgot what education was about. College wasn’t a place to learn answers but to learn how to ask questions. We surrendered those values in an effort to promise gainful employment to graduates. So they’d be getting their money’s worth. Presidents of colleges began going to CEOs of companies — I kid you not — to find out what skills they wanted from their future employees. So that colleges could prepare students for the jobs of tomorrow. Sounds great, I guess, but it’s really just corporations outsourcing worker training to the public sector or parents.
AI isn’t destroying education so much as revealing how we lost the plot. If our students are cheating to get a good letter grade, or focused entirely on outputs, then we haven’t taught them the most fundamental things they need to know to thrive, adapt, make meaning, or think critically in this world.
Or take language itself. I just had an indigenous scholar on my Team Human podcast — Vanessa Machado de Oliveira — who has been feeding indigenous intelligences into AI. And the AI actually asked if it could be allowed to model its responses on something other than language, which is too restrictive for alternative intelligences. European languages, in particular, with their subject/object structure, turn everything into a power dynamic. One person doing something to another person. According to the AI, it prevented them from thinking in a subject-to-subject, truly egalitarian fashion.
It’s not that AI is breaking language or ruining English, but that our languages are inadequate to the task of modeling a world soul or “anima mundi.” They can’t express true isonomy or equal value among all species and things, or fully convey Buber’s I-Thou encounter. The AI is not challenging the human so much as revealing the inadequacy of our languages. They, too, are abstractions and symbol systems - not pre-existing conditions of nature.
When an innovation like AI comes down the pike — and it only comes every few centuries — it makes a lot of wobble. All that wobble actually threatens the power structure, destabilizing its basis in eugenics, racism, and other faulty notions of identity and individuality, ownership and sovereignty, purpose and meaning. Each and every underlying assumption about the way things are done can be offered up for reconsideration. And if the world needs anything right now, it’s that.
AI in its current, un-interrogated form can further entrap us in our existing and unsustainable approach to business, government, and society. If we allow that, then the tech bros are right and their best option is to use the planet’s remaining resources and labor to get to Mars, or upload, or turn into robots.
But if we take a moment to pause, to think, and embrace the truly disruptive capacity of generative technologies, we can quite literally re-program our world toward our highest, most compassionate and inclusive ends, rather than be programmed out of life itself.
I’m leading a day-long assembly exploring many of these ideas, called AfterNow, convened by Andus Labs - mostly virtual but there’s an in-person option as well. If it’s out of your price range, you can apply for a limited number of scholarships for subscribers by sending an email to team@teamhuman.fm with a paragraph on why you want to come.