I’m halfway through this podcast, but I couldn’t wait to share it.
I don’t know if you’ve ever encountered an insight into some unfolding change in the world that struck you so profoundly you realized nothing would ever be the same, but this podcast had that effect on me.
It’s a conversation between Tom Bilyeu of Impact Theory and Emad Mostaque, founder of Stability AI. (Mostaque is apparently a controversial figure, but I found him intelligent and cogent throughout the hour of the interview I’ve watched so far.)
All I can say is: we are simply not ready for what’s coming, and it’s coming breathtakingly fast.
Mostaque puts things in no uncertain terms: “AI is not going to replace humans. Humans with AI will replace humans that don’t use AI.”
Mostaque’s own story is fascinating. He was a hedge fund manager with a son who had a non-verbal form of autism, and he began using AI tools to research treatments, eventually finding a regimen of drugs and therapy that has helped his son succeed at a regular school.
Mostaque believes that one of the many advantages of AI will be in the healthcare arena. He argues that the problem with many aspects of healthcare, including conditions like his son’s that aren’t supposed to have a cure, is that human intelligence doesn’t scale. There just aren’t enough really great doctors to dig into and solve the problems. But AI is, essentially, human intelligence at scale. Mostaque envisions personalized healthcare through AI analysis of unique factors that could be dramatically better than what we have available today.
Which is not to say that he doesn’t see the dystopian side. But there isn’t any way to stop this train. It’s going to cause massive displacement. It’s going to put countless human beings out of work. Mostaque acknowledges that we will not likely be able to create enough new jobs to replace those that are lost. It’s impossible to predict how many industries it will decimate, but Mostaque’s assessment is that anything you can do sitting in front of a computer is something AI will be doing in the very near future.
As I listened, digested, and discussed this with my wife, it started to unfold in my mind.
This is a change I don't think people are ready for. We don’t know how to accept that very soon, AI will be able to do anything humans can do, and do it better. They won't be distracted by social media. They won't get tired. They're incredible at listening. They're scoring above human beings on empathy and creativity tests. They are doing law and medicine and science and programming. They will be able to replace us in most roles that don't require physically interacting with the environment, and where physical presence is needed, I believe we'll see rapid development of humanoid and other robotic bodies so AI can perform those roles as well.
There WILL be a dividing line in human history of the time before AI, and the time after. It will be like the advent of nuclear weapons and the creation of the internet all rolled into something more ubiquitous and paradigm-shifting. Those of us who remember the before time will be the last of our kind. Nothing is going to be the same again.
Alongside the AI revolution, there are two other major factors on the horizon that I see as the remaining pieces of what is shaping up to be a moment that sends the human race into sustained ontological shock. Here are all three:
1. The AI revolution.

2. The official disclosure of non-human technology and non-human beings being prepared by the US and other governments. If you haven’t been keeping up on this, there are hearings coming on July 26, and new legislation drafted last week seeks to declassify UFO/UAP information. It mentions “non-human intelligence” 22 times and discusses “reverse engineering of technologies of unknown origin or the examination of biological evidence of living or deceased non-human intelligence.” This is not just a fringe amendment to the NDAA; it’s being spearheaded by Senate Majority Leader Chuck Schumer and has bipartisan support.

3. The impending demographic collapse and the end of America-backed globalization. I’ve shared geopolitical analyst Peter Zeihan’s work with you before, but it’s becoming increasingly relevant. His book, The End of the World Is Just the Beginning, draws on history and present-day geopolitical reality to argue that, in human terms alone, the world is about to undergo epochal change. The Bretton Woods era is over, and we should expect global instability and massive economic shifts, independent of points one and two.
I was profoundly struck by the realization that those who do not seek to understand what is coming and find a strategy to deal with it are likely to be blindsided, with potentially catastrophic results. If I had to go on gut intuition alone, I would say (and this assessment surprises even me) that the AI revolution will cause the single biggest impact of the three. It’s somewhat insidious, because right now most people in the world have never heard of ChatGPT. AI development is happening at an exponential pace, but it’s doing so in a highly technical arena the average person isn’t necessarily encountering. For many people, probably most, its sudden emergence as a dominant force in every industry is going to seem as though it came out of nowhere.
We are not ready for this much change.
This sobering conversation is one of many on the topic, but I found it uniquely insightful. I’d class it as a “must listen.” I’m interested to hear your thoughts.
It's hard for me to sort out how much of this is hype. As Mostaque acknowledged, he does have a reputation for exaggerating things. But if he's even half right, I agree with him that the problem will get so big and so complicated that we will probably need AI to solve it. While I sympathize with and generally support efforts to slow AI's development, part of me wonders whether the cat isn't already out of the bag. If the arms race is already on, perhaps it's best for good actors to aggressively pursue AI development. Either these predictions flop, and there was nothing to worry about, or the predictions are true, and our best hope for survival is developing an AI capable of counteracting the negative impacts of AI. If that's the case, then the sooner we have that 'good' AI in place the better.
I think Bilyeu is right on when he talks about the arc of technological development leaning towards progress, but with the caveat that it doesn't care about the individual. With every paradigm-shifting technology, the rising tide has eventually lifted all boats, but in the short term it has plunged many to the bottom of the ocean. I think of how wretched the lives of factory workers were in the early days of the industrial revolution, or even of sweatshop laborers today. I get that farm life can be brutal, and I can't ignore the fact that many voluntarily chose to work in hellish textile factories and coal mines. But it would be hard for me to say their lives were improved.
As AI emerges, I hope we won't be like the people who shrugged their shoulders at child labor, or at workers being locked into buildings with no fire exits (or, today, the suicide-prevention netting at the Foxconn factories that assemble Apple products). I think now is a good time to start educating ourselves about the history of worker exploitation and corporate irresponsibility.
As much as I disagree with socialism, I suspect that way of looking at the world is much better equipped to spot the potential pitfalls of AI. In any case, another interesting thing about AI is that it could dramatically change the material conditions we base our ideologies on. If the changes these men predict come to pass, we'll really get to see whether a given economic or sociological theory represents an eternal truth of nature or just something that worked well under the circumstances of its time.