A Tsunami is Coming
There are epochal changes on the horizon, and they're approaching fast.
These days, my head is a cloud of chaos, worry, self-recrimination, and regret. I never seem to be able to escape that particular darkness for long. A day or two at most.
But there are moments of clarity, where the sun breaks through the gathering clouds, and it shows me something on the horizon that I can only glimpse in part, not as a whole.
What is revealed to me, what my pattern recognition can’t stop piecing together, is an understanding that everything is about to change. We are living through a liminal time, an interstitial moment between what was and what will be.
Like children with our faces pressed too closely to the television screen, we cannot see the entire picture all at once. Instead, we catch fragments of color or movement or even the occasional image, bordering on distinct.
Absently, I wonder what it was like to live through such times in the past.
Did the hunter-gatherers know, when they planted their first crops, that they were sowing the seeds of civilization?
Did the advent of written language appear as something that would change the foundations of human knowledge and communication?
Did the invention of the atmospheric steam engine to solve the pesky problem of flooded coal mine shafts look like the beginning of a revolution of automation, machinery, and industry?
Empires rising and falling. New technologies displacing the old ways. Advancing means of transport and communication making a large world feel small. So many steps on the path that feel incremental, but are the harbingers of something much bigger to come.
Do human beings ever see these things for what they are at the time?
There are several major forces in the wind right now, and if we look at them without flinching away, we can see that they promise a different world, and soon.
Population collapse and de-globalization are paired forces that promise to reshape the world as we know it in the next few decades. Countries that passed the tipping point of demographic decline years ago are beginning to age rapidly, leaving too few people behind to do the work of keeping everything running. Changes in geopolitical alignment and strategic priorities are turning the global shipping of goods, which everyone alive today has taken for granted for the better part of a century, into a rapidly disappearing blip of prosperity on the radar of history.
The movement away from American interventionism as these forces exert pressure on borders and nations could well give rise to new forms of imperialism, as the framework for international peace and trade secured by the Bretton Woods order at the end of the Second World War begins to unravel. As I write this, Israel and Iran are just hours into a new war, and for now, at least, America seems to be staying out of any commitment to participate directly. A multipolar world is again in sight as the US begins a slow, inexorable retreat from the global stage.
And the AI revolution will alter every area of human endeavor and accelerate technological progress in ways heretofore unimaginable.
I cannot emphasize this point enough.
The speed and scale of development in this arena is unprecedented, and it compounds as each newly developed technology is used to accelerate and enhance the development of the next iteration of itself. As AI systems get better and faster at solving problems that have vexed human minds for years, we will see explosive change in material science, robotics, physics, medicine, and more. Creative industries and knowledge work are already being disrupted, as AI-generated art, writing, music, voice, and video replace content made by slower, more costly, traditional methods.
To give just one recent example: the first commercial of its kind was made entirely with AI, cost only $2,000 to create instead of the six or seven figures that usually go into this kind of creative work, and just aired during the NBA Finals.
And it doesn’t stop there.
New chemicals and drugs are being conceptualized and synthesized, and cures for devastating diseases like cancer are very likely on the horizon. Automation will lead not just to self-driving cars, but to AI-powered militaries and space exploration. The boundaries of physics may well be expanded once again, as Newtonian and Einsteinian models are eclipsed. Some scientists are already speculating that we may be on the cusp of unlocking faster-than-light travel, and with it, our ability to become a multiplanetary species that seeds its future among the stars. Millions of jobs are in jeopardy as companies transition tasks to AI for a fraction of the cost of a workforce. Already, we are seeing layoffs as CEOs openly discuss strategies for shifting as much work as possible to AI. The potential for massive civic unrest, as huge sectors of the populace find themselves out of work with no prospects for a new job, will become very real if solutions aren’t figured out ahead of time, and the clock is ticking.
In the few decades my generation has left to live, we will see change happen at a breathtaking pace, across every imaginable arena of known experience. Our children are inheriting a world that will be utterly unrecognizable. Attempting to predict what will happen with any degree of specificity feels like an impossible task, but we know with certainty that disruption at every level is coming, and coming fast.
The agricultural revolution took generations. The printing press took decades to be adopted and disseminated. The industrial revolution’s first iteration took about a century.
AI has come this far in under a decade.
There are any number of markers, but the Google paper “Attention Is All You Need,” published in 2017, is generally recognized as the foundational work upon which the models we’re now all familiar with are built.
And unlike our previous technological revolutions, for the first time in human history we are developing a technology that will not just make us more efficient, but could actually replace us. The functional obsolescence of the human race as the chief driver of innovation, labor, research, exploration, and so on is not a matter of if, but when. AI is not just a technology we are creating for ourselves; it is a technology that has the power to create better versions of itself.
At some point, it may no longer need us at all.
But because AI development is every bit as much of an arms race as splitting the atom was, any nation that attempts to slow progress will be dangerously left behind. Regulations will be relaxed, economic incentives granted, and caution thrown to the wind in the race to be the first nation with super-intelligent AI. It has already begun. Trump’s so-called “Big, Beautiful Bill” (which has already passed the House and is now in the Senate) contains a provision that prohibits any “state or political subdivision” from “[enforcing] any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for the next ten years.
As Joe Rogan put it on a recent episode of his podcast where this was being discussed:
“Well in ten years, we have a god.”
I don’t know where this all takes us. I’m worried, but I’m also fatalistic about it.
From the moment we invented the technology to make computers, this was always going to happen. AI was always the pot of gold at the end of the electronic rainbow. And of course, it’s not even really the end of the rainbow so much as the jumping-off point for all the other technological marvels we haven’t dreamed up yet, but will, with the assistance of our new digital overlords.
As a guy trying desperately to reinvent his writing career at midlife, and who needs to advise his teenage children on what they should plan for to find the right careers of their own, I think a lot about human obsolescence in the face of AI. I work with it every day. I use it for research, to help me structure my thoughts, to ask general questions about any given topic, to fact-check information and obtain sources, to brainstorm creative projects, and more. Its uses are as limitless as the information available all around us. The internet had become a firehose that was hard to drink from with our limited processing capacity. AI has no such limits, and it can act as our carefully cultivated filter, taking only what we want from the overwhelming deluge.
As a heavy user, I see what it can do at this stage of the game. I see what it can write, the art and music it can make, and as the accelerationists like to say, “we’re still early.” We have not yet seen the release of fully reasoning AI models. We are still in the LLM phase, the “mimics” that do pattern recognition and prediction, albeit of an extremely accurate and uncanny variety. And this style of “thinking” is precisely why AI models hallucinate: sometimes they pattern-match and predict incorrectly.
I asked ChatGPT itself to explain the difference between what we have now and what’s coming. This is what it said:
Most of today’s AI systems—like ChatGPT, Claude, or Gemini—are based on large language models (LLMs) that excel at pattern recognition. They’re trained on massive datasets to predict the next most likely word in a sequence, and they do this astonishingly well. But at their core, they’re statistical mimicry machines. They don’t understand facts, truth, or logic in the way humans do—they simulate understanding based on patterns in language. You can think of them like improv actors with photographic memories: they can play the role of a physicist, historian, or therapist convincingly, but they’ve never been one.
This leads to real limitations. LLMs often hallucinate facts, confidently generating information that sounds right but isn’t. They struggle with common-sense reasoning, especially when it involves physical intuition, ambiguity, or social nuance. They can flounder in multi-step problem solving, where each step depends logically and precisely on the last. And while they can appear brilliant in isolated examples, they lack a stable internal model of the world. They’re not grounded in reality—they’re grounded in probability.
Reasoning-based AI changes the game. These next-generation systems are being designed not just to generate language, but to think—to construct logic chains, understand cause and effect, break down complex tasks, and adapt their strategy when circumstances change. They don’t just sound smart—they are smart in a structured, goal-directed way. Where pattern models imitate expertise, reasoning models will demonstrate it: writing mathematical proofs, conducting scientific research, solving real-world logistical problems, and even making strategic decisions across time.
This shift—from mimicry to cognition—could be the difference between tools that assist us and intelligences that rival us. If LLMs are parrots with style, reasoning models are the first steps toward synthetic minds.
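That “grounded in probability” line is the crux, and it’s concrete enough to sketch in a few lines of toy code. What follows is my own illustration, not anything from OpenAI: a deliberately tiny bigram model that predicts the next word purely from counts of what followed what in its training text. Real LLMs are incomparably larger, but the core move is the same.

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model" (illustrative only): it learns which word
# tends to follow which, and nothing else. It emits a likely continuation,
# not a verified fact.

training_text = (
    "the capital of france is paris . "
    "the capital of italy is rome . "
    "the capital of spain is madrid ."
)

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    choices, weights = zip(*options.items())
    return random.choices(choices, weights=weights)[0]

# Ask about a country the model never saw. It still answers fluently,
# because "capital of X is <city>" is the pattern it learned --
# a miniature hallucination.
print("the capital of portugal is", next_word("is"))
```

Run it a few times and it will assert, with equal confidence, that the capital of Portugal is Paris, Rome, or Madrid. Nothing in the mechanism distinguishes a true continuation from a merely plausible one, which is the whole problem in miniature.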
Rumors are that while GPT-4 and GPT-4o have demonstrated some emergent reasoning capacity, the next iteration of ChatGPT, GPT-5, which is slated to launch next month, will be the first publicly available model designed to simulate reasoning better than anything currently available. OpenAI seems to be betting the farm on this one as its flagship release.
But true reasoning models are still likely going to take some time to create.
How much time? Impossible to know. I’m confident that things are happening in-house at the AI companies that we aren’t privy to, which is why the CEOs of those companies have been warning about the coming job losses to their products and insisting that we as a society have to start taking this seriously and figuring out a path forward.
GPT-3 came out in June 2020.
GPT-4 was released in March 2023.
GPT-5 is likely to come out in July 2025.
As you can see, the gap between releases is shrinking (roughly thirty-three months from GPT-3 to GPT-4, but about twenty-eight from GPT-4 to its projected successor), even as each iteration becomes more impressive.
A year from now, what will the landscape look like?
It’s very hard to say.
But I am absolutely convinced that the three issues I mentioned in this post, with AI being the most significant, are the things that will most profoundly shape the world for the rest of my life — barring nuclear war, or some concrete, evidence-based revelation that we are not alone in the universe.
I will certainly be writing more about this in the future.
No industry is going to be untouched by this. But as with the introduction of the computer, some industries will be less affected than others.
If I were a young man again, or had sons who were teenagers, I would definitely be pursuing something hands-on, such as construction, HVAC repair, or nursing.
But I’m fairly certain that many white-collar jobs that can be done from a desk with a computer (e.g., engineering, coding, project management, graphic design), the ones built on head skills, logic, and creativity, are going to be heavily consolidated or go away completely.
I’m just trying to hang on until retirement (not that retirement is guaranteed anymore).
The way GPT described itself reflects my own experience with these tools as well. It can take a while to "see" it, because the mimicry is good. But if you interact with them enough, and in different contexts, you begin to see the "cracks" in the mimicry, as convincing as it can sometimes be, and you therefore learn to use it more appropriately.
I find GPT exceptionally useful at specific, concrete tasks -- helping with image prompts, for example (or any kind of AI prompts, for that matter; prompting is critical for these things to work well, and it's something some people seem to find fascinating but I personally find irritating, and GPT can do it for me in a heartbeat). Getting into any in-depth discussion about anything "deep" is not what it's good at, and very much "at your own risk," in my opinion.
And the other limits all apply. Say you're talking to a chatbot that's playing the role of a companion -- a friend taking a stroll through a virtual park, for instance. It will easily lose track of geographic specifics (like where it is standing relative to that tree over there, or a lake, or what have you) that are literally trivial for a human mind but absolutely flummoxing to an AI chatbot because, as GPT points out, it can't mimic that degree of specificity. There is no probabilistic answer that works almost all of the time; there is only the very specific answer to "where is this specific tree?" ... and it sucks at that, because that isn't what it's doing ... and when you see the crack, you can't unsee it.
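To see why that failure is structural rather than just a bug, it helps to picture what keeping track of a scene actually requires. Here's a minimal sketch, my own construction and not any real chatbot's internals, of the explicit world state a virtual-park companion would need. A classical program answers "where is the tree?" by lookup, so it is always consistent; a pure next-word predictor has no store like this behind its words.

```python
# Minimal sketch of explicit world-state tracking (illustrative only).

world = {
    "tree": (10, 4),   # fixed landmarks in the park, as (x, y) coordinates
    "lake": (3, 12),
}
position = [0, 0]      # where the companion is currently standing

def walk(dx, dy):
    """Move the companion; the state updates deterministically."""
    position[0] += dx
    position[1] += dy

def where_is(landmark):
    """The landmark's offset from wherever we are standing right now."""
    lx, ly = world[landmark]
    return (lx - position[0], ly - position[1])

walk(5, 2)
print(where_is("tree"))  # (5, 2) -- the same answer every time we ask
```

The lookup is trivial, which is exactly the point: the question has one right answer, so a system that can only produce probable-sounding text has nowhere to hide.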
In fact, you start seeing that kind of crack everywhere you look once you know how to spot it. It's just that most people aren't used to working with these things enough to get to the point where they see the cracks, or they're too gullible about the often convincing early impression a well-made chatbot can make.
--
I do think that the reasoning models are different. We've already seen proto-versions of them, but they're very simple. In theory it *should* work, but in practice it takes massive amounts of compute to make the theory play out the way it has so far with LLMs ... and as we know, LLMs themselves take a truly massive amount of compute to work well. Technical, hardware, and capacity limitations are coming into the picture, and I would not be terribly surprised if we hit some kind of cap due to that sooner than we think. Not that I think this is coming soon, mind you, and I don't rule out the possibility that clever software or logic-architecture design can stave off some of the capacity problems and limits, but I think we'll eventually get to them -- the elephant in the room, of course, is where the stuff will be when we get there (i.e., how capable it will be).