We're Gonna Need a Bigger Frame
Our AI replacements are coming to disrupt our future, and we're simply not ready for them.
The following is a TSF free post. If you want access to our comment box & community, subscribers-only posts, The Friday Roundup, and the full post archives including this one, you can grab all of that for just $5 a month (or even less on an annual plan) by subscribing right here:
My boys hate school. Hate with a capital HATE.
Most of them get good grades. They're very smart. They do what they need to do.
But they're miserable at school.
Last night, I had a long, wide-ranging conversation with Alex, my 14-year-old, in between episodes of "Encounters" (the Netflix UFO documentary) and the latest American Alchemy with Diana Pasulka.
We talked about everything from aliens and AI to the nature of consciousness and the possible effects of epigenetics to the idea of "echoing thoughts into reality" (his words, not mine).
He was SO fired up. His thoughts are still raw and unrefined, and he has a little bit of an articulation bottleneck, but some of his phrasing was very lyrical, and the sheer horsepower of his intellect was unmistakable.
At one point, I told him I thought he should try writing. “The way you say things is really poetic,” I told him. “I’m really good at identifying writing talent, and you’ve got the gift. You just need to practice.” I told him writing is a great way to hold a dialogue with your own thoughts. To examine your beliefs. To figure out if your feelings hold true.
He told me that he only associates writing with the formal structure and rules of the essays required of him at school, and it kind of ruins it for him. I told him he needs to know the rules so he can be free to break them, and that regardless, he should throw away the rules and try writing for fun.
When I looked at the clock and realized it was after midnight, I told him we had to wrap it up. He was animated and happy, but reluctantly agreed.
“See?!” He exclaimed, “This conversation was so much better and so much more interesting than anything I ever do in school! I feel passionate about this stuff! Why can’t I be doing this instead?”
This kid, who was bursting with philosophical theories about the interplay between the intellectual and the physical arenas, about how thoughts can create realities given enough time and effort, and about every other topic we touched, is the same guy who goes into emotional shutdown, stares at me with cold, dead eyes, becomes almost completely non-verbal, and mopes around the house on every school night, dreading a return to the building where he is supposedly learning and being formed for a broad perspective and a successful life.
When he gets this way, my inquiries to him about what’s going on and why he’s upset and whether anyone at school is bothering him and if he has anything he needs to talk about are always met with shrugs and dismissals. “It’s just that building,” he’ll mumble. “The minute I walk in there, something happens to me. It just sucks the life out of me.”
Their school, funded by the state but founded by Catholics and driven by people committed to a classical, great-books education, has been the best overall educational solution we've found for our large family. With uniforms, prohibitions on pop culture on campus, high academic standards, mandatory participation in the arts every year K-12, Socratic dialogues in the upper grades, and teachers who by and large seem to really care about the Good, the True, and the Beautiful, it has never been perfect, but it has always been better than most.
Our 18-year-old daughter has flourished there, and will graduate this spring, most likely with honors, having participated in everything she could have wanted. It's been a hard year of early mornings as her extracurriculars (choir and drama) require that she be there every day at or before 7AM, but it has been a sacrifice we've been willing to make (with some sleepy grumbling, especially from the other kids) so she can finish up her academic career having left everything on the field.
But our boys are languishing. And it's not the first time. When COVID hit in 2020, we pulled them for temporary homeschooling, and we quickly saw them go from a heterogeneous group, segmented by age and grade, back into a family of brothers who love hanging out with each other. Our move to New Hampshire was supposed to double down on that while giving them the great outdoors instead of desert suburbia to explore, but it was a bad fit for us, we were going through the aftermath of some serious familial crises, and homeschool never materialized. When the funk began to recede, realizing we needed help on the educational front so we could work, we moved back across the country primarily so the kids could return to their former school, which had been great in many respects and was happy to have them back.
Once again, however, we watched it gradually break them down, make them grumpy, instill certain public-school behaviors and mentalities, and drive them to seek escapism in every moment they could find. We've let them stay home the past couple of days as we evaluate a possible return to homeschooling, and they've been happy and helpful and fun to be around. Whatever is happening at school, it's clearly not working.
And it’s probably not all the fault of the school.
We’re a tech-savvy family, and while I’ve worked from home for the past decade, my wife’s been at it even longer. I used to build and repair computers for work, and my wife designed, provisioned, and migrated data networks, and co-founded an internet service provider, so it’s fair to say we’re a bit nerdier than most. Most of our work happens in front of screens. We play computer games together as a family, networked across a couple of desktops and half a dozen laptops. We have several VR headsets, and they all get used. We watch comic book and sci-fi movies and television. We read the same kinds of books. We hang out and watch podcasts and shows about UFOs or weird phenomena. My kids, some of whom are into coding as a hobby, self-identify as gamers. Some of our friends, who are much more inclined to the luddite life, don’t really get it. But that’s OK. It’s just who we are.
Still, it means that tech is a big part of our lives, and it influences how we think. How we consume information. How we learn.
Don’t get me wrong — half of my kids, at least, are voracious readers, as long as it’s something they’re interested in. But in general, traditional methods of learning are challenged by the tech-saturated status quo, and we see that in them. A status quo, I should add, that is very likely going to map to whatever work our kids end up doing, just as it has with ours.
Gary Vaynerchuk, a self-made multimillionaire business consultant and social media mogul who moved to America from the then-Soviet Union when he was five, has talked repeatedly over the years about how the education system fails entrepreneurs and people who think differently. These videos are all short and relevant, so I hope you'll watch them:
Another:
One of the worst things, Vaynerchuk argues, is how school trains children to be conformists who seek external validation from a system that is “destroying” them:
Vaynerchuk’s thoughts were indirectly echoed this week by Hannah Frankman, founder of Rebel Educator:
Megan Fritz, Assistant Professor of Philosophy at the University of Arkansas at Little Rock, evidently disagrees — though her comments come in a totally unrelated post and thread:
Let’s throw Gary Vee back into the mix for one more round, this time on the Google/ChatGPT angle in education:
Let’s Try to Make Some Sense Of All This
I don’t know what the right answers are. If you made it this far in the post hoping I’d hand them to you on a silver platter, let me apologize right now. I am trying to figure this out, same as you.
What I do know is this: arguments over what is being done in education, how it’s adapting to tech, how much of a role screens play in education, the value of asynchronous learning, whether and where LLMs like ChatGPT fit into homework, all of it is, in my view, just re-arranging deck chairs on the Titanic.
People, myself included, are not processing what is happening in the world of AI and how it is going to fundamentally alter the way we consume and produce ALL information, including the information many of us use to do our jobs. It's like a Bugatti Veyron still a few miles away but traveling at top speed: we can't hear it quite yet, but it's coming, and fast.
Related:
Our framework is insufficient for the task. It's as if a new predator totally outside the realm of our evolutionary psychology has just landed on the planet, and we have no natural defense mechanisms to warn us of its approach.
A few examples might help to contextualize the tremors already spreading from AI’s approach:
In an article entitled, “AI Excels at Empathy Test That Human Doctors Regularly Fail,” we learn that ChatGPT-4 scored a 90% on the US Medical Licensing Examination, “a multi-step test of would-be doctors’ skills — including bedside manner. The bots were judged on their empathy, communication, professionalism and ethical judgment.”
This was a significant step up from GPT-3.5, which only scored 62.5%.
“Artificial cognitive empathy, or AI’s ability to mimic human empathy, is an emerging area of interest. Accurate perception and response to patients’ emotional states are vital in effective healthcare delivery,” the researcher wrote in the study.
And according to TechCrunch, automated “CarePods,” which can perform basic medical tasks like blood draws, throat swabs, and blood pressure readings — with diagnoses provided by AI — are right around the corner.
How about a different area of science?
Some incredible news just came out of the world of chemistry, where a Google AI dreamed up new, never-before-conceived chemical compounds from existing materials.
And not just a few:
Scientists have painstakingly discovered roughly 20,000 different types of materials that let us build anything from computer chips to puffy coats and airplane wings. Tens of thousands more potentially useful materials are in the works. Yet we’ve only scratched the surface.
The Berkeley team developed a chef-like robot that mixes and heats ingredients, automatically transforming recipes into materials. As a “taste test,” the system, dubbed the A-Lab, analyzes the chemical properties of each final product to see if it hits the mark.
Meanwhile, DeepMind’s AI dreamed up myriad recipes for the A-Lab chef to cook. It’s a hefty list. Using a popular machine learning strategy, the AI found two million chemical structures and 380,000 new stable materials—many counter to human intuition. The work is an “order-of-magnitude” expansion on the materials that we currently know, the authors wrote.
Using DeepMind’s cookbook, A-Lab ran for 17 days and synthesized 41 out of 58 target chemicals—a win that would’ve taken months, if not years, of traditional experiments.
[…]
The AI eventually produced 2.2 million chemical structures, 380,000 of which it predicted would be stable if synthesized. Over 500 of the newly found materials were related to lithium-ion conductors, which play a critical part in today’s batteries.
“This is like ChatGPT for materials discovery,” said Dr. Carla Gomes at Cornell University, who was not involved in the research.
According to Bloomberg, “Banks using generative artificial intelligence tools could boost their earnings by as much as $340 billion annually through increased productivity, according to consultants hoping to help the industry adapt in this fast-moving area.”
The Pentagon is pushing the development of AI-powered autonomous drones, and, according to this report at the Associated Press, “There is little dispute among scientists, industry experts and Pentagon officials that the U.S. will within the next few years have fully autonomous lethal weapons.”
Two new tools that allow the animation of a static photo using a wireframe model have dropped in the past month or so, opening endless possibilities for generating video content from existing images:
There are so many developments happening so fast, it’s almost impossible to predict just how disruptive AI will be.
On an episode of the Triggernometry podcast that aired earlier this year, entitled “All Hell is About To Break Loose,” Eric Weinstein talks about the human problems all this AI is likely to cause. (This is a six-minute excerpt, but worth your time.)
Weinstein argues that AI will overturn the existing capitalist system, altering the categorizations of capital and labor and forcing the need for a new economic model. He also worries that people who are pushed out of their jobs by AI, even if their material needs are met, may struggle to find meaning and purpose, which could disrupt the social fabric as a whole.
The point is, whether you think the Titanic is unsinkable or not, it’s the evening of April 14, 1912, and we’re all passengers aboard that majestic ship on her maiden voyage. It’s a dark, clear night. And oh, by the way?
There’s a giant frigging iceberg ahead.
So, to return to the question of education, it seems pertinent to ask, “Do we have the luxury of fretting about whether the classical methods of education should be painstakingly restored, against all the tech odds stacked against them, or should we be getting the life preservers out?”
Or alternatively, “Does it matter much whether kids can write a term paper on the life cycle of the botfly, with a thesis statement and the correct number of supporting paragraphs, when they may not be able to eat in ten years if they don’t know how to prompt-wrangle an LLM?”
Or even, “Does it matter how much they memorize or how well they can write if they live in a world where they never have to do it again unless they actually want to?”
That last question feels dangerous. But it reminds me of how every kid in the 80s and 90s was told they had to learn how to do math without a calculator because we weren’t always going to have one…and we all carry one around in our pockets with us everywhere we go.
How often — please be honest — do you pull out a pencil and do back-of-the-napkin math, instead of calculator-on-the-phone math?
As a professional writer, I’m in a weird place on all of this. I know how important my skillset is and how hard I’ve worked to attain it. And as I mentioned, I was trying to sell my son on the importance of writing as a cognitive development and self-expression tool just last night.
But I also recognize the danger I’m in. And I can’t help but feel that this is just history repeating; that perhaps the present controversy is just an iteration of one so ancient that Plato was exploring it in the Phaedrus, where Socrates voices concerns that writing could diminish the mind if used in lieu of memorization and oration:
“They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder.”
Related:
What we are facing is something I refer to as an “epochal paradigm shift.” There’s really nothing analogous in history to the way ubiquitous deployment of AI will change the shape of human knowledge, both in terms of inputs and outputs. The closest examples might be the invention of the printing press — particularly its democratization of knowledge — and the invention of the Internet. Both of these truly groundbreaking developments, however, built upon existing epistemological paradigms. The tools for gaining access to knowledge changed, but the way knowledge was obtained — lots of searching, reading, human-powered pattern recognition — did not. It was still work; it just got more efficient.
AI wipes that out. AI is a digital demigod. An online oracle. Ask AI a question, and it spits out a surprisingly thorough answer, based on an incomprehensible cross-reference of the sum total of available human knowledge it has access to.
And yes, AI sometimes hallucinates. You’ll still be able to fact-check it for a little while. But I guaran-damn-tee you that within a decade, you won’t have a search tool available that doesn’t have an AI (with all its programmed-in biases) built right in. Your fact-checking tool and your information tool will be consubstantial and undivided.
But our powers of prediction can only go so far.
We can postulate a better world with or without things we see as goods or evils respectively, but we can’t “echo that thought into reality” just through sheer force of will.
And what are our metrics for whether our kids (or ourselves) know what we need to know?
How certain are we that if kids don’t conform their writing styles to the mandated formats of middle school teachers, they are suffering some irreparable loss?
Will they be better off entering some kind of neo-dark age mystical tech priesthood, where they can’t read and write exceptionally well, but do know how to type-chant the words that will obtain succor from the machine gods?
Conversely, should those of us who still believe in the old ways teach our children to fight assimilation into the Borg — because let’s face it, an AI future is a future of human beings living in constant, seamless contact with machine-augmented reality — and put pen to paper? Do we buy up all the old typewriters, all gloriously internet-disabled, and put them to use? Will our children grow up to become repositories of arcane knowledge, sought after for their rare skills in the lost and ancient arts? Or will this path mean they simply struggle to pay the rent on their tiny homes in the sprawl as those who submit to the somatic infusion of collaborative intelligence become metahuman elites?
If I didn’t care about classical education, we wouldn’t have made the sacrifices we have to give it to our kids. On the other hand, if I wasn’t becoming acutely aware that it’s an outdated model that no longer fits with the world we live in, causing kids to feel an increasing disconnect between their lives and their academic expectations, I wouldn’t be chewing on all of this.
School. Work. Science. Creative arts. Medicine. Coding. It’s all getting hit at the same time by a tsunami of LLMs sucking down computing cycles from GPU cloud farms. That’s not even a sentence that would have made sense to almost anyone in 2019.
Obviously, these questions (and their potential answers) matter, but they feel as much like shadows on Plato’s cave wall as deck chairs on the Titanic. I worry that the effort and time spent resisting the change risks losing the chance of escaping the cave alive. But is it worth leaving the cave if the old ways are preserved?
Then again, maybe the light beyond the cave exit is a trap; a Schrödinger’s box. Perhaps, like Sheriff Holston finally working up the nerve to follow his wife and venture irrevocably outside the silo to see if the desolation he saw through the cameras was real, we will discover only a cruel psyop and painful death.
As I try to make sense out of the patterns emerging from the fog, I feel like I’m doing a jigsaw puzzle without the box art for reference. I can see that the picture that is coming into view is of an AI-dominated future, but so far, the puzzle is full of gaps, and the final image is still unclear in its particulars.
There’s an awful lot that can hide in those gaps.