For those unfamiliar with X (formerly Twitter), the title of this post may be a bit confusing. But this is an important issue, so please allow me to briefly explain.
“Grok” is the name of the large language model (colloquially, “AI”) developed by xAI, Elon Musk’s artificial intelligence company, which recently acquired the X platform itself. Alongside ChatGPT and Claude, it is one of the most widely used AI services, employed daily by countless individuals for tasks as varied as you can imagine.
Grok is also integrated into the X platform itself. With just the click of the Grok button, it will analyze any post, and users can actually reply inline on a public post with a question like, “Hey @grok, is this true?”
But increasingly, I am seeing posts like this one, decrying this use:
I am here to argue the contrary.
As I said in my own post on X today:
No, it's not scary. It's actually heartening. The impulse to separate fact from fiction is what is driving this behavior, and that's a good sign. We've spent the past five years floundering, overwhelmed with information none of us can understand in its full breadth and scope, losing faith in experts and institutions alike.
We now have an epistemological "referee" of sorts — a machine that is supposed to be objective, has access to vast amounts of human knowledge, and has the capacity to parse that information and make distinctions.
Grok, ChatGPT, etc., are not Gospel. But they are a tool that allows us to cut through the overwhelm and look for the truth. If people think we trust them too much, remember that we used to trust authorities and experts just as much. We have all, collectively, relied upon those who had the time and dedication to study the subjects we could not to help us to understand what we should know about them.
Again, the impulse to separate truth from falsehood is why we are asking @grok to explain and verify. That's a far better thing than simply succumbing to a firehose of questionable information with no critical thought.
I’ve been thinking a lot about AI and its rapid development, and although I share the concerns of many about the dangers, I’m ultimately fatalistic about it. It is far too useful and potent a technology to ever get the toothpaste back into the tube. The breakneck pace of AI development is not merely a competition for a cutting-edge product and market share — it’s quite literally an arms race. Nations will continue to incentivize and deregulate their domestic AI companies to ensure they are not left behind by rivals or enemies. The battle for AI dominance is nothing less than a battle for control of the next phase of history. Within a decade, it will be the dominant force shaping the future across every sector of human endeavor.
If we’re going to be stuck with it, I see no point in dwelling only on the dangers.
We may as well identify the upsides, too.
As someone who has been “very online” for over 30 years, I feel as though I’m as much of a “digital native” as anyone can be. But over that span of time, I have watched the internet turn from a trickle of information to a torrent, and from a torrent into a deluge.
When I’m familiarizing myself with a topic du jour, or researching in preparation for a piece of writing, I am often totally overwhelmed by the number of available sources, their potential bias, the issue of misinformation, the need to cross-reference to find points of common agreement, and so on.
In 2010, Google’s CEO at the time, Eric Schmidt, famously revealed how much information was being generated online:
Every two days now we create as much information as we did from the dawn of civilization up until 2003, according to Schmidt. That’s something like five exabytes of data, he says.
Let me repeat that: we create as much information in two days now as we did from the dawn of man through 2003.
“The real issue is user-generated content,” Schmidt said. He noted that pictures, instant messages, and tweets all add to this.
Naturally, all of this information helps Google. But he cautioned that just because companies like his can do all sorts of things with this information, the more pressing question now is if they should. Schmidt noted that while technology is neutral, he doesn’t believe people are ready for what’s coming.
“I spend most of my time assuming the world is not ready for the technology revolution that will be happening to them soon,” Schmidt said.
But it’s not 2010 anymore. I asked Grok (because why not?) how much information is being generated every day in 2025:
Approximately 463 exabytes (EB) of data are projected to be created daily by 2025. This equates to 463 quintillion bytes (4.63 × 10²⁰ bytes) or roughly 212.8 million DVDs per day. (Sources cited: financesonline.com, weforum.org)
Doing some quick back-of-the-napkin math, that’s nearly 93 times as much information as was being generated every 48 hours in 2010, and that means we will soon pass the threshold at which we generate 100 times as much information as was created from the dawn of human civilization until 2003 — every. single. day.
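The napkin math is easy to verify. Here’s a minimal sketch using only the two figures quoted above (Schmidt’s roughly 5 exabytes every two days in 2010, and the roughly 463 exabytes per day projected for 2025):

```python
# Sanity check of the back-of-the-napkin math, using the figures
# quoted in this post (both are approximations, not precise data):
#   2010: ~5 exabytes created every two days (Schmidt)
#   2025: ~463 exabytes created every day (the projection Grok cited)
eb_per_two_days_2010 = 5.0
eb_per_day_2025 = 463.0

# One day of 2025 output vs. the entire 48-hour 2010 output:
ratio = eb_per_day_2025 / eb_per_two_days_2010
print(f"{ratio:.1f}x")  # ~92.6x, i.e. "nearly 93 times"
```

Since Schmidt used the same ~5 EB figure for everything created from the dawn of civilization through 2003, that 92.6x ratio is also how many of those “all of recorded history” units we now produce each day — hence the claim that we will soon cross the 100x mark.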
Schmidt said we were not ready for the tech revolution that was happening.
He was right.
Keep reading with a 7-day free trial
Subscribe to The Skojec File to keep reading this post and get 7 days of free access to the full post archives.