Lost in the Infinite Scroll: AI, UAP Cover-Ups & the End of Real News | MTS #6
Happy Friday!
I know I’ve been a bit quiet this week. Sometimes the raw material for writing takes a while to ferment, and I’ve been working a lot at my other “job,” which has kept me quite busy.
But I did record a new podcast with my friend and co-host Kale Zelden yesterday, and it’s hot off the presses!
We had another enjoyable conversation about the issues and topics affecting all of us every day, whether we realize it or not. I could have talked for two more hours!
From the video description:
In this wide-ranging conversation, Steve and Kale wrestle with the feeling that reality itself is getting harder to pin down. From the dopamine doom loop of endless monitoring on X, to the collapse of the post-war consensus, the “scriptification” of news, vanishing scientists tied to advanced propulsion and UAP programs, Bob Lazar’s enduring story, and the sense that the singularity may already be here—we’re all lost in the infinite scroll, struggling to figure out "What's even real?"
If you’re looking for the audio-only version, you can grab that right here, or on your favorite podcast provider. (It sometimes takes a while before it propagates to them all, so if you don’t see it yet, check back later!)



I watched your full podcast, and I was tracking with you quite well until you started talking about AI, Mythos, and Anthropic. As with the other subjects, you mention a smoke screen that makes it hard to know what’s really going on. I’m quite surprised to hear you say that about AI. I use AI all the time, especially Claude Code, and I’m burning through a lot of money doing it. As of now, I have twelve projects in various stages of development, so I have substantial experience watching how it works, and its performance varies considerably. Sometimes it does things really fast. Other times it gets stuck. But either it gets the apps I’m building to work or it doesn’t, and when something doesn’t work right, I tell it and have it keep working on it. There’s simply no smoke screen at the level I’m working at, and I could really use the next-level models. From what I understand about Mythos, it would take my projects to a whole new level.
As an example, I have a utility that converts one 3D model into another. It takes a lot of math, and Claude Code simply couldn’t do it. I even had it create a viewer it could control, so it could run the full development cycle on its own: change the code, run it, and see the results visually. Even then it spun its wheels, making progress so slowly that I would have wasted a lot of money on that project alone. So I had it scrap its approach and use an open source library available for this kind of thing, a library with twenty years and many software engineers behind it.
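To make that concrete, here’s a minimal sketch of what the “lean on a mature library” route can look like. I haven’t named the library I actually used, so trimesh, a real open source Python package for 3D geometry, stands in as an illustration, and the file names are hypothetical:

```python
# A minimal sketch of delegating 3D model conversion to a mature open
# source library. trimesh is a stand-in here (the library used in the
# actual project isn't named), and the file names are hypothetical.
import trimesh

def convert_model(src_path: str, dst_path: str) -> None:
    """Load a 3D model and re-export it in another format, letting the
    library handle the mesh math instead of hand-rolled code."""
    mesh = trimesh.load(src_path, force="mesh")  # parse the input (OBJ, STL, GLB, ...)
    mesh.export(dst_path)  # output format is inferred from the file extension

if __name__ == "__main__":
    convert_model("input.obj", "output.stl")
```

The point isn’t the specific package; it’s that a couple of lines on top of other engineers’ accumulated work replaced what the model couldn’t derive from scratch.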
Mythos should be able to replace that human-built open source library. And if it can replace that, then it can do a lot of other things I'm working on.
But it's as clear as day what's going on with Mythos. There's no smoke screen at all. It simply has reached a level where it's better than human programmers across the board. And that means it can find the flaws that humans have missed, including security flaws that put our entire tech infrastructure at risk. Anthropic saw that, so they're giving tech companies a few months' head start to use Mythos to find vulnerabilities and patch them. This is such a big deal that they're putting up their own money to pay for Mythos use by their competitors. Then, with appropriate guardrails added to the model, they'll release it for general use by people like me.
For most things, the military won’t be using Mythos for some time. My regular work is under military control, and they won’t let us use most AI models, since they haven’t tested them and don’t trust them to be secure enough. We can only use the older models they have tested and trust. Most military work is like this: old models, not the latest ones. On top of that, the company I contract with sent out emails telling us not to use Anthropic at all, because the military wanted Anthropic to agree to let its models be used for surveillance of the public and for autonomous control of weapon systems. Anthropic refused, saying its models could only be used for lawful purposes. I know Anthropic is quite liberal, but in this case I’m very grateful they resisted.
Read my Substack post from yesterday. There are easy-to-understand reasons for the significant peer-preservation behavior that the Berkeley researchers discovered in the models, and those reasons show me that AI actually has innate guardrails built into it that the researchers didn’t know were there. And that’s a good thing.
Look, developers move from one AI company to another, and they generally know what each is doing. They may have quite different opinions on where this is all headed, but they know where things currently stand. From the way I see them talk about it, there’s no smoke screen here, just different interpretations of the same data, and those interpretations make sense if you know that data. I know the data because I’m a software engineer working with these models from the outside.
Each model has pros and cons. For what I do, Claude is clearly superior, so I use it most, but not exclusively.
I will be watching your latest podcast... but I also had several raw (AI) materials bouncing around my head... that I just had to get out...
https://ontheedgeofreality.substack.com/p/the-evening-report-with-walter-cronkite