If I don’t laugh out loud again this week, the portrait meme at the top of the post along with “spaz festival” will tide me over.
Yes, I'm totally frustrated trying to watch newsy stories on youtube. I can't tell if the human is real or not. So I am limiting myself to watching "tried and true" regulars. Eventually this thing will work itself out.
The movie "Sneakers" comes to mind.
"You won't know what's real".
Predictive programming. I learned so much. And you're right. Unless you see it with your own eyes or hear it with your own ears... but then, that's not necessarily reliable.
We tend to hear what suits us and read what affirms us, and as for seeing... well, there are none so blind as those who will not see.
We're pretty much in the Truman Show.
Here's a summary of a potential provenance / watermarking system to minimize deepfakes that ChatGPT wrote after I discussed the issue with it...
A robust, real-world version of this idea would rely on multiple reinforcing layers rather than any single mechanism. Capture devices would use hardware-rooted signing at the sensor level, ensuring footage is authenticated from the moment of creation, and a secure capture pipeline would prevent external or injected frames from being recorded as genuine. Every edit or transformation would be tracked through a cryptographically linked provenance chain, preserving a full history of changes. Verification would not depend on a single authority but instead use multi-party attestation, where the device, operating system, editing tools, and distribution platforms each contribute signatures. Over time, a reputation system could help weight the trustworthiness of different signers, while platforms would treat content without valid provenance as “unverified” rather than automatically false. Finally, AI-based detection systems would act as a backstop, helping identify edge cases like screen-replay attacks that cryptographic methods alone cannot fully prevent.
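The cryptographically linked provenance chain described above can be sketched in a few lines of Python. This is only an illustration, not any real standard's format: the record fields, the "genesis" marker, and the use of an HMAC with a hypothetical `DEVICE_KEY` as a stand-in for a hardware-rooted asymmetric signature are all assumptions made for the example. The point it shows is the linking itself: each record commits to a hash of the content and to the previous record, so tampering anywhere breaks verification of the chain.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"hypothetical-device-key"  # stand-in for a hardware-rooted signing key


def _canon(rec: dict) -> bytes:
    # Canonical serialization of the signed fields (stable key order)
    return json.dumps(
        {k: rec[k] for k in ("action", "content_hash", "prev")}, sort_keys=True
    ).encode()


def sign(payload: bytes) -> str:
    # HMAC stands in here for an asymmetric per-device signature
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()


def make_record(action: str, content: bytes, prev_hash: str) -> dict:
    rec = {
        "action": action,  # e.g. "capture", "crop", "transcode"
        "content_hash": hashlib.sha256(content).hexdigest(),
        "prev": prev_hash,  # cryptographic link to the previous record
    }
    rec["sig"] = sign(_canon(rec))
    return rec


def record_hash(rec: dict) -> str:
    return hashlib.sha256(_canon(rec)).hexdigest()


def verify_chain(chain: list) -> bool:
    prev = "genesis"
    for rec in chain:
        # Each record must link to its predecessor and carry a valid signature
        if rec["prev"] != prev or not hmac.compare_digest(rec["sig"], sign(_canon(rec))):
            return False
        prev = record_hash(rec)
    return True


# Capture, then one edit; tampering with the earlier record breaks verification.
rec1 = make_record("capture", b"raw sensor frame", "genesis")
rec2 = make_record("crop", b"cropped frame", record_hash(rec1))
print(verify_chain([rec1, rec2]))  # True
rec1["content_hash"] = hashlib.sha256(b"forged frame").hexdigest()
print(verify_chain([rec1, rec2]))  # False
```

A real system would differ in the ways the summary notes: signatures would be asymmetric and attested by multiple parties (device, OS, editor, platform) rather than one shared key, and verification would be weighted by signer reputation.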
Even with all of these measures in place, the limits of the system are important to understand. Modern cryptography is strong enough to prevent unauthorized or forged provenance in most cases, and a comprehensive legal and technical framework like this would significantly reduce the spread and credibility of harmful deepfakes. However, it cannot guarantee that all properly signed content is genuinely real, since compromised devices, dishonest signers, or replay attacks can still produce misleading but validly authenticated media. The practical outcome is not perfect certainty, but a world where deception is harder to execute, easier to trace, and less likely to spread unchecked—an outcome that, while imperfect, would represent a substantial improvement over today’s environment.