I was originally going to post this video as another example of a Ukrainian Bradley eliminating Russian troops and vehicles, since we haven’t had one for a while, and this one is a really clear example of that. But the Bradley clip is from 2024, so I think it is of limited use for understanding how mechanized combat engagements (the main thrust of this Cappy Army video) unfold in 2026.
Instead, I want to talk about certain qualities of the video itself. Obviously, the overlays are created in post, either by humans or AI. That is standard practice and accepted by now. What really caught my eye was the sheer clarity of the video. I’ve watched a lot of drone-filmed combat footage from Ukraine, and it doesn’t look like this, especially in 2024.
Remember that scene from Blade Runner where Deckard sticks a photo in a machine and has it automatically enlarge and enhance a reflection that shows the replicants he’s hunting?
It’s been ripped off in endless CSI-esque crime dramas. It wasn’t actually possible to do that with photographs in 1982, but here in the brave new world of The Future, it is possible to do that to video, to an extent, using the additional information captured in the video’s other frames, plus models trained on other videos that tell you what things are supposed to look like.
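To make the idea concrete, here is a toy sketch of the two ingredients mentioned above: averaging aligned frames reduces noise (the extra information in other frames), and an unsharp mask boosts edges. This is purely illustrative and my own naming; real video enhancement pipelines add motion compensation and learned priors on top of this.

```python
import numpy as np

def enhance(frames, amount=1.0):
    """Toy multi-frame enhancement (illustrative, not a real pipeline).

    Averages a stack of already-aligned grayscale frames to suppress
    noise, then applies an unsharp mask to exaggerate edges.
    """
    stack = np.stack([np.asarray(f, dtype=float) for f in frames])
    avg = stack.mean(axis=0)  # temporal averaging: noise shrinks ~1/sqrt(N)

    # Simple 3x3 box blur built from shifted views of an edge-padded copy.
    padded = np.pad(avg, 1, mode="edge")
    h, w = avg.shape
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

    # Unsharp mask: add back the high-frequency detail the blur removed.
    return avg + amount * (avg - blur)
```

Even this crude version makes noisy footage look noticeably "cleaner" than any single frame, which is the effect I suspect is at work in the Bradley clip, only with far more sophisticated machinery.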
The Bradley video looks like it started as actual drone combat footage, and then went through several rounds of sharpening and enhancement.
I’m not saying this in a “gotcha” or “J’accuse!” sort of way; I’m simply pointing out the current coefficient of friction on the part of the slope we’re on. This is a perfectly acceptable use of AI, to better inform viewers, and I’m betting that 99.9% of such enhanced images are merely making existing footage look better. But right now, not only is the other 0.1% using this technology to fool you, they’re doing active A/B testing to determine exactly the best way to fool you.
Maybe read this post again.