Ok, any #video folks out there who know how to do what I want to do? I don't know what words to search for because I don't know what this technique is called. Boosts welcome, suggestions welcome.
I have a pool-cleaning robot. Like a Roomba, but for the bottom of the pool. We call it poomba. Anyways, I want to shoot an MP4 video with a stationary camera (a GoPro) looking down on the pool while the robot does its work. So I will have this overhead video of like 3-4 hours.
I want to kinda overlay all the frames of the video into a single picture. The areas where the robot drove would be dark streaks (the robot is black and purple), and any area the robot didn't cover would show the white pool bottom. Areas the robot went over a lot would be darker; areas it rarely visited would be lighter.
I'm just super curious how much coverage I actually get. This thing isn't a Roomba. It has no map and it definitely doesn't have an internet connection at the bottom of the pool. (Finally! A place they can't get AI, yet!) It's just using lidar, motion sensors, attitude sensors, and some kind of randomizing algorithm.
I think of it like taking every frame of the video and compositing it down at something like 0.001 opacity. By the end of the video, the things that never changed (the pool itself) would be full brightness and clear, while the robot's paths would be faint, except where it repeated a lot, which would be darker. In other words, roughly a per-pixel average of all the frames.
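Here's a toy sketch of the accumulation I have in mind (assuming Python + NumPy; the random 4x4 "frames" are just stand-ins for real video frames):

import numpy as np

# 100 fake 4x4 grayscale "frames" standing in for the real video
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(100)]

acc = np.zeros((4, 4), dtype=np.float64)  # float accumulator, no uint8 overflow
for n, frame in enumerate(frames, start=1):
    acc += (frame - acc) / n  # running mean of frames 1..n, one frame in memory at a time

# Same result as averaging everything at once:
assert np.allclose(acc, np.mean(frames, axis=0))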
I could probably rip it into individual frames using #ffmpeg and then do this compositing with #ImageMagick or something (I'm doing this on #Linux). But 24fps x 3600 seconds/hour x 3 hours == about 260K frames. My laptop will take ages to brute force this. Any more clever ways to do it?
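One idea for skipping the 260K intermediate files entirely (an untested sketch; assumes ffmpeg/ffprobe are installed plus the numpy and Pillow Python packages, and "pool.mp4" / "coverage.png" are placeholder names): have ffmpeg decode straight into a pipe and keep only a running per-pixel sum. Downsampling to 1 fps first cuts it to ~11K frames, and the streaks should still show.

#!/usr/bin/env python3
# Untested sketch: stream-decode the video with ffmpeg and keep a per-pixel
# running sum, so no intermediate frame files ever hit the disk.
import subprocess
import numpy as np
from PIL import Image

SRC = "pool.mp4"  # placeholder input filename

# Ask ffprobe for the frame size so we can carve up the raw byte stream.
w, h = map(int, subprocess.check_output(
    ["ffprobe", "-v", "error", "-select_streams", "v:0",
     "-show_entries", "stream=width,height", "-of", "csv=p=0", SRC]
).decode().strip().split(","))

# Decode to raw RGB frames on stdout; fps=1 keeps one frame per second
# (drop the -vf to average every single frame instead).
dec = subprocess.Popen(
    ["ffmpeg", "-v", "error", "-i", SRC, "-vf", "fps=1",
     "-f", "rawvideo", "-pix_fmt", "rgb24", "-"],
    stdout=subprocess.PIPE)

frame_bytes = w * h * 3
acc = np.zeros((h, w, 3), dtype=np.float64)  # float sum, no uint8 overflow
n = 0
while True:
    buf = dec.stdout.read(frame_bytes)
    if len(buf) < frame_bytes:
        break
    acc += np.frombuffer(buf, dtype=np.uint8).reshape(h, w, 3)
    n += 1

Image.fromarray((acc / max(n, 1)).astype(np.uint8)).save("coverage.png")
print(f"averaged {n} frames -> coverage.png")

Swapping the mean for a per-pixel minimum (np.minimum of the accumulator and each frame instead of a sum) would instead keep the darkest value ever seen at each pixel, which gives a hard covered/not-covered map, since the robot is darker than the pool bottom.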
If I knew what this technique/process was called, I'd search for it.