mastodon.world is one of the many independent Mastodon servers you can use to participate in the fediverse.
#imagemagick


Ok, any #video folks out there who know how to do what I want to do? I don't know what words to search for because I don't know what this technique is called. Boosts welcome, suggestions welcome.

I have a pool cleaning robot. Like a roomba, but for the bottom of the pool. We call it poomba. Anyways, I want to shoot an MP4 video with a stationary camera (a GoPro) looking down on the pool while the robot does its work. So I will have this overhead video of like 3-4 hours.

I want to kinda overlay all the frames of the video into a single picture. So the areas where the robot drove will be dark streaks (the robot is black and purple). And any area the robot didn't cover would show the white pool bottom. Areas the robot went over a lot would be darker. Areas it went rarely would be lighter.

I'm just super curious how much coverage I actually get. This thing isn't a roomba. It has no map and it definitely doesn't have an internet connection at the bottom of the pool. (Finally! A place they can't get AI, yet!) It's just using lidar, motion sensors, attitude sensors and some kind of randomizing algorithm.

I think of it like taking every frame of the video and compositing it down with like 0.001 transparency. By the end of the video the things that never changed (the pool itself) would be full brightness and clear. While the robot's paths would be faint, except where it repeated a lot, which would be darker.

I could probably rip it into individual frames using #ffmpeg and then do this compositing with #ImageMagick or something (I'm doing this on #Linux). But 24fps x 3600 seconds/hour x 3 hours == about 260K frames. My laptop will take ages to brute force this. Any more clever ways to do it?

If I knew what this technique/process was called, I'd search for it.
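The effect described above is essentially a synthetic long exposure, usually called temporal averaging or frame stacking: since the pool is static and the robot dark, the per-pixel mean over all frames leaves untouched areas near white and heavily-traversed areas darker. A minimal numpy sketch of the idea, using a streaming mean so no more than one frame is ever held in memory (function names are mine; decoding the MP4 into frames, e.g. with ffmpeg or OpenCV, is assumed and not shown):

```python
import numpy as np

def temporal_mean(frames):
    """Streaming per-pixel mean: acc_i = acc_{i-1} + (f_i - acc_{i-1}) / i.
    Same result as averaging all frames at once, but O(1) memory."""
    acc = None
    for i, frame in enumerate(frames, start=1):
        f = np.asarray(frame, dtype=np.float64)
        acc = f.copy() if acc is None else acc + (f - acc) / i
    return acc

def temporal_min(frames):
    """Darkest value each pixel ever had: a hard coverage map
    (anywhere the dark robot passed at least once stays dark)."""
    acc = None
    for frame in frames:
        f = np.asarray(frame, dtype=np.float64)
        acc = f.copy() if acc is None else np.minimum(acc, f)
    return acc
```

In practice you can probably avoid the 260K frames entirely: ffmpeg's `fps` filter can sample down to one frame per second (roughly 10.8K frames for 3 hours), and ImageMagick's `-evaluate-sequence mean` (or `min` for the hard coverage map) then does the same reduction in a single command.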

Continued thread

As for choosing the donations: each employee at 24ème had 14 tranches of €24 to allocate to the libre projects of their choice. We then pooled them to split up the redundant payments. A very efficient method: in less than half a day, we were able to choose and help 30 projects.

Here is the list of donations: github.com/24eme/banque/blob/m

(2/2)

Restoring a vector image without AI

Here I document an afternoon of toying with ImageMagick and using the properties of GIF in an attempt to cheat myself into a crisp raster image as if it were rendered freshly from a vector graphic.

https://www.sindastra.de/p/3581/restoring-a-vector-image-without-ai


Wanna make a tiny planet from an image with imagemagick? That's easy:

`magick IN.png -distort arc 360 OUT.png`

Wanna blend the 2 sides together? That's MUCH harder........

```shell
in="IN.png"
w="$(magick identify -format '%w' "${in}")"
h="$(magick identify -format '%h' "${in}")"
magick "${in}" -gravity east -crop "$((w/20))x${h}+0+0" \
  \( -size "${h}x$((w/20))" gradient:white-transparent -rotate 90 \) \
  -alpha set -compose xor -composite \
  \( "${in}" -gravity west -crop "$((w/20))x${h}+0+0" \) \
  -compose dst-over -composite \
  \( "${in}" -gravity center -crop 90%x100%+0+0 \) \
  +append -strip tmp.png
magick tmp.png -distort arc 360 OUT.png
rm tmp.png
```

(WARNING: uses `tmp.png` as a temp image)

Still want to try and get rid of the dark banding around the blend but it's at least better than a seam line.

ImageMagick didn't like `-distort` after `+append` for some reason... I think it's some kind of global image offset that `-strip` only fixes during saving?

Today I'm just doing office stuff before I go back to the grind tomorrow. The weather is too lousy to keep working on the outside of the house anyway.

I actually still want to get all my documents into #Paperless, but I'm too unhappy with the quality of the scans. For some reason the 'white tone' is different on every scan, as if I had to do a white balance each time. I've already experimented with #ImageMagick, but haven't gotten reliable results.
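One common fix for a white tone that drifts between scans is to normalize each scan against its own estimated paper white: take a high percentile of the pixel values as "paper" and rescale so it maps to pure white. A minimal numpy sketch of that idea (the function name and the 99th-percentile choice are my assumptions; the right percentile depends on the scanner):

```python
import numpy as np

def normalize_white(gray, percentile=99.0):
    """Rescale a grayscale scan so its estimated paper white becomes 255.
    The high percentile stands in for 'blank paper'; tune it per scanner."""
    g = np.asarray(gray, dtype=np.float64)
    white = np.percentile(g, percentile)  # estimated paper brightness
    if white <= 0:
        return g
    return np.clip(g * (255.0 / white), 0, 255)
```

This is roughly what ImageMagick's `-level` does when you pull the white point in, e.g. `magick scan.png -level 0%,95% out.png`, applied per scan rather than with one fixed setting.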

I had a moment of inspiration and created #ggg, take a look (still #experimental #foss software)

ggg: #guile #scheme #glyph #generator

codeberg.org/jjba23/ggg

Through #svg generation from #lisp we leverage a (wip) #dsl and apply some #math knowledge to build pixel perfect project #markdown / #org badges.

It also scripts #imagemagick to export to #png or #webp .

You can then use the svgs in your #codeberg (or #github) repository #readme for example.

I provide a #guix manifest in the repo

Replied in thread

@masek this output of any use?

```
$ convert IMG_2025-05-18-13420375.jpg -colors 2 +dither -type bilevel \
    -define png:bit-depth=2 -define png:color-type=0 output.png
~/.../pictures/PokemonGO $ identify output.png
output.png PNG 1472x2968 1472x2968+0+0 8-bit Grayscale Gray 4c 39147B 0.000u 0:00.002
~/.../pictures/PokemonGO $ file output.png
output.png: PNG image data, 1472 x 2968, 2-bit grayscale, non-interlaced
```

oddly enough #imagemagick seems to think it's 8-bit while file thinks 2-bit?
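The two tools are most likely reporting different things: `file` reads the bit depth stored in the PNG's IHDR header (2, as requested via `png:bit-depth=2`), while `identify` reports the depth of the decoded image, which ImageMagick promotes to 8 bits per sample in memory. A quick stdlib-only sketch to check what is actually stored in the file (the helper name is mine):

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_ihdr(data: bytes):
    """Parse width, height, and bit depth from a PNG's IHDR chunk.
    Layout: 8-byte signature, 4-byte chunk length, 4-byte type b'IHDR',
    then IHDR data: width(4), height(4), bit depth(1), color type(1), ..."""
    assert data[:8] == PNG_SIG, "not a PNG file"
    assert data[12:16] == b"IHDR", "IHDR must be the first chunk"
    width, height = struct.unpack(">II", data[16:24])
    bit_depth = data[24]
    return width, height, bit_depth
```

Running it over `output.png` should report bit depth 2, matching `file` rather than `identify`.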