#zstd


Today I discovered #Zstandard (ZSTD), an excellent compression format: it is fast, has a great compression ratio, was developed by #Meta, and is released as #opensource
I needed to backup my files because I have to wipe my PC and reinstall #Linux. Now I have a dilemma: choosing between #antiX and #Lubuntu. I have a fairly decent computer, so I could even install a more full-featured OS, but I prefer an OS that doesn’t use too many resources.
#zstd
youtube.com/watch?v=k5XsiuxHv_A


2/2 I also tested the impact of compression levels, and I can say it is not worth the extra CPU to go beyond the default #zstd level of 3. Base64-encoded mail attachments do not compress very well anyway. This way it is fast and reliable.

To move the mailbox to the new server, I just resynced the newly created maildir:username@domain.tld and configured the new Dovecot similarly for compression and access.

Voila!
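The point about base64 attachments is easy to reproduce. A minimal sketch using stdlib zlib as a stand-in (zstd has no stdlib module), with random bytes playing the role of an already-compressed attachment:

```python
import base64
import os
import zlib

raw = os.urandom(200_000)        # stand-in for an already-compressed attachment
b64 = base64.b64encode(raw)      # base64 inflates it by ~33%

# base64 uses only 64 symbols (6 bits of entropy per byte), so a compressor
# claws back much of the 4/3 expansion, but can never beat the raw size
ratio_raw = len(zlib.compress(raw, 9)) / len(raw)
ratio_b64 = len(zlib.compress(b64, 9)) / len(b64)
print(f"raw attachment: {ratio_raw:.2f}")
print(f"base64-encoded: {ratio_b64:.2f}")
```

The base64 text does shrink, but only back toward the size of the underlying data, so the CPU spent on higher levels buys almost nothing.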

Brand new PEP by @emmatyping to add Zstandard to the standard library:
peps.python.org/pep-0784/

Will it make it into 3.14 before the feature freeze on 2025-05-06? It'll be close, but it's possible!

The PEP also suggests namespacing the other compression libraries lzma, bz2 and zlib, with a 10-year deprecation for the old names.

Join the discussion to give your support, suggestions or feedback:

discuss.python.org/t/pep-784-a
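Since the PEP isn't final, code wanting zstd today has to feature-detect. A hedged sketch of what that could look like if PEP 784's `compression.zstd` name lands (the import path is the PEP's proposal, not a shipped API), falling back to the long-standing top-level lzma:

```python
import lzma

# PEP 784 proposes a top-level `compression` package; `compression.zstd`
# would be the new module name (proposed, not yet guaranteed to ship).
try:
    from compression import zstd as _zstd  # Python 3.14+, if the PEP lands
except ImportError:
    _zstd = None

data = b"a repetitive payload " * 10_000

if _zstd is not None:
    packed = _zstd.compress(data)  # proposed API per the PEP text
else:
    packed = lzma.compress(data)   # existing stdlib fallback

print(len(packed) < len(data))  # → True
```

The 10-year deprecation window for the old `lzma`/`bz2`/`zlib` names means the fallback branch would keep working long after the new package arrives.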

#PEP#PEP784#zstd

#zstd 1.5.7 is out and it's honestly quite amazing.

Highlights for me are:

- ~10% faster at small block sizes, common in databases (and filesystems?)
- Limited multi-threading by default. (You could already enable it manually; this only concerns the defaults.)
- A new --max flag that inches zstd closer to #lzma/#xz. We'll need to see more testing on how close exactly but it's impressive they managed to broaden the range this far in a single format.

github.com/facebook/zstd/relea


#Zstandard (aka #zstd) v1.5.7 is out:

github.com/facebook/zstd/relea

"[…] a significant release […] brings enhancements across various domains, including performance, stability, and functionality […]

The compression speed for small data blocks has been notably improved at fast compression levels […]

The --patch-from functionality of the zstd CLI […] v1.5.7 largely mitigates the speed impact of high compression levels 18+ […]

The compression ratio has been enhanced slightly for large data across all compression levels […]"

I love playing around with #compression

In this case, it's all text-based data in csv and xml formats.

Size:

32,696,320 202411.tar
 4,384,020 202411.tar.bz2
 4,015,912 202411.tar.zst
 3,878,583 202411.tar.bz3
 3,730,416 202411.tar.xz

zstd was invoked using zstd --ultra -22
xz was invoked using xz -9e
bzip2 was invoked using bzip2 -9
bzip3 has no compression level options

Speed:

zstd    54.31user 0.25system 0:54.60elapsed 99%CPU
xz      53.80user 0.06system 0:53.93elapsed 99%CPU
bzip2    5.33user 0.01system 0:05.35elapsed 99%CPU
bzip3    3.98user 0.02system 0:04.01elapsed 99%CPU

Maximum memory usage (RSS):

zstd    706,312
xz      300,480
bzip3    75,996
bzip2     7,680

*RSS sampled up to ten times per second during execution of the commands in question

#bzip3 is freaking amazing, yo.

#DataCompression #bzip #bz3 #zstd #zst #zstandard #xz #lzma
#CouldaBeenABlost ;)
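The exact corpus above is private, but the xz and bzip2 ends of the comparison are easy to sanity-check from Python's stdlib bindings. A rough sketch (zstd and bzip3 have no stdlib module, and the synthetic payload below is made up):

```python
import bz2
import lzma
import os
import time

# synthetic stand-in: some incompressible bytes plus repetitive CSV-ish text
data = os.urandom(1_000_000) + b"id,name,value\n1,foo,3.14\n" * 40_000

for name, compress in [
    # preset 9 with the extreme flag mirrors `xz -9e`
    ("xz -9e", lambda d: lzma.compress(d, preset=9 | lzma.PRESET_EXTREME)),
    ("bzip2 -9", lambda d: bz2.compress(d, compresslevel=9)),
]:
    t0 = time.perf_counter()
    out = compress(data)
    print(f"{name:8s} {len(out):>9,} bytes  {time.perf_counter() - t0:.2f}s")
```

On text-heavy input the same trade-off tends to show up: xz grinds longer for a smaller file, bzip2 finishes quickly with a larger one.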

#komprimace #gnu_linux
#Zstd is an incredibly good #opensource compression algorithm; it deserves to be promoted (and used) more.
Folders with web content (HTML, CSS, images, etc.), 220 MB of data in total.
Single-threaded compression to .tar.gz took 8 seconds, result: 147 MB.
Multi-threaded compression to .tar.gz took 1 second (also 147 MB).
Multi-threaded compression to .tar.zst took 1 second, result: 15 MB (!!!)
GNU #gzip doesn't support more than 1 CPU, but #pigz does, which is what I used for the speed comparison.
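pigz gets its speedup by deflating independent chunks on separate cores. The idea (not pigz's actual container layout, and the chunk size here is arbitrary) can be sketched with the stdlib:

```python
import gzip
from concurrent.futures import ProcessPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    # each chunk becomes a standalone gzip member
    return gzip.compress(chunk, compresslevel=6)

def parallel_gzip(data: bytes, chunk_size: int = 128 * 1024) -> list[bytes]:
    """Compress fixed-size chunks in parallel across processes."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(compress_chunk, chunks))

if __name__ == "__main__":
    data = b"<html><body>static site content</body></html>\n" * 100_000
    members = parallel_gzip(data)
    # concatenated gzip members form a valid multi-member .gz stream
    restored = b"".join(gzip.decompress(m) for m in members)
    print(restored == data)  # → True
```

Real pigz is smarter than this sketch — it primes each chunk's dictionary with the tail of the previous one, so chunking costs almost no ratio — but the parallelism is the same basic trick.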


So I added xz and zstd tests to my run. Here are the results for the first test against my backup of "Metal Gear Solid". #ZSTD seems to be pretty stinking crazy for compression.

xz compress time - 6 minutes, 8 seconds
xz decompress time - 35 seconds
xz compressed size - 803 MB

zstd compress time - 5 seconds
zstd decompress time - 2 seconds
zstd compressed size - 874 MB