#pgsql


pgxWrappy: because life is too short for manual scanning in Go! 🚀

If you're tired of endless rows.Scan(), manually unpacking nested structs, and kilometers of boilerplate code, pgxWrappy was made for you! 🚀 Making the inconvenient convenient!

habr.com/ru/articles/931242/

#Go #golang #pgx

Professionally, I'm slowly starting to think about life after my fixed-term contract, i.e. spring or summer 2026.
I'd like to move toward #ClermontFerrand to be closer to people I love.

So if you're in a public service, an association, or a SCOP in that area that is looking (or might be looking) for a geomatics/GIS specialist, don't hesitate to get in touch.

Barman woes on OVH

So, after a cascade of Barman errors and backups corrupted by running out of backup space, I had to pretty much clear out my Barman backup storage.

After resetting everything, however, I was not able to make a full backup, because Barman was not able to receive write-ahead logs from PostgreSQL.

2025-06-04 22:00:21,209 [523687] barman.cli ERROR: [Errno 5] Input/output error: '/backup/barman/pg/wals/00000001000000CD/00000001000000CD00000009.tmp'
See log file for more details.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/barman/cli.py", line 2390, in main
    args.func(args)
  File "/usr/lib/python3/dist-packages/barman/cli.py", line 1600, in archive_wal
    server.archive_wal()
  File "/usr/lib/python3/dist-packages/barman/server.py", line 2651, in archive_wal
    self.backup_manager.archive_wal(verbose)
  File "/usr/lib/python3/dist-packages/barman/backup.py", line 847, in archive_wal
    archiver.archive(verbose)
  File "/usr/lib/python3/dist-packages/barman/wal_archiver.py", line 213, in archive
    self.archive_wal(compressor, wal_info)
  File "/usr/lib/python3/dist-packages/barman/wal_archiver.py", line 356, in archive_wal
    shutil.copystat(src_file, tmp_file)
  File "/usr/lib/python3.12/shutil.py", line 388, in copystat
    _copyxattr(src, dst, follow_symlinks=follow)
  File "/usr/lib/python3.12/shutil.py", line 338, in _copyxattr
    os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
OSError: [Errno 5] Input/output error: '/backup/barman/pg/wals/00000001000000CD/00000001000000CD00000009.tmp'

Input/output error? That’s odd, but the stack trace tells us a lot: from the function name (os.setxattr) we can deduce that Barman is trying to set extended attributes (xattrs) on our WAL files. The underlying storage for my backups is OVH’s Backup Storage, accessible over NFS. And NFS, for most of its life, could not support xattrs even when the underlying filesystem does; support was only added in NFS 4.2, while OVH (still) uses 4.1.

So, how to fix this?

Initially, I thought of downgrading Barman, because things had worked before. But that did not help, so I had to go digging into the source code (which was painful, as I am not a Python guy).

    # Perform the real filesystem operation with the xlogdb lock taken.
    # This makes the operation atomic from the xlogdb file POV
    with self.server.xlogdb("a") as fxlogdb:
        # If the content has changed, it means the file was either compressed
        # or encrypted or both. In this case, we need to update its metadata
        if content_changed:
            shutil.copystat(src_file, current_file)
            stat = os.stat(current_file)
            wal_info.size = stat.st_size

So, if content_changed is true, Barman calls copystat from shutil, which copies permission bits, timestamps and, crucially, xattrs from the original file.
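To see concretely what copystat touches, here's a small standalone sketch (nothing Barman-specific; the file names are invented):

```python
import os
import shutil
import stat
import tempfile

# Minimal demonstration of what shutil.copystat() does: it copies
# permission bits, timestamps and, where the filesystem supports them,
# extended attributes (xattrs) from src to dst. The internal
# xattr-copying step is what raises OSError: [Errno 5] on an NFS 4.1
# mount, as seen in the traceback above.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "src.wal")
    dst = os.path.join(d, "dst.wal")
    for path in (src, dst):
        with open(path, "w") as f:
            f.write("data")
    os.chmod(src, 0o640)          # give src distinctive permissions
    os.utime(src, (1000, 2000))   # ...and a distinctive mtime (2000)
    shutil.copystat(src, dst)     # copy mode/times/xattrs onto dst
    st = os.stat(dst)
    copied_mode = stat.S_IMODE(st.st_mode)
    copied_mtime = int(st.st_mtime)

print(oct(copied_mode), copied_mtime)  # dst's mode and mtime now match src
```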

    # If the bits of the file has changed e.g. due to compression or encryption
    content_changed = False
    # Compress the file if not already compressed
    if compressor and not wal_info.compression:
        compressor.compress(src_file, tmp_file)
        files_to_remove.append(current_file)
        current_file = tmp_file
        content_changed = True
        wal_info.compression = compressor.compression
    # Encrypt the file
    if encryption:
        encrypted_file = encryption.encrypt(current_file, dst_dir)
        files_to_remove.append(current_file)
        current_file = encrypted_file
        wal_info.encryption = encryption.NAME
        content_changed = True

Ah, so therein lies the rub: Barman only does this when the content changed, that is, when the file was compressed or encrypted. And it just so happens that I had enabled GZIP compression so as not to run out of space again. So I have to manage space the old-fashioned way instead (by lowering the retention policy): after disabling compression, Barman was able to make backups again.
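For reference, this is roughly what the relevant knobs look like in a Barman server config; the server name, connection details and the 7-day window below are invented examples, not my actual setup:

```ini
; /etc/barman.d/pg.conf -- hypothetical example
[pg]
description = "Main PostgreSQL server"
conninfo = host=db.example.com user=barman dbname=postgres

; Compression makes Barman run shutil.copystat() on every archived WAL,
; which fails with EIO on an xattr-less NFS 4.1 mount, so it stays off:
; compression = gzip

; Reclaim space the old-fashioned way instead:
retention_policy = RECOVERY WINDOW OF 7 DAYS
```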

Hope this helps someone, because it sure as fuck would’ve helped me.

Continued thread

With enough search iterations, I was able to find enough references to cobble together what I needed. So, now I have my minimally-permissioned query-user.

Told the person who submitted the automation request, "hey: I worked out creating a minimally-permissioned #PGSQL user to support this effort. I wrote it up and it's available at ".

They responded, "oh. We already have a query-user. It's ".

Curious to see if I simply re-did what they did, I fired up a permissions-query:

psql -c "SELECT * FROM information_schema.role_table_grants WHERE grantee = '';"

For each: my user had 40 grants, all at just the SELECT level. Their user had nearly 170 grants at various levels. Probably going to recommend that they go with my user, since it's sufficient for the tasks they're wanting to accomplish but has significantly fewer grants associated with it (and I'm the kind of person who errs towards the "smaller permission-sets are better" #security posture).
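For comparison, a sketch of what a minimally-permissioned, SELECT-only user can look like; the role, database, schema and table names here are invented, since the actual write-up isn't quoted in the thread:

```sql
-- Hypothetical minimal read-only user; all names are examples.
CREATE ROLE query_user LOGIN PASSWORD 'change-me';

-- Start from nothing, then grant exactly what the automation needs.
REVOKE ALL ON DATABASE appdb FROM query_user;
GRANT CONNECT ON DATABASE appdb TO query_user;
GRANT USAGE ON SCHEMA app TO query_user;
GRANT SELECT ON app.orders, app.customers TO query_user;

-- Verify with the same permissions-query as above:
SELECT grantee, table_name, privilege_type
  FROM information_schema.role_table_grants
 WHERE grantee = 'query_user';
```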

How to add or extend a disk in Linux

Run out of space on a virtual machine? No problem, it's a VM: we'll grow the disk. Don't want to keep files on the main disk? No problem, it's a VM: we'll add a new disk. That's all well and good, but the VM won't configure the hardware by itself. That is, the virtual machine does provide access to the hardware, but it's the operating system, not the hypervisor, that makes those hardware resources available to programs. This guide covers how to configure the operating system (Ubuntu) to manage the new hardware (in our case, disk space).

habr.com/ru/articles/871230/


Hey PostgreSQL users, is there a way to get the *expanded* IPv6 text representation of a v6 inet or cidr data type? Pg does a lovely job of compressing it correctly, but I need it uncompressed.
(Why: ".ip6.arpa", sadly. I'm doing DNS work.)

Exporting to a file and running a standalone utility is _not_ a viable solution in this scenario.

Feel free to boost for visibility.
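Not an answer inside Pg, but for reference, Python's stdlib shows exactly what the expanded form should look like, and could in principle be wrapped in a PL/Python function if the plpython3u extension is available (an assumption, not something stated in the thread):

```python
import ipaddress

# The .exploded property gives the fully uncompressed IPv6 text form
# that .ip6.arpa reverse-DNS work needs, undoing the compression that
# Pg's inet/cidr output applies.
def expand_ipv6(addr: str) -> str:
    """Return the fully expanded (uncompressed) IPv6 representation."""
    return ipaddress.IPv6Address(addr).exploded

print(expand_ipv6("2001:db8::1"))
# -> 2001:0db8:0000:0000:0000:0000:0000:0001
```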

#pg #pgsql #postgres

Top Five #PostgreSQL Surprises from #Rails Developers, by Andrew Atkinson
medium.com/pragmatic-programme

1. Covering indexes and how to use them

2. PostgreSQL data storage in pages and the relationship to query performance

3. Topics related to storing and accessing ordered data

4. Why SELECT * is not optimal, and how to enumerate all table columns in queries to help spot opportunities to reduce the columns

5. Using PostgreSQL for more types of work
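As a sketch of the first item, a covering index stores extra columns in the index leaf pages (via INCLUDE) so a matching query can be answered with an index-only scan; the table and column names here are invented:

```sql
-- Hypothetical table; the INCLUDE columns ride along in the index
-- without being part of the search key.
CREATE INDEX idx_orders_customer
    ON orders (customer_id)
    INCLUDE (status, total_cents);

-- With a recently vacuumed table, this can be an index-only scan:
SELECT status, total_cents
  FROM orders
 WHERE customer_id = 42;
```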

The Pragmatic Programmers · Top Five PostgreSQL Surprises from Rails Developers, by Andrew Atkinson

I must admit the last time I used Veeam was back in the days when I created a vCloud product at a hosting company, on ESXi 3 if I remember correctly. I was happy to see that it's still pretty straightforward, and also happy to see the #PGSQL integration.

I can't say anything regarding Avamar, never used it :)

How to implement and optimize UPSERT in Greenplum 6

Hi! My name is Anton Vasilyev, I work as a technical support engineer at Arenadata, and I regularly run into rather tricky problems and bugs. One of them was optimizing the UPSERT mechanism in Greenplum 6. In this article I want to describe how that problem can be solved.

habr.com/ru/companies/arenadat
