mastodon.world is one of the many independent Mastodon servers you can use to participate in the fediverse.
Generic Mastodon server for anyone to use.

Replied in thread
@Gemma ⭐🔰🇺🇸 🇵🇭 🎐 I do 1 and 3.

1 to such an extent that the actual alt-text only contains a short description, where "short" means anything between ca. 900 and ca. 1,400 characters. The long description goes into the post, and it regularly measures several tens of thousands of characters. Also, I don't describe what's in the image as I can see it in the image; I describe what's in the image as I can see it at the place where the image was made, i.e. at an almost infinitely higher resolution and, if need be, with the ability to look around obstacles.

Someone somewhere out there might be interested in these details and, at the same time, consider it lazy or maybe even ableist to make them ask for further descriptions.

What I no longer do, however, is describe images within my image in more detail than is visible at the place where I've taken the image. In one of my last image descriptions, I would otherwise have had to describe not only multiple images in my image, but dozens of images in one image in my image, and probably even more images in those images.

3 to such an extent that I even transcribe text that's unreadable in the image, but that I can read at the place where the image was made. Also, I once had a sign (unreadable, of course) in English, French and rather broken German. I transcribed all three languages character by character, and I translated the French and the German text into English right after transcribing each of them. Another reason why my long image descriptions are so long. This irritates screen readers because they can't switch languages mid-text, but if 100% verbatim transcripts are the rule, then so be it.

The only thing I no longer do in this regard is transcribe all-caps as all-caps, because screen readers may misinterpret them and spell them out letter by letter. Also, I don't transcribe Roman numerals as such.
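That last habit can be sketched as a tiny normalisation step, here in Python. The function name and the two-letter threshold are my own assumptions for illustration, not anything the posts above prescribe:

```python
import re

def soften_all_caps(text: str) -> str:
    """Rewrite fully upper-case words as capitalised words so that
    screen readers read them as words instead of spelling them out."""
    return re.sub(r"\b[A-Z]{2,}\b", lambda m: m.group(0).capitalize(), text)
```

For example, `soften_all_caps("NO PARKING")` yields `"No Parking"` while leaving mixed-case text alone.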

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Transcripts
hub.netzgemeinde.eu · Netzgemeinde/Hubzilla
Replied in thread
@Baranduin
Oh, does this feel like my inner monologue when I post a photo. It's a bummer that it sometimes keeps me from posting more photos, but I hope it nudges me a little towards quality over quantity.

I actually keep entire categories of things out of my images because I can't describe them up to my own standards. This includes realistic buildings. I would first have to look up loads of architectural terms to describe all the details of a building, and then I would have to explain each and every one of these terms in a way that lets absolute laypeople understand my image description without ever having to ask me anything or look anything up themselves.

The last time I posted an image with a building was this post. I had actually gone around looking for a nice motif for a new image post for quite a while. There was one harbour scene which I thought looked spectacular enough to show, but which was impossible to describe. So I fell back on this motif. I thought it was not too bland, not too simple and at the same time not too complex. Besides, the one building in the image is totally unrealistic and lacks all the tiny details that would make up a realistic building.

And then I ended up taking some 30 hours over two days to describe the image in over 60,000 characters. The building alone took up some 40,000 of them. This is still the longest image description in the whole Fediverse, I think. Here is the image description log thread.

My last image post before that was this one with still over 25,000 characters of description for one image, and I consider it outdated slop.

It was the last time that I described an image in my image with more details than visible in the original of that image itself. And that's where I got sloppy. I completely forgot to transcribe what's written on the license plate above the office door of the motel in that image in my image. And I couldn't be bothered to give detailed descriptions of the two 1957 Chevy Bel Airs parked in front of the motel because I really wanted to get that description done. In the actual image, all of this is sub-pixel-sized. You wouldn't know it's even there if I didn't mention it. I did describe the motel, but it's a fairly simple building, and I decided against describing what's visible through the windows with open blinds from the camera angle in the image in my image.

In the next image, the one with 60,000+ characters of description, I stopped describing images in the image beyond what I can see in the place where the image itself was taken. That was because one image is a destination preview image on a teleporter. The destination is a kind of teleport hub. The preview actually (if only barely so) shows over 300 single-destination teleporters, a few dozen of them with their own preview images.

So I teleported to that hub to describe it in detail. And I looked at the teleporters and their preview images. As it turned out, not only do pretty much all of these preview images have text in them, and not exactly little of it, but some of them actually contain images within themselves again.

I would have had to describe that image in my image, dozens of images in that image in my image and a number of images in these images in that image in my image. For each of the latter, I would have had to teleport three times from the place that I originally wanted to describe. I would also have had a whole lot more text to transcribe. All on a sub-pixel scale several times over.

Not only would that have been a humongous task, but more importantly, it would have inflated my image description and my whole post to more than 100,000 characters. Mastodon would probably have rejected my post for being too long. And this would have rendered the whole effort futile. In the few places in the Fediverse that would still have accepted my post, nobody cares for image descriptions.

AI certainly can't get inside my brain well enough to write accurate descriptions. And even if it could, would I let it? Hmmm.

I've only used AI to describe images twice. And in both cases, that was to show just how bad AI is at describing images about an extremely obscure and quickly changing niche topic at the level of accuracy and detail which I deem necessary for that topic.

I guess one problem that you're facing is that next to nobody in the Fediverse can even grasp what you're thinking about, what you're taking into consideration for your image descriptions. That's why you got next to no feedback upon your first comment in this thread.

I have one advantage here: What you're pondering, I have actually done. If I feel like people won't understand what I'm thinking about, I point them at one or several of my actual image posts, and/or I post a quote from one of my actual image descriptions. Still, almost nobody actually goes and reads through any of my image descriptions, but I guess they get the gist, especially when I post snippets from my actual descriptions.

CC: @Icarosity

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
hub.netzgemeinde.eu · Universal Campus: The mother of all mega-regions. OpenSim's famous Universal Campus and a picture of its main building; CW: long (62,514 characters, including 1,747 characters of actual post text and 60,553 characters of image description)
Replied in thread
@ScotsBear 🏴󠁧󠁢󠁳󠁣󠁴󠁿 Just for me to be on the safe side: What are your minimum requirements for alt-texts and image descriptions so you refrain from sanctioning a user?

Full, to-the-point adherence to the Accessible Social guidelines, the Cooper Hewitt guidelines, Veronica With Four Eyes' various guidelines etc., even though they contradict each other?

Do you demand image descriptions be detailed and informative enough so that nobody will ever have to ask the poster about explanations and/or details because they're all already in the descriptions, no matter how niche and obscure the content of the image is?

If there is already a lengthy image description in the post itself (imagine all character limits you know in the Fediverse; it's longer than all of them by magnitudes), do you still demand there be another description in the alt-text, even though the alt-text actually points the user to the description in the post, because there absolutely must be a sufficiently detailed and accurate image description in the alt-text, full stop?

In fact, do you sanction image descriptions in general or alt-texts in particular if you think they are too long? For example, if you stumble upon an image post from me that has a "short" image description of 1,400 characters in the alt-text and a "long" image description of over 60,000 characters in the post itself (and I've actually posted such a thing into the Fediverse; here's the link to the source), will you demand I discard two days and some 30 hours of work, delete the long description and cut the short description down to no more than 200 characters? Maybe even while still retaining the same amount of information? Lest you have me dogpiled and mass-blocked or worse?

By the way, I think I've gathered a whole lot of experience and knowledge about describing images generally and specifically for the Fediverse, and I also see the high level of detail in my image descriptions as fully justified.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
www.accessible-social.com · Writing Image Descriptions. Tips on how to write effective image descriptions to make visuals accessible.
Replied in thread
@Icarosity It's similar for me, only that I always put a gigantic effort into describing my own images twice, once not exactly briefly in the alt-text and once with even more details in the post itself. Sometimes I find an interesting motif, but when I start thinking about how to describe it, I don't even render an image, because it isn't worth doing so if I can't post it.

I haven't posted a new image in almost a year. In fact, I've got a series of fairly simple images for which I started writing the descriptions late last year, and I'm still not done. So much for "it only takes a few seconds".

Before someone suggests I could use Altbot: I'm not even sure if it'll work with Hubzilla posts. And besides, no AI on this planet is fit for the task of properly, appropriately and accurately describing the kind of images that I post.

@Baranduin And then there's me who has managed to describe one image in a bit over ten thousand words last year. Good thing I have a post character limit of over 16.7 million. And I actually limited myself this time: I did not describe images within my image in detail, in stark contrast to about two years ago when I described a barely visible image in an image in well over 4,000 characters of its own, and that wasn't the only image within that image that I described.

CC: @Logan 5 and 999 others

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Mastodon · Icarosity (@nancywisser@mastodon.social) · 5.71K Posts, 72 Following, 463 Followers · mostly harmless
Observer: always looking and curious about overlooked things, especially plants, especially native plants. I take a lot of pictures. I have a cat and I grow slipper orchids. Oh yeah also—I’m an old
Just a visitor here—Tumblr is my home and there I am geopsych 
death trap clad happily
Replied in thread
@Logan 5 and 999 others First of all: You must never put line breaks into alt-text. Ever. (https://www.tpgi.com/short-note-on-coding-alt-text/, https://joinfediverse.wiki/Draft:Captions#Line_breaks)
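Since the alt attribute is a single flat string, any line breaks pasted into it are at best collapsed and at worst read out strangely, which is what the linked TPGi note is about. A minimal pre-posting cleanup step might look like this in Python; the function name is my own invention, not part of any client:

```python
import re

def flatten_alt_text(alt: str) -> str:
    """Replace line breaks and runs of whitespace with single spaces,
    because the HTML alt attribute is one flat string anyway."""
    return re.sub(r"\s+", " ", alt).strip()
```

For example, `flatten_alt_text("First line.\nSecond line.")` yields `"First line. Second line."`.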

Besides, that will certainly not be the day that I'll post my first image after more than a year.

It's tedious enough to properly describe my original images at the necessary level of detail, and one image takes me many hours to describe, sometimes up to two full days, morning to evening. Not joking here. I certainly won't put extra effort into turning at least the 900 characters of "short" description that go into the alt-text into a poem. And I definitely will not also turn the additional 20,000, 40,000, 60,000 characters of long description that go into the post into a poem as well. (And yes, I can post 60,000+ characters in one go, and I have done so in the past. My character limit is 16,777,215.)

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
TPGi · Short note on coding alt text. The other day, in relation to a github comment, I was asked by my friend Mike[tm]Smith "Can alt have line breaks in it or does that do weird things to...
Replied in thread
@Georgiana Brummell Isn't the A2I post quite a bit outdated?

Some two years ago, I read about screen readers not supporting more than 200 characters of alt-text. But people who actually use screen readers told me that all available screen reader software has long since been upgraded to support an unlimited number of characters, and that next to nobody uses old versions with a 200-character limit anymore.

And now I often see posts and articles, even recent ones, mention a hard limit of 125 characters for alt-text in screen readers. This must actually be leftover information from the mid-2010s at best.

Case in point: I've never seen anyone in the Fediverse being criticised for what would be absolutely excessively long alt-text by Web design standards. Proof enough that screen readers can easily handle 800 or 1,000 or more characters of alt-text.

As far as I'm informed, the only issue is that screen readers cannot navigate alt-texts, i.e. you cannot rewind to a certain point within an alt-text and have it re-read from there. You can only jump back to the beginning of the alt-text and have the whole alt-text re-read. The longer an alt-text is, the less convenient this is.

By the way: I've started working on an entire wiki on how to describe images properly and write image descriptions in general and alt-texts specifically for the Fediverse. It will take quite a number of existing guides and how-tos and the like into consideration and link to them. It will also take both Mastodon's culture and the special perks of the various places in the Fediverse outside of Mastodon into consideration. When guides contradict each other, I'll mention that as well.

It has to be a wiki because it will contain so much information that it simply wouldn't fit onto one page anymore. Also, I want to be able to point people at certain aspects of describing images or writing alt-texts, such as how colours should be described, why people's races should never be mentioned and why explanations do not belong in alt-texts. I don't want to tell them to scroll down to a certain paragraph. I want to show them one page that specialises in that particular topic.

I'm not sure if that's utter overkill, if that'll stand in the way of just "doing it" and actually drive people away from describing images. But in my opinion, someone has to tell people how to do it properly.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #A11y #Accessibility
Replied in thread

@naturebystu
#alternativeText #imageDescription #inclusion
Currently I can't see any alt text on this image.
#Accessibility is a virtue on Mastodon. There are quite a few #blind or visually impaired people who use this medium with their screen readers for precisely this reason. There are also quite a few users who neither favourite nor boost posts with media that lack #alttext. There are even users who use CSS to filter out all media without alt text.
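That kind of filtering boils down to checking each attachment's description field in the post data. Here is a hypothetical sketch in Python, operating on a plain dict shaped like the status JSON a Mastodon-compatible API returns; the function name is my own:

```python
def undescribed_media(status: dict) -> list:
    """Return the media attachments of a status whose alt text
    (the 'description' field) is missing or empty."""
    return [
        m for m in status.get("media_attachments", [])
        if not (m.get("description") or "").strip()
    ]
```

A client or filter script could then hide or skip any status for which `undescribed_media(status)` is non-empty.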

@yuriposting so pretty,,

Image description: It’s a digital illustration of a cute noontime scene featuring a lady wearing a white gown and a maid in a long black dress dozing off in a lavish victorian style room. The picture is watermarked “Illust by Rrrrrrice.”

A draft blows from the window, sending the curtains billowing toward the lady and her maid with sheer fabric fingers that wrap around them, carrying with it the scent of hickory and currants. Warm rays of sunlight filter in. Having finished reading a book for now, the lady rests in an olive-colored high-backed chair with a brown tasseled blanket and cozy white pillow. Her maid, brunette with her hair kept in a bun, is doubled over on a red carpet and pressed against the lady's legs. She lays her head in her lady's lap and allows their fingers to intertwine. There is no other option in this situation—this is what must occur.

Precious metal and ceramic furnishings fill the room: china, a clock, and vases resting on the mantel. To the side of the chair is a table holding a porcelain lamp with a tan tasseled shade. On the wall are some candlesticks in gold holders and, of course, there are several portraits on the wall, presumably other members of the house, dressed in fancy clothes, which are held in gold frames.

Overall it has a serene but very wealthy aesthetic to it. Stylistically it has a lot of big blocks of color with most of the detailed rendering concentrated on the focal points like the lady’s face and cascading blonde hair.

#ImageDescription #AltText #Alt4You

RE: https://sakurajima.moe/@yuriposting/114700242549863247

Danbooru tags: 2girls apron artist_name black_dress blanket blonde_hair book brown_hair candle candlestand chair closed_eyes closed_mouth curtains dress fireplace hair_bun highres indoors kneeling long_hair maid maid_apron master_and_servant multiple_girls open_book original painting_(object) picture_frame puffy_sleeves reaching rrr_(reason) scrunchie sitting sleeping table vase victorian victorian_maid waist_apron white_apron white_dress white_scrunchie yuri
Sakurajima (桜島) · Yuri Posting (@yuriposting@sakurajima.moe) · Attached: 1 image. Artist: rrr (reason). Media: original. Source: https://www.pixiv.net/en/artworks/121425842
Replied in thread
@nihilistic_capybara I don't know what they expect. Also, I hardly ever get any feedback for my image descriptions unless I explicitly ask someone for it.

But I've actually asked blind or visually impaired users a few times, and on the few occasions that they actually answered, they said that this amount of description is okay.

After all, the limitations in navigating alt-text with a screen reader only apply to actual alt-text "underneath" an image. They do not apply to image descriptions in the post which can be navigated like the rest of the post text.

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Replied in thread
@nihilistic_capybara
"The description you have given is a meter long and frankly (again please forgive my ignorance I know nothing about the blind and how they navigate the web) contains too much details to the point where using a screen reader to listen to this turns into a very boring podcast."

Someone somewhere out there might be interested in all these details.

Allow me to elaborate: My original pictures are renderings from very obscure 3-D virtual worlds. You may find them boring. Many others may find them boring.

But someone somewhere out there might be interested. Intrigued. Excited even.

They've put high hopes into "the metaverse" as in 3-D virtual worlds. All they've read about so far is a) Meta Horizon failing and b) otherwise only announcements, often with AI-generated images as illustrations. Just before they saw my image, they thought that 3-D virtual worlds were dead.

But then they see my image. Not an AI picture, but an actual rendering from inside an actual 3-D virtual world! One that exists right now! It has users! It's alive! I mean, it has to have users because I have to be one to show images from inside these worlds.

They're on the edge of their seat in excitement.

Do you think they only look at what they think is important in the image? Do you think they only look at what I think is important in the image?

Hell, no! They'll go on a journey through a whole new universe! Or at least what little of it they can see through my image. In other words, they take in all the big and small details.

If they're sighted.

Now, here is where accessibility and inclusion come into play. What do accessibility and inclusion mean? They mean that someone who is disabled must have the same chances to do all the same things and experience all the same things in all the same ways as someone without their disability. Not giving them these chances is ableist.

Okay, so what if that someone is blind? In this case, accessibility and inclusion mean that this someone must have the very same opportunity to take in all the big and small details as someone who has perfect eyesight.

But if I only describe my images in 200 characters, they can't do that. Where are they supposed to get the necessary information to experience my image like someone sighted?

They can only get this information if I give it to them. If I describe my image in all details.

And that's why I describe my original images in all details.

"And stuff like the text not being legible. I don't know how you read that text cause I am unable to read it as well."

Again: I don't look at the image. I look at the real thing. The world itself. Like so:

  • I start my Firestorm Viewer.
  • I log one of my avatars in.
  • I teleport to the place where I've rendered the image.
  • If I want to read a sign, I move the camera closer to the sign. If necessary, reaaaaaally close. (I can move the camera along three axes and rotate it around two axes independently of the avatar.)
  • What's a speck of 4x3 pixels in the image unfolds before me as a 1024x768-pixel texture with three lines of text on it. In fact, I could move the camera so close to at least some surfaces that I could clearly see the individual pixels on the textures if anti-aliasing is off.
  • Not only can I easily transcribe that text, I can often even identify or at least describe the typeface.

This gives me superpowers in comparison to those who describe images only by looking at the images. For example, if there's something standing in front of a sign, partially obstructing it, I can look around that obstacle.

Imagine you're outside, taking a photo with your phone, and you want to post it on Mastodon. There's a poster on a wall somewhere in that image with text on it, but it's so small in the image that you can't read it.

Now you can say the text is too small, you can't read it, so you can't transcribe it.

Or, guess what, you can walk up close to that poster and read the text right on the poster itself.

#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Replied in thread
@nihilistic_capybara Yes. As a matter of fact, I've had an AI describe an image after describing it myself twice already. And I've always analysed the AI-generated description of the image from the point of view of someone who a) is very knowledgeable about these worlds in general and that very place in particular, b) has knowledge about the setting in the image which is not available anywhere on the Web because only he has this knowledge and c) can see much much more directly in-world than the AI can see in the scaled-down image.

So here's an example.

This was my first comparison thread. It may not look like it because it clearly isn't on Mastodon (at least I guess it's clear that this is not Mastodon), but it's still in the Fediverse, and it was sent to a whole number of Mastodon instances. Unfortunately, as I didn't have any followers on layer8.space when I posted this, and still don't, the post is not available on layer8.space. So you have to see it at the source in your Web browser rather than in your Mastodon app or otherwise on your Mastodon timeline.

(Caution ahead: By my current standards, the image descriptions are outdated. Also, the explanations are not entirely accurate.)

If you open the link, you'll see a post with a title, a summary and "View article" below. This works like Mastodon CWs because it's the exact same technology. Click or tap "View article" to see the full post. Warning: As the summary/CW indicates, it's very long.

You'll see a bit of introduction post text, then the image with an alt-text that's actually short for my standards (on Mastodon, the image wouldn't be in the post, but below the post as a file attachment), then some more post text with the AI-generated image description and finally an additional long image description which is longer than 50 standard Mastodon toots. I've first used the same image, largely the same alt-text and the same long description in this post.

Scroll further down, and you'll get to a comment in which I pick the AI description apart and analyse it for accuracy and detail level.

For your convenience, here are some points where the AI failed:

  • The AI did not clearly identify the image as coming from a virtual world. It remained vague. In particular, it did not recognise the location as the central crossing at BlackWhite Castle in Pangea Grid, much less explain what either is. (Then again, explanations do not belong in alt-text. But when I posted the image, BlackWhite Castle had been online for two or three weeks and advertised on the Web for about as long.)
  • It failed to mention that the image is greyscale. That is, it actually failed to recognise that it isn't the image that's greyscale, but both the avatar and the entire scenery.
  • It referred to my avatar as a "character" and not an avatar.
  • It failed to recognise the avatar as my avatar.
  • It did not describe at all what my avatar looks like.
  • It hallucinated about what my avatar is looking at. Allegedly, my avatar is looking at the advertising board towards the right. Actually, my avatar is looking at the cliff in the background, which the AI does not mention at all. The AI couldn't possibly have seen my avatar's eyeballs from behind (and yes, they can move within the head).
  • It did not describe anything about the advertising board, especially not what's on it.
  • It did not know whether what it thinks my avatar is looking at is a sign or an information board, so it was still vague.
  • It hallucinated about a forest with a dense canopy. Actually, there are only a few trees, there is no canopy, the tops of the trees closer to the camera are not within the image, and the AI was confused by the mountain and the little bit of sky in the background.
  • The AI misjudged the lighting and hallucinated about the time of day, also because it doesn't know where the avatar and the camera are oriented.
  • It used the attributes "calm and serene" on something that's inspired by German black-and-white Edgar Wallace thrillers from the 1950s and the 1960s. It had no idea what's going on.
  • It did not mention a single bit of text in the image. Instead, it should have transcribed all of them verbatim. All of them. Legible in the image at the given resolution or not. (Granted, I myself forgot to transcribe a few little things in the image on the advertisement for the motel on the advertising board such as the license plate above the office door as well as the bits of text on the old map on the same board. But I didn't have any source for the map with a higher resolution, so I didn't give a detailed description of the map at all, and the text on it was illegible even to me.)
  • It did not mention that strange illuminated object towards the right at all. I'd expect a good AI to correctly identify it as an OpenSimWorld beacon, describe what it looks like, transcribe all text on it verbatim and, if asked for it, explain what it is, what it does and what it's there for in a way that everyone will understand. All 100% accurately.

CC: @🅰🅻🅸🅲🅴  (🌈🦄)

#Long #LongPost #CWLong #CWLongPost #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLM #AIVsHuman #HumanVsAI
hub.netzgemeinde.eu · LLaVA vs my own image description. How an image description by LLaVA AI compares to an image description hand-written by myself; CW: long (almost 29,000 characters, including one long image description of over 25,000 characters), Fediverse meta, image description meta, image of monochrome motif
Replied in thread
@nihilistic_capybara LLMs aren't omniscient, and they will never be.

If I make a picture on a sim in an OpenSim-based grid (that's a 3-D virtual world) which has only been started up for the first time 10 minutes ago, and which the WWW knows exactly zilch about, and I feed that picture to an LLM, I do not think the LLM will correctly pinpoint the place where the image was taken. It will not be able to correctly say that the picture was taken at <Place> on <Sim> in <Grid>, and then explain that <Grid> is a 3-D virtual world, a so-called grid, based on the virtual world server software OpenSimulator, and carry on explaining what OpenSim is, why a grid is called a grid, what a region is and what a sim is. But I can do that.

If there's a sign with three lines of text on it somewhere within the borders of the image, but it's so tiny at the resolution of the image that it's only a few dozen pixels altogether, then no LLM will be able to correctly transcribe the three lines of text verbatim. It probably won't even be able to identify the sign as a sign. But I can do that by reading the sign not in the image, but directly in-world.

By the way: All my original images are from within OpenSim grids. I've probably put more thought into describing images from virtual worlds than anyone. And I've pitted my own hand-written image description against an AI-generated image description of the self-same image twice. So I guess I know what I'm writing about.

CC: @🅰🅻🅸🅲🅴  (🌈🦄) @nihilistic_capybara

#Long #LongPost #CWLong #OpenSim #OpenSimulator #Metaverse #VirtualWorlds #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLM #AIVsHuman #HumanVsAI
Replied in thread
Are you referring to my mentions being @Erik :heart_agender: and @Roknrol rather than what you're used to, namely @⁠bright_helpings and @⁠roknrol? Using the long name rather than the short name and keeping the @ outside the link rather than making it part of the link? Likewise, the # being outside the hashtag link rather than being part of it?

This is because I'm not on Mastodon. The Fediverse is not only Mastodon. It has never been. So this is not a toot.

No, really. This is what I post from: https://hub.netzgemeinde.eu/channel/jupiter_rowland, https://hub.netzgemeinde.eu/profile/jupiter_rowland. I ask you: Does this look like Mastodon? Have you ever seen Mastodon look like this?

Where I am, this style of mentions and hashtags is hard-coded. And it has been since long before Mastodon was even an idea.

I'm on something named Hubzilla. Hubzilla is not a Mastodon instance. Hubzilla is not a Mastodon fork either. Hubzilla has got absolutely nothing to do with Mastodon at all.

It is its very own project, fully independent from Mastodon (https://hubzilla.org, https://framagit.org/hubzilla, https://joinfediverse.wiki/Hubzilla).

Hubzilla has not intruded into "the Mastodon Fediverse" either. The Fediverse is older than Mastodon. And Hubzilla was there before Mastodon.

Hubzilla was launched by @Mike Macgirvin in March, 2015, eight months before Mastodon, by renaming and redesigning his own Red Matrix from 2012, almost four years before Mastodon. And the Red Matrix was a fork of a fork of his own Friendica, which was launched on July 2nd, 2010, 15 years ago, five and a half years before Mastodon. (https://en.wikipedia.org/wiki/Friendica, https://friendi.ca, https://github.com/friendica, https://joinfediverse.wiki/Friendica)

Friendica was there before Mastodon, too.

Here's the official Friendica/Hubzilla timeline on Hubzilla's official website to show you that I'm not making anything up: https://hubzilla.org/page/info/timeline. Scroll all the way down and notice all the features that you may currently believe the Fediverse doesn't have, but that Friendica introduced to the Fediverse 15 years ago, five and a half years before Mastodon was launched.

Again, Mastodon has never been its own network. The Fediverse has never been only Mastodon. When Mastodon was launched in January, 2016, it immediately federated with what was already there.

Friendica has been formatting mentions and hashtags the way I just did for 15 years now. When Mastodon was launched, Friendica had already been formatting them that way for five and a half years, and Hubzilla had done so for ten months. It is hard-coded there. It is not a user option.

That's because not everything in the Fediverse is a Twitter clone or Twitter alternative. Friendica was designed as a Facebook alternative with full-blown long-form blogging capability. And Hubzilla adds even more stuff to this. This is why Friendica and Hubzilla don't mimic Twitter.

Another shocking fact: As you can clearly see here, Friendica and Hubzilla don't have Mastodon's 500-character limit. Friendica's character limit is 200,000. Hubzilla's character limit is 16,777,215, the maximum length of the database field. And it's deeply engrained in their culture, which is many years older than Mastodon's culture, to not worry about the length of a post exceeding 500 characters.

One more shocking fact: Friendica has had quote-posts since its very beginning. So has Hubzilla. Both have always been able to quote-post any public Mastodon toot, and they will forever remain able to quote-post any public Mastodon toot. And Mastodon will never be able to do anything against it. (By the way: In 15 years of Friendica, nobody has ever used quote-posts for dogpiling or harassment purposes. Neither Friendica nor Hubzilla is Twitter.)

You find this disturbing? You think none of this should exist in the Fediverse, even though all this has been in the Fediverse for longer than Mastodon?

Then go ahead and block all instances of Friendica and Hubzilla as well as all instances of Mike's later creations, (streams) (https://codeberg.org/streams/streams) from 2021 and Forte (https://codeberg.org/fortified/forte) from 2024.

Or you could go ask @Seirdy / DM me the word "bread" and @Garden Fence Blocklist as well as @Mad Villain of @The Bad Space to add every last instance on any of these lists to their blocklists for being "rampantly and unabashedly ableist and xenophobic by design" due to not being and acting and working like Mastodon and just as rampantly and unabashedly refusing to fully adopt and adapt to the Mastodon-centric "Fediverse culture" as defined by fresh Twitter refugees on Mastodon in mid-2022 as well as refusing to abandon their own culture which is disturbingly incompatible with Mastodon's. Essentially try and have four entire Fediverse server applications Fediblocked once and for all because they're so disturbing from a "Fediverse equals Mastodon" point of view.

Or you could go to Mastodon's GitHub repository (https://github.com/mastodon/mastodon), submit a feature request for defederating Mastodon from everything that isn't Mastodon by design and then go lobbying for support for your feature request.

As for why I have so many hashtags below my comments, here is what they mean. Many of them are meant to trigger filters, including such that automatically hide posts behind content warning buttons, a feature that Mastodon has had since October, 2022, that Friendica has had since July, 2010, and that Hubzilla has had since March, 2015.

  • #Long, #LongPost = This post is over 500 characters long. Create a filter for either or both of these hashtags if you don't want to see my or anyone else's long posts.
  • #CWLong, #CWLongPost = CW: long post (over 500 characters long). Create a filter for either or both of these hashtags if you don't want to see my or anyone else's long posts.
  • #FediMeta, #FediverseMeta = This post talks about the Fediverse. Create a filter for either or both of these hashtags if you don't want to see me or anyone talk about the Fediverse.
  • #CWFediMeta, #CWFediverseMeta = CW: Fediverse meta. Or: CW: Fediverse meta, Fediverse-beyond-Mastodon meta. Or: CW: Fediverse meta, non-Mastodon Fediverse meta. Create a filter for either or both of these hashtags if you don't want to see me or anyone talk about the Fediverse.
  • #NotOnlyMastodon, #FediverseIsNotMastodon, #MastodonIsNotTheFediverse: This post talks about the Fediverse not only being Mastodon. Create a filter for either or multiple or all of these hashtags if you don't want to see me or anyone else talk about the Fediverse being more than Mastodon. Otherwise, click or tap any of these hashtags to read more about it in your Fediverse app.
  • #Friendica: This post talks about the Facebook alternative in the Fediverse named Friendica. Create a filter for it if you don't want to see me or anyone else talk about Friendica. Otherwise, click or tap it to read more about it in your Fediverse app. It is also meant for post discovery.
  • #Hubzilla: This post talks about the Swiss Army knife of the Fediverse named Hubzilla. Create a filter for it if you don't want to see me or anyone else talk about Hubzilla. Otherwise, click or tap it to read more about it in your Fediverse app. It is also meant for post discovery.
  • #Streams, #(streams): This post talks about the Facebook alternative in the Fediverse commonly referred to as (streams). Create a filter for either or both of them if you don't want to see me or anyone else talk about (streams). Otherwise, click or tap either of them to read more about it in your Fediverse app. It is also meant for post discovery.
  • #Forte: This post talks about the Facebook alternative in the Fediverse named Forte. Create a filter for it if you don't want to see me or anyone else talk about Forte. Otherwise, click or tap it to read more about it in your Fediverse app. It is also meant for post discovery.
  • #AltText = This post talks about alt-text and/or contains an image with alt-text. It is primarily meant for post discovery.
  • #AltTextMeta = This post talks about alt-text. Create a filter for this hashtag if you don't want to see me or anyone else talk about alt-text.
  • #CWAltTextMeta = CW: alt-text meta. Create a filter for this hashtag if you don't want to see me or anyone else talk about alt-text.
  • #ImageDescription = This post talks about image descriptions and/or contains an image with an image description. It is primarily meant for post discovery.
  • #ImageDescriptions, #ImageDescriptionMeta = This post talks about image descriptions. Create a filter for either of these hashtags if you don't want to see me or anyone else talk about image descriptions.
  • #CWImageDescriptionMeta = CW: image description meta. Create a filter for this hashtag if you don't want to see me or anyone else talk about image descriptions.
  • #Hashtag, #Hashtags, #HashtagMeta = This post talks about hashtags. Create a filter for any of these hashtags if you don't want to see me or anyone else talk about hashtags.
  • #CWHashtagMeta = CW: hashtag meta. Create a filter for this hashtag if you don't want to see me or anyone else talk about hashtags.
  • #CharacterLimit, #CharacterLimits = This post is talking about character limits. It is primarily meant for post discovery. But if you don't want to see me or anyone else talk about character limits, create a filter for any of these hashtags.
  • #QuotePost, #QuoteTweet, #QuoteToot, #QuoteBoost = This post talks about quote-posts and/or contains a quote-post. If this disturbs you, create a filter for any of these hashtags.
  • #QuotePosts, #QuoteTweets, #QuoteToots, #QuoteBoosts, #QuotedShares = This post talks about quote-posts. Create a filter for any of these hashtags if you don't want to see me or anyone else talk about quote-posts.
  • #QuotePostDebate, #QuoteTootDebate = This post talks about quote-posts. Create a filter for either of these hashtags if you don't want to see me or anyone else talk about quote-posts.
  • #FediblockMeta = This post is talking about fediblocks. It is primarily meant for post discovery.

Lastly: Having all hashtags in one line at the very end of a post that only contains hashtags is the preferred way in the Fediverse. For one, hashtags in their own line at the end of the post irritate screen reader users much less than hashtags in the middle of the text. It's actually hashtags in the middle of the text that are ableist. Besides, Mastodon is explicitly designed to have a separate hashtag line at the end of the post.
Replied in thread
@Erik :heart_agender: @Roknrol What if I transcribe text within my image (for any definition of "text within my image") in a long image description in the post itself which I write in addition to the actual alt-text? And the alt-text explicitly mentions the long description at its end? E.g. "A more detailed description including explanations and text transcripts can be found in the post."

I often have so many bits of text to transcribe (in addition to describing where in the image they are) that I can't fit them all into the 1,500-character limit for alt-texts that Mastodon, Misskey and their respective forks impose on the whole Fediverse.

I'm not talking about screenshots from social media or something. I'm talking about renderings from 3-D virtual worlds where there may be 20, 30, 40 or more bits of text strewn across the scenery within the borders of the image. The rule says that all text within an image must be transcribed 100% verbatim, and it doesn't explicitly mention any exception, so I do have to transcribe them all. In addition, if they aren't in English, I must additionally translate them as literally as possible. There's no way I can fit all this plus a sufficiently detailed and accurate visual description into 1,500 characters.

But if you (or others) insist that all text within an image must be transcribed verbatim in the alt-text, and if you sanction image posts that transcribe the texts in the image elsewhere than in the alt-text, then I simply won't be able to post certain images in an appropriate way.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #Transcript #Transcripts
@Alt Text Hall of Fame
It's okay, you don't have to overthink it! Write how you'd describe the image to a friend over the phone.

This only works with simple real-life photos.

If your image shows more obscure stuff (like mine), this does not work. (Especially Mastodon users: The link goes to a Fediverse post that you may import into your timeline by copying the URL and searching for it.)

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Linked post on hub.netzgemeinde.eu: "You can't describe images in Fediverse posts like over the phone. If I were to describe images like over the phone, I'd expect feedback like over the phone." (CW: long, over 2,000 characters; alt-text meta, image description meta)
@Nervensäge 💐 I have found a few guides for alt-text and image descriptions, but they may contradict each other.


The existing guides on how to write alt-text in social media aren't worth the effort. They don't tell you anything the guides above don't, at least not beyond walking you through the process of adding alt-text to images on certain social media, step by step. Most of them only cover American corporate social networks and social media (Facebook, Instagram, 𝕏, LinkedIn). A few add TikTok. Very few also add Mastodon, but even they only walk you through adding alt-text on Mastodon's standard Web interface. They do not deal with Mastodon's special alt-text culture. They assume that all social networks and social media have either the exact same alt-text culture as websites and blogs or none at all. And literally not a single guide covers anything in the Fediverse that is not Mastodon.

Hence my wiki plans. For one, I want to explain alt-text and image descriptions in the Fediverse as a whole. I won't include step-by-step walkthroughs because I can't possibly know every Web UI and every phone app out there, but I will point out that alt-text doesn't work exactly the same everywhere in the Fediverse as on Mastodon. Besides, I want to take Mastodon's alt-text culture into consideration which is being forced upon the whole rest of the Fediverse. Finally, I want to write guides on certain aspects of describing images and writing alt-text and not only compile the information that's strewn about the Web in lots of individual guides, but also link to these guides as references and point out when they contradict each other.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Linked guide on www.accessible-social.com: "Writing Image Descriptions", tips on how to write effective image descriptions to make visuals accessible.
@Nervensäge 💐 Just a pity that this one particular guide doesn't really work in the Fediverse.

First of all, the concept of "too long alt-text" or "too detailed alt-text" doesn't exist in the Fediverse, at least not on Mastodon where accessibility standards were defined by overly eager laypeople.

Next, there are no decorative images in Fediverse posts.

Also, only a few Fediverse server applications support adding HTML tags to posts. The vast majority of Fediverse users, especially everyone on Mastodon, have a dedicated text entry field for adding alt-texts to image file attachments.

Finally, SEO does not matter in the Fediverse at all.

The whole guide is about alt-text on static websites designed by paid professional Web developers, as opposed to social media users, two out of three of whom can only post plain text.

All this is why I've started putting together a wiki specifically for alt-text and image descriptions in the Fediverse.

CC: @DNS

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Replied in thread
@Big Pawed Bear Again, I'm NOT talking about the technological side. I'm NOT talking about how certain platforms are rendering alt-texts.

I'm talking about describing an image for one person whom I know vs describing an image for billions of people whom I don't know. That's a huge difference.

Let's suppose I've rendered a picture in a 3-D virtual world. A very obscure one. (Because that's what I normally do.) Chances are people won't get that image without explanations, simply because they don't know anything about these worlds.

If I want to describe that image to my friend Joe over a landline phone, I can ask Joe what he knows about virtual worlds, and whether he needs some explanations first.

If Joe says yes, he'd like some explanations, I can take a deep breath and explain away. If the explanations go too much in-depth, or if they become too big an info dump for him to handle, Joe can stop me while I'm explaining.

After explaining, I can ask Joe what I shall describe to him. Only what's important? Everything because Joe is super-curious about these virtual worlds, and he wants to know all the details so he can imagine what that virtual world looks like?

And Joe can answer. If Joe answers that he does not want a super-detailed description of everything, I don't have to give him a super-detailed description of everything. And if my description becomes too detailed, Joe can rein me in and tell me to stop.

If I want to describe that image when I post it in the Fediverse, it's very different.
  • I post it to not one person, but to many people. Potentially billions of them, namely everyone with Internet access.
  • I cannot ask them all what they need explained before I start describing. In particular, I cannot ask every one of them individually what they need explained.
  • In fact, I can't even know beforehand who will receive my image post.
  • Still, I have to cater to everyone's needs all the same. I have to do so immediately without being explicitly asked to do so. And people's needs are different.
  • Lastly, they cannot talk back while I'm describing/explaining. If my explanations go too much into detail, they cannot stop me in the middle of my explanations. Besides, someone somewhere out there might actually need my explanations in their entirety.
  • The same goes for the visual descriptions. Some may want or need every last detail in the image described in-depth because they're so curious about the topic. Others may only be interested in what they think or what I think is important in the image. But they can't stop my super-detailed describing of absolutely everything in the image for those who want or need it. Even if they could, it'd be unfair towards those who do need a full, detailed description.

The result: I have to deliver the maximum right away. I have to start with a whole lot of explanations because someone somewhere out there probably won't understand my image without these explanations. And then I have to continue with an extremely detailed visual description because someone somewhere out there may want or even require one. Regardless of what everyone else wants.

It's like describing an image to Joe over a landline phone, but I don't know Joe, I don't know what Joe wants or needs, I don't even know that it's Joe on the other end, there may be other people around Joe's phone who want to hear the description, too, and someone has cut off the microphone in Joe's phone first, only to re-activate it after I'm done describing the image three hours later.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta