Right now Lemmy's accessibility features leave a bit to be desired, but it's worth keeping in mind that the fediverse and its platforms tend to have fairly universal accessibility features, which will likely come to Lemmy sooner rather than later.
I really like the text size options in Voyager and Memmy.
Much better than the text size options on apps for that other site.
Sometimes I think the formatting could use some additional attention, for example if devs tested more with larger text sizes. But it's still pretty good.
Alt-text descriptions should clearly convey both the content and the meaning of the image, and should aim to use as few words as needed. Describe what’s essential to understanding (and enjoying!) the intent of the posted photo — you don’t need to add in a sentence for every visual element, but should include as much as you need to create an accurate portrayal of the image. Cut out unnecessary words and combine separate sentences as much as possible. One to two sentences is usually more than enough room to describe what’s going on.
As mentioned before, these photos convey information to the people scrolling your page, even if you are just posting them to brighten up your feed. They have a purpose, and for that reason, alt text should focus more on the image’s meaning than its aesthetics. This means you’re not focused only on what the object in the photo looks like, but on what it is and why it was posted.
I was hoping to see a format that people can easily follow and just fill in the blanks, but I suppose this is the gist of it: Describe the main purpose of the photo succinctly rather than each and every individual thing you can see.
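For what it's worth, Lemmy posts and comments use markdown, and standard markdown puts the alt text in the square brackets of an inline image. So a fill-in-the-blanks template could look like this (the URL is just a placeholder):

```markdown
![One or two sentences describing what the image is and why it was posted](https://example.com/photo.jpg)
```

The text in the brackets is what screen readers announce in place of the image.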
I’ll try to do this, but man, it’d be great if there was an AI program that could auto-caption/describe pictures as I post them. Or maybe one that could interpret and describe everything on the screen for a visually impaired user.
There are some apps for Mastodon that do just that and hook into the alt text button that exists within Mastodon.
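As a rough sketch of how such a hook could work: `generate_caption` below is a hypothetical stand-in for whatever captioning model an app might call (none of these names come from a real Mastodon app's API). The key design choice is keeping a human review step, so the model's draft never goes out unedited:

```python
def generate_caption(image_path: str) -> str:
    """Hypothetical stand-in for an image-captioning model.
    A real app would call a vision model here; this returns a
    placeholder draft so the flow can be demonstrated."""
    return "A draft description of " + image_path


def draft_alt_text(image_path: str, reviewer=input) -> str:
    """Propose AI-generated alt text, but let the poster edit it
    before it is attached to the post. The review step matters
    because model output can be wrong or biased."""
    draft = generate_caption(image_path)
    # Show the draft and let the user replace it; pressing Enter
    # (empty input) accepts the draft as-is.
    edited = reviewer(f"Alt text [{draft}]: ").strip()
    return edited or draft
```

Passing a different `reviewer` callable makes this testable and lets a GUI app plug in its own edit dialog instead of terminal input.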
Of course Mastodon has them, but I'm not sure about Lemmy.
Or somebody should build AI-powered image interpretation into screen readers.
Unfortunately those aren’t very reliable, since the current iteration of AI isn't very reliable in general. Most of the models available heavily perpetuate a multitude of different biases.