Facebook has revamped its automatic alt text (AAT) feature, which provides detailed descriptions of images for people who are blind or visually impaired.
The social network introduced AAT in April 2016, using artificial intelligence and object recognition to automatically generate photo descriptions on demand.
Instagram added AAT in November 2018.
AAT is available to people who use screen readers, assistive technology that converts text and other on-screen elements into speech.
Facebook wrote in a newsroom post on Tuesday: “When Facebook users scroll through their news feed, they find all kinds of content – articles, friend comments, event invitations and, of course, photos. Most people can instantly see what’s in these photos, whether it’s their new grandchild, a boat on a river, or a grainy image of a band on stage. But blind or visually impaired users can experience these images as well, provided they are properly labeled with alt text. A screen reader can describe the content of these images using a synthetic voice and allow BVI people to understand the images in their Facebook feed.”
The company’s AAT technology can now recognize more than 1,200 objects and concepts, more than 10 times the number when the tool was introduced in 2016, meaning more photos will now have descriptions.
These descriptions are also more detailed, as the updated technology can identify activities, animals, landmarks and other details, with Facebook citing this example: “May be a selfie of two people, outdoors, the Leaning Tower of Pisa.”
The social network also made it possible to include information about the location and relative size of elements in a photo, providing these examples:
Instead of describing a photo as “may be an image of five people,” AAT can now specify that there are two people in the center of the photo and three more scattered toward the edges, suggesting that the two in the center are the focus of the photo.
Instead of describing a landscape as “may be a house and a mountain,” Facebook can now identify the mountain as the primary object, based on its size relative to the house at its base.
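Facebook has not published its implementation, but the behavior described above can be sketched with simple geometry: bucket each detected object's bounding-box center into a coarse position, and pick the largest box as the primary object. Everything here (the box format, the one-third grid, the function names) is a hypothetical illustration, not Facebook's actual code.

```python
# Hypothetical sketch of position and primary-object labeling from
# object-detection bounding boxes. Assumed box format:
# (label, x, y, width, height), with all coordinates normalized to [0, 1].

def position_label(x, y, w, h):
    """Bucket a box's center into a 3x3 grid: top/middle/bottom, left/center/right."""
    cx, cy = x + w / 2, y + h / 2
    vert = "top" if cy < 1 / 3 else "middle" if cy < 2 / 3 else "bottom"
    horiz = "left" if cx < 1 / 3 else "center" if cx < 2 / 3 else "right"
    return f"{vert} {horiz}"

def primary_object(boxes):
    """Return the label of the box covering the largest area of the image."""
    return max(boxes, key=lambda b: b[3] * b[4])[0]

boxes = [
    ("house", 0.10, 0.60, 0.20, 0.30),     # small box near the bottom left
    ("mountain", 0.30, 0.00, 0.70, 0.90),  # large box dominating the frame
]
print(primary_object(boxes))                      # mountain
print(position_label(0.10, 0.60, 0.20, 0.30))     # bottom left
```

In this sketch the mountain wins the "primary" slot purely because its box area (0.63) dwarfs the house's (0.06), mirroring the house-and-mountain example above.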
The social network said AAT is available for group photos, the news feed and profiles, as well as when images are opened in the detailed view, where the image appears full screen against a black background.
Alt text descriptions are simply worded, allowing them to be translated into 45 different languages.
Facebook said it had asked users who depend on screen readers to provide feedback on what types of information they wanted to hear and when they wanted to hear it, finding that they wanted more information when images came from friends and family, and less when they did not.
The social network explained what happens when users opt for more detailed descriptions: “A panel is presented that provides a more complete description of the content of a photo, including a count of the elements in the photo, some of which may not have been mentioned in the default description. Detailed descriptions also include simple position information – top/middle/bottom or left/center/right – and a comparison of the relative prominence of objects, described as ‘primary,’ ‘secondary’ or ‘minor.’ These words were specifically chosen to minimize ambiguity. Feedback on this feature during development showed that using a word like ‘large’ to describe an object can be confusing, as it is not clear whether the reference is to its actual size or its size relative to other objects in the image. Even a chihuahua looks big when photographed up close.”
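The chihuahua point is why prominence must be judged within a single image rather than against absolute sizes: a nearby small object can occupy more pixels than a distant large one. As a rough, purely illustrative sketch (the thresholds and interface are assumptions, not Facebook's published method), prominence labels could be assigned by comparing each object's box area to the largest box in the same photo:

```python
# Hypothetical sketch: assign 'primary'/'secondary'/'minor' prominence labels
# by each object's area relative to the largest object in the SAME image.
# Box format (assumed): (label, x, y, width, height), normalized to [0, 1].
# The 0.5 threshold separating 'secondary' from 'minor' is an arbitrary choice.

def prominence_labels(boxes):
    """Map each object label to 'primary', 'secondary' or 'minor'."""
    areas = {label: w * h for (label, x, y, w, h) in boxes}
    largest = max(areas.values())
    labels = {}
    for name, area in areas.items():
        ratio = area / largest
        if ratio == 1.0:
            labels[name] = "primary"
        elif ratio >= 0.5:
            labels[name] = "secondary"
        else:
            labels[name] = "minor"
    return labels

photo = [
    ("chihuahua", 0.20, 0.10, 0.60, 0.80),  # close-up: fills most of the frame
    ("person", 0.75, 0.30, 0.20, 0.50),     # farther away: smaller box
    ("ball", 0.05, 0.85, 0.10, 0.10),       # tiny detail
]
print(prominence_labels(photo))
```

Because the labels are relative, the same chihuahua photographed from across the room would drop to "minor" in a photo dominated by a person, which is exactly the ambiguity Facebook says "large" would reintroduce.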