Making LinkedIn Media More Inclusive with Alternative Text Descriptions


Co-authors: Vipin Gupta, Ananth Sankar and Jyotsna Thapliyal

As part of our vision to provide economic opportunity for every member of the global workforce, LinkedIn creates a unique environment for members to network, learn, share knowledge, and find jobs. In many ways, the LinkedIn feed has become the core of this effort as the preeminent way to share information and participate in conversations on our site. Alongside text, rich media has become an important component of the feed as well. But the addition of rich media within the LinkedIn feed raises a question: is the feed fully inclusive for all LinkedIn members? 

For instance, can a member who has a vision disability still enjoy rich media on the feed? Can a member in an area with limited bandwidth, which could stop an image from fully loading, still have the complete feed experience? To uphold our vision, we must make rich media accessible for all of our members.

One way to improve the accessibility of rich media is by providing an alternative text description when uploading an image. A good alternative text description describes an image thoroughly while drawing the viewer’s attention to the important details. All the major elements or objects of the image should be identified and presented in a single, unbiased statement. Currently, LinkedIn allows members to manually add an alternative text description when uploading images via the web interface, but not all members choose to take advantage of this feature. To improve site accessibility, our team has begun creating a tool that adds a suggested alternative text description for images uploaded to LinkedIn. Although computer vision has made great strides in recent years, automatically generating text descriptions is still a difficult task—compounded by the fact that images on LinkedIn tend to be professional or work-oriented, rather than general-purpose.

This blog post provides a brief overview of the technologies we are exploring to improve content accessibility at LinkedIn, leveraging existing solutions from Microsoft Cognitive Services while also customizing our models for LinkedIn’s unique dataset.
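To make the "existing solutions" part concrete, the sketch below shows how a caption can be requested from the Computer Vision service in Microsoft Cognitive Services and used as a suggested alternative text description. This is only a minimal illustration, not LinkedIn's production integration: the endpoint, subscription key, API version, and image URL are placeholders.

```python
# Minimal sketch: request a machine-generated image description from the
# Computer Vision service in Microsoft Cognitive Services.
# The endpoint, key, and image URL are placeholders, not production values.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
SUBSCRIPTION_KEY = "<your-subscription-key>"                      # placeholder

def describe_image(image_url: str) -> str:
    """Return the top caption suggested by the Analyze Image API."""
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Description"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    captions = response.json()["description"]["captions"]
    # Use the highest-confidence caption as the suggested alternative text.
    best = max(captions, key=lambda c: c["confidence"])
    return best["text"]

if __name__ == "__main__":
    print(describe_image("https://example.com/sample-image.jpg"))
```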

Why alternative text descriptions?

There are several ways that alternative text descriptions for images can improve the accessibility of rich media in the feed. For members using assistive technology like a screen reader, alternative text descriptions provide a textual description of image content. Similarly, in areas where bandwidth may be limited, such descriptions allow members to understand the key features of an image, even if the image itself cannot be loaded.

If a member doesn’t provide an alternative text description at the time of image upload, we can turn to multiple methodologies for generating alternative text descriptions at scale, including machine learning and, in particular, deep neural networks.
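As a rough illustration of the deep learning approach, the sketch below shows the kind of encoder-decoder architecture commonly used for image captioning: a convolutional network encodes the image into a feature vector, and a recurrent decoder turns that vector into a word sequence. This is not LinkedIn's model; the dimensions, vocabulary size, and dummy inputs are illustrative assumptions.

```python
# Simplified encoder-decoder image captioning sketch (CNN encoder + LSTM decoder).
# All sizes and inputs below are placeholders for illustration only.
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    def __init__(self, embed_size: int):
        super().__init__()
        resnet = models.resnet50(weights=None)  # use pretrained weights in practice
        # Drop the classification head; keep the pooled image features.
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.fc = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        features = self.backbone(images).flatten(1)   # (batch, 2048)
        return self.fc(features)                      # (batch, embed_size)

class DecoderRNN(nn.Module):
    def __init__(self, embed_size: int, hidden_size: int, vocab_size: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, features: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        # Prepend the image feature as the first "token" of the sequence.
        embeddings = torch.cat([features.unsqueeze(1), self.embed(captions)], dim=1)
        hidden, _ = self.lstm(embeddings)
        return self.out(hidden)                       # word scores per position

if __name__ == "__main__":
    encoder = EncoderCNN(embed_size=256)
    decoder = DecoderRNN(embed_size=256, hidden_size=512, vocab_size=10000)
    images = torch.randn(2, 3, 224, 224)              # two dummy RGB images
    captions = torch.randint(0, 10000, (2, 12))       # two dummy token sequences
    logits = decoder(encoder(images), captions)
    print(logits.shape)                               # torch.Size([2, 13, 10000])
```

In practice, a model like this would be trained on pairs of images and human-written captions, and decoding would proceed word by word at inference time rather than from known caption tokens.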
