XR Accessibility: for people with seeing disabilities
Posted by Joe Lamyman in Design and development
Extended Reality (XR) experiences tend to focus on providing rich, visual content to convey information. But we need to consider how we convey the information in these experiences to people who can’t see them.
In this post, we’ll explore considerations for designing and developing inclusive XR experiences for people with seeing disabilities.
If you haven't already, you can also explore other articles in this series, including:
- Introduction to XR accessibility
- XR Accessibility: for people with hearing disabilities
- XR Accessibility: for people with thinking disabilities
You can also watch my Introduction to XR accessibility talk from Inclusive Design 24 (#ID24).
Who does this affect?
People with seeing disabilities might include:
- People who are blind and have no sight
- People who have low vision and are able to see some content, but might find content blurry, or may have spots in their vision where they cannot see
- People with colour vision deficiency, who might not be able to perceive information conveyed using colour alone
- People with light sensitivity, who might find bright experiences to be painful or uncomfortable
- People who are experiencing migraines, who might have their vision affected and be unable to focus on content
As we mentioned in Introduction to XR accessibility, disabilities might be permanent, temporary, or situational. Regardless, we need to create experiences that are usable by people with disabilities.
Identify content
We’ll start with one of the most fundamental user requirements. How do we ensure that people who cannot see can understand what’s in our experience and how they can interact with it?
Here are a few user stories that summarise this need:
As someone with low vision, I want to know the objects that are available to me, so that I can find the item I need.
As someone with no vision, I want to know what an item is and any information within it, so that I can understand its purpose.
In short, people need to be able to identify objects around them and interact with them.
Querying content
If this were the web, content and controls would be in the browser's accessibility tree. A screen reader could query the tree and announce the content to a user. But this isn't the case with XR experiences. A web-based XR experience might use a DOM Overlay, which allows us to use HTML elements; in that case, there would be an accessibility tree. Yet most experiences will probably be built with game engines, whose controls and objects don't convey programmatic information. Instead, they rely on appearance alone.
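To make the DOM Overlay idea concrete, here is a minimal sketch of requesting a WebXR session with an overlay. The `overlay` element ID is an assumption; any HTML placed inside it stays in the accessibility tree during the immersive session:

```typescript
// Minimal sketch: a WebXR session with a DOM Overlay. HTML inside the
// overlay element remains in the accessibility tree, so screen readers
// can still reach it while the session is immersive.
const overlayRoot = document.getElementById('overlay')!; // hypothetical element

async function startSession(): Promise<XRSession | undefined> {
  if (!navigator.xr) return undefined; // WebXR not supported
  return navigator.xr.requestSession('immersive-ar', {
    optionalFeatures: ['dom-overlay'],
    domOverlay: { root: overlayRoot },
  });
}
```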
There are many ways that we can design for this need, but as always, the solution will depend on the context of use.
You could allow people to query the experience with a shortcut. Once triggered, it would announce the content around the player, much like a screen reader. People could then listen to the content and interact with it if they want to.
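As a rough sketch of this pattern on the web, a keyboard shortcut could gather the accessible names of nearby objects and announce them through an ARIA live region. The `getNearbyObjects` helper and the choice of shortcut are hypothetical:

```typescript
// Hypothetical helper: returns accessible names of objects near the player.
declare function getNearbyObjects(): { name: string; distance: number }[];

// A live region (for example, inside a DOM overlay); screen readers
// announce whatever text is written into it.
const liveRegion = document.createElement('div');
liveRegion.setAttribute('aria-live', 'polite');
document.body.append(liveRegion);

// Example shortcut: press "q" to query the surroundings.
document.addEventListener('keydown', (event) => {
  if (event.key !== 'q') return;
  const summary = getNearbyObjects()
    .map((object) => `${object.name}, ${object.distance.toFixed(1)} metres away`)
    .join('; ');
  liveRegion.textContent = summary || 'Nothing nearby';
});
```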
Using Text to Speech descriptions
In the virtual reality (VR) experience Cosmonious High, a Text to Speech (TTS) engine provides information to players. People can use the VR controllers to hover over objects and hear a description of the content.
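On the web, the speech synthesis part of the Web Speech API offers a similar building block. A sketch, assuming your engine exposes a hover event and a stored description for each object (both are assumptions here):

```typescript
// Sketch: speak an object's stored description when it is hovered.
// How hover events and descriptions are exposed depends on your engine.
function speakDescription(description: string): void {
  speechSynthesis.cancel(); // stop any earlier announcement
  const utterance = new SpeechSynthesisUtterance(description);
  speechSynthesis.speak(utterance);
}
```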
If someone has low vision, but can still see the experience, we could use personalisation. Examples might include increasing zoom, contrast, or adding visual identifiers around important objects. These adjustments would allow people to find the objects themselves and perceive them.
We could also use a multi-modal approach. For example, you could add multiple identifiers to help people locate objects. You could add haptic feedback, such as a rumble in the controllers, and pair it with audio cues and visual outlines. These different approaches provide people with different ways to find objects.
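In WebXR, controller rumble is exposed through the Gamepad API's haptic actuators. A small sketch (note that `pulse` is not yet supported in every browser, and the intensity and duration values here are arbitrary):

```typescript
// Sketch: briefly rumble a controller when the player's hand is near
// an important object. Intensity ranges 0..1; duration is in milliseconds.
function pulseController(inputSource: XRInputSource): void {
  const actuator = inputSource.gamepad?.hapticActuators?.[0];
  actuator?.pulse(0.8, 100);
}
```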
Navigating through the experience
We also need to make sure that information about how to navigate the environment and user interface (UI) is available to people who cannot see it. User stories for this could include:
As someone with low vision, I want to know where I need to go, so that I can complete my objectives.
As someone with no vision, I want to know where I am, so that I can understand if I’m in the right location.
For UI elements (for example, menus), we could allow people to hover over the content for more information. However, describing environmental and directional information is more complex, because we may need to provide it ahead of time: before people can interact with the environment, and perhaps before they even know about the space they're going to navigate into.
When creating an experience, think about how you could describe a location. Think about how you can inform people about where they have to go and what they have to do. Provide clear, straightforward objectives that allow people to understand what they must do. Alongside this, describe the space around people, so that they know if they're in the right place.
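One way to make this concrete is to store a short accessible description against each zone of the environment and announce it when the player arrives. The zone data and the `announce` helper in this sketch are hypothetical:

```typescript
// Sketch: announce a stored description when the player enters a new zone.
interface Zone {
  name: string;
  description: string; // e.g. "Workshop. The fuse box is on the wall ahead."
  contains(position: { x: number; y: number; z: number }): boolean;
}

declare const zones: Zone[]; // hypothetical level data
declare function announce(text: string): void; // e.g. TTS or a live region

let currentZone: Zone | undefined;

function onPlayerMoved(position: { x: number; y: number; z: number }): void {
  const zone = zones.find((z) => z.contains(position));
  if (zone && zone !== currentZone) {
    currentZone = zone;
    announce(`${zone.name}. ${zone.description}`);
  }
}
```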
As another example, consider a VR experience where somebody is holding a multimeter and looking at a fuse box.
In this case, think about the important information that people need to operate this equipment:
- What do they need to know to use this piece of equipment?
- Where should they place the equipment?
- Are different colours used to distinguish wires or buttons with different purposes?
- Is there information about the environment that people need to know? For example, that a fuse box is in front of them
- What about the state of objects? Is there important information about the switches and whether they're on or off?
- Is there important information about where the wire from this fuse box leads?
These are all things that we would need to explain to people with low vision or colour vision deficiency.
There are ways that we can provide this information using a multi-modal approach. As well as spoken descriptions, you could use sound effects. Using spatial audio, you could play a sound effect to guide people to their objective.
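On the web, the Web Audio API's PannerNode positions a sound in 3D space so that it appears to come from the objective. A sketch, assuming `buffer` holds a decoded sound effect and the coordinates match your scene's coordinate system:

```typescript
// Sketch: play a positioned "beacon" sound at the objective's location.
function playBeacon(
  ctx: AudioContext,
  buffer: AudioBuffer,
  x: number,
  y: number,
  z: number,
): void {
  const source = new AudioBufferSourceNode(ctx, { buffer });
  const panner = new PannerNode(ctx, {
    panningModel: 'HRTF', // binaural rendering, best over headphones
    positionX: x,
    positionY: y,
    positionZ: z,
  });
  source.connect(panner).connect(ctx.destination);
  source.start();
}
```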
Context of view
If someone is zoomed in, there might be a lot of information outside their field of vision. If this information is important, people need to know about it and have access to it.
As someone with low vision, when zoomed in, I want to know important context about content outside of my field of view.
Think about the important information that your experience needs to communicate. This might be objectives or environmental information. People might miss this information if zoomed in to 200%. Don’t design assuming that people can see everything that is on-screen. Instead, consider ways that you can provide information, without having to zoom out.
Your solution for this might depend on how zoom functionality is provided. For example, Apple's Vision Pro headset provides the ability to zoom within a window, or to zoom the whole view to 200%. Knowing how this works can help you to better tailor the experience for people using magnification.
Personalise displays
We need to make sure that our experiences support people's preferences. Think about the different parts of the visual design that people might want to change. This could include:
- Font family
- Font size
- Font weight
- Colour theme
- Display of captions
- Focus indicators
People might want to change these aspects of the display so that they can perceive the content.
As someone with low vision, I want to change the appearance of text in the experience, so that I can comfortably read it.
Providing larger or customisable text sizes may make the experience more comfortable to use. This might also lead to people being able to use the experience for longer.
Good contrast is also important in XR to ensure that people can perceive content. The colours that we choose might not work for everyone, so providing alternative themes is a great way to offer choice. For example, you could add a dark mode for people who prefer darker interfaces to avoid migraines, or a high contrast mode for people with low vision who find it more usable.
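For a web-based experience, one lightweight way to honour these preferences is to map them onto CSS custom properties and data attributes on a DOM overlay. The preference names in this sketch are illustrative:

```typescript
// Sketch: apply display preferences to an overlay's root element.
interface DisplayPreferences {
  fontFamily: string;
  fontScale: number; // 1 = default size, 2 = 200%
  theme: 'default' | 'dark' | 'high-contrast';
  showCaptions: boolean;
}

function applyPreferences(root: HTMLElement, prefs: DisplayPreferences): void {
  root.style.setProperty('--font-family', prefs.fontFamily);
  root.style.setProperty('--font-scale', String(prefs.fontScale));
  root.dataset.theme = prefs.theme; // styled via [data-theme] CSS selectors
  root.classList.toggle('captions-hidden', !prefs.showCaptions);
}
```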
Moving closer to content
Large fonts may allow people to comfortably read content, but they might not work for everyone. In Jesse Anderson's excellent #ID24 talk, An Illegally Sighted Look at VR Accessibility, he explains how a larger text size sometimes isn't enough, and some platforms behave unexpectedly with large text. Instead, Jesse often leans in, getting closer to content in order to read it. Based on Jesse's experience, we can use the following user story:
As someone with low vision, I want to be able to view text as close as I need to, so that I can comfortably read it.
To achieve this, allowing people to move closer to content they want to read can be a helpful approach. It enables people to get as close to the content as they need.
An alternative approach is to allow people to move the content. In the Apple Vision Pro, people are able to move the application windows closer to themselves. This way, they do not have to move themselves towards the content. This approach might also work better for people who are unable to lean in due to having reduced movement.
Resetting a view
As people move around, it's important that they can reset or re-centre their view. Sometimes this allows people to move closer to content in order to read it. It also helps in cases where content disappears due to tracking or hardware issues. Resetting the view brings the content back into focus so that people can continue to use the experience.
As someone with low vision, I want to be able to recalibrate my view, so that I can continue to use the experience.
Letting people recalibrate or reset their view means they can use the experience without disruption.
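In WebXR, one way to implement a re-centre is to offset the reference space by the viewer's current transform, so that the viewer's pose becomes the new origin. A sketch, assuming you keep track of the latest viewer pose each frame (a production version might keep only the yaw component of the rotation):

```typescript
// Sketch: re-centre by offsetting the reference space so the viewer's
// current pose becomes the new origin. Content authored at the origin
// then appears in front of the user again.
function recentre(
  refSpace: XRReferenceSpace,
  viewerPose: XRViewerPose,
): XRReferenceSpace {
  return refSpace.getOffsetReferenceSpace(viewerPose.transform);
}
```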
Summary
XR allows people to experience immersive environments. We must create these environments in a way that includes the needs of people with seeing disabilities. We can do this by providing ways to identify and query content, text to speech descriptions, navigation prompts, descriptions of context, personalisation, and zoom features.
More information
- Inclusive XR: accessible augmented reality experiences, Joe Lamyman
- Inclusive XR: accessible 3D experiences, Joe Lamyman
- An Illegally Sighted Look at VR Accessibility, Jesse Anderson
- Barriers Browser, BBC
- Inventing the "screenreader" for VR: Owlchemy Labs' Cosmonious High, Owlchemy Labs
- XR accessibility user requirements, W3C
Next steps
If you're currently designing an XR product, our design review service will provide you with our accessibility expertise and guidance to build an accessible experience. If you already have an XR product, our assessments can help you to understand whether your product meets accessibility standards.
We like to listen
Wherever you are in your accessibility journey, get in touch if you have a project or idea.