Common misconceptions about testing accessibility
Posted by Ela Gorla in Design and development, Testing
Testing for accessibility is often misunderstood. Teams overestimate what tools can do, underestimate their own role, or assume testing happens only once, at the end of the development process.
With so many testing tools and methodologies available, it's not easy to know the most effective approach to testing the accessibility of your products. In this post we tackle some of the most frequent misconceptions about accessibility testing and how to address them.
In case you missed them, you can read the other blog posts in our Common misconceptions series.
Misconception 1: I can rely on automated testing tools
You may have come across automated tools that promise to test and identify all accessibility issues within your website or app. It can be tempting to rely solely on these tools when testing for accessibility. However, no automated tool can truly test all aspects of accessibility, no matter how expensive or advanced the tool is. The most they can test for is between 20% and 40% of accessibility requirements from the Web Content Accessibility Guidelines (WCAG).
Accessibility is more than just accessible code
Automated tools mainly focus on the code behind websites and apps. As explained in common misconceptions about WCAG, accessibility is much more than accessible code. It's good visual design, inclusive multimedia, well-crafted editorial, inclusive language and more. Many of these are missed by automated tools.
Human judgment is still required
Many accessibility tests cannot be performed by automated tools because they require human judgment. For example, can generative AI write text descriptions that reflect the context an image appears in? And how can a tool judge whether an error message is clear and offers useful suggestions?
Automated testing tools can help you quickly identify some common issues with your website or app code. However, you shouldn't rely on them alone; use them in conjunction with other testing methods, such as manual testing and user research.
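As an illustration, here's a minimal sketch of what an automated check might look like, assuming a Playwright test setup with the @axe-core/playwright package (the URL and test name are placeholders). A passing run only means no issues were detected automatically; it doesn't mean the page is accessible.

```ts
// A quick automated check with axe-core, run through Playwright.
// It flags common code-level issues, but it cannot judge things like
// the quality of text descriptions or the clarity of error messages.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});
```

A check like this fits naturally into an existing test suite, which is exactly how automated testing works best: as one layer alongside manual testing and user research.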
Misconception 2: testing should be done at the end of development
In some organisations, accessibility testing is the sole responsibility of QA testers and it happens at the end of a product or feature development process. Issues identified this late can be difficult and costly to fix.
As covered in common misconceptions about implementing accessibility, everyone working on a digital product is responsible for its accessibility. This includes testing against accessibility requirements at each stage of development. For example, designers should check the accessibility of their designs before sharing them with the development team; media producers and content writers should do the same with the content they create; and so on.
When everyone plays their role in both following accessibility guidelines and carrying out basic testing, products should present fewer issues when they reach QA.
Testing accessibility as early as possible and at every stage of development is key to delivering accessible products.
Misconception 3: only accessibility specialists can do testing
For people new to accessibility, testing may seem challenging and, at times, overwhelming. They may therefore assume that only accessibility specialists can and should do it. This isn't the case.
As mentioned above, everyone in a digital team is responsible for the accessibility of its products and should be able to perform some basic testing. You don't need to know everything about accessibility; focus on the aspects relevant to your role. Content writers, media producers, designers, and developers each have their own specific set of tests to carry out.
At times, you may need the advice and expertise of an accessibility specialist, for example when working on a complex or innovative component. However, most day-to-day testing can be performed in-house.
Ensure that people working in digital within your organisation have the training and tools required to perform accessibility testing. This will make it an integral part of everyone's job and will reduce your reliance on accessibility specialists.
Misconception 4: I should test across all Operating Systems (OS), browsers, and Assistive Technologies (AT)
Digital products can be accessed across many operating systems and browsers, using different types of assistive technology and adaptive strategies, input devices, and built-in accessibility features. This often leads teams to assume they need to test across every possible device and combination.
Thankfully, this is not the case; testing every combination would require a huge amount of time and money.
As long as products comply with accessibility standards and best practices, such as the Web Content Accessibility Guidelines (WCAG) and the Inclusive Design Principles, they should work well across most devices and technologies. You may decide to test some content, such as non-standard components, with a wider range of OSs, browsers, and ATs, but this is the exception rather than the norm.
When testing your products, focus on the most commonly used devices, browsers, and ATs. The WebAIM Screen Reader User Survey is a good place to find out about screen reader usage.
Misconception 5: I can just ask people with disabilities for their feedback
Watching people use a product is a great way to identify accessibility and usability issues. Agile User Experience Testing gives teams an opportunity to put their products in front of people with a range of disabilities and get valuable insights.
There may also be people with disabilities in your team or across your organisation who you can reach out to for quick feedback.
However, relying on people's feedback alone is generally not a good approach.
People are unique
We are all unique. Each of us, with or without a disability, has our own needs and preferences, and uses digital products in different ways. For example, not all people use screen readers in the same way.
As discussed in our inclusive user research: analysing findings post, user research with people with disabilities often produces quite different and, at times, contrasting feedback from different participants. If you're not an expert in accessibility, you can easily mistake personal preferences or opinions for accessibility issues. For example, a person new to their screen reader may struggle to navigate tables because they are still unfamiliar with table navigation keys; this doesn't mean there is a problem with the tables.
Some requirements may be missed
Unless you have access to a very large panel, there is a good chance you won't be testing all accessibility requirements. For example, Success Criterion 1.4.10 Reflow from WCAG requires that:
Content can be presented without loss of information or functionality, and without requiring scrolling in two dimensions for:
- Vertical scrolling content at a width equivalent to 320 CSS pixels;
- Horizontal scrolling content at a height equivalent to 256 CSS pixels.
Unless you happen to have a person in your panel who enlarges content and uses the settings listed in this Success Criterion, you won't be able to test this specific requirement.
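A requirement like this is, however, straightforward to check directly. As a minimal sketch, assuming a Playwright setup (the URL is a placeholder), you could set the viewport to 320 CSS pixels wide and check that the page doesn't force horizontal scrolling:

```ts
import { test, expect } from '@playwright/test';

test('content reflows at a width of 320 CSS pixels', async ({ page }) => {
  // 320 CSS pixels is the width named in Success Criterion 1.4.10 Reflow.
  await page.setViewportSize({ width: 320, height: 800 });
  await page.goto('https://example.com/'); // placeholder URL

  // If the document is wider than the viewport, the page forces
  // horizontal scrolling, which suggests content is not reflowing.
  const scrollsHorizontally = await page.evaluate(
    () => document.documentElement.scrollWidth > document.documentElement.clientWidth
  );
  expect(scrollsHorizontally).toBe(false);
});
```

This only covers part of the Success Criterion (a page can avoid horizontal scrolling and still lose information or functionality), but it shows how requirements that rarely surface in user research can still be tested deliberately.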
Running formal or informal user research is hugely valuable and allows you to uncover practical accessibility and usability issues. However, on its own, it cannot replace thorough accessibility testing. As mentioned under Misconception 1 above, you should run it in combination with other testing methods, such as automated testing and manual assessments.
Next steps
Head to Assessments to find out how we can help your organisation validate the accessibility of your products, or learn how our Agile User Experience Testing and User Research Mentoring services can assist you with running user research with people with disabilities.
We like to listen
Wherever you are in your accessibility journey, get in touch if you have a project or idea.