Whenever I am presented with steps on how to conduct an accessibility evaluation, user testing with real users is always listed as a step. What I rarely see is information on why testing is so important, what the benefits of testing with real users are, and what you get by testing with real users that you can’t get by using accessibility tools or testing with your own screen reader.
I recently conducted an accessibility evaluation for a client. I used the following set of accessibility tools:
- Fangs (provides screen reader output, headings list, links list)
- WAVE Toolbar (identifies accessibility coding errors)
- AIS Web Accessibility Toolbar (identifies WCAG violations, provides color contrast analyzer)
I also tested the site for keyboard-only usage (which entails tabbing through the site without using the mouse), and I conducted in-person user testing. Not only did I discover additional errors during user testing; the testing also had other benefits:
- It allowed me to prioritize errors from a user’s point of view. Testing allows you to identify the show-stoppers that should be addressed first because they prevent users from completing tasks.
- It helped me to interpret the Web Content Accessibility Guidelines (WCAG 2.0). Testing can provide insight into how a guideline should be implemented if you’re unsure, and it can tell you which guidelines are the most important for your site (if you don’t have the ability to implement all of them). For example, testing can make it very clear which images should have descriptive alt text and which should have an empty alt attribute so that the screen reader will skip over them.
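To make that last point concrete, here is a minimal sketch (the file names and alt text are invented for illustration): an image that carries information gets descriptive alt text, while a purely decorative image gets an empty alt attribute so screen readers skip it.

```html
<!-- Informative image: the alt text conveys what the image communicates -->
<img src="rate-chart.png" alt="Line chart of mortgage rates over the past 12 months">

<!-- Decorative spacer: alt="" tells the screen reader to skip it entirely;
     omitting the alt attribute would instead cause many screen readers
     to announce the file name -->
<img src="spacer.gif" alt="">
```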
Given the scope of my project, I was only able to conduct testing with one user, who was blind and used the JAWS screen reader. Testing with one user is better than none, but certainly not ideal. Ideally, testing would be conducted with users who have a wide variety of disabilities, such as:
- Low vision and colorblindness
- Deafness and limited hearing
- Motor impairments (users who cannot use a mouse, users with conditions like arthritis)
- Seizure disorders (movement on the screen can trigger seizures for those with photosensitive epilepsy)
- Learning disabilities
Testing with users who have different disabilities ensures that a site can be used with assistive devices such as screen magnifiers and voice-activated software.
During user testing, I identified a list of issues that the tools couldn’t flag:
- When the user arrived at the home page, the focus was in the username/password fields, which confused her because she was a potential customer of the site, not a current one. She assumed that the boxes were at the top of the page (when in reality they were closer to the bottom) and did not understand that the screen reader had skipped over the navigation, which made it hard for her to find it.
- The user was unable to recover from errors on forms; after submit, focus was placed on the “Cancel” button instead of the error message text. She did not know what she needed to fix on the form.
- Spacer images in the site had alt text. This didn’t seem like a huge deal during the toolbar evaluation, but it quickly became quite obvious that they created a lot of unnecessary “noise” for a screen reader user. The user continually heard “image slash spacer.”
- There were calculator links on the home page that initially seemed clearly labeled, but during the testing it became obvious that the user was unaware the links were all types of calculators. The heading for links was “Calculators” but the word calculator was not restated in each link. Because the screen reader let her skip from one link to the next, she missed that the links were all types of calculators.
- There was a “top” link that had an up arrow associated with it (“Top ↑”). This link did not seem troublesome in the evaluation, but the user only heard “top” and did not understand what this meant. When it was explained to her, she said “return to top of page” would have made more sense.
- Testing revealed how confusing some of the form fields were. For example, the user did not understand what a field labeled “Type” was asking for. There were two radio buttons that allowed the user to select a type, but since the screen reader had not reached the radio buttons, she didn’t understand what type meant. The fields in question did not appear to be a problem in the other tests, especially since the fields could be navigated through using the keyboard only. When read aloud, however, page content is presented in a linear fashion. Getting to the controls wasn’t a problem, but breaking a single concept into multiple parts made it difficult to understand.
- On several forms, notes that pertained to fields were given after the fields. The screen reader did not read the note until the user had finished filling out the field. She needed this information before the field, not after.
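Several of these issues come down to small markup changes. The sketch below is one plausible set of fixes for the spacer, “top” link, calculator-link, and field-note issues; the link text, URLs, and IDs are invented for illustration, not taken from the client’s site.

```html
<!-- Spacer images: empty alt so the screen reader skips them -->
<img src="spacer.gif" alt="">

<!-- "Top" link: link text that makes sense when heard on its own -->
<a href="#top">Return to top of page</a>

<!-- Restate the context in each link so it survives link-by-link navigation -->
<h2>Calculators</h2>
<ul>
  <li><a href="/mortgage">Mortgage calculator</a></li>
  <li><a href="/savings">Savings calculator</a></li>
</ul>

<!-- Put notes that affect a field before the field in the reading order,
     and tie them to the control with aria-describedby so the screen reader
     announces the note when the user reaches the field -->
<p id="password-note">Passwords must be at least 8 characters.</p>
<label for="password">Password</label>
<input type="password" id="password" aria-describedby="password-note">
```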
In conclusion, analyzing a site with accessibility tools such as the ones mentioned in this article is a great way to start looking at how accessible it is, but the real insight comes from testing the site with users.