Taking time for blind technology often means the literal reality that tasks take more time because of the gap between usability and accessibility. There is no doubt I feel fortunate for how much incredible access my technology affords me, especially now that so much of our interaction has moved to virtual platforms. Much of my daily work involves my computer, with support from my smartphone. I’m often asked how I access these devices. As with so many things, there are many possible solutions.
My PC is an ordinary laptop running Windows 10 and Microsoft Office. The adaptive technology I choose is called JAWS (Job Access With Speech). This robust program performs two primary functions for me. First, it speaks the items on the screen that most people would use their sight to read. It follows a series of rules for how to do this, with adjustable personal settings for the level of detail different users might prefer or require. The second, equally important task of JAWS is to provide a set of keyboard commands that enable interaction without a mouse, since using one would be challenging for someone who is blind or significantly sight impaired.
Those features make most things reasonably accessible. Photos, images, and graphics still pose a problem for the program, as do videos without sufficient context, but the basics are there. The “robust” set of additional keyboard commands can greatly improve the usability of conventional computer work. While a sighted person might glance past a massive number of links to reach the primary content in the literal blink of an eye, that could mean dozens of keystrokes or more for a screen reader user without a few usability features. The pleasant reality is that these features and more, such as built-in image recognition, have advanced greatly and ease much of my use. The challenge comes with change.
Unfortunately, accessibility and usability are rarely designed into a product; they are added afterwards. As a result, changes to the underlying product often disrupt accessibility and can devastate usability. For example, a recent Office update to Outlook currently prevents JAWS from reading the auto-complete suggestions for email addresses as I type a recipient. This may seem a small detail, but if I cannot tell whether the right address is selected until after I blindly choose one and then scan to verify it, I will either make more mistakes, have to memorize many complete email addresses, or spend a lot of additional time on every email I send. The impact on my productivity is tremendous.
Additionally, resolving a problem like this usually means time spent with technical support at both the screen reader company and the maker of the product in question. Even then, there is usually a delay while the two work together to resolve the issue and one of them releases a new build with the necessary fix.
This is why I try to have multiple ways to do most things, so that during “down” time I can still meet most of my essential needs. I use an iPhone, which has a built-in screen reader, VoiceOver, that lets me perform most tasks I might otherwise handle on my PC, albeit a little less efficiently. It’s a back-up system that keeps me working when accessibility or usability breaks down on either device.
This may seem like a tremendous amount of additional work and a fair bit of potential frustration. It is. Fortunately, it is also a time when excellent access to essential platforms has grown steadily easier, from collaborating on Google documents to attending Zoom and other video conferences. I’m tremendously appreciative of how far we’ve come and hopeful that we continue taking the steps necessary to make accessible design a more natural and thoughtful part of the process for everyone.
After all, it’s how I continue to bring you these blogs, Words for Wednesday, and the presentations at the heart of our 2020 Vision Quest mission.