The primary way that people with low or no vision interact with PCs, phones and other devices is through software called a screenreader. At a really basic level, a screenreader serves two main purposes:
1. It uses synthesized speech to speak out the label of any interface element that receives focus
2. It also speaks out any notifications that the developer chooses to send to a user, to let them know about any changes that aren’t a result of their interaction
A simple principle, but a genuinely life-changing one. Enough that smartphone uptake was far quicker among people who are blind than among the rest of the population.
On desktop you’re a bit limited in what you can do with screenreader-accessible interfaces, as almost everyone who uses a screenreader uses the keyboard as their sole input device, moving sequentially backwards and forwards through the elements. That means you only ever have a linear picture of the interface: it’s spoken out as a simple list of what’s on the screen, with no sense of where on the screen anything actually is.
On mobile it works quite differently. Android/iOS/WP all have built-in screenreaders that work by remapping gestures. Anything that would normally need one finger is changed to two, e.g. dragging down with two fingers to scroll. That frees up one-finger gestures, so you can just drag your finger around the screen and it will speak out the label of whatever your finger is currently over.
So you’re using your finger like a surrogate eye, ‘seeing’ the layout of the screen. This hugely reduces the barriers to games being blind-accessible, as regardless of your level of vision you’re still using the same kind of interface in the same kind of way.
In particular if (as many mobile games are) a game is about navigating an interface rather than navigating an environment, it could be perfectly suited to blind-accessibility, and it could be very easy to achieve it too.
If you’re using native UI elements (e.g. UIKit on iOS), all of the above just works automatically; the bulk of the work is simply giving your UI elements logical names, and firing off notifications whenever a user needs to be told something. It’s simple enough that Zynga managed to make Hanging with Friends blind-accessible completely by accident. Each platform has its own techniques, e.g. Apple’s developer guidelines for working with VoiceOver. There’ll be some interface subtleties that you need to uncover and take care of too, which is where the excellent blind gamer community comes in. You can easily find playtesters on places like Audyssey, Audiogames and Applevis.
There are other ways to approach blind-accessibility; Injustice: Gods Among Us, for example, achieves blind-accessible gameplay through good sound design alone. But for mechanics that suit it, supporting screenreader access is an extremely cost-effective technique.
If you’re using an engine, screenreader support falls apart. You’re reliant on the operating system being able to see the individual UI elements, but engines generally just output one single lump of pixels, rather than a collection of native UI elements. The engine will no doubt have its own internal UI element system, but that is completely invisible to the screenreader.
So if you’re a developer looking for a way to make your game blind-accessible, you’re then pretty much left with three options –
1. Scrap everything that’s been done and start again from scratch with a native codebase for each platform, which is not in any way financially viable.
2. Develop a custom auditory interface, ensuring that all aspects of interface and gameplay are accessible using your own bespoke audio. Some developers do take this route (e.g. on FREEQ), and it can actually be desirable, giving a far more immersive audio experience than a generic synthesized screenreader voice, but for most games it is not financially viable.
3. Forget about blind-accessibility.
So to avoid that lose-lose-lose situation, what’s inside the output of an engine needs to be made accessible to screenreaders. Obviously that kind of work lies within the realm of the engine developers; there isn’t much that your average game developer can do about it.
There are a couple of potential workarounds.
For developing on PC, you can bypass the issue entirely by sending your notifications to the Windows clipboard, and likewise sending the label of any interface element to the clipboard when that element receives focus. Gamers can then use an external bit of software (clipreader) to automatically intercept whatever’s in the clipboard and send it on to the screenreader.
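To make the pattern concrete, here’s a minimal sketch of the clipboard-relay hack. The actual clipboard write is injected as a callable, because in a real Windows build it would go through the Win32 clipboard API or a library such as pyperclip; all of the names here (`ClipboardAnnouncer`, `on_focus`, `notify`) are illustrative assumptions, not part of any real tool:

```python
class ClipboardAnnouncer:
    """Relays UI labels and notifications by writing them to the clipboard,
    where an external tool like clipreader can pick them up and pass them
    on to the screenreader."""

    def __init__(self, set_clipboard):
        # set_clipboard: a callable that places a string on the system
        # clipboard (e.g. a Win32 SetClipboardData wrapper on Windows)
        self._set_clipboard = set_clipboard
        self._last_sent = None

    def on_focus(self, element_label):
        """Call whenever an interface element receives focus."""
        self._send(element_label)

    def notify(self, message):
        """Call for notifications the player needs to hear."""
        self._send(message)

    def _send(self, text):
        # Skip empty text and immediate duplicates (e.g. repeated focus
        # events on the same element) so the screenreader isn't spammed
        if text and text != self._last_sent:
            self._set_clipboard(text)
            self._last_sent = text


# Usage with a fake clipboard, for illustration:
sent = []
announcer = ClipboardAnnouncer(sent.append)
announcer.on_focus("Play hand")
announcer.on_focus("Play hand")   # duplicate focus event, not resent
announcer.notify("Your move")
# sent is now ["Play hand", "Your move"]
```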
It’s not ideal at all; it’s a hack, and it relies on additional third party software that not all blind gamers know about, but it’s something. The technique was used successfully for the UI of Skullgirls, which gained a good blind following as a result.
There are also third party libraries that can take your text output and pass it to a limited range of screenreaders, such as Tolk, which Skullgirls has since upgraded to from clipreader.
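As a rough illustration, here’s a sketch of loading Tolk from Python via ctypes. `Tolk_Load` and `Tolk_Output` are real exports of the Tolk DLL, but the wrapper below is an assumption for illustration; Tolk ships its own official bindings for several languages, which you’d normally use instead:

```python
import ctypes


def load_tolk(dll_path="Tolk.dll"):
    """Attempt to load Tolk, a Windows library that abstracts over the
    individual screenreader APIs. Returns the loaded library handle, or
    None if the DLL isn't available (e.g. on non-Windows systems)."""
    try:
        tolk = ctypes.CDLL(dll_path)
    except OSError:
        return None
    # Tolk_Load initialises screenreader detection; Tolk_Output sends
    # text to whichever screenreader is currently active
    tolk.Tolk_Load()
    tolk.Tolk_Output.argtypes = [ctypes.c_wchar_p, ctypes.c_bool]
    tolk.Tolk_Output.restype = ctypes.c_bool
    return tolk


def announce(tolk, text, interrupt=False):
    """Speak a notification through the active screenreader, if Tolk
    loaded; silently does nothing otherwise."""
    if tolk is not None:
        tolk.Tolk_Output(text, interrupt)
```

The graceful `None` fallback matters: the same build should keep working for sighted players on machines where no screenreader support is present.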
There was another suggested workaround for iOS 8+, a way for engine developers to get around the problem without having to fundamentally rework how their rendering works. In iOS 8+ there’s a feature specifically for constructing an invisible duplicate UI that sits over the top of your app, and mapping between it and whatever happens in the app. Again it’s a bit of a hack, but in theory it would have reduced the amount of work needed, which is critically important if you’re developing for small niches.
The details are pretty light, but they come straight from Apple’s developer documentation.
Sadly though, it doesn’t work. The in-game rendering happens on the GPU while the native UI layer is handled on the CPU, and mobile devices don’t have the bus width to keep swapping between the two without killing the framerate. Other devices do have the bus width, however, so perhaps the same principle could be applied on other platforms.
And there’s another option for engines, which is to forget about trying to expose the UI itself to the screenreader at all. Instead, handle focus management inside the engine, and just send a notification to the screenreader APIs whenever an interface element receives focus. It’s not what the notification systems are designed for, but it’s something. It does require implementation against each of the different individual screenreader APIs, which is a particular issue on PC, but again libraries such as Tolk can help.
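The engine-side approach above can be sketched as follows. `speak` stands in for whatever screenreader output call is available on the platform (e.g. Tolk’s output call on Windows); the class and method names are illustrative assumptions, not any particular engine’s API:

```python
class FocusManager:
    """Engine-internal focus handling: tracks which UI element currently
    has focus and announces its label through a screenreader notification
    hook, since the screenreader can't see the engine's own UI elements."""

    def __init__(self, labels, speak):
        # labels: element labels in a logical reading order
        # speak: callable that sends text to the screenreader
        self.labels = list(labels)
        self.speak = speak
        self.index = -1  # nothing focused yet

    def focus_next(self):
        """Move focus forward (e.g. on Tab / d-pad down), wrapping round."""
        self.index = (self.index + 1) % len(self.labels)
        self._announce()

    def focus_previous(self):
        """Move focus backward (e.g. on Shift+Tab / d-pad up)."""
        self.index = (self.index - 1) % len(self.labels)
        self._announce()

    def _announce(self):
        self.speak(self.labels[self.index])


# Usage with a fake screenreader hook, for illustration:
spoken = []
menu = FocusManager(["New game", "Options", "Quit"], spoken.append)
menu.focus_next()      # speaks "New game"
menu.focus_next()      # speaks "Options"
menu.focus_previous()  # speaks "New game" again
```

This mirrors the linear, sequential picture of the interface that desktop screenreader users already navigate with native UIs.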
How to improve the situation
At the moment, to be frank, we are not in a good place.
1. Engine developers are frantically busy with huge backlogs, and haven’t yet seen blind accessibility as a priority compared to everything else that they’re supposed to be doing (with a couple of rare exceptions, such as Ren’Py).
2. Game devs want to make their games blind-accessible, but they’re blocked from doing so by the engine, and have to tell blind gamers that it’s not technically/financially feasible.
3. This leaves the gamers angry and frustrated, as all they see is a few developers very easily making their games blind-accessible, and other developers inexplicably (and, they presume, unreasonably) saying it can’t be done.
But this can change. It shouldn’t be a hugely technically or financially challenging thing for an engine developer to implement, and it doesn’t have to be via the backlog route either. It’s the kind of thing that could be suited to a developer’s 10% time, or hackdays.
Game developers can also do their bit: firstly by making feature requests to the engine developers, to let them know that this is genuinely something game developers want to see; and secondly by doing a better job of explaining to gamers that the reason a game can’t be adapted is the type of tech it’s built on, rather than just saying it would be too expensive, which leads to misunderstandings.
And lastly, blind gamers. There’s a great deal of advocacy work that goes on in the blind gamer community, but it is largely misplaced, aimed at game developers who often genuinely do want to make their games accessible but whose hands are tied by the tech. So be more understanding when a game developer says it’s not possible; find out what engine they’re building on top of, and put your energies into pursuing it with the developers of that engine instead.
Blind gamers are fantastic to work with, are loyal fans and advocates with huge word-of-mouth power, and as with anyone who has a limited range of recreational opportunities, gaming can be hugely important. And if development costs are kept low, it’s a profitable low-competition niche to cater for too. All sides of the equation are at the moment unnecessarily missing out on the benefits of blind gamers being more included in gaming.
So there really needs to be a turnaround, and it wouldn’t take very much for there to be one. If even just one or two of the more popular engines were to find some kind of solution, you would see huge swathes of games going from completely and fundamentally inaccessible to mostly accessible, with all that’s left on the game developer’s side being minor interface tweaks and notifications.