I am interested in running online studies for people with visual impairments; however, I am wondering how well PsychoPy/Pavlovia will work with commonly used screen readers and other assistive technologies. (I am in the process of investigating this myself, but as it seems like it could be a wider issue of making studies accessible to demographics with specific needs, I am raising it here for discussion.) I am mostly looking to present audio stimuli on each trial, which should work well, but I wonder how well screen readers would pick up text elements and other built-in PsychoPy components (e.g., rating scales)? I suppose a workaround would simply be to present an audio file with the instructions, but that feels less than ideal if, for example, a participant would like to go back over certain text without having to replay the whole thing.
I would be interested to hear if anyone has experience of doing something similar, and also whether any other accessibility issues are arising with the online studies currently being run.
For Pavlovia and PsychoPy themselves, i.e. for the experimenters, we're keen to make things as accessible as possible. For instance, we have a fair degree of compatibility with screen readers on Pavlovia.org, and both Pavlovia and PsychoPy aim to let users navigate purely with the keyboard as much as possible.
For experiments, i.e. for the participants, that aim is more complicated. We use rendering methods (WebGL and Canvas) that make accessibility harder for us to implement: stimuli are drawn as pixels onto a canvas rather than existing as elements in the page's DOM, so there is nothing for a screen reader to pick up. That decision was made to optimise rendering speed and flexibility, but it does also have the slight advantage of making it harder for bots to read/cheat PsychoJS experiments (see the third issue below).
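Just to make that concrete, here is a minimal sketch (ordinary browser APIs, not PsychoJS code) of why canvas-drawn text and DOM text differ for assistive technology:

```javascript
// Text drawn onto a <canvas> is just pixels: it never enters the DOM,
// so a screen reader has nothing to announce.
const canvas = document.createElement('canvas');
document.body.appendChild(canvas);
const ctx = canvas.getContext('2d');
ctx.font = '24px sans-serif';
ctx.fillText('Press the key matching the ink color', 20, 40);  // not readable

// By contrast, ordinary DOM text (which PsychoJS deliberately avoids in
// favour of fast WebGL/Canvas rendering) is exposed to screen readers
// automatically.
const p = document.createElement('p');
p.textContent = 'Press the key matching the ink color';         // readable
document.body.appendChild(p);
```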
A lot of what experimenters implement in their tasks (in the sort of tasks they typically need PsychoPy for) is fundamentally not compatible with accessibility requirements. Below are a few of the issues that spring to mind. These aren't to say that we shouldn't implement accessibility within experiments, but they illustrate why the issue is thorny and why we haven't implemented such features yet. If we do, I certainly think we would make them optional, but we also need to work out which aspects of accessibility users most want.
Take the Stroop task as an illustration; it's a classic example of the sort of study people implement here. It is a speeded reaction-time task (not accessible to people with certain learning impairments) that requires the participant to report the color of the letters (not accessible to color-anomalous participants) and tests whether they were distracted by the word those letters spell out (which requires that perception of the letters is rapid and automatic). If some data came from screen-reader-aided participants and some from participants using unaided perception, the results would become hard to interpret.
Another issue is that the dynamic, rich nature of the stimulus parameters PsychoPy provides would often be hard for a screen reader to capture in any meaningful way. What happens, for instance, if the experimenter programs their stimulus to bounce or throb using the dynamic position/size parameters? How should we translate that to an audio report? (Possibly we just pass this responsibility on to the experimenter and give stimuli essentially an "alt" field.)
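Purely as a sketch of that "alt" field idea (nothing like this exists in PsychoJS today; the `announceStimulus` helper and the alt-text parameter are hypothetical), one option would be to mirror an experimenter-supplied description into a visually hidden ARIA live region next to the canvas, so screen readers announce it whenever the stimulus changes:

```javascript
// Hypothetical sketch only: PsychoJS has no such feature yet.
// Idea: the experimenter supplies an "alt" description per stimulus, and
// the library echoes it into an off-screen live region for screen readers.
const liveRegion = document.createElement('div');
liveRegion.setAttribute('aria-live', 'polite');
liveRegion.style.position = 'absolute';
liveRegion.style.left = '-9999px';        // visually hidden, still announced
document.body.appendChild(liveRegion);

// A made-up helper that the experiment (or the library) might call whenever
// a stimulus carrying an alt description is drawn or updated.
function announceStimulus(altText) {
  liveRegion.textContent = altText;
}

// e.g. announceStimulus('The word "GREEN" printed in red ink');
```

Even with something like this, translating continuously changing position/size parameters into a useful spoken description would remain the experimenter's problem rather than something we could automate.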
One more issue is the worry about bots running your studies: if screen readers can read the contents of the trials, then, as well as informing humans with anomalous perception, they are also informing bots about what is being presented. If you aren't recruiting on automated web platforms (like MTurk) then this might not be a concern, but for some scientists the increased bot risk would be quite annoying.
So, I don’t want to sound like we aren’t keen to make experiments accessible, but it is a tricky issue and we certainly aren’t there yet.
Keen to see other people’s thoughts though. What things would people most want to see? (Given that accessibility is a huge domain!)
best wishes, and thanks @mgall1992 for bringing it up for discussion
Jon