
'None' output values, and possible consequences of PsychoPy's slip-timing issues

(I am reposting a problem I reported previously, because the original thread has been unresponsive for months, even after I provided the requested further details. I am hoping that this renewed attempt will yield some suggestions for how these ongoing problems can be worked around.)

In every trial of my fMRI pilot, subjects heard a sound of varying duration and were asked to press a key to give a rating through a rating-scale component. Three critical problems exist in this experiment's data, as recorded in its output (CSV & XLS) files:

  1. The rating component's RT is shown as zero for all trials:
    [screenshot: 12-12-2019 16.39.13]

  2. The value of several components' parameters (e.g. phrase.stopped, getLikingRating_2.response, etc.) is either blank or 'None'.

  3. As a sanity check, I computed the difference (time elapsed) between the start and stop times of the sound component (or rather of the image component time-locked to it, cf. problem #2); a sketch of this check is shown below. While for most trials this difference is, as expected, equal to the stimulus duration (as measured for each WAV file), for other trials it is not; for those trials, the computed difference is always shorter than the stimulus duration, by up to 1 s. This shortfall does not increase with trial number (as the run progresses); instead it simply seems to be greater for the longer stimuli. Because of this problem, I don't know whether to enter the actual stimulus duration or the duration PsychoPy says the sound was on for into my fMRI GLM analysis.
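(For anyone wanting to reproduce that check, a minimal sketch along these lines should do it; the column names sound_1.started, sound_1.stopped and wavDuration are placeholders for the actual columns in my files.)

import pandas as pd

# Load Builder's trial-by-trial output
df = pd.read_csv('01_hSs_behav_2019_Dec_06_1107.csv')

# Placeholder column names -- substitute the real component/condition names
playedFor = df['sound_1.stopped'] - df['sound_1.started']
shortfall = df['wavDuration'] - playedFor

# Trials where the recorded on-time falls short of the measured WAV duration
print(df.loc[shortfall > 0.02, ['sound_1.started', 'sound_1.stopped', 'wavDuration']])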

I attach output files that demonstrate these problems. The critical columns are: AO (in the CSV) for problem 1; CF for problem 2; and (in the XLS, the added columns) E to G for problem 3. I can also upload the psyexp & output files privately to anyone who is happy to have a look into this. Michael had mentioned that output data should be retrieved from the CSV rather than the XLS outputs; however, the same problem exists in both formats.

01_hSs_behav_2019_Dec_06_1107.csv (72.4 KB)
outputProblems.csv (70.1 KB)
outputProblems.xlsx (69.9 KB)

Problem 3 might have to do with PsychoPy's long-standing slip-timing issue: because of the variable stimulus durations, I've had to rely on "slip" rather than non-slip timing here. As per this older post, the intended duration is in my case known at the start of the routine, but even so there might be a slip-timing issue. But it is strange then that the sound-duration discrepancy does not accumulate over time, as would be expected for such an issue, and also that the RT columns are simply 0, which suggests a programming issue instead.
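One thing I may try next, to narrow this down, is timestamping the sound's actual start and stop myself in a code component, instead of relying on the .started/.stopped columns. A minimal sketch, assuming the sound component is called sound_1 (soundOnset / soundOffset are names I've made up here, and I'm not certain that every audio backend sets the status to FINISHED reliably):

# Begin Routine
soundOnset = None
soundOffset = None

# Each Frame
if sound_1.status == STARTED and soundOnset is None:
    soundOnset = globalClock.getTime()
if sound_1.status == FINISHED and soundOffset is None:
    soundOffset = globalClock.getTime()

# End Routine
thisExp.addData('soundOnset_manual', soundOnset)
thisExp.addData('soundOffset_manual', soundOffset)
if soundOnset is not None and soundOffset is not None:
    thisExp.addData('soundPlayed_manual', soundOffset - soundOnset)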

Part of the missing information in this data set can be recovered from the LOG files, but I'd of course be keen to find out what went wrong here. Thanks in advance to the PsychoPy team for any help they may be able to offer!!
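(In case it helps others doing the same salvage, this is roughly how the timestamps can be pulled back out of the .log file; a minimal sketch assuming the default tab-separated format of time, level, message, and that the log shares the CSV's file stem:)

# Recover component timestamps and keypresses from the PsychoPy .log file
with open('01_hSs_behav_2019_Dec_06_1107.log') as f:
    for line in f:
        parts = line.rstrip('\n').split('\t')
        if len(parts) < 3:
            continue
        t, level, message = parts[0].strip(), parts[1].strip(), parts[2]
        if 'sound_1' in message or 'Keypress' in message:
            print(f'{float(t):10.4f}  {level:8s}  {message}')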


PsychoPy version (e.g. 1.84.x): 3.2.4 (Builder used)
OS (e.g. Win10): Win10

As an update, problem 3 disappears in version 2020.1.3, and/or after making the following settings in PsychoPy's hardware preferences for the audio library/driver:
audio library: ['pyo', 'pygame', 'sounddevice']
audio driver: ['ASIO', 'Audigy', 'Primary Sound']
audio latency mode: 4 (latency critical)
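If it is useful to anyone, I believe the same settings can also be applied in code at the top of a script, before any sound is created; a minimal sketch (the preference names here mirror the Preferences > Hardware fields, and audioLatencyMode only applies from the 2020.x releases, as far as I know):

from psychopy import prefs

# Set audio preferences before importing psychopy.sound,
# otherwise the sound backend has already been chosen
prefs.hardware['audioLib'] = ['pyo', 'pygame', 'sounddevice']
prefs.hardware['audioDriver'] = ['ASIO', 'Audigy', 'Primary Sound']
prefs.hardware['audioLatencyMode'] = 4  # 4 = "latency critical" (2020.x only, I believe)

from psychopy import sound, core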

However, problems 1 and 2 are still unsolved, so any suggestions I could try to make them work would be very helpful and much appreciated. Others are reporting similar problems, e.g. here.
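In the meantime, a workaround I'm considering for problems 1 and 2 is to write the rating out explicitly from an End Routine code component, using the rating scale's own getRating() / getRT() / getHistory() methods instead of relying on Builder's automatic columns (the output column names below are ones I've made up, and I haven't yet verified that this sidesteps the zero-RT issue):

# End Routine: store the rating data explicitly, alongside Builder's own columns.
# getRating() / getRT() return None if no response was made before the routine ended.
thisExp.addData('likingRating_manual', getLikingRating_2.getRating())
thisExp.addData('likingRT_manual', getLikingRating_2.getRT())
thisExp.addData('likingHistory', getLikingRating_2.getHistory())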

It might be relevant to note that this script contains a rating scale component (these are known to be buggy, and I recall they were being considered for phasing out of PsychoPy), and that in my trial routine I have a code component to prevent premature keypresses while the stimuli are still playing, with the following code under Each Frame:

# Once the rating scale has started, flush any keys pressed while the sound
# was still playing, so they cannot count as the rating response
if choiceRating.status == STARTED:
    if not eventsCleared:
        event.clearEvents()
        eventsCleared = True
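(The flag is reset in the same code component's Begin Routine tab, i.e. something like:)

# Begin Routine: reset the flag so events are cleared at most once per trial
eventsCleared = False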

Thanks in advance once again to this great community!