RT always recorded as zero in output XLS & other (slip-?) timing issues

In every trial of my fMRI experiment (made with Builder), subjects would hear a sound (of various durations between trials) and would then be asked to press a key to give a rating (through a rating-scale component). Typically, RTs would be expected to be between 0.5 and 2s.

I have noticed two critical problems with how this experiment’s data was recorded, as per its output XLS file:

  1. the RT corresponding to the rating component is always shown as zero, for all trials:
    [screenshot of the output spreadsheet: the rating component’s RT column is zero on every trial]

What could be the cause of this? I don’t think there is anything peculiar about how the experiment is programmed that would explain it. For instance, below is the timing of the rating component, meant to appear after a sound has finished playing, and to stay active for a given number of seconds.

[screenshot of the rating component’s timing settings in Builder]

  2. As a sanity check, I computed the difference (time elapsed) between the start and stop times of the sound component (or rather of the image component time-locked to it, since the soundphrase.stopped column for some reason only contains ‘None’ values); a sketch of this computation follows just below. While for most trials this difference is, as expected, equal to the stimulus duration (as measured for each WAV file), for other trials it is not: for those trials, the computed difference is shorter than the stimulus duration, by up to 1 s. This shortfall does not increase with trial number (as the run progresses); instead it simply seems to be greater for the longer stimuli. Because of this problem, in my fMRI GLM analysis, I don’t know whether to enter the actual stimulus duration or the duration PsychoPy indicates the sound was on for.
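For reference, the check was computed roughly like this (a minimal pandas sketch rather than my actual analysis code; soundImage, soundfile and wav_durations.csv are placeholder names for the image component, the condition column and my externally measured WAV durations, not the real columns/files):

```python
# Rough sketch of the duration sanity check described above.
# "soundImage", "soundfile" and "wav_durations.csv" are placeholders,
# not the actual names in my experiment.
import pandas as pd

df = pd.read_csv("11_hSs_behav_2019_Dec_03_1801behav_main.csv")
wav = pd.read_csv("wav_durations.csv")   # columns: soundfile, duration (measured per WAV)
df = df.merge(wav, on="soundfile")

started = pd.to_numeric(df["soundImage.started"], errors="coerce")
stopped = pd.to_numeric(df["soundImage.stopped"], errors="coerce")
df["played"] = stopped - started          # how long PsychoPy says the stimulus was on
df["mismatch"] = df["duration"] - df["played"]

print(df["mismatch"].describe())          # positive on some trials, by up to ~1 s
print(df[["soundfile", "duration", "played", "mismatch"]].head())
```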

I attach the XLS file, which demonstrates the problems described above. The critical columns for problem 1 are X and AG, and for problem 2, E to G. I can also upload the psyexp file, although I’d rather not make it public here.

outputProblems.xlsx (69.9 KB)

My guess is that this has to do with PsychoPy’s long-standing issue with non-slip timing, which here I’ve had to use because of the variable stimulus durations. As per this older post, the intended duration is in my case known at the start of the routine, but even so there might be a slip-timing issue. It’s strange, then, that the sound-duration discrepancy does not accumulate over time, as would be expected for such an issue, and also that the RT columns are simply 0, which rather suggests a programming issue.
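For context, here is my understanding of why a genuine slip-timing problem should show up as accumulating error, whereas non-slip timing should not (a simplified sketch of the usual Builder pattern, not my actual experiment code):

```python
# Simplified sketch of non-slip vs. relative ("slip") routine timing.
# Not taken from my experiment; just how I understand the mechanism.
from psychopy import core

routineTimer = core.CountdownTimer()  # shared across routines, as in Builder scripts


def run_nonslip_routine(intended_duration, draw_frame):
    """Non-slip: add the intended duration to the shared countdown timer.
    If the previous routine overran by a few ms, getTime() starts slightly
    negative, so this routine ends correspondingly earlier and the
    cumulative schedule stays on track (errors do not accumulate)."""
    routineTimer.add(intended_duration)
    while routineTimer.getTime() > 0:
        draw_frame()


def run_relative_routine(intended_duration, draw_frame):
    """Relative timing: each routine measures from its own start, so every
    small overrun is pushed onto the start of the next routine and the
    error accumulates over the run."""
    clock = core.Clock()
    while clock.getTime() < intended_duration:
        draw_frame()
```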

Am I right that this data set is compromised in terms of RTs, and possibly all other timing parameters? Fortunately, this was just a pilot, but it would of course be absolutely critical to know what went wrong here, so as to avoid ruining a later full fMRI experiment. Thanks in advance to the PsychoPy team for any help they may be able to offer!

OS (e.g. Win10): Win10
PsychoPy version (e.g. 1.84.x): 3.2.4

The Excel file output really shouldn’t be used for data analysis (and probably really shouldn’t even be an option anymore).

The .csv output should be your first port of call. What does that show?

Thanks, Michael, for your response. I’m not sure I understand why taking the data from the Excel file is different and not recommended, since the columns I need are in fact exactly the same as in the CSV file. I did wonder why the two formats are redundant, but then just resolved to use one or the other.

Anyway, that is to say that all the problems I report are equally present, mutatis mutandis, in the CSV: the RT values are all null, the phrase.stopped column is empty, and the computed playback duration likewise does not match the audio file length.

Some of these are probably me having done something wrong in the code, just not sure what…
11_hSs_behav_2019_Dec_03_1801behav_main.csv (95.0 KB)

Please let me know if it would be helpful for me to send you the .psyexp file over private message. Thanks again.

Hi Michael, I was wondering if you have any more thoughts on these problems I am experiencing, so that I can know whether this data set is still analysable at all despite all those mismatches. More importantly, I would need to know whether I can rely on PsychoPy to record data reliably in future. Thanks again.

The Excel format does processing on the data, taking means and standard deviations, rather than reporting raw trial-by-trial data. It also includes a separate block of information related to the participant and session details, rather than encoding this as columns in the main data table. It really isn’t suitable for proper analysis itself, already being partially processed, and not having a single-table structure.
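To make the distinction concrete, here is a minimal sketch (not your Builder-generated code) using the underlying data classes; note what the Excel writer produces by default compared with the wide-text writer:

```python
# Sketch only: illustrates summary-style Excel output vs. trial-by-trial
# wide-text output. The trial list and measure names are made up.
from psychopy import data

trials = data.TrialHandler(trialList=[{'wav': 'a.wav'}, {'wav': 'b.wav'}],
                           nReps=2, method='random')
for trial in trials:
    trials.addData('rating.rt', 0.8)  # placeholder response time

# Excel output: data summarised per condition (n, mean, SD, ... by default),
# with session details in separate blocks -- not a flat trial-by-trial table.
trials.saveAsExcel('demo.xlsx', sheetName='rawData',
                   dataOut=('n', 'all_mean', 'all_std', 'all_raw'))

# Wide-text output: one row per trial, one column per measure --
# this is the format to feed into an analysis pipeline.
trials.saveAsWideText('demo.csv')
```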

The CSV you posted looked just like a .csv representation of an Excel output file, rather than being PsychoPy’s native trial-by-trial .csv format.

This is quite likely, but we can’t tell without seeing that code.

Not without more information - a proper .csv file, any custom code you are using, and so on.

I don’t know. Hopefully you also have the original .psydat and log file outputs, which, in the worst case, should allow reconstruction of what happened?
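If you do have the .psydat file, something along these lines will often recover a proper trial-by-trial table from it (a sketch only; the file names are placeholders, and it assumes the file contains a Builder-style ExperimentHandler):

```python
# Sketch: recovering a trial-by-trial table from a .psydat file.
# File names are placeholders for your own.
from psychopy.tools.filetools import fromFile

exp = fromFile('11_hSs_behav_2019_Dec_03_1801behav_main.psydat')

# For a Builder experiment this object is normally an ExperimentHandler,
# which can re-export its entries as a wide-format CSV:
exp.saveAsWideText('recovered_trials.csv')
```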

You can certainly rely on PsychoPy to record data reliably, if you tell it to do so. What you can’t rely on is running an experiment without testing it thoroughly (especially when using custom code). Testing an experiment includes examining its data output and seeing whether it will produce what you need from your analysis pipeline, before running the first actual participant.
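As a concrete example, even a few automated checks run on a pilot output file would flag an all-zero RT column before any real participant is tested (a sketch only; substitute your own component and column names):

```python
# Sketch: quick sanity checks on a pilot data file.
# 'rating.rt' is a placeholder for whatever your rating component is called.
import pandas as pd

df = pd.read_csv('pilot_output.csv')

rt = pd.to_numeric(df['rating.rt'], errors='coerce')
assert rt.gt(0).any(), "All RTs are zero or missing - responses are not being recorded"

stopped = pd.to_numeric(df['soundphrase.stopped'], errors='coerce')
assert stopped.notna().any(), "soundphrase.stopped was never filled in"
```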


You seem to be describing a file other than the one I got/uploaded, which does in fact include trial-by-trial data, identical even to the CSV that I later uploaded. But again, this is not essential to my point, as the problems I report are present in both files (since the two are the same for the main trials routine of interest).

This is the CSV (not XLS) that PsychoPy produced. It is trial by trial, as I explain above and as can be seen in the rows of that file. What am I getting wrong here?

I will send you the code + dependency and output files separately.

I fully agree with that principle. This was in fact a trial/pilot experiment, as I explained above. Hopefully we can find what went wrong so I can fix it for the next version, or even recover the critical timings for this version. Thanks again for your help, I appreciate it. I will send the experiment in a few hours.

Hi Michael. I was wondering if you received my private message sent on this website, which contained a link to the experiment files. Thank you once again for your help in getting to the bottom of this problem!