Timing issue with PsychoJS

I ran a visual masked priming experiment on Pavlovia over the last couple of days. The script also includes a way to store the actual presentation time of each routine (i.e., forward mask and prime word), as discussed in this thread. I basically need to be sure that:

  1. the forward mask (########) lasts 500 ms, and
  2. the prime word lasts 33 ms.

When coding the script, I figured it would be better to set the routine durations in seconds (rather than in frames), since subjects may be using monitors with different refresh rates.

When I tested the experiment on my computer (a Mac, using Safari/Chrome/Firefox), the times were more or less exact (with oscillations of a couple of ms). However, I noticed that for half of my subjects (70 in total) the prime duration varied greatly. The prime duration values recorded in the data files are the following (from the R console):

[1] 0.000 0.001 0.002 0.003 0.004 0.005 0.006 0.007 0.008 0.009 0.010 0.011 0.012 0.013 0.014 0.015 0.016 0.017
[19] 0.018 0.019 0.020 0.021 0.022 0.023 0.024 0.025 0.026 0.027 0.028 0.029 0.030 0.031 0.032 0.033 0.034 0.035
[37] 0.036 0.037 0.038 0.039 0.040 0.041 0.042 0.043 0.044 0.045 0.046 0.047 0.048 0.049 0.050 0.051 0.052 0.053
[55] 0.054 0.055 0.056 0.058 0.059 0.060 0.061 0.062 0.063 0.064 0.067 0.068 0.070 0.072 0.079 0.080 0.083 0.097
[73] 0.102 0.131 0.133 0.142 0.145

So there were trials whose prime ended up not showing at all…! This is not ideal, especially for a time-dependent design such as masked priming. It could have been due to tasks running in parallel while subjects took the experiment, which is something we can’t really control (can we?). I was wondering, then, whether there is a feasible solution that would at least limit the damage (that is, reduce the range of variation), if not avoid the issue altogether.