
Stimulus Timing for RT Studies

OS: MacOS 12.0.1 (Monterey)
PsychoPy version: 2021.2.3

What are you trying to achieve?: Two things. (1) I want to know how accurate the expInfo['frameRate'] value that the data log file reports is. (2) I want to present different stimuli, each for 200 ms, then, on different trial types, wait for either 0, 200, 600, or 1400 ms before presenting another stimulus to which participants are to respond. RTs are recorded to that target stimulus.

What did you try to make it work?: Regarding (1) above, I present a stimulus for 40 frames (at least I think that's what 0 as the start frame and 41 as the stop frame does), then I record the stimulus duration as the difference between its start and stop times in the log file. From that, I can calculate the "observed" frame rate. I then compare this to the frame rate reported by the expInfo['frameRate'] variable in the log file. Regarding (2) above, I have a variable (called TraceInt) in the Excel conditions file, and each stimulus used in the experiment has a different value tied to it. The variable indicates the number of frames to wait before presenting the target stimulus.

What specifically went wrong when you tried that?:

(1) The values I get for the observed frame rate and the reported frame rate from expInfo[frameRate] don’t agree. So, I am wondering whether the start/stop times in the log file are more accurate.

(2) The problem is that when running the experiment online, different users' computers will have different frame rates. To ensure that the same 0, 200, 600, and 1400 ms wait times are applied across different computer setups, I tried introducing code (following a suggestion by Becca on the forum) in the Builder window, prior to the stimulus presentation, that would normalize things based on the computer's frame rate (from the expInfo['frameRate'] variable). That code calculates how many frames on a given machine would be required to achieve 0, 200, 600, or 1400 ms. The problem is that I cannot figure out how to read the relevant variable (TraceInt) from the conditions file and then incorporate it into the code to compute the number of frames necessary for that user's machine.

The code looks like:

waitFrames = int(200 / (1000/expInfo['frames']))

This should ensure that the wait time will be close to 200ms, independent of monitor frame rate. However, I need to replace the 200 with a variable (TraceInt) that changes from one trial to the next, and can’t see how to do that.

The other thing, of course, is that if the start/stop times are a more accurate way of determining the frame rate, then I'm not sure this method would even work, because it is based on the expInfo['frameRate'] variable, which may be inaccurate. For instance, when testing it out with a single value, 200, it reliably produced observed durations of ~135 ms, not 200 ms, on my screen.

I’d be very grateful if anyone can help with these issues.

Hi @ardelamater,
regarding your first question: I have never used expInfo['frames'], but I guess it is related to the .frames attribute of the Window object (and also to .monitorFramePeriod). These properties are measured automatically when a Window is created in PsychoPy (checkTiming=True by default in psychopy.visual.Window). The frame duration (and thus the refresh rate) is computed similarly to the idea you describe: frame times are collected until a sufficient number of consecutive frames have very similar durations (up to a specified precision; by default the standard deviation of the frame times should be below 1 ms). If you want to control this process (for example, to increase its precision) you can call win.getActualFrameRate() yourself.
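To illustrate that measurement idea, here is a minimal pure-Python sketch (the helper name, defaults, and logic are invented here for illustration; this is not PsychoPy's actual implementation): given a list of frame timestamps, look for a run of consecutive frame durations whose standard deviation falls below ~1 ms and take the mean of that run.

```python
import statistics

def estimate_frame_rate(frame_times, n_identical=10, threshold_ms=1.0):
    """Estimate the refresh rate (Hz) from frame timestamps in seconds.

    Scans for n_identical consecutive frame durations whose standard
    deviation is below threshold_ms; returns None if no such run exists.
    (Hypothetical helper for illustration only.)
    """
    durations = [t1 - t0 for t0, t1 in zip(frame_times, frame_times[1:])]
    for i in range(len(durations) - n_identical + 1):
        window = durations[i:i + n_identical]
        if statistics.stdev(window) * 1000 < threshold_ms:
            return 1.0 / statistics.mean(window)
    return None  # timing too noisy (or too few frames) to estimate

# Simulated timestamps from a clean 60 Hz monitor:
times = [i / 60 for i in range(20)]
rate = estimate_frame_rate(times)
```

With clean simulated timestamps this recovers a rate very close to 60 Hz; on a real machine the per-frame jitter is exactly why a stability criterion like the 1 ms threshold is needed.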

Regarding your second issue:
To include the TraceInt variable in your calculations, use that variable to store the intended presentation time in seconds (not the number of frames!) and then do, for example:

waitFrames = int(TraceInt / win.monitorFramePeriod)

So if your requested presentation time is 0.2 seconds and the frame period is about 0.01666 s (the frame duration at a 60 Hz refresh rate), then 0.2 / 0.01666 gives you 12, which is the correct number of frames to get 0.2 seconds on a 60 Hz monitor.
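To spell that calculation out for the four wait times in this thread (a minimal sketch; a 60 Hz frame period is assumed, and int(round(...)) is used rather than bare int() to guard against floating-point results that land just below a whole number):

```python
frame_period = 1 / 60  # seconds per frame on a 60 Hz monitor

# Intended wait times in seconds, as they would be stored in the
# TraceInt column of the conditions file:
trace_ints = [0.0, 0.2, 0.6, 1.4]

wait_frames = [int(round(t / frame_period)) for t in trace_ints]
print(wait_frames)  # [0, 12, 36, 84]
```

On an 85 Hz monitor the same code would yield [0, 17, 51, 119] frames, which is the point of doing the conversion per machine.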
Also bear in mind that int() does not round the value, so:

int(0.99)

gives 0. So you may prefer to use int(round(...)) for example (but then don’t be surprised that round(1.5) and round(2.5) both give you 2 :slight_smile: ).
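To make both pitfalls concrete (plain Python, nothing PsychoPy-specific; the math.floor variant is just one possible "round half up" alternative):

```python
import math

print(int(0.99))               # 0 -- int() truncates toward zero
print(round(1.5), round(2.5))  # 2 2 -- Python 3 rounds halves to even

# A "round half up" alternative, if that behaviour matters:
print(math.floor(1.5 + 0.5), math.floor(2.5 + 0.5))  # 2 3
```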


Hello @mmagnuski,

Thanks very much for your reply. I've discovered some things. First off, the code I originally pasted was a bit off. My code actually used expInfo['frameRate'], NOT expInfo['frames']. I also tried what you suggested, waitFrames = int(TraceInt / win.monitorFramePeriod). That works as well. HOWEVER, across the different stimulus conditions I have in my experiment, the wait times are supposed to be:
0, 200, 600, or 1400 msec. The OBSERVED wait times on my machine (in a trial run) were 0, 38.1, 552.5, 1596.5 msec. Those are very discrepant values. I calculated the observed times by taking difference scores between the values that appear in the data log file for start/stop times. I am assuming those are pretty accurate, but would love to hear feedback as to whether that is a reasonable assumption regarding how PsychoPy works…

Now, and here’s the really interesting thing, when I redo the experiment but this time using time, instead of # frames, to determine stimulus and wait time durations, I get values in my log file that average 0, 195.7, 593.7, and 1394 msec. Those values are actually quite close to the programmed values (0, 200, 600, 1400), and I can live with that kind of discrepancy.

Thus, if one is running an online experiment, it seems like the best way to accomplish reasonable across-machine timing is to use time for the start parameter and duration for the stop parameter, and NOT numbers of frames. I assume that timing by time/duration parameters will be fairly robust across machines that have different frame rates. Again, I'd love to hear whether that is a reasonable assumption as well.

Regards,
Andrew


I would agree that timing in seconds seems safer than frames for online studies.


I also agree, I somehow didn’t notice the “pavlovia” tag in the post and assumed an experiment run locally.