I am in the process of developing an online experiment in which I hope to present images with sub-50ms stimulus-on-screen timings. I know that specifying the presentation duration in ms is somewhat meaningless, because the actual timing depends on the refresh rate of each participant's monitor. Therefore, one idea was to use this refresh rate to specify the duration, so that I get as close as possible to the intended timing for each image.
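For concreteness, here is a sketch of what I mean (getActualFrameRate() is the PsychoJS call I look at below; the exact call path is my assumption, and the rest is just arithmetic):

```js
// Sketch of the idea: convert a 50 ms target into a whole number of frames.
// Assumes psychoJS.window.getActualFrameRate() returns the estimated fps.
const fps = psychoJS.window.getActualFrameRate(); // e.g. 60
const framePeriodMs = 1000 / fps;                 // ~16.7 ms at 60 Hz
const nFrames = Math.round(50 / framePeriodMs);   // 3 frames at 60 Hz
const achievableMs = nFrames * framePeriodMs;     // exactly 50 ms at 60 Hz
```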
My question is: Is the “frameRate” value (output by default in participant data files) what I need to use as the refresh rate in my calculation? I see this value is measured in the following way:
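(Paraphrasing from the PsychoJS Window.js source; the exact code may differ by version:)

```js
getActualFrameRate()
{
  // _lastDelta is the time (in ms) between the two most recent animation frames
  const lastDelta = this.psychoJS.scheduler._lastDelta;
  const fps = (lastDelta === 0) ? 60.0 : (1000.0 / lastDelta);
  return fps;
}
```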
So it looks like it takes this _lastDelta value (the time between the two most recent frames) and calculates the frame rate from that.
When I look at actual frameRate values in participant files, though, they vary wildly, from 7-8 up to 57-60. How is this possible? Is it the case that the frame rate really was that low when the experiment began but later recovered to the monitor's actual refresh rate? If so, I would not be able to use this value, because presentations based on it would become too slow later on.
Another option was to specify the duration in "frames" in a component's duration parameter, but I am wondering whether this is also based on the frameRate calculation and so would have the same problem.
Anyway, I am wondering if anyone who has tried this has any insight on best practices to ensure that the specified presentation time is close to what the participants see. Any thoughts are much appreciated!
The 57-60 range is probably accurate, though "60" is the default value and may also indicate that it couldn't get an accurate delta from the last refresh. However, because 50ms is exactly 3 frames at 60fps, even the difference between 57 and 60 could affect the stimuli in meaningful ways: at 57fps, 3 frames last about 52.6ms rather than 50ms!
I don’t know what would drive the lower numbers.
No, it looks like it's tied to the number of calls to the render() function in the window code, so it should be accurate to the number of actual frames on which a given stimulus was drawn, regardless of how long each frame took.
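The relevant pattern in Builder-exported code looks roughly like this (paraphrased from memory; names and details are illustrative):

```js
// frameN counts render() calls, so a stop condition of
// "duration (frames) = 3" means exactly 3 rendered frames.
frameN = frameN + 1; // incremented once per each-frame callback

// start the image on the first frame at/after its start time
if (t >= 0.0 && image.status === PsychoJS.Status.NOT_STARTED) {
  image.frameNStart = frameN;
  image.setAutoDraw(true);
}

// stop after 3 rendered frames, however long they actually took
if (image.status === PsychoJS.Status.STARTED && frameN >= (image.frameNStart + 3)) {
  image.setAutoDraw(false);
}
```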
You can record stimulus duration (and it’s independent of fps as well), so it should be possible to control stimulus onset and offset with frame count and compare that against the recorded duration to see if it was within expectations.
Just to follow up: if render is tied to the actual frames, then should I expect that for a participant with a 240Hz screen the stimulus duration will be one quarter of that for a participant with a 60Hz screen (i.e., 4x faster)?
The stimulus duration pointer is super helpful! Are you referring to the “Save onset/offset times” checkbox in the Data tab of an object? This might be an annoying question but do you know how accurate this is? I’m worried I am replacing one faulty measurement with another.
Yes, the "Save onset/offset times" box. It should be highly accurate; I believe it uses the system clock, checked on the first and last flip of the trial.
W/r/t speed vs fps, it depends on whether you define stimulus duration in frames or in time. If it is defined in frames, then possibly; I don't know how PsychoJS (or JS in general) handles frame rates higher than 60. If it is defined in time, the stimulus will be cut off on the first frame after the duration has passed, so the actual time will have a larger or smaller margin of error depending on the frame rate.
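Back-of-the-envelope (my numbers, assuming offset happens on the next flip after the duration elapses, so the overshoot is at most one frame period):

```js
// Worst-case on-screen time for a 50 ms target at various refresh rates.
const targetMs = 50;
for (const fps of [60, 144, 240]) {
  const framePeriodMs = 1000 / fps;
  // ~66.7 ms at 60 Hz, ~56.9 ms at 144 Hz, ~54.2 ms at 240 Hz
  console.log(`${fps} Hz: up to ~${(targetMs + framePeriodMs).toFixed(1)} ms`);
}
```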
getActualFrameRate(), which is written by default to participant data files (the frameRate variable), is probably accurate for most people, but values that are very low (7-10) or exactly 60 might not be accurate.
The option I have landed on for presenting short-duration stimuli (<50ms) is to specify the duration in ms and then check the actual duration using the "Save onset/offset times" checkbox under the Data tab of Builder components. I will exclude participants for whom this value is off by a lot; my experiment design allows for some tolerance.
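In case it helps anyone, here is a rough sketch of the post-hoc check I have in mind (the .started/.stopped column names are what the checkbox saves in my data files; the tolerance is an arbitrary placeholder):

```js
// Flag trials whose measured duration is too far from the 50 ms target.
function actualDurationMs(row) {
  // onset/offset timestamps are saved in seconds; convert to ms
  return (row["image.stopped"] - row["image.started"]) * 1000;
}

const targetMs = 50;
const toleranceMs = 10; // adjust to whatever your design can absorb

function keepTrial(row) {
  return Math.abs(actualDurationMs(row) - targetMs) <= toleranceMs;
}
```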
Using frames might also be a good idea for some, since the duration will then be a multiple of the participant monitor's actual frame period, but it's unclear how PsychoPy renders when participants' refresh rates are high. This seems like a less desirable way to do things, imo, because higher refresh rates could make the stimuli go much faster (on the order of 2-4x for some participants), and I want to err on the side of slower rather than faster.