Timings using number of frames - data output suggests timing is inaccurate

I am creating a task in which several different, constant images are displayed one after the other in the centre of the screen. The total presentation period should be 350ms, with each image presented for 50ms. Underneath these images is a constant image (I’ll call this the stage image) that stays on display for the full 350ms.

I am presenting this on a screen with a refresh rate of 60Hz. To improve timing accuracy, I’m specifying the image durations in frames. Each image presented in the centre is shown for 3 frames (~50ms), one after the other, and the stage image is set to present for 21 frames (~350ms).
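As a sanity check on the arithmetic (a minimal sketch; `frames_to_ms` is just an illustrative helper, not part of my experiment code):

```python
# Expected duration from a frame count at a given refresh rate.
def frames_to_ms(n_frames, refresh_hz):
    return n_frames * 1000.0 / refresh_hz

print(frames_to_ms(3, 60))   # 50.0 -> ~50ms per centre image
print(frames_to_ms(21, 60))  # 350.0 -> ~350ms for the stage image
```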

These appear to display very nicely and as expected! However, I’m a bit confused by the output file. I checked the durations of each centre image and the stage image, and the time in seconds is about half of what I expected (e.g., the stage image lasts ~179ms rather than 350ms, and each centre image ~20-29ms rather than 50ms). I’ve confirmed my refresh rate is 60Hz, so I'm not sure exactly what I may have done wrong. I am sometimes getting warnings about frames being dropped (I have several other routines using images too).

All the images are preloaded earlier in the experiment using a static component. Any guidance would be appreciated.

Bump :slight_smile:

Are you using duration (frames) or frameN?

Is it worth adding an additional component set to last for 0.35 seconds, to give visual feedback on whether the stage image is actually being shown for half the expected time?

What version of PsychoPy are you using?

Can you replicate the issue in a minimal demo, which you could upload here?

How are you measuring the shorter durations?

Do you get the same issue on another computer?



In addition to @wakecarter’s advice, it might be worth reading the manual. It gives some information on how to reduce the number of dropped frames.


Best wishes Jens


Hi - thanks!

I’m using duration (frames). As suggested, I included an additional component set to 0.35 seconds, and it does indeed stay on the screen longer than the component set to 21 frames. This is reflected in the output file too, so I guess this is the issue. There seems to be a big discrepancy between the two.

I’m using 2023.1.2 but have the same issue if I upgrade to 2023.1.3. Running on a Mac.

I’m measuring the durations (e.g., the 3-frame duration requested in the image component) in the output by comparing the onset and offset times of components (as provided by default).

I’ll see if I can test this on another computer tonight.

Minimal working example here. The task involves an object appearing on the screen, sat on a stage. The object is then hidden behind a moving occluder (this involves several images of the occluder from different angles being presented). The occluder remains static for a period and then uncovers an object behind it.

frames_example.psyexp (79.0 KB)

orderA_block1.xlsx (8.8 KB)

The discrepancy between frames and time is present across the task, but I think the easiest place to see it is in the routine named “occluder_up”, which is the example I describe above. For now I’ve disabled the test image component you recommended (set to 0.35 seconds), which is called “object_test”.


So it seems that for some reason PsychoPy thinks that your refresh rate is 120Hz.

Does your Mac have ProMotion? That seems to be an option which can give “an adaptive refresh rate up to 120Hz”.
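The arithmetic fits that hypothesis (a quick sketch, assuming PsychoPy is measuring the refresh rate as 120Hz; the helper name is illustrative):

```python
def frames_to_ms(n_frames, refresh_hz):
    return n_frames * 1000.0 / refresh_hz

# At a measured 120Hz, each requested "frame" lasts only ~8.3ms,
# so every frame-based duration comes out at half the intended value:
print(frames_to_ms(21, 120))  # 175.0 -> close to the observed ~179ms
print(frames_to_ms(3, 120))   # 25.0 -> within the observed 20-29ms range
```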

It does seem to think the rate is 120Hz. I’ve just had our IT team come and confirm the refresh rate: it’s definitely 60Hz, and there’s no ProMotion option. I think the only option is to try this on another computer!

Okay, now tested on a lab PC (although running an older version of PsychoPy) and the timing is as expected. So I'm not sure whether something is up with my iMac's refresh rate (although everything points to it being 60Hz) or with how PsychoPy interprets it. Luckily, I can switch to a PC from now on :slight_smile:
