Online image duration timings are poor

URL of experiment:
https://run.pavlovia.org/jacanterbury/timingtestv2

Description of the problem:
For some subliminal image presentation I want to present images for a precise duration (absolute onset timing is not so important, but duration is).

I’ve played around with various timings, and local PsychoPy scripts running in Python appear to be reliably excellent (based on the output in the log file, if I’m interpreting it correctly; I couldn’t find anything in the docs describing what the numbers mean). However, the equivalent experiment run online yields disappointing timing values in the log file. (I’ve tried two W10 PCs with two different browsers on each, both with 60 Hz monitors.)

Is it possible to tell Pavlovia to prioritise duration over relative onset to other events?

Or does anyone else have any other suggestions for improving the precision online? (NB it seems to always be one frame short, so I may just set the duration longer online to compensate for this.)

Here’s some sample data from my test experiment (it shows an image for 1 s and then the same image for 100 ms, at one-second intervals).

Here’s some sample data from the log file (I inserted column 2 to derive the time interval between rows)

Python timings extract: [image]

Pavlovia timings extract: [image]

Many thanks
John

Looking at those timings, I’m wondering whether it’s actually related to a different implementation of autoDraw. The shorter timings are one frame short and the longer timings are one frame too long.

One possibility might be to use .draw() instead of setting autoDraw. Alternatively, think about the relative positions of the autoDraw command and the timer.

Timings are bound to be worse online, but consistent behaviour should be tweakable. Are you setting the autoDraws based on a frame count?
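
For example, a frame-count based on/off in the Each Frame tab of a code component might look something like this (just a sketch: my_image is a placeholder for your image component and frameN is the frame counter provided by the Builder-generated routine loop):

// Each Frame tab (sketch): tie the duration to a number of screen flips rather
// than to a clock comparison. Names assumed: my_image (your image component),
// frameN (the routine's frame counter in the generated code).
const nFrames = 6;  // e.g. 100 ms on a 60 Hz monitor
if (frameN === 0) {
    my_image.setAutoDraw(true);   // starts drawing on the routine's first flip
}
if (frameN === nFrames) {
    my_image.setAutoDraw(false);  // stops after exactly nFrames flips
}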

@wakecarter thanks for the suggestions, much appreciated. I will investigate and report back (everything is default at the minute)

From a quick comparison of the Python vs the JS code, it looks like the frameTolerance in local Python sessions is 1 ms, whereas the decision on when to stop displaying an image in JS sessions seems to be triggered 12.5 ms before the flip. As autoDraw seems to log immediately, this gives the impression that the ImageStim is turned off much earlier in JS sessions than it actually disappears, compared with local Python ones.
It’s the first time I’ve looked at the source code, so I may well have this wrong, but I’d tentatively say that comparing log files for local vs online sessions is not that meaningful. Also, the timings for online sessions look to have a lot more wow and flutter in them.
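
To show where that 12.5 ms comes from, here is a minimal sketch of the stop test as it appears in the generated JS (variable names are assumptions based on a typical Builder export; the real code uses psychoJS.window.monitorFramePeriod rather than a hard-coded 60 Hz value):

// Sketch of the generated stop condition for a 100 ms image on a 60 Hz display.
// autoDraw is set to false (and logged) on the first frame whose time exceeds
// frameRemains, i.e. roughly 12.5 ms before the flip that removes the image.
const duration = 0.100;                    // intended presentation time in seconds
const monitorFramePeriod = 1 / 60;         // ≈ 0.0167 s per frame at 60 Hz
const frameRemains = duration - monitorFramePeriod * 0.75;  // ≈ 0.0875 s
console.log('autoDraw(false) logged at t >= ' + frameRemains.toFixed(4) + ' s');
// The local Python run instead uses a frameTolerance of about 1 ms, so its log
// timestamp lands much closer to the actual offset flip.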

Hi!

Three years have passed but I’m having the same problem. Can you remember how you solved it?

Cheers,

Hi Harry,

TBH I don’t recall what I did in the end, but from re-reading the posts it seems that the log file might just have had an incorrectly early timestamp due to the nature of the JS code structure. I assume you’re seeing something similar, so I’d suggest looking at the JS code and checking whether you think the timestamp is just the best part of a frame early. If so, you might not actually have a problem in practice.
You might also want to check the Timer Resolution setting for your PC. Mine is set to 1 ms by default on W11, but I recall an older W10 machine having it set to ~15 ms.
I’d be interested to hear back on what you uncover.

Cheers,
John

Hi!
I came across your posts and took the opportunity to join the discussion to gain some helpful insights from you. I need to decide whether to implement an online experiment or a lab experiment to test the effect of masking on image recognition, with images presented for 2, 4, 6, or 12 frames. Therefore, precise presentation times are crucial.

An online experiment would obviously be the easiest way to collect a large data sample. Based on your previous experience with PsychoPy for online experiments, do you think I could achieve reasonably good presentation time precision with an online experiment, or would you suggest conducting a lab study?
I would greatly appreciate any feedback!
Davide

Hi John,

Thanks for your reply!

Looking at your problem 3 years ago, from what I can understand it looks like your timings were one frame out (the presentation duration was ~0.019 seconds too short and the interval was 0.019 seconds too long). I’m getting a similar problem in the sense that the image presentation durations are too short and the subtracted time seems to be added to the following inter-stimulus interval, but in my case the image presentations are 0.170 seconds too short and the inter-stimulus intervals are 0.170 seconds too long. I need to investigate the properties of the monitor I’m using and the JS code, and I’ll let you know what I find.

Thanks again!

Hi Harry

Can I just double-check there wasn’t a typo in your decimal places above? Did you really mean 0.17 s, or actually 0.017 s? The latter is of course a single frame on a standard 60 Hz monitor, which feels possible, but being ten frames out seems unlikely from the PsychoPy engine perspective.
Have you tried looking at the generated JS code, in particular to see where the timestamps are recorded, to check the logic with respect to when the image will flip?

Hi Wahwah/Davide,
I’ve never done anything online (with Pavlovia) where high precision in timings was necessary. Given this, the only thing I can suggest is to pilot it online and look at the log files to see what you find. As you have less control of the hardware when running a study online you’ll likely get more noise in your data, but this could be offset by getting much more of it, so you might be happy to tolerate this.
Perhaps @hta17 can advise given his current findings?
I’d be interested to hear what your findings are and what you decide to do,
cheers,
John


@jacanterbury Thanks very much for the reply!

I’m now implementing an experiment to do some testing on that.
I’ll get back to you to share my results as soon as I’m done.
Best,

D.

Hi John,

That’s correct, it is not a typo. Luckily the task I’m implementing is an n-back, so some of the images are repeated on consecutive trials (e.g. presented on trial 1 and trial 2), while others are repeated on non-consecutive trials (e.g. presented on trial 1 and on trial 3). The second time an image is presented on consecutive trials it is presented for the correct duration (in fact, only 8 ms too short), whereas in any other situation this is not the case (presented between 170 and 250 ms too short), so I guess my problem has something to do with loading the image.

I’m not familiar with the code in JS or in the normal Coder view, so I’m pasting below what I assume is the most relevant part of the code: the routine that presents the image on each trial. In the output .csv file the presentation of a stimulus is signalled by trial_image.setAutoDraw(true) and the end of the presentation is signalled by trial_image.setAutoDraw(false), so here is the part of the code that involves those calls:

// trial_image updates
if (t >= 0.0 && trial_image.status === PsychoJS.Status.NOT_STARTED) {
  // keep track of start time/frame for later
  trial_image.tStart = t;  // (not accounting for frame time here)
  trial_image.frameNStart = frameN;  // exact frame index

  trial_image.setAutoDraw(true);
}

frameRemains = 0.0 + 1.5 - psychoJS.window.monitorFramePeriod * 0.75;  // most of one frame period left
if (trial_image.status === PsychoJS.Status.STARTED && t >= frameRemains) {
  trial_image.setAutoDraw(false);
}

Cheers,
Harry

In response to your question: yes, when the task works correctly the presentation times for my working memory task are good for my objectives. I need to present images for 1500 ms and the presentation durations tend to be accurate to within 8 ms, when it works correctly!

Cheers,
Harry


Are the images set in image components to update each repeat? You could try adding a static period at the start of the routine (or in the previous routine) and have them update during that.

Are your images large?

Along the lines of wakecarter’s suggestion: perhaps make all durations one frame longer than you want, but for the first frame set the image’s opacity to 0 (so it can’t be seen, but I’m assuming it would still be loaded from disk) and then after one frame change the opacity to 1.
This is all just theory; however, I’ve used opacity in the past, changing it frame by frame to fade objects in and out, and it worked reliably then.
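
Something like this in the Each Frame tab of a code component might do it (just a sketch, assuming your image component is called trial_image, that frameN is the routine’s frame counter, and that setOpacity is the relevant PsychoJS setter):

// Each Frame tab (sketch): keep the image invisible on its first frame so it
// gets drawn/loaded without being seen, then reveal it from the next frame.
// Names assumed: trial_image (your image component), frameN (routine frame counter).
if (frameN === 0) {
    trial_image.setOpacity(0);  // drawn but invisible on the first frame
} else if (frameN === 1) {
    trial_image.setOpacity(1);  // visible from the second frame onwards
}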
good luck

Hi,

The images are currently set to update each repeat. I previously tried to use the static period in the same routine, lasting 1.4 seconds, and the images failed to load, or at least the images loaded on the first couple of trials and then failed to load on the rest.

The images are between 1 and 5 MB each.

Cheers,

I thought of this kind of solution too, or at least I tried to pre-load each stimulus and mask it somehow before making it visible on the screen, but I couldn’t figure out how. So I will try this for sure.

Thanks!

Do the images have to be so large?

My thoughts too. I’d suggest looking at something like ImageMagick to reduce the files to a size that preserves image quality for your purposes but speeds up load times. I’d be surprised if you couldn’t get them down to 10% of their current file size without affecting their appearance.

@jacanterbury
Hi!
I’ve conducted some tests and would like to share the results with you.

Experiment Design:
I displayed a target image in a routine for either 2, 4, 6, or 12 frames. In a subsequent routine, I presented a mask consisting of 5 images displayed sequentially for 6 frames each.

Presentation Time Assessment:
To measure the presentation time of the target images, I utilized the log files generated by Pavlovia, which include the onset and offset of the routines. However, I’m unsure if this method is entirely accurate, so I welcome any insights or suggestions.
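
As an illustration of the kind of log parsing this involves, a minimal Node sketch might look like the following (a rough illustration rather than my exact method; the file name, the tab-separated layout with the timestamp in the first column, and the message wording are assumptions that should be checked against a real Pavlovia .log file):

// Sketch: derive on-screen durations from a downloaded Pavlovia log.
// Assumed (unverified): tab-separated lines, timestamp in column 1, and
// onset/offset logged as "target_image: autoDraw = true/false".
const fs = require('fs');

const lines = fs.readFileSync('session.log', 'utf8').split('\n');  // hypothetical file name
let onset = null;
for (const line of lines) {
  const t = parseFloat(line.split('\t')[0]);
  if (Number.isNaN(t)) continue;
  if (line.includes('target_image: autoDraw = true')) {
    onset = t;
  } else if (line.includes('target_image: autoDraw = false') && onset !== null) {
    console.log('duration ≈ ' + ((t - onset) * 1000).toFixed(1) + ' ms');
    onset = null;
  }
}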

Results:

  • Trials per condition: 1000
  • % - frame: percentage of trials shown one frame less than expected
  • % + frame: percentage of trials shown one frame more than expected
  • Outliers: number of images displayed with more than ±1 frame deviation from the expected duration
  • My Mac: MacBook Pro (14-inch, 2021) running Monterey
  • Lab PC: Windows PC

I haven’t included the masking results here, but they appear to be consistently stable. Notably, the results obtained from Chrome provide more granularity due to its sub-millisecond precision in logging.

Additionally, I tested an older Mac, which revealed that 10% of trials exhibited one more frame than expected. It’s important to be cautious and disable ProMotion (the adaptive refresh rate) on Macs that use this technology, as it can lead to significant disruptions in presentation times.

Please interpret these results with care, and I’m open to any feedback or suggestions!
Cheers,

Davide