Description of the problem:
For some subliminal image presentation I want to present images for a precise duration (absolute onset timing is not so important, but duration is).
I’ve played around with various timings, and local PsychoPy scripts running in Python appear to be reliably excellent (based on the output in the log file, if I’m interpreting it correctly [I couldn’t find anything in the docs to describe what the numbers mean]). However, the equivalent experiment run online yields disappointing timing values in the log file. (I’ve tried two W10 PCs, with two different browsers on both; both have 60 Hz monitors.)
Is it possible to tell Pavlovia to prioritise duration over relative onset to other events?
Or does anyone else have any other suggestions for improving the precision online? (NB it seems to always be one frame short, so I may just set the duration longer online to compensate for this.)
Here’s some sample data from my test experiment (it shows an image for 1 sec and then the same image for 100 ms, at one-second intervals).
Here’s some sample data from the log file (I inserted column 2 to derive the time interval between rows)
Looking at those timings I’m wondering whether it’s actually related to a different implementation of AutoDraw. The shorter timings are one frame short and the longer timings are one frame too long.
One possibility might be to use .draw() instead of setting autoDraw. Alternatively, think about the relative positions of the autoDraw command and the timer.
Timings are bound to be worse online but consistent behaviour should be tweakable. Are you setting the autodraws based on a frame count?
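To illustrate why a frame count can be more consistent than a clock comparison, here’s a minimal plain-JS simulation (not PsychoJS — all the names and the 1 ms jitter value are my own assumptions for the sketch). It contrasts a naive time-based stop rule with a frame-count rule when the timer reading is slightly off:

```javascript
// Plain-JS sketch (not PsychoJS): naive clock comparison vs frame count
// for a stimulus meant to stay up for exactly 6 frames at 60 Hz.
const framePeriod = 1 / 60;        // seconds per frame on a 60 Hz monitor
const wanted = 6 * framePeriod;    // nominal duration (~0.1 s)
const jitter = -0.001;             // suppose the clock reads ~1 ms early

function framesShown(stopRule) {
  let frames = 0;
  for (let frameN = 0; frameN < 20; frameN++) {
    const t = frameN * framePeriod + jitter;  // jittery clock reading
    if (stopRule(t, frameN)) break;
    frames++;                                 // stimulus drawn this frame
  }
  return frames;
}

const naiveTime = (t, _) => t >= wanted;        // raw clock comparison
const frameCount = (_, frameN) => frameN >= 6;  // count flips instead

console.log(framesShown(naiveTime));   // 7 — one frame too many
console.log(framesShown(frameCount));  // 6 — exact
```

This is presumably why the generated code subtracts a fraction of a frame from the target time before comparing: it makes the clock-based rule behave like a frame count despite timer noise.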
From a quick comparison of the Python vs the JS code, it looks like the frameTolerance in local Python sessions is 1 ms, whereas the decision on when to stop displaying an image in JS sessions seems to be triggered 12.5 ms before the flip. As autoDraw seems to log immediately, this will give the impression that it’s turning the ImageStim off much earlier in JS sessions than when it actually disappears, compared to local Python ones.
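A back-of-envelope check of where that 12.5 ms figure comes from, assuming a 60 Hz monitor (the generated PsychoJS stop test subtracts 0.75 of a frame period from the target offset time):

```javascript
// 0.75 of a frame period at 60 Hz is the ~12.5 ms early-trigger margin.
const framePeriod = 1 / 60;                  // 60 Hz monitor, in seconds
const tolerance = framePeriod * 0.75;        // what the JS stop test subtracts
console.log((tolerance * 1000).toFixed(1));  // "12.5" (ms)

// So for a nominal 1.0 s image, setAutoDraw(false) is called (and logged)
// once t >= 0.9875 s, up to one frame before the image actually leaves
// the screen at the next flip.
const nominal = 1.0;
const logThreshold = nominal - tolerance;
console.log(logThreshold.toFixed(4));        // "0.9875"
```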
It’s the first time I’ve looked at the source code, so I may well have this wrong; however, I’d tentatively say that comparing log files for local vs online sessions is not that meaningful. Also, the timings for online sessions look to have a lot more wow and flutter in them.
TBH I don’t recall what I did in the end, but from re-reading the posts it seems that the log file might just have been incorrectly early with its timestamp due to the nature of the JS code structure. I assume you’re having similar concerns, so I’d suggest looking at the JS code and checking whether you think the timestamp is just the best part of a frame early. If so, you might not actually have a problem in practice.
You might want to explore what the Timer Resolution setting is for your PC. Mine is set to 1 ms by default on W11, but I recall an older W10 machine having it set to ~15 ms.
I’d be interested to hear back on what you uncover
Hi!
I came across your posts and took the opportunity to join the discussion to gain some helpful insights from you. I need to decide whether to implement an online experiment or a lab experiment to test the effect of masking on image recognition, with images presented for 2, 4, 6, or 12 frames. Therefore, precise presentation times are crucial.
An online experiment would obviously be the easiest way to collect a large data sample. Based on your previous experience with PsychoPy for online experiments, do you think I could achieve reasonably good presentation time precision with an online experiment, or would you suggest conducting a lab study?
I would greatly appreciate any feedback!
Davide
Looking at your problem 3 years ago, from what I can understand it looks like your timings were one frame out (the presentation duration was ~0.019 seconds too short and the interval was 0.019 seconds too long). I’m getting a similar problem in the sense that the image presentation durations are too short, and the subtracted time seems to be added to the following inter-stimulus interval. But in my case the image presentations are 0.170 seconds too short and the inter-stimulus intervals are 0.170 seconds too long. I need to investigate the properties of the monitor I’m using and the JS code, and I’ll let you know what I find.
Can I just double check there wasn’t a typo in your decimal places above? Did you really mean 0.17 s, or actually 0.017 s? The latter is of course a single frame on a standard 60 Hz monitor, which feels possible, but being ten frames out seems unlikely from the PP engine perspective.
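For what it’s worth, the frame arithmetic behind that question, assuming a 60 Hz monitor:

```javascript
// How many 60 Hz frames do the two candidate values correspond to?
const framePeriod = 1 / 60;                    // ≈ 0.0167 s per frame
console.log(Math.round(0.017 / framePeriod));  // 1  — a single dropped frame
console.log(Math.round(0.170 / framePeriod));  // 10 — ten whole frames
```

A one-frame discrepancy is a common refresh-quantisation issue; a ten-frame one points at something else entirely (e.g. resource loading).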
Have you tried looking at the generated JS code, in particualr to see where the timestamps are recorded to check the logic on that wrt when the image will filp?
Hi Wahwah/Davide,
I’ve never done anything online (with Pavlovia) where high precision in timings was necessary. Given this, the only thing I can suggest is to pilot it online and look at the log files to see what you find. As you have less control of the hardware when running a study online, you’ll likely get more noise in your data, but this could be offset by collecting much more of it, so you might be happy to tolerate this.
Perhaps @hta17 can advise given his current findings?
I’d be interested to hear what your findings are and what you decide to do,
cheers,
John
That’s correct, it is not a typo. Luckily the task I’m implementing is an n-back, so some of the images are repeated on consecutive trials (e.g. presented on trial 1 and trial 2), while others are repeated on non-consecutive trials (e.g. presented on trial 1 and on trial 3). The second time an image is presented on consecutive trials, it is presented for the correct duration (in fact, only 8 ms too short), whereas in any other situation this is not the case (presented 170 to 250 ms too short), so I guess my problem has something to do with loading the image.
I’m not familiar with the code in JS or in the normal Coder view, so I’m pasting below what I assume is the most relevant part of the code, which is the routine that presents the image on each trial. In the output .csv file the presentation of a stimulus is signalled by trial_image.setAutoDraw(true) and the end of the presentation by trial_image.setAutoDraw(false), so here is the part of the code that involves those calls:
// trial_image updates
if (t >= 0.0 && trial_image.status === PsychoJS.Status.NOT_STARTED) {
  // keep track of start time/frame for later
  trial_image.tStart = t;  // (not accounting for frame time here)
  trial_image.frameNStart = frameN;  // exact frame index
  trial_image.setAutoDraw(true);
}
frameRemains = 0.0 + 1.5 - psychoJS.window.monitorFramePeriod * 0.75;  // most of one frame period left
if (trial_image.status === PsychoJS.Status.STARTED && t >= frameRemains) {
  trial_image.setAutoDraw(false);
}
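If I’m reading that snippet right, the numbers work out as follows on a 60 Hz monitor (where psychoJS.window.monitorFramePeriod ≈ 1/60 s; this is just the arithmetic, not PsychoJS itself):

```javascript
// Evaluating the frameRemains expression from the snippet above at 60 Hz.
const monitorFramePeriod = 1 / 60;
const frameRemains = 0.0 + 1.5 - monitorFramePeriod * 0.75;
console.log(frameRemains.toFixed(4));  // "1.4875"

// setAutoDraw(false) fires on the first frame whose t is past 1.4875 s,
// and the image actually disappears at that frame's flip, close to the
// nominal 1.5 s. The log line carries the earlier t, though — so the
// logged "off" time can look up to ~12.5 ms short of 1.5 s even when
// the on-screen duration is correct.
```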
In response to your question: yes, when the task works correctly, the presentation times for my working memory task are good for my objectives. I need to present images for 1500 ms, and the presentation durations tend to be accurate to within 8 ms, when it works correctly!
Are the images in the Image components set to update every repeat? You could try adding a static period at the start of the routine (or in the previous routine) and have them update during that.