Accumulated > 4 s drift despite using non-slip timing (presenting videos)

My experiment was created using Builder and listens for a trigger on the parallel port at the start. Then, the “trials” loop goes through my individual randomisation file, which specifies the sequence of the experiment.

I have a code component in the trial_selector routine which, on every loop, sets all loops except the loop for the specified condition to 0 repetitions. For example, if the condition name is filler, then n_reps_filler is set to 1 and all other loops are set to 0 repetitions (the code is in the “Begin Routine” tab). This way, on every repeat of the trials loop, only the desired condition is actually presented.
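For anyone following along with a similar setup, the condition-switching logic in such a “Begin Routine” code component might look roughly like this. This is a minimal sketch only: the condition names and the `n_reps_*` variable names (beyond `n_reps_filler`, which is mentioned above) are illustrative assumptions, not taken from the actual study.

```python
# Sketch of a "Begin Routine" code component in a trial_selector routine.
# Assumes the Builder loops use variables like n_reps_filler / n_reps_video /
# n_reps_null in their nReps fields (names are illustrative).

all_conditions = ['filler', 'video', 'null']

def select_condition(current_condition):
    """Return a dict mapping each n_reps_* variable to 0 or 1,
    so that only the loop for the current condition actually runs."""
    return {
        'n_reps_' + cond: (1 if cond == current_condition else 0)
        for cond in all_conditions
    }

# In Builder, the current condition would come from the trials loop's row;
# here we hard-code 'filler' for illustration:
n_reps = select_condition('filler')
# n_reps['n_reps_filler'] is 1, all others are 0
```

In a real code component you would assign the resulting values to the loop variables directly, rather than returning a dict; the dict just makes the switching logic explicit here.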

Three of the conditions show video files of < 6.5 s length. Because the “NULL” routine is essentially my ISI and has a varying length depending on the trial, I’ve used the trick described by @Michael here to make sure that all my routines are green, i.e. that Builder is using non-slip timing:

Looking at the CSV output, however, I now see that the paradigm accumulates slightly more than 4 seconds of drift over the exactly 25 minutes that the paradigm is supposed to last. That is, instead of 1500 s, the paradigm lasts 1504 s. Checking the log more closely confirms that the drift accumulates slowly over the course of the experiment.

Has anyone encountered a similar issue? In my understanding, using non-slip timing should surely prevent a drift of more than 4 seconds?

PS: This is PsychoPy 2021.2.3 on Ubuntu Linux running on quite powerful hardware (so that shouldn’t be the issue).

To begin debugging such issues, the first thing you have to do is get rid of Builder and switch to raw Python code; this is just how serious projects work.
Additionally, how do you measure that the experiment lasts longer than expected? With a physical stopwatch? With an internal `core.Clock` timer in PsychoPy?
The only thing you can trust (and report), considering you are dealing with video stimuli, is a black-white strobe on your display screen, which ideally should alternate on every frame so that you can count the actual number of frames presented in each video sequence.
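The frame-counting idea above can be reduced to simple bookkeeping. This sketch assumes a 60 Hz display; in practice you would draw the alternating patch with PsychoPy and verify the count externally (e.g. with a photodiode or a high-speed camera), which this snippet does not do.

```python
# Bookkeeping for the strobe check: if a full-screen patch alternates
# black/white on every frame, counting the alternations in a recording
# gives the true number of frames actually presented.

REFRESH_HZ = 60  # assumed display refresh rate

def expected_frames(duration_s, refresh_hz=REFRESH_HZ):
    """Number of frames a stimulus of the given duration should occupy."""
    return round(duration_s * refresh_hz)

def strobe_color(frame_index):
    """Colour to draw on a given frame: black on even, white on odd frames."""
    return 'black' if frame_index % 2 == 0 else 'white'

# A 6.5 s video at 60 Hz should occupy 390 frames; fewer alternations
# in the recording would mean dropped frames.
n = expected_frames(6.5)
```

Comparing the counted alternations against `expected_frames` for each video tells you whether frames were dropped, independently of any software clock.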

Thanks for getting back! You’re of course right about super-precise timing. I should clarify that this is fMRI, so what I am looking for is what is described in the link I mentioned: I just need roughly precise timing of the experiment (a few ms of drift don’t matter), which PsychoPy should automatically achieve from Builder when all routines are green.

Reading from the CSV log/data file, however, I find a drift of roughly 4 s accumulating over the 25-minute time course of the experiment (as calculated from the begin and end times of routines, relative to the time of the initial trigger sent to the parallel port that starts the actual experiment).

That is not the general recommendation of the most experienced PsychoPy developers. Builder scripts generally implement best-practice code; there are many more ways for a programmer to go wrong by developing their own scripts from scratch than are likely to occur in a Builder script.

Our general recommendation is to use Builder to create the general structure of the experiment, and then just use code components to insert custom tweaks as required.

Of course, careful testing is always required, regardless of how the script is developed. I have recently detected a slight timing error in Builder scripts in the latest releases that affects non-slip routines:

@Patrick - the issue I document there appears to go in the opposite direction to what you have encountered. I would encourage you to create a minimal reproducible example and submit it as an issue to the GitHub repository as well.

Note that the trick of forcing Builder to use non-slip timing for routines that actually have varying durations is a very old one. @jon has subsequently improved the performance of non-slip routine timing (subject to the caveat in the GitHub issue above), so my ancient work-around may no longer be necessary. Again, careful testing would be the route here, before adopting tricks that might not be needed anymore.


@Patrick I wonder if you could provide a minimal working example of the problem (i.e. with just enough components, routines and loops to demonstrate it) that I can test on? It’s hard to know from this information where the problem might lie: whether the hack is incorrect or whether there’s an underlying problem.


Dear Jon,
Dear Michael,

Thanks for your responses!

My actual paradigm waits for triggers on the parallel port to start (instead of starting by pressing space, as it is set up now), records responses via the serial port, and also sends triggers to an eye-tracker. I’ve removed all these additional components to create a minimal working example (MWE), which you can find here in my cloud:

The script that I’ve run has the suffix <filename> and was created using the method for non-slip timing described here. I only include one of my different randomisations here, in the folder randomizations/run1/, which is hard-coded in the “trials” loop in the MWE.

The script does nothing but wait for the user to press space to start; then it presents all trials consecutively, as specified in the randomization file. The randomization file contains a column called time_s, which is the time when a particular trial should ideally start. If I look at the different components and their onsets, I can see that they accumulate more and more drift over the duration of the paradigm. The onset of the last fixation cross (fixation_NULL.started) in a trial of type None occurred at 1475.50655293 s but should have occurred at around 1471.5 s, an accumulated drift of about 4 s. Looking at different onset times in the CSV file, I can see that the drift accumulates slowly but steadily.
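For concreteness, the drift check described above can be sketched like this. The column names `time_s` and `fixation_NULL.started` are the ones from the thread; the intermediate onset values below are made up for illustration, with only the final pair mirroring the numbers reported above.

```python
# Compare each logged onset against its ideal onset from the randomisation
# file, relative to the initial trigger at t = 0.

def drift(ideal_onsets, logged_onsets):
    """Per-trial drift (logged minus ideal onset), in seconds."""
    return [logged - ideal for ideal, logged in zip(ideal_onsets, logged_onsets)]

# Illustrative values; in practice these come from the time_s column of the
# randomisation file and the *.started columns of the CSV data file.
ideal = [0.0, 500.0, 1000.0, 1471.5]
logged = [0.0, 501.3, 1002.7, 1475.50655293]

drifts = drift(ideal, logged)
# the final entry is roughly 4 s, matching the reported accumulated drift
```

Plotting such per-trial drifts against trial index makes it easy to see whether the error accumulates steadily (as reported here) or jumps at specific trial types.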

I’d be very grateful to hear any ideas about what is going on here, or rather why this is happening.

PS: I’ve had trouble installing the latest version of PsychoPy on Ubuntu to see if this might make a difference. I’ll try to get that done and report back in the course of next week.

Hi Patrick, I had a look but there’s really still too much in there for me to work through. There are lots of your code components and they require your serial port (i.e. I’m afraid your Minimal Working Example is neither minimal nor working :wink: )

By an MWE we mean literally removing everything that isn’t required for the bug to occur. If you remove the End routine, does the problem still occur? Then get rid of the End routine. Does the bug still show when you get rid of the serial port code? Then get rid of it, etc. You’ll probably have a good idea of most of the things you can get rid of (but I don’t, because I’m not sure what’s in all the code components).

We should end up with 2 or 3 routines, with just a few lines of custom code if needed.

Thanks. Sorry, I know this is time-consuming to do, but it will be less so for you than for me.

For info, I’ve rewritten the non-slip timing method for the upcoming release (2022.2.x) and I’m keen to see if that fixes the issue in your exp.

Hi Jon,

Thanks for having a look and getting back! All clear, and my apologies; I thought it was minimal enough. I will make the MWE as minimal as possible according to your instructions first thing next week (currently travelling) and get back to you.