
Sync (cumulative delays) between a script's event onsets and fMRI slice acquisition

Hi all,

I know this topic has been discussed before in the psychopy community, but I am not sure a consensus still exists (or whether I understand what it is).

I have a PsychoPy script that displays audiovisual stimuli and records ratings of them at the end of every trial, while the subject is being scanned. I am wondering how best to deal with the problem of cumulative delays in the script's event onsets, due presumably to random variation in the PC's hardware.

According to this older thread, all that one needs to do is “1) sync the starting time and 2) to record the onsets carefully so that you can later enter them into the model.”

I guess 1) can be done by simply conditioning the script to start upon receiving a one-off sync signal from the scanner; and 2) simply involves extracting the exact onset times of all events from the .log files and somehow (I don't yet know how) correcting for the delays in the model. But if a certain trial's events are misaligned with (what would be) the corresponding slice onsets at the time of acquisition, isn't it the case that this cannot be corrected for afterwards, even if you know when these misalignments occurred and how large they were?!

If my above concern is true, is it the case that the solution is to condition every single event of the script upon receiving a “new slice onset” (TR) signal from the scanner, i.e. insert “wait” triggers? If so, how is this done in PsychoPy?
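To make my question concrete, here is a toy sketch of the kind of start gate I have in mind for 1). This is plain Python, not PsychoPy code: `get_key` is a made-up stand-in for however the trigger actually arrives (many setups send it as a '5' keypress; I gather that in a real PsychoPy script one would block with `event.waitKeys(keyList=['5'])` instead):

```python
# Toy sketch of a start gate: block until the scanner's one-off sync
# signal arrives. `get_key` is a hypothetical stand-in for the real
# key/serial source; '5' is a common choice of trigger key.

def wait_for_trigger(get_key, trigger_key='5'):
    """Poll the key source until the trigger key is seen.

    Returns every key consumed along the way (useful for logging).
    """
    consumed = []
    while True:
        key = get_key()
        consumed.append(key)
        if key == trigger_key:  # scanner pulse received: start the run
            return consumed

# Usage with a scripted key source standing in for real input:
keys = iter(['a', 'b', '5'])
print(wait_for_trigger(lambda: next(keys)))  # ['a', 'b', '5']
```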

Any thoughts on this much appreciated.


Could you explain what you mean by “cumulative delays in the script event onsets, due presumably to random variation in the PC’s hardware”?

If you’re waiting for participants to respond before starting the next trial then how is your timing working at all for syncing with the scanner?

Sorry, I’ll clarify. The response window is meant to be of a fixed duration: in case of a fast answer, there is padding time until that duration is reached, and in case of no response the script moves on. (I am in fact not quite managing to achieve this aim with the RatingScale component - see separate thread).

The durations of the different epochs in the script (present stimulus, answer, ISI, etc.) are linked to the scanner's TR, so if the total running time of the script were simply the sum of all the theoretical durations (number of blocks × duration of one block), the script/scanner sync should be fine. But I have been warned by our MR physicist (and the thread I linked to before confirmed it) that with each trial/block, PsychoPy may accumulate small delays that in the end make the total duration greater and thus desynchronize the acquisitions from the epochs they were meant to cover.

My question was how to counter this effect: whether the lags can be taken into account post hoc, or whether some "wait" trigger needs to be inserted in the script at block/trial level (although that assumes the actual duration would be less, rather than greater, than the theoretical duration).

OK, I think you’re referring to the issue of non-slip timing. That’s a potential problem for people writing their own code but should not normally be a problem for standard builder experiments if they have a trial/epoch with pre-determined duration (which will show up as green on the Flow panel). This is explained in the final post (by me) in the thread you linked and also in the documentation:

It looks like you aren't getting fixed-duration trials, which I'll take up in that separate thread, but as and when that works, the cumulative delays will not be an issue for you.

Thanks Jon, in that case I will just trust PsychoPy to deliver the exact timings set in the Builder, and not worry about wait signals. Very good to hear this.

Well, don’t just “trust” it. Read the log file and make sure!

Indeed, I will do that. I haven't found any way to set a preference for event-mode vs. cumulative timing, as in E-Prime, in case delays do occur. This would be a nice setting to have.

Just had a look at the .log file after running through the entire script. With components meant to start 1 s apart (i.e. component A has duration 1 s and is followed by component B), the .log file suggests that the time difference between their onsets is actually 1 s ± up to 20 ms.
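For concreteness, this is roughly the check I ran over the onsets, assuming the .log file keeps its usual tab-separated `<time>\t<level>\t<message>` layout (the sample lines and component names below are made up):

```python
# Sanity-check onset spacing from a PsychoPy-style .log file.
# Assumed layout: "<time>\t<level>\t<message>" per line; the sample
# data and component names are invented for illustration.
sample_log = """\
10.0021\tEXP\tcomponentA: autoDraw = True
11.0150\tEXP\tcomponentB: autoDraw = True
12.0007\tEXP\tcomponentC: autoDraw = True
"""

onsets = [float(line.split('\t')[0]) for line in sample_log.splitlines()]
gaps = [b - a for a, b in zip(onsets, onsets[1:])]

# Flag any gap deviating from the intended 1 s by more than 20 ms
bad = [g for g in gaps if abs(g - 1.0) > 0.020]
print([round(g, 4) for g in gaps])  # [1.0129, 0.9857]
print(bad)                          # []
```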

Hopefully these additive errors are normally distributed and centred around zero such that they tend to cancel each other out in the long run, although I’m not sure if that is the case.

Also, this delay is just at the level of a single component; there are many components in a routine/trial, and many trials in an experiment. Sorry Jon, but can you help me understand why I'm wrong to think that cumulative delays are a problem?


I don’t know if this has been fully solved yet, but I struggled with timing myself recently and think I can at least try to clear up the problem a little.

There are two ways to time experiments using clocks. The easiest (and worst) way is as follows:

  1. Run something. Specify time it lasts (as in, this runs for 4 seconds starting now)

  2. When the time to run finishes, begin a new trial/process and specify again its time to run.

The problem with this is that each new command to run for a certain time will overshoot. Every frame, a check occurs to see if the end time has been reached, and if it hasn't, another check occurs one frame later. That means every trial/process is liable to overshoot by as much as the length of a frame. It will always be a little too long. This adds up, and by the end your experiment will have run a few seconds longer than it should have and is no longer synced to the scanner.

The better way is to have a single clock ticking away that starts after receiving the first trigger of the scanner. Then, instead of timing each trial/process in the experiment individually, you always poll this single clock. If stimulus A should appear at second 4, last four seconds, and then get replaced with stimulus B, you would make "A" appear as soon as the single clock reaches 4 s, then display "B" as soon as it reaches 8 s. Now your timing errors won't add up, because you let your single clock run undisturbed. At the end of the experiment, you will have an overshoot error of at most a single frame. This is what Jon calls non-slip timing.
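To put numbers on the difference, here is a toy simulation of the two schemes. It's not PsychoPy code; the frame period (16.7 ms), trial length (1 s), and trial count are illustrative, and all other jitter is ignored:

```python
import math

FRAME = 0.0167   # illustrative frame period (s), ~60 Hz
TRIAL = 1.0      # intended trial duration (s)
N = 100          # number of trials

def next_frame(t):
    """First frame boundary at or after time t."""
    return math.ceil(t / FRAME - 1e-9) * FRAME

# Scheme 1: time each trial relative to the end of the previous one.
# The per-trial overshoot is carried forward, so it accumulates.
t = 0.0
for _ in range(N):
    t = next_frame(t + TRIAL)
cumulative_error = t - N * TRIAL

# Scheme 2 (non-slip): poll one master clock against absolute targets
# (trial k must end at k * TRIAL); the targets never drift.
for k in range(1, N + 1):
    t2 = next_frame(k * TRIAL)
nonslip_error = t2 - N * TRIAL

print(f"per-trial timing: {cumulative_error * 1000:.1f} ms late")  # 200.0 ms
print(f"non-slip timing:  {nonslip_error * 1000:.1f} ms late")     # 16.3 ms
```

Here the per-trial scheme drifts by about 2 ms every trial, while the single-clock scheme stays within one frame of the absolute schedule however long the run is.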

If you aren’t sure which timing method your experiment is using, a simple check is to time your entire experiment (let it run for a number of minutes). If it uses non-slip timing, at the end you should have an overshoot error of no more than the length of a frame. Otherwise, you will have overshot your target length by quite a bit more than that. I think Jon has said that Builder experiments use the correct timing method by default.




[quote=“andrewsilva19, post:9, topic:1094”]
The problem with this is that each new command to run for a certain time will overshoot. Every frame, a check occurs to see if it’s reached the end time, and if it hasn’t, another check will occur one frame later. That means every trial/process is liable to overshoot by as much as the length of a frame.[/quote]

The Builder automatically takes this into account and “under-estimates” the specified durations by approx. one frame, so everything will finish exactly on time. In code, the same can be achieved using a StaticPeriod.
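As a toy illustration of why testing against "duration minus a bit" ends on the right flip (Builder's real tolerance handling differs in detail; the frame period and numbers here are made up):

```python
FRAME = 0.0167   # illustrative frame period (s)

def end_flip(duration, tolerance=0.0):
    """Time of the flip on which a per-frame check first sees the deadline."""
    n = 0
    while n * FRAME < duration - tolerance:
        n += 1
    return n * FRAME

# A deadline landing just after a flip (flips at 0.9853 s and 1.0020 s)
# is the worst case for the naive check:
naive = end_flip(0.986) - 0.986            # waits a whole extra frame
snug = end_flip(0.986, FRAME / 2) - 0.986  # half-frame slack picks the earlier flip
print(round(naive * 1000, 1), round(snug * 1000, 1))  # 16.0 -0.7
```

So the naive check finishes almost a full frame late, while allowing half a frame of slack ends less than a millisecond early, and the systematic overshoot (the part that would accumulate) is gone.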

I figured Builder would do something to correct for this. Really, the practical advice I had for @tudor, if they aren't 100% convinced that the timing is good, is to just run the experiment and look at the timestamps, particularly the last ones, to check whether the experiment lasts as long as expected or whether it will de-sync from the scanner (which, in my experience at least, keeps close to perfect time).


Yes that is absolutely reasonable indeed. Always check whether your timing assumptions are met. :slight_smile: In fact, I for one use a scope to record trigger pulses emitted throughout the experiment, and use these to verify the timing. But I’m probably a bit overly pedantic in that regard anyway :innocent:
