Confusion about how to implement non-slip timing for trials with known end points

Hello,

I am designing a task-based fMRI experiment with trials of known (not variable) length. I put them in a loop in Builder, and once in the loop they turned from green to red. According to http://www.psychopy.org/general/timing/nonSlipTiming.html, routines shown in red use relative timing and might cause de-synchronization. I don't know why putting them in a loop causes them to turn red. Do I now need to be concerned about time slippage?

If so, I’m also confused about how to implement the non-slip timing using code like:

timer = core.Clock()
timer.add(5)
while timer.getTime() < 0:
    # do something

How do I integrate this code with my existing experiment, what goes in the ‘do something’ section? Can you direct me to an online example so I can see what goes in ‘do something’, and extrapolate how to implement this with my task?

Thank you!!!

This is an unfortunate limitation of Builder that @jon and I have discussed in the past but haven’t done anything to resolve. You do really need to address it in your experiment: I’ve found in an fMRI study in exactly this sort of situation that the timing would drift by at least several hundred milliseconds over the course of 10 minutes or so if the routines weren’t coloured green.

I think my work-around for this was to insert some dummy fixed number like 9999 as the trial duration. Then push the "compile script" button. In that script, do a search for all instances of 9999 and replace them with your relevant variable name. Make sure you save this edited script with a different name than the default, otherwise Builder will over-write it any time a change is made. Note that you now need to run this amended script rather than the original Builder experiment. In PsychoPy, this can be done from the Coder view rather than the Builder view. If any changes need to be made to your experiment, you'll need to go back to your original Builder experiment, make the changes graphically there, and then re-do this find-and-replace strategy in the new generated Python script.
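For concreteness, the line you're searching for in the generated script looks roughly like this (the exact formatting varies between PsychoPy versions, and trialDuration here just stands in for whatever variable name you use):

# before editing: Builder hard-codes the dummy value into the non-slip routine timer
routineTimer.add(9999.000000)

# after the find-and-replace: use your own, pre-computed duration instead
routineTimer.add(trialDuration)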

You should insert some timing code to see what the effects of the slip vs non-slip timing are. If you have implemented non-slip timing correctly, you should find that the actual duration should be within 1 ms or so of the intended duration.

Thanks Michael for that recommendation. I’ll play around with that.

I was also wondering if I could get your feedback on another possibility. I would like to time-lock the stimulus presentation to the TR (my TR is 2 seconds and my stimulus presentation is therefore also 2 seconds). Can I get around issues of slip timing by simply adding a 'wait for t' component to my routine, so that it shows the stimulus while it waits for the next 't' trigger from the scanner and force-ends the routine based on the incoming 't'?

Thanks so much!

That probably just introduces unnecessary complexity. You can rely on the TR of the scanner being very precise. Similarly, PsychoPy can show stimuli very precisely for two-second durations, subject to some caveats. The important thing is to test that your task duration matches the scanner run precisely. The only important TR signal to catch is the very first one. After that, if things are specified correctly, you should find that your task ends within a millisecond or two of the intended duration.
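For example, one bare-bones way to catch that first trigger (just a sketch, assuming you have an initial 'waiting for scanner' routine with a code component on it, that the scanner sends a keyboard 't' on each pulse, and that win and globalClock are the objects Builder itself creates):

# code component -> 'Begin Routine' tab of the 'waiting for scanner' routine
from psychopy import visual, event

msg = visual.TextStim(win, text='Waiting for scanner...')
msg.draw()
win.flip()                     # show the message before blocking
event.waitKeys(keyList=['t'])  # block until the first trigger pulse arrives
globalClock.reset()            # treat the first TR as time zero for the run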

You just need to time all of this to satisfy yourself that it is working correctly. When it does, you'll subjectively find that the final stimulus ends the moment the scanner noise stops.

Hi Michael,

Okay, thanks for your feedback. Forgive my misunderstanding, but I looked through your original suggestion and I don't understand what it accomplishes. If I put in a dummy fixed number as the trial duration, compile the script, find that dummy number, and replace it with my variable name, how does that help me address non-slip timing? It seems like all I'd be doing is substituting one name for another. If you could provide a link to an example or more information about how to tackle this, I'd greatly appreciate it.

You also recommend inserting some timing code to see what the effects of slip vs. non-slip timing are. Can you give me any more specifics on what that would look like?

Thanks so much on any guidance you can provide!!

Because the generated code is different when you have a fixed time duration. You are therefore inserting your variable name into non-slip code. If in the Builder interface you use a variable rather than a fixed time, your routine turns red and the resulting code is different.

By using a fixed time value, you are tricking Builder into generating the code you want, and can then edit that code to use your variable name instead.
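To make the contrast concrete, the per-routine code that Builder generates differs roughly like this (a rough paraphrase rather than an exact copy; the details vary by version):

# fixed duration ("green" routine): non-slip timing via a countdown timer;
# a tiny overshoot on one trial is absorbed by the next, so errors don't accumulate
routineTimer.add(2.000000)
while continueRoutine and routineTimer.getTime() > 0:
    # draw this frame's components
    win.flip()

# variable duration ("red" routine): relative timing; the loop just runs until
# every component has finished, so small errors can add up across trials
while continueRoutine:
    # draw this frame's components
    win.flip()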

Something like the code below (I’m assuming you have a single trial routine running within a loop, where the first trial will start following the first trigger from the MRI). Insert a code component on that routine:

Begin Routine tab:

if your_loop_name.thisN == 0: # only on the 1st trial
    start_time = globalClock.getTime()

End Experiment tab:

expt_duration = globalClock.getTime() - start_time  # total time since the first trial began

print('This run lasted: ' + str(expt_duration))

This is an old thread, but this and an even older thread are the most relevant ones that came up when I searched for information about solving issues with non-slip timing and varying (but known in advance) routine durations.

I think my work-around for this was to insert some dummy fixed number like 9999 as the trial duration. Then push the "compile script" button. In that script, do a search for all instances of 9999 and replace them with your relevant variable name. Make sure you save this edited script with a different name than the default, otherwise Builder will over-write it any time a change is made. Note that you now need to run this amended script rather than the original Builder experiment. In PsychoPy, this can be done from the Coder view rather than the Builder view. If any changes need to be made to your experiment, you'll need to go back to your original Builder experiment, make the changes graphically there, and then re-do this find-and-replace strategy in the new generated Python script.

I found an alternative solution that avoids having to edit generated code, so I’ll try to describe how it works. It’s mostly for my future self, but I thought I’d put it here in case it helps others.

When debugging/developing, it became impractical to always have to let Builder generate a script, do a search-and-replace for the faux duration, then load the edited script and run the experiment (as described in Michael's post).

I found a hack that can be implemented directly in Builder. Here are the steps for tricking PsychoPy into treating a routine as “non-slip-eligible” while allowing for variable durations:

  1. Set a faux, very long duration for all of the routine's component durations (say 9999 s). (Try letting Builder generate the Python script if you want, and make sure that in the preparatory section for your routine there's a line with routineTimer.add(9999.000000).)

  2. Add a code snippet component to your routine.

  3. In the code snippet's 'Begin Routine' section, add code for fetching the trial duration you want. This will differ based on where/how you are storing the trial duration data. In my case I used:

trial_duration = stimuli_data[trial_counter]['trial_duration']

  4. Still in the code snippet's 'Begin Routine' section, below the above line, "add" time to the routineTimer by first decreasing it by the faux trial duration you added in step 1, and then increasing it by the trial duration you actually want:

routineTimer.add(-9999.000000)
routineTimer.add(trial_duration)

(or, for efficiency now that you understand what you are doing)

routineTimer.add(-9999.000000 + trial_duration)

So in the end you will have something like

# code snippet component -> 'Begin Routine'
trial_duration = stimuli_data[trial_counter]['trial_duration']
routineTimer.add(-9999.000000 + trial_duration)

The downside with this solution is that your trial’s components never get the chance to end before the routineTimer says it’s time to stop. So if you save the onset/offset times for your components, in your exported data the registered offset time will always be ‘None’. All I did was add trials_loop.addData('t_end_trial', tThisFlipGlobal) (since I’m using a loop) in the code snippet’s ‘End Routine’ section, to keep track of when the trial actually ended. As far as I can tell this seems to be about as accurate as the components’ “offset time registration” normally is.
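So the whole code component ends up looking something like this (stimuli_data, trial_counter and trials_loop are just the names from my own experiment):

# code snippet component -> 'Begin Routine'
trial_duration = stimuli_data[trial_counter]['trial_duration']
routineTimer.add(-9999.000000 + trial_duration)  # swap the faux duration for the real one

# code snippet component -> 'End Routine'
# the components never register their own offset, so log when the trial actually ended
trials_loop.addData('t_end_trial', tThisFlipGlobal)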

Apart from the small nuisance with the offset times, this seems to work great so far; when I've checked the summed trial durations over time, they seem to be as accurate as Michael describes. Please let me know if there is an issue with this solution that I've overlooked, since it could spoil precious fMRI data otherwise.

(Also, to anyone who stumbles upon this and is confused about what non-slip timing even is: I recommend getting the official book. So many things that aren't well documented elsewhere, IMO, became much clearer after reading it, including non-slip timing.)


@jon has made quite a few changes to the timing code since back then. You might find that non-slip timing effectively applies in most situations now. That is, I would test a standard, unedited Builder script (in the latest version) against your customisations to see if they are actually still required.

Thanks for the tip,

Maybe I'm missing something obvious here, but I'm not sure what you mean should be done instead. The way I do it now, I don't need to edit the script that Builder generates. If, instead of using the hack I described, I simply input a variable for the stimulus durations, then the routine isn't treated as "non-slip-eligible" by PsychoPy (it doesn't turn green in the flow).

If you want, you can check out the attached minimal reproducible example that shows what I mean. There are just two routines: 'trial' and 'blank_500ms' (for an inter-trial interval of 500 ms). There's a loop wrapping them both, set to run three times.

In the trial routine there's a code component. It has the following content:

‘Begin Experiment’

trial_durations = [3, 5, 9]

trial_counter = 0

‘Begin Routine’

trial_duration = trial_durations[trial_counter]

‘End Routine’

trial_counter += 1

I now want to use the trial_duration variable to specify the duration of a text component in the trial routine. So for the text component I set the start to 0.0 and the stop to trial_duration. The trial routine stays blue in the flow tab, i.e. non-slip timing isn't used. What could be done here to make PsychoPy understand that trial_duration is actually always a duration known in advance, and that non-slip timing can be applied? I don't see any other way than the method you described or the hack I'm using, but again, maybe I'm missing something obvious.

test_nonslip_exp.psyexp (8.4 KB)
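(For reference, making this example "non-slip-eligible" with the hack I described above would look something like the following, assuming the text component's stop is set to the faux value 9999 instead of trial_duration:)

# code component -> 'Begin Experiment'
trial_durations = [3, 5, 9]
trial_counter = 0

# code component -> 'Begin Routine'
trial_duration = trial_durations[trial_counter]
routineTimer.add(-9999.000000 + trial_duration)  # replace the faux duration with the real one

# code component -> 'End Routine'
trial_counter += 1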

What I'm suggesting is that non-slip timing is no longer necessarily better than not applying it, as the latter has been improved. So try running it both ways and see if any errors actually accumulate.

This worked perfectly, thanks for this!


Did you have time to check, with regard to Michael's suggestion, whether "ensuring non-slip timing" vs. not ensuring it makes a big difference in the newer versions of PsychoPy?

No, I didn't check that, since the solution I found worked fine and I didn't really understand what Michael meant. Based on what's written in "Building Experiments in PsychoPy", I don't think the need for "non-slip timing" would disappear just because 'normal' timing has improved, but I could be mistaken.


That book was published in 2018, and written (by us) before that. PsychoPy's development has continued apace since then. I haven't tested it, but subsequent changes in Builder's script generation may have narrowed the timing difference between "red" and "green" routines.