Jittering ITI by code in the builder

Hello everybody
I am using PsychoPy 1.85.4 on a MacBook.
I am trying to have an ITI between my trials with a variable duration between two values (in the experiment I am working on right now, the interval ranges from 2.5 to 4 seconds). I have looked at multiple examples here on the forum but could not find anything that really helps in my case.
I wrote a code component and put it at the beginning of the trial.
In the ‘Begin Routine’ tab I put these lines (which I found in another answer):

import random
Jitter=[randint(2.5,4)]

I set a text component with a fixation cross, with $Jitter as its duration. I am not sure what to set as the start time (the ITI should begin as soon as the subject presses the key and the stimulus disappears).

I tried several things, but either the experiment does not run at all, or it runs but does not show the ITI fixation cross. What am I missing?


Hi Barbara,

import random

This line is not necessary. Builder imports several functions from the numpy.random library for you at the start of every script, like this:

from numpy.random import random, randint, normal, shuffle

numpy is a numerical library and we prefer to use that rather than the random functions built in to the standard Python library. numpy provides a comprehensive suite of random functions and you can see documentation for them here:
https://docs.scipy.org/doc/numpy/reference/routines.random.html

So consulting that, you’ll see that the randint() function isn’t what you are after: as the name suggests, it returns a random integer, whereas you want a floating-point number between 2.5 and 4.0.

The usual way to do this is to use the random() function, which returns a float between 0.0 and 1.0, and then multiply and add as necessary to get a number in the range you need, e.g.

jitter = random() * (4.0 - 2.5) + 2.5
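
As an aside, numpy.random also has a uniform() function that does this in one step, but it isn’t one of the functions Builder imports for you, so you would need to import it yourself in the code component:

from numpy.random import uniform
jitter = uniform(2.5, 4.0)  # a random float between 2.5 and 4.0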

But note that delay periods need to be some multiple of your screen's refresh period. e.g. if your display is running at 60 Hz, then any interval must be a multiple of 16.6667 ms. So 3.0 s is a valid interval, as it equals 180 * 16.6667 ms, but say 3.04 s would not be valid. A quick and dirty way is just to round your interval to 1 decimal place so that any interval is a multiple of 100 ms (as 100 ms = 6 * 16.6667 ms), i.e.

jitter = round(jitter, 1) # round to 1 decimal place

so your intervals would be constrained to be 2.5, 2.6, 2.7 s, etc., up to 4.0 s.

You probably want to record what the jitter was on a given trial, so make sure you do this:

thisExp.addData('jitter', jitter)

to add a column labelled jitter to your data file.
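
Putting those pieces together, the whole ‘Begin Routine’ tab of your code component could be as simple as this (a minimal sketch; adjust the range to suit your design):

jitter = random() * (4.0 - 2.5) + 2.5  # a float between 2.5 and 4.0 s
jitter = round(jitter, 1)              # constrain to multiples of 100 ms
thisExp.addData('jitter', jitter)      # save the value to the data file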

Lastly, make sure your code component is above any component that will use the jitter variable, as it needs to be calculated before it gets referred to.

It’s usually easiest to just split things across several routines. Have the first routine end when a key is pressed. This inter trial interval can then go on a second routine, which will start immediately upon the previous one ending. That way you only need to specify a duration for the stimulus: its onset time will simply be fixed at zero.


Hi Michael, thank you very much. That is very helpful. I am pretty new to PsychoPy and Python, so having advice around helps a lot. I am wondering if there is any way to set, instead of the code, a specific list of jittered timings, for example from an Excel file. I am trying to optimise the randomisation of my experiment using programs like optseq or AFNI, so I may have a bunch of fixed sequences instead of asking PsychoPy to randomise things for me. Is there an alternative way to do this by using, for example, an Excel file? And if so, how should I set the loop around each routine?

Let’s say I am doing a simple Simon task, for example (which is part of my experiment). I have a routine called trial with its own conditions loop and its own Excel file. Then I have another routine called jittering, where the fixation cross duration is a variable in a second Excel file, whose number of rows equals the number of trials. Now what kind of external loop should I have here? I tried to set a loop having as conditions my two conditions (congruent and incongruent), but it did not work. It is likely that I have to use this method because we need to optimise the stimulus presentation for fMRI, so if you have some insights about it, that would help a lot. Thank you


If the routines are within the same loop, then they share variables from the same conditions file (the one attached to that loop). So yes, you could specify your jitter values in there, along with any other variables you need for each trial.
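
For example, the conditions file might look something like this (column names are just illustrative), and the fixation cross duration would then simply be $jitter:

congruence    stimulus     jitter
congruent     left.png     2.5
incongruent   right.png    3.1
congruent     right.png    2.8
incongruent   left.png     3.6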

Hi Michael, thank you. That could work in the case where I have a row in my Excel file for every trial. I do for a couple of my tasks, because I had to control a lot of variables on screen and, not managing the coding very well, I decided to handle it with Excel. In other cases, like a simple Simon task, I only have a few rows/conditions but a long sequence of jittered timings to manage. In that case I would necessarily need three separate loops: one for the trial, one for the jitter, both nested in a third one. My doubt is about what this third one should contain. I had tried to build the loops in this way, but PsychoPy runs just the trial and again does not show me the jittered ITI. I also have the feeling that depending less on Excel and more on coding would make my experiment more efficient in any case, but I still struggle to find solutions. I am going to attend the PsychoPy workshop in April, but I would need these tasks up and running before then. Thank you :slight_smile:

I have a further question about using jitter in experiments. I would like to present a sequence of 5 items - say for 2 s each - but have participants guess when the last item will be presented (so as to estimate the temporal duration of the sequence, which itself will be fixed at 20 s). I want to introduce fixed periods of jitter to fill the remaining 10 s (say 2 s, 3 s, 4 s, and 1 s) but to have the order in which these jitter periods are presented vary from trial to trial, so that participants can’t rely on a specific jitter period to cue when the final stimulus will be presented.

Can PsychoPy support this? This seems close, but I don’t want random periods generated, as the point will be to see whether participants can predict when the sequence will end.

Many thanks.

Almost certainly, but you need to describe your requirements more precisely if you are going to get any concrete suggestions. Think of the level of detail needed in the methods section of a paper.

Thank you for the quick reply and apologies for the confusion. I have uploaded a schematic, but I will try to explain in more detail. I would like to build a task that measures how well participants can predict when a target item will appear. This time will always be fixed within a sequence - in my example, every 16 s. The task, therefore, will be a measure of whether participants (and ultimately patients) can keep an internal count of 16 s. The sequence items will always be presented for a fixed period of time.

I could build an experiment in which there is a fixed ITI, but then one solution to the task would simply be to wait for the fifth ITI to appear. To prevent this, I would like to introduce five jittered ITIs such that the times add up to, say, 10 s. To prevent participants learning the timings of the ITIs, I would like to vary the ITI times such that ITI1 is 2.5 s in trial 1, 1.0 s in trial 2, 1.5 s in trial 3 etc., and for this assignment to be random. The remaining jitter times (T2-5) would also vary each trial, with the total ITI time (T1-5) summing to 10 s. After each trial I would have a confidence rating screen, and then a repeat of the sequence with a change in the T1-5 jitter times. Being a psychophysical task, it would require several trials (I haven’t quite done the power calculations yet).

I can see that, to avoid problems with random jitter times on each trial, having five fixed jitter times whose positions vary between trials might be best, so that the experiment can be counterbalanced across trials. I can see how to set up the experiment with an Excel spreadsheet, but not how to have variable jitter times each trial.

I hope that makes more sense now.

A little bit, but not enough to offer a solution yet. e.g. there are contradictions between parts of your description, such as whether the target appears at 10 or 16 s (the figure doesn’t correspond to either of these values, and shows different total durations on each trial), and what the total length of the sequence is (i.e. do you wait for a response, or does it just end)?

You clearly know exactly what it is that you want to achieve but try to express it to someone who is completely naive to the task. Describe the exact sequence of events and the relevant constraints in detail, and maintain consistency throughout (e.g. 10 vs 16 s, etc).

There will be issues here with the Builder not using non-slip timing in this sort of situation by default, so we need to be very careful to construct it properly to get accurate timing maintained across the sequence.

Okay - I will try again. Thank you for your patience!

The task is simply to assess whether participants can predict when a target stimulus will appear (in my example an apple, Stimulus 1, S1). Everything else is a distractor, as we will not be assessing their ability to remember the subsequently presented sequence. However, in my example the scissors (S5) are a cue that the apple will appear again. The target item (the apple) will appear every 19 s, and I would have a keyboard component for the participant to press, say, the space bar when they think the apple will reappear. This keyboard component will become active at the start of each trial, and I will then subtract that latency from the 19 s to get a measure of temporal accuracy. The trial will end after the target item has reappeared, and the next screen will be a confidence rating screen - that is, a trial concludes after the participant has made a keyboard response and the target stimulus (S1) has reappeared. I would then create a loop so that several trials can be obtained.

The other stimuli (S2-5) are there to mask the nature of the task, which is to predict when the target item will reappear, while providing as much experimental support as possible. We have found that 1.5 s is a good duration for a stimulus in sequence learning tasks, which means that the stimuli will be presented for 9 s in total. However, the reason for the jitter is that participants might just learn the timings of the stimuli and the ITIs if they are kept constant, in which case it becomes a reaction time task instead of a temporal estimation task. This is why having variable jitter is critical.

If we are measuring the ability of a subject to internally estimate 19 s, then we have 9 s of stimuli with a total ITI time of 10 s, which could be 2 s per ITI, or in other words 2 s between the sequence stimuli. In order to keep the participants ‘guessing’ and relying only on internal temporal estimation, I would like to jitter the ITIs on each trial so that participants cannot generate a rhythm between ITIs and stimulus presentation. As the total length of the trial will always be 19 s, I would like to jitter the ITIs so that their summed total is 10 s but they vary from trial to trial. During trial 1, the first ITI (T1) could be 2.5 s, for instance, but in trial 2 it could be 1.0 s, in trial 3 1.5 s etc. The remaining four ITIs (T2-5) would also vary each trial in a similar way, but their sum would be 10 s in every trial. This is just to introduce enough uncertainty into the task so that participants rely on internal counts (it is quite mean, I know).

Now I can see that having a random assignment of jitter would mean that the trials are not consistently 19 s, thus defeating the purpose of the task. So one solution I thought of was to have five fixed ITI intervals (in my example 2.5 s, 1.5 s, 3.0 s, 1.0 s, and 2.0 s) and have their positions (T1-5) vary each trial, while ensuring that each of these jitter times is included in every trial. If I were to have these fixed ITI times (which I think could be the most pragmatic way of building the experiment), is there a way in PsychoPy to have the Builder or Coder decide on each trial which jitter value is assigned to T1-5, but also ensure that every value is used, keeping the total ITI time at 10 s and hence the length of the trial at 19 s? If not, is there a way of having a pseudorandom presentation of these ITIs across trials?

I know that something like this is possible in E-Prime, but, for lots of reasons, we would like to remain open source. I really hope this makes more sense now!

OK thanks Tom, that is much clearer now. Handling the ITIs in the way you describe is straightforward. But unfortunately, due to a limitation in Builder with handling non-slip timing, I’d advise you to break with our normal recommendations and put all 6 stimuli and 5 ITIs within a single routine (normally we’d advise having a routine with a single stimulus and ITI and looping over it 5 times within a trial, but I don’t think that would maintain the precise timing you need over a 19 s period).

To handle the ITIs, I’d suggest that you insert a code component on your routine and put something like this in the “Begin Routine” tab:

ITIs = [1.0, 1.5, 2.0, 2.5, 3.0] # they sum to 10
shuffle(ITIs) # randomise order for this trial

Then for each of your ITI stimuli, simply enter ITIs.pop() in its duration field (with its update setting as ‘set every repeat’). That is, each ITI within the trial gets the next item from the randomly ordered list of values, and the total ITI duration within a trial will sum to 10 s.

Note that this sort of pseudo-randomisation might not provide the kind of trial-to-trial unpredictability you need. The code above could be easily extended to randomly sample from a set of such ITI sequences if needed. i.e. specify a list of candidate ITI sequences (i.e. a list of lists), randomly select one of those lists on each trial, then shuffle that list as above.
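
For instance, something along these lines in the “Begin Routine” tab (the candidate sequences below are just placeholders; each should sum to 10 s):

candidates = [[1.0, 1.5, 2.0, 2.5, 3.0],
              [0.5, 1.5, 2.0, 2.5, 3.5],
              [1.0, 1.0, 2.0, 3.0, 3.0]]
ITIs = list(candidates[randint(len(candidates))])  # numpy's randint: an index from 0 to len-1
shuffle(ITIs)  # randomise the order within the chosen sequence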

I guess you might also need to record the chosen sequence. Let me know how you would want that done (e.g. a separate column for each ITI from 1 through 5, or a single column containing a representation of the chosen sequence).
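
For example, if you keep a copy of the shuffled list before the .pop() calls consume it, either format is straightforward to save (a sketch only):

# Begin Routine, just after shuffle(ITIs): pop() takes items from the end of
# the list, so a reversed copy gives the ITIs in the order they will be used
ITIs_used = ITIs[::-1]

# End Routine: either one column per ITI position...
for i, iti in enumerate(ITIs_used):
    thisExp.addData('ITI_' + str(i + 1), iti)
# ...or a single column holding the whole sequence
thisExp.addData('ITI_sequence', ITIs_used)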

You should also test the timing performance by consulting the log files, in case you need to switch from specifying durations in time to counting by screen refreshes, but I suspect this time-based approach will work for you.
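
If you do end up needing frame counts, the conversion is simple, e.g. something like this (assuming a 60 Hz display; check what yours actually runs at):

duration = 2.5                        # an example ITI in seconds
n_frames = int(round(duration * 60))  # 150 frames on a 60 Hz display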

Thank you so much! That is really helpful. I have tried to build this as you say and I am stuck with a couple of things:

  1. Where you mention building in the ITIs, is that done with the static (ISI) component? That is what I have done so far, and I have put the code in as you describe. No blank screen comes up, but each of the stimulus slides is up for a varying amount of time (though this seems fixed to me). Should I be putting either a text or an image element between the stimulus slides?
  2. Have I set the experiment up correctly (see below)? Between each of the stimuli - which I have presented for 2 s - is an ISI, which seems to vary the amount of time that the stimulus image is shown for. A blank screen/fixation cross would be better, but this is still useful as it is.
  3. The readout after the experiment does say that some ISIs could not run when I increase the ITI times. I presume this is due to the screen refresh rate, so I may need to specify durations in frames instead of time (or adjust the ITI times so that they accord with a 60 Hz refresh rate)?

Nearly there - thank you!

Tom

Sorry, it also seems the best way to build this would be to have stimulus 1 start at time 0, ITI1 at 1.5 s, then stimulus two start after the end of ITI1, although that time is undefined if the ITI durations change every trial. ITI2 would then follow stimulus two and again would be undefined. How do I build that uncertainty into the onset times for the stimuli and ITIs?

Not necessarily: the ITIs can just be periods between your stimuli when nothing is being shown. That is, they are defined by the onset times of successive stimuli, i.e. you need to incorporate the ITI timing into the onsets of your stimuli. Currently your stimuli all seem to follow each other with no intervening ITI period.

e.g. your first stimulus would have a fixed onset time of 0 and a fixed duration of 2.0. The second would have a variable onset time of 2.0 + ITIs[0] and a duration of 2.0. The third stimulus onset time would be 2.0 + ITIs[0] + 2.0 + ITIs[1], etc.
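
One way to set that up (a sketch, assuming 2.0 s stimulus durations and the shuffled ITIs list from the earlier code component, with the separate ITI components removed) is to pre-compute the onset times in the “Begin Routine” tab and refer to them in each stimulus’s start field:

stim_dur = 2.0   # each stimulus lasts 2.0 s
onsets = [0.0]   # the first stimulus starts at time 0
for iti in ITIs:
    onsets.append(onsets[-1] + stim_dur + iti)
# then use $onsets[0], $onsets[1], ... as the start times of stimuli 1 to 6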

You can enter expected start values into the Builder interface so that the stimuli appear approximately correctly with intervals between them, even though those intervals will actually vary dynamically across trials.

Thank you! That is working perfectly!

Now the last thing… I promise! Ideally during training I could give feedback about how accurate they are being. I can get the reaction time up on screen using the feedback Code Component from your book (which is excellent btw). What would be great is if I could show them how far off they are from the 20 s by subtracting the reaction time from the trial time (20 s). I have had a go using a not-so-great video tutorial:

msg = "You were (thesum) off"

first = 20
second = int(format(resp.rt))
thesum = (first-second)

But I seem to be running into problems with strings, numbers, and integers which I can’t seem to get my head around. What I don’t want to do is reveal that the time of the sequence is 20 s (cos we’re mean). This would be the cherry on top, but you’ve already done so much already.

I’d suggest you round to one decimal place rather than truncate to an integer, so that they don’t think they are perfect when they are within 1 second. Remove the abs() function if you want them to know the direction of their error:

error = round(abs(20.0 - resp.rt), 1)  # round to 1 decimal place

If you are using the Python 3 version of PsychoPy, then your message could be constructed simply like this:

msg = f'You were {error} seconds off.'

In Python 2, it’s a bit more verbose:

msg = 'You were {error} seconds off.'.format(error = error)

Last question, I’m sure! I’ve used this code within the Builder under the Begin Routine tab, with $msg set within a text component and the drop-down set to ‘set every repeat’.

msg = 'You were {error} seconds off'.format(error = error)

error = round((20.0 - resp.rt), digits = 2)

But I get this error message, which I can’t seem to resolve by changing ‘set every repeat’ or checking for spelling errors, and the code is definitely within the Begin Routine tab.

msg = 'You were {error} seconds off'.format(error = error)
NameError: name 'error' is not defined

The error message tells you exactly what is happening. Python code is executed sequentially. So your variable called error needs to be defined before you can refer to it when constructing the variable msg. Swap the order of those two lines of code (as per my post above).

Similarly, the code component needs to be above your text component: that ensures that the variable msg is constructed before you attempt to refer to it in your text component.
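
i.e. the Begin Routine tab should read, in this order (a minimal sketch):

error = round(abs(20.0 - resp.rt), 1)                         # define error first
msg = 'You were {error} seconds off.'.format(error = error)   # then use it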

Thank you very much for all your help and patience. It all works perfectly now.