Inconsistent stimulus presentation time in PsychoPy Builder

OS Win10
PsychoPy version 1.84.04
Standard Standalone?: Yes
What are you trying to achieve?:

I’ve made a small experiment that is having a weird issue where the duration of stimulus presentation is inconsistent. I’ll quickly explain the premise:

The numbers 1-9 are rapidly presented on screen in a completely random order, each being on the screen for only 400 ms. The numbers 2, 5 and 8 are targets. When these are presented, participants press the space bar. This lasts for 250 trials. The timing of the stimuli is supposed to be constant, unaffected by user input (or lack thereof).

The problem is that not all stimuli appear on screen for an equal amount of time. Some are seemingly skipped, on screen for only a fraction of a second before being replaced. Others stay on for a full second before switching to the next number.

What did you try to make it work?:

I’ve tried various permutations of time and duration for the start/stop of both the stimulus (text) and the response window (hit). The text is read from $number in an Excel spreadsheet (a list of 1-9 with columns for target (yes/no) and correct response (spacebar/none)). Does anyone know why some trials are almost entirely skipped and others stay on screen far longer than 400 ms?

PS. Additionally, is there a way in the Builder to allow the response window of one trial to continue for 100-200 ms into the next trial? I was planning on scoring semi-manually via Excel for trials where the participant is correct but presses after the next trial has already started.

You need to describe for us exactly what you have done, i.e. give all the settings you are using for your text stimulus and keyboard components, whether they are in the same routine or separate, etc.

And what exactly is a “response window”?

The trials are all in the same routine; it’s just looped. There are no conditions and the trial order is completely random. Since each session of the experimental task lasts only about 1.5 minutes (repeated several times), there was no need to have separate blocks or moments for the participant to pause. I thought it would probably be clearest if I provided some pictures of the settings, but since I can only add 1 as a new user, I will try describing it…

The experiment has 2 routines, Instruction and Trial. The Trial routine has 2 components, Text (the stimulus) and hit (the response). Additionally, the loop uses loopType random, Is trials is checked, no random seed, 10 nReps, no selected rows, and for conditions it uses an Excel file.

The Excel file has 3 columns: the first column (Number) contains the numbers 1-9, the second column (target) has yes for 2, 5 and 8 and no for the other numbers, and the third column (corrAns) dictates whether the correct response is space (for 2, 5 and 8) or none (the rest).
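To make the layout explicit, here is a minimal sketch that would generate an equivalent conditions file as a CSV (the cell values follow my description above, and Builder also accepts .csv conditions files):

```python
import csv

targets = {2, 5, 8}
rows = [{"Number": n,
         "target": "yes" if n in targets else "no",
         "corrAns": "space" if n in targets else "none"}
        for n in range(1, 10)]

with open("conditions.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Number", "target", "corrAns"])
    writer.writeheader()
    writer.writerows(rows)
```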

The text component uses time (s), 0.0 as start and 0.4 as end. Color, font, height and position are unchanged and set to constant. The text field itself contains only $number, and is set to every repeat.

The hit component also uses time (s), 0.0 as start and 0.4 as end. Force end of routine is unchecked. Allowed keys are: ‘space’, ‘none’, set every repeat. Store all keys. Store correct is checked. Correct answer is $corrAns. Discard previous and sync RT are unchecked (but make no difference regarding the stimulus if checked).
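In Coder terms, those two components boil down to roughly the following loop per trial (a rough sketch only, not the actual Builder-generated code; `win`, `number` and `corrAns` are assumed to come from the surrounding script and the conditions loop):

```python
from psychopy import visual, core, event

win = visual.Window(fullscr=True)
stim = visual.TextStim(win, text='')
trial_clock = core.Clock()

def run_trial(number, corrAns):
    stim.text = str(number)              # $number, set every repeat
    event.clearEvents()
    trial_clock.reset()
    keys = []
    while trial_clock.getTime() < 0.4:   # 0.0 s start, 0.4 s end for both components
        stim.draw()
        win.flip()
        keys += event.getKeys(keyList=['space'])
    pressed = 'space' if keys else 'none'
    correct = (pressed == corrAns)       # "Store correct", compared against $corrAns
    return keys, correct
```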

The main problem is that every so many trials (not 9 or something logical like that), a trial seems to be skipped, only being visible for a moment, almost like it’s trying to catch up to something. Other trials just last too long. When watching the numbers flash by, it’s very obvious a few are on screen almost twice as long as the rest.

The response window is the period of time in which the participant can respond to a trial. Currently, the response window is identical to the stimulus presentation: during the (what should be) 400 ms that the stimulus is on the screen, the participant can respond by pressing the space bar. Once the new stimulus appears (and a new trial starts), the response window for the previous stimulus is closed and there is again 400 ms for the participant to respond to the new trial/stimulus. I was wondering if it was possible for the response window to span 200 ms of the current stimulus/trial and 200 ms of the next. As it stands, I will manually correct the cases in which a participant responds correctly (sees the target, presses the space bar) but reacts after, say, 450 ms. That will show up as a response 50 ms into the new trial, which might not contain a target and would therefore be counted as an error.

That is a good detailed description, which still leaves me a bit mystified on the performance issue though. Have you closed all other software other than PsychoPy?

On the response side, yes, it is best if you leave dealing with late responses to the analysis stage. i.e. you might change the way you deal with that later on, and that gives you more flexibility.
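For example, something along these lines could be run on the Builder data file afterwards (a sketch only; the column names `hit.keys`, `hit.rt` and `target` are what I would expect given your component and conditions names, and the file path is made up):

```python
import pandas as pd

df = pd.read_csv('data/participant_01.csv')  # Builder's trial-by-trial output

# Treat a space press in the first 200 ms of a trial as a late response
# to the *previous* trial, provided that previous trial was a target.
late = (df['hit.keys'] == 'space') & (df['hit.rt'] < 0.2)
prev_was_target = df['target'].shift(1) == 'yes'

df['late_hit_for_previous'] = late & prev_was_target
print(df['late_hit_for_previous'].sum(), 'responses re-assigned to the previous trial')
```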

Just a thought: could this be related to how the refresh rate of the monitor means that specifying stimulus durations in milliseconds is ultimately inconsistent?

http://www.psychopy.org/general/timing/millisecondPrecision.html

If that’s the case, stimulus presentation length should be specified as a number of frames instead.
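In Coder terms that would look something like this (a sketch, assuming `win` and `stim` are an already-created Window and TextStim):

```python
n_frames = 24            # ~400 ms on a 60 Hz monitor (0.4 * 60)
for frameN in range(n_frames):
    stim.draw()
    win.flip()           # each flip waits for the next screen refresh
```

In Builder, this corresponds to setting the component’s stop to duration (frames) rather than time (s).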

@Michael Yeah, it doesn’t seem to be a performance issue. I’ve tried it on several different set-ups and the effect is identical. You’re right about leaving the late responses for a later stage; I already have 2 different ideas on scoring them.

@Daniel You’re definitely onto something here! I’ve tried using a number of frames (currently 24 frames, the closest to 400 ms on a 60 Hz monitor) and it has partially solved the problem. Very few of the trials seem to last too long now, usually only after a skipped trial, which is great. However, it seems that around the 40th trial (I’ve run it and counted a few times) there is 1 trial that is only on screen for a fraction of the time, and is therefore basically skipped. It happens less often than before, but it still occurs. I’ve altered the number of frames per stimulus, but there’s always a skipped trial there. Any idea what could cause this?

Is the 40th trial always the same item? My question is, is this always the same stimulus, or does it always happen at the 40th time, regardless of the stimulus?

Not sure if it’s exactly the 40th, hard to count, but always around that. There is only 1 type of stimulus, a random presentation of the numbers 1-9. So yeah, regardless of which of the numbers it is, it happens.

There really isn’t much to go on here. Obviously give your experiment and conditions file a really good lookover. Start keeping notes about when it happens, study the output files, look for a pattern, maybe up the logging level in the experiment settings, and when it happens, check the log for any clues. Other than that, unless someone else has any better ideas, at this point I don’t know how we could help diagnose things.
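One concrete way to gather evidence (a sketch, with made-up file names) is to record the frame intervals and flag long frames, then see whether the slow flips line up with the skipped trials:

```python
from psychopy import visual, logging

logging.console.setLevel(logging.WARNING)   # report dropped frames as they happen
win = visual.Window(fullscr=True)
win.recordFrameIntervals = True             # store the duration of every frame
win.refreshThreshold = 1.0/60 + 0.004       # flag frames more than ~4 ms over 16.7 ms

# ... run the task ...

print('Dropped frames:', win.nDroppedFrames)
win.saveFrameIntervals(fileName='lastRun_frameIntervals.log', clear=True)
```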

If it were me, I might make a new experiment, and very carefully rebuild it based on the old one, paying close attention the whole time for anything that looks out of place, or for something I don’t understand.

I guess before that, turn off the internet, disable Dropbox and any other similar background daemons, anti-virus (since the internet’s off, don’t forget to turn it back on!). Maybe your system is causing some kind of burp in the flow?

I think this is very likely something in your code to do with how you’re updating stimuli or the use of an image that takes time to load but, as Daniel and Mike are suggesting, it’s impossible to give more info from this.

The general solution is to get down to a simpler script that is working correctly and see what additional lines just cause the timing to go bad so you can narrow down (for yourself and for us) where the problem might be.
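For instance, a stripped-down version along these lines (a minimal sketch, not your experiment) just flashes random digits for 24 frames each and prints the measured duration of every trial, so any trial far off 0.4 s is immediately visible:

```python
from psychopy import visual, core
import random

win = visual.Window(fullscr=True)
stim = visual.TextStim(win)
clock = core.Clock()

for trial in range(100):
    stim.text = str(random.randint(1, 9))
    clock.reset()
    for frameN in range(24):          # ~400 ms at 60 Hz
        stim.draw()
        win.flip()
    print('trial %3d: %.3f s' % (trial, clock.getTime()))

win.close()
core.quit()
```

Then add back pieces of your real experiment (the keyboard component, the conditions file, data saving) one at a time and see which addition makes the skipped trial reappear.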
