OS (e.g. Win10): Win11
PsychoPy version (e.g. 2024.2.4 Py 3.8): 2024.2.4 Py 3.8
Standard Standalone Installation? (y/n) If not then what?: y
URL of experiment: attncap_gray [PsychoPy]
Do you want it to also run locally? (y/n): It already runs fine locally
What are you trying to achieve?: The experiment mostly runs fine, but there are capture stimuli (images of colored dots) that are supposed to appear briefly (for 50ms) around the bounding boxes of the search stimuli on each trial. Sometimes the colored dots do not appear at all on a trial. My suspicion is that frames are sometimes being dropped. I have used PsychoPy Builder to define the start (2.10) and stop time (2.15) of the dots on each trial (labeled box1, box2, box3, box4), and set the Image to update every repeat (e.g., $color1).
What did you try to make it work?: I found an older thread from 2016 about brief stimuli not appearing, and Jon had recommended not embedding the stimuli in a for loop. I went back and checked my code component and there are no for loops. However, I do have a code component where I define the list of images that will appear (a list of pngs) in the ‘Begin experiment’ tab, then I shuffle the list (shuffle(color_list)) and then set the color (color1=color_list[0]) in the ‘Begin Routine’ tab. Not sure if this is an inefficient way to do it.
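For reference, the code-component logic described above might look like the following sketch (the image filenames and the number of locations are assumptions, not taken from the actual experiment):

```python
# Sketch of the code component described in the post.
# In Builder, shuffle() is available directly; random is imported here
# so the sketch runs standalone.
import random

# "Begin Experiment" tab: define the pool of capture-stimulus images
# (placeholder filenames).
color_list = ["red_dots.png", "green_dots.png",
              "blue_dots.png", "yellow_dots.png"]

# "Begin Routine" tab: shuffle once per trial, then assign one image
# per location. The image components use $color1 etc., set to update
# every repeat.
random.shuffle(color_list)
color1 = color_list[0]
color2 = color_list[1]
color3 = color_list[2]
color4 = color_list[3]
```

Shuffling a short list and indexing into it is cheap, so this on its own is unlikely to cause dropped frames; the cost of loading the image files when the component updates each repeat is the more likely candidate.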
Link to the most relevant existing thread you have found:
What specifically went wrong when you tried that?: I do not have any for loops in my code component, but there may be something else that is inefficient and dropping the frames for some reason.
I just tried it and the coloured dots looked fine. Please could you show exactly how they are started and stopped? As you suggest, the most likely explanation is that occasionally the trial is delayed so that the dots are set to disappear before they appear. I would expect this to only happen on the first trial in the loop, unless you have inefficient code – are you recreating visual objects each time in Begin Routine?
No, I don’t think so, because all objects are created in the “trial” routine of the builder. The code component is just used to randomise which image appears in which location. The problem does not just happen on the first trial.
Also, attached is a screenshot of how the image timing is defined.
You are fixing the stop time at only .05 seconds after the start time. I would recommend changing the stop condition to a duration of 3 frames or .05 seconds instead. At the moment, if the start time is delayed for any reason, the stop time doesn't move with it, so you could end up with a shorter duration or no presentation at all.
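The difference can be illustrated with a small simulation (assuming a 60 Hz display; the 2.10/2.15 values are the Builder settings from the post):

```python
# Why a fixed stop time fails when onset is delayed, while a duration
# moves with the onset. Assumes a 60 Hz refresh rate.
FRAME = 1 / 60  # seconds per frame

scheduled_start, fixed_stop = 2.10, 2.15  # Builder settings from the post

def frames_shown(actual_start, stop_mode):
    """Count frames the stimulus is drawn for, given its actual onset."""
    if stop_mode == "time":           # stop clamped to the fixed stop time
        end = fixed_stop
    else:                             # stop = onset + 50 ms duration
        end = actual_start + 0.05
    return max(0, round((end - actual_start) / FRAME))

on_time      = frames_shown(2.10, "time")      # on schedule: 3 frames
delayed_time = frames_shown(2.14, "time")      # delayed, fixed stop: 1 frame
delayed_dur  = frames_shown(2.14, "duration")  # delayed, duration: still 3 frames
```

With a fixed stop time, a delay of 40 ms in onset cuts the presentation from 3 frames to 1 (and a delay past 2.15 s would cut it to zero, matching the "dots never appear" symptom); with a duration, the stimulus always gets its full 3 frames, just shifted later.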
I did as you suggested and changed the stop condition to a duration of .05, and now stimuli that should appear in succession are showing up on top of each other. I will try frames next, but I've had issues with lags when using frames before (which makes sense if the experiment is dropping frames a lot).
If anyone has found success consistently presenting rapid visual stimuli on Pavlovia I’d love to hear some tips! One option that is not ideal is to instruct participants not to have any other processes / applications running in the background while they do the task, but for an online study that is putting a lot of responsibility on the participant. So if there’s anything I can do from my end that would be optimal.
Edit: I found another thread that suggested extending the fixation time between trials. Notably, my issue only occurs on main experiment trials, not practice trials; the difference is that the practice trials have an extra feedback screen at the end, which gives the next trial's stimuli more time to load. I think this will be the solution!