Hi there, forum.
I am writing a steady-state visual evoked potential (SSVEP) experiment. In this first version, it involves flashing simple filled circles (visual.Circle) on the screen. Stimulus loading takes a cumbersome amount of time, and if I try to preload all of the stimuli in the experiment, I drop frames like crazy when presenting them.
Each frame is either empty or contains a set of filled circles. As many as 20 circles can be drawn on screen for a single flip(). Across the whole experiment, that adds up to roughly 40,000 circle draws.
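To put numbers on this, here is a back-of-the-envelope sketch using the counts above (30 Hz is the upper end of my presentation rates):

```python
# Rough scale of the problem: ~40,000 circle draws total,
# up to 20 circles per flip, presented at up to 30 Hz.
TOTAL_CIRCLES = 40_000
CIRCLES_PER_FLIP = 20
RATE_HZ = 30

flips = TOTAL_CIRCLES // CIRCLES_PER_FLIP   # minimum number of flips needed
frame_budget_ms = 1000 / RATE_HZ            # time available per flip
display_seconds = flips / RATE_HZ           # total on-screen time at 30 Hz

print(flips, round(frame_budget_ms, 1), round(display_seconds, 1))
```

So all the circles together amount to only around 2,000 flips and about a minute of actual display time, with roughly a 33 ms budget per flip.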
I have tried four approaches:
1. Preload all stims (i.e. create a visual.Circle object for every circle). PsychoPy crawls; frames are dropped all over.
2. Create 20 circles, and before each flip() adjust each one's radius and position and whether to draw it. This also drops frames; it can't keep up with the rates I'm displaying (15-30 Hz).
3. Pre-create all stims and write them out as PNGs, then load them as ImageStims during the experiment. This is also incredibly slow and drops frames, and it is space-intensive too: I would need about 4,000 images across the experiment.
4. Break the experiment into runs where the stims are preloaded in small chunks. With small enough chunks (e.g. a 10-second run with stims at 30 Hz), I can run without dropped frames. The downside is a lot of loading time before each 10-20 second run; I actually end up spending much more time loading and coordinating the loading of stims than displaying them. This is what I currently do, but it's not great.
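For concreteness, here is a simplified sketch of the bookkeeping behind the "adjust 20 circles each frame" approach: all per-frame circle parameters are precomputed up front, so the per-flip work is only attribute assignment on 20 reusable circle objects. The random numbers here are placeholder data, not my actual stimulus design, and the PsychoPy loop is shown as comments since the sketch has no window:

```python
import random

N_CIRCLES = 20   # reusable circle slots per frame
N_FRAMES = 300   # e.g. a 10 s run at 30 Hz

random.seed(0)

# One spec per circle per frame: (x, y, radius, visible).
# In the real experiment these would come from the SSVEP design.
frame_specs = [
    [(random.uniform(-1, 1), random.uniform(-1, 1),
      random.uniform(0.02, 0.1), random.random() < 0.5)
     for _ in range(N_CIRCLES)]
    for _ in range(N_FRAMES)
]

# Experiment loop with PsychoPy objects (commented out in this sketch):
# for spec in frame_specs:
#     for circle, (x, y, r, visible) in zip(circles, spec):
#         if visible:
#             circle.pos = (x, y)
#             circle.radius = r
#             circle.draw()
#     win.flip()

print(len(frame_specs), len(frame_specs[0]))
```

Even with everything precomputed like this, the per-flip attribute updates and draw() calls still drop frames for me at 15-30 Hz.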
Does anyone have ideas for how I might be able to optimize this paradigm?