OS: Windows 10
PsychoPy version: v2022.2.5
Standard Standalone? (y/n) : y
We are currently developing a simple audiovisual paradigm for a MEG study using the PsychoPy Builder. The study runs smoothly with no major issues: we can present the visual (mp4 video) and auditory (wav file) stimuli, and we can send triggers through the parallel port. Trigger onsets are always defined by conditions (e.g. `myMovie.status == STARTED`), and we use the ptb audio library with the latency mode on the high (3) setting. Every trial begins with a >500 ms ISI component, which I assume is enough time to load all the files correctly. No custom code is associated with stimulus presentation or triggers.
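For context, here is a hedged sketch of what our Builder setup amounts to (not the exact generated code; the port address is rig-specific and `myMovie`/`trigger_sent` stand in for the Builder's own variables): a parallel-port trigger whose onset is conditioned on the movie having started.

```python
# Rough sketch of the trigger logic the Builder produces for us.
# NOTE: 0x0378 is a common parallel-port address but is an assumption here.
from psychopy import parallel

port = parallel.ParallelPort(address=0x0378)  # rig-specific address
trigger_sent = False

# ... inside the trial loop, checked every frame:
if myMovie.status == STARTED and not trigger_sent:
    port.setData(4)        # raise the trigger line(s)
    trigger_sent = True
# (a later frame calls port.setData(0) to clear the lines)
```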
We then used an oscilloscope, together with a photodiode (for video) and an audio tap, to measure the delay between each trigger and the corresponding stimulus onset. For the video stimuli the timing is adequate: there is some lag, but very little jitter.
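To be explicit about what I mean by "lag" and "jitter": for each trial I take (stimulus onset − trigger onset) off the scope and summarize the mean (lag) and spread (jitter). A minimal sketch, with made-up delay values for illustration only:

```python
import statistics

# Hypothetical trigger-to-onset delays in milliseconds, one per trial,
# as read off the oscilloscope (negative = sound started BEFORE trigger).
delays_ms = [12.1, -4.3, 8.7, -15.0, 3.2, 22.8, -1.1, 9.5]

lag = statistics.mean(delays_ms)             # average lag
jitter_sd = statistics.stdev(delays_ms)      # trial-to-trial jitter (SD)
jitter_pp = max(delays_ms) - min(delays_ms)  # peak-to-peak jitter

print(f"lag = {lag:.1f} ms, jitter SD = {jitter_sd:.1f} ms, "
      f"peak-to-peak = {jitter_pp:.1f} ms")
```

A constant lag can be corrected offline in the MEG analysis; it is the jitter (and the sign flips) that worries us.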
However, for the auditory stimuli we see a lot of jitter (sometimes tens of milliseconds) and often a negative lag (the sound onset occurs before the trigger is sent). I have read all the PsychoPy documentation and tweaked every parameter I can think of, but I have never managed to reach the sub-millisecond precision promised on the PsychoPy website.
I have also tried a "barebones" task with just a constant sound played at regular intervals: it helps a little, but I still get quite high jitter (±5 ms).
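One thing I have not yet tried in the barebones task is the PTB backend's scheduled playback, where `play(when=...)` hands the sound card a future clock time instead of starting "now". A minimal sketch, assuming the ptb library is selected and ignoring our real trigger values (this is untested on our rig):

```python
# Sketch: schedule the sound at a known future PTB clock time, then send
# the trigger at that same time, instead of triggering on sound.status.
from psychopy import prefs
prefs.hardware['audioLib'] = ['PTB']     # psychtoolbox audio backend
prefs.hardware['audioLatencyMode'] = 3   # the "high (3)" setting we use

from psychopy import sound
import psychtoolbox as ptb

beep = sound.Sound('A', secs=0.5)

t_play = ptb.GetSecs() + 0.5   # schedule onset 500 ms in the future
beep.play(when=t_play)         # PTB starts playback at t_play

# Busy-wait until the scheduled onset, then raise the parallel-port
# trigger (port setup as elsewhere in the experiment):
while ptb.GetSecs() < t_play:
    pass
# port.setData(1)
```

If anyone can confirm this is the right pattern for Builder-based experiments (e.g. via a code component), that would already help.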
Anecdotally, I have the same issue, notably the negative lag, with a different auditory paradigm on a completely different setup.
I don’t know where to go from there. Is it possible to have really good timing precision with a PsychoPy experiment made using the Builder? Am I missing something crucial?