Dear PsychoPy community,
I am moving from PTB3 to PsychoPy and I have some questions about presenting auditory stimuli in this nice toolbox.
While visual stimulus presentation seems quite mature in PsychoPy, I find it a bit confusing what the best way to present auditory stimuli is. As far as I understand, pygame is currently the most reliable audio library, but it suffers from latency in the form of a large, roughly constant temporal offset. A systematic bias in time is not a major issue by itself, since it can be corrected by shifting the scheduled onset of the stimulus; what I don't know is how variable the presentation time is from trial to trial. Does anyone have experience programming EEG experiments (where high temporal precision is required) in PsychoPy with the different audio libraries? I would be very grateful if you could share what latencies (bias and jitter) you are achieving and what setup you are using (Windows, Linux, macOS, a dedicated sound card, etc.). (I'm sorry, I have not yet had the opportunity to plug in an oscilloscope to evaluate the timing myself.)
At the moment I am running some tests with sounddevice (on a MacBook Pro, macOS Mojave, Python 3), but I have hit two issues that I don't know how to solve:
1. I need to recreate the pure tone each time I want to play it; otherwise it plays only once.
2. Playback is frequently accompanied by an unpleasant crackling on top of the tone (it does not sound clean).
Neither problem occurs if I use the pygame library instead :S. Is there any way to use sounddevice (which in theory provides more precise and accurate timing) without these issues?
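For what it's worth, here is a minimal sketch of how I would generate and replay a tone with the sounddevice library directly (not through PsychoPy's wrapper). This is only an illustration of two guesses, not a confirmed diagnosis of the issues above: (a) a plain NumPy array can be handed to `sd.play()` as many times as you like, so re-creating the stimulus on every trial should not be necessary at this level; (b) crackling at tone onset/offset is often caused by an un-ramped waveform starting or ending at a non-zero sample, which short raised-cosine ramps remove. The function name `pure_tone` and all parameter values are my own choices.

```python
import numpy as np

def pure_tone(freq=440.0, dur=0.2, fs=44100, ramp_ms=5.0):
    """Pure tone with raised-cosine on/off ramps to avoid onset/offset clicks."""
    n = int(fs * dur)
    t = np.arange(n) / fs
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    # 5 ms raised-cosine ramps: fade in at the start, fade out at the end
    nr = int(fs * ramp_ms / 1000)
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(nr) / nr))  # goes 0 -> ~1
    tone[:nr] *= ramp
    tone[-nr:] *= ramp[::-1]
    return tone.astype('float32')

# Playback (run on a machine with an audio output device):
# import sounddevice as sd
# fs = 44100
# tone = pure_tone(fs=fs)
# for _ in range(3):               # the same array can be replayed freely
#     sd.play(tone, fs, blocking=True)
```

If the crackling persists even with ramped stimuli, it may instead be buffer underruns, in which case increasing the stream's block size or latency setting in sounddevice would be the next thing I'd try.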
Thank you very much.