Say I want to create an auditory signal detection experiment in which participants are asked to detect words masked by noise. To determine the appropriate noise mask for each participant, each participant would first complete a staircase procedure across a range of noise levels.
I can easily think of how to accomplish this using PsychoPy if:
(1) I am willing to pre-mix all of the words with every possible noise mask using sound editing software. Under this option I would only need to play one sound file at a time (on a single channel).
(2) I am fine with playing the sound files on two different audio channels. I know that PsychoPy can play two sound files simultaneously (or as many simultaneous sounds as there are audio channels available on my sound card). So I could instead play the word as one sound object and the mask as another, setting the volume of the mask dynamically with the staircase.
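To make Option 2 concrete, the staircase rule driving the mask volume is simple either way (PsychoPy's data.StairHandler implements this, but the logic itself is only a few lines). This is a hypothetical sketch in plain Python; the 2-down-1-up rule, the step size, and the function name are my own assumptions, not PsychoPy API:

```python
# Minimal 2-down-1-up staircase for the mask volume (hypothetical sketch;
# PsychoPy's data.StairHandler provides the same logic with more features).

def update_staircase(volume, n_correct, was_correct, step=0.05,
                     lo=0.0, hi=1.0):
    """Return (new_volume, new_n_correct) after one trial.

    2-down-1-up: raise the mask volume after two consecutive correct
    detections (harder trial), lower it after any miss (easier trial).
    """
    if was_correct:
        n_correct += 1
        if n_correct == 2:
            volume = min(hi, volume + step)
            n_correct = 0
    else:
        volume = max(lo, volume - step)
        n_correct = 0
    return volume, n_correct

# Example run: start at 0.5, then correct, correct, incorrect responses.
vol, streak = 0.5, 0
for correct in (True, True, False):
    vol, streak = update_staircase(vol, streak, correct)
```

In PsychoPy the returned volume would just be passed to the mask sound object's setVolume() before each trial.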
Option 1 is not ideal because it would require pre-mixing more than 700 word-mask combinations.
Option 2 might be OK, but I don’t know enough about audio mixing to be sure that playing the word and the mask on two separate channels is empirically equivalent to mixing the two and playing them on a single channel. My first thought is that mixing on a single channel might create modulations (in amplitude, etc.) that make the single-channel audio distinct from the two-channel audio; is that right?
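For what it's worth, my understanding is that digital mixing is just sample-wise addition: the pre-mixed file in Option 1 would contain, at each sample, the sum of the word and mask samples, and the only genuinely new artifact is clipping if the sum exceeds full scale. A toy sketch of that understanding (plain Python lists standing in for audio buffers; the ±1.0 full-scale range and the function name are assumptions for illustration):

```python
# Toy illustration: mixing two mono signals onto one channel is sample-wise
# addition (this is what a sound editor's "mix down" does). The only new
# artifact is hard clipping if the sum exceeds full scale (assumed +/-1.0).

def mix(word, mask, mask_volume=1.0, full_scale=1.0):
    """Return the single-channel mix of two equal-length sample lists."""
    assert len(word) == len(mask)
    # Attenuate the mask (as a staircase would), then sum sample by sample.
    mixed = [w + mask_volume * m for w, m in zip(word, mask)]
    # Hard-clip to full scale, as the output stage would.
    return [max(-full_scale, min(full_scale, s)) for s in mixed]

word = [0.5, -0.2, 0.8]
mask = [0.3, 0.1, 0.4]
print(mix(word, mask, mask_volume=0.5))  # mask attenuated before summing
```

If that picture is correct, then mixing in software before playback and mixing in a sound editor should produce the same samples, but I'd like confirmation from someone who knows the audio pipeline better.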
In PsychPortAudio, Psychtoolbox’s sound module, you can dynamically mix sounds on a single channel. To do this, PsychPortAudio lets you create a virtual slave audio device, which, as I understand it, is a virtual sound card (with an arbitrary number of channels) whose output is mixed down onto a single physical audio channel (see: http://docs.psychtoolbox.org/OpenSlave). To me this implies mixing on a single channel, similar to what would happen with Option 1 above.
So my questions are: Can PsychoPy mix audio on a single channel in the same way that PsychPortAudio can? And, more importantly, does it even matter? Is playing the word and mask on two distinct audio channels empirically equivalent to mixing them into one, or at the very least to what PsychPortAudio does when it creates a virtual slave?
Thanks in advance