Continuous sound stimuli on left and right ear until response

For the experiment I’m currently working on in the Coder, I need to present high-pitch and low-pitch auditory stimuli to either the left or the right ear until a mouse response occurs.

Previously, we used external wave files which were panned to the left or right ear - however, these obviously have a fixed duration. And if you were to put them e.g. into a while loop, they’d be played repeatedly, i.e. with several on- and offsets, rather than as one continuous sound.

Hence, I thought it would be most practical to generate the sounds in PsychoPy using the sound.Sound() function, and make the duration equal to the maximum trial duration (3 seconds in this case). Maximum trial duration means: If no response occurs during this time period, the program will automatically advance to the next trial.

from psychopy import sound

left_low = sound.Sound(300, secs=3, sampleRate=44100, stereo=True)
left_high = sound.Sound(600, secs=3, sampleRate=44100, stereo=True)
right_low = sound.Sound(300, secs=3, sampleRate=44100, stereo=True)
right_high = sound.Sound(600, secs=3, sampleRate=44100, stereo=True)

There are two problems with this:

  1. How do I tell PsychoPy that a sound is to be played only in the left or only in the right ear?
    This feels like a really basic question, but I can’t find any overview online listing all the possible arguments for sound.Sound(). If I set “stereo” to “False”, the sound is only audible in the left ear, but I can’t find an equivalent for the right ear. :frowning:

  2. How can I make a sound of 3 seconds duration stop as soon as a response occurs? Making the duration of the sound equal to the maximum trial duration is easy enough, but that means if the trial ends earlier (i.e. as soon as the participant responds), the sound will keep playing until the full 3 seconds are over.

I already tried putting the left_high.play() command into a while loop to make it play indefinitely, but that just creates some unpleasant distortion. The sound merely plays a little longer, and then still stops automatically after a while, rather than in response to a mouse click.

Try these old threads from @AlexHolcombe for some initial pointers for creating independent sound channels:

https://groups.google.com/forum/m/#!topic/psychopy-users/QOx5_Lh541c

https://groups.google.com/forum/m/#!topic/psychopy-dev/NoSVA7ycBjM

Thanks for the quick reply! :slight_smile: Is there really no way of doing this with the sound.Sound() function?

Meanwhile, someone else told me about the sound.stop() function, so interrupting the stimulus when a response occurs is no longer a problem now! :slight_smile:
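In case it helps anyone else, this is roughly the pattern I mean, as a rough sketch (I’m assuming here that any mouse button counts as a response and that 3 seconds is the maximum trial duration):

from psychopy import visual, sound, event, core

win = visual.Window()  # mouse input in PsychoPy is collected via a window
tone = sound.Sound(600, secs=3, sampleRate=44100, stereo=True)  # 3 s = maximum trial duration
mouse = event.Mouse(win=win)
timer = core.Clock()

tone.play()
while timer.getTime() < 3 and not any(mouse.getPressed()):
    core.wait(0.001)  # poll for a mouse click roughly every millisecond
tone.stop()  # stops early after a click, otherwise right at the 3-second mark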

This person also told me it is possible to create e.g. a tone which is only played in the right ear by creating a stereo sound with an empty left channel (and vice versa).

So all I need to know now is what the argument of the sound.Sound() function is called where I can enter values for the left and right channel :slight_smile:.

Alternatively, we’d have to return to external wave files as stimuli, make them 3 seconds in duration, and then stop them with the sound.stop() function. The panning information (left/right) is already included in the external files.

Those threads are for doing exactly that. Instead of supplying a sound file to sound.Sound(), you provide an array of numbers specifying the sound, different in each channel. After all, a sound file is just a wrapper around exactly that information.

Again, as discussed in those threads above.

Unfortunately, the sound documentation at http://www.psychopy.org/api/sound.html seems to be broken. The docs are, however, generated from the comments in the source code, found in psychopy/sound/_base.py on the release branch of the psychopy repository on GitHub.

Note this section:

setSound(self, value, secs=0.5, octave=4, hamming=True, log=True)

Set the sound to be played.

Parameters:
value: can be a number, string or an array:
* If it’s a number between 37 and 32767 then a tone will be generated at that frequency in Hz.
* It could be a string for a note (‘A’, ‘Bfl’, ‘B’, ‘C’, ‘Csh’, …). Then you may want to specify which octave.
* Or a string could represent a filename in the current location, or mediaLocation, or a full path combo.
* Or by giving an Nx2 numpy array of floats (-1:1) you can specify the sound yourself as a waveform.
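So the value argument can take any of those forms when you create the sound. Roughly like this (a quick sketch, with a made-up filename for the file case):

from psychopy import sound
import numpy as np

beep = sound.Sound(440, secs=0.5)              # a number: pure tone at 440 Hz
note = sound.Sound('A', octave=4, secs=0.5)    # a note name, plus which octave
clip = sound.Sound('my_stimulus.wav')          # a filename (made up here)
buf = np.zeros((22050, 2))                     # an Nx2 array of floats between -1 and 1
silence = sound.Sound(value=buf, sampleRate=22050)  # one second of stereo silence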

For anyone needing more help with this issue, here’s the code I adapted from @AlexHolcombe’s Google Groups post here: https://groups.google.com/g/psychopy-dev/c/NoSVA7ycBjM?pli=1

The code below plays a low pitch to the left ear and a high pitch to the right ear.

Of course, it would be optimal if this could simply be done with something like (stereo=False, channel=“Left”). Until then, here’s the code:

import math
import numpy as np
from psychopy import sound

duration = 0.1       # seconds
sample_rate = 22500  # samples per second
freq_L = 200         # Hz, low pitch for the left ear
freq_R = 800         # Hz, high pitch for the right ear
bits = -16

n_samples = int(round(duration * sample_rate))
buf_L = np.zeros((n_samples, 2))  # stereo buffers: column 0 = left, column 1 = right
buf_R = np.zeros((n_samples, 2))

for s in range(n_samples):
    t = float(s) / sample_rate  # time in seconds
    val_L = math.sin(2 * math.pi * freq_L * t)  # spans from -1 to 1
    val_R = math.sin(2 * math.pi * freq_R * t)  # spans from -1 to 1
    buf_L[s][0] = val_L  # left channel carries the low tone
    buf_L[s][1] = 0      # right channel stays silent
    buf_R[s][0] = 0      # left channel stays silent
    buf_R[s][1] = val_R  # right channel carries the high tone

scue_lo = sound.Sound(value=buf_L, sampleRate=sample_rate, bits=bits)
scue_hi = sound.Sound(value=buf_R, sampleRate=sample_rate, bits=bits)
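As a side note, the same buffers could also be filled without the explicit loop by letting numpy compute the sine over a whole vector of time points; this is just an equivalent sketch of the loop above:

t = np.arange(n_samples) / float(sample_rate)  # time points in seconds
buf_L = np.zeros((n_samples, 2))
buf_R = np.zeros((n_samples, 2))
buf_L[:, 0] = np.sin(2 * np.pi * freq_L * t)   # tone in the left channel only
buf_R[:, 1] = np.sin(2 * np.pi * freq_R * t)   # tone in the right channel only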