I have prepared an audio task online (the so-called dichotic listening paradigm) where each stimulus consists of two different syllables presented at the same time. The stimuli are programmed as stereo, and in the offline experiment in PsychoPy they are presented that way, meaning that if you listen with only one ear, you hear just one syllable. However, when piloting the study online, I noticed that the stimuli are played as mono: they are presented to both ears at the same time, only slightly louder on one specific ear. This is very counterproductive to what I am aiming for. Does anyone have an idea how to make sure the stimuli are played in stereo? (I have them in .wav format.)
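In case it helps anyone preparing similar stimuli: here is a minimal sketch (Python standard-library `wave` module only; the function name and file paths are my own placeholders, not anything from PsychoPy) that interleaves two mono syllable recordings into one stereo .wav, so each syllable sits entirely in one channel:

```python
import wave

def merge_to_stereo(left_path, right_path, out_path):
    """Interleave two mono WAV files into one stereo WAV:
    left_path ends up in the left channel, right_path in the right.
    Assumes both inputs are mono with the same rate and sample width."""
    with wave.open(left_path, "rb") as l, wave.open(right_path, "rb") as r:
        assert l.getnchannels() == 1 and r.getnchannels() == 1
        assert l.getframerate() == r.getframerate()
        assert l.getsampwidth() == r.getsampwidth()
        width, rate = l.getsampwidth(), l.getframerate()
        n = min(l.getnframes(), r.getnframes())
        ldata, rdata = l.readframes(n), r.readframes(n)

    # Interleave sample by sample: L0 R0 L1 R1 ...
    frames = bytearray()
    for i in range(n):
        frames += ldata[i * width:(i + 1) * width]
        frames += rdata[i * width:(i + 1) * width]

    with wave.open(out_path, "wb") as out:
        out.setnchannels(2)
        out.setsampwidth(width)
        out.setframerate(rate)
        out.writeframes(bytes(frames))
```

Opening the result in an audio editor (e.g. Audacity) is a good sanity check that the channels really are separated before uploading.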
This might be a browser-specific issue. Generally, Chrome gives good results (I just checked on Windows 10). Could you ask participants to try that browser?
Thanks for your quick feedback. Unfortunately, I've now tried both Chrome and Firefox; the stimuli are still being presented in mono rather than stereo. Do you have any other ideas?
I’m confused as well. I am working on Windows 10 and tried both Firefox and Chrome!
I feel like I’m failing to give you a key piece of information here, but I really don’t know what it is. I’ll explore more and get back to you if I have news. Thanks anyway for your help!!!
You’re welcome; hope you’ll find out what’s up. To make testing easier I made a little update to the stereo demo I linked to earlier: this version has me saying “left ear” in the left channel followed by “right ear” in the right channel.
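For anyone who wants to verify the stimulus file itself (rather than the browser/OS playback chain), here is a small diagnostic sketch of my own, assuming a 16-bit stereo .wav, that reports the RMS level per channel; a properly lateralized stimulus should show (near-)zero on the silent side:

```python
import struct
import wave

def channel_rms(path):
    """Return (left_rms, right_rms) for a 16-bit stereo WAV file.
    A stimulus meant for one ear only should show ~0 on the other side."""
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 2 and w.getsampwidth() == 2
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    left, right = samples[0::2], samples[1::2]  # interleaved L R L R ...

    def rms(channel):
        return (sum(s * s for s in channel) / len(channel)) ** 0.5

    return rms(left), rms(right)
```

If both channels show similar RMS for a file that should be one-sided, the problem is in the stimulus file, not in the browser.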
For me, hearing the sound in both ears instead of just one only happens when three things coincide:
Headphones
Javascript (local or online)
The box called “Enable audio enhancements” is ticked on Windows 10
This last option is called ‘Activer les améliorations audio’ in French; I’m not sure of the exact path in English, but it is something like Speakers/Headphones Properties → Advanced → Signal enhancements
However, it seems complicated to explain to each online participant how to untick this box. Does anyone have an idea why the behaviour differs between Python and JS, and how to ‘control’ it?
Yes, I’m not sure about the formal definition, but for me:
mono is the same sound played in both ears
stereo is two different sounds, one played in each ear. Hence, if you silence one channel, you hear only one sound in one ear and nothing in the other, not even a quieter version.
Does that make sense to you? @kmh, is that correct?
Hi guys - @Lex, this is how I understand mono and stereo as well.
A short update from my side: I have tried out numerous combinations of headphones and browsers and realized that the mono/stereo separation is better when using over-ear headphones in Chrome or Firefox. It is still not completely separated, but overall much better. I don’t know if this helps anyone else, but at least I can work with it now!
Happy it’s working out for you! For my tests I use:
In-ear and over-ear headphones, connected via Bluetooth. This ensures there are no issues with the jacks (but I wouldn’t recommend it in an actual experiment; Bluetooth can be a bit flaky).
In my demo (link earlier in this thread) the left-right sound is achieved via a stereo sample (not any setting in PsychoPy)
Sound enhancements at the level of the operating system (Windows 10/macOS) can’t be bypassed via the browser, I’m afraid. I’d be amazed if PsychoPy could bypass them, by the way.
Sorry for the late reply. I’ve got a hunch: applications tend to run in “sandboxes”, which means they are limited in which resources they can access. There are a couple of reasons for this, one important one being security. Python runs on the operating system, while JS runs in the browser, and the operating system tends to allow much more than a browser does.
I know nothing about Python and PsychoPy and am trying my best to learn more. I need to create an online experiment that requires stereo output presented at random. May I know
whether it’s possible to do this within PsychoPy itself, e.g. by typing out some code, or
if not, how do I embed the stereo sound file inside PsychoPy, like you did? Can it be an MP3 file converted from Logic Pro?
I think that in PsychoPy you can play a mono sample and set the panning so that it plays through the left or right speaker only (not 100% sure about this; I focus on online stuff). In PsychoJS this approach does not work yet, but you can have it play a stereo sample: for example, make one stereo sample with sound only in the left channel and another with sound only in the right channel.
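As a plain-Python illustration of that last suggestion (standard library only, no PsychoPy/PsychoJS API involved; the function name is my own), this writes a stereo .wav with a test tone in the left channel and silence in the right. The resulting file can then be played in PsychoJS like any other stereo sample:

```python
import math
import struct
import wave

def write_left_only_tone(path, freq=440.0, seconds=1.0, rate=44100):
    """Write a 16-bit stereo WAV with a sine tone on the left channel
    and digital silence on the right. If the playback chain preserves
    stereo, this should be audible in the left ear only."""
    frames = bytearray()
    for i in range(int(seconds * rate)):
        sample = int(32767 * 0.5 * math.sin(2 * math.pi * freq * i / rate))
        frames += struct.pack("<hh", sample, 0)  # (left, right) pair
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))
```

Swapping the tuple to `(0, sample)` gives the right-channel counterpart.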
Sure, as long as the output is an MP3. See my demo above for an example of how to plug it into PsychoJS.