URL of experiment: https://run.pavlovia.org/Hera/shortversion (shorter version of the experiment)
(Tips: click on the smiley to progress, put the cursor in the orange box to start a trial, and click in a grey box to give a response)
Link to gitlab code: https://gitlab.pavlovia.org/Hera/shortversion
Description of the problem:
PsychoPy version: 2021.2.0
Background info: We are running an online experiment that employs mouse-tracking and includes two conflict tasks (a visual-spatial Simon task and an audio-visual sentence task). There are three block types (a Simon-only block, a sentence-only block, and a mixed block where trials of the two tasks alternate), and each block is presented three times in random order.
The problem lies within the sentence (audiovisual) task.
In a sentence trial, one of four audio files, indicating the position of the object, plays, and the participant has to respond by choosing one of two simultaneously presented images, which show the object in different positions.
There are 8 different trial types (defined in the conditions_BothBlocks.xlsx file), corresponding to the different locations of the object presented visually and the corresponding sentence presented auditorily (L stands for left, R for right, Up for up, Dn for down, con for congruent, incon for incongruent; German words: jetzt = now, nicht = not, oben = up, unten = down).
The entire experiment runs online smoothly without any error messages; however, in some trials the sentence plays as normal, while in others it cannot be heard.
Through our own troubleshooting, we see that the problem always occurs in Mixed blocks (where the two tasks, Simon and sentence, alternate) and much less often (but still occasionally) in the sentence-only blocks.
Even more specifically, in the last run there were 4 silent sentence trials in the 1st Mixed block, 7 in the 2nd, and 9 in the 3rd.
Since the issue gets worse as the experiment progresses, we are concerned that it has to do with some sort of “memory leak”.
Audio files format: We have tried both wav and mp3 files and it doesn’t seem to make a difference.
Audio builder and coder components: We used a sound Builder component to present the auditory stimuli, but which audio file is used on each trial is set in a custom script, as you can see in the code component of the “Trial” routine → code_Both → Begin Routine.
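For readers unfamiliar with this setup, the per-trial selection in a Begin Routine tab amounts to something like the sketch below. The function and file names here are purely illustrative (the real mapping lives in conditions_BothBlocks.xlsx), not our actual code:

```python
# Hypothetical sketch of per-trial audio selection in a Begin Routine tab.
# Column names, file names, and the helper function are illustrative only;
# the real conditions come from conditions_BothBlocks.xlsx.

def pick_audio(word, position):
    """Map the sentence word (jetzt/nicht) and the object's position
    (oben/unten) to an audio file name for this trial."""
    return f"{word}_{position}.wav"

audio_file = pick_audio("jetzt", "oben")
print(audio_file)  # jetzt_oben.wav
# In Builder, this value would then feed the sound component, e.g.
# sound_sentence.setSound(audio_file)
```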
Thank you very very much for your attention and any advice or tips would be most welcome!
Please let me know if I can provide further information.
What is the size of each audio file?
Also, for local runs I would choose .wav; for online runs, .mp3.
Thanks for the suggestion! These are the sizes of the audio files (the first number is the .wav size; the number in parentheses is the .mp3 size):
170.6 KB (6 KB), 189 KB (7 KB), 207.4 KB (7 KB) and 248.9 KB (9 KB).
We (@dbryce) resolved this problem for now and thought we should share the information here in case others come across the same issue. We had three separate folders for different resources in our experiment: one for one set of visual stimuli, one for another set of visual stimuli, and one for our audio stimuli. When we removed all resources from these folders and placed them in the same folder as the .psyexp and JavaScript files, we could hear all our audio stimuli again. We guess the experiment was occasionally timing out when having to switch between folders trial by trial? We did not realise that the folder structure could cause such problems, so we thought this was useful information to share.
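To make the change concrete (folder and file names here are examples, not our actual ones): resource references that previously included a subfolder became bare file names once everything sat next to the .psyexp file, i.e. the subfolder was simply stripped:

```python
import os

# Illustrative only: what moving all stimuli next to the .psyexp file
# amounts to for the resource paths the experiment references.
def flatten_resource(path):
    """Return just the file name, dropping any resource subfolder."""
    return os.path.basename(path)

print(flatten_resource("audio_stimuli/jetzt_oben.wav"))  # jetzt_oben.wav
```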
We started running the experiment and got feedback from participants that there were still a significant number of silent trials, so we had to take the experiment down.
Apparently the problem has not been resolved after all, so any suggestions and advice would be greatly appreciated!
I think your issue could be related to some browsers needing time to initialize the sound hardware for audio output. In one of our experiments, we presented word lists to the participants. Sometimes the first word, or the first part of the first word, would simply not be audible to the participants.
For us, this was particularly evident when running the experiment in the Chrome browser; Firefox was less affected. I assume that Chrome somehow puts the audio device on standby when it has not been used for some time, and it then takes a moment to bring it back up. (This is just an assumption, given that the following hack solved the issue for us.)
We implemented the following hack, and with it we had no further reports of words not being played:
Just prior to the trials playing the word lists, we added a new routine as an ISI. Within this ISI, we added a sound component with ‘A’ as the sound, so that it plays the tone A for 1 second, and we set the volume to 0. With this hack, Chrome initialized the sound device so that it was ready once the word list was presented. This way, the first word of the list was also played perfectly.
Some things to note about this hack:
- despite the volume being set to 0, the A tone was still faintly audible
- we set the playback duration to 1 second based purely on testing, so there is no rationale behind it other than that it seemed to work reliably once we played the A tone for one second.
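For anyone curious what this warm-up tone amounts to numerically, here is a minimal sketch of the buffer it would produce, assuming a 44.1 kHz sample rate (an assumption on my part; in Builder you don't write this yourself, you just set the sound component to ‘A’, duration 1 s, volume 0):

```python
import numpy as np

# Sketch of the warm-up tone: 1 second of a 440 Hz 'A', scaled by volume.
# At volume 0 this idealized buffer is all zeros (in practice, as noted
# above, the tone was still faintly audible), yet playing it still forces
# the browser to initialize the audio device. Sample rate is assumed.
def warmup_tone(freq=440.0, secs=1.0, rate=44100, volume=0.0):
    t = np.arange(int(rate * secs)) / rate
    return volume * np.sin(2.0 * np.pi * freq * t)

buf = warmup_tone()
print(len(buf))    # 44100 samples = 1 second at 44.1 kHz
print(buf.max())   # 0.0 at volume 0
```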
I hope this helps,