Hello,
I realize this is a recurring question on this forum, but the solutions in previous threads seemed a bit ambiguous, so I decided to make a new post.
I am programming a pictorial Stroop task using the Builder in PsychoPy v3.1.5. In my targetScreen routine, a .png image file is presented (targetImage), and the participant is instructed to report the color of the image (trialResp). I would like to collect the RT of the vocal response using the Builder’s microphone component; I will not ultimately be analyzing the recording itself, only the onset time of the microphone response. I also inserted a keyboard response within this routine so the experimenter can code whether the correct color was reported (accuracy).
Based on my reading, it looks like psychopy.microphone will, by default, save each microphone response as a .wav file, with a naming convention of (object name)(unix time).wav. I see that this filename is also written to the output file by default, under the header objectname.filename. The actual start time for the vocal response, however, seems to be on a different scale than the .started values for the other objects in the routine. For instance, targetImage.started, the time when the picture was displayed, appears to be in seconds since the script began. I cannot tell what units the trialResp.started values are in, as they are all on the order of 1e-5.
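If the filename really does end in a unix timestamp, I assume I could recover it with something like the following (completely untested, and the example filename is just my guess at the exact format):

```python
import os
import re

def onset_from_filename(wav_path):
    # Pull the trailing number out of a name like 'trialResp_1565379583.237.wav'
    # and return it as unix seconds. Returns None if no trailing number is found.
    stem = os.path.splitext(os.path.basename(wav_path))[0]
    match = re.search(r'(\d+(?:\.\d+)?)$', stem)
    return float(match.group(1)) if match else None
```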
I was curious whether it is possible to get the vocal RT (i.e., the onset time of the microphone recording, in seconds) using the microphone object in Builder, or whether it would be necessary to add code to capture this (and if so, which values I would need). Because the microphone onset time appears to be in unix time, I thought one solution might be to convert the start time of the picture (targetImage.started) into unix time and then subtract it from the time when the microphone response was detected (which I assumed was returned as trialResp.started), though I may be way off base.
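Concretely, I was imagining a code component along these lines (again untested; trialResp.savedFile is my guess at the attribute holding the .wav path, and targetImage.tStart at the value behind the .started column):

```python
# Begin Experiment tab: note the unix time when the script starts, so that
# values on the experiment clock can be put on the same scale as the mic.
import time
expStartUnix = time.time()

# End Routine tab: compute the vocal RT on the unix scale, using the
# onset_from_filename() helper sketched above.
micOnsetUnix = onset_from_filename(trialResp.savedFile)
vocalRT = micOnsetUnix - (expStartUnix + targetImage.tStart)
thisExp.addData('vocalRT', vocalRT)
```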
Just to give a bit of context, I am using a microphone response in order to replicate an experiment we previously programmed in E-Prime. Collecting RT with a keyboard response seems much more straightforward in PsychoPy, but in this case I need the program to be compatible with data previously collected using a mic response. I was testing the program with a USB conference-room microphone that happened to be in the office, so it also occurred to me that the strange RT values for the vocal response might be due to oversensitivity of the microphone.
Thank you in advance for taking the time to review my post - the PsychoPy community has been a huge help as I’ve become more acquainted with programming in Python.
Cheers,
David Von Nordheim