Recording vocal RT using the microphone routine in Builder

Hello,

I realize this is a recurring question on this forum, but the solutions in previous threads seemed a bit ambiguous, so I decided to make a new post.

I am programming a pictorial Stroop task using the Builder in PsychoPy v3.1.5. In my targetScreen routine, a .png image file is presented (targetImage), and the participant is instructed to report the color of the image aloud (trialResp). I would like to collect the RT of the vocal response using the Builder's microphone component; I will not ultimately be analyzing the recording itself, only the onset time of the microphone response. I also inserted a keyboard component within this routine so the experimenter can code whether the correct color was reported (accuracy).

Based on my reading, psychopy.microphone will, by default, save each microphone response as a .wav file, with a naming convention of (object name)(Unix time).wav. This filename is also written to the output file by default, under the header objectname.filename. The actual start time for the vocal response, however, seems to be on a different scale than the .started values for the other objects in the routine. For instance, targetImage.started, the time when the picture was displayed, appears to be in seconds since the script began. I cannot tell what units the trialResp.started values are in, as they all appear to be on the order of 10^-5.

I was curious whether it is possible to get the vocal RT (i.e., the onset time of the microphone recording, in seconds) using the microphone object in Builder, or whether it would be necessary to add code to capture this (and if so, which values I would need). Because the microphone onset time appears to be in Unix time, I thought one solution might be to convert the start time of the picture (targetImage.started) into Unix time, and then subtract this from the time when the microphone response was detected (which I assumed was returned as trialResp.started), along the lines of the sketch below. Although I may be way off-base.
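To make the idea concrete, here is an untested sketch of the conversion I have in mind. I'm assuming the timestamp in the .wav filename is standard Unix time from time.time(); image_started and mic_unix are hypothetical names for the targetImage.started value and the timestamp parsed from the recording's filename:

import time
from psychopy import core

# Sample both clocks at (nearly) the same moment to estimate their offset;
# core.monotonicClock counts seconds since the script began, which is what
# the .started columns appear to use.
offset = time.time() - core.monotonicClock.getTime()

# Convert the picture onset into Unix time, then subtract it from the
# microphone's Unix timestamp to get the vocal RT in seconds.
image_onset_unix = image_started + offset
vocal_RT = mic_unix - image_onset_unix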

Just to give a bit of context, I am using a microphone response in order to replicate an experiment we previously programmed in E-Prime. It seems like collecting RT with a keyboard response is much more straightforward in PsychoPy, although in this case I need the program to be compatible with data previously collected using a mic response. I was testing this program out using a USB conference room microphone that happened to be in the office, so I thought it also might be possible that the strange RT values for the vocal response were due to oversensitivity of the microphone.

Thank you in advance for taking the time to review my post - the PsychoPy community has been a huge help as I’ve become more acquainted with programming in Python.

Cheers,
David Von Nordheim

Hi, I think I’ve arrived at the solution so I wanted to share it in case it helps.

After reviewing the word_naming demo program, I decided to use psychopy.voicekey to collect RT instead of psychopy.microphone. I was able to collect vocal RT after adding the following code components, which were pulled from the word_naming demo. (Note: the comments include the annotations from the demo, as well as some of my own.)

Here is the URL for the word_naming demo if you need it: https://github.com/psychopy/psychopy/tree/master/psychopy/demos/builder/word_naming

In a routine at the beginning of the experiment:

Begin Experiment

# The import and pyo_init should always come early on:
import psychopy.voicekey as vk
vk.pyo_init(rate=44100, buffersize=256)

# What signaler class to use? Here just the demo signaler:
from psychopy.voicekey.demo_vks import DemoVoiceKeySignal as Signaler
# To use a LabJack as a signaling device instead:
# from voicekey.signal.labjack_vks import LabJackU3VoiceKeySignal as Signaler
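(One thing I noticed: the Signaler import comes straight from the demo, but nothing in the code below actually uses it, so the demo signaler is fine as a placeholder unless you need hardware signaling.)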

In the routine where RT is collected:

Begin Routine


# Create a voice-key to be used. Change "trials" to reflect the name of your loop.
# (In the demo, the stimulus word was also concatenated into the filename.)
vpvk = vk.OnsetVoiceKey(
    sec=2,
    file_out='data/trial_' + str(trials.thisN).zfill(3) + '.wav')

# Start it recording (and detecting). Builder creates a clock for each routine,
# named <routineName>Clock, so change targetScreenClock to match your routine:
vpvk.start()  # non-blocking; don't block when using Builder
vpvk.tStart = targetScreenClock.getTime()

End Routine

# The recorded sound is saved upon .stop() by default, but
# it's a good idea to call .stop() explicitly, e.g., if there's much slippage:

vpvk.stop()

# Add the detected values into the PsychoPy data file. (Note: the Builder's
# experiment handler is named thisExp, with a capital E.)
# vocal RT: the detected voice onset, in seconds relative to the start of
# the recording (as I understand the demo)
thisExp.addData('vocal_RT', round(vpvk.event_onset, 3))
# flags a noisy baseline period, I believe, which can make the detected
# onset unreliable
thisExp.addData('bad_baseline', vpvk.bad_baseline)
# name of the .wav file that will be written (the sound recording)
thisExp.addData('filename', vpvk.filename)
# routine-clock time when the recording began
thisExp.addData('recordOnset', vpvk.tStart)
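To get the RT measured from picture onset rather than from the start of the recording, the two values above can be combined. This is just a sketch based on my assumptions that event_onset counts from when the recording started and that targetImage appears at time 0.0 of the routine (so vpvk.tStart is the lag between image onset and recording onset):

# Hypothetical: voice onset relative to picture onset, assuming the image
# appears at routine time 0.0 and event_onset counts from recording start.
thisExp.addData('vocal_RT_from_image', round(vpvk.tStart + vpvk.event_onset, 3))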


Hello @dvonnordheim,

I am trying to use the word_naming task to build a picture naming task, but I do not understand the values of the RT in the data files. Also, the word_naming task doesn't seem to save all the responses as audio files when more than one participant completes the task. Can you help me understand these aspects of the word_naming task? Thank you.

Thank you for sharing your solution :blush:

@dvonnordheim
Also, why doesn't the word_naming task (available from the demos menu of the PsychoPy Builder) ask for a participant number?

Hi, for this the 'Show info dialog' option needs to be selected in the Experiment Settings :slight_smile:

aha, yes that worked!

Link update: the path to the demo has changed.

Hi BillNumbers,

Do you have a working link to this demo? Looks like the one you posted a few months back is no longer functional.

Thanks,
FG


Hi David,

This is such a great resource. I’m just wondering whether you have a sample picture naming experiment that you currently run to record vocal responses and calculate RT as you described here in 2019.

I’ve tried using the code you suggested below but have found that I’m missing a library.

Any pointers you have would be much appreciated.

Thanks!
FG

Hello @dvonnordheim, thank you for your solution! But when I ran the experiment I received two errors:

  1. a pyo warning that there are no MIDI devices
  2. targetScreenClock is not defined

I am new to PsychoPy and have no idea how to solve this; could you help me with it?

Hello, I have the same problem as you. Have you figured it out? Would you mind sharing the solution?