
PsychoPy voice recording

If this template helps then use it. If not then just delete and start from scratch.

OS: Win10
PsychoPy version: 3 (2020.2.5)

Fariba Ghanbaryan ghanbaryan.info.res@gmail.com

Dear buddies,

I am designing a picture-naming experiment in PsychoPy 3 (v2020.2.5) from which I want to record the vocal RTs. I'm not good with programming at all; however, I've copied the code component from the PsychoPy demo for the word-naming experiment.

The problem is that when I add the code to my experiment, it doesn't run at all.

Each trial in my task includes: a fixation, a word (with two different SOAs), a noise, an image (which I want to be named), and an intertrial interval.

I have inserted the VoiceKey code in the instruction routine at the beginning of the experiment and in the routine with the image.

The experiment design is as follows:
Thesis Experiment.psyexp (50.5 KB)

I get this error. I followed the link but couldn’t fix the problem.
Alert 4210:JavaScript Syntax Error in ‘Begin JS Experiment’ tab. See ‘Line 1: Unexpected token’ in the ‘Begin JS Experiment’ tab.
For further info see 4210: Probable syntax error detected in your JavaScript code — PsychoPy v2021.2

What could be wrong in this design?

Could you please help me with this?

Thank you

If you’re running locally, what’s probably happened is that one of your code components is set to “auto-JS”, meaning the Python code is automatically translated to JavaScript for running online, and there’s some problem translating it. If you’re not running online then this isn’t needed, so changing it to just “Py” gets around this problem.

It’s also worth noting that in 2021.2.0 we made pretty substantial changes to the microphone component, making it a lot more stable, so it may also be worth updating to the latest version and giving it another go.

Dear TParsons

Sorry, I just checked your response.
Although none of the components were set to "auto-JS", it is fortunately running now. But the reaction times that vocal_RT reports in the .csv file are not valid; they are all around 0.19, 0.16, or 0.17.
And I don't know what could be wrong with that!

This is the begin experiment code:

The import and pyo_init should always come early on:

import psychopy.voicekey as vk
vk.pyo_init(rate=44100, buffersize=32)

What signaler class to use? Here just the demo signaler:

from psychopy.voicekey.demo_vks import DemoVoiceKeySignal as Signaler

To use a LabJack as a signaling device:

#from voicekey.signal.labjack_vks import LabJackU3VoiceKeySignal as Signaler

this is the begin routine code:

Create a voice-key to be used:

vpvk = vk.OnsetVoiceKey(
    sec=5,
    file_out='data/trial_' + str(Practice_Trials.thisN).zfill(2) + '_' + word + '.wav')
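(One thing worth checking with a file_out path like this: the recording will fail silently or raise an error if the data folder doesn't exist yet. A minimal sketch of building the same per-trial path safely, using made-up stand-ins for Practice_Trials.thisN and word:)

```python
import os

# Hypothetical sketch: build the per-trial .wav path using the same naming
# scheme as above ('data/trial_NN_word.wav'). trial_n and word are stand-ins
# for Practice_Trials.thisN and the loop variable in the actual experiment.
trial_n = 3
word = 'apple'

# Create the output folder first so the voicekey can write into it.
os.makedirs('data', exist_ok=True)

wav_path = os.path.join('data', 'trial_' + str(trial_n).zfill(2) + '_' + word + '.wav')
print(wav_path)
```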

Start it recording (and detecting):

vpvk.start() # non-blocking; don’t block when using Builder

and this is the end routine code:

The recorded sound is saved upon .stop() by default, but it's a good idea to call .stop() explicitly, e.g., if there's much slippage:

vpvk.stop()

Add the detected time into the PsychoPy data file:

thisExp.addData('vocal_RT', round(vpvk.event_onset, 5))
thisExp.addData('bad_baseline', vpvk.bad_baseline)
thisExp.addData('filename', vpvk.filename)
thisExp.nextEntry()
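(A small guard may also help when inspecting the .csv afterwards. This is a hypothetical sketch, assuming, as the demo suggests, that event_onset stays at 0 when no vocal onset was detected in the recording window; safe_rt is a made-up helper name, not part of the voicekey API:)

```python
# Hypothetical helper: log a rounded RT only when an onset was actually
# detected, so missing onsets show up as blanks rather than as 0.0.
def safe_rt(onset, ndigits=5):
    """Return a rounded RT, or None when no onset was detected."""
    if onset and onset > 0:
        return round(onset, ndigits)
    return None
```

Then in the end-routine code, `thisExp.addData('vocal_RT', safe_rt(vpvk.event_onset))` would leave the cell empty on trials with no detected onset.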

I have set the durations based on my screen refresh rate. I wonder whether PsychoPy can handle this many components in one trial and still report an accurate RT.
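(One possible source of small, near-constant RT values worth ruling out: event_onset is measured from the call to vpvk.start(), not from image onset. If the voicekey is started earlier in the routine, e.g. during the noise, the logged value would need correcting by that offset. A sketch of the arithmetic with made-up numbers; start_to_image and event_onset are illustrative stand-ins:)

```python
# Hypothetical timing correction: event_onset is relative to vpvk.start(),
# so subtract the delay between start() and image onset to get the RT
# measured from when the picture actually appeared.
start_to_image = 0.5   # stand-in: seconds from vpvk.start() to image onset
event_onset = 0.67     # stand-in for vpvk.event_onset

vocal_rt = round(event_onset - start_to_image, 5)
print(vocal_rt)
```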

I'll check out the microphone component too.
Thanks a bunch for your help