Save onset time of voice response with microphone response in Builder

If this template helps then use it. If not then just delete and start from scratch.

OS (e.g. Win10): windows 10
PsychoPy version (e.g. 1.84.x): v2020.2.4
Standard Standalone? (y/n) If not then what?:
**What are you trying to achieve?:** Save onset time of voice response with microphone response in Builder

**What did you try to make it work?:** I used the voice capture and word naming demos.

**What specifically went wrong when you tried that?:**

I am new to PsychoPy and also new to using this forum (or any forum for that matter…). I created an experiment in which the participant hears a word and then is supposed to repeat it. My experiment works, but the voice onset time is not saved in the data file. It does save the time that the microphone starts recording, but this does not tell me the response latency, as the mic starts recording at the beginning of the routine, at the same time the audio of the word starts to play. A workaround would be to have the microphone recording start when the voice starts, since the microphone onset is saved in the data, but I do not know how to do that either.

I’m not sure of the procedures for asking for help here, but I suppose it would help if you could see the experiment and the data files, so I have pasted a link to my Google Drive folder which contains the task.

Hopefully you can download these files, which include the experiment and the data file for a test subject. In the csv data file you can see column AB (header: Practice_repetition.started), which represents the onset time of the microphone recording but not of the voice response.

Let me know if I should do something else to ask for help.

Thank you,

Miguel

To understand the problem here you need to understand what PsychoPy sees when it records - as far as it is concerned, it’s just getting a meaningless stream of numbers starting at the point the microphone starts recording. Whether or not the participant is currently talking isn’t something PsychoPy knows, so it’s something we need to figure out from the data ourselves.

The most robust solution is just to manually go through each audio file and log the point when participants start talking, but I can see why you wouldn’t want to do this. One method may be to continuously call the getLoudness method and mark speech as having started when it exceeds a certain level - however, if the participant is in a noisy environment or makes a loud noise (e.g. adjusting their chair), this might give false readings.
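To make the loudness-threshold idea concrete, here is a minimal sketch of the underlying logic in plain Python (not PsychoPy’s actual getLoudness API): scan the recording in short frames and take the first frame whose RMS loudness exceeds a cutoff. The 0.05 threshold and 10 ms frame size are assumptions you would need to calibrate for your microphone and room.

```python
import math

def detect_voice_onset(samples, sample_rate, frame_ms=10, threshold=0.05):
    """Return the time (in seconds) of the first frame whose RMS loudness
    exceeds `threshold`, or None if the signal never gets that loud.

    `samples` are floats in [-1, 1]. The threshold is an assumption: noisy
    rooms need a higher value, quiet voices a lower one."""
    frame_len = max(1, int(sample_rate * frame_ms / 1000))
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / len(frame))
        if rms >= threshold:
            return start / sample_rate
    return None

# Demo with synthetic audio: 0.5 s of near-silence, then a loud tone.
rate = 1000
quiet = [0.001] * (rate // 2)
loud = [0.5 * math.sin(2 * math.pi * 10 * t / rate) for t in range(rate // 2)]
onset = detect_voice_onset(quiet + loud, rate)
print(onset)  # 0.5
```

This is exactly why a chair scrape gives a false reading: any frame above the threshold counts as “speech started”.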

There are also voice recognition packages for Python, such as SpeechRecognition, which can detect human voices amidst noise; however, this may be difficult to implement. The bare bones of the process would be:

  • Install the package
  • Add its location to paths in your PsychoPy prefs
  • Add a code component with import speech_recognition (the SpeechRecognition package imports under that name) in the Before Experiment tab
  • Add a code component with an if statement in the Each Frame tab, querying whether or not they are talking (according to the functions in this package)
  • Within this if statement, call thisExp.addData('speaking.started', t)
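The last two bullets could be sketched like this. `currently_speaking()` is a hypothetical stand-in for whatever detection function you end up using (it is stubbed here so the logic is runnable), and a plain dict stands in for Builder’s `thisExp`:

```python
# Hypothetical detector - in a real experiment this would query the speech
# package; here it simply reports that speech begins 0.3 s into the trial.
def currently_speaking(t):
    return t >= 0.3

data = {}              # stands in for thisExp in Builder
speech_started = False

# Builder runs the Each Frame tab once per screen refresh; we simulate that
# loop with t advancing in ~16.7 ms steps (60 Hz).
t = 0.0
while t < 1.0:
    if currently_speaking(t) and not speech_started:
        data['speaking.started'] = t   # thisExp.addData('speaking.started', t)
        speech_started = True          # guard: log the onset only once
    t += 1 / 60

print(data['speaking.started'])
```

The `speech_started` flag matters: without it, addData would be called on every frame after the participant starts talking, and you would only keep the last frame’s time.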

However, this is a fairly advanced solution, so I would not recommend it if you’re new to programming.

Thank you very much for your reply. Since I am new to PsychoPy and Python, implementing the SpeechRecognition package is a bit over my head.

A colleague pointed out a really easy-to-use web page that will provide voice onset times for all the .wav files you upload (CHRONSET :: An Automated Tool for Detecting Speech Onset). It seems to work well and saves you from scoring each file manually. I’m posting it here in case other people find it useful.
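If you ever want to script this locally instead of uploading files, a rough offline sketch of the same idea (nowhere near a CHRONSET replacement - it is a simple loudness threshold, and the 0.02 cutoff is an assumption to calibrate against a few hand-scored files) could look like this, using only Python’s standard library:

```python
import math
import os
import struct
import tempfile
import wave

def wav_onset(path, frame_ms=10, threshold=0.02):
    """Rough voice-onset estimate for a mono 16-bit .wav file: the time of
    the first frame whose RMS loudness exceeds `threshold` (an assumed
    cutoff - calibrate it against a few manually scored recordings)."""
    with wave.open(path, 'rb') as w:
        rate = w.getframerate()
        raw = w.readframes(w.getnframes())
    # Convert 16-bit little-endian samples to floats in [-1, 1].
    samples = [s / 32768 for s in struct.unpack('<%dh' % (len(raw) // 2), raw)]
    frame_len = max(1, int(rate * frame_ms / 1000))
    for start in range(0, len(samples), frame_len):
        frame = samples[start:start + frame_len]
        rms = math.sqrt(sum(x * x for x in frame) / len(frame))
        if rms >= threshold:
            return start / rate
    return None

# Demo: write a synthetic wav (0.4 s of silence, then a tone) and score it.
rate = 8000
silence = [0] * int(0.4 * rate)
tone = [int(8000 * math.sin(2 * math.pi * 440 * t / rate))
        for t in range(int(0.4 * rate))]
path = os.path.join(tempfile.gettempdir(), 'onset_demo.wav')
with wave.open(path, 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(struct.pack('<%dh' % len(silence + tone), *(silence + tone)))
print(wav_onset(path))  # 0.4
```

Pointing this at the folder of .wav files PsychoPy saves would give a first-pass onset column you could then spot-check by hand.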