Getting time-stamped keypresses during sound playback (pygame backend)

Hi everyone,

I'm trying to get the time of a keypress before, during and after a sound (100 ms duration). My code looks like this:

# Import modules:
from __future__ import division, print_function, unicode_literals, absolute_import
from psychopy import visual, event, core, prefs, gui
import numpy as np
import sys, pyglet, pygame, random

prefs.general['audioLib']=['pygame']

from psychopy import sound

# Create auditory stimuli and array:
audio_stim = sound.Sound(value='c', secs=0.1, octave=4) 


# number of trials depends on choice in GUI:
audio_stimuli_per_block = [audio_stim] * int(subject[8])


# Define Trial Run for stimulus array:
def auditorySyncTap_doTrial_1200(audio_stimuli_per_block): 

    for x in range(len(audio_stimuli_per_block)):

        event.clearEvents()
        trial_clock.reset() 
        tap_time_auditory = 0
        stim_time_auditory = 0
        IOI_auditory = 1200
        
        # How long we capture taps before the auditory stimulus depends on the chosen "Recording Window":

        if subject[7] == "50":
            core.wait(.6, hogCPUperiod=.6) # wait 600 ms before stimulus onset to capture preceding taps (== -50% of IOI)

        elif subject[7] == "25":
            core.wait(.3, hogCPUperiod=.3) # wait and buffer 300 ms before stimulus onset to capture early taps (== -25% of IOI)

        # get preceding keypresses:
        key = event.getKeys(timeStamped=trial_clock)
        
        for y in key:
            if y[0]=='space':
                tap_time_auditory=y[1]

        audio_stimuli_per_block[x].play()
        stim_time_auditory=trial_clock.getTime()

        # get keypresses during stimulus
        key = event.getKeys(timeStamped=trial_clock)
        
        for y in key:
            if y[0]=='space':
                tap_time_auditory=y[1]

        # How long we capture taps after the auditory stimulus depends on the chosen "Recording Window":
        
        if subject[7] == "50":
            core.wait(.6, hogCPUperiod=.6) # wait 600 ms after stimulus onset to capture taps (== +50% of IOI)
        
        elif subject[7] == "25":
            core.wait(.3, hogCPUperiod=.3) # wait 300 ms after stimulus onset to capture taps (== +25% of IOI)

        # get keypresses after stimulus:     
        key = event.getKeys(timeStamped=trial_clock)
        
        for y in key:
            if y[0]=='space':
                tap_time_auditory=y[1]
        
        # if we chose the 25% IOI Recording Window, we wait another half IOI without capturing taps:

        if subject[7] == "25":
            core.wait(.6) # wait for 600 ms more without capturing keypresses
        
        stim_tap_interval_auditory = tap_time_auditory - stim_time_auditory
 
        # write logfile after each trial:      
        logfile_auditory.write(out_string_auditory % (int(subject[0]),int(subject[1]),subject[2],int(subject[3]), subject[4], subject[5], subject[6], int(subject[7]), int(subject[8]), IOI_auditory, stim_time_auditory, tap_time_auditory, stim_tap_interval_auditory))
        logfile_auditory.flush()
        event.clearEvents()

In my data I see that no keypresses are accessible during stimulus presentation. Has anyone encountered the same problem, and does anyone have a suggestion for how I could access keypress times while my sound is playing?

Thank you very much in advance.

Cheers,
Carrie

Hi,

Just as a style point, you might find it useful to move to more Pythonic ways of running your loops: these look like they are influenced by previous experience with other programming languages. E.g. instead of:

for x in range(len(audio_stimuli_per_block)):
    # other stuff, then:
    audio_stimuli_per_block[x].play()

simply do this:

for stimulus in audio_stimuli_per_block:
    # other stuff, then:
    stimulus.play()

i.e. in Python we can often avoid the need for keeping track of a loop index, and that can simplify the readability of the code.

Now for the actual problem: it is a little unclear what “no keypresses are accessible” actually means, but I suspect it is due to your code structure. At the moment you are effectively checking for a keypress only three times per trial, at instantaneous points in time, i.e. event.getKeys() is an instantaneous check. If no key has been pressed since the start of the trial (when you clear the event queue) or since the last call to event.getKeys(), then the code will simply continue.

During those core.wait() periods, no keypresses can be detected in real time. If a key was pressed, it should be found waiting in the queue, but the timing that gets returned to you will be incorrect: the time stamp from event.getKeys() is the time that you issue the check (i.e. the time you pull the event out of the queue), not the time that the key was actually pressed. So event.getKeys() really needs to be used in a tight loop, so that presses are detected soon after they occur.
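
For example, rather than a single core.wait() followed by one check, the waiting period itself can be a polling loop, something like this (just an untested sketch, reusing the trial_clock, the 600 ms window and the tap_time_auditory variable from your code above):

window_end = trial_clock.getTime() + 0.6  # poll for 600 ms instead of one long core.wait()
while trial_clock.getTime() < window_end:
    # each pass empties the event queue, so time stamps are close to the actual press times:
    for key_name, key_time in event.getKeys(timeStamped=trial_clock):
        if key_name == 'space':
            tap_time_auditory = key_time
    core.wait(0.001, hogCPUperiod=0)  # brief sleep so the loop doesn't needlessly hog the CPU

That way a press part-way through the window gets a time stamp within a millisecond or two of when it occurred, rather than the time of the single check at the end of the window.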

It's hard to make recommendations here without more detail about what you actually want to achieve, but regardless, the event module is now being superseded by a better way of checking the keyboard, which does retrieve the time the key was actually pressed, even if that was quite some time before you issue the call.

See documentation on the new Keyboard class here:

https://www.psychopy.org/api/hardware/keyboard.html
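
In rough outline (just an untested sketch, assuming a PsychoPy 3 release that ships psychopy.hardware.keyboard), usage looks something like this:

from psychopy import core
from psychopy.hardware import keyboard

kb = keyboard.Keyboard()
kb.clock.reset()   # rt values below are relative to this reset
kb.clearEvents()

core.wait(1.2)     # e.g. a whole 1200 ms trial can pass without any polling

# times reflect when each key actually went down, not when getKeys() was called:
for key in kb.getKeys(keyList=['space'], waitRelease=False):
    print(key.name, key.rt, key.tDown)

Note that the asynchronous, hardware-time-stamped behaviour relies on the psychtoolbox package being installed; without it, the class falls back to the old event-style checking.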

Hi Michael,

Thanks for your reply and for the loop-style recommendation. Indeed, I am quite new to programming and happy about any feedback :slight_smile:

What I want my code to do is (I thought) quite simple: every trial lasts 1200 ms. In the middle of each trial (after 600 ms) I want to present an auditory stimulus with a duration of 100 ms. Throughout each trial I want to access a time-stamped keypress in order to calculate stimulus-keypress intervals (keypress time relative to stimulus onset) later on. The keypress can happen before, during or after the stimulus onset. As I use core.wait() to temporally structure my stimulus presentation, I included the additional argument “hogCPUperiod” to also capture keypresses during the wait periods (my source was this: https://www.psychopy.org/api/core.html --> "If you want to obtain key-presses during the wait, be sure to use pyglet and to hogCPU for the entire time, and then call psychopy.event.getKeys() after calling wait()").

My data now makes sense, given that, as you mentioned, event.getKeys() is an instantaneous check for keypresses. As we use PsychoPy 2 (version 1.90.3) in our group, I cannot use the new Keyboard class there, but I will try to install PsychoPy 3 on my laptop and rewrite the code, because the statement in the Keyboard documentation that “the polling is performed and timestamped asynchronously with the main thread so that times relate to when the key was pressed, not when the call was made” seems to be exactly what I need. Thanks for this hint!

If you have other ideas for how to modify the code in PsychoPy 2, I am happy to hear recommendations.

Cheers,
Carrie

Hi Michael,

I just wanted to say thanks again: I reimplemented my code in PsychoPy 3 and now use the new Keyboard class to get time-stamped keypresses, and it seems to work fine. Thanks for that hint!

Best,
Carrie

Glad to hear it Carrie.

You should also keep an eye out for the imminent release of PsychoPy 3.2. This will include an update to our audio code. As with the new Keyboard class, this is written by Mario Kleiner, who has ported both pieces of code from the PsychToolbox project, which has very high performance in this regard.

With the new audio code, you can be much more confident that the sounds actually begin and end playing exactly when you request them to (at the moment, the onset of sounds can be a little variable, depending on what sound libraries are chosen).
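
Once 3.2 is out, switching should largely be a matter of selecting the new backend before psychopy.sound gets imported. Very roughly (a sketch only; the preference names below are what is planned for 3.2 and may still change, and in current versions the audioLib setting lives under prefs.general rather than prefs.hardware):

from psychopy import prefs
prefs.hardware['audioLib'] = ['PTB']      # request the new Psychtoolbox-based audio backend
prefs.hardware['audioLatencyMode'] = 3    # trade robustness for low latency (planned 3.2 setting)

from psychopy import sound, core          # import sound only after setting the preferences

beep = sound.Sound(value='c', secs=0.1, octave=4)
beep.play()     # with the PTB backend, onset should track this call much more closely
core.wait(0.1)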

Again, very worthwhile advice, thanks! Just yesterday I noticed a shift in my reaction time data of around +100 ms compared to my expected values. I log the time of stimulus onset directly after calling sound.play(), and I assume the shift is due to the sound latencies of the pygame backend, just as you mentioned. I will try out pyo and await the 3.2 release with the Psychtoolbox sound module!
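
For the pyo test I'm only planning to change the backend preference at the top of my script (a sketch; on my PsychoPy 3 install the setting still sits under prefs.general):

from psychopy import prefs
prefs.general['audioLib'] = ['pyo', 'pygame']   # prefer pyo, fall back to pygame if it is unavailable

from psychopy import sound   # import sound only after setting the preference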

Thanks a lot!