Using MovieStim3 stimuli messes with keyboard input on Windows

Hi,

I’m developing an experiment using an extension package made by @yh-luo for eyetracking experiments with infants. I’ve found it to be great so far and I can recommend it to anyone interested.

Anyway, while developing I noticed an issue on Windows: if you use MovieStim3 stimulus components, keyboard input may not reach the running experiment as expected. The experiment might not respond to input at all, or input might be sent both to the experiment and to whatever other window (e.g. the code editor) was already open. Here’s an example snippet that reproduces the problem.

from psychopy import visual, core
from psychopy.hardware import keyboard

# Fullscreen pyglet window, as generated by Builder
win = visual.Window(
    size=(1024, 768), fullscr=True, screen=0,
    winType='pyglet', allowGUI=False, allowStencil=False,
    monitor='testMonitor', color=[0, 0, 0], colorSpace='rgb',
    blendMode='avg', useFBO=True,
    units='height')

key_resp = keyboard.Keyboard()
text = visual.TextStim(win=win, name='text',
    text='Waiting for space signal',
    font='Open Sans',
    pos=(0, 0), height=0.1, wrapWidth=None, ori=0.0,
    color='white', colorSpace='rgb', opacity=None,
    languageStyle='LTR',
    depth=-1.0)

# Merely creating the MovieStim3 instance triggers the focus bug;
# the movie is never drawn below
movie = visual.MovieStim3(
    win,
    "example-movie-file.mp4",
    pos=(0, 0),
    size=win.size
)

# Wait for a space press that, due to the bug, may never arrive
continueRoutine = True
while continueRoutine:
    keys = key_resp.getKeys(keyList=['space'], waitRelease=False)
    continueRoutine = not bool(keys)
    text.draw()
    win.flip()

win.close()
core.quit()

Note that it doesn’t matter whether one actually attempts to show the movie; just creating the MovieStim3 instance is enough to produce the bug, at least on my setup.

A quick fix for at least some of the issues is to add from moviepy.config import get_setting at the top of one’s script. This ensures that Windows ‘focuses’ on the experiment as expected, again at least on my setup.
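Concretely, that just means putting the import at the very top of the script, before the PsychoPy imports (the rest of the snippet above stays unchanged):

# Workaround: importing from moviepy.config appears to make the experiment
# window receive keyboard focus as expected on Windows (at least on my setup)
from moviepy.config import get_setting

from psychopy import visual, core
from psychopy.hardware import keyboard

# ... rest of the script unchanged ...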

Anyone interested can read in more detail about the issue here:

I’m using Windows 10 Pro, version 20H2, OS build 19042.928. But as can be read in the issue above, it seems like this problem has been around for a while.

As it’s unclear whether the problems stem from PsychoPy’s interaction with moviepy or from moviepy itself, I won’t open an issue on GitHub. I just wanted to make this information more accessible to others struggling with it, and to increase the odds that someone more knowledgeable finds a proper solution.

Interesting. PyHab gets around this problem by having separate stimulus presentation and experimenter windows, and the keyboard focus lands on the experimenter window because it is opened after the stimulus presentation window.
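Roughly this idea (just a sketch of the window ordering, not PyHab’s actual code):

from psychopy import visual

# The stimulus presentation window is opened first...
stim_win = visual.Window(size=(1024, 768), fullscr=True, screen=1,
                         winType='pyglet', units='height')
# ...and the experimenter window second, so keyboard focus lands on it
exp_win = visual.Window(size=(800, 600), fullscr=False, screen=0,
                        winType='pyglet', units='height')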

Have you tried VlcMovieStim?

I hadn’t heard of PyHab before; it looks like it takes quite a different approach from the more code-based psychopy_tobii_infant package. I should mention that I only use the latter for eye tracker calibration - for the experiment itself, I rely on PsychoPy’s own Builder, with plenty of code snippets, and PsychoPy’s iohub module. PyHab seems to have a lot of material one would need to familiarize oneself with, but I might suggest it to colleagues who aren’t familiar with coding.

As for VlcMovieStim, I didn’t know it existed. I tried simply replacing the MovieStim3 instantiation with VlcMovieStim, roughly as in the sketch below.
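The replacement looked something like this (same placeholder file name as in the snippet above):

# Swap MovieStim3 for VlcMovieStim; everything else stays the same
movie = visual.VlcMovieStim(
    win,
    "example-movie-file.mp4",
    pos=(0, 0),
    size=win.size
)

But when I tried running the code, I got an error: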

@vlc.CallbackDecorators.VideoDisplayCb
AttributeError: module 'vlc' has no attribute 'CallbackDecorators'

Maybe there’s something wrong with my environment variables, like PYTHONPATH, or I have an outdated PsychoPy version, or maybe parts of what pip install python-vlc adds aren’t included in the Standalone installation. I’m also a bit wary of using VlcMovieStim since, at least in its demo, it’s described as still being in beta. So even if it’s not pretty, I’ll use the import workaround for MovieStim3 for the time being :slight_smile:

@mdc has been working on a big upgrade to it (and to MovieStim3) that will hopefully go out with the next release and enable better video playback and solve some of those quirks.

The other notable thing about how PyHab handles keyboards is that it bypasses PsychoPy’s normal keyboard handling in favor of directly interfacing with the keyboard handler in pyglet (which controls the window), so it’s possible the specific implementation of the keyboard object in Builder is also part of the problem. There are alternatives to that too, but they require doing more things in the Coder view anyways.
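For reference, polling pyglet’s keyboard handler directly looks roughly like this (a sketch of the general technique, not PyHab’s exact code; it assumes a pyglet-backed PsychoPy window called win):

import pyglet.window.key as pygkey

# Attach pyglet's own key-state handler to the underlying pyglet window
key_state = pygkey.KeyStateHandler()
win.winHandle.push_handlers(key_state)

# Then poll it directly inside the trial loop
if key_state[pygkey.SPACE]:
    pass  # react to the space bar here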

W/r/t PyHab, I did at one point get it to play nice with an older Tobii package, and in that case the main challenge was that I wanted it to effectively replace the keyboard with the eye-tracking data in real time (i.e., make it so gaze on/gaze off was determined by the eye-tracker, not the human). If you’re just using the eye-tracker for calibration and then manually coding anyways, it might be very easy to get the two to work together. Let me know if you’re interested in trying that.


Great, it would be nice if using movie stimuli became more straightforward. There are always a lot of options with Python, and it’s hard to tell which dependencies are best, so I appreciate the work being spent on making these decisions.

W/r/t PyHab, I did at one point get it to play nice with an older Tobii package, and in that case the main challenge was that I wanted it to effectively replace the keyboard with the eye-tracking data in real time (i.e., make it so gaze on/gaze off was determined by the eye-tracker, not the human). If you’re just using the eye-tracker for calibration and then manually coding anyways, it might be very easy to get the two to work together. Let me know if you’re interested in trying that.

I’m not entirely sure what you mean. For checking gaze position in PsychoPy I’m just using the iohub API; there’s a function getPosition that gets the last recorded gaze coordinates. I’m guessing you already know about it, but anyway. I’m using that to check how long the participant holds their gaze at e.g. an attention grabber, and it works well. Of course, I’m restricted to 60 ‘gaze checks’ per second since getPosition runs synchronously with the rest of the experiment code, but for our project that’s enough. Maybe PyHab would make this a bit more convenient, but again, it would require some time for me to acquaint myself with PyHab’s API and/or code base, and I don’t want to add a dependency this late in the project. But maybe if I work on a similar project in the future, I’ll see :slight_smile:
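Just to illustrate, the gaze-hold check is roughly this (simplified; tracker is the iohub eye tracker device, attention_grabber and win are defined elsewhere, and the region radius and hold duration are made-up values):

from psychopy import core

GAZE_REGION_RADIUS = 0.1  # hypothetical radius around a centered grabber, in 'height' units
REQUIRED_HOLD_SECS = 1.0  # hypothetical required hold time

hold_clock = core.Clock()
gaze_held = False
while not gaze_held:
    gpos = tracker.getPosition()  # last recorded gaze coordinates, or None
    if gpos is not None and (gpos[0] ** 2 + gpos[1] ** 2) ** 0.5 < GAZE_REGION_RADIUS:
        # gaze is on the (centered) attention grabber; check how long it has been held
        if hold_clock.getTime() >= REQUIRED_HOLD_SECS:
            gaze_held = True
    else:
        hold_clock.reset()  # gaze left the region, restart the timer
    attention_grabber.draw()
    win.flip()  # one gaze check per frame, i.e. ~60 per second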

Basically, the project that bridged PyHab and an eye tracker barely needed keyboard input at all, because once launched and calibrated, the whole experiment was controlled by the gaze data feeding back in. So if the root problem is keyboard input, I’m assuming you’re using keyboard input to advance trials or record some data or something, which is where PyHab might be easier - but if not, no worries.

Oh, now I get it. It’s actually a minor issue in the particular experiment I’m working on - the only time a keyboard press is used is for initializing calibration. I could use a gaze-contingent approach instead, but it’s best to leave this step ‘manual’ since the experimenter needs to make a subjective appraisal of whether the infant seems ready or not. But thanks for the kind offer to help anyway.