Frame drop causing Audio/Video lag with moviestim3

Hi all,

I have a problem with a task that presents video stimuli. From what I understand, the problem is this: dropped frames are causing the remaining frames to last longer than they should, which creates a lag between audio and video. That is a problem for me because timing is critical here.

The error message is as follows:

pyo version 0.8.7 (uses single precision)
*
*
This trial is the sentence 3
*
*
----This is BLOCK 1----
duration=2.97s
1.7332 ERROR avbin.dll failed to load. Try importing psychopy.visual
as the first library (before anything that uses scipy)
and make sure that avbin is installed.
Overall, 1 frames were dropped.
Rating for comprehension was 1 out of 5
Reaction time was 1.43944441637 sec
Trigger sent from COM7 port
----This is BLOCK 2----
Overall, 3 frames were dropped.
Rating for comprehension was 3 out of 5
Reaction time was 0.399489936086 sec
Trigger sent from COM7 port
----This is BLOCK 3----
duration=2.97s
Overall, 5 frames were dropped.
Rating for comprehension was 1 out of 5
Reaction time was 37.4995602761 sec
Trigger sent from COM7 port
*
*
This trial is the sentence 20
*
*
----This is BLOCK 1----
duration=2.90s
Overall, 7 frames were dropped.
32.3580 WARNING t of last frame was 20766.69ms (=1/0)
36.4913 WARNING t of last frame was 520.04ms (=1/1)
38.8914 WARNING t of last frame was 980.02ms (=1/1)
43.0114 WARNING t of last frame was 519.97ms (=1/1)
44.1314 WARNING Multiple dropped frames have occurred - I’ll stop bothering you about them!

In this example I force-quit the script when the second run started.

I have been looking for a solution on this forum and others, but without success, and I don't think I can figure it out on my own.

I started using PsychoPy (and Python, as a matter of fact) only a few weeks ago, so I apologize in advance if I sometimes state the obvious or otherwise miss the point.

I have been trying to program a task with video stimuli and the idea goes like this:

  1. A video is presented without audio, followed by a rating of comprehension of the video (BLOCK 1)
  2. The same video is presented, this time with audio, again followed by a rating (BLOCK 2)
  3. The same video is presented without audio again, followed by the rating (BLOCK 3)

The videos are a few seconds long, in mp4 format, 1-4 MB each, at 60 fps (the presentation screen runs at 60 Hz); there are 20 of them and they are selected at random.

The stimulus presentation computer is a laptop running Windows 7 with two graphics cards (Intel HD 400 and NVIDIA NVS 5400M). I have tested the task on another laptop with an Intel HD 4000 graphics card and the problem is similar.

I have checked for the avbin files: avbin64.dll is in C:\Windows\System32 and avbin.dll is in C:\Windows\SysWOW64, as mentioned in another post.

I have tried using MovieStim and MovieStim2, but they just crashed when the first video started.

I am using PsychoPy v.1.86.6

All this said, below you will find the code I wrote. I am really confused about what is going wrong, and I would very much appreciate your input. Thank you in advance!

#!/usr/bin/env python
# -*- coding: utf-8 -*-

#
#Load libraries 
#

from __future__ import division, print_function
from psychopy import visual, core, event, data, gui, logging, sound
from psychopy.constants import FINISHED
import time, os, serial


#
#SET UP
#

#Set path
videopath = r'C:\Users\Stimulus\Desktop\Merkel Videos'
if not os.path.exists(videopath):
    raise RuntimeError("File could not be found:" + videopath)

# Store info about the experiment session
expName = u'PopOutCodedVersion' 
expInfo = {u'session': u'001', u'participant': u''}
dlg = gui.DlgFromDict(dictionary=expInfo, title=expName)
if dlg.OK == False:
    core.quit() 
expInfo['date'] = data.getDateStr()  
expInfo['expName'] = expName

#ExperimentHandler for data saving
thisExp = data.ExperimentHandler(name=expName, version='',
    extraInfo=expInfo, runtimeInfo=None,
    originPath=None,
    savePickle=True, saveWideText=True,
    dataFileName=expName)
    

# save a log file for detail verbose info
logFile = logging.LogFile(expName+'.log', level=logging.EXP)
logging.console.setLevel(logging.WARNING)

#Define presentation window on screen
win = visual.Window(size=(1024, 768),fullscr=True,allowGUI=False, winType='pyglet',
             screen=0)
win.recordFrameIntervals=True

#Define font
sans = ['Helvetica']

#Define Serial Port
ser = serial.Serial("COM7",9600)


#
#Initialization
#

#Initialize instructions
longSentence = visual.TextStim(win,
    text = u"""
Sie werden verschiedene Videos mit und ohne Ton präsentiert.
    
Nach jeder Videopräsentation werden Sie gebeten, Ihr Verständnis für das, was im Video gesagt wurde, zu bewerten.

Drücken Sie die Leertaste, um zu starten, wenn Sie bereit sind.""", font=sans, wrapWidth=1,
    units='norm', height=0.1, color='black',
    pos=[0,0])
trialClock = core.Clock()
t = lastFPSupdate = 0
    
    
#Initialize Video Stim
videoClock = core.Clock()


#Initialize Feedback
FeedbackClock = core.Clock()

longSentence2 = visual.TextStim(win,
    name='FeedbackResponse',
    text = u"""Bitte bewerte dein Verständnis dieses Videos von 1 bis 5""", 
    font=sans, 
    wrapWidth=1,
    units='norm', 
    height=0.1, 
    color='black',
    pos=[0,0])
    
#Initialize END

longSentence3 = visual.TextStim(win,
    name='End Message',
    text = u"""Danke fĂĽr Ihre Teilnahme. Die Aufgabe ist vorbei.""", 
    font=sans, 
    wrapWidth=1,
    units='norm', 
    height=0.1, 
    color='black',
    pos=[0,0])
    
    
#
#START EXPERIMENT
#
    
# Start Instructions and display until space key is pressed
while not event.getKeys(keyList=['space']):
    #t = trialClock.getTime()
    longSentence.draw()
    
    #refresh the screen 
    win.flip()

    # check for quit (the Esc key)
    if event.getKeys(keyList=["escape"]):
        core.quit()

for frame in range (30):
    win.flip()

#
#BEGIN LOOP 
#

# set up handler to look after randomisation of conditions
Trials = data.TrialHandler(nReps=1, method='random', 
    extraInfo=expInfo, originPath=-1,
    trialList=data.importConditions('PopOut_DE.xlsx', selection='0:20'),
    seed=None, name='Trials')
thisExp.addLoop(Trials)  # add the loop to the experiment
thisTrial = Trials.trialList[0]
  

for thisTrial in Trials:
    print(' *\n *\n This trial is the sentence %s \n * \n * ' %(thisTrial.Sentence))
    
    #
    #
    #BLOCK 1
    #VIDEO WITHOUT SOUND
    #
    #
    
    print ('----This is BLOCK 1----')
    
    #
    #VIDEO 1
    #
    
    trialClock.reset()
    
    #create video stimulus
    mov = visual.MovieStim3(win, 
        filename=thisTrial.ID,
        noAudio = True, 
        size=[600,400],
        pos=[0, 100],
        flipVert=False, 
        flipHoriz=False,
        loop=False)
    
    
    print ('duration=%.2fs' %(mov.duration))
    
    #Sound trigger with 500ms wait
    beep=sound.Sound('tone128hz200ms.wav')
    beep.play()
    core.wait(0.5)
    
    
    #Start the movie stim by preparing it to play
    #shouldflip = mov.play()
    while mov.status != visual.FINISHED:
        
        
        #TRIGGER
        #ser.write('1')
        
        
        #Play movie
        mov.draw()
        win.flip()
        
        
        # check for quit 
        #if event.getKeys(keyList=["escape"]):
        #    core.quit()
    
    #End sound trigger after 500ms
    core.wait(0.5)
    beep.play()
    
    
    
    win.refreshThreshold = 1/60 + 0.004  # flag frames more than 4 ms over the 60 Hz period as dropped
    logging.console.setLevel(logging.WARNING)
    print('Overall, %i frames were dropped.' % win.nDroppedFrames)
    
    #
    #FEEDBACK 1
    #
    
    #Prepare Feedback
    t = 0
    FeedbackClock.reset()
    frameN = -1
    
    continueRoutine = True
    
    # Start Feedback and display until answer key is pressed
    event.clearEvents(eventType='keyboard')
    while continueRoutine:
        theseKeys=event.getKeys(keyList=['1', '2', '3', '4', '5'])
        t = trialClock.getTime()
        longSentence2.draw()
        
        #refresh the screen 
        win.flip()
    
        # check for quit (the Esc key)
        if event.getKeys(keyList=["escape"]):
            core.quit()
    
        if len(theseKeys) > 0:  # at least one key was pressed
            Feedbackkeys = theseKeys[-1]  # just the last key pressed
            FeedbackRT = FeedbackClock.getTime()
            print('Rating for comprehension was %s out of 5' %(Feedbackkeys))
            print('Reaction time was %s sec' %(FeedbackRT))
            print('Trigger sent from %s port' %(ser.name))
            # a response ends the routine
            continueRoutine = False
    
    # add responses to handler
    Trials.addData('Feedbackkeys', Feedbackkeys)
    Trials.addData('FeedbackRT', FeedbackRT)
    thisExp.nextEntry()
    
    
    
    #
    #
    #BLOCK 2
    #VIDEO WITH SOUND
    #
    #
    
    print ('----This is BLOCK 2----')
    
    #
    #VIDEO 2
    #
    
    #create video stimulus
    mov2 = visual.MovieStim3(win, 
        filename=thisTrial.ID, 
        size=[600,400],
        pos=[0, 100],
        noAudio=False,
        flipVert=False, 
        flipHoriz=False,
        loop=False)
    
    #Sound trigger with 500ms wait
    beep.play()
    core.wait(0.5)
    
    # Start the movie stim by preparing it to play
    while mov2.status != visual.FINISHED:
        
        #TRIGGER
        #ser.write('2')
        
        #Play Movie
        mov2.draw()
        win.flip()
    
        # check for quit 
        #if event.getKeys(keyList=["escape"]):
        #    core.quit()
    
    #End sound trigger after 500ms
    core.wait(0.5)
    beep.play()
    
    
    win.refreshThreshold = 1/60 + 0.004  # flag frames more than 4 ms over the 60 Hz period as dropped
    logging.console.setLevel(logging.WARNING)
    print('Overall, %i frames were dropped.' % win.nDroppedFrames)
    
    
    #
    #FEEDBACK 2
    #
    
    #Prepare Feedback
    t = 0
    FeedbackClock.reset()
    frameN = -1
    
    continueRoutine = True
    
    # Start Feedback and display until answer key is pressed
    event.clearEvents(eventType='keyboard')
    while continueRoutine:
        theseKeys=event.getKeys(keyList=['1', '2', '3', '4', '5'])
        t = trialClock.getTime()
        longSentence2.draw()
        
        #refresh the screen 
        win.flip()
    
        # check for quit (the Esc key)
        if event.getKeys(keyList=["escape"]):
            core.quit()
    
        if len(theseKeys) > 0:  # at least one key was pressed
            Feedbackkeys2 = theseKeys[-1]  # just the last key pressed
            FeedbackRT2 = FeedbackClock.getTime()
            print('Rating for comprehension was %s out of 5' %(Feedbackkeys2))
            print('Reaction time was %s sec' %(FeedbackRT2))
            print('Trigger sent from %s port' %(ser.name))
            # a response ends the routine
            continueRoutine = False
    
    # add responses to handler

    Trials.addData('Feedbackkeys2', Feedbackkeys2)
    Trials.addData('FeedbackRT2', FeedbackRT2)
    thisExp.nextEntry()
    
    #
    #
    #BLOCK 3
    #VIDEO WITHOUT SOUND
    #
    #
    
    print ('----This is BLOCK 3----')
    
    #
    #VIDEO 3
    #
    
    #create video stimulus
    mov3 = visual.MovieStim3(win, 
        filename=thisTrial.ID,
        noAudio = True, 
        size=[600,400],
        pos=[0, 100],
        flipVert=False, 
        flipHoriz=False,
        loop=False)
    
    
    print ('duration=%.2fs' %(mov3.duration))
    
    
    #Sound trigger with 500ms wait
    beep.play()
    core.wait(0.5)
    
    
    # Start the movie stim by preparing it to play
    while mov3.status != visual.FINISHED:
        
        #TRIGGER
        #ser.write('3')
        
        #Play movie
        mov3.draw()
        win.flip()
    
        # check for quit 
        #if event.getKeys(keyList=["escape"]):
        #    core.quit()
    
    #End sound trigger after 500ms
    core.wait(0.5)
    beep.play()
    
    
    win.refreshThreshold = 1/60 + 0.004  # flag frames more than 4 ms over the 60 Hz period as dropped
    logging.console.setLevel(logging.WARNING)
    print('Overall, %i frames were dropped.' % win.nDroppedFrames)
    
    #
    #FEEDBACK 3
    #
    
    #Prepare Feedback
    t = 0
    FeedbackClock.reset()
    frameN = -1
    
    continueRoutine = True
    
    # Start Feedback and display until answer key is pressed
    event.clearEvents(eventType='keyboard')
    while continueRoutine:
        theseKeys=event.getKeys(keyList=['1', '2', '3', '4', '5'])
        t = trialClock.getTime()
        longSentence2.draw()
        
        #refresh the screen 
        win.flip()
    
        # check for quit (the Esc key)
        if event.getKeys(keyList=["escape"]):
            core.quit()
    
        if len(theseKeys) > 0:  # at least one key was pressed
            Feedbackkeys3 = theseKeys[-1]  # just the last key pressed
            FeedbackRT3 = FeedbackClock.getTime()  # use the feedback clock, as in Blocks 1 and 2
            print('Rating for comprehension was %s out of 5' %(Feedbackkeys3))
            print('Reaction time was %s sec' %(FeedbackRT3))
            print('Trigger sent from %s port' %(ser.name))
            # a response ends the routine
            continueRoutine = False
            
    # add responses to handler

    Trials.addData('Feedbackkeys3', Feedbackkeys3)
    Trials.addData('FeedbackRT3', FeedbackRT3)
    thisExp.nextEntry()

#print(Trials.data)



#Save an xlsx file
#Trials.saveAsExcel(fileName='OutputData.xlsx',
#                   sheetName='rawData',
#                   stimOut=Trials.trialList,
#                   dataOut=['FeedbackRT3_raw'])
#Trials.saveAsExcel(filename + '.xlsx', sheetName='Trials',
#    stimOut=params,
#    dataOut=['n','all_mean','all_std', 'all_raw'])
#['FeedbackRT_raw','FeedbackKeys_raw','FeedbackRT2_raw','FeedbackKeys2_raw','FeedbackRT3_raw','FeedbackKeys3_raw']


# Start End routine and display until any key is pressed
Endroutine=True
ser.close()
while Endroutine:
    t = trialClock.getTime()
    longSentence3.draw()
    
    #refresh the screen 
    win.flip()
    
    # check for quit (the Esc key)
    if event.getKeys():
        win.close()
        core.quit()

First thing: The avbin error recommends importing psychopy.visual first, so try swapping the order of the first two import lines:

from psychopy import visual, core, event, data, gui,logging, sound
from __future__ import division, print_function

That might get rid of that error at least, but if the video and audio are playing at all, that probably won’t fix the other issue.

As for the lag, I'm a little unclear on what you mean by "remaining frames to last longer". Do the frames just take longer in general, or do specific frames, on specific videos or at specific points in a video, take longer?

Hi, thanks for your reply.

I already tried putting psychopy.visual first, but then I get an error saying the __future__ import has to come first.

Regarding the frames, it is unclear to me as well. The video runs as if it were in slow motion while the audio is fine, and in the log (at the top of my post) there is a warning saying frames last >500 ms (when they should last 16.7 ms). I concluded that somehow MovieStim3 or the graphics card was not refreshing the frames at the right rate, but that goes beyond my understanding of PsychoPy.

It's almost certainly MovieStim3. Processor and RAM seem to affect performance more than the graphics card when presenting movie stimuli. Does the lag happen on the videos with no audio, or only the ones with audio?

If it's only happening on the videos with audio, you could try changing which audio library you are using. The default is generally pretty inefficient. You would replace your import lines with this:

from psychopy import visual, core, event, data, gui, logging, prefs
prefs.general['audioLib'] = ['pyo']
prefs.general['audioDevice'] = ['Built-in Output']
from psychopy import sound

That will switch you over to the Pyo sound library, which is a bit faster.

If you’re getting slowdown even when the videos have no audio, it’s probably the movie files themselves. How big are the files?

@jonathan.kominsky sorry for the late reply, I was out of the office, but thanks for the advice.

The lag happens in both the audio and no-audio conditions, so I think the problem really is video processing.

The size of the videos does seem to matter. Once I reduced the file size there seemed to be less of a lag, although the videos went from 1-3 MB down to 100-300 KB and frames are still being dropped. I can't help but think it is not normal for a 100 KB video to be too large to process, especially since VLC plays them fine.

N.B.: I noticed the number of dropped frames tends to increase as the experiment goes on.

Sorry to bother you @jon, but maybe you could tell us more about how MovieStim3 works and what can possibly affect the display of video frames? I think that would be very helpful.

I can answer this in part. I've been digging into MovieStim3 for my own projects. The long and short of it is that it is built on a third-party library called moviepy, and many of the issues seem to stem from moviepy rather than PsychoPy. When there is sound, there are some problematic interactions between PsychoPy's sound handling and moviepy, but if the lag happens when noAudio=True, that's not the problem.

For video, MovieStim3 seems to stream frames from the movie file, decoding them with ffmpeg and the other codecs from VLC or avbin. I have sometimes run into issues that seem to sit at the level of how moviepy interacts with those, but again mostly with audio. However, I run on a Mac and don't need avbin.

I have two guesses about things you could try, but I'm not sure either will help. One is to encode your videos in a different format and see if that makes a difference; it might play more nicely with h.264 or AVI encoding. It might be something about how the mp4 is being decoded, but I would be a little surprised. The other guess, and I don't think this should matter but why not try it, is to call .play() on the movie stimulus just before the draw loop. It should be playing by default, and as far as I know if it's progressing through frames at all it must be playing, but I've had some odd interactions between draw and play/pause commands in the past.
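In case it helps, here is a minimal sketch of how that second idea would slot into the Block 1 loop from your posted script (same variable names as the script, nothing else changed):

# Minimal sketch: explicitly start playback before the draw loop,
# then draw one movie frame per screen refresh until the movie finishes.
mov = visual.MovieStim3(win, filename=thisTrial.ID, noAudio=True,
                        size=[600, 400], pos=[0, 100], loop=False)
mov.play()  # explicit start, rather than relying on the implicit autoplay
while mov.status != visual.FINISHED:
    mov.draw()   # fetch and draw the current movie frame
    win.flip()   # present it on the next screen refresh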

@jon might have better ideas but that’s as much as I’ve been able to work out on my own issues.

That's about as much as I know too. I definitely think that avbin will have worse performance even if you do get it to work, but I don't know anything more about what could be causing the lags.

Ok, thanks for the insight.

Changing the format did not change anything, unfortunately.

Adding mov.play() before the loop seems to reduce the number of frames dropped, but does not resolve the problem.

Just wondering: the way MovieStim works, is it buffering the videos before playing them? If not, could we do that? If it is, could that be causing an overload, so that the buffer needs to be increased or refreshed?

No, MovieStim loads each frame as it needs it, on every screen refresh. I haven't tried to do any buffering to memory with it.
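If anyone wanted to experiment with buffering, here is a rough sketch of one way it could be done by hand (just an idea, not something MovieStim3 supports itself): decode every frame up front with moviepy, the library MovieStim3 is built on, then show them one per refresh with an ImageStim. It only makes sense for short, small clips, since all frames are held in memory, and it assumes the video frame rate matches the screen refresh rate (60 fps on a 60 Hz screen, as here).

# Rough sketch of manual frame buffering (an idea, not a built-in feature):
# decode all frames into memory first, then present one per screen refresh.
import numpy as np
from moviepy.editor import VideoFileClip
from psychopy import visual

clip = VideoFileClip(r'C:\path\to\video.mp4')   # hypothetical path
# Convert each uint8 RGB frame (0-255) to the -1..1 float range ImageStim expects
frames = [frame.astype('float32') / 127.5 - 1.0 for frame in clip.iter_frames()]

win = visual.Window(size=(1024, 768), fullscr=True)
stim = visual.ImageStim(win, size=[600, 400], units='pix')
for frame in frames:
    stim.image = np.flipud(frame)  # numpy textures are drawn bottom-up, so flip vertically
    stim.draw()
    win.flip()                     # one buffered frame per screen refresh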

Out of curiosity, are you playing the movies at their native resolution? That is, are you manually defining the size parameter when you call MovieStim?

@jonathan.kominsky No, I'm displaying it in a smaller window (600x400). Does this make a difference?

I don’t think the rescaling at render time will itself make a difference to rendering speed, but it does mean that you could resize your stimuli in advance to be that size and THAT will make a big difference.

Note that the key to speed here is not the size of the file in MB (that's a matter of how much the codec is compressing your images). The key is the size of the image in pixels, because that determines how many values PsychoPy has to pass to the graphics card on every frame. For example:

  • an RGB video in full HD is 3x1920x1080 = 6,220,800 values to manipulate and upload each frame
  • an RGB video at 600x400 is only 720,000 values so it’s a lot less work for PsychoPy!
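If it helps, the offline resizing could be done with moviepy (which PsychoPy already installs for MovieStim3) along these lines; the file names here are just placeholders, and you may want to pick a size that keeps the original aspect ratio:

# Sketch: pre-resize a clip offline so PsychoPy only has to upload
# 600x400 frames at runtime (file names are placeholders).
from moviepy.editor import VideoFileClip

clip = VideoFileClip('original.mp4')            # e.g. a full-HD source
small = clip.resize(newsize=(600, 400))         # downscale once, in advance
small.write_videofile('resized_600x400.mp4')    # re-encode the smaller clip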

Ok, great advice, I’ll try and do that. Thank you!

Hi,
I had the same problem, and after reading this I just changed the video frame rate (it was 30 fps and I changed it to 15 fps). I used the WMV file format with the moviepy option. It solved the audio lag issue. I am using PsychoPy v3.1.2.

I changed the frame rate using this website: https://video.online-convert.com/convert-to-wmv
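The same kind of re-encode could presumably also be done locally with moviepy rather than the website (the fps value and file names are only examples):

# Sketch: re-encode a clip at a lower frame rate with moviepy
# (fps value and file names are only examples).
from moviepy.editor import VideoFileClip

clip = VideoFileClip('original.mp4')
clip.write_videofile('original_15fps.mp4', fps=15)   # write out at 15 fps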

1 Like