Audiovisual rendering issue for video on 6th routine counter run: brief flash of video, no audio

Hi all, our lab is running a visual EEG experiment that shows participants a repeating image followed by a jittered ITI. No issues there. However, on every 6th run of the routine counter (counting from the first routine) we intend to present a video, a social attention getter, sampled at random from one of four candidate videos (each 6 s or 7 s long, depending on the video). Unfortunately, the video does not render properly: depending on the code tested, it either does not appear at all or shows briefly/flickers, and there is no audio in either case. Note that the same videos, loaded from a list (rather than a dict), play fine in an earlier and much simpler routine at the beginning of the experiment, so I don't think the video files themselves are the problem. Along those lines, I experimented with .seek(0), in case the videos were preloaded for the earlier routine and I could 'hijack' and reset one of them (although that would ultimately use one video rather than four), but that didn't work either.

I have tried:

  • calling .stop() when the video ends, to try to nudge control back to the main routine
  • playing with .isPlaying
  • MovieStim vs MovieStim3 versions
  • code- versus component-based solutions, with conditions based on the counter
  • looking into whether it's something to do with 'win'; I don't define this explicitly, but rely on the window Builder creates implicitly. I tried to find docs on how to set this up myself (i.e. how it picks up the screen info from the Experiment Settings, and where calibrations are stored), but didn't have any luck.

I would welcome some input on this; it is probably something trivial, but it has been stumping me for some time! The code for the relevant routine is given below, split into Begin Routine, Each Frame, and End Routine:

#### BEGIN ROUTINE ####
att_start_vids_dict = {
    "videos/social1.mp4": 7,
    "videos/social2.mp4": 6,
    "videos/social3.mp4": 7,
    "videos/social4.mp4": 6,
}
perSixITIs_vid_index = np.random.choice([0, 1, 2, 3])  # random index into the four videos
perSixITIs_vid = list(att_start_vids_dict.keys())[perSixITIs_vid_index]  # video path (dict key)
perSixITIs_vid_length = list(att_start_vids_dict.values())[perSixITIs_vid_index]  # duration in s (dict value)
varITI_SAG = visual.MovieStim(win, perSixITIs_vid, size=(0.5, 0.5), units='norm')

if varITIcounter != 1 and varITIcounter % 6 == 0:  # every 6th run, matching the Each Frame check
    print(f"Social attention getter starting, {varITIcounter}")
    myParallelPort2.setData(25)  # stim onset trigger
    #attentionGetterMovie.setMovie(perSixITIs_vid)  # Set the movie file
    #attentionGetterMovie.seek(0)  # Reset the movie to the beginning
    varITI_SAG.play()
    thisExp.addData('TrialType', f'ITI Social AG{perSixITIs_vid_index}')
else:
    jitter = round(random.uniform(0.4, 1), 2)
    thisExp.addData('TrialType', 'ITI BlankScreen')

#### EACH FRAME ####
#if movie.isPlaying:
#    thisExp.addData('TrialType', 'ITI Social AG')

thisExp.addData('Time Since Exp Start', timeSinceExperimentStart.getTime())
if varITIcounter != 1 and varITIcounter % 6 == 0:
    if varITI_SAG.status != FINISHED:
        varITI_SAG.draw()
        # no manual win.flip() here: Builder already flips the window once per
        # frame, and an extra flip makes the movie flicker
        #core.wait(perSixITIs_vid_length)
    else:  # status == FINISHED
        varITI_SAG.stop()
        print("STOP CATCH")

#if not continueRoutine:
#    core.quit()

#### END ROUTINE ####
#if varITI_SAG.isPlaying:
#    varITI_SAG.stop()
#    myParallelPort2.setData(26) # stim offset
#    print("Social attention getter STOPPED")
#varITIcounter += 1 # additive

if varITIcounter != 1 and varITIcounter % 6 == 0:  # only clean up on video trials
    varITI_SAG.stop()
    myParallelPort2.setData(26)  # stim offset trigger
varITIcounter += 1  # advance the routine counter
print("Routine Finished")
continueRoutine = False
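For reference, here is the selection and every-sixth-run logic I am aiming for, stripped of the PsychoPy calls so it can be sanity-checked on its own (pure Python; the function names are just for illustration):

```python
import random

ATT_START_VIDS = {
    "videos/social1.mp4": 7,
    "videos/social2.mp4": 6,
    "videos/social3.mp4": 7,
    "videos/social4.mp4": 6,
}

def pick_attention_getter(vids):
    """Sample one (path, duration) pair from the pool of videos."""
    return random.choice(list(vids.items()))

def is_video_trial(counter):
    """The video should play on every 6th routine run (6, 12, 18, ...)."""
    return counter != 1 and counter % 6 == 0

# e.g. counters 1-5 are blank ITIs, counter 6 plays a video
```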

Thank you!

Ryan

MovieStim seeks are always slow and imprecise. See here: MovieStim (python) seek behavior: ~5 frames to complete

MovieStim3 won’t solve this issue either, unfortunately. It’s just unreliable about seeking in general, and I never found a solution I was happy with.

I did come up with a solution that is at least somewhat reliable, which you can read about here and in the repository that this post links to: Using seek with MovieStim - #2 by jonathan.kominsky

The solution I came up with introduces about 100ms of delay into the process, but it’s a reliable 100ms and it can tell you exactly when the frame you want is actually presented. However it is not trivial to implement and involves stupid tricks like playing the movie behind an occluding image. If you didn’t have audio I would just recommend not using movies at all, but since it sounds like you are actually dealing with similar stimuli to what I was working with, this might be your best bet.
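A minimal sketch of the occluder idea, reduced to the per-frame decision logic (the helper class and names here are hypothetical, not PsychoPy API; the movie-time query in the comment assumes something like MovieStim3's getCurrentFrameTime):

```python
class OccludedSeek:
    """Track whether an occluder should still cover a movie while it
    plays forward to a target time (illustrative helper, not PsychoPy API)."""

    def __init__(self, target_time, tolerance=0.0):
        self.target_time = target_time  # where the 'seek' should land (s)
        self.tolerance = tolerance      # acceptable undershoot (s)
        self.revealed = False

    def should_occlude(self, movie_time):
        """Call once per frame with the movie's current time; returns True
        while the occluder should still be drawn over the movie."""
        if not self.revealed and movie_time >= self.target_time - self.tolerance:
            self.revealed = True        # reveal once, then stay revealed
        return not self.revealed

# In the frame loop this would gate something like:
#   movie.draw()
#   if seek_helper.should_occlude(movie.getCurrentFrameTime()):
#       occluder.draw()   # e.g. a visual.Rect covering the movie
```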
