
Preload video in the background

Hello all.

I am running an fMRI experiment and need to know exactly how long the scan time should be. I coded my experiment, but found that with MovieStim3 it takes a few seconds for each video to load, and the load time is inconsistent across computers, and even between trials. I tried creating a movie list at the beginning to preload all the videos, but it uses a lot of RAM and takes a very long time to finish loading.
I am wondering if there is a way to load the videos in the background while the participant is at rest or making key-press responses. I currently use core.wait(), so that might not work, but I would like to learn about possible options.
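The "load in the background while the participant does something else" idea can be sketched in plain Python with a worker thread. Here `decode_movie` is a hypothetical stand-in for the slow decoding/IO step (it is not a PsychoPy function), and any actual stimulus/OpenGL objects would still need to be created on the main thread:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def decode_movie(path):
    """Hypothetical stand-in for the slow part of loading a movie
    (decoding/file IO). Real OpenGL/stimulus creation should stay
    on the main thread."""
    time.sleep(0.1)  # pretend this is several seconds of decoding
    return {"path": path, "loaded": True}

# kick off loading on a background thread
executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(decode_movie, "movies/movie1.mp4")

# ...meanwhile the main thread keeps running the rest period or
# key-press routine here, instead of blocking in core.wait()...

movie = future.result()  # blocks only if loading hasn't finished yet
executor.shutdown()
```

The key design point is that `future.result()` costs nothing if the rest period was long enough for loading to finish, and otherwise only waits out the remainder.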

Thanks all.

Hi There,

How many movies are you loading? You should be able to add a code component anywhere in your experiment, make a MovieStim instance (or a list of MovieStim instances), and then call them later to play. (I'm referring to code components here, as I assume the main experiment framework is in Builder? In code this shouldn't be an issue.)
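As a rough sketch of that "build the list of movie stims up front" idea: `preload_movies` is a hypothetical helper, and the PsychoPy-specific part is passed in as the `loader` callable (e.g. `lambda p: visual.MovieStim3(win, p)` inside a Builder code component):

```python
import os

def preload_movies(paths, loader):
    """Create every movie stim once, before trials start, so no trial
    pays the load cost. `loader` builds one stimulus from a path, e.g.
    in a Builder code component: lambda p: visual.MovieStim3(win, p).
    """
    # fail early, with all missing files listed, rather than mid-scan
    missing = [p for p in paths if not os.path.exists(p)]
    if missing:
        raise RuntimeError("Movie files not found: %s" % ", ".join(missing))
    return [loader(p) for p in paths]
```

Later routines then just index into the returned list rather than constructing a new stimulus each trial.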

Becca

Hi @Becca, I am hoping to do something similar to preload five ~5-minute videos I will be playing once each in a loop for a task I designed in Builder. Could you direct me to where I could find how to actually code this in a code component? Thanks in advance!

Actually, if you are running this locally you might not even need a code component! Have you tried using a static period (found under Custom) and loading the movie during the static period?
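The timing idea behind a static period can be illustrated in plain Python: do the loading inside a fixed window, then wait out whatever time remains, so the routine's duration stays constant whether loading is fast or slow. In a real experiment `psychopy.core.StaticPeriod` does this for you; `load_during_window` below is only a hypothetical sketch of the mechanism:

```python
import time

def load_during_window(duration, work):
    """Run `work()` (e.g. constructing a movie stim) inside a fixed
    window of `duration` seconds, then sleep for the remainder, so the
    total elapsed time is the same regardless of how long loading took."""
    start = time.perf_counter()
    result = work()
    elapsed = time.perf_counter() - start
    if elapsed > duration:
        # the window was too short for the work; timing is now broken
        raise RuntimeError("Loading overran the window by %.3f s"
                           % (elapsed - duration))
    time.sleep(duration - elapsed)
    return result
```

This is also why a static period only helps if loading reliably fits inside it, which is the problem discussed next for a 5-minute video.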

Hi @Becca, thanks so much for this suggestion! I was able to easily implement this component in my routine. The trouble is, the video I’m showing is about 5 minutes, so I’m afraid the loading period would need to be a minute or so to have any effect, which is time I can’t really afford. I tried making the loading period 5 seconds and it had no effect. I’ve tried changing the video file size, file type, and various PsychoPy settings, and nothing has gotten me further than about 15 seconds in before the audio starts to sound glitchy and video frames begin to drop. Any other ideas you have I would be grateful for! I am also the person who corresponded with you via this twitter thread earlier this week, FYI!

Is it just one movie stim, or a stim that changes trial by trial? How many movies do you have to load? And do you have an instructions phase or anything before movie presentation that you anticipate taking longer than 2 minutes?

PS. Yes I clocked the twitter link! I assume the vlc did not bring joy then?

Becca

@Becca Just making sure :slight_smile: It’s five different movie stim (each ~5 min, presented sequentially):

There is an introduction screen, but it takes participants only a few seconds to read.

I tried updating to the newest version of PsychoPy and recoding the task in 2021.2.2 while leaving the backend as moviepy, and that didn’t help. I also downloaded VLC and tried using it as the backend instead, but when I do, the video won’t play and I get an error - I wonder if I need to do anything else to get VLC to be accessible to PsychoPy? I tried running pip install python-vlc from my terminal, which ran, but to no avail.

Here’s the error when I try to run the task with vlc as the backend:

File "/Users/susanbenear/Google_Drive/Dissertation/Segmentation Task/segmentationtask_V1_update_lastrun.py", line 258, in <module>
depth=0.0,
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/psychopy/contrib/lazy_import.py", line 120, in __call__
return obj(*args, **kwargs)
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/psychopy/visual/vlcmoviestim.py", line 144, in __init__
self.opacity = float(opacity)
TypeError: float() argument must be a string or a number, not 'NoneType'
Exception ignored in: <bound method VlcMovieStim.__del__ of <psychopy.visual.vlcmoviestim.VlcMovieStim object at 0x7ffa20119518>>
Traceback (most recent call last):
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/psychopy/visual/vlcmoviestim.py", line 1185, in __del__
self._releaseVLCInstance()
File "/Applications/PsychoPy.app/Contents/Resources/lib/python3.6/psychopy/visual/vlcmoviestim.py", line 292, in _releaseVLCInstance
if self._player is not None:
AttributeError: 'VlcMovieStim' object has no attribute '_player'
##### Experiment ended. #####

OK, I can replicate that - it seems to be specific to Builder for me. What happens for you if you go to Coder view → Demos → stimuli → vlcmoviestim?

Does that work for you?

Yes! I was able to open the code and run it so that the video of the PsychoPy creator plays.

OK, in that case your VLC setup seems fine. It sounds like a bug with the movie component. Let's try a temporary workaround!

Add a code component, in the begin experiment tab use:

import os  # usually already available in Builder scripts, but safe to repeat

# get the video relative to the experiment directory
videopath = 'movies/movie1.mp4'  # replace with your movie; use forward slashes so the path works on macOS too
videopath = os.path.join(os.getcwd(), videopath)
if not os.path.exists(videopath):
    raise RuntimeError("Video file could not be found: " + videopath)

# Create your movie stim
mov = visual.VlcMovieStim(win, videopath,
    size=600,  # set as `None` to use the native video size
    pos=[0, 0],  # pos specifies the /center/ of the movie stim location
    flipVert=False,  # flip the video picture vertically
    flipHoriz=False,  # flip the video picture horizontally
    loop=False,  # whether to replay the video when it reaches the end
    autoStart=True)  # start the video automatically when first drawn

in the begin routine tab:

mov.play()

in the each frame tab:

mov.draw()

If it doesn't draw, you might need a placeholder to keep the routine going - add a text stim, put it offscreen, and set its duration to 5 mins.

Please could you let me know if that works?!

If that works, try preloading like this.

In the begin experiment tab:

vlc_movies = []
my_movies = ['movie1.mp4', 'movie2.mp4', 'movie3.mp4', 'movie5.mp4']  # paths to your movies from this directory

for movie in my_movies:
    mov = visual.VlcMovieStim(win, movie,
        size=600,  # set as `None` to use the native video size
        pos=[0, 0],  # pos specifies the /center/ of the movie stim location
        flipVert=False,  # flip the video picture vertically
        flipHoriz=False,  # flip the video picture horizontally
        loop=False,  # whether to replay the video when it reaches the end
        autoStart=True)  # start the video automatically when first drawn
    vlc_movies.append(mov)

In the begin routine tab:

thisMovie = vlc_movies[cliploop.thisN]  # grab the movie from the list based on the current loop iteration
thisMovie.play()  # play the movie

In the each frame tab:

thisMovie.draw()  # draw the current movie frame
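The begin-routine line above assumes a Builder loop named `cliploop`; the pick-by-iteration pattern itself is just list indexing, sketched here with stand-in strings and a hypothetical bounds check:

```python
preloaded = ["mov1", "mov2", "mov3", "mov4", "mov5"]  # stand-ins for VlcMovieStim objects

def movie_for_trial(movies, trial_index):
    # Guard against the loop running more times than there are preloaded movies.
    if not 0 <= trial_index < len(movies):
        raise IndexError("No preloaded movie for trial %d" % trial_index)
    return movies[trial_index]
```

An explicit check like this fails with a readable message mid-pilot instead of a bare `IndexError` mid-scan.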

Hope this helps!
Becca

Unfortunately, this first set of code didn't work, even with a placeholder text stimulus offscreen. I also tried running it as its own new experiment in Builder, without my intro screen, button presses, etc., and it still quits right after I enter the participant number and says "PsychoPy quit unexpectedly".

Here’s the readout when I run the code:

## Running: /Users/susanbenear/Google_Drive/Dissertation/Segmentation_Task/segmentationtask_V1_update_lastrun.py ##
166.1913 INFO Loaded monitor calibration from ['2019_08_13 14:07']
0.9695 WARNING We strongly recommend you activate the PTB sound engine in PsychoPy prefs as the preferred audio engine. Its timing is vastly superior. Your prefs are currently set to use ['sounddevice', 'pyo', 'pygame'] (in that order).
2021-08-12 09:50:44.124 python[20215:2558570] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to (null)
##### Experiment ended. #####

HOWEVER, when I took the code you gave me, added in the missing components from the vlc demo, and put it all in the Coder instead of in a code component in the Builder, I was able to play my first video - even in full screen it played perfectly with no lag! So it's something about the Builder that's causing these issues. I suppose I could do everything in the Coder to avoid these problems. Through a combination of stealing code from demos and Googling errors, I should be able to figure it out :slight_smile: