I’m having issues with visual.MovieStim3. I saw a relevant conversation between Samuel Mehr, @jon and @sol dating back to 2013/2014.
My video is playing at an effective 60–70 fps (measured with .recordFrameIntervals) while it’s encoded at 120 fps (119.88 to be precise), and I need it to play at that frequency; it’s effectively dropping half the frames. It’s not massively high definition, just 720p, yet reducing the pixel count by encoding it as 240p increases the fps to roughly 120. Even so, it’s not quite right: I get a lot of jitter in the fps, and I need it to be very precise.
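For reference, the effective rate can be summarised offline from the recorded intervals (win.frameIntervals is just a list of seconds-per-frame). This is a pure-Python sketch with made-up interval data standing in for a real recording:

```python
# Hypothetical frame-interval analysis. PsychoPy fills win.frameIntervals
# (when win.recordFrameIntervals = True) with seconds-per-frame values;
# the list below is synthetic: half the frames shown on time at 120 fps,
# half taking two refresh periods (i.e. dropped frames).
intervals = [1 / 120] * 50 + [1 / 60] * 50

mean_interval = sum(intervals) / len(intervals)
effective_fps = 1.0 / mean_interval
# flag any interval longer than 1.5 frame periods at 120 fps as a drop
dropped = sum(1 for dt in intervals if dt > 1.5 / 120)

print(f"effective fps: {effective_fps:.1f}")   # → effective fps: 80.0
print(f"suspected dropped frames: {dropped}")  # → suspected dropped frames: 50
```

Saving the real intervals to a file and running this kind of summary makes it easy to see whether the drops are uniform jitter or a regular pattern.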
My new lab computer has an AMD Radeon Pro WX 3100 graphics card and a separate Asus sound card; I’m running Windows and PsychoPy3 on Anaconda Python 3.7.
Let me know if there’s any other relevant info or if there’s any alternative to MovieStim3 that I can try!
Does the movie include audio? How much RAM do you have? How long is the movie overall? What is the movie’s encoding format (e.g., h.264, MPEG, etc.)?
All that might or might not matter, but it’ll help me narrow down. The way MovieStim3 handles audio tracks can be extremely inefficient, though usually that manifests as just outright crashing rather than frame drops.
As for whether there are alternatives, not that I know of but I’d be interested to hear if there were. MovieStim2 is still in there and technically works (sometimes) but I don’t know that it’ll be more efficient. Fundamentally MovieStim3 is running on MoviePy, and a lot of its issues are due to MoviePy’s general inefficiency, but as far as I know there’s no better Python-based solution for loading movie files.
Just tried playing with the audio on mute (noAudio = True) and also tried removing the audio from the movie altogether on encoding, but to no avail.
RAM is 16GB. This question made me think of something: is there any way to load the whole video to the video card before I play it? I’m already using the ISI function where I give PsychoPy 15 seconds before I start to load the video but that doesn’t help either.
The movie is encoded in h.265 because it’s the only way I can export from Premiere CC at 120 fps, I think.
Even trying the simplest builder view set up with just a 3 second video yields the same result. There is a choice to change the backend to “avbin” but I get the following error: AttributeError: module ‘pyglet.media’ has no attribute ‘ManagedSoundPlayer’. Using Opencv as the backend yields the following error: ImportError: dlopen(/Applications/PsychoPy3.app/Contents/Resources/lib/python3.6/lib-dynload/cv2/cv2.so, 2): Library not loaded: @loader_path/.dylibs/libavcodec.57.107.100.dylib
Referenced from: /Applications/PsychoPy3.app/Contents/Resources/lib/python3.6/lib-dynload/cv2/cv2.so
Reason: image not found.
Essentially, you’re pushing things really hard!
Even 720p is nearly 1,000,000 pixels per frame, so 3 million numbers (RGB) that PsychoPy has to read, convert and push to the graphics card in less than 8ms to meet your timing needs. That’s tough going.
Optimisations:
By turning off waitBlanking for the window you might get slightly better performance (it allows the system to work a frame ahead if it has the chance).
Make sure everything else is kept to an absolute minimum in terms of processing: don’t do other calculations in the each-frame code, keep anything else on screen to a minimum, and especially get rid of text.
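Those two suggestions can be sketched as a minimal playback script. The filename is a placeholder and this is untested (it needs a display to run), but it shows where waitBlanking is set and keeps the per-frame loop bare:

```python
from psychopy import visual
from psychopy.constants import FINISHED

# waitBlanking=False lets the driver queue a frame ahead instead of
# blocking on each vertical blank (per-flip timing becomes less exact).
win = visual.Window(size=(1280, 720), fullscr=True, units='pix',
                    waitBlanking=False)

# 'my_clip.mp4' is a hypothetical path; noAudio avoids the audio overhead
mov = visual.MovieStim3(win, 'my_clip.mp4', noAudio=True)
mov.play()

while mov.status != FINISHED:
    mov.draw()   # nothing else drawn, no other per-frame calculations
    win.flip()

win.close()
```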
Otherwise, it’s time to get your hands dirty inside the movie code and try to find additional optimizations in the rendering code.
I’m also trying to play 1920x1080 video, but at 60Hz, not 120. The relevant conversation link you provided is giving me hope. I’m currently getting only 30Hz, but my machine is not exactly very beefy. I’m planning to test this on another machine soon.
My problem, though, is slightly different. I have the movie set to loop, and there is a noticeable lag each time the video restarts. Using @enricovara’s mention of .recordFrameIntervals led me to look at that section of the manual. I saved the frame intervals to a file and found that every 61 frames (the length of the AVI), the frame interval was ~0.14 s instead of ~0.03 s.
Hopefully, on a faster machine, I will get 60Hz, but I still notice the delay each time the video loops.
We are in the process of making longer videos so this is not as noticeable, but is there a way to minimize this delay in the video looping?
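The spike-hunting described above is easy to script. A sketch with synthetic data shaped like that 61-frame AVI (60 normal ~16.7 ms frames, then a ~140 ms "rewind" interval each loop):

```python
# Find loop-boundary "rewind" spikes in a saved frame-interval log.
# The data here are synthetic: three loops of a 61-frame movie, where
# the 61st interval each time is the slow restart.
normal = 1 / 60.0
intervals = ([normal] * 60 + [0.14]) * 3

# anything longer than two normal frame periods counts as a spike
spikes = [i for i, dt in enumerate(intervals) if dt > 2 * normal]
print(spikes)  # → [60, 121, 182] : one spike per loop boundary
```

If the spike indices line up exactly with the movie length, as they do here, the cost is clearly in the restart rather than in general decoding.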
Does the movie have audio? If so, it’s probably not possible to minimize the delay. Every time you loop (or seek), PsychoPy reloads the entire audio track for a movie file. If there’s no audio, I’m not sure.
@jonathan.kominsky, No audio. I also tried it on the machines the experiment will actually be run on. Still 30Hz, but the “rewind” time went from 0.14sec to 0.08sec.
ImageJ plays them at 60Hz with no problem. Also no pause each loop, but I suspect ImageJ is doing this all in memory.
Yes, I suspect the delay is due to initially accessing the file from disk. i.e. if you create the movie stimulus in a loop, or update an existing stimulus’ movie file attribute in a loop, the disk reading happens once each time the loop runs. Hard to know without seeing your code, but you might avoid this by creating the movie stimulus just once, before the loop starts. Then within the loop, you just have your_movie.draw() commands. This avoids going back to the disk to initialise reading from the movie file.
To reset the movie to the start, you should be able to just call your_movie.seek(0) once it has played through, ready for the next iteration.
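A minimal sketch of that pattern (filename and repetition count are placeholders, and this is untested since it needs a display; depending on the PsychoPy version you may also need to call play() again after seeking):

```python
from psychopy import visual
from psychopy.constants import FINISHED

win = visual.Window(size=(1920, 1080), units='pix')

# Create the stimulus ONCE, before the loop, so the file is opened
# and parsed from disk a single time.
mov = visual.MovieStim3(win, 'my_clip.avi', noAudio=True)

for repetition in range(10):
    mov.seek(0)   # rewind without re-initialising the file reader
    mov.play()
    while mov.status != FINISHED:
        mov.draw()
        win.flip()

win.close()
```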
Can someone explain what’s happening under the hood a little, please? If I understand correctly, PsychoPy calls moviepy, loads the video into RAM, and then passes each frame to the graphics card for display? And it’s this passing to the graphics card which is ‘slow’? Or are we not sure what the bottleneck is either, in my case?
Why does simply opening the video in VLC work fine? Can anyone think of a way to implement whatever VLC is doing in Python?
We use moviepy, which uses ffmpeg to load the movie as a stream. Movies are loaded one frame at a time (once in memory your video is uncompressed raw numbers, and that quickly consumes hundreds of gigabytes of memory [see note 1 below], so we can’t load all frames in advance).
Why don’t we achieve the performance of VLC?
The key differences are:
VLC is written in C (whereas PsychoPy is written in Python)
VLC only has one thing to do, and can dedicate all resources to that one key task of translating pixels, whereas PsychoPy is trying to capture responses and present other stimuli all with high precision
VLC has many developers with a specific interest/expertise in video rendering, whereas PsychoPy is written by behavioural scientists borrowing libraries from other places
Possibly VLC pre-buffers a few frames in advance to get a smoother transition. I don’t know enough about that (see point 3)
[1]: You were talking about 720p at 120 Hz: each frame is 720 x 1280 x 3 values at (probably) 16 bits each = 5,529,600 bytes (roughly 5.5 MB) per frame. At 120 Hz that’s ~650 MB per second, or roughly 40 GB per minute (about 2.4 TB per hour) of movie!
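That arithmetic, spelled out (assuming 16-bit, i.e. 2-byte, storage per colour value as in the note above):

```python
# Back-of-the-envelope bandwidth for uncompressed 720p RGB at 120 fps.
width, height, channels = 1280, 720, 3
bytes_per_value = 2  # 16-bit values

bytes_per_frame = width * height * channels * bytes_per_value
bytes_per_second = bytes_per_frame * 120
bytes_per_minute = bytes_per_second * 60

print(bytes_per_frame)            # → 5529600   (~5.5 MB per frame)
print(bytes_per_second / 1e6)     # → 663.552   (~650 MB per second)
print(bytes_per_minute / 1e9)     # → 39.81312  (~40 GB per minute)
```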
@Michael, all I’m doing is setting movie.loop to True. I just played my video with ffmpeg from the command line, and I can see when the video restarts, so that’s out of PsychoPy’s hands. Why it’s playing at 30fps instead of 60fps though, I need to look into a little further.
Yesterday I tried to change the backend to opencv, but I got an error.
################## Running: C:\Experiments\testAVI\testAVI.py ##################
pygame 1.9.4
Hello from the pygame community. https://www.pygame.org/contribute.html
Traceback (most recent call last):
File "C:\software\condaenvs\perceptionlab\lib\site-packages\psychopy\visual\movie2.py", line 110, in <module>
import vlc
ModuleNotFoundError: No module named 'vlc'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Experiments\testAVI\testAVI.py", line 75, in <module>
depth=0.0,
File "C:\software\condaenvs\perceptionlab\lib\site-packages\psychopy\contrib\lazy_import.py", line 119, in __call__
obj = object.__getattribute__(self, '_resolve')()
File "C:\software\condaenvs\perceptionlab\lib\site-packages\psychopy\contrib\lazy_import.py", line 88, in _resolve
obj = factory(self, scope, name)
File "C:\software\condaenvs\perceptionlab\lib\site-packages\psychopy\contrib\lazy_import.py", line 203, in _import
module = __import__(module_python_path, scope, scope, [member], level=0)
File "C:\software\condaenvs\perceptionlab\lib\site-packages\psychopy\visual\movie2.py", line 116, in <module>
if "wrong architecture" in err.message:
AttributeError: 'ModuleNotFoundError' object has no attribute 'message'
Maybe I need to get vlc installed on this conda environment to see about playing back at 60fps.
You were talking about 720p at 120 Hz: each frame is 720 x 1280 x 3 values at (probably) 16 bits each = 5,529,600 bytes (roughly 5.5 MB) per frame. At 120 Hz that’s ~650 MB per second, or roughly 40 GB per minute (about 2.4 TB per hour) of movie!
That data rate isn’t so daunting anymore. Modern video hardware can push out considerably higher rates than that: DisplayPort can presently handle 4K (3840 x 2160) at 120 Hz with HDR colour.
Yes, a high performance graphics card can do it (once the data are in the right format). I’m just pointing out that there’s a lot to do because we also have to fetch the data from the disk as well, and all this being done by intermediate libs like ffmpeg, which probably means multiple conversions.
Ok, so I got vlc installed so I could use the opencv backend. I’m getting slightly better framerates, but still not 60 fps. It turns out, though, that if I use norm units instead of pix units with the opencv backend, I get better rates still, and I don’t get the “rewind” time issue that I have with moviepy (ffmpeg). The rate for moviepy is more consistent, though.
Here’s a table I built of the framerates (1/frameInterval) for each run. I use a 1920x1080 60fps AVI video as the stimulus. I’m just pasting the first second of the 10 second flow.
I tried this on a few more machines, and it turned out that I can get 60fps using the NVidia control panel. The 4k monitors we have run at 30Hz, but if I set them to HD mode (1920x1080), they can refresh at 60Hz. I lose a lot of desktop real estate, but I can display a 60Hz HD AVI file at 60Hz.
As for moviepy vs opencv backends, I still have my “rewind” time issue with moviepy, not with opencv. However, the moviepy framerate is more stable. The opencv framerate jumps around a lot.
In the moviepy module, FFMPEG_VideoReader (in ffmpeg_reader.py) has a bufsize argument, but the VideoFileClip class doesn’t supply a bufsize when it creates its FFMPEG_VideoReader instance.
In FFMPEG_VideoReader.__init__:
    if bufsize is None:
        w, h = self.size
        bufsize = self.depth * w * h + 100
I’m going to try and modify this and movie3.py to be able to pass in a buffer size so I can fit the entire 1 sec movie in memory. Not sure if it will help, but it’s worth a try.
On second thought, maybe only a 2-3 frames instead of just one.
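For what it’s worth, the buffer-size arithmetic for a few frames is straightforward; a sketch, assuming depth is moviepy’s bytes-per-pixel (3 for RGB) and the 1920x1080 file from above:

```python
# moviepy's default buffer holds one decoded frame plus a little slack:
#     bufsize = depth * w * h + 100
w, h, depth = 1920, 1080, 3
one_frame = depth * w * h        # 6,220,800 bytes per decoded RGB frame

n_frames = 3                     # buffer 2-3 frames instead of one
bufsize = n_frames * one_frame + 100
print(bufsize)                   # → 18662500
```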
Changed a few lines in psychopy/visual/movie3.py, moviepy/video/io/VideoFileClip.py, everything locks up. Putting things back like they were. Not worth the effort.