Dear PsychoPy community,
Are there any recommendations for the following situation:
I am trying to show a movie with a trackbar underneath. The trackbar contains markers and when clicked, the movie should display the frame at the corresponding time.
What I have currently is a loop with the following code:
```python
# Draw the user interface
while True:
    # Draw UI components
    video.draw()
    question_text.draw()
    trackbar.draw()
    for segment in segments:
        segment['ui_element'].draw()
    win.flip()

    # Check mouse input (only act on the rising edge of the left button)
    if mouse.getPressed()[0] == 1 and mouse.previousPressedState == 0:
        mouse.previousPressedState = 1
        for segment in segments:
            if mouse.isPressedIn(segment['ui_element']):
                video.seek(segment['t_start'])
                break
    elif mouse.getPressed()[0] == 0:
        mouse.previousPressedState = 0  # re-arm so the next click registers
```
segment['t_start'] is the time in seconds within the fragment. What I would expect is that this advances the position in the video to the specified time and displays that frame on the next draw command. However, it stays stuck on the initial frame. I have tried a lot of things (including tapping into the updateVideoFrame function calls that MovieStim uses internally), but that did not work.
In the end it seems to be related to the seek function not having any effect unless the movie is actually playing, but when I toggle between play and pause to get a still frame, it does not always jump to the same frame when I repeatedly click the same marker.
We are not using MovieStim3 because normal playback seemed very laggy with it. The PsychoPy version is 2024.2.4.
It’s because of this known issue: MovieStim (python) seek behavior: ~5 frames to complete
In terms of solutions, if your movies don’t have sound, my recommendation is honestly just to load the movie as a series of images. If you pre-load all the frames as ImageStim objects and keep them in a list, seeking is very precise and only takes one frame to execute (you’re just changing which ImageStim you draw). The advantage of using movies is that you can present video and audio together, plus some memory efficiency thanks to the video encoding, but the decoding is such a messy process that for video-only movies I just don’t think it’s worth it.
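If it helps, here is a minimal sketch of that approach. The frame extraction and ImageStim loading are assumptions on my part (shown as comments, since they need a PsychoPy window and a directory of pre-extracted frames, e.g. from `ffmpeg -i movie.mp4 frames/%05d.png`); the part you can actually run standalone is the time-to-index mapping that replaces `seek()`:

```python
# "Seeking" with a list of pre-loaded frames is just picking a list index,
# so the only logic needed is mapping a time in seconds to a frame index.

def time_to_frame(t_seconds, fps, n_frames):
    """Clamp a seek time (seconds) to a valid frame index."""
    idx = int(round(t_seconds * fps))
    return max(0, min(idx, n_frames - 1))

# Loading (hypothetical names; requires a PsychoPy window `win` and frames
# extracted beforehand, e.g. with ffmpeg as mentioned above):
#
# frames = [visual.ImageStim(win, image=f"frames/{i + 1:05d}.png")
#           for i in range(n_frames)]
#
# In the draw loop, a marker click becomes:
#
# current = time_to_frame(segment['t_start'], fps=30, n_frames=len(frames))
# frames[current].draw()
```

The `draw()` call then shows exactly the requested frame on the very next `win.flip()`, which is the one-frame-latency behavior you were expecting from `seek()`.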
If you need a solution for actual movie files, I eventually came up with one for PyHab but it’s a severe kludge. It plays the movie until the seek command actually finishes (because it has to be playing to execute the seek at all), but mutes the movie and hides it behind a still-frame of the first frame until it reports the intended time-stamp, then unmutes it and puts the movie in front. It takes at least 100-150ms to resolve every time and you need to do it slightly differently based on whether the end of the movie has been reached or not.
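To make the control flow of that kludge concrete, here is a pure-Python model of it (illustrative only; the real implementation drives MovieStim and ffpyplayer directly, and names like `pts_trace` are my inventions, not the PyHab API):

```python
# Model of the "muted seek" loop: while the decoder's reported timestamp is
# still behind the seek target, keep the movie muted and hidden behind a
# cached still frame; reveal and unmute it once the seek actually completes.

def seek_workaround_steps(pts_trace, target_t):
    """Simulate the per-frame states of the workaround.

    pts_trace: timestamps the decoder reports on successive frames
    target_t:  the requested seek time
    Returns one state per frame: 'hidden' until the reported timestamp
    reaches the target, 'visible' afterwards.
    """
    states = []
    revealed = False
    for pts in pts_trace:
        if not revealed and pts >= target_t:
            revealed = True  # unmute and bring the movie to the front
        states.append('visible' if revealed else 'hidden')
    return states
```

The run of `'hidden'` states at the start is exactly the multi-frame delay from the linked thread: the movie has to keep playing (muted, behind the still frame) until the decoder catches up.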
If you look at this commit from PyHab’s repository you can get the gist of what I had to do: Major movie playback overhaul part 1 · jfkominsky/PyHab@b860e2a · GitHub
Seeking while ensuring the movie muted correctly required reaching down into the ffpyplayer API directly and ignoring the command queue PsychoPy prefers to use, and I had to do some actual texture processing to extract the still frame from the movie as an ImageStim. We are in full MacGyver territory here; I do not recommend this approach unless you are truly desperate.
Thank you!
I don’t need audio, so preloading all the frames as ImageStim objects is indeed a good idea. I will try that out now.