It’s because of this: MovieStim (python) seek behavior: ~5 frames to complete
In terms of solutions, if your movies don't have sound, my recommendation is honestly to just load them as a series of images. If you pre-load all the frames as ImageStim objects and keep them in a list, seeking is very precise and only takes one frame to execute (you're just changing which ImageStim you draw). The advantage of using movies is that you can present video and audio together, plus you get some memory efficiency thanks to the video encoding, but the decoding is such a messy process that for video-only movies I just don't think it's worth it.
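To make that concrete, here is a minimal sketch of the image-sequence approach. It assumes you have already dumped the movie to numbered PNGs (e.g. with ffmpeg) into a frames/ folder; the file pattern, window size, and frame rate are placeholders, so adjust them for your own stimuli.

```python
from glob import glob
from psychopy import visual, core

win = visual.Window([800, 600], units='pix')

# Pre-load every frame as an ImageStim; for long movies, watch your RAM.
# Assumes frames were exported beforehand, e.g.:
#   ffmpeg -i my_movie.mp4 frames/frame_%04d.png
framePaths = sorted(glob('frames/frame_*.png'))
frames = [visual.ImageStim(win, image=p) for p in framePaths]

fps = 30.0  # assumed frame rate of the source movie


def drawFrameAt(t):
    """'Seek' by picking the ImageStim for time t; it appears on the next flip."""
    idx = min(int(t * fps), len(frames) - 1)
    frames[idx].draw()


# Example: jump straight to the 2-second mark and play forward for one second.
startTime = 2.0
clock = core.Clock()
while clock.getTime() < 1.0:
    drawFrameAt(startTime + clock.getTime())
    win.flip()

win.close()
```

The "seek" there is just list indexing, which is why it resolves on the very next screen refresh instead of taking several frames.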
If you need a solution for actual movie files, I eventually came up with one for PyHab, but it's a severe kludge. It plays the movie until the seek command actually finishes (because the movie has to be playing for the seek to execute at all), but mutes it and hides it behind a still image of its first frame until it reports the intended timestamp, then unmutes it and puts the movie back in front. It takes at least 100-150 ms to resolve every time, and you need to do it slightly differently depending on whether the end of the movie has already been reached.
If you look at this commit from PyHab’s repository you can get the gist of what I had to do: Major movie playback overhaul part 1 · jfkominsky/PyHab@b860e2a · GitHub
Seeking while making sure the movie muted correctly required reaching down into the ffpyplayer API directly and ignoring the command queue PsychoPy prefers to use, and I had to do some actual texture processing to extract the still frame from the movie as an ImageStim. We are in full MacGyver territory here; I do not recommend this approach unless you are truly desperate.
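For what it's worth, here is a rough standalone sketch of the mechanics rather than PyHab's actual code: it drives ffpyplayer's MediaPlayer directly instead of going through MovieStim (the path to MovieStim's internal player handle varies across PsychoPy versions, so I won't pretend to give it), and the filename, target time, and the vertical flip of the frame array are assumptions you would need to check against your own setup.

```python
import numpy as np
from ffpyplayer.player import MediaPlayer
from psychopy import visual, core

win = visual.Window([800, 600], units='pix')

# Placeholder filename; ask ffpyplayer for rgb24 frames so the bytes are easy to reshape.
player = MediaPlayer('my_movie.mp4', ff_opts={'out_fmt': 'rgb24'})
oldVolume = player.get_volume()
player.set_volume(0.0)  # mute immediately so nothing is audible during the kludge

# Poll until the decoder hands over the first frame, which becomes the cover image.
frame = None
while frame is None:
    frame, val = player.get_frame()
    core.wait(0.005)
img, _ = frame

# Turn the raw rgb24 bytes into an ImageStim to use as the still-frame cover.
w, h = img.get_size()
arr = np.frombuffer(bytes(img.to_bytearray()[0]), dtype=np.uint8).reshape(h, w, 3)
arr = np.flipud(arr) / 127.5 - 1.0  # ImageStim wants values in -1..1; the flip may be version-dependent
cover = visual.ImageStim(win, image=arr, size=(w, h))

# The kludge itself: seek while playing muted, and keep the cover on screen
# until the player actually reports the intended timestamp.
targetTime = 5.0  # seconds; placeholder
player.seek(targetTime, relative=False)

while (player.get_pts() or 0.0) < targetTime:
    frame, val = player.get_frame()  # keep pulling frames so playback advances
    if val == 'eof':
        break  # the end-of-movie case needs its own handling, as noted above
    cover.draw()  # the still frame hides the movie while the seek resolves
    win.flip()

player.set_volume(oldVolume)  # unmute and carry on drawing the movie frames in front
```

In PyHab this is all tangled up with MovieStim's own drawing and command queue, which is what makes the real version so much messier than this sketch suggests.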