MovieStim Not Working Properly

OS: Win11
PsychoPy version: 2022.1.1
What are you trying to achieve?:

I’m trying to run a loop in which I present 72 videos of 2-3 seconds in length (none is larger than 900 KB, and the average is about 500 KB). After each video presentation, participants move on to a ratings screen in which they make a series of judgements about the video along sliding scales, before pressing Next to view the next video in the series.

What did you try to make it work?:

I’ve inserted a MovieStim component in the routine and have managed to get these video stimuli to appear.

What specifically went wrong when you tried that?:

However, the problem is that the videos come up for only a fraction of a second rather than 2-3 seconds. Furthermore, they don’t seem to move. If they do appear for more than a split second, the movement in the video is extremely lagged and slow. I don’t understand why the full 2-3 second clips aren’t playing properly.

Any help would be hugely appreciated! Thank you

Does your video component have a duration or is it set to run to the end of the video? I’m wondering whether there is a lag on the loading of the video and you are then ending the routine at the time the video should have ended if it had loaded with no lag.

Hello Wakefield, thanks for the reply!

My video component does not currently have a duration, but I did set the expected duration to 3 seconds and nothing changed.

In addition, I already suspected it might be a loading problem, so I moved the video start time to 1.0 seconds (rather than the default start time). However, this didn’t have any effect either.

Update: When I change the backend from moviepy to opencv, the video plays perfectly. However, the program then crashes after one video, and gives me this error:

File "C:\Users\sday\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\visual\movie2.py", line 700, in _updateFrameTexture
raise RuntimeError("Could not load video frame data.")
RuntimeError: Could not load video frame data.
################ Experiment ended with exit code 1 [pid:13348] #################

Is this familiar?

For context, the program crashes without playing any videos at all if I use either of the other two backends

Further update:

If I tick the “loop playback” option and add a “Next” button (which participants click to move onto the subsequent ratings screen), the videos all load and all play fine. However, I don’t want my videos to keep looping…

If you share a minimal example that demonstrates the issue, I could try to replicate it on my computer. One initial idea I have is that it might have to do with the encoding of your video files. For example, there could be a corrupt audio stream associated with the video stream. If that is the case, ticking the “no audio” checkbox might help.
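
If you want to double-check whether a file actually carries an audio stream, you could query it with moviepy (which is bundled with PsychoPy). A minimal sketch, with the file name only as an example:

from moviepy.editor import VideoFileClip

clip = VideoFileClip('Videos/WF1A1-HC.mp4')
print('fps:', clip.fps, 'duration (s):', clip.duration)
print('audio stream:', clip.audio)  # None means the file has no audio track
clip.close()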

Hello Frank. Thank you so much for the offer of help! Fortunately my videos don’t have audio, but I have ticked the “no audio” box anyway.

If you want to take a look, I’ve attached my builder with the video rating task, the spreadsheet for the video task, and a zip folder containing six of my video files!

Videos.zip (2.6 MB)
FiCVideos_for_Forum.xlsx (16.0 KB)
ContextStudy_for_Forum.psyexp (131.4 KB)

Thank you so much! Your help is very highly appreciated

Edit: I’ve just attached updated versions of my builder and spreadsheet! The originals weren’t correct

Hi Sam,

I downloaded your files and everything seems to be working just fine?

What I did:

  • download your files
  • edit your xlsx to use relative instead of absolute file paths (e.g. “Videos/WF1A1-HC.mp4”)
  • open the psyexp
  • click on “targetVideo” movie component and change backend to “moviepy”

I see each video coming up after about 2.5 seconds and then running for about 2 to 3 seconds. Once I press the “Next” button the rating scales come up, and once I press “Next” again, the next video starts.

Did I miss something or did you solve your issue already?

What might make a difference (at least I have read that some Mac users had issues with this): the absolute file path tells me that you have your files on cloud storage. I had them in a local folder. You could try whether this makes a difference for you.

All the best,

Frank.

Hello Frank,

Thank you so much for trying to help me. Unfortunately however, your solutions haven’t worked.

I changed the paths from absolute to relative, but this doesn’t seem to make any difference. I also come up against exactly the same problems when I run the experiment from local storage rather than cloud storage.

Intriguingly, I do have some more information about why this may be happening. Specifically, when I try to run the videos using the moviepy backend, I get these warnings, which probably explain why I only see each video for a juddery split-second:

15.6699 WARNING 2.2512787000159733: Video catchup needed, advancing self._nextFrameT from 0.0 to 0.05
15.6699 WARNING 2.2513092000153847: Video catchup needed, advancing self._nextFrameT from 0.05 to 0.1
15.6699 WARNING 2.251322700001765: Video catchup needed, advancing self._nextFrameT from 0.1 to 0.15000000000000002
15.6699 WARNING 2.251334999979008: Video catchup needed, advancing self._nextFrameT from 0.15000000000000002 to 0.2
15.6699 WARNING 2.251346500008367: Video catchup needed, advancing self._nextFrameT from 0.2 to 0.25
15.6699 WARNING 2.251357899978757: Video catchup needed, advancing self._nextFrameT from 0.25 to 0.3
15.6699 WARNING 2.2513689000043087: Video catchup needed, advancing self._nextFrameT from 0.3 to 0.35
15.6700 WARNING 2.2513798999716528: Video catchup needed, advancing self._nextFrameT from 0.35 to 0.39999999999999997
15.6700 WARNING 2.251390999997966: Video catchup needed, advancing self._nextFrameT from 0.39999999999999997 to 0.44999999999999996
15.6700 WARNING 2.251402599969879: Video catchup needed, advancing self._nextFrameT from 0.44999999999999996 to 0.49999999999999994
15.6700 WARNING Max reportNDroppedFrames reached, will not log any more dropped frames

It seems that the video skips most of the frames to “catch up” on something.

In contrast, when I run the experiment using the opencv backend, the video runs absolutely fine without any judder, but the experiment then crashes at the end of the first video. Again, I receive warnings, but this time they say:

18.1166 WARNING MovieStim2 dropping video frame index: 1
20.6340 WARNING MovieStim2 dropping video frame index: 53
20.6341 WARNING MovieStim2 dropping video frame index: 54
20.6341 WARNING MovieStim2 dropping video frame index: 55
20.6341 WARNING MovieStim2 dropping video frame index: 56

^ These five frame indices always refer to the first frame and the final four frames of the video.

And then directly underneath these warnings, I see:

1.1462 WARNING Monitor specification not found. Creating a temporary one…
File "C:\Users\sday\OneDrive - Nexus365\Documents\DPhil\Trust-Context Study\PsychoPy Experiment\ContextStudy1_lastrun.py", line 1449, in
win.flip()
File "C:\Users\sday\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\visual\window.py", line 1082, in flip
thisStim.draw()
File "C:\Users\sday\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\visual\movie2.py", line 737, in draw
self._updateFrameTexture()
File "C:\Users\sday\AppData\Local\Programs\PsychoPy\lib\site-packages\psychopy\visual\movie2.py", line 700, in _updateFrameTexture
raise RuntimeError("Could not load video frame data.")
RuntimeError: Could not load video frame data.
################ Experiment ended with exit code 1 [pid:17696]

I think the problem is that certain frames are dropped from the end of the video, which leads to the Runtime Error “could not load video frame data”.

Intriguingly, if I tick the “loop playback” tickbox, the videos are presented fine and the experiment doesn’t crash. Unfortunately however, I then see the first five frames of each video again at the end of the video (which I obviously don’t want for an expression judgement task because it makes the expression look very odd)

Alternatively, the experiment also doesn’t crash (mostly) if I set the duration to $Duration - 0.30 (I’ve already defined each video’s duration in the spreadsheet). I think this is because chopping 0.3 seconds off the video prevents the final four frames from dropping at the end. However, again, I don’t want to lose large chunks of my video. Furthermore, the experiment still crashes after certain videos even when I set the duration to this.

I hope this looks familiar? If so, any more advice would be hugely appreciated! Thank you so much once again!

Hello,

I also got your experiment to run. I also changed the absolute paths to relative paths; this is advised anyway if you plan to run the experiment online. Also, users often report problems when storing their experiments in a cloud folder, so don’t use a cloud folder. I also used moviepy as the playback backend.

There were a couple of smaller issues (or the syntax has changed from 2021.2.3 to the newest version, which I doubt): there is a warning that the variable gender appears twice in the experiment, so try to use unique names. You used the wrong parentheses, for instance, in the location parameter. I do not know whether Excel reformatted your numbers, but the formula you used to determine the duration on the basis of the frame count yielded a float with a “,” instead of a “.”, although this might be due to my German Excel. I guess none of these are game changers.

Do you need the Next button to go to the rating? You could end the routine automatically when the video is finished by ticking Force end of Routine on the movie component. Start the rating with some delay so that it does not directly follow the video.
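
If that checkbox does not behave as expected with your backend, the same can be done with an Each Frame snippet in a code component. A minimal sketch, assuming the movie component is named targetVideo as in the attached builder:

# Each Frame tab: end the routine once the movie reports it has finished
# (FINISHED is imported automatically in Builder-generated scripts)
if targetVideo.status == FINISHED:
    continueRoutine = False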

Best wishes Jens

Hi Sam,

the warning you see is created by PsychoPy itself, see here:

As it runs on our machines, I assume your computer is too slow to process the videos. Ideas I have are using a smaller resolution or trying a different encoding.

Usually, I saw the best results with moviepy. In case you consider converting your movie files, I suggest using ffmpeg, as this is also what moviepy uses (as far as I know). There is also a version of ffmpeg bundled with PsychoPy. For me, it was located in the following folder:
“C:\Program Files\PsychoPy\Lib\site-packages\imageio_ffmpeg\binaries”
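
If the folder is different on your machine, the bundled binary can also be located from Python. A small sketch using imageio_ffmpeg, which ships with PsychoPy:

# prints the full path of the ffmpeg executable that imageio/moviepy use
import imageio_ffmpeg
print(imageio_ffmpeg.get_ffmpeg_exe())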

All the best,

Frank.

Hello Sam

I tried to convert your videos with VLC to a more online-compatible format, but it failed for all except two files (WF1D1-NC, WF1D1-HC). All the other files had a length of 0 seconds after conversion. Does your program run on your computer when you use just these two files?

Best wishes Jens

Hi Jens,

Thank you for your help and advice too! I will definitely use relative paths from now on, and will not use cloud folders. I have also realised that I do not need the Next button, so I have taken it out of my version of the experiment.

As for the variable “gender” appearing twice in an experiment, I saw that error, but wasn’t sure how to get around it. Say for instance that I have two or three separate subtasks within an experiment which each use the same stimulus identity, which I then want to match up during the subsequent data analysis. For example, I may present some faces without a background context and then present the same faces within a background context in order to compare ratings across these conditions.

How would I do this without using the same column name to identify the stimulus identity?

Thank you once more!

Edit: I’ve just seen your most recent reply. Will check that now!

Hello Sam,

well, simply use gender1, gender2 and gender3. That won’t be a problem in the data analysis. If all three variables contain the same values, simply ignore or delete the ones you do not need.
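
If it helps, the duplicate columns are also easy to collapse again at the analysis stage. A minimal sketch with pandas, assuming hypothetical column names gender1/gender2 and that each subtask fills in its own column:

import pandas as pd

df = pd.read_csv('participant_001.csv')  # hypothetical data file
# take gender1 where present, otherwise gender2, then drop the duplicates
df['gender'] = df['gender1'].fillna(df['gender2'])
df = df.drop(columns=['gender1', 'gender2'])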

Best wishes Jens

Perfect, thank you so much Jens, that makes sense! And apologies if my questions are naive - I’m very new to this

Also, I’ve just run the experiment using only WF1D1-NC and WF1D1-HC. When I used the opencv backend, it crashed again, with exactly the same warnings and error as before:

11.0503 WARNING MovieStim2 dropping video frame index: 1
13.3137 WARNING MovieStim2 dropping video frame index: 48
13.3137 WARNING MovieStim2 dropping video frame index: 49
13.3138 WARNING MovieStim2 dropping video frame index: 50
13.3138 WARNING MovieStim2 dropping video frame index: 51

And when I used the moviepy backend, the videos were again lagged and presented for only a split second, with these same warnings:

14.6708 WARNING 2.41182719997596: Video catchup needed, advancing self._nextFrameT from 0.0 to 0.05
14.6708 WARNING 2.4118594999890774: Video catchup needed, advancing self._nextFrameT from 0.05 to 0.1
14.6708 WARNING 2.4118737999815494: Video catchup needed, advancing self._nextFrameT from 0.1 to 0.15000000000000002
14.6708 WARNING 2.4118866000208072: Video catchup needed, advancing self._nextFrameT from 0.15000000000000002 to 0.2
14.6708 WARNING 2.411897999991197: Video catchup needed, advancing self._nextFrameT from 0.2 to 0.25
14.6708 WARNING 2.4119096000213176: Video catchup needed, advancing self._nextFrameT from 0.25 to 0.3
14.6709 WARNING 2.4119207999901846: Video catchup needed, advancing self._nextFrameT from 0.3 to 0.35
14.6709 WARNING 2.411932400020305: Video catchup needed, advancing self._nextFrameT from 0.35 to 0.39999999999999997
14.6709 WARNING 2.4119438999914564: Video catchup needed, advancing self._nextFrameT from 0.39999999999999997 to 0.44999999999999996
14.6709 WARNING 2.4119556000223383: Video catchup needed, advancing self._nextFrameT from 0.44999999999999996 to 0.49999999999999994
14.6709 WARNING Max reportNDroppedFrames reached, will not log any more dropped frames

How exactly did you try to convert the videos? Which format would you recommend? I have been using ffmpeg via Git Bash to get the frame count and duration of my video clips for the spreadsheet, but I haven’t used it to convert videos before.
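
For reference, the same frame count and duration could presumably also be read with OpenCV, which PsychoPy already installs; a minimal sketch, with the file name only as an example:

import cv2

cap = cv2.VideoCapture('Videos/WF1A1-HC.mp4')
n_frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
fps = cap.get(cv2.CAP_PROP_FPS)
print('frames:', int(n_frames), 'fps:', fps, 'duration (s):', n_frames / fps)
cap.release()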

Hi Frank, thank you so much for some more detailed advice!

I’d be surprised if my computer is too slow because I am on a desktop computer (Dell Optiplex 7050) which has always been very quick at everything. These video files aren’t huge either so I’m very puzzled.

Nevertheless, if I convert the videos, what format would you suggest I convert them to? Also, the warning page you have linked to refers to movie3 stimuli, whereas my error messages and warnings refer to movie2 stimuli. What is the difference between them, and does it matter?

Hi Sam,

I referred to the movie3 warning, namely “… Video catchup needed …”

movie3 uses moviepy as its backend, while movie2 uses OpenCV/VLC.

I usually had better results with movie3, which is why I usually use moviepy (= movie3).

Also your experiment runs just fine using moviepy on my computer.
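
To rule out anything in the Builder file itself, you could also test a single clip from a short standalone script. A minimal sketch, assuming the moviepy backend (MovieStim3) and a path such as 'Videos/WF1A1-HC.mp4':

from psychopy import core, visual
from psychopy.constants import FINISHED

win = visual.Window(size=(1280, 720), fullscr=False)
# MovieStim3 is the moviepy-based movie stimulus
mov = visual.MovieStim3(win, 'Videos/WF1A1-HC.mp4', noAudio=True)
mov.play()

# draw frames until the movie reports that it has finished
while mov.status != FINISHED:
    mov.draw()
    win.flip()

win.close()
core.quit()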

For video conversion, you would need to play around a little. A typical command looks like ffmpeg -i input_file, followed by some encoding options and then the output filename, for example:

ffmpeg-win64-v4.2.2.exe -i WF1A1-HC.mp4 -vf scale=640:428 -an -c:v libx264 -crf 18 tmp.mp4

You can find some help on possible parameters here:
https://trac.ffmpeg.org/wiki/Encode/H.264

Further, I saw that you created your video files with ezgif.com … In case you just used this to concatenate image files into a video, you could also try doing that directly with ffmpeg, thus dropping the ezgif.com step, just in case it added some issues to your video files.
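
To convert all 72 clips in one go, the command above could also be looped over the folder from Python; a sketch only, using the same options and a hypothetical output folder:

import subprocess
from pathlib import Path

FFMPEG = 'ffmpeg'  # or the full path to the bundled ffmpeg-win64 executable
out_dir = Path('Videos_converted')
out_dir.mkdir(exist_ok=True)

for src in sorted(Path('Videos').glob('*.mp4')):
    # downscale, drop audio and re-encode to H.264, as in the command above
    subprocess.run([FFMPEG, '-i', str(src),
                    '-vf', 'scale=640:428', '-an',
                    '-c:v', 'libx264', '-crf', '18',
                    str(out_dir / src.name)],
                   check=True)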

All the best,

Frank.

Hello Sam

I used vlc following these instructions:

Best wishes Jens

Thanks to both of you for the advice! I’ll give it a go and let you know if it works

Update:

I’ve tried to use the VLC media player to convert the files, but unfortunately the conversion didn’t work; all of the output files were 0 KB and corrupted.

Fortunately, I’ve had more success with ffmpeg and have converted the videos to H.264 and halved the resolution. Sadly, however, I am getting exactly the same issues as before despite these conversions! I’m at a loss.

Do you think there could be something in how my PsychoPy Builder file is set up that is causing this? These aren’t big videos (especially after conversion) and my computer has never been slow.