Occasional FPS lowering

When I monitor the FPS of the stimulation window, I occasionally observe random drops in the FPS.

First, I ran code as simple as the following.

# import libraries
from psychopy import visual, core
import numpy as np
import matplotlib.pyplot as plt

# initialize a clock and a list to record timing
trialClock = core.Clock()
t_list = []
t = 0

# create a window to draw in
myWin = visual.Window((600, 600))

# record a timestamp at every refresh for 20 s
while t < 20:
    t = trialClock.getTime()
    t_list.append(t)
    myWin.flip()  # update the screen

myWin.close()

# calculate fps from successive timestamps
fps_list = 1 / np.diff(t_list)

Then I visualized fps_list with matplotlib:

plt.plot(fps_list)
plt.ylim(0,80)
plt.show()

This gave me the following graph.

[graph: FPS per frame, roughly 60 fps with a sharp downward spike around frame 400]

As you can see, there is an abnormal spike around frame 400, where the FPS drops sharply. (The number and timing of the spikes differ on every run.)

How can these FPS drops be avoided? Thanks.

Environment:
OS : Windows 10 Pro
CPU : Intel Core i7-6800K 3.40GHz
Display : S2411W (by EIZO)
python : 2.7.14
psychopy : 1.85.3
I use Jupyter Notebook for coding (but this also happens when a .py file is run directly in the Python interpreter).

Hello,

In my experience, spikes like this can have many causes. Background processes caused similar performance issues on my Windows 10 machines, so that might be worth looking into. However, some FPS issues can also arise from the graphics driver itself or from Python’s garbage collector.

Thank you for the reply.

background processes

Do you mean I should turn some of them off in the Task Manager? Or is there another way to avoid their interruptions?
(In short, would you advise me on how you solved the problem on your machines?)

graphics driver or GC

If this is the case, I don’t believe I can handle it myself. Maybe I should open an issue on GitHub.

It looks from the graph as if that glitch is a single missed frame, right?

Here is some stuff you could try:

1 - You could reject the experimental trials that contain missed frames, instead of trying to eliminate the missed frames themselves. If there is no correlation between the frame misses and the stimulus, then rejecting glitch trials will not introduce bias. If the glitches are infrequent and each presentation/trial is short, throwing out trials with timing glitches is cheap. At the opposite extreme it becomes impractical, because too high a proportion of trials would contain a missed frame; in the worst case, no trial would be free of them.

One common situation in which you are likely to see a correlation between the stimulus and the glitch is if your trials vary in duration, because longer trials are more likely to contain glitches than shorter trials. In that case, to preserve randomization and balanced presentation ratios, over-collect data for all stimuli, then reject as many short trials as long trials.
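To make the rejection criterion concrete, here is a minimal sketch of detecting missed frames from the flip timestamps. The function name, the nominal refresh rate, and the tolerance are my own assumptions, not from the thread:

```python
import numpy as np

def find_missed_frames(timestamps, nominal_hz=60.0, tol=0.5):
    """Return indices of inter-flip intervals that exceed the nominal
    frame period by more than `tol` (0.5 = 50% longer than one frame)."""
    intervals = np.diff(np.asarray(timestamps, dtype=float))
    period = 1.0 / nominal_hz
    return np.nonzero(intervals > (1.0 + tol) * period)[0]
```

A trial could then be rejected whenever find_missed_frames() returns a non-empty result for its timestamps.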

2 - Raise the scheduling priority of your Python process during the timing test. I’m not really sure how to do that in PsychoPy/Python; it looks like it used to be in psychopy.core, but I can’t find it there now. Someone else around here, certainly Jon but probably others, will know the current state of that. Whatever the API in Python, it resolves to the system API for setting priority, which on Linux and OS X includes settings for real-time task scheduling; those are what you want. Also, Linux can be configured with real-time, low-latency, or normal kernels.
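For what it’s worth, here is a hedged sketch of raising process priority from Python using the third-party psutil package. psutil is my assumption, not part of PsychoPy, and whatever PsychoPy itself uses internally may differ:

```python
import sys

def raise_priority():
    """Best-effort attempt to raise this process's scheduling priority.

    Uses the third-party psutil package (an assumption, not PsychoPy's
    own API). Returns True on success, False if psutil is missing or
    the OS refuses (negative nice values need privileges on Unix).
    """
    try:
        import psutil
    except ImportError:
        return False
    p = psutil.Process()
    try:
        if sys.platform == "win32":
            p.nice(psutil.HIGH_PRIORITY_CLASS)
        else:
            p.nice(-10)
        return True
    except psutil.Error:
        return False
```

Call it once before the timing-critical loop; the return value tells you whether the request took effect.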

3 - Timing can vary by OS, so try Linux and/or OS X, particularly in conjunction with the previous point about task scheduling priority, because priority setting works differently (and better) on OS X and Linux.

4 - In case that missed frame is caused by the Python interpreter allocating or freeing memory during the test, make sure memory use in your script is constant during the timing loop. In MATLAB, I create a matrix before the timing loop and then, during the loop, replace elements of the matrix with timestamps. There may be something similar you could do in Python: I would expect that a numpy array is a monolithic block of memory whose elements resolve to C types rather than to dynamically allocated per-element objects. Python lists, on the other hand, certainly allocate and free Python objects (and therefore memory) as elements are added or replaced.
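A minimal sketch of that preallocation idea in Python. The frame count and the time source are placeholders; in the real script, the timestamp would come from trialClock.getTime() next to myWin.flip():

```python
import time
import numpy as np

N_FRAMES = 1200                   # e.g. 20 s at 60 Hz -- an assumption
timestamps = np.zeros(N_FRAMES)   # one allocation, made before the loop

for i in range(N_FRAMES):
    # real script: myWin.flip(); timestamps[i] = trialClock.getTime()
    timestamps[i] = time.time()

# writing into preallocated slots avoids list growth (and the attendant
# allocations) inside the timing-critical loop
```

The analysis stays the same afterwards: 1 / np.diff(timestamps) gives the per-frame rate.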

5 - The suggestion to close all other applications during the timing test was a good one.

6 - I don’t know whether PyPy works with PsychoPy, but if so, you might see different and better results with it. PyPy is much faster than CPython. Faster is not the same as real-time, but it might nonetheless buy you enough headroom to meet the blanking deadline more reliably.

7 - If it were me, I would do everything I could to make it work as well as possible in PsychoPy/Python. Then, if I absolutely had to have better timing, I would look at the Psychtoolbox with whatever graphics card and driver are currently recommended for it, and check whether that does any better than what I could get with PsychoPy/Python. If it reduces the glitches, then either just use that, or figure out why and port the solution back to PsychoPy.

8 - Jon has documentation on display timing here. Someone could look into the constant-memory issue (#4 above) to see whether it causes the glitches and whether PsychoPy allocates/frees memory to record refresh-interval timestamps. It’s possible you could get a negative result even after eliminating allocations/frees in your own test script, because PsychoPy may still be allocating/freeing under the hood.

Because there are many possible causes of timing glitches, I am not confident which of those suggestions will help. For each one, I would therefore do comparative testing, with enough run time to identify statistically significant differences between conditions. Maybe someone else has already explored all of this and can tell you what usually works best.

Reliable timing is broadly useful for psychophysics stimulus displays, so documenting and reporting favorable test results back here would win you a lot of positive karma.

The OS and the Python interpreter are optimized for speed, not for reliably meeting timing deadlines. If the simpler solutions do not work, you would be left with the very hard work of digging into the internals of the OS, the Python interpreter, and the video driver.

best,

Allen


Also, for the purpose of testing timing only, you could use Pyglet or the Python bindings for GLFW to rule out internal PsychoPy memory allocations/frees as a contributor to the glitches. I don’t actually know what is inside those libraries either, though. Calling GLFW from C would eliminate all possibility of allocations/frees by your own process during the animation loop. Timing may well improve before testing ever gets to that point, though.

Thank you for your reply. I’ll try these and report back as soon as possible (though it seems there are many things to learn first).

as if that glitch is a single missed frame

I should have supplied this information earlier, but the drop seems to be more than that. Even when the normal FPS is 60, it jumps down to about 10 rather than to 30, so I believe multiple frames are being skipped rather than a single one.

That seems strange to me if it is more than one missed frame, but I’ve only ever dealt much with precise video timing in Objective-C, C/C++ and MATLAB, so it could be a Python thing, like its garbage collector, that I would never have seen before. You might be able to use Python’s gc module to explore that, for example by forcing garbage collection during an animation loop and checking whether that causes skipped frames, or by detecting when collections occur and whether they coincide with the misses.
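A small sketch of that experiment using the stdlib gc module. The loop body and iteration counts are placeholders for the real draw/flip loop:

```python
import gc
import time

gc.disable()                  # suspend automatic collection for the loop
durations = []
for i in range(200):
    t0 = time.time()
    if i == 100:
        gc.collect()          # force one collection mid-loop, on purpose
    # ... real script: draw stimuli and call myWin.flip() here ...
    durations.append(time.time() - t0)
gc.enable()
```

If iteration 100 stands out clearly from its neighbours, the cost of a collection is visible at frame timescales; the complementary test is to leave gc enabled and check whether the outlier frames coincide with collections.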

Also, after further consideration, I think it is unlikely that appending to a Python list causes much delay. Still, testing in C would give a good baseline for comparison with Python, particularly if you suspect the garbage collector. MATLAB is (or was, the last time I measured it years ago) O(n) in the number of matrix elements when appending new elements. I would expect Python to be O(1) in the number of list elements, assuming a linked-list implementation under the hood, though I am not certain that it is one.

- Allen

Here we go, from the online Python documentation:

Python’s lists are really variable-length arrays, not Lisp-style linked lists. The implementation uses a contiguous array of references to other objects, and keeps a pointer to this array and the array’s length in a list head structure.

This makes indexing a list a[i] an operation whose cost is independent of the size of the list or the value of the index.

When items are appended or inserted, the array of references is resized. Some cleverness is applied to improve the performance of appending items repeatedly; when the array must be grown, some extra space is allocated so the next few times don’t require an actual resize.
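A quick empirical check of that amortized-append behaviour with the stdlib timeit module (the list sizes and repeat counts here are arbitrary):

```python
import timeit

for n in (10000, 100000, 1000000):
    # time 100000 appends onto a list that already holds n elements
    t = timeit.timeit("lst.append(0)",
                      setup="lst = [0] * %d" % n,
                      number=100000)
    print(n, t)  # per-append cost stays roughly flat as the list grows
```

So appending timestamps per frame should cost roughly constant time, consistent with the documentation quoted above.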

- Allen

I got this solved with psychopy.platform_specific.rush(). I’ll post a detailed report when I have time.