
Online frame refresh rate? frame-based coordinate updates for images are jittery online

URL of experiment: Pavlovia
Description of the problem: Image coordinate and size variables update smoothly offline in PsychoPy, but online on Pavlovia they seem to refresh slowly, jitter, and behave inconsistently across items.
More detail: In one of my routines, variables for the position, size, and opacity of certain image components update in the “Each Frame” tab of a code component. The code checks whether a certain keyboard component has been pressed yet; if it has, on each new frame it adds to or multiplies the previous value of those variables by a set amount to animate various paths (e.g. moving in a straight line, or moving in a parabola while shrinking).

Offline, the Python version runs smoothly, and these animations work smoothly.

On Pavlovia, however, the animations are very jittery. It seems to me (since the timing of shrinking and movement are still correctly synchronized with each other) that the frame refresh rate is slower and/or inconsistent. I’m particularly bemused because an earlier pilot version did not seem to have these jitter / frame refresh rate issues. I can’t find any changes that I would have made that would have changed this. Moreover, I have similar animations in a different routine with a very similar code snippet that are jitterier than the offline version, but still smoother than this particular routine.

Is there something that might be affecting online frame refresh rates? Or is there something in my PsychoJS code snippet, or in the way images are drawn, that might be causing this?

The offending routine, if you are taking a look at my experiment, is training_display. I provide the (different) PsychoPy and PsychoJS code snippets below as well, for reference. The image components that get updated depend on which loop the routine is in, but I give screenshots of two representative affected image components.

Image component that relies on beamX and beamY variables:

Image component that relies on rightFruitX, rightFruitY, rightFruitSizeX, rightFruitSizeY, rightFruitOpacity variables:

The PsychoPy code snippet (works smoothly offline):

if routine == "training2":
    if len(choose_button.keys) > 0:
        if corrAns == "left":
            if beamX > -0.44:
                beamX -= 0.004
                beamY -= 0.004
            if beamX < -0.42:
                choose_button_manual = "FINISHED"
        else:
            if beamX < 0.44:
                beamX += 0.004
                beamY -= 0.004
            if beamX > 0.42:
                choose_button_manual = "FINISHED"
    else:
        beamX = 0
        beamY = 0.32

elif routine == "training4":
    if corrAns == "left":
        if len(choose_button.keys) > 0:
            if verbID == "beamup":
                if (leftFruitX < 0) and (frameN % 2):
                    leftFruitX += 0.02
                    leftFruitY = (leftFruitY + 0.004) * 1.1 #test
                    leftFruitSizeX *= 0.95
                    leftFruitSizeY *= 0.95
                if leftFruitX >= 0:
                    leftFruitOpacity = 0
                    choose_button_manual = "FINISHED"
            else:
                if (leftFruitX < 0) and (frameN % 2):
                    leftFruitX -= 0.012
                    leftFruitY = (leftFruitY - 0.004) * 1.1 #test
                    leftFruitSizeX *= 0.95
                    leftFruitSizeY *= 0.95
                if leftFruitX < -0.76:
                    leftFruitOpacity = 0
                    choose_button_manual = "FINISHED"
        else:
            leftFruitX = -0.44
            leftFruitY = 0
            leftFruitSizeX = 0.84
            leftFruitSizeY = 0.6
            leftFruitOpacity = 1
    else: #i.e. if corrAns == "right"
        if len(choose_button.keys) > 0:
            if verbID == "beamup":
                if (rightFruitX > 0) and (frameN % 2):
                    rightFruitX -= 0.02
                    rightFruitY = (rightFruitY + 0.004) * 1.1 #test
                    rightFruitSizeX *= 0.95
                    rightFruitSizeY *= 0.95
                if rightFruitX <= 0:
                    rightFruitOpacity = 0
                    choose_button_manual = "FINISHED"
            else:
                if (rightFruitX > 0) and (frameN % 2):
                    rightFruitX += 0.012
                    rightFruitY = (rightFruitY - 0.004) * 1.1 #test
                    rightFruitSizeX *= 0.95
                    rightFruitSizeY *= 0.95
                if rightFruitX > 0.76:
                    rightFruitOpacity = 0
                    choose_button_manual = "FINISHED"
        else:
            rightFruitX = 0.44
            rightFruitY = 0
            rightFruitSizeX = 0.84
            rightFruitSizeY = 0.6
            rightFruitOpacity = 1
elif routine == "validation":
    if len(choose_button.keys) > 0:
        choose_button_manual = "FINISHED"
choose_button.getKeys()
onward_button.getKeys() 

The PsychoJS code snippet (works weirdly online):

if ((routine === "training2")) {
    if ((typeof choose_button.keys !== 'undefined')) {
        if ((corrAns === "left")) {
            if ((beamX > (- 0.44))) {
                beamX -= 0.004;
                beamY -= 0.004;
            }
            if ((beamX < (- 0.42))) {
                choose_button_manual = "FINISHED";
            }
        } else {
            if ((beamX < 0.44)) {
                beamX += 0.004;
                beamY -= 0.004;
            }
            if ((beamX > 0.42)) {
                choose_button_manual = "FINISHED";
            }
        }
    } else {
        beamX = 0;
        beamY = 0.32;
    }
} else {
    if ((routine === "training4")) {
        if ((corrAns === "left")) {
            if ((typeof choose_button.keys !== 'undefined')) {
                if ((verbID === "beamup")) {
                    if (((leftFruitX < 0) && (frameN % 2))) {
                        leftFruitX += 0.02;
                        leftFruitY = ((leftFruitY + 0.004) * 1.1);
                        leftFruitSizeX *= 0.95;
                        leftFruitSizeY *= 0.95;
                    }
                    if ((leftFruitX >= 0)) {
                        leftFruitOpacity = 0;
                        choose_button_manual = "FINISHED";
                    }
                } else {
                    if (((leftFruitX < 0) && (frameN % 2))) {
                        leftFruitX -= 0.012;
                        leftFruitY = ((leftFruitY - 0.004) * 1.1);
                        leftFruitSizeX *= 0.95;
                        leftFruitSizeY *= 0.95;
                    }
                    if ((leftFruitX < (- 0.76))) {
                        leftFruitOpacity = 0;
                        choose_button_manual = "FINISHED";
                    }
                }
            } else {
                leftFruitX = (- 0.44);
                leftFruitY = 0;
                leftFruitSizeX = 0.84;
                leftFruitSizeY = 0.6;
                leftFruitOpacity = 1;
            }
        } else {
            if ((typeof choose_button.keys !== 'undefined')) {
                if ((verbID === "beamup")) {
                    if (((rightFruitX > 0) && (frameN % 2))) {
                        rightFruitX -= 0.02;
                        rightFruitY = ((rightFruitY + 0.004) * 1.1);
                        rightFruitSizeX *= 0.95;
                        rightFruitSizeY *= 0.95;
                    }
                    if ((rightFruitX <= 0)) {
                        rightFruitOpacity = 0;
                        choose_button_manual = "FINISHED";
                    }
                } else {
                    if (((rightFruitX > 0) && (frameN % 2))) {
                        rightFruitX += 0.012;
                        rightFruitY = ((rightFruitY - 0.004) * 1.1);
                        rightFruitSizeX *= 0.95;
                        rightFruitSizeY *= 0.95;
                    }
                    if ((rightFruitX > 0.76)) {
                        rightFruitOpacity = 0;
                        choose_button_manual = "FINISHED";
                    }
                }
            } else {
                rightFruitX = 0.44;
                rightFruitY = 0;
                rightFruitSizeX = 0.84;
                rightFruitSizeY = 0.6;
                rightFruitOpacity = 1;
            }
        }
    } else {
        if ((routine === "validation")) {
            if ((typeof choose_button.keys !== 'undefined')) {
                choose_button_manual = "FINISHED";
            }
        }
    }
}

Update for anyone who is curious, plus a word to the wise: be smarter than I was and read the manual more closely than I originally did!!!

Original issue:
Updating ImageStim attributes (position, size, opacity) on every frame in some of my routines was extremely memory intensive, leading to jittery animations online.
I eventually figured out that even the offline version was dropping frames (although this had fairly little visual impact in PsychoPy). Online, on Pavlovia, I was getting hundreds of `[Violation] 'requestAnimationFrame' handler took … ms` warnings. In fact, a few Pavlovia pilot runs (on 2 of the 7 computer/browser combinations I tried) hit similar issues (WebGL “context lost” errors) that crashed the experiment completely.
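For reference, this is roughly how I spotted the dropped frames. Offline, PsychoPy can log frame intervals (via the window's `recordFrameIntervals` setting); the detection logic below is just a sketch, with made-up interval data:

```python
# Sketch: counting dropped frames from a list of frame intervals.
# A frame is "dropped" when the interval between flips noticeably
# exceeds one refresh period (the 1.5x tolerance here is my choice).

def count_dropped(intervals, refresh_hz=60, tolerance=1.5):
    """Count frames whose interval exceeded `tolerance` refresh periods."""
    limit = tolerance / refresh_hz  # e.g. 25 ms at 60 Hz
    return sum(1 for dt in intervals if dt > limit)

# e.g. three normal ~16.7 ms frames and one 40 ms stall:
intervals = [0.0167, 0.0166, 0.040, 0.0168]
# count_dropped(intervals) -> 1
```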

My questions above:
Indeed, I’m pretty sure these issues did come from dropping frames. If there were a way to do fewer frame refreshes per second, I bet that would have an impact on these memory issues. But, as the manual clarifies nicely, I don’t think there’s any way to set frame refresh rate manually on most computers:

Section 2.8.2 (page 17):
Most monitors have fixed refresh rates, typically 60 Hz for a flat-panel display…
…[with the exception of] the caveat…that you can now buy specialist monitors that support variable refresh rates (although not below at least 5 ms between refreshes).

Since the majority if not all of my participants won’t have this special hardware :sweat_smile:, and now that I understand the source of my issues better, I have opted for other fixes.
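(Side note: one standard way to make animations robust to an uncontrollable refresh rate, which I didn't end up needing here, is to drive them off elapsed time rather than frame count, so a dropped frame skips ahead instead of slowing the whole animation down. A minimal sketch, with hypothetical names and a made-up speed:)

```python
# Sketch of frame-rate-independent animation: position is a pure
# function of time elapsed since the keypress, not of frameN.
# (beam_pos, the 0.24 units/s speed, and the start/stop values are
# hypothetical, loosely modeled on my training2 beam animation.)

def beam_pos(elapsed, speed=0.24, start=(0.0, 0.32), x_stop=0.44):
    """Beam position `elapsed` seconds after the keypress."""
    dx = min(speed * elapsed, x_stop)      # clamp at the end point
    return (start[0] + dx, start[1] - dx)  # move along a diagonal

# In a Builder "Each Frame" tab this would be driven by the routine
# clock, e.g. beamX, beamY = beam_pos(t - press_time)
```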

What actually worked?:
I made the following changes to reduce the memory issues that this code was creating, which has stopped the crashes my piloters were experiencing.

  • First, inspired by this post (Browser issues: exp runs online using Mac, but not Windows), I stopped updating opacity on every frame. (Since I was already updating image size on every frame, I just set the image size variables to 0 under the same conditions in my code snippet, and made opacity constant.)
  • Second, based on the following advice in the manual, I reduced each image's resolution to just enough pixels to not look grainy at the size it typically occupies on a normal screen. (I created my stimulus images myself in Adobe Draw and had left them at the original whopping 1024x768-pixel resolution… :woman_facepalming: ) For those using Preview on Mac, resizing is under Tools > Adjust Size in the menu bar.

2.7.2 Tips to render stimuli faster (page 14-15)

  1. Keep images as small as possible. This is meant in terms of number of pixels, not in terms of Mb on your disk. Reducing the size of the image on your disk might have been achieved by image compression such as using jpeg images, but these introduce artefacts and do nothing to reduce the problem of sending large amounts of data from the CPU to the graphics card. Keep in mind the size that the image will appear on your monitor and how many pixels it will occupy there.
  • Finally, I had (rather foolishly) included a few trials that didn't need any animation in a routine that nonetheless updated all of these image attributes on every frame (I had just set those variables to constants in my code snippet). I created a new routine with constant image stim attributes and moved those trials into it.
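To make the first change concrete, here is a minimal sketch (hypothetical names, with a plain dict standing in for the image component) of hiding an image by zeroing its size instead of animating its opacity:

```python
# Sketch of the "size instead of opacity" fix: opacity stays
# constant and never needs a per-frame update; the image is hidden
# by collapsing its size to zero once the animation finishes.

def hide_fruit(fruit):
    """Hide an image by zeroing its size; opacity is left untouched."""
    fruit["sizeX"] = 0
    fruit["sizeY"] = 0
    return fruit

fruit = {"sizeX": 0.84, "sizeY": 0.6, "opacity": 1}
hide_fruit(fruit)  # opacity stays 1, but nothing is visible
```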
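And a rough sketch of the pixel arithmetic behind the image-resolution tip (using PsychoPy's 'height' units convention, where a size of 1.0 spans the full screen height; the screen resolution is an assumption):

```python
# Sketch: how many pixels does an image actually need?
# In 'height' units, an image of size s spans s * screen_height pixels.

def pixels_needed(size_height_units, screen_px_height=1080):
    """Pixels an image spans on screen, for a size given in height units."""
    return round(size_height_units * screen_px_height)

# e.g. a fruit image drawn at 0.6 height units on a 1080p display:
# pixels_needed(0.6) -> 648, so my 1024x768 sources were oversized
```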

I hope this helps anyone with similar issues!
