Closing of online experiment very slow

URL of experiment: https://pavlovia.org/andero/focus-investment-task

Description of the problem: A task (that makes heavy use of frame-by-frame drawing) runs happily to the end, but then there is a huge delay between the last participant interaction and the completion of the experiment.

The task is a “Multiple Object Tracking” paradigm, so each trial has 9 polygons that move each frame for 5 seconds (I’ve tested it with about 20 trials, but want to eventually run more).
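For readers unfamiliar with the paradigm, the per-frame movement amounts to something like the sketch below. This is purely illustrative (the speeds, the square arena, and the bounce rule are my assumptions, not the actual task code):

```python
import random

def step(positions, velocities, bound=1.0):
    """Advance every object one frame, bouncing off the walls of a square arena."""
    for i, ((x, y), (vx, vy)) in enumerate(zip(positions, velocities)):
        x, y = x + vx, y + vy
        if abs(x) > bound:  # hit a vertical wall: reverse horizontal speed
            vx = -vx
        if abs(y) > bound:  # hit a horizontal wall: reverse vertical speed
            vy = -vy
        positions[i] = (x, y)
        velocities[i] = (vx, vy)

random.seed(0)
positions = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(9)]
velocities = [(random.uniform(-0.02, 0.02), random.uniform(-0.02, 0.02)) for _ in range(9)]
for _ in range(60 * 5):  # 5 seconds at an assumed 60 Hz refresh
    step(positions, velocities)
print(len(positions))  # -> 9
```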

I suspect the frame-by-frame drawing creates some data that gets processed/uploaded at the end. I’ve switched off auto logging from each element as well as from the task, but to no avail.
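For a rough sense of how per-frame logging could balloon, here is a back-of-the-envelope sketch. The 60 Hz refresh rate and the assumption of one logged attribute update per stimulus per frame are mine; the trial count, duration, and stimulus count are from the description above:

```python
# Rough estimate of per-frame log entries for the tracking task described above.
REFRESH_HZ = 60      # assumed monitor refresh rate
TRIALS = 20          # "tested it with about 20 trials"
TRIAL_SECONDS = 5    # polygons "move each frame for 5 seconds"
STIMULI = 9          # "9 polygons"

frames_per_trial = REFRESH_HZ * TRIAL_SECONDS
log_entries = TRIALS * frames_per_trial * STIMULI
print(log_entries)  # -> 54000
```

Even at ~100 bytes per log line, that is on the order of 5 MB of log to serialise and upload at close, which would fit the symptom of a delay appearing only at the end.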

The regular data file of the experiment is small (30-40 KB). The browser console tells me the task takes about 50-90 MB of RAM. However, in the task manager, the browser uses up to 2 GB while running the task, including during the closing stage that is the main problem.

Any pointers would be greatly appreciated!

Greetings,
Andero Uusberg

Your experiment is not set to public so we can’t see it.

There does seem to be a memory issue with “every frame” code, you are not the only one reporting problems. See also: Update every frame doesnt work, Pavlovia: black screen and unable to continue (RAM issues)

However, that might not be what’s going on here. The fact that it lags only at the end of the task is unexpected. I take it you see no such behavior if you run it on your local machine in its Python form? I just want to make sure it’s not something odd in the task itself.

Also, how much delay are we talking about? Seconds? Minutes?

Wow, thank you for such a fast response!

I’ve now made the project public.

I think the lag is around a minute (certainly long enough for me to worry that a participant will close the window before it's done).

The code used to run well in Python (although I've made quite a few manual edits since then to get it to work in JavaScript).

I do notice occasional lags during the task as well (but only for milliseconds), so it might also be the general memory issue. I’ll look into the other thread.

Oh, and I’ve tested it in the latest Firefox on Windows 10.

This may be a Firefox thing or a Windows thing. I just piloted a copy of it in Chrome on MacOS and it closed within a second of reaching the end of the experiment. Does it work better if you use Chrome on a Windows computer?

More generally, looking through the code I don’t see anything that would obviously cause it to clog up at the end like that. There are a couple of frame-drops that I expect are due to the every-frame code, but nothing that happens at the end looks like it should take that much more memory or cause that much more load. If it were something like storing a bunch of extra data every frame for mouse location or the like, I would have expected the same behavior on Chrome when I piloted it, but it didn’t do that at all. Of course, because I was piloting, it also had me download the data file rather than send it to the server, but the downloaded file is only 36 KB.

Thank you so much once again for taking the time to test this out.

I just piloted it on Edge, and indeed the ending was fine! This largely solves my problem, because I can live with some people facing this issue (and I can warn them to be patient).

There was occasional lagging throughout, so I’ll keep digging to see whether I can simplify anything further. On that note, do you know if a circle, with its large number of vertices, is somehow more complex to render than other shapes such as a square?
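In case it helps frame the question: as I understand it, circles are typically approximated as many-sided regular polygons, so the vertex counts really do differ a lot. A quick sketch (the 32-edge figure for a circle stimulus is my assumption, not something I've verified in the library):

```python
import math

def polygon_vertices(n_edges, radius=1.0):
    """Vertices of a regular n-sided polygon approximating a circle."""
    return [(radius * math.cos(2 * math.pi * i / n_edges),
             radius * math.sin(2 * math.pi * i / n_edges))
            for i in range(n_edges)]

square = polygon_vertices(4)
circle_approx = polygon_vertices(32)  # assumed edge count for a "circle"
print(len(square), len(circle_approx))  # -> 4 32
```

So a 32-edge circle carries eight times the vertices of a square; whether that actually matters for the frame budget presumably depends on the renderer, but fewer edges would be a cheap thing to try.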

Your guess is as good as mine, I’m afraid.

At the end of the study PsychoPy sends your data/log back to Pavlovia. My guess is that the log file is large (possibly swamped by the every-refresh updates) and is therefore taking some time to send. That might be the cause of the laggy behaviour during the study too - a lot of every-frame changes might be too much for the browser to manage (and log).

This is an area we clearly need to work on, but for now the key is to try to keep the every-frame changes to only what you really need.
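One generic way to trim every-frame work is to only push an update when a value has actually changed. This is a hypothetical sketch, not PsychoPy/PsychoJS API; `apply_update` stands in for whatever attribute-setting (and logging) your each-frame routine triggers. It won't help for positions that genuinely change every frame, but it can eliminate redundant updates to attributes like orientation or colour:

```python
def make_throttled_setter(apply_update):
    """Wrap an update function so it only fires when the value changes."""
    last = {}

    def set_attr(name, value):
        if last.get(name) != value:  # skip redundant (and loggable) updates
            last[name] = value
            apply_update(name, value)

    return set_attr

calls = []
set_attr = make_throttled_setter(lambda n, v: calls.append((n, v)))
set_attr("ori", 0)
set_attr("ori", 0)   # redundant: skipped
set_attr("ori", 15)
print(len(calls))    # -> 2
```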

This makes sense. Thank you both and keep up the good work!