How to change the way Pavlovia saves the data?

URL of experiment: https://run.pavlovia.org/visual.narrative/visualnarrative/html/

Description of the problem:

If we run our experiment with 8 or more blocks, we get the following error:

when uploading participant’s results for experiment: Visual.Narrative
Request Entity Too Large

With 7 or fewer blocks, however, the experiment runs fine. We therefore assume that too much data is generated, and would like to change the way Pavlovia stores the data by modifying the JavaScript.
We already looked into the script in the html folder, but could not find the line that specifies how the data is stored. Do you know of any way to modify how Pavlovia saves the data?

Thanks a lot!

Do you store the mouse position for each frame? That could be a problem since it generates a lot of data.

Hi! Did you find a solution for this? I have the same problem. I also need to store participant responses for each frame, and after 8 blocks the browser crashes with an “out of memory” error.

Do you really need to save the response every frame, or could you just save the data when the response is different from the previous frame?
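To illustrate the suggestion, here is a minimal sketch of change-only logging. This is generic JavaScript, not PsychoJS API; `makeChangeLogger` and `record` are placeholder names, and the idea is simply to keep the previous frame's value and only push a row when the new value differs:

```javascript
// Change-only logger: stores a row only when the value differs from
// the previous frame. `makeChangeLogger` is a hypothetical helper,
// not part of PsychoJS.
function makeChangeLogger() {
  let previous = null;   // serialized value from the last stored frame
  const log = [];
  return {
    record(frame, value) {
      const serialized = JSON.stringify(value);
      if (serialized !== previous) {
        log.push({ frame, value });
        previous = serialized;
      }
    },
    entries() {
      return log;
    },
  };
}

// Example: 6 frames where the response only changes twice.
const logger = makeChangeLogger();
const responses = ['left', 'left', 'left', 'right', 'right', 'left'];
responses.forEach((r, i) => logger.record(i, r));
console.log(logger.entries().length); // 3 rows instead of 6
```

In a PsychoJS routine you would call the `record` step once per frame (e.g. from the `each frame` tab of a code component) and write `entries()` to the data at the end of the routine, so the saved file only grows when the response actually changes.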


That’s a good idea! I’ll try this and see how it goes. Thank you!

I tried this, but it didn’t reduce memory load much. Chrome now crashes after 10 blocks instead of 8. Firefox doesn’t crash but starts lagging towards the end.

Perhaps my image stimuli take up too much memory. Is there a way to delete previous stimuli from memory so that they don’t accumulate? I have to present 400 images (9,000 frames) per block, for 12 blocks. Chrome’s heap size during the task keeps growing to a little over 4000 MB, and then it crashes.

If nothing else works I guess I could split the task into 3 parts.
Someone had a similar problem and splitting the task was their solution - Task crashed or froze for 30% of online participants - #2 by Anthony
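One thing that might help before splitting the task: browsers free a decoded image only once nothing references it anymore. PsychoJS has no documented "unload" call that I know of, so the sketch below is a generic per-block cache pattern, with `BlockResourceCache` and the `createImage` factory as hypothetical names; the point is just that clearing every reference at the end of a block lets the garbage collector reclaim the decoded bitmaps:

```javascript
// Hypothetical per-block resource cache. The `createImage` factory is
// supplied by the caller (in a browser it might be
// (p) => { const i = new Image(); i.src = p; return i; }).
class BlockResourceCache {
  constructor(createImage) {
    this.createImage = createImage;
    this.images = new Map();
  }
  load(paths) {
    for (const p of paths) {
      this.images.set(p, this.createImage(p));
    }
  }
  get(path) {
    return this.images.get(path);
  }
  releaseAll() {
    // Drop every reference. If no stim, array, or closure still points
    // at the images, the browser is free to release the pixel data.
    this.images.clear();
  }
  size() {
    return this.images.size;
  }
}

// Usage sketch: load one block's images, run the block, then release.
const cache = new BlockResourceCache((p) => ({ path: p }));
cache.load(['face001.jpg', 'face002.jpg']);
// ... present the block ...
cache.releaseAll();
```

The catch is that every other reference (ImageStim objects, trial handler rows, your own arrays) must also be dropped, otherwise the bitmaps stay alive even after `releaseAll()`.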

How big are your image files? (could you reduce the number of pixels)

How different are they? (could you create images using polygons or image elements)

35 KB per image. I just converted them from .bmp (800 KB/image) to .jpg (35 KB/image). Surprisingly, memory usage during the task is the same, and in both cases the experiment crashes after 8 blocks. The only difference after the conversion is that resources load faster.
Unfortunately, I can’t use polygons; they are images of faces (KDEF dataset).
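That result is actually expected: once decoded for display, an image costs width × height × 4 bytes (RGBA) in memory regardless of how small the compressed file is on disk, so .bmp and .jpg of the same dimensions occupy the same heap space. A quick back-of-envelope, assuming the standard KDEF dimensions of 562 × 762 px (treat that as an assumption if your images were resized):

```javascript
// Decoded-bitmap memory estimate: width * height * 4 bytes per pixel
// (RGBA), independent of the compressed file size on disk.
function decodedMB(width, height, count) {
  return (width * height * 4 * count) / (1024 * 1024);
}

// Assumed KDEF dimensions: 562 x 762 px.
console.log(decodedMB(562, 762, 1).toFixed(2));   // ~1.6 MB per image
console.log(decodedMB(562, 762, 400).toFixed(0)); // ~650 MB per 400-image block
```

That is why reducing the pixel dimensions (not just the file size) is what actually lowers memory use: halving width and height cuts the decoded footprint to a quarter.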