Pavlovia: black screen and unable to continue (RAM issues)

URL of experiment: Pavlovia

I’ve been piloting my experiment and in about 10% of cases people report the task crashing and the screen going black. Sometimes it states that Chrome has run out of RAM. All of these errors are in Chrome on Windows, as my task does not work on any other platform.

This error only occurs in this task and not my other one. I think perhaps this task stores too much data or is too complex, and older laptops aren’t able to run it. I find that older laptops also pause for around 2 seconds between trials and often crash.

Any help is appreciated, as I am paying people through Prolific to do this and lose money each time the task is unsuccessful.

Chris


Can you share the repository of the experiment on the Pavlovia GitLab?

The short version is that whatever kind of stimuli you’re using, you probably need to make them smaller. Images, movies, audio files: whatever they are, they must be rather large.

Thanks for your reply

Is this the link you meant?: https://gitlab.pavlovia.org/CDawes/prl_march

I also wondered (this may be completely irrelevant) whether multiple people completing the task simultaneously causes an issue. I asked 5 people to run it individually and all sessions were smooth; when 5 people completed it at once, 4 of them said it crashed.

The link isn’t working for me; your study might not be set to be publicly viewable.

W/r/t simultaneous users, I’d say it’s unlikely to matter: this crash has to do with RAM on the computers of the people running the study, not server RAM. I can’t say I’m 100% certain, but it seems unlikely.

Also, in the interim, you can advise people to close any tabs they don’t need while they’re doing the study, which might help a little bit. Still, my bet is that there will be a way to reduce the amount of memory the study takes in the first place.

Just changed it to public now.

This task has quite a lot of elements on the screen at once which could be the issue. I think there are around 20 image components as I’ve made a virtual keypad to enter responses.

Great idea about closing the tabs; I’ll add that to the instructions.

Odd, the repository isn’t letting me see the actual code, just the project page. I’ll see if it’s sorted itself out later. In the interim, yes, having 20 image components is probably a big part of the issue! Even if the images are individually small, that many objects adds up; that said, are the image files that go into those 20 components sizable?

Refreshed it again in case that works.

The largest image I have is about 60 KB; the central stimulus is 16 KB.

This is the main trial stage: the numbers are individual images because when people hover over them they become highlighted. This is also usually the page that causes the crash; the rest are relatively simple.

Yes, now I can see the whole thing. Each number is less than 1 kB, so it’s probably not the image size then, just the fact that it needs to load that many image objects at once. Running locally in Python, if this came up the solution would be to replace the images with shape objects and a text overlay, because procedurally rendered shapes take up much less memory than images (an image of a square takes more memory than having PsychoPy just render a square, and likewise for a picture of text versus actual text). That probably also applies here, though I’m not 100% sure: I don’t know exactly how ShapeStim is rendered online, but my guess is that it uses a similar procedural rendering approach with divs or something.

So, if you replaced the number pad images with a bunch of ShapeStim that had this highlighting behavior, and TextStim for the numbers, that might solve the problem. Admittedly, it’s going to be a pain to set up initially, but that’s the only thing I can think of that might reduce the memory load.
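
To make that concrete, here’s a minimal sketch of what a shape-and-text keypad could look like as a local Python script. The layout, colours, and names (`keys`, `labels`, and so on) are all illustrative assumptions rather than anything from the actual study, and the same idea would need translating into the online JS version:

```python
from psychopy import visual, event

win = visual.Window(fullscr=False, units='height')
mouse = event.Mouse(win=win)

# Build the keypad from procedurally rendered rectangles plus text,
# instead of loading 20 separate image files. Positions and sizes
# here are made up for illustration.
labels = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0']
keys = []
for i, label in enumerate(labels):
    col, row = i % 3, i // 3
    pos = (-0.15 + col * 0.15, 0.2 - row * 0.15)
    box = visual.Rect(win, width=0.12, height=0.12, pos=pos,
                      fillColor='grey', lineColor='white')
    num = visual.TextStim(win, text=label, pos=pos, height=0.06)
    keys.append((box, num))

# Naive draw loop: set the highlight colour on every frame,
# mirroring the current "Each frame" opacity code.
while not event.getKeys(['escape']):
    for box, num in keys:
        box.fillColor = 'orange' if box.contains(mouse) else 'grey'
        box.draw()
        num.draw()
    win.flip()

win.close()
```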

I agree that I don’t think this is about simultaneous users per se. When I run workshops I routinely get an entire class to run a study at the same time and it never leads to an issue.

I expect it is the complexity. I wonder, especially, if you have lots of things changing on every screen refresh? See if you can reduce any of those rapid changes. We’ve given people a lot of flexibility by allowing all stimuli to change on every screen refresh (as we do in Python), but I think at the moment this is less robust in JS on older machines.
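
To illustrate one way of cutting down those per-frame changes, here is a variant of the draw loop from the earlier keypad sketch in which fillColor is only set when the hovered key actually changes. It reuses the assumed `keys`, `mouse`, and `win` from that sketch, and again is not the study’s actual code:

```python
# Only touch fillColor when the hovered key changes, rather than
# re-setting it on every single frame.
hovered = None
while not event.getKeys(['escape']):
    # Find which key (if any) the mouse is currently over.
    now_hovered = None
    for box, num in keys:
        if box.contains(mouse):
            now_hovered = box
            break
    # Update stimulus attributes only on a change of hover state.
    if now_hovered is not hovered:
        if hovered is not None:
            hovered.fillColor = 'grey'        # un-highlight the old key
        if now_hovered is not None:
            now_hovered.fillColor = 'orange'  # highlight the new key
        hovered = now_hovered
    for box, num in keys:
        box.draw()
        num.draw()
    win.flip()
```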

Thanks for your advice, both of you.

For the online version I could take out the “Each frame” code that controls the keypad opacity. In total, there are 15 images that are checked every frame.

Also, would replacing the image stims for the keypad with text components render more efficiently?

Thank you 🙂