Severe memory leak issue for all browsers, task crashes for the majority of participants

Hi @YT_HAN, sorry, I just tagged you in to keep you informed while I prepare a full update, which should come in the morning. Thanks again for your patience, s.

Hi @YT_HAN, just to update you on my progress, I intend to implement PixiJS legacy support in the coming days and am hoping that should take care of the Firefox/WebGL issue.

The Chrome and Edge/Windows errors could be anything. It could very well be that the problem lies in custom code added on top of PsychoJS, or in the boilerplate, for example. Would it be possible to try the same study without code components to make sure?

Thanks again for your help tracking down these bugs, s.

Hi @sotiri, thanks for your reply.
1. Firefox/WebGL sounds good, let me know when it's implemented.
2. The unspecified JavaScript error in Chrome is puzzling to me. I don't think it's the custom code, because if it were, why would so many people finish without problems? I can't reproduce the issue on my own machines normally, only when I disable hardware acceleration. Please see the error messages from the console.

@YT_HAN OK I hear you, on it, thanks, x

Hi @sotiri, just to follow up: I ran another set this week, the same story rating task with a different set of stories, and there are some issues:

1. The same unspecified JavaScript error in Chrome. What's strange is that one subject had this issue this time but not the last time he did the task, and he said the computer/browser was exactly the same.
2. Several people have told me that button clicks became unresponsive at various stages of the task. One cause is exiting full screen, which I can replicate on my own computer.
3. Some people report that the task froze, usually towards the end (average duration for the task is 20-30 mins), and when looking at their data, there are multiple trials with empty responses. Has there been any progress on the memory leak? I am about to collect data with videos, which take more memory than stories, so I would like to know. I have already set logging to 'error' only.

thanks!

Dear @YT_HAN, I believe all three errors relate to the fact that the latest version of PixiJS is WebGL-only. I have added a PR to hopefully have that addressed in the next release of PsychoPy. For the time being, I have created a merge request to bring the relevant edit into your project. Please let me know if any issues remain, thank you, s.

As mentioned in this thread, please add the following line at the top of your main script to disable browser-side logging:

log4javascript.setEnabled(false);

Hi @sotiri, thanks for the reply! Just to check: the only change is to edit the index.html file to load a different version of pixi.js (and this needs to be done manually every time), right? And this version supports browsers both with and without WebGL?

As for the other suggestion on logging, as I mentioned, I set the logging level to ERROR in the Experiment Settings box. This normally just generates empty log files for me, so I assume that's not memory-consuming? Is the line you provided above specific to turning off PsychoJS logging, or does it affect all console.log calls?

Also, just curious, is there any kind of short PixiJS demo that participants could use to test whether the task would work in their browser? This may help screen out subjects with incompatible browsers and minimize trouble on both sides.

Hi @YT_HAN,

No worries, fortunately the relevant code change has already made it to master. I believe that if you are on the development version of PsychoPy and leave the target field blank in settings when exporting HTML, you should get the updated 'index.html', which links in the WebGL-with-legacy-fallback PixiJS package.

Otherwise, yes, you would need to manually edit 'index.html', or somehow make 'index.html' git-exempt, either by adding the relevant entry to your '.gitignore' or by calling, e.g., git checkout -- index.html from within the project root before pushing to Pavlovia.
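For example, the '.gitignore' entry would just be the file name (assuming 'index.html' sits at the project root; adjust the path if yours differs):

index.html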

OK, generating empty log files should have no effect on front-end performance during the course of the experiment. And no, that line turns off PsychoJS library logging only; any console.log() calls in your script keep working as expected.
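For illustration, a minimal sketch of the distinction:

log4javascript.setEnabled(false);  // silences PsychoJS built-in browser-side logging
console.log('still prints');       // your own console calls are unaffected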

To test whether a given client has WebGL available, you can try https://get.webgl.org
Also, @thomas_pronk has created several very simple e2e tests you might find useful in this context.
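If you would rather check programmatically, something along these lines (a minimal sketch, not part of PsychoJS) tests whether the browser can create a WebGL context:

// Returns true if a WebGL rendering context can be created.
function hasWebGL() {
  try {
    const canvas = document.createElement('canvas');
    return !!(window.WebGLRenderingContext &&
      (canvas.getContext('webgl') || canvas.getContext('experimental-webgl')));
  } catch (e) {
    return false;
  }
}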

I am confident the proposed merge should wholly address the issues you reported: I tried the experiment on all the browsers you mention, on a Windows virtual machine lacking WebGL, and was getting the same types of error messages, which, as the screenshots below hopefully show, went away once the edit was in place.

[Screenshots: Edge, Firefox, and Chrome consoles, each before and after the edit; the errors disappear once the fix is in place]


Thanks so much for your answer and all the work you put in!! I will implement the changes and report back once I collect a new set of data. I am sure the problems will be gone!!


Please could you clarify.

Do you think that when an unspecified JavaScript error appears on some browsers and not others, it might be fixable by adding log4javascript.setEnabled(false);?

Could this be added to a code_JS component (either Before Experiment or Begin Experiment) or can this only be added to index.html or the js files directly?

Or does experiment/settings/JS_htmlHeader: link to pixi.js-legacy (5.3.3) need to be added at the top of the legacy JS file? Or the HTML file?

Hi @wakecarter,

Yes, that line can be added in a code component, as early in the script as possible, but it only controls browser-side logging. In particular, it will disable all of PsychoJS's built-in logging.

The fix to some, but not all, unspecified JS error cases is an index.html edit that brings in the pixi.js-legacy package instead of the WebGL-only PixiJS; it was recently merged into master via the PR you quote.
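For illustration only, with hypothetical file names (the actual script tag in your generated index.html may differ), the edit amounts to swapping the PixiJS bundle for its legacy counterpart:

<!-- before: WebGL-only build -->
<script src="./lib/pixi.min.js"></script>

<!-- after: legacy build with a non-WebGL fallback renderer -->
<script src="./lib/pixi-legacy.min.js"></script>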

Please let me know if you need further details, x

Hi @sotiri, first of all, happy new year and thanks for all the help last year! I collected one set of video data before Christmas and want to update you.

Some good news first: I believe the browser compatibility issue is gone, as nobody has reported it to me.
The bad news is that the task froze midway for some people. At least 28 people out of ~600 (almost certainly more simply returned the study without telling me) reported that the task froze at some point; some tried several times. From the meta info collected through Qualtrics, this happened to many Chromebook users and some Linux/Windows users. For comparison, 5 people reported that the task froze in the story version. So I am wondering whether you have any insight into why (are Chromebooks generally less powerful?) and how to improve this. I am certainly aware that it will never work 100% of the time for online data collection.

Another, unrelated question: what is the date in the data filename? Is it the participant's local time? I can't match all of them with my local time for sure.

Hey @YT_HAN, happy new year and congrats on running your study. Sorry to hear about it freezing midway through on occasion; that sounds performance-related. We are currently developing end-to-end tests that should help us capture more information about how the library behaves on different devices, and my hope is that in the not too distant future we will be able to refactor some of our code to be more efficient and easier to debug in this respect.

In terms of timestamps for data files, yes, that is a date/time string local to each browser session, based on the following call, which you can try out in the console while the experiment is running to verify:

moment().format('YYYY-MM-DD_HH[h]mm.ss.SSS')
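It produces a string like this (illustrative value), local to the participant's machine:

moment().format('YYYY-MM-DD_HH[h]mm.ss.SSS');  // e.g. '2021-01-04_15h23.07.482'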

x

Yes, those tests sound super helpful. So I assume that, for now, there's no obvious way to further improve the task, correct? I guess I will just ask subjects not to use a Chromebook if possible. Thanks for all the info.

Correct @YT_HAN, I have made a note about Chromebooks and will be giving the matter special attention when our testing workflow is in place. Thanks again for choosing PsychoJS to run your study, it’s always a pleasure answering your queries, x


Hi @sotiri, sorry, one more question: I have just noticed an issue with saving data for my retest trials (which present a subset of the stimuli from the main trials to get an estimate of test-retest reliability). I only need 8 retest trials, so I end the retest loop early once the count reaches 8. To do that, I have the following code in 'End Routine':

if (trialCounter === 8) {
    retest_trials.finished = true;
    psychoJS.experiment.nextEntry();
}

I think the psychoJS.experiment.nextEntry(); line was needed when I first developed the task in order for the data to save correctly, perhaps following instructions from here (I can't remember now): Experiment not saving data Online, generates empty CSV files. However, I just noticed that the loop variable (e.g., in the video task, the name of the clip) for the last/8th retest trial doesn't match what was presented (the first 7 do). I couldn't find the relevant code in the JS file that writes the loop variable into the data files by searching for the addData function. Am I correct that saving the loop variable is handled in the following function? And is it because I end the loop early that it actually saved the 9th trial into the data? I believe that removing the psychoJS.experiment.nextEntry(); line solves the problem and saves the correct 8th trial.

function endLoopIteration(scheduler, snapshot) {
  // ------Prepare for next entry------
  return function () {
    if (typeof snapshot !== 'undefined') {
      // ------Check if user ended loop early------
      if (snapshot.finished) {
        // Check for and save orphaned data
        if (psychoJS.experiment.isEntryEmpty()) {
          psychoJS.experiment.nextEntry(snapshot);
        }
        scheduler.stop();
      } else {
        const thisTrial = snapshot.getCurrentTrial();
        if (typeof thisTrial === 'undefined' || !('isTrials' in thisTrial) || thisTrial.isTrials) {
          psychoJS.experiment.nextEntry(snapshot);
        }
      }
      return Scheduler.Event.NEXT;
    }
  };
}

Hi @YT_HAN, no worries, could you send me a link to the repo so I can take a closer look? Thanks, x

(The code used to end the retest loop early was also used in the story version.) You should have access.

Hi @YT_HAN, thanks for the link. Yes, I can see how calling psychoJS.experiment.nextEntry(); might be interfering in this case, and I believe it would be OK to remove :blush:
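For reference, the 'End Routine' code then reduces to the following; as the endLoopIteration() function quoted above shows, PsychoJS saves the current entry itself when a loop ends early, so no manual call is needed:

if (trialCounter === 8) {
    // Ending the loop here; endLoopIteration() handles saving this trial's entry.
    retest_trials.finished = true;
}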
