TypeError: cannot read getLastResetTime of undefined

URL of experiment: Haley Kragness / Puppy In The Park · GitLab

Description of the problem:
I just had a participant send me the following error, which I have never encountered:

TypeError: cannot read getLastResetTime of undefined

Googling the error didn’t come up with anything useful. Should I be worried, or could this just be idiosyncratic to this participant? They reported using Chrome.

The issue is probably not with getLastResetTime itself, but with the object you called getLastResetTime on being undefined.
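
In other words, some object was undefined at the moment getLastResetTime() was called on it; the method name is not the culprit. A minimal JavaScript illustration (the variable names here are hypothetical, not taken from the actual experiment code):

```javascript
// Hypothetical look-up that silently fails and yields `undefined`:
const clocks = {};                        // imagine a container of clocks
const trialClock = clocks['trialClock'];  // no such key -> undefined

// Calling the method on `undefined` reproduces the reported error:
// trialClock.getLastResetTime();  // TypeError: cannot read ... of undefined

// A guard (or optional chaining) avoids the crash:
const lastReset = trialClock ? trialClock.getLastResetTime() : null;
console.log(lastReset);                   // null, instead of a crash
```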

I see what you mean. I’m a little lost about how to narrow down why this might have come up all of a sudden. We’ve run the experiment perhaps over 100 times without ever getting this specific error. Maybe something weird happened for this specific person.

If it’s that rare, it could easily be due to a brief memory glitch rather than something solvable.

Right, yes, that makes sense. One thing I’ve really struggled with in Pavlovia is that everything can work fine in Chrome on my end, but then participants report all sorts of random errors that are different for everyone (e.g. the error in my post above, “it stops at the third trial and won’t continue”, other random “red-box” errors like this one for a particular stimulus that won’t load just for them, etc.). I would say this happens for perhaps 10% of people who otherwise appear to be using Chrome and working fine. It makes it really difficult to troubleshoot when the errors aren’t replicable. :confused: I’m not sure whether this is a Pavlovia issue or if I would have the same problems with jsPsych etc.
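
One generic way to make these one-off errors easier to diagnose is to capture them on the participant’s machine. The sketch below uses the standard browser error events; it is not a built-in Pavlovia/PsychoJS feature, and where you send the report (your own endpoint, the experiment data, etc.) is left open:

```javascript
// Generic browser pattern: record unhandled errors on the participant's
// side so one-off failures at least leave a trace you can inspect later.
window.addEventListener('error', (event) => {
  const report = {
    message: event.message,          // e.g. the TypeError above
    source: event.filename,
    line: event.lineno,
    userAgent: navigator.userAgent,  // which browser/device it was
    time: new Date().toISOString(),
  };
  // Where to send it is up to you: POST it to your own endpoint,
  // or attach it to the experiment data. Here we just log it.
  console.warn('Participant-side error captured:', report);
});

// Failed promises (e.g. a stimulus that never loads) surface separately:
window.addEventListener('unhandledrejection', (event) => {
  console.warn('Unhandled promise rejection:', event.reason);
});
```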

Hey! These kinds of problems often originate from a combination of two factors:

  1. The bewildering variety of devices and browsers that PsychoJS (or any other web application) runs on.
  2. The fact that our experiments are also very varied and often involve a bit of programming by the researchers themselves.

Big companies throw millions at solving that kind of stuff; the teams that just test their software are many times bigger than our whole development team.

TL;DR: you’ll likely run into this in any system in our field. However, we seem to have the most advanced testing pipeline (and the biggest user base), so my guess is we’ll score a bit better than most.

Yes, this is very understandable. And I’m sure browsers get updated and change all the time. One of the reasons I’m particularly sensitive to this is that we work with a lot of child participants, and once we get them hyped up to play the “experiment game” they’re like, CRUSHED if they don’t get to play. (cute but sad haha). Is there anything we could try to do to mitigate this? For example, do we know if audio files are most likely to cause incompatibilities? Or videos in particular? We try to use lots of both to make the “games” engaging, but it’s sometimes purely aesthetic and we can definitely modify things in many cases.

I feel you. I’ve got two suggestions:

  • Yep, be careful with your formats. However, it seems you’re using the right ones already as far as I can tell. Here is an overview: Media formats suitable for online studies — PsychoPy v2021.1. For what it’s worth, a quick in-browser capability check is sketched below this list.
  • Have you got any control over the devices they use? If so, stick with the same browser (Chrome). Or… given that they are enthusiastic, ask them to try it out in Chrome (instead of whatever browser they are using).
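
The capability check mentioned above, as a rough sketch: it relies on the standard HTMLMediaElement canPlayType() API (not a PsychoJS feature), which only returns a hint (‘’, ‘maybe’, or ‘probably’), and the MIME types listed are common examples that may not match your exact stimuli:

```javascript
// Probe which audio/video formats this browser claims to support.
// canPlayType() returns '' (no), 'maybe', or 'probably' - a hint only.
function checkMediaSupport() {
  const audio = document.createElement('audio');
  const video = document.createElement('video');
  return {
    mp3:  audio.canPlayType('audio/mpeg'),
    wav:  audio.canPlayType('audio/wav'),
    mp4:  video.canPlayType('video/mp4; codecs="avc1.42E01E"'),
    webm: video.canPlayType('video/webm; codecs="vp8, vorbis"'),
  };
}

// e.g. warn the participant (or log it with their data) if a format
// you rely on comes back as ''.
console.log(checkMediaSupport());
```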

Would that help?

Thanks Thomas! We typically tell them in advance that using Chrome on a desktop/laptop works best, but even when they do, perhaps 10% of them still have some issue pop up. Sometimes refreshing the page once or twice fixes it, other times not. I’m sure various things, such as the quality of their connection or whether they’re using a VPN, could also interfere. At least it sounds like we’re doing all the right things :slight_smile:

It sounds like it indeed! Sorry I can’t be of more help. Meanwhile we’re expanding our testing suite, so stability issues should become less and less frequent over time.