Some of my participants from Prolific got the following error message upon completing the study (the study consists of a PsychoPy experiment followed by a Pavlovia survey):
“Unfortunately we encountered the following error: when terminating the experiment
when uploading participant’s results for experiment: rz/study1_v2
Data for this experiment have been saved more often than allowed by the throttling period
Try to run the experiment again. If the error persists, contact the experiment designer.”
I suspect that this is because I turned on “save periodically”. However, I had to turn it on because the counterbalance feature was locked once in a while (the same issue reported here: Counterbalancing error message - #7 by akem10; Counterbalance locked; Counterbalancing is locked - #2 by Becca). I couldn’t figure out how to completely avoid that error (I’ve tried resetting the reserved spots and not using 1 slot x 100 reps, but I still ran into it), so I turned on periodic saving so that even when participants hit the counterbalance error and couldn’t complete the study, their responses would still be recorded.
So far, it looks like participants’ data were saved on Pavlovia, but their PsychoPy responses and Pavlovia survey responses were saved separately (i.e. the survey data were not included in the dataset exported from the experiment page), so it’s hard to match them up.
Any advice for addressing this data saving and the counterbalance error?
Sorry to hear you’re encountering this error. Could I check: on Prolific, do you currently allow several participants to join the study at once? (This is a setting that can be changed there.) Limiting this can help prevent multiple participants accessing the shelf at the same time, which I believe could be a cause of the error.
Yes, I am allowing an unlimited number of participants to join the study at once (although I only open 15-30 slots on Prolific each time). I’m not able to change that setting now since the study is already running, but I will try that in the future. Thank you.
A related question: do you know how to match participants’ PsychoPy experiment responses to their Pavlovia survey responses? I was trying to see whether any variables in the system are shared across the two platforms so that I can match their data. For example, I tried using “Response Id” from the survey, but it’s not recorded in PsychoPy.
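For the records that did end up with a shared Prolific ID in both exports, one way to line the two files up is a simple join on that column. A minimal sketch in plain Python, assuming the exports are read as dict rows (e.g. via `csv.DictReader`) and that the ID column names (`Prolific ID` in the experiment data, `prolific_id` in the survey data) are illustrative, not this study’s actual headers:

```python
def merge_on_id(exp_rows, survey_rows,
                exp_key="Prolific ID", survey_key="prolific_id"):
    """Join two lists of dict rows on a shared Prolific ID column.

    Returns (merged_rows, unmatched_experiment_rows); rows with a blank
    or unknown ID land in the unmatched list for manual inspection.
    """
    survey_by_id = {
        row.get(survey_key, "").strip(): row
        for row in survey_rows
        if row.get(survey_key, "").strip()
    }
    merged, unmatched = [], []
    for row in exp_rows:
        pid = row.get(exp_key, "").strip()
        if pid in survey_by_id:
            # combine the two records; survey columns win on name clashes
            merged.append({**row, **survey_by_id[pid]})
        else:
            unmatched.append(row)
    return merged, unmatched
```

This won’t help with rows where the ID was left empty, but it at least isolates them so only those need manual matching.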
I normally use embedded Pavlovia Surveys. However, if you are daisy chaining, then you need to add participant (or some other information) to the link.
How are your participants getting back to Prolific to get paid? Is that based on an anonymous link at the end of the survey?
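If you are daisy chaining, the usual pattern (a sketch of the general approach, not verified against this study’s actual URLs) is to carry the ID along as a URL query parameter at each hop. Prolific can supply `PROLIFIC_PID` as a URL parameter, Qualtrics can capture it as embedded data, and the end-of-survey redirect can then pass it on to Pavlovia, where PsychoPy pre-fills any expInfo field whose name matches a query parameter:

```
https://run.pavlovia.org/rz/study1_v2/?participant=${e://Field/PROLIFIC_PID}
```

Here `${e://Field/PROLIFIC_PID}` is Qualtrics piped-text syntax for an embedded data field; with this in place the ID ends up in the PsychoPy data file without participants having to type it.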
Thanks for sharing your blog post. Based on it, I should have had Qualtrics send people’s Prolific IDs to Pavlovia.
To give more background about my study:
My Prolific participants first complete a survey on Qualtrics and are then directed to a PsychoPy experiment (hosted on Pavlovia). The Pavlovia survey is embedded as the last part of the experiment (i.e. the last routine in Builder). Upon completing the survey, they are directed back to Prolific through the completion link.
In this workflow, Qualtrics automatically records their Prolific ID, and I also require people to fill in their Prolific ID on the Pavlovia survey. For the PsychoPy portion, I added a field to the dialog box (e.g., expInfo['Prolific ID']) at the start of the experiment for participants to fill in. But I wasn’t sure how to make it mandatory, so sometimes they would proceed with an empty entry. This was not a problem when Pavlovia survey responses were saved with the PsychoPy responses. But for participants who ran into the error I described in my main post (“…Data for this experiment have been saved more often than allowed by the throttling period…”), their survey responses were not saved with their PsychoPy responses. So for those who didn’t enter their ID in the PsychoPy portion, I’m not sure how to find their corresponding Pavlovia and Qualtrics survey data.
Yes, it includes Qualtrics (although it’s not directly in the builder, as participants are directed to Qualtrics first).
On a separate note, is there a place to formally report these errors so the development team can look into them, or other users can be warned about potential pitfalls? I ended up losing about a fourth of my participant data (particularly due to the counterbalance lock issue) while still having to pay them in full, which was quite frustrating. I really appreciate the information available on this forum, and it’s been helpful when troubleshooting after I ran into those errors, but I wish I had known about these issues earlier so that I could have taken precautions and minimized wasted resources (especially since this isn’t something I can fully test on my own without actually launching the study and recruiting a substantial number of participants). It’s also possible that I overlooked something important in the documentation or tutorials. Anyway, thanks again for your help and for all the knowledge shared here!
If the participants go to Qualtrics first then I am more surprised by the Pavlovia Shelf issue, since they should be spread out by the time they get to Pavlovia.
You can make an entry mandatory by adding a * or |req at the end of the field name, e.g. Prolific ID*
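For reference, that marker ends up in the participant-info dict that Builder generates, roughly like this (a sketch with illustrative field names, not this study’s actual settings):

```python
# Fields for the startup dialog, as set in Experiment Settings > Basic.
# Names and defaults here are hypothetical examples.
expInfo = {
    'participant': '',
    'Prolific ID*': '',  # trailing * marks the field as required,
                         # so the dialog will not accept an empty entry
    'session': '001',
}
```

The `|req` suffix works the same way as the trailing `*`.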
While bugs can be formally reported as issues on GitHub, I am not sure if that makes them any more visible than posting here.
Perhaps the best option would be to open a Github issue and link to it from here. I am certainly more likely to see it here than on Github, but the opposite might be true for some of the developers.
We use a system called ClickUp to keep track of bugs and features. I have added a link to this thread to the ClickUp entry called “Clash between periodic saving and psychoJS.quit”, which I created in July 2024 based on this thread: Error message when quitting the experiment under certain conditions. Testing with large numbers of simultaneous participants is particularly difficult.
PsychoPy is maintained by a small team, which is why we are able to keep our prices lower than those of similar products (free, in the case of local experiments). Unfortunately, this currently means that we only have one person able to work on Pavlovia functionality (including Pavlovia Surveys), which is why bug fixes and developments progress more slowly than we would like.
What version of PsychoPy are you using?
Has periodic saving allowed you to recover any data? As far as I can tell it is causing the data loss, and I do not use it in any of my own experiments.
Thanks for the tips on how to make entries mandatory in expInfo! I also really appreciate the detailed explanation and for adding this thread to the list of features to work on.
I’m using PsychoPy version 2025.1.1 (Mac).
Regarding periodic saving, I was able to manually recover a small amount of data after enabling this feature, but most of it was not recoverable.