Hello everyone. Thank you in advance for reading and thinking about my question.
The experiment I want to ask a question about is implemented as a Builder experiment and runs online.
I am working on macOS Monterey, using PsychoPy version v2021.2.3.
Description of the problem:
I created an experiment that involves recruitment via Prolific, then a short questionnaire in Qualtrics, and then redirecting participants from there to my PsychoPy experiment. As part of the PsychoPy experiment, a condition file is selected based on the participant ID assigned by Qualtrics, and this selection works fine.
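For context, a minimal sketch of what such an ID-based selection could look like in the JS code (the file names, the `conditionFiles` list, and the `pickConditionFile` helper are my own placeholders, not the actual experiment code):

```javascript
// Hypothetical sketch: map a numeric participant ID (e.g. passed in the
// URL from Qualtrics) onto one of several condition files.
const conditionFiles = [
  'conditions_A.xlsx',
  'conditions_B.xlsx',
  'conditions_C.xlsx',
];

function pickConditionFile(participantID) {
  // Wrap the ID around the list length so every ID maps to exactly one file
  const index = Number(participantID) % conditionFiles.length;
  return conditionFiles[index];
}
```

This kind of deterministic mapping is exactly what breaks when Qualtrics hands out the same ID twice or a participant quits early, which is the problem described below.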
It is essential to note, however, that some condition files will not result in a full set of experimental results if participants quit the experiment. Also, Qualtrics sometimes seems to assign the same participant ID twice when participants enter the experiment at the same moment. The balancing of experimental conditions in my experiment rests on the assumption that each condition file is used for exactly one participant (I am balancing my experimental conditions across participants), so unused condition files can leave the experiment unbalanced. And here lies my problem.
Rather than using Qualtrics’ assigned IDs to determine the condition file, I was thinking about a solution in which one condition file (from a set of condition files) is assigned randomly at the beginning of every experimental session and is then removed from the list of possible condition files once the experiment has been completed.
Given the tools I know from the Builder, it seems difficult to update such a list of condition files and carry that information over to the next participant (or experimental session).
Within one experiment, this is of course quite straightforward to implement; however, transferring information from one participant/experimental session to the next seems to be more challenging.
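To illustrate what I mean by the within-session part being straightforward, here is a small sketch (all names are my own placeholders). It draws one file at random and removes it from the pool; the hard part, which this sketch does not cover, is persisting the remaining pool across sessions:

```javascript
// Hypothetical within-session sketch: draw one condition file at random
// and remove it from the pool so it cannot be drawn again in this session.
let remainingFiles = ['cond_01.xlsx', 'cond_02.xlsx', 'cond_03.xlsx'];

function drawConditionFile(pool) {
  const i = Math.floor(Math.random() * pool.length);
  // splice removes the chosen entry from the array and returns it
  return pool.splice(i, 1)[0];
}
```

Across sessions, `remainingFiles` would have to live somewhere outside the experiment (which is exactly what my question is about).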
Therefore, I wanted to ask this question here. I am thankful for your ideas and would very much appreciate any thoughts on how this could be implemented in a Builder experiment.
Thank you very much for this helpful tip. I think this is a very good solution.
However, I would prefer to implement a solution directly in my JS script rather than relying on an additional external instance. I see a real advantage in implementing it myself: it makes the procedure more transparent to me, and I have more control over how IDs are assigned, what happens with participants who quit halfway through the experiment, and so on.
Thank you! I appreciate your helpful comment and the reference to your little procedure using Pavlovia’s “shelf”. My experiment will not use Pavlovia’s “shelf”, so this unfortunately won’t be an option for me. It seems I will have to find a solution built around separate experiments, as, according to your post and wakecarter’s previous comment, there appears to be no way to implement this directly in the experimental code.
Sorry, that was just a typo; I wanted to say that I am not planning on using the “shelf” in general. However, it seems to be the only existing solution for transferring information from one participant to the next, so I will look into it a bit more. Many thanks in any case :-).