I have a set of 300 images in total that I want to be rated by participants in my experiment. To keep the duration of the experiment manageable, I decided that each participant should rate only 100 of those.
Before rating the images, participants will be assigned to one of 13 groups according to their score in a screening questionnaire, using the Pavlovia Shelf. For each of the groups I set a participation limit of 18.
Any suggestions on how I can randomize the 300 pictures within each of those groups, and how I can make sure all pictures are rated equally often within each group?
Thank you for the answer and suggestion. I am not sure I understand how Selected rows works, but would this not lead to “bundles” of stimuli that always stay the same? So it would not be fully randomized?
How about a dictionary on the shelf containing a list of 300 zeros for each group name?
At the start of the experiment, download the shelf entry and shuffle a list of 300 indices.
Go through the shuffled list, add the first 100 indices whose value is below 6 to a second list, and add 1 to those entries in the shelf dictionary. Use this second list as the useRows variable for Selected rows.
Update the shelf entry with the updated dictionary.
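The steps above can be sketched in plain Python. The shelf download/upload itself is omitted here, and `select_rows` is a hypothetical helper name; `counts` stands in for the dictionary entry you would fetch from the shelf at the start of the experiment:

```python
import random

def select_rows(counts, n_select=100, cap=6):
    """Pick n_select indices whose count is still below cap, in shuffled
    order, and increment their counts. With 18 participants per group
    rating 100 of 300 images each, every image should end up at 6."""
    indices = list(range(len(counts)))
    random.shuffle(indices)
    selected = []
    for i in indices:
        if counts[i] < cap:
            selected.append(i)
            counts[i] += 1
            if len(selected) == n_select:
                break
    return selected

# counts would normally be downloaded from the shelf at experiment start
counts = [0] * 300
use_rows = select_rows(counts)
# use_rows now holds 100 unique row indices for Selected rows;
# upload the updated counts back to the shelf afterwards.
```

Because the indices are shuffled first, each participant gets a different random subset, while the cap keeps presentation counts roughly even across the group.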
I think this would be better than the method I suggest in my Presentation Cap online demo (see PsychoPy Online Demos), because that one accesses the shelf too often. It would be better to present the pictures slightly unevenly by accident than to have a slower presentation on every trial.
Do you mean a separate dictionary on the shelf for each of the 13 groups?
With each dictionary created as {“image001”: 0, “image002”: 0, …, “image300”: 0}?
So if a participant scores 5 in the screening questionnaire, he or she gets assigned to the “score5” group, and I add a code component that loads only the “score5_randomization” dictionary from the shelf, while those of the other groups remain unchanged?
Could I then update the shelf entry with the updated dictionary only at the very end of my experiment? That way I could make sure the “score5_randomization” dictionary only gets updated when a participant has completed the full study, and not by those who dropped out mid-experiment.
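For illustration, this is one way the 13 per-group dictionaries described above could be constructed. The `group_dicts` name and the score range 1–13 are assumptions for this sketch only, and the actual shelf upload/download calls are omitted:

```python
# One dictionary per group, named after the screening score.
# The score range 1..13 is assumed here purely for illustration.
group_dicts = {
    f"score{s}_randomization": {f"image{n:03d}": 0 for n in range(1, 301)}
    for s in range(1, 14)
}

# A participant who scores 5 would then only touch this one entry:
counts = group_dicts["score5_randomization"]
# ...select 100 image names with a count below 6, increment them locally,
# and push the updated dictionary back to the shelf only at the very end
# of the experiment, so dropouts never alter the counts.
```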
Yes, in its simplest form the content of each list would be the same for each group. You have to ask yourself whether complete randomisation across all participants, with equal presentation of each stimulus, is worth the effort and whether it will properly control for confounding variables. Instead of a randomisation with an even distribution of stimuli, you could construct three fixed lists in which you control for the confounding variables.
Never forget that a random selection of stimuli could result in a list with undesirable properties. Randomness is not always the answer.
Having said that, @wakecarter suggests a way worth trying.