Using shelf to counterbalance

URL of experiment: pilot [PsychoPy]

Description of the problem: Yesterday I tried to run a study using the shelf to counterbalance ~600 participants recruited via Prolific. They were to be assigned to one of 12 groups.

I used the Counterbalance routine linked to a shelf record.

In the first instance I recruited 12 participants. Both the shelf and the data files showed one participant in each group.

I opened up the study for the remaining participants and something clearly didn't work:

I believe I followed the guidance provided here:

https://www.psychopy.org/online/shelf.html#counterbalanceshelf

I can't find any obvious reason for this failure and I would obviously like to ensure it doesn't happen again.

Many thanks

From looking at your experiment’s code, it looks like Slots per group is set to 1, but the target on Pavlovia is 50, so I suspect this weird behaviour may be to do with the mismatch there. Try setting it to 50 in PsychoPy too and then running yourself a dozen times to see if the proportions line up.

Also, in the data files for the later participants, what values do you get in the columns law_groups.remaining and law_groups.group? Do they line up with what Pavlovia is reporting? Another possibility is that participants were assigned the correct groups but Pavlovia is reporting their assignments wrong.
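If it helps to check this in bulk rather than file by file, here is a hypothetical Python helper that tallies the law_groups.group value across a folder of PsychoPy CSV data files so you can compare the totals against the shelf record. The column name and the data folder are assumptions based on this thread; adjust them to match your experiment.

```python
# Hypothetical helper: tally group assignments across PsychoPy data files.
# Assumes each participant's CSV has a 'law_groups.group' column.
import csv
import glob
from collections import Counter

def tally_groups(paths, column="law_groups.group"):
    """Count how many data files report each value in `column`."""
    counts = Counter()
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                value = row.get(column, "")
                if value != "":      # skip blank cells
                    counts[value] += 1
                    break            # one assignment per participant file
    return counts

# e.g. print(tally_groups(glob.glob("data/*.csv")))
```

If the resulting counts match the shelf record, the imbalance happened at assignment time rather than in Pavlovia's reporting.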

On the version saved on my device, the Slots per group is set to 1 and the repeats to 50.

This appears to match my shelf record:

Does this suggest that this wasn't synced?

The law_groups.group column seems to report numbers in line with the shelf record. All of the files I've checked seem to be reporting this in a sensible way, and I have some code that reads that column and partially sets the filename based on group allocation. The file names appear to be as imbalanced as the shelf record.

law_groups.remaining, however, is blank. I don't know what to read into this. I did set up PsychoPy to produce a blank data output with empty columns (including law_groups.remaining). All the other columns seem to have been overwritten properly; this column remains blank, seemingly uniformly.

Are there any other thoughts on this?

Understanding what went wrong is more or less fundamental to this study and to the use of this platform.

I had a similar issue with the new counterbalance function for the shelf in March (allocation to four groups resulted in almost all participants being put on only two of them) so at the moment I’m still using my VESPR Study Portal. I’m not yet sure at what point the new counterbalance goes wrong.

Same for me. It seemed to work at the beginning of data collection, however. Only during the last few weeks have participants been assigned to no group other than the first and the last one. Have any changes been made to the counterbalance function 3-5 weeks ago?

First and last!

I think that’s a clue. It was first and last for me back in March. That will probably be why it’s fine with 2 groups.

I’ll try to find out if there has been any progress on this.


I had a closer look at the timestamps of the datasets: counterbalancing in our experiment started to be "biased" around May 22/23. That's when the last participants were assigned to any bins other than the first and the last.
Hope this helps!

I’ve asked @apitiot about this today.

Please could you show a screenshot of your counterbalance routine and the shelf settings?



Thanks!

May I ask if there has been any progress on this issue? I understand that this problem might not be easy to solve! We just really need to continue data collection, so it would be great to know whether a solution within the next few days is likely or whether I need to find a workaround for our experiment.
Thanks again!

I believe that @apitiot is looking into it this week, but I don’t know if he’ll be successful. Personally I would recommend creating your own counterbalance code using the shelf or switching to the VESPR Study Portal.

Here’s some code I used for one of my experiments where I wanted to counterbalance separately for men and non-men:

Begin Routine JS

existingGroups = await psychoJS.shelf.getListValue({key: ["strata"]});

Begin Routine Auto

if expInfo['gender'] == '2':
    if existingGroups[1] > existingGroups[0] + 1:
        expInfo['group'] = '1'
        existingGroups[0] += 1
    elif existingGroups[0] > existingGroups[1] + 1:
        expInfo['group'] = '2'
        existingGroups[1] += 1
    else:
        expInfo['group'] = str(randint(2)+1)
        existingGroups[int(expInfo['group'])-1] += 1
        print('randMan',expInfo['group'],existingGroups)
else:
    if existingGroups[3] > existingGroups[2] + 1:
        expInfo['group'] = '1'
        existingGroups[2] += 1
    elif existingGroups[2] > existingGroups[3] + 1:
        expInfo['group'] = '2'
        existingGroups[3] += 1
    else:
        expInfo['group'] = str(randint(2)+1)
        existingGroups[int(expInfo['group'])+1] += 1
        print('randNonman',expInfo['group'],existingGroups)

End Experiment JS

await psychoJS.shelf.setListValue({key: ["strata"], value: existingGroups});

For this code my shelf entry was a list called strata which started as [0,0,0,0] for four groups.
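To sanity-check that branching logic offline, here is a minimal local simulation of the same Begin Routine code, with the shelf read/write replaced by a plain Python list. The function name, the is_man flag, and the reading that gender '2' maps to the first stratum are my own assumptions for illustration, not part of PsychoPy.

```python
# Local sketch of the strata counterbalancing above, without the shelf.
import random

def assign(strata, is_man):
    """Assign group '1' or '2' within a stratum, keeping counts close.

    Indices 0-1 hold one stratum's groups, 2-3 the other's,
    mirroring the [0,0,0,0] shelf record described above.
    """
    base = 0 if is_man else 2
    a, b = strata[base], strata[base + 1]
    if b > a + 1:            # group 2 is ahead by 2: force group 1
        group = 1
    elif a > b + 1:          # group 1 is ahead by 2: force group 2
        group = 2
    else:                    # counts within 1 of each other: random pick
        group = random.randint(1, 2)
    strata[base + group - 1] += 1
    return str(group)

strata = [0, 0, 0, 0]
for _ in range(200):
    assign(strata, is_man=random.random() < 0.5)
# Within each stratum the two counts never drift apart by more than 2.
```

Run sequentially like this, the drift bound holds exactly; online, two participants reading the shelf at the same moment can still both see the same counts, which is the usual caveat with read-modify-write shelf code.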

Here’s an idea for how to assign to 20 groups: randomly select one of the groups which currently has average or below-average participation.

meanN = sum(existingGroups)/len(existingGroups)
chooseFrom = []
for Idx in range(len(existingGroups)):
    if existingGroups[Idx] <= meanN:
        chooseFrom.append(Idx)
shuffle(chooseFrom)
thisGroup = chooseFrom[0]
existingGroups[thisGroup] += 1
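As a self-contained sketch of that idea (pure Python, names mine): because groups above the running mean are never eligible until the others catch up, sequential assignment behaves like a shuffled round robin, and when the number of participants is an exact multiple of the number of groups the counts come out exactly equal.

```python
# Sketch of mean-based group assignment for 20 groups.
import random

def assign_group(existing_groups):
    """Pick a random group among those at or below the mean count so far."""
    mean_n = sum(existing_groups) / len(existing_groups)
    choose_from = [i for i, n in enumerate(existing_groups) if n <= mean_n]
    this_group = random.choice(choose_from)
    existing_groups[this_group] += 1
    return this_group

groups = [0] * 20
for _ in range(600):         # 600 participants into 20 groups
    assign_group(groups)
# Run sequentially, every group ends with exactly 600 / 20 = 30.
```

On the shelf, with many participants reserving slots concurrently, the guarantee is weaker, but the scheme still pulls lagging groups up on every read.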

I can confirm that I am looking into the matter.
I will have an answer for you in the coming couple of days.
My apologies for the delay!
Best wishes,

Alain

Dear @TomKelly , dear @AKZ ,

I have found the cause of the problem and have a first solution (that is being improved).
The issue has to do with the weighted approach I am using to assign groups to participants not dealing well with the combination of (a) a large number of participants starting all at once (i.e. reserving many slots) with (b) slot values being much smaller than the number of participants (i.e. 1 in your case, with 50 repeats, instead of say 10 repeats of 5).
This is rather tricky to work out. I am on it.
I would suggest you hold off a few more days before starting again. I will have a better approach, together with settings that you can tweak and adapt to your experiment, by early next week.
Cheers,

Alain