I wasn’t able to find comprehensive guidance elsewhere for the problem of SONA recruitment, counterbalanced list assignment, and SONA credit granting, so I’m posting my current solution in case it helps others.
Description of the problem:
We typically design psychology experiments with a set number of participants in mind, creating n counterbalanced lists for m*n subjects, and we then try to run an equal number of participants on each list. When a participant fails for some reason (they don’t complete the task, or they complete it in an invalid manner), we try to run a replacement on that same list. Unfortunately, online research methods aren’t currently set up to do such counterbalanced assignment, let alone the replacement, so we need some way to approximate it.
My current solution
In a nutshell, I’ve set up SONA to link to (and pass its subject ID through) an external php script (https://moryscarter.com/vespr/pavlovia.php; thanks!) that simply increments a participant number assignment when redirecting to Pavlovia. Then my Psychopy/Pavlovia script uses the php participant number to select a corresponding counterbalanced list, and after the experiment it uses the passed-through SONA subject ID to redirect the participant back to SONA for automatic credit assignment. Replacement is accomplished in Psychopy by explicitly creating an array of the names of counterbalanced lists for Psychopy to cycle through based on the php participant number; if you start with an array of {1,2,3,4,5,6} and only need to re-run lists 1 and 4, then it’s not terribly onerous to just (manually) reduce that array to {1,4} so Psychopy assigns future participants to just those lists. It would of course be nicer to automatically identify lists to re-run, but I don’t think that’s currently possible without setting up your own php server.
So it has three parts and four steps:
- SONA: Recruit participants and link them to Morys-Carter’s php redirect (instead of linking to Pavlovia directly as SONA’s guidance suggests), including the SONA subject code (%SURVEY_CODE%) in the 'id' field.
- Morys-Carter’s php redirect (https://moryscarter.com/vespr/pavlovia.php): no configuration necessary, beyond a properly formed php call. Though it could be better documented, I think this is actually how he intends people to use the 'id' field in his php query.
- Pavlovia/Psychopy:
a. Give your experiment an expInfo['id'] field so it can accept the SONA id in the link from the php redirect, and set your experiment’s online options to use that (instead of expInfo['participant']) in your ‘Completion URL’ option.
b. In a code component at or near the beginning of your experiment, use your existing expInfo['participant'] field (set sequentially by Morys-Carter’s php redirect) to assign the participant to a list for which you still need data. This is most easily done by creating an array that enumerates all of the lists when you first deploy the experiment, and then manually removing entries from that array as lists are run successfully. That is, instead of directly transforming expInfo['participant'] into a filename (as I had done in the lab, e.g. myItemList = 'trialOrders/ListNumber' + parseInt(expInfo['participant']) + '.csv';), you index into that array:
var arrayOfListsToRun = [1, 2, 4, 6, 53];
var listNumberToRun = arrayOfListsToRun[(parseInt(expInfo['participant']) - 1) % arrayOfListsToRun.length];
myItemList = 'trialOrders/ListNumber' + listNumberToRun + '.csv';
- SONA: Should automatically grant credit when participants return via the ‘Completion URL’ after completing the experiment.
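For concreteness, here is roughly what the two URLs look like. All of the specifics below are placeholders (the Pavlovia folder/experiment names, the SONA experiment_id and credit_token, and your school’s SONA domain are made up), so check them against Morys-Carter’s page and the study URLs SONA shows you:

```
# Study URL entered in SONA (step 1): send participants through the php
# redirect, passing the SONA code in the 'id' field:
https://moryscarter.com/vespr/pavlovia.php?folder=yourPavloviaUsername&experiment=yourExperimentName&id=%SURVEY_CODE%

# 'Completion URL' in Psychopy's online settings (step 3a): return the
# SONA code so credit is granted automatically (IDs here are invented):
"https://yourschool.sona-systems.com/webstudy_credit.aspx?experiment_id=123&credit_token=abc4def&survey_code=" + expInfo['id']
```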
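The list-assignment logic in step 3b can be checked in isolation before deploying. Here is a runnable sketch; the helper name assignList is my own invention (in Psychopy the same lines live directly in a code component):

```javascript
// Map a sequential participant number onto whichever counterbalanced
// lists still need data, cycling through them round-robin.
// participantNumber: the 1-based number issued by the php redirect.
// listsToRun: the (manually maintained) array of lists still needing data.
function assignList(participantNumber, listsToRun) {
  var listNumberToRun =
    listsToRun[(parseInt(participantNumber, 10) - 1) % listsToRun.length];
  return 'trialOrders/ListNumber' + listNumberToRun + '.csv';
}

// With all six lists pending, participants cycle through 1..6 and wrap:
console.log(assignList(1, [1, 2, 3, 4, 5, 6])); // trialOrders/ListNumber1.csv
console.log(assignList(7, [1, 2, 3, 4, 5, 6])); // trialOrders/ListNumber1.csv
// After trimming the array to the lists needing replacements, new
// participants alternate between just those lists:
console.log(assignList(1, [1, 4])); // trialOrders/ListNumber1.csv
console.log(assignList(2, [1, 4])); // trialOrders/ListNumber4.csv
```

Note the `- 1` before the modulo: it makes the first participant land on the first entry of the array rather than the second.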
I hope this helps others, and please post below if you can think of ways to improve it.