I have very specific randomization requirements, and for this reason the “random” trial option for the conditions file will not work for me. I created a script that generates the randomization that I need and writes it to a conditions excel file, allowing me to use the “sequential” option. However, I need this randomization to be different for every participant that runs my experiment on Pavlovia. Is it possible for me to update the conditions.xlsx file every time someone runs my experiment, by using a JS function in the main script? I am really confused by this idea, because the script would need to push/update the excel file in my repository automatically, without me doing it. Is this possible?
This may not be the most practical approach, but here is one solution (with a large sample it could become unwieldy).
In your Experiment Info, add the field “ID”. This will be a unique number that you give your participant, and it will index which conditions file is run (so it will be a value ranging from 0 to your total sample size minus 1; remember to start at participant 0, because of Python indexing!).
In the first routine of your experiment, add a code component. In its “Begin Experiment” tab, add something like:
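A minimal sketch of what that code component might contain, assuming the conditions files are named conditions0.xlsx, conditions1.xlsx, and so on (in a real Builder experiment, expInfo comes from the startup dialog; it is stubbed here so the snippet runs stand-alone):

```python
# Stub for the Experiment Info dialog; in Builder, expInfo already
# exists and contains whatever fields you defined, including "ID".
expInfo = {'ID': '3'}  # hypothetical number the experimenter gave this participant

# Build the conditions-file name indexed by the participant's ID
# (assumes files named conditions0.xlsx ... conditionsN.xlsx exist).
conditionsFile = "conditions" + str(expInfo['ID']) + ".xlsx"
print(conditionsFile)  # conditions3.xlsx
```

You would then put $conditionsFile in the loop's Conditions field instead of a hard-coded file name.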
As in, automatically counting how many participants have already completed your experiment and then using that count to index the correct conditions file? I am not sure that functionality is supported yet.
But it might be possible to create a .csv output that holds a counter which is incremented each time a participant completes the experiment. That said, it could become tricky if multiple participants complete the study in parallel.
In the solution above I meant that “ID” would be a number the experimenter explicitly gives to the participant, so it is not automatic.
You can’t output a custom csv file online, so you would need to assign participants via a website that forwards to Pavlovia. I tried via the quota function in Qualtrics, but it failed and assigned the same participant number to participants whose Qualtrics sessions overlapped.
Would people like me to write a webpage which assigns an ID and then forwards to the correct Pavlovia experiment?
Thanks. I think a counter of completed experiments based on the completed csvs wouldn't do the trick, because very likely many observers will start the experiment before one is completed. The website you are suggesting would help in terms of avoiding the need to send observers to one condition/experiment and then blacklisting them for the next condition (I'm using Prolific), but I would need the site to be able to send the observer back to Prolific to confirm completion. In the end, however, I think I will reduce the number of balanced conditions to the minimum needed, randomize the rest, and just run the conditions as sequential experiments; that way it is easier to exclude participants, and I think it's the least error-prone solution for me. It's a pity, because the assignment was really two lines of Python code in my script. I can still use it if at some point we recruit observers in person; then we can just assign them a number to enter when the experiment starts.
It will take four parameters:

- folder and experiment are required
- id is an optional identifier so you can connect the data back to Qualtrics or whatever; the expectation is that this would be random
- researcher is an identifier I am using as a password
You should then be able to assign participants to conditions using a modulus of the participant number, which is assigned by my web page.
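For example, with a hypothetical 16 counterbalanced versions, assignment by modulus might look like this (the participant value is a made-up stand-in for the incremental number the web page returns):

```python
nConditions = 16   # hypothetical number of counterbalanced versions
participant = 35   # stand-in for the incremental number assigned by the web page

# The modulus maps any participant number onto one of the nConditions
# groups, cycling 0, 1, ..., 15, 0, 1, ... as participants arrive,
# which keeps the groups balanced over time.
group = participant % nConditions
conditionsFile = "conditions" + str(group) + ".xlsx"
print(group, conditionsFile)  # 3 conditions3.xlsx
```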
The id is not stored on my web page. I am currently storing folder, experiment and current time. I could drop current time if that was felt to be too intrusive.
Could you please explain in a bit more detail how exactly you coded this? It’s not quite clear to me from your GitLab repo.
I can see you have a groups.csv in your resources containing the number of people per group. How did you create/update that file so that it dynamically assigns a participant to a group and thus uses a different conditions file?
Due to my counterbalancing I have 16 different task versions for my online experiment, meaning that at the moment I am unable to use Prolific to collect data, as I am not sure how to redirect people to the different versions.
Sorry for the confusion. That file comes from the local version of the experiment which I discovered I couldn’t replicate online. I’m now using Qualtrics for assignment to conditions, but I have written an online tool that you should be able to use based on taking the modulus of an incremental participant number:
I’ve coded the experiment in Builder, using the technique where I add a code component at the start, define a new variable, and then use ‘$NewVariable’ in the space for conditions file.
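Concretely, that technique looks something like this (the field name and file names here are mine, not the originals; expInfo is stubbed so the sketch runs outside Builder):

```python
# Begin Experiment tab of a code component at the start of the experiment.
# Stub for the info dialog; in Builder, expInfo already exists.
expInfo = {'group': '2'}  # hypothetical field entered at startup

# NewVariable is later referenced as $NewVariable in the loop's
# Conditions field instead of a hard-coded file name.
NewVariable = "block_" + str(expInfo['group']) + ".csv"
print(NewVariable)  # block_2.csv
```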
This works fine locally, but when I try it on Pavlovia I get an ‘unknown resource’ error, as if it can’t find the resource. However, I know the resource is uploaded, because when I run the experiment referring directly to the file name rather than via a variable initialised at the start of the experiment, it runs fine.
Therefore I think the problem may be something to do with how the JS side fetches a conditions file from a variable name? But then I don’t know, because the ‘unknown resource’ has the same name as the file I want it to use. Does anyone have any suggestions?