
Updating Conditions File for Each Participant

Hi everyone,

I have very specific randomization requirements, and for this reason the “random” trial option for the conditions file will not work for me. I created a script that generates the randomization that I need and writes it to a conditions excel file, allowing me to use the “sequential” option. However, I need this randomization to be different for every participant that runs my experiment on Pavlovia. Is it possible for me to update the conditions.xlsx file every time someone runs my experiment, by using a JS function in the main script? I am really confused by this idea, because the script would need to push/update the excel file in my repository automatically, without me doing it. Is this possible?


Hi There,

I am not sure this is the most practical approach (with a large sample it could become unwieldy), but here is one solution.

In your Experiment Info, add the field “ID”. This will be a unique number that you give each participant, and it will index which conditions file is run (so it will be a value ranging from 0 to your total sample size minus 1; remember to start at participant 0 - Python indexing!).

In the first routine of your experiment, add a code component. In the “Begin Experiment” tab, add something like:

allConds = ['Conditions1.csv', 'Conditions2.csv']
thisCond = allConds[int(expInfo['ID'])]

In this example I have 2 possible conditions files (in your case, files that you have pre-created in your specific order).

In the next routine (your trial routine), enter $thisCond in the “Conditions” parameter. This will use the conditions file selected from your list of possible files.

If you get an error when trying to run on pavlovia that the resource cannot be found, copy your conditions .csv files into the html>resources folder in your experiment directory and resync.

Good luck!



With a larger number of participants you could take the participant number modulo the number of conditions files you have.
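For illustration, here is a minimal sketch of that modulo idea. The file names and the `pick_conditions` helper are hypothetical, not part of the original suggestion:

```python
# A sketch of the modulo trick: wrap any participant number around
# the number of conditions files, so every ID maps to a valid file.
allConds = ['Conditions1.csv', 'Conditions2.csv', 'Conditions3.csv']

def pick_conditions(participant_id, conds):
    """Return the conditions file for this participant, wrapping around."""
    return conds[int(participant_id) % len(conds)]

print(pick_conditions(0, allConds))  # Conditions1.csv
print(pick_conditions(5, allConds))  # Conditions3.csv, since 5 % 3 == 2
```

This way IDs never run off the end of the list, no matter how many participants you recruit.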


If you have a large set of condition files, as Becca mentioned, you can use this code:

import glob
# collect all matching conditions files into a sorted list
allConds = sorted(glob.glob('Condition*.csv'))
thisCond = allConds[int(expInfo['ID'])]

Does this work for online experiments? I wasn’t sure if the import would raise a JS error?

Oh, I did not pay attention to the online tag. Sorry.

No, it won’t work for online experiments because glob is a Python library.

However, the problem with this approach is that, if the experiment contains a large number of conditions files, it takes several minutes for those files to be downloaded for each participant.

Another problem is that people do not know their ID number, so the code should randomly pick a conditions file from all of them, regardless of the ID.

Ah I see! Does the solution from @wakecarter above provide a route round that issue?

Not sure. But I do not think it could solve the issue with downloading all the files. The issue of random selection is solvable. Maybe by using this line of code:

const allConds = ['Conditions1.csv', 'Conditions2.csv', 'Conditions3.csv'];
const thisCond = allConds[Math.floor(Math.random() * allConds.length)];
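For reference, a sketch of the same random pick on the Python side (e.g. when running the experiment locally in PsychoPy) could use `random.choice`; the file names here are hypothetical:

```python
import random

# Hypothetical pre-generated conditions files
allConds = ['Conditions1.csv', 'Conditions2.csv', 'Conditions3.csv']

# Pick one file at random, ignoring the participant ID entirely
thisCond = random.choice(allConds)
```

Note that with purely random selection the conditions will only be balanced in expectation, not exactly.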


Will this help with downloading the files?

But, is there anything like the participant number in an online experiment? Thanks!

As in to automatically count how many participants have already completed your experiment, and then use that to index the correct conditions file? I am not sure that function is supported yet.

But it might be possible to create a .csv output containing a counter that is incremented each time a participant completes the experiment. That said, it could become tricky if multiple participants complete the study in parallel.

In the solution above I meant that “ID” would be a number explicitly given to the participant to use by the experimenter, so that is not automatic.

You can’t output a custom CSV file online, so you would need to assign participant numbers via a website that forwards to Pavlovia. I tried via the quota function in Qualtrics, but it failed and assigned the same participant number to participants whose Qualtrics sessions overlapped.

Would people like me to write a webpage which assigns an ID and then forwards to the correct Pavlovia experiment?


That sounds like it would be a really useful resource. Thank you for your contributions, they are incredibly helpful!

edit: If you have time of course!

Thanks. I think a counter based on completed CSVs indeed wouldn’t do the trick, because many observers will very likely start the experiment before one is completed.

The website you are suggesting would help in terms of avoiding the need to send observers to one condition/experiment and then blacklisting them for the next condition (I’m using Prolific). But I would need the site to be able to send the observer back to Prolific to confirm completion.

In the end, however, I think I will reduce the number of balanced conditions to the minimum needed, randomize the rest, and just run the conditions as sequential experiments; that way it is easier to exclude participants. I think it’s the least error-prone solution for me. It’s a pity, because the assignment was really two lines of Python code in my script :frowning: I can still use it if at some point we recruit observers in person; then we can just assign them a number to enter when the experiment starts.

I think I managed to write it this morning while building a Harry Potter Lego game.

It will take four parameters:
- folder and experiment are required
- id is an optional identifier so you can connect the data back to Qualtrics or whatever; the expectation is that this would be random
- researcher is an identifier I am using as a password

You should then be able to assign to conditions using a modulus of participant which is assigned by my web page.
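Assuming the web page passes a sequential participant number into the experiment’s expInfo, the code component could look something like the sketch below. The file names are hypothetical, and expInfo is stubbed here so the snippet runs standalone (in PsychoPy it is provided by Builder):

```python
# Stand-in for PsychoPy's expInfo dict; online, 'participant' would be
# the sequential number assigned by the forwarding web page.
expInfo = {'participant': '7'}

allConds = ['ConditionsA.csv', 'ConditionsB.csv']  # hypothetical files
# Modulus maps any participant number onto one of the files
thisCond = allConds[int(expInfo['participant']) % len(allConds)]
print(thisCond)  # ConditionsB.csv, since 7 % 2 == 1
```

Because the web page hands out numbers sequentially, this keeps the assignment to conditions files balanced as participants come in.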

The id is not stored on my web page. I am currently storing folder, experiment and current time. I could drop current time if that was felt to be too intrusive.


Hi everyone,
Thanks for all your help!