
Counterbalancing order of True/False responses

Hi all,

Hope you’re doing well!

I have a true/false task that I want to arrange so that when a participant (P) goes through this routine, the True choice appears on either the left or the right side of the screen and stays there for all trials in the loop.
What I have done is created a rating scale that imports the choices (True or False) from two columns in my conditions file, named ‘true’ and ‘false’:


I have successfully randomized which side these appear on from trial to trial using the following code in a code component for this routine, in the Begin Routine tab:

# make a list that holds the choices
sd_choices = [true, false] # imports options from the 'true' and 'false' columns in my conditions file
# Options will randomly be on the right or left side of the screen:
SDpractChoice = visual.RatingScale(win=win, name='SDpractChoice', marker='hover', size=1.0, pos=[0.0, -0.5], choices=sd_choices)

If False is on the left side of the screen for the first trial, I need it to stay there for all remaining trials in that loop. It could just as well have been on the right side of the screen; that is the part I want randomized. This should result in half the Ps having gotten the False choice on the left side, and the other half having gotten it on the right side (hopefully). I think the above code is being re-read each time a new trial begins, which means the sd_choices list is being re-shuffled for each repetition, so True and False randomly swap between the left and right sides.

How can I arrange it so that False is randomly assigned to the left (or right) side of the screen for the routine, and then stays there for the duration of the loop?

Thank you in advance for your time,

Hi Matthew,

Am I right in assuming that you would like to counterbalance the position of True and False across participants? That is, you would like half of your participants to see “True - False”, and the other half “False - True”, and the order never changes within a participant?

If yes, you could simply cut and paste your code into the “Begin Experiment” tab instead of the “Begin Routine” tab.
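To illustrate the idea, here is a minimal sketch of the random version, using plain strings rather than the conditions-file variables so it runs on its own. Because "Begin Experiment" code runs only once per session, the shuffled order is fixed for every subsequent trial (the RatingScale creation from your code would then follow in the same tab):

```python
import random

# Begin Experiment: this runs once per session, so whatever order
# the shuffle produces is kept for every trial in the loop.
sd_choices = ["true", "false"]
random.shuffle(sd_choices)  # randomly decide left/right order, once
```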

Note that by chance you might end up with unbalanced positions across participants. If you wanted to avoid that and you use integers as participant IDs (and you think your participant order is not confounded), you could add the following to the “Begin Experiment” tab:

remainder = int(expInfo["participant"]) % 2  # modulo division; remainder 1 for odd, 0 for even

if remainder == 1:  # odd participant IDs
    sd_choices = ["true", "false"]
else:  # even participant IDs
    sd_choices = ["false", "true"]

SDpractChoice = visual.RatingScale(win=win, name='SDpractChoice', marker='hover', size=1.0, pos=[0.0, -0.5], choices=sd_choices)

Hope this helps.




That worked beautifully.

Initially, PsychoPy complained that true and false weren’t defined, which makes sense given that the conditions file hasn’t been read yet at the beginning of the experiment (I think that’s what’s happening there). But I just put quotes around them (as in [‘True’, ‘False’]) and that worked! Now I don’t even need those true/false columns in the conditions file.

P.S. sorting Ps by the remainder of their ID was a really elegant way to balance the design, I thought. Will definitely have to remember that trick; it could prove useful in a lot of situations!
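For the record, the same trick generalizes beyond two orders: index a list of orders by the participant ID modulo the list length. A hypothetical sketch (the order list and participant ID here are made up for illustration):

```python
# All counterbalancing orders; with two orders this reduces to Jan's odd/even split
orders = [["true", "false"], ["false", "true"]]

participant_id = 7  # hypothetical integer participant ID from expInfo
sd_choices = orders[participant_id % len(orders)]  # 7 % 2 == 1 -> second order
```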

Thanks Jan,

Good point, Matthew. I had copied and pasted your code from above, and didn’t notice the missing quotes. I’ve now edited the code, so the solution is actually correct.

