Randomize sentences with one of three cue types without repeating

OS (e.g. Win10): Win7
PsychoPy version (e.g. 1.84.x): v1.83.04 (I’m using the old version because my other experiments aren’t compatible with the new version and I haven’t had a chance to debug them)

What are you trying to achieve?:

In the experiment I need to present 99 sentences (recorded as audio clips), and pair each sentence randomly with one of three cues: phonology (P), semantic (S), or tap (T). The cue is actually a specific word/task that corresponds to that sentence. For example, the sentence “The man ate the carrot” can be cued by the word ‘parrot’ in the P condition or ‘food’ in the S condition, or participants are asked to do a tapping task in the T condition.

The goal is to counterbalance assignments of sentences to the P, S, or T conditions.
For example, for participant 1, sent 1 with P, sent 2 with S, sent 3 with T etc. But participant 2 might get sent 1 with S, sent 2 with T, sent 3 with P etc.

Unfortunately, I don’t have a sophisticated way of doing this. Currently, I’ve pseudorandomized the sentences into 3 sets of 33 sentences, one set for each cue. Further, I’ve randomized the sets and cues so that they can be counterbalanced. (This runs perfectly)

I’ve tried to apply this approach, albeit at a smaller scale using a subset of the sentences (Randomization of three lists simultanously), but the problems were that sentences repeated and the specific cues did not correspond to their specific sentences.

Any help is much appreciated. Thanks!!

Hi, you will need to define what you mean by “counterbalancing” when applied across subjects.

Within subjects, it is straightforward for 33 sentences to be assigned to each of the three types of cue. I’d actually recommend that you have just a single conditions file containing all 99 sentences and their three columns of associated cues, and use a small code component to assign each trial in a balanced way to one of the cues.
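
To make that concrete, a hypothetical first couple of rows of such a file might look like the following (the file names are invented, the cue words are just taken from your own example, and the tap column would hold whatever prompt your tapping task needs):

sentence,phonology,semantic,tap
audio_01.wav,parrot,food,(tap prompt)
audio_02.wav,...,...,...

Each row then carries one sentence and all three of its possible cues; the code component simply decides which one of the three gets used on that trial.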

What you’ve described, dividing into 3 fixed sets of 33 sentences, actually seems to be unbalanced?

But truly counterbalancing across subjects is trickier, technically and practically. For a start, with 99 sentences one has to treat them as a random factor: it won’t be possible to balance the assignment of items against each other. Even a simple balancing within sentences (ignoring interactions between sentences) would require 297 subjects. Perhaps it is simply best to stick with balanced assignment within subjects and trust that, with a sufficiently large n, this within-subject pseudo-randomisation will also achieve a practical level of balancing across subjects.

This would be a perfectly valid design, but you should do some validation afterwards to check that each sentence was assigned to each cue at roughly similar rates (i.e. approximately 1/3 of the time). Again, a large n will solve most ills here.
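
As a rough sketch of that post-hoc check, assuming you run it offline on the collected CSV data files and that each trial row records the sentence and the chosen cue label (e.g. in columns named sentence and cue_type, as in the code suggestion further down):

# count, across all subjects, how often each sentence was paired with each cue
import glob
import pandas as pd

files = glob.glob('data/*.csv')  # wherever your per-subject data files live
all_trials = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

counts = all_trials.groupby(['sentence', 'cue_type']).size().unstack(fill_value=0)
print(counts)  # each sentence should come out at roughly 1/3 per cue column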

Hi Michael,

Thank you for your response.

What I meant by “counterbalancing” was that each sentence would be assigned to one of the three cues randomly across subjects, while minimizing the chance that two subjects get the same sentence-cue pairings. This parallels your first thought about having one single conditions file. I rely heavily on Builder view and have no experience in coding; can you suggest a code component?

I do appreciate your thoughts about how tricky true counterbalancing would be; I’ll keep this in mind if I choose to go ahead with the pseudo-randomisation.

Ok, let’s say your conditions file has four columns, named sentence, phonology, semantic, and tap. On your trial routine, insert a code component and in its begin experiment tab, put something like this to ensure that we get a balanced assignment of 33 of each of the cue types:

# make a balanced list of cue choices:
cue_list = ['phonology', 'semantic', 'tap'] * 33

# randomise its order:
shuffle(cue_list)

In the begin routine tab, insert something like this:

# choose a cue type for this trial's sentence:
cue = cue_list.pop()

# store the type in the data file:
thisExp.addData('cue_type', cue)

# above we just have a string representation of the cue type, to 
# be able to store it in the data. Now we need to get the contents
# of the actual variable corresponding to that cue type:

cue = eval(cue) # eg convert the string 'tap' to the variable tap

In one of your text stimuli, put $sentence and in the other, put $cue.

It is important that the code component is above your text stimuli, so that the current value of the cue variable is updated before the text stimulus refers to it. You can manipulate the order of components by right-clicking on their icons.

The code above is a bit tricky. We could just directly choose one of the cue variables, but then if you saved that in the data, you’d get the entire contents of the cue for that trial, which already exists in the conditions file. For convenience you just want to store the type of cue chosen, so we do the little dance above: store the string label for the data file, then convert it to the actual variable of that name.
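
If it helps to see what that last line is doing, here is a tiny standalone illustration, with made-up values standing in for what one row of the conditions file would provide:

# hypothetical values, as if read from one row of the conditions file:
phonology = 'parrot'
semantic = 'food'
tap = 'tap your finger'  # placeholder for whatever the tap column holds

cue = 'semantic'   # the string label we chose and stored in the data file
cue = eval(cue)    # looks up the variable called 'semantic'
print(cue)         # -> 'food', i.e. the content actually presented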

With 99 sentences, true counterbalancing across subjects would be well-nigh impossible for any feasible number of subjects anyway.

Thank you so much for your help with this!

Hi Michael,

This was incredibly helpful. I was wondering if you know of a way to translate this Python code into JavaScript code that works? The experiment works for me offline, but not online.

Thank you!

Sorry, I’m not a JavaScript person. I’d just start with PsychoPy’s auto-translate option and work from there. If that doesn’t work, then start a new topic here and post the original Python code and the auto-translation and someone else here will probably be able to tell you what to change to get it to work.

Ok, thank you!