Randomization across lists in a priming experiment

OS (e.g. Win10): macOS Sierra
PsychoPy version (e.g. 1.84.x): 1.84.2

Hello everyone,

I am working on a forward-backward priming experiment, and I have a few questions I hope someone can help me with.

This is what my experiment looks like in the Builder:

As suggested here, the conditions file of the inner loop is set to a variable whose name matches the column header of a csv file, conditions.csv, containing the names of the existing condition files. The outer loop, loop_over, is linked to that conditions.csv file.

The set-up above randomly selects one condition file for each subject; what I need instead is for pair presentation to be randomized across lists. How can I do this? I have read about using glob in the Coder, but I was wondering whether this can be managed in the Builder too.
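From what I gather, the glob idea would be a minimal sketch along these lines (assuming the list files are named list1.csv, list2.csv, etc. and sit in the same folder as the experiment):

import glob

condition_files = sorted(glob.glob('list*.csv'))  # e.g. ['list1.csv', 'list2.csv', 'list3.csv']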

Thanks!!

What does this mean? Your description above is very good except for the core of the question.

Let me clarify.

As of now, loop_over will randomly select one list at a time, and randomly present the prime-target pairs within that list. What I want, instead, is for the randomization to occur across lists (i.e., pairs are randomly selected regardless of the list they are in).

For example, I have list1.csv, list2.csv, list3.csv, each with a certain number of prime-target pairs. I would like to set up my experiment in such a way that, for example, a pair from list1.csv may be followed by a pair from list3.csv, and then followed by a pair from list1.csv again.
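In Coder terms, what I have in mind is roughly this sketch (assuming the three files above, each with prime and target columns):

from psychopy import data
import random

all_pairs = []
for fname in ['list1.csv', 'list2.csv', 'list3.csv']:
    for row in data.importConditions(fname):
        row['source_list'] = fname  # remember which list each pair came from
        all_pairs.append(row)

random.shuffle(all_pairs)  # pairs now interleave freely across lists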

Does that make sense?

It’s not clear why they can’t all come from a single randomly ordered list. i.e. what is achieved by separating them into separate lists if they are going to be interleaved anyway?

The different lists are different priming conditions, and I need the condition that each pair belongs to to be recorded in the output file. As far as I understand, having separate lists is the only way to achieve that - am I wrong?

Yes. It would actually be much better (and simpler) to have a single conditions file with three columns: one for the condition name, one for the prime, and one for the target.

All three variables will then automatically be linked together in the same row for each trial in the data output file, in a sensible columnar arrangement.
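For example, a single conditions file (with made-up stimuli) could look like:

condition,prime,target
related,doctor,nurse
unrelated,pencil,nurse

and the condition label then travels with its prime-target pair on every trial.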

This also means that your Builder flow arrangement reverts to a simple single loop arrangement, rather than the current set-up of nested loops.

And this post will show you how to still have a periodic rest routine even though you have just a single loop:

Many thanks, Michael, for your help! One quick question, just to make sure I understand what you are suggesting:

So, do I not need to set up any particular specification so that the condition name appears in the output file?

Have you tried running an experiment and just looking in the .csv datafile?

All the columns from your conditions file automatically appear there.

I’m trying, but there seems to be an issue with saving the output csv when pressing ‘esc’. I found this topic in the forum, but I still don’t understand how to fix it.

UPDATE: alright, everything is fixed now. A new update has been released, which I didn’t know about, and it solves the issue.

I have a follow-up question. I have two versions of my experiment - say, list1.csv and list2.csv - and I want them to be counterbalanced across subjects. For example, subject 1 will have list 1, subject 2 will have list 2, subject 3 will have list 1 again, and so forth.

The flow of the experiment now looks like this:

Inspired by this post, I can add a field list to the initial dialog box, and tell the loop to look up the list file corresponding to the value entered in list:

$"list"+expInfo['list']+".csv"

Correct? My question now is: can I do something a bit more advanced? Namely, can I tell PsychoPy to look at the participant number (which will be sequential) and select the appropriate list by assigning each list to a specific condition, e.g. participants with an odd number take list1.csv and participants with an even number take list2.csv? Do you think this is doable? If so, can you give me some hints on how to accomplish it?

Thanks!

It is certainly doable. It may not be wise, however. Participants are messy things. What happens when one participant starts but doesn’t complete a session and you discard their data? Does the next one go on to the next alternation, which will unbalance things, or would you like the flexibility for them to have the same condition? It might not be a simple matter of re-assigning the subject ID, as you may also be balancing for age, sex, and so on…

So I tend to recommend that experimenters retain manual control over the assignment process, so that they can deal with unintended situations as they arise, rather than being bound to an inflexible algorithm.

But if you must, then insert a code component on the ‘run’ routine and put something like this in its “begin routine” tab:

list_number = 2 - (int(expInfo['subject_number']) % 2)  # odd subject numbers -> 1, even -> 2
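You could then use that number to choose the conditions file for the loop, e.g. (a sketch, assuming the loop’s Conditions field is set to $conditions_file):

conditions_file = 'list%i.csv' % list_number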

You should save this number on every trial to the data file so you know what was chosen.
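e.g. with a line like this in a code component inside the trial loop (using Builder’s standard thisExp data handler):

thisExp.addData('list_number', list_number)  # recorded as a column in the csv output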


Many thanks for your feedback, Michael!