
Free association task, have participants see and rate earlier responses

Hi there, as part of a modified free association task, I’d like to include a section of the experiment in which participants rate the valence category of the free associates they provided earlier in the experiment.

For more detail on my experiment, see Modified Free Association Routine and Loop Setup. Altogether, I would like to have participants classify all 210 responses that they provided as either positive, neutral, or negative. Setting up the response components for positive/neutral/negative should be easy enough but I’m not sure what I would need to do to have previously typed in responses reoccur for rating at a later point in the experiment.

I’ve tried researching how to have participants see previously provided responses but haven’t found anything (admittedly, I might not be using the appropriate search terms for something like this).

I figure this may be a bit tricky and can provide more information if need be. Thank you SO much in advance

We don’t know how you are collecting your responses, but I’m guessing that there must be code involved if they are typing things out. So at the point in the code when the response is finalised, you would also add that response to a list to be used later.

Later on, when you are looping around the routine where you do the classifications, you would extract an entry from that list to work with on each iteration.


Ah my apologies.

I do have some code involved. In a code component, under the “Begin routine” tab, I have the code:

screen_text = ''

And for the “each frame” tab, I have the code below for collecting responses:

if("backspace" in key_response.keys):
    key_response.keys.remove("backspace")
    if(len(key_response.keys) > 0):
        key_response.keys.pop()   
elif("return" in key_response.keys):
    key_response.keys.remove("return")    
    if(len(key_response.keys) > 2):
        screen_text = ''.join(key_response.keys)
        thisExp.addData("assn_response", screen_text) 
        continueRoutine = False
screen_text = ''.join(key_response.keys)

Could you say a bit more about how I would add each response to a list to be used later on for the valence classifications? And how would I be able to extract it in the classification routine? Thank you for your help, Michael

Do you also have some code in the “End routine” tab, where you actually save the response in your data file, for example?

That would be the appropriate place to also store it in a list. Show us any code you have there and we can integrate it.

I do not have any code under the “end routine” tab. In my trial routine, I have the code component that I listed above. I have a text component with the text $screen_text, and a key_response component that stores all the keys that were typed, which includes every letter of the alphabet, backspace, and return.

The way the experiment is currently set up, I know the responses are being saved in the data file that automatically generates, so that hasn’t been an issue for me.

Thank you so much for all your help

I would venture to suggest that this is because you haven’t yet tried analysing a data file in that format. One should really test the analysis of pilot data before collecting it for real: that stops you from running an experiment and only afterwards realising that a crucial piece of information was not stored correctly. Data recorded in this way is not going to be easy to work with…

It would be easier to record entire words, so you don’t have to reconstruct them from individual keypresses later. So I guess you need to record ten responses per trial in the data file, while also maintaining a list of all 210 responses to work with later in the experiment. In your code component’s “Begin experiment” tab, put something like this:

all_responses = [] # initialise an empty list to hold 210 responses

In the “End routine” tab, put something like this:

# store to use later:
all_responses.append(screen_text) 

# record each response in a separate numbered column in the data file:
thisExp.addData('response_' + str(your_inner_loop_name.thisN), screen_text)
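As a rough standalone illustration of that bookkeeping (outside PsychoPy, with a plain dict standing in for the data-file row that `thisExp.addData()` would write to, and made-up example words), those two lines amount to:

```python
# Hypothetical standalone sketch: a dict stands in for the data-file row.
all_responses = []   # "Begin experiment": one list for the whole session
data_row = {}

# simulate three finalised responses within one trial:
for response_n, screen_text in enumerate(["apple", "dog", "rain"]):
    all_responses.append(screen_text)                    # keep for rating later
    data_row['response_' + str(response_n)] = screen_text  # numbered column

print(all_responses)   # ['apple', 'dog', 'rain']
print(data_row)        # {'response_0': 'apple', 'response_1': 'dog', 'response_2': 'rain'}
```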

wow, this is super helpful! I didn’t realize the way the data was initially set up would inconvenience me that badly, so thank you so much for catching and correcting that. The experiment is currently working just how I envisioned it!

How would I then approach having the routine (after the free associations) that pulls all the previously generated responses? I figure it will have to reference some of the code you wrote. Thank you again for all your help

Put a loop around that routine but don’t link it to a conditions file. Instead, just give it an nReps value of 210. This should probably be the only loop surrounding that routine; i.e. it shouldn’t be nested within your main trial loop.

Then on each iteration of the loop, you need to access one entry from the stored list. You do this with the list’s .pop() method. e.g. put something like this in the “Begin routine” tab of a code component on that routine:

response_to_rate = all_responses.pop() # select one
thisExp.addData('response_to_rate', response_to_rate) # record it

And then in a text stimulus on that routine, you can just put $response_to_rate and set the text field to update on every repeat. Ensure that the code component is above the text component, so that the text component gets to refer to the latest selected value.

If this gives you responses in the wrong order, use all_responses.pop(0) instead, to get them from the other end of the list.
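To see the difference between the two (a plain-Python sketch with made-up responses):

```python
responses = ['first', 'second', 'third']
print(responses.pop())   # 'third' -- removes and returns the LAST entry

responses = ['first', 'second', 'third']
print(responses.pop(0))  # 'first' -- removes and returns the FIRST entry
```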

If you don’t want the responses in order, you can do this:

shuffle(all_responses)

to randomise them.
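Note that in a Builder code component, `shuffle` is already available (it comes from `numpy.random`); in a standalone script you would need to import an equivalent yourself, e.g. the standard-library version. A minimal sketch, with made-up example responses:

```python
from random import shuffle  # in Builder, shuffle is pre-imported from numpy.random

all_responses = ['cat', 'dog', 'bird', 'fish']  # made-up example responses
shuffle(all_responses)  # randomises the order in place; returns None

# every response is still present, just in a (possibly) different order:
print(sorted(all_responses))  # ['bird', 'cat', 'dog', 'fish']
```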


wow, this was perfect! There’s a chance I might follow up with you as I continue to work through some of the details of how I want my experiment to look (if that’s all right) but this is exactly how I envisioned it looking! Thank you so much