
Average performance across multiple trials


#1

Hello,

I am using PsychoPy Builder to run my experiment. The problem I am having at the moment is in calculating running averages of task performance.

Here you can see my experiment. CentFix just refers to a central fixation cross, and ‘trial’ and ‘trial2’ are two routines holding different types of stimuli. The first loop, ‘trials’, loops three times and ‘trials_2’ only loops once. There is then an outer loop to repeat all of this over and over.

What I want to record is average performance on the first stimulus type (‘trial’) as well as average performance on the second stimulus type (‘trial2’). I have used the following code in the End Routine tab of a code component in the routine ‘trial2’:

However, what is being recorded in the Excel sheet for nCorr1 (average on ‘trial’ stimuli) is the average performance over only the three trials within the first loop, ‘trials’. On each iteration of the outer loop ‘trials_3’, nCorr1 resets rather than taking previous trials into account. For nCorr2 (average on ‘trial2’ stimuli), it just seems to record 0 or 1. I am guessing this is because ‘trial2’ runs only once before entering the outer loop, so there is only one trial to average over.

How do I get PsychoPy to continually calculate average performance as the outer loop ‘trials_3’ repeats?

I hope that is clear.

Thanks,
Lucy


#2

Hi Lucy,

Your diagnosis of the situation is entirely correct. New TrialHandler objects called trials and trials_2 are created on every iteration of the outer loop. They are created afresh, and so don’t know any of the history of their predecessors. But when each loop finishes, the data from its TrialHandler is stored in an overarching ExperimentHandler object (Builder calls this thisExp by default).

So in your case, you need to access the data stored in thisExp rather than in trials or trials_2.
e.g. try something like:

nCorr1 = thisExp.data['key_resp_2.corr'].mean()

#3

Hi Michael,

Thank you for your help. That completely makes sense! The only thing is now I get this error:

nCorr1 = thisExp.data['key_resp_2.corr'].mean()
AttributeError: 'ExperimentHandler' object has no attribute 'data'

Do you have any advice on this?

Thanks,
Lucy


#4

Ahh, yes, that would be too easy. Looking at the API just now, the ExperimentHandler class doesn’t have an overarching data attribute. Instead, it maintains a list of loop objects (which keep their own data), and also a list of ‘entries’ representing each row. It might be easiest just to maintain a running total of your own, and compute the current average as required.

e.g. something like this: in the Begin Experiment tab, put

n_correct_1 = 0
n_correct_2 = 0
n_trials = 0

In the End Routine tab, put

n_trials = n_trials + 1  # one more trial completed

# accumulate the number of correct responses for each stimulus type
n_correct_1 = n_correct_1 + key_resp_2.corr
n_correct_2 = n_correct_2 + key_resp_3.corr

# store the current running averages in the data file
thisExp.addData('mean_correct_1', n_correct_1/n_trials)
thisExp.addData('mean_correct_2', n_correct_2/n_trials)
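For anyone wanting to check the bookkeeping outside of Builder, here is a minimal standalone sketch of the same running-average pattern. There is no thisExp or keyboard component here, so a hypothetical list of simulated accuracy scores (0 = incorrect, 1 = correct) stands in for key_resp_2.corr and key_resp_3.corr:

```python
# Begin Experiment equivalent: initialise the counters once,
# before any loop starts, so they survive across loop iterations.
n_correct_1 = 0
n_correct_2 = 0
n_trials = 0

# Simulated (made-up) outcomes for the two response components
# across four passes of the outer loop: (corr_1, corr_2) pairs.
outcomes = [(1, 0), (1, 1), (0, 1), (1, 1)]

for corr_1, corr_2 in outcomes:
    # End Routine equivalent: accumulate, then recompute the mean.
    n_trials = n_trials + 1
    n_correct_1 = n_correct_1 + corr_1
    n_correct_2 = n_correct_2 + corr_2
    mean_correct_1 = n_correct_1 / n_trials
    mean_correct_2 = n_correct_2 / n_trials

print(mean_correct_1)  # 3 correct of 4 trials -> 0.75
print(mean_correct_2)  # 3 correct of 4 trials -> 0.75
```

Because the counters are created once at the start rather than inside a loop, the averages keep accumulating across every repeat of the outer loop, which is exactly why this approach avoids the reset problem described above.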


#5

Thank you for clarifying that for me. It works well now and is tracking average performance on both sets of stimuli across multiple loops!

Thanks for your advice.

Lucy