Participant Feedback During Task

Is there a way of calculating a participant's mean performance level during a series of practice trials, and then using that information to probe the participant with a series of questions when they deviate from their usual level of performance on the task?

There’s a way to do everything, somehow :smiley:

In this case, you'd first calculate the mean performance over your practice trials. If the participants' responses are stored under the data type 'correct' as 1s and 0s, then that should be easy: simply call practiceAccuracy = np.mean(yourPracticeTrialHandler.data['correct']), where np is NumPy (make sure to put import numpy as np at the top of your script).
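To see why this works, here is a minimal standalone sketch. The list practice_responses is hypothetical stand-in data for yourPracticeTrialHandler.data['correct']; with 1s and 0s, the mean is just the proportion correct:

```python
# Hypothetical stand-in for yourPracticeTrialHandler.data['correct']:
# 1 = correct response, 0 = incorrect response
practice_responses = [1, 1, 0, 1, 1, 0, 1, 1]

# The mean of 1s and 0s is the proportion of correct responses
practiceAccuracy = sum(practice_responses) / len(practice_responses)
print(practiceAccuracy)  # 0.75
```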

Then, while you run your experimental trials, you could check every 10 trials whether their performance has dropped below half of their practice accuracy:

# this goes inside your for loop
# note: use 'and' rather than '&' here -- the bitwise '&' binds tighter
# than the comparisons and would give the wrong result
if yourTrialHandler.thisN % 10 == 0 and np.mean(yourTrialHandler.data['correct']) < practiceAccuracy / 2:
    # this is where you'd put your feedback, e.g. a message like "still awake?"
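If it helps, the check can be tried out without PsychoPy at all. In this sketch, trial_index and the responses list are hypothetical stand-ins for yourTrialHandler.thisN and yourTrialHandler.data['correct']:

```python
practiceAccuracy = 0.8  # assumed value from the practice block
responses = []          # stand-in for yourTrialHandler.data['correct']

def needs_feedback(trial_index, responses, practiceAccuracy):
    # Only check every 10th trial, and skip trial 0 (no meaningful data yet)
    if trial_index == 0 or trial_index % 10 != 0:
        return False
    running_accuracy = sum(responses) / len(responses)
    return running_accuracy < practiceAccuracy / 2

# Simulate 20 trials where performance collapses after the first few
for trial_index in range(20):
    responses.append(1 if trial_index < 3 else 0)
    if needs_feedback(trial_index, responses, practiceAccuracy):
        print(f"Trial {trial_index}: still awake?")
```

In a real experiment you would replace the print with whatever feedback routine you use (e.g. drawing a text stimulus and waiting for a keypress).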

I hope that gives you an idea as to where to start.

Thank you!

I will give this a go and report back when I have got it up and running.

Good luck!

Just to update this: I originally recommended importing NumPy, but you probably don't need to. The trial handler's data is stored as a NumPy array, which has a .mean() method that does the calculation for you, so you could change the code I recommended to:

practiceAccuracy = yourPracticeTrialHandler.data['correct'].mean()

#....

# this goes inside your for loop (again, 'and' rather than '&')
if yourTrialHandler.thisN % 10 == 0 and yourTrialHandler.data['correct'].mean() < practiceAccuracy / 2:
    # this is where you'd put your feedback, e.g. a message like "still awake?"

That's a bit neater, too.
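The .mean() version can be checked with a plain NumPy array standing in for the trial handler's data:

```python
import numpy as np

# Hypothetical stand-in for yourPracticeTrialHandler.data['correct']
correct = np.array([1, 0, 1, 1, 0, 1, 1, 1])

# Same result as np.mean(correct), just called as a method on the array
practiceAccuracy = correct.mean()
print(practiceAccuracy)  # 0.75
```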